Sample records for high performance modeling

  1. Mental models of audit and feedback in primary care settings.

    PubMed

    Hysong, Sylvia J; Smitham, Kristen; SoRelle, Richard; Amspoker, Amber; Hughes, Ashley M; Haidet, Paul

    2018-05-30

    Audit and feedback has been shown to be instrumental in improving quality of care, particularly in outpatient settings. The mental model individuals and organizations hold regarding audit and feedback can moderate its effectiveness, yet this has received limited study in the quality improvement literature. In this study we sought to uncover patterns in mental models of current feedback practices within high- and low-performing healthcare facilities. We purposively sampled 16 geographically dispersed VA hospitals based on high and low performance on a set of chronic and preventive care measures. We interviewed up to 4 personnel from each location (n = 48) to determine the facility's receptivity to audit and feedback practices. Interview transcripts were analyzed via content and framework analysis to identify emergent themes. We found high variability in the mental models of audit and feedback, which we organized into positive and negative themes. We were unable to associate mental models of audit and feedback with clinical performance due to high variance in facility performance over time. Positive mental models exhibited perceived utility of audit and feedback practices in improving performance, whereas negative mental models did not. Results speak to the variability of mental models of feedback, highlighting how facilities perceive current audit and feedback practices. Findings are consistent with prior research in that variability in feedback mental models is associated with lower performance. Future research should seek to empirically link mental models revealed in this paper to high and low levels of clinical performance.

  2. A new rate-dependent model for high-frequency tracking performance enhancement of piezoactuator system

    NASA Astrophysics Data System (ADS)

    Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han

    2017-05-01

    Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate to describe the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, open-loop performance can be improved significantly at medium-to-high frequencies. Combined with the feedback controller, the composite controller then provides enhanced closed-loop performance from low frequency to high frequency. At a frequency of 1 Hz, the proposed controller performs the same as previous methods; at 900 Hz, however, the tracking error is reduced to 30.7% of that of the decoupled approach.
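    The direct-inversion feedforward idea in this record can be sketched with a toy linear rate-dependent model; the model form, the parameters `a` and `b`, and the sampling step are illustrative assumptions, not the paper's identified dynamics:

```python
# Toy linear rate-dependent model: the output depends on the input and
# its variation rate.  y_k = a*u_k + b*(u_k - u_prev)/dt
def rate_dependent_model(u, u_prev, dt=1e-3, a=0.8, b=0.02):
    return a * u + b * (u - u_prev) / dt

# Direct inverse used as the feedforward term: solve the model for u_k
# given the desired output r_k, so model(inverse(r)) reproduces r.
def inverse_feedforward(r, u_prev, dt=1e-3, a=0.8, b=0.02):
    return (r + (b / dt) * u_prev) / (a + b / dt)

u_prev = 0.0
for r in [0.5, 1.0, -0.3]:           # desired outputs along a trajectory
    u = inverse_feedforward(r, u_prev)
    y = rate_dependent_model(u, u_prev)
    print(round(y - r, 12))          # compensation error is ~0 by construction
    u_prev = u
```

In the paper the inverse acts on the residual dynamics left after nominal compensation, and a feedback loop is closed around it to handle the remaining error.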

  3. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.

  4. High dimensional biological data retrieval optimization with NoSQL technology.

    PubMed

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance over MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper.
We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data.
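    The key-value layout described in this record can be illustrated with a plain dictionary standing in for an HBase-style sorted store; the composite row-key schema and the identifiers below are illustrative assumptions, not tranSMART's actual design:

```python
# A dict stands in for a key-value store (row key -> value).
store = {}

def make_key(probeset_id, patient_id):
    # Composite row key: probeset first, so one probeset's values across
    # all patients are contiguous in a lexicographically sorted store.
    return f"{probeset_id}|{patient_id}"

def put(probeset_id, patient_id, value):
    store[make_key(probeset_id, patient_id)] = value

def scan_probeset(probeset_id):
    # Analogue of an HBase prefix scan: fetch every patient's expression
    # value for one probeset without a relational join.
    prefix = f"{probeset_id}|"
    return {k.split("|")[1]: v for k, v in store.items() if k.startswith(prefix)}

put("1007_s_at", "P001", 7.2)
put("1007_s_at", "P002", 6.9)
put("1053_at", "P001", 4.1)
print(scan_probeset("1007_s_at"))  # {'P001': 7.2, 'P002': 6.9}
```

The design choice is that the query pattern (all patients for one probe, or one gene signature) dictates the key order, which is what makes the range scan fast compared with a relational join.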

  5. High dimensional biological data retrieval optimization with NoSQL technology

    PubMed Central

    2014-01-01

    Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance over MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper.
We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data. PMID:25435347

  6. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing dominant processes between reality and model, or to understanding when thresholds and non-linearities drive model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected most of the time for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO). The periods of high parameter sensitivity can be related to different phases of the hydrograph, with the groundwater parameters dominating in the recession phases and ESCO in baseflow and resaturation periods. Surface runoff parameters show high sensitivities during precipitation events in combination with high soil water contents. The dominant parameters indicate the controlling processes in the catchment during a given period. The second step was the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures.
    These finger prints were clustered into four reoccurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during these phases, the groundwater module was identified as the model part with the highest potential for improvement. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.
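    A minimal stand-in for the per-time-step performance "finger prints" is a moving-window error series, from which periods of poor performance can be flagged; the window size, the metric, and the threshold below are illustrative choices, not the paper's:

```python
# Time-resolved model performance: RMSE in a sliding window around each
# time step, instead of one metric over the whole simulation period.
def moving_rmse(obs, sim, window):
    half = window // 2
    out = []
    for i in range(len(obs)):
        lo, hi = max(0, i - half), min(len(obs), i + half + 1)
        sq = [(o - s) ** 2 for o, s in zip(obs[lo:hi], sim[lo:hi])]
        out.append((sum(sq) / len(sq)) ** 0.5)
    return out

# Toy discharge series: the model misses the event peak (steps 3-5).
obs = [1.0, 1.2, 1.1, 3.0, 3.2, 3.1, 1.0, 1.1]
sim = [1.0, 1.1, 1.0, 2.0, 2.1, 2.2, 1.0, 1.0]

rmse_t = moving_rmse(obs, sim, window=3)
poor = [i for i, e in enumerate(rmse_t) if e > 0.5]  # poor-performance period
print(poor)
```

In the paper each time step carries a whole vector of such measures, and the vectors are clustered into recurring performance patterns; the flagged steps here play the role of the worst-performing cluster.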

  7. Surrogate based wind farm layout optimization using manifold mapping

    NASA Astrophysics Data System (ADS)

    Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester

    2016-09-01

    The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck to performing direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using a surrogate-based optimization (SBO) method referred to as manifold mapping (MM). As verification, optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology, and its performance was compared with that of direct optimization using the high-fidelity model. A significant reduction in computational cost was achieved using MM: a maximum reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the model responses, together with the number and position of the mapping points, strongly influences the computational efficiency of the proposed method. As a proof of concept, realistic WFLO of a small 7-turbine wind farm was performed using the proposed surrogate-based methodology. Two variants of the Jensen wake model with different decay coefficients were used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with far fewer fine-model simulations.
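    The surrogate idea can be sketched with a first-order additive correction, a much simpler relative of manifold mapping: each iteration spends one fine-model evaluation (plus a gradient estimate) to correct the coarse model, then optimizes the cheap corrected surrogate. The toy quadratic objectives below stand in for the fine and coarse wake models and are pure assumptions:

```python
def fine(x):    # expensive high-fidelity objective (toy), optimum at x = 4
    return (x - 4.0) ** 2 + 1.0

def coarse(x):  # cheap low-fidelity objective (toy), biased optimum at x = 3
    return (x - 3.0) ** 2

def fd(f, x, h=1e-6):
    # Central finite-difference gradient estimate.
    return (f(x + h) - f(x - h)) / (2 * h)

def sbo(x0, iters=5):
    x = x0
    for _ in range(iters):
        d = fine(x) - coarse(x)          # value mismatch at the current iterate
        g = fd(fine, x) - fd(coarse, x)  # slope mismatch at the current iterate
        # First-order-corrected surrogate: coarse(y) + d + g*(y - x).
        # Minimize it cheaply over a grid of candidate spacings (0..10).
        grid = [i / 100 for i in range(1001)]
        x = min(grid, key=lambda y: coarse(y) + d + g * (y - x))
    return x

print(sbo(1.0))   # converges to the fine-model optimum, not the coarse one
```

The correction makes the surrogate match the fine model's value and slope at the current iterate, which is what lets the cheap model steer the search to the fine optimum with only a handful of fine evaluations.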

  8. Simulating Effects of High Angle of Attack on Turbofan Engine Performance

    NASA Technical Reports Server (NTRS)

    Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei

    2013-01-01

    A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.
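    The parallel compressor idea used in this record, splitting the distorted inlet into sectors that each see uniform conditions and mass-averaging the result, can be sketched as follows; the fan characteristic and the distortion numbers are toy assumptions, not the paper's CFD data:

```python
def pressure_rise(inlet_recovery):
    # Toy fan characteristic: lower inlet total-pressure recovery
    # yields a lower stage pressure rise.
    return 1.6 * inlet_recovery

def parallel_compressor(sector_recoveries, sector_weights):
    # Each sector runs the same characteristic at its own inlet condition;
    # the overall performance is the mass-weighted average.
    assert abs(sum(sector_weights) - 1.0) < 1e-9
    return sum(w * pressure_rise(r)
               for r, w in zip(sector_recoveries, sector_weights))

clean = parallel_compressor([1.0], [1.0])
# High-AOA case: a one-third sector ingests separated, low-recovery flow.
distorted = parallel_compressor([0.92, 1.0], [1 / 3, 2 / 3])
print(clean, distorted)   # distortion lowers the overall pressure rise
```

In the actual study the sector inlet conditions come from the 3-D CFD solution at each angle of attack, and the averaged result feeds the zero-dimensional dynamic engine model.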

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rainer, Leo I.; Hoeschele, Marc A.; Apte, Michael G.

    This report addresses the results of detailed monitoring completed under Program Element 6 of Lawrence Berkeley National Laboratory's High Performance Commercial Building Systems (HPCBS) PIER program. The purpose of the Energy Simulations and Projected State-Wide Energy Savings project is to develop reasonable energy performance and cost models for high performance relocatable classrooms (RCs) across California climates. A key objective of the energy monitoring was to validate DOE2 simulations for comparison to initial DOE2 performance projections. The validated DOE2 model was then used to develop statewide savings projections by modeling base case and high performance RC operation in the 16 California climate zones. The primary objective of this phase of work was to utilize detailed field monitoring data to modify DOE2 inputs and generate performance projections based on a validated simulation model. Additional objectives include the following: (1) Obtain comparative performance data on base case and high performance HVAC systems to determine how they are operated, how they perform, and how the occupants respond to the advanced systems. This was accomplished by installing both HVAC systems side-by-side (i.e., one per module of a standard two module, 24 ft by 40 ft RC) on the study RCs and switching HVAC operating modes on a weekly basis. (2) Develop projected statewide energy and demand impacts based on the validated DOE2 model. (3) Develop cost effectiveness projections for the high performance HVAC system in the 16 California climate zones.

  10. Whole School Improvement and Restructuring as Prevention and Promotion: Lessons from STEP and the Project on High Performance Learning Communities.

    ERIC Educational Resources Information Center

    Felner, Robert D.; Favazza, Antoinette; Shim, Minsuk; Brand, Stephen; Gu, Kenneth; Noonan, Nancy

    2001-01-01

    Describes the School Transitional Environment Project and its successor, the Project on High Performance Learning Communities, that have contributed to building a model for school improvement called the High Performance Learning Communities. The model seeks to build the principles of prevention into whole school change. Presents findings from…

  11. Modeling and simulation of continuous wave velocity radar based on third-order DPLL

    NASA Astrophysics Data System (ADS)

    Di, Yan; Zhu, Chen; Hong, Ma

    2015-02-01

    Second-order digital phase-locked loops (DPLLs) are widely used in traditional continuous-wave (CW) velocity radar but perform poorly in high-dynamic conditions; using a third-order DPLL can improve this performance. Firstly, the echo signal model of CW radar is given. Secondly, theoretical derivations of the tracking performance under different velocity conditions are given. Finally, a simulation model of CW radar is established using the Simulink tool. The tracking performance of the two kinds of DPLL under different acceleration and jerk conditions is studied with this model. The results show that the third-order DPLL has better performance in high-dynamic conditions. This model provides a platform for further research on CW radar.
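    A third-order DPLL adds a double-integral path to the loop filter, which lets it track a constant frequency ramp (quadratic phase, i.e. constant acceleration) with vanishing steady-state error, something a second-order loop cannot do. The gains and units below are illustrative choices, not values from the paper:

```python
# Third-order DPLL: proportional path plus single- and double-integral
# paths in the loop filter, closed around a phase accumulator (NCO).
def run_dpll(accel, n_steps, g1=0.2, g2=0.02, g3=0.001):
    nco_phase = 0.0       # numerically controlled oscillator phase
    acc1 = acc2 = 0.0     # integrator states: frequency and acceleration
    err = 0.0
    for n in range(n_steps):
        ref_phase = 0.5 * accel * n * n  # quadratic phase: constant freq ramp
        err = ref_phase - nco_phase      # phase detector (toy units, no wrap)
        acc2 += g3 * err                 # double integral: learns the ramp rate
        acc1 += g2 * err + acc2          # integral: tracks current frequency
        nco_phase += g1 * err + acc1     # proportional correction + NCO update
    return err

# A type-3 loop drives the phase error toward zero under constant acceleration.
print(run_dpll(accel=1e-4, n_steps=3000))
```

With the same input a second-order loop (drop the `acc2` path) settles to a constant, non-zero phase error, which is the poor high-dynamics behaviour the record describes.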

  12. Turbulence modeling of free shear layers for high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas L.

    1993-01-01

    The High Performance Aircraft (HPA) Grand Challenge of the High Performance Computing and Communications (HPCC) program involves the computation of the flow over a high performance aircraft. A variety of free shear layers, including mixing layers over cavities, impinging jets, blown flaps, and exhaust plumes, may be encountered in such flowfields. Since these free shear layers are usually turbulent, appropriate turbulence models must be utilized in computations in order to accurately simulate these flow features. The HPCC program is relying heavily on parallel computers. A Navier-Stokes solver (POVERFLOW) utilizing the Baldwin-Lomax algebraic turbulence model was developed and tested on a 128-node Intel iPSC/860. Algebraic turbulence models run very fast, and give good results for many flowfields. For complex flowfields such as those mentioned above, however, they are often inadequate. It was therefore deemed that a two-equation turbulence model would be required for the HPA computations. The k-epsilon two-equation turbulence model was implemented on the Intel iPSC/860. Both the Chien low-Reynolds-number model and a generalized wall-function formulation were included.

  13. Landslide model performance in a high resolution small-scale landscape

    NASA Astrophysics Data System (ADS)

    De Sy, V.; Schoorl, J. M.; Keesstra, S. D.; Jones, K. E.; Claessens, L.

    2013-05-01

    The frequency and severity of shallow landslides in New Zealand threatens life and property, both on- and off-site. The physically-based shallow landslide model LAPSUS-LS is tested for its performance in simulating shallow landslide locations induced by a high intensity rain event in a small-scale landscape. Furthermore, the effect of high resolution digital elevation models on the performance was tested. The performance of the model was optimised by calibrating different parameter values. A satisfactory result was achieved with a high resolution (1 m) DEM. Landslides, however, were generally predicted lower on the slope than mapped erosion scars. This discrepancy could be due to i) inaccuracies in the DEM or in other model input data such as soil strength properties; ii) relevant processes for this environmental context that are not included in the model; or iii) the limited validity of the infinite length assumption in the infinite slope stability model embedded in the LAPSUS-LS. The trade-off between a correct prediction of landslides versus stable cells becomes increasingly worse with coarser resolutions; and model performance decreases mainly due to altering slope characteristics. The optimal parameter combinations differ per resolution. In this environmental context the 1 m resolution topography resembles actual topography most closely and landslide locations are better distinguished from stable areas than for coarser resolutions. More gain in model performance could be achieved by adding landslide process complexities and parameter heterogeneity of the catchment.
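    The infinite-slope stability assumption embedded in LAPSUS-LS can be illustrated with the generic textbook factor-of-safety expression; the parameter values and the exact parameterisation below are illustrative, not the model's calibrated inputs:

```python
import math

def factor_of_safety(slope_deg, z, m, c=5000.0, phi_deg=30.0,
                     gamma=18000.0, gamma_w=9810.0):
    """Generic infinite-slope factor of safety (not the exact LAPSUS-LS form).
    slope_deg: slope angle; z: soil depth [m]; m: saturated fraction (0-1);
    c: cohesion [Pa]; phi_deg: friction angle; gamma, gamma_w: unit weights
    of soil and water [N/m^3]. FS < 1 indicates failure."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(b) ** 2          # total normal stress
    pore = gamma_w * m * z * math.cos(b) ** 2      # pore-water pressure
    shear = gamma * z * math.sin(b) * math.cos(b)  # driving shear stress
    return (c + (normal - pore) * math.tan(phi)) / shear

# A rising water table destabilises the slope: FS drops with saturation.
dry = factor_of_safety(35.0, 2.0, m=0.0)
wet = factor_of_safety(35.0, 2.0, m=1.0)
print(round(dry, 2), round(wet, 2))
```

The "infinite length" caveat discussed in the record comes from this formulation treating each cell as an infinitely long planar slab, which breaks down for the short, curved hillslopes of a small-scale landscape.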

  14. Comparison and Analysis of Steel Frame Based on High Strength Column and Normal Strength Column

    NASA Astrophysics Data System (ADS)

    Liu, Taiyu; An, Yuwei

    2018-01-01

    Concerns over the seismic performance of high-strength steel have restricted its industrial application in civil buildings. In order to study the influence of high-strength steel columns on frame structures, three models are designed using the MIDAS/GEN finite element software. By comparing the seismic and economic performance of the three models, the three different structures are comprehensively evaluated to provide a reference for the development of high-strength steel in steel structures.

  15. FLAME: A platform for high performance computing of complex systems, applied for three case studies

    DOE PAGES

    Kiran, Mariam; Bicak, Mesude; Maleki-Dizaji, Saeedeh; ...

    2011-01-01

    FLAME allows complex models to be automatically parallelised on High Performance Computing (HPC) grids, enabling large numbers of agents to be simulated over short periods of time. Modellers are typically hindered by the complexity of porting models to parallel platforms and by the time taken to run large simulations on a single machine; FLAME overcomes both. Three case studies from different disciplines were modelled using FLAME and are presented along with their performance results on a grid.

  16. Optimal design of high-speed loading spindle based on ABAQUS

    NASA Astrophysics Data System (ADS)

    Yang, Xudong; Dong, Yu; Ge, Qingkuan; Yang, Hai

    2017-12-01

    The three-dimensional model of the high-speed loading spindle is established using ABAQUS's modeling module. A finite element analysis model of the high-speed loading spindle was then built, using spring elements to simulate the bearing boundary conditions. The static and dynamic performance of spindle structures with different rectangular-spline specifications and axle neck diameters is studied in depth, as is the influence of different spindle spans on the static and dynamic performance of the high-speed loading spindle. Finally, the optimal structure of the high-speed loading spindle is obtained. The results provide a theoretical basis for improving the overall performance of the test bed.

  17. The effects of teacher anxiety and modeling on the acquisition of a science teaching skill and concomitant student performance

    NASA Astrophysics Data System (ADS)

    Koran, John J., Jr.; Koran, Mary Lou

    In a study designed to explore the effects of teacher anxiety and modeling on acquisition of a science teaching skill and concomitant student performance, 69 preservice secondary teachers and 295 eighth grade students were randomly assigned to microteaching sessions. Prior to microteaching, teachers were given an anxiety test, then randomly assigned to one of three treatments; a transcript model, a protocol model, or a control condition. Subsequently both teacher and student performance was assessed using written and behavioral measures. Analysis of variance indicated that subjects in the two modeling treatments significantly exceeded performance of control group subjects on all measures of the dependent variable, with the protocol model being generally superior to the transcript model. The differential effects of the modeling treatments were further reflected in student performance. Regression analysis of aptitude-treatment interactions indicated that teacher anxiety scores interacted significantly with instructional treatments, with high anxiety teachers performing best in the protocol modeling treatment. Again, this interaction was reflected in student performance, where students taught by highly anxious teachers performed significantly better when their teachers had received the protocol model. These results were discussed in terms of teacher concerns and a memory model of the effects of anxiety on performance.

  18. High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    NASA Technical Reports Server (NTRS)

    Whorton, M. S.

    1998-01-01

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of difficulty of obtaining accurate models for flexible space structures. To achieve sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading between nominal performance and robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.

  19. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. 
Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025
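    The generic idea behind the automated hyper-parameter selection described in this record can be sketched as cross-validated grid search; the tiny 1-D k-NN learner and the data below are illustrative stand-ins, not AZOrange code or its actual algorithms:

```python
# Toy 1-D k-nearest-neighbour classifier: majority vote among the k
# training points closest to x.
def knn_predict(train, x, k):
    votes = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return round(sum(label for _, label in votes) / k)

# Leave-one-out cross-validation accuracy for a given hyper-parameter k.
def loo_accuracy(data, k):
    hits = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        hits += knn_predict(rest, x, k) == y
    return hits / len(data)

# Automated hyper-parameter selection: pick the k with the best CV score.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.9, 1), (1.0, 1), (1.1, 1), (0.55, 0)]
best_k = max([1, 3, 5], key=lambda k: loo_accuracy(data, k))
print(best_k, loo_accuracy(data, best_k))
```

AZOrange generalises this loop across several learners and their hyper-parameter spaces, so that the statistical method itself is selected per data set rather than fixed in advance.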

  20. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment.

    PubMed

    Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott

    2011-07-28

    Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. 
AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.

  1. A corpus of full-text journal articles is a robust evaluation tool for revealing differences in performance of biomedical natural language processing tools

    PubMed Central

    2012-01-01

    Background We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Results Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. Conclusions The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full text publications. PMID:22901054

  2. Aerodynamics of High-Lift Configuration Civil Aircraft Model in JAXA

    NASA Astrophysics Data System (ADS)

    Yokokawa, Yuzuru; Murayama, Mitsuhiro; Ito, Takeshi; Yamamoto, Kazuomi

    This paper presents the basic aerodynamics and stall characteristics of the high-lift configuration aircraft model JSM (JAXA Standard Model). In the course of developing a high-lift system design method, wind tunnel testing in the JAXA 6.5 m by 5.5 m low-speed wind tunnel and Navier-Stokes computations on unstructured hybrid meshes were performed for a realistic aircraft configuration, assumed to be a 100-passenger-class modern commercial transport, equipped with high-lift devices, fuselage, nacelle-pylon, slat tracks, and Flap Track Fairings (FTF). The testing and the computations aimed to clarify the flow physics and thereby obtain guidelines for designing a high performance high-lift system. The testing revealed Reynolds number effects in both the linear region and the stall region. Analysis of static pressure distributions and flow visualization provided the knowledge needed to understand the aerodynamic performance. CFD captured the overall characteristics of the basic aerodynamics and clarified the flow mechanism governing the stall characteristics, even for the complicated geometry and its flow field. This collaborative use of wind tunnel testing and CFD proved advantageous for improving the aerodynamic performance.

  3. High fidelity quasi steady-state aerodynamic model effects on race vehicle performance predictions using multi-body simulation

    NASA Astrophysics Data System (ADS)

    Mohrfeld-Halterman, J. A.; Uddin, M.

    2016-07-01

    We describe in this paper the development of a high fidelity vehicle aerodynamic model that fits wind tunnel test data over a wide range of vehicle orientations. We also present a comparison between the effects of this proposed model and of a conventional quasi steady-state aerodynamic model on race vehicle simulation results. This is done by implementing both models independently in multi-body quasi steady-state simulations to determine the effects of the high fidelity aerodynamic model on race vehicle performance metrics. The quasi steady-state vehicle simulation is developed with a multi-body NASCAR Truck vehicle model, and simulations are conducted for three different types of NASCAR race tracks: a short track, a one-and-a-half-mile intermediate track, and a higher speed, two-mile intermediate track. For each track simulation, the effects of the aerodynamic model on handling, maximum corner speed, and drive force metrics are analysed. The high fidelity model is shown to reduce the aerodynamic model error relative to the conventional model, and its increased accuracy is found to have realisable effects on the performance metric predictions for the intermediate tracks resulting from the quasi steady-state simulation.

  4. The effects of deep level traps on the electrical properties of semi-insulating CdZnTe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zha, Gangqiang; Yang, Jian; Xu, Lingyan

    2014-01-28

    Deep level traps have considerable effects on the electrical properties and radiation detection performance of high resistivity CdZnTe. A deep-trap model for high resistivity CdZnTe is proposed in this paper. The high resistivity mechanism and the electrical properties are analyzed based on this model. High resistivity CdZnTe with a high trap ionization energy E_t can withstand high bias voltages. The leakage current depends on both the deep traps and the shallow impurities. The performance of a CdZnTe radiation detector deteriorates at low temperatures, and the way in which sub-bandgap light excitation can improve the low temperature performance is explained using the deep trap model.

  5. High-Performance Computing for the Electromagnetic Modeling and Simulation of Interconnects

    NASA Technical Reports Server (NTRS)

    Schutt-Aine, Jose E.

    1996-01-01

    The electromagnetic modeling of packages and interconnects plays a very important role in the design of high-speed digital circuits, and is most efficiently performed by using computer-aided design algorithms. In recent years, packaging has become a critical area in the design of high-speed communication systems and fast computers, and the importance of the software support for their development has increased accordingly. Throughout this project, our efforts have focused on the development of modeling and simulation techniques and algorithms that permit the fast computation of the electrical parameters of interconnects and the efficient simulation of their electrical performance.

  6. Academic Motivation, Self-Concept, Engagement, and Performance in High School: Key Processes from a Longitudinal Perspective

    ERIC Educational Resources Information Center

    Green, Jasmine; Liem, Gregory Arief D.; Martin, Andrew J.; Colmar, Susan; Marsh, Herbert W.; McInerney, Dennis

    2012-01-01

    The study tested three theoretically/conceptually hypothesized longitudinal models of academic processes leading to academic performance. Based on a longitudinal sample of 1866 high-school students across two consecutive years of high school (Time 1 and Time 2), the model with the most superior heuristic value demonstrated: (a) academic motivation…

  7. Discrete tyre model application for evaluation of vehicle limit handling performance

    NASA Astrophysics Data System (ADS)

    Siramdasu, Y.; Taheri, S.

    2016-11-01

    The goal of this study is twofold: first, to understand the transient and nonlinear effects of anti-lock braking systems (ABS), road undulations, and driving dynamics on the lateral performance of the tyre; and second, to develop objective handling manoeuvres and corresponding metrics to characterise these effects on vehicle behaviour. For the transient and nonlinear handling performance of the vehicle, variations in the tyre relaxation length and the tyre inertial properties play significant roles [Pacejka HB. Tire and vehicle dynamics. 3rd ed. Butterworth-Heinemann; 2012]. Accurately simulating these nonlinear effects during high-frequency vehicle dynamic manoeuvres requires a high-frequency dynamic tyre model (? Hz). A 6 DOF dynamic tyre model integrated with an enveloping model is developed and validated using fixed-axle high-speed oblique cleat experimental data. The commercially available vehicle dynamics software CarSim® is used for vehicle simulation. The vehicle model was validated by comparing simulation results with experimental sinusoidal steering tests. The validated tyre model is then integrated with the vehicle model and a commercial-grade rule-based ABS model to perform various objective simulations. Two test scenarios are considered: ABS braking in a turn on a smooth road, and accelerating in a turn on uneven and smooth roads. Both test cases reiterated that while the tyre is operating in the nonlinear region of slip or slip angle, any road disturbance or high-frequency brake torque variation can excite the inertial belt vibrations of the tyre. It is shown that these inertial vibrations can directly affect the developed performance metrics and potentially degrade the handling performance of the vehicle.

  8. Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    NASA Technical Reports Server (NTRS)

    Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.

    1992-01-01

    Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.

  9. Predicting Student Academic Performance in an Engineering Dynamics Course: A Comparison of Four Types of Predictive Mathematical Models

    ERIC Educational Resources Information Center

    Huang, Shaobo; Fang, Ning

    2013-01-01

    Predicting student academic performance has long been an important research topic in many academic disciplines. The present study is the first study that develops and compares four types of mathematical models to predict student academic performance in engineering dynamics--a high-enrollment, high-impact, and core course that many engineering…

  10. Calibration of X-Ray Observatories

    NASA Technical Reports Server (NTRS)

    Weisskopf, Martin C.; O'Dell, Stephen L.

    2011-01-01

    Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.

  11. Nested Interrupt Analysis of Low Cost and High Performance Embedded Systems Using GSPN Framework

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Interrupt service routines are a key technology for embedded systems. In this paper, we introduce the standard approach of using Generalized Stochastic Petri Nets (GSPNs) as a high-level model for generating Continuous-Time Markov Chains (CTMCs), and then use Markov Reward Models (MRMs) to compute the performance of embedded systems. This framework is employed to analyze two low-cost, high-performance embedded controllers, ARM7 and Cortex-M3. Cortex-M3 is designed with a tail-chaining mechanism to improve on the performance of ARM7 when a nested interrupt occurs on an embedded controller. The Platform Independent Petri net Editor 2 (PIPE2) tool is used to model and evaluate the controllers in terms of power consumption and interrupt overhead performance. The numerical results show that Cortex-M3 outperforms ARM7 in both power consumption and interrupt overhead.
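At its core, the GSPN-to-CTMC-to-MRM pipeline reduces to solving the steady-state equations pi Q = 0 of the generated chain and then weighting the stationary probabilities by per-state reward rates (for example, power draw). A minimal stdlib-only sketch, using our own toy generator matrix rather than the paper's PIPE2 models:

```python
def ctmc_steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 by Gaussian elimination.
    Q is the CTMC generator matrix given as a list of rows
    (off-diagonal entries are rates, diagonal = -row sum)."""
    n = len(Q)
    # Transposed system Q^T pi = 0; replace the last equation with
    # the normalisation constraint sum(pi) = 1.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[n-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    # Plain Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

def expected_reward(pi, rewards):
    """Markov Reward Model: steady-state expected reward rate."""
    return sum(p * r for p, r in zip(pi, rewards))
```

For a two-state chain with rates 1 (busy to idle) and 2 (idle to busy), the stationary distribution is (2/3, 1/3); attaching per-state power figures as rewards then yields the mean power consumption directly.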

  12. Nonlinear stability and control study of highly maneuverable high performance aircraft

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.

    1993-01-01

    This project is intended to research and develop new nonlinear methodologies for the control and stability analysis of high-performance, high angle-of-attack aircraft such as HARV (F18). Past research (reported in our Phase 1, 2, and 3 progress reports) is summarized, and more details of the final Phase 3 research are provided. While the research emphasis is on nonlinear control, other tasks such as associated model development, system identification, stability analysis, and simulation are also performed in some detail. An overview is provided of the various models investigated for different purposes, such as an approximate model reference for control adaptation, as well as another model for accurate rigid-body longitudinal motion. Only a very cursory analysis was made of type 8 (flexible body dynamics). Standard nonlinear longitudinal airframe dynamics (type 7) with the available modified F18 stability derivatives, thrust vectoring, actuator dynamics, and control constraints are utilized for simulated flight evaluation of derived controller performance in all cases studied.

  13. High subsonic flow tests of a parallel pipe followed by a large area ratio diffuser

    NASA Technical Reports Server (NTRS)

    Barna, P. S.

    1975-01-01

    Experiments were performed on a pilot model duct system in order to explore its aerodynamic characteristics. The model was scaled from a design projected for the high speed operation mode of the Aircraft Noise Reduction Laboratory. The test results show that the model performed satisfactorily and therefore the projected design will most likely meet the specifications.

  14. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  15. Modeling of ground based laser propagation to low Earth orbit object for maneuver

    NASA Astrophysics Data System (ADS)

    Smith, Liam C.; Allen, Jeffrey H.; Bold, Matthew M.

    2017-08-01

    The Space Environment Research Centre (SERC) endeavors to demonstrate the ability to maneuver high area to mass ratio objects using ground based lasers. Lockheed Martin has been leading system performance modeling for this project that includes high power laser propagation through the atmosphere, target interactions and subsequent orbital maneuver of the object. This paper will describe the models used, model assumptions and performance estimates for laser maneuver demonstration.

  16. Academic motivation, self-concept, engagement, and performance in high school: key processes from a longitudinal perspective.

    PubMed

    Green, Jasmine; Liem, Gregory Arief D; Martin, Andrew J; Colmar, Susan; Marsh, Herbert W; McInerney, Dennis

    2012-10-01

    The study tested three theoretically/conceptually hypothesized longitudinal models of academic processes leading to academic performance. Based on a longitudinal sample of 1866 high-school students across two consecutive years of high school (Time 1 and Time 2), the model with the most superior heuristic value demonstrated: (a) academic motivation and self-concept positively predicted attitudes toward school; (b) attitudes toward school positively predicted class participation and homework completion and negatively predicted absenteeism; and (c) class participation and homework completion positively predicted test performance whilst absenteeism negatively predicted test performance. Taken together, these findings provide support for the relevance of the self-system model and, particularly, the importance of examining the dynamic relationships amongst engagement factors of the model. The study highlights implications for educational and psychological theory, measurement, and intervention. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  17. A High-Performance Cellular Automaton Model of Tumor Growth with Dynamically Growing Domains

    PubMed Central

    Poleszczuk, Jan; Enderling, Heiko

    2014-01-01

    Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans many spatial and temporal orders of magnitude. Implementation of cellular automata simulations of such tumor growth can be straightforward but computing performance often counterbalances simplicity. Computationally convenient simulation times can be achieved by choosing appropriate data structures, memory and cell handling as well as domain setup. We propose a cellular automaton model of tumor growth with a domain that expands dynamically as the tumor population increases. We discuss memory access, data structures and implementation techniques that yield high-performance multi-scale Monte Carlo simulations of tumor growth. We discuss tumor properties that favor the proposed high-performance design and present simulation results of the tumor growth model. We estimate to which parameters the model is the most sensitive, and show that tumor volume depends on a number of parameters in a non-monotonic manner. PMID:25346862
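A dynamically growing domain can be obtained simply by storing only the occupied lattice sites, so that memory tracks the tumor population rather than a preallocated grid. The toy sketch below is our own simplification of that idea, not the authors' optimized multi-scale implementation.

```python
import random

NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(cells, p_div, rng):
    """One synchronous CA step: each cell tries to divide into a free
    von Neumann neighbour with probability p_div. Because occupancy is
    a sparse set, the domain expands with the tumour automatically."""
    new = set(cells)
    for (x, y) in cells:
        if rng.random() < p_div:
            free = [(x + dx, y + dy) for dx, dy in NEIGHBOURS
                    if (x + dx, y + dy) not in new]
            if free:
                new.add(rng.choice(free))
    return new

def simulate(steps=50, p_div=0.3, seed=1):
    """Grow a tumour from a single transformed cell at the origin."""
    rng = random.Random(seed)
    cells = {(0, 0)}
    for _ in range(steps):
        cells = step(cells, p_div, rng)
    return cells
```

Since cells are only ever added, the occupied set is monotone in the step count, and the bounding box of the set plays the role of the dynamically growing domain.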

  18. Model-as-a-service (MaaS) using the cloud service innovation platform (CSIP)

    USDA-ARS?s Scientific Manuscript database

    Cloud infrastructures for modelling activities such as data processing, performing environmental simulations, or conducting model calibrations/optimizations provide a cost effective alternative to traditional high performance computing approaches. Cloud-based modelling examples emerged into the more...

  19. Unfulfilled Potential: High-Achieving Minority Students and the High School Achievement Gap in Math

    ERIC Educational Resources Information Center

    Kotok, Stephen

    2017-01-01

    This study uses multilevel modeling to examine a subset of the highest performing 9th graders and explores the extent that achievement gaps in math widen for high performing African American and Latino students and their high performing White and Asian peers during high school. Using nationally representative data from the High School Longitudinal…

  20. Evolution and revolution: gauging the impact of technological and technical innovation on Olympic performance.

    PubMed

    Balmer, Nigel; Pleasence, Pascoe; Nevill, Alan

    2012-01-01

    A number of studies have pointed to a plateauing of athletic performance, with the suggestion that further improvements will need to be driven by revolutions in technology or technique. In the present study, we examine post-war men's Olympic performance in jumping events (pole vault, long jump, high jump, triple jump) to determine whether performance has indeed plateaued and to present techniques, derived from models of human growth, for assessing the impact of technological and technical innovation over time (logistic and double logistic models of growth). Significantly, two of the events involve well-documented changes in technology (pole material in pole vault) or technique (the Fosbury Flop in high jump), while the other two do not. We find that in all four cases, performance appears to have plateaued and that no further "general" improvement should be expected. In the case of high jump, the double logistic model provides a convenient method for modelling and quantifying a performance intervention (in this case the Fosbury Flop). However, some shortcomings are revealed for pole vault, where evolutionary post-war improvements and innovation (fibre glass poles) were concurrent, preventing their separate identification in the model. In all four events, it is argued that further general growth in performance will indeed need to rely predominantly on technological or technical innovation.
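The single and double logistic growth curves used in such analyses can be written down directly; the second logistic term models a step intervention (such as the Fosbury Flop) on top of gradual post-war improvement. The parameter values in the usage below are illustrative, not fitted values from the study.

```python
import math

def logistic(t, a, c, d):
    """Single logistic growth: asymptote a, growth rate c, midpoint d."""
    return a / (1.0 + math.exp(-c * (t - d)))

def double_logistic(t, a1, c1, d1, a2, c2, d2):
    """Sum of two logistics: a slow 'evolutionary' improvement plus a
    second term capturing a technical or technological step change."""
    return logistic(t, a1, c1, d1) + logistic(t, a2, c2, d2)
```

As t grows large, the curve plateaus at a1 + a2, which is why a fitted double logistic lets the size of the intervention (a2) be quantified separately from the underlying growth, provided the two terms are not concurrent.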

  1. Impact of high-performance work systems on individual- and branch-level performance: test of a multilevel model of intermediate linkages.

    PubMed

    Aryee, Samuel; Walumbwa, Fred O; Seidu, Emmanuel Y M; Otaye, Lilian E

    2012-03-01

    We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analyses. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Last, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance that emerges at the branch level as aggregated service performance.

  2. High performance HRM: NHS employee perspectives.

    PubMed

    Hyde, Paula; Sparrow, Paul; Boaden, Ruth; Harris, Claire

    2013-01-01

    The purpose of this paper is to examine National Health Service (NHS) employee perspectives of how high performance human resource (HR) practices contribute to their performance. The paper draws on an extensive qualitative study of the NHS. A novel two-part method was used; the first part used focus group data from managers to identify high-performance HR practices specific to the NHS. Employees then conducted a card-sort exercise where they were asked how or whether the practices related to each other and how each practice affected their work. In total, 11 high performance HR practices relevant to the NHS were identified. Also identified were four reactions to a range of HR practices, which the authors developed into a typology according to anticipated beneficiaries (personal gain, organisation gain, both gain and no-one gains). Employees were able to form their own patterns (mental models) of performance contribution for a range of HR practices (60 interviewees produced 91 groupings). These groupings indicated three bundles particular to the NHS (professional development, employee contribution and NHS deal). These mental models indicate employee perceptions about how health services are organised and delivered in the NHS and illustrate the extant mental models of health care workers. As health services are rearranged and financial pressures begin to bite, these mental models will affect employee reactions to changes both positively and negatively. The novel method allows for identification of mental models that explain how NHS workers understand service delivery. It also delineates the complex and varied relationships between HR practices and individual performance.

  3. Themes Found in High Performing Schools: The CAB Model

    ERIC Educational Resources Information Center

    Sanders, Brenda

    2010-01-01

    This study examines the CAB [Cooperativeness, Accountability, and Boundlessness] model of high performing schools by developing case studies of two Portland, Oregon area schools. In pursuing this purpose, this study answers the following three research questions: 1) To what extent is the common correlate cooperativeness demonstrated or absent in…

  4. Ion thruster performance model

    NASA Technical Reports Server (NTRS)

    Brophy, J. R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
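The core algebraic relation of such a model, in which the energy cost per beam ion is the plasma ion production cost divided by the fraction of ions extracted into the beam, can be sketched as follows. The function and parameter names are ours, and the numbers in the test are illustrative rather than measured values.

```python
def beam_ion_energy_cost(plasma_ion_cost_eV, extracted_fraction):
    """Energy cost per beam ion (eV): the discharge-chamber plasma ion
    production cost divided by the fraction of produced ions that are
    extracted to form the beam. A smaller extracted fraction means more
    ions are lost to the walls per beam ion, raising the cost."""
    return plasma_ion_cost_eV / extracted_fraction

def discharge_power(beam_current_A, beam_ion_cost_eV):
    """Discharge power (W) to sustain a given beam current: cost in eV
    per ion times ions per second (I/e) times e collapses to
    cost * current."""
    return beam_ion_cost_eV * beam_current_A
```

This is why designs that extract a large fraction of the ions produced in the discharge chamber show improved performance: the same plasma ion cost yields a lower cost per beam ion.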

  5. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  6. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  7. Regionalized PM2.5 Community Multiscale Air Quality model performance evaluation across a continuous spatiotemporal domain.

    PubMed

    Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L

    2017-01-01

    The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations, and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily concentrations of Particulate Matter ≤ 2.5 micrometers (PM2.5) across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space and time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority being systematic. Areas of high systematic error are collocated with areas of high random error, implying that both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
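The gain from correcting systematic error locally rather than with one domain-wide adjustment can be demonstrated on synthetic data. The sketch below is our own illustration, far simpler than the RAMP method's non-linear, non-homoscedastic correction: it removes the mean model error globally versus per site and compares the resulting Mean Square Error.

```python
import statistics

def mse(errors):
    """Mean square error of a sequence of residuals."""
    return statistics.fmean(e * e for e in errors)

def evaluate_corrections(obs_by_site, mod_by_site):
    """Compare a single domain-wide bias correction against a per-site
    (regionalised) bias correction. Returns (MSE_global, MSE_per_site).
    Per-site correction removes local systematic error, leaving mostly
    random error; a global correction cannot."""
    all_err = [m - o for site in obs_by_site
               for o, m in zip(obs_by_site[site], mod_by_site[site])]
    global_bias = statistics.fmean(all_err)
    mse_global = mse([e - global_bias for e in all_err])
    per_site_resid = []
    for site in obs_by_site:
        err = [m - o for o, m in zip(obs_by_site[site], mod_by_site[site])]
        local_bias = statistics.fmean(err)
        per_site_resid += [e - local_bias for e in err]
    return mse_global, mse(per_site_resid)
```

With two sites whose model biases have opposite signs, the domain-wide bias is near zero and corrects almost nothing, while the per-site correction removes the systematic component entirely, mirroring the intuition behind regionalised performance evaluation.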

  8. Enhancing GIS Capabilities for High Resolution Earth Science Grids

    NASA Astrophysics Data System (ADS)

    Koziol, B. W.; Oehmke, R.; Li, P.; O'Kuinghttons, R.; Theurich, G.; DeLuca, C.

    2017-12-01

    Applications for high performance GIS will continue to increase as Earth system models pursue more realistic representations of Earth system processes. Finer spatial resolution model input and output, unstructured or irregular modeling grids, data assimilation, and regional coordinate systems present novel challenges for GIS frameworks operating in the Earth system modeling domain. This presentation provides an overview of two GIS-driven applications that combine high performance software with big geospatial datasets to produce value-added tools for the modeling and geoscientific community. First, a large-scale interpolation experiment using National Hydrography Dataset (NHD) catchments, a high resolution rectilinear CONUS grid, and the Earth System Modeling Framework's (ESMF) conservative interpolation capability will be described. ESMF is a parallel, high-performance software toolkit that provides capabilities (e.g. interpolation) for building and coupling Earth science applications. ESMF is developed primarily by the NOAA Environmental Software Infrastructure and Interoperability (NESII) group. The purpose of this experiment was to test and demonstrate the utility of high performance scientific software in traditional GIS domains. Special attention will be paid to the nuanced requirements for dealing with high resolution, unstructured grids in scientific data formats. Second, a chunked interpolation application using ESMF and OpenClimateGIS (OCGIS) will demonstrate how spatial subsetting can virtually remove computing resource ceilings for very high spatial resolution interpolation operations. OCGIS is a NESII-developed Python software package designed for the geospatial manipulation of high-dimensional scientific datasets. An overview of the data processing workflow, why a chunked approach is required, and how the application could be adapted to meet operational requirements will be discussed here. 
In addition, we'll provide a general overview of OCGIS's parallel subsetting capabilities including challenges in the design and implementation of a scientific data subsetter.
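The chunked workflow described above can be sketched in outline (hypothetical function names, not the actual OCGIS or ESMF API): split the grid into spatial chunks, process each chunk independently, and reassemble, so peak memory scales with the chunk size rather than the full grid.

```python
import numpy as np

def interpolate_tile(src, tile_rows):
    # Stand-in for a conservative regridding call on one spatial chunk;
    # the identity copy here is purely illustrative.
    return src[tile_rows].copy()

def chunked_interpolation(src, n_chunks):
    """Process a large grid in row-wise chunks so that only one
    chunk's worth of data is resident in memory at a time."""
    out = np.empty_like(src)
    for rows in np.array_split(np.arange(src.shape[0]), n_chunks):
        out[rows] = interpolate_tile(src, rows)
    return out

grid = np.arange(12.0).reshape(6, 2)
result = chunked_interpolation(grid, n_chunks=3)
```

Because each chunk is independent, the same loop parallelizes naturally across processes, which is what removes the memory ceiling for very high resolution grids.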

  9. Performance Models for the Spike Banded Linear System Solver

    DOE PAGES

    Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...

    2011-01-01

With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners compared to state-of-the-art ILU-family preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver; (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver; (iii) we show the excellent prediction capabilities of our model, based on which we argue for the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated on diverse heterogeneous multiclusters, platforms for which performance prediction is particularly challenging. Finally, we use our model to predict the scalability of the Spike algorithm on up to 65,536 cores. In this paper we extend the results presented at the Ninth International Symposium on Parallel and Distributed Computing.
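A pseudo-analytical model of the kind described can be sketched as follows (the functional form and coefficients here are illustrative, not the authors' actual model): assume each phase costs T(p) = a + b·n/p + c·log2(p) on p processors, fit a, b, c from a few measured runs, then extrapolate to large core counts.

```python
import numpy as np

def fit_phase_model(procs, times, n):
    """Least-squares fit of T(p) = a + b*n/p + c*log2(p) from measured runs."""
    p = np.asarray(procs, dtype=float)
    A = np.column_stack([np.ones_like(p), n / p, np.log2(p)])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(times, dtype=float), rcond=None)
    return coeffs

def predict(coeffs, p, n):
    a, b, c = coeffs
    return a + b * n / p + c * np.log2(p)

# Synthetic timings for a problem of size n measured on small core counts
n = 1_000_000
procs = np.array([16, 32, 64, 128])
times = 0.5 + 2e-5 * n / procs + 0.01 * np.log2(procs)

coeffs = fit_phase_model(procs, times, n)
t_64k = predict(coeffs, 65_536, n)  # extrapolate to 65,536 cores
```

The log2 term stands in for a communication cost that grows with machine size; once parameterized per platform, the same form exposes which term dominates, i.e. where the bottleneck lies.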

  10. Performance Steel Castings

    DTIC Science & Technology

    2012-09-30

Development of Sand Properties ... Advanced Modeling Dataset ... High Strength Low Alloy (HSLA) Steels ... Steel Casting and Engineering Support ... to achieve the performance goals required for new systems. The dramatic reduction in weight and increase in capability will require high performance ... for improved weapon system reliability. SFSA developed innovative casting design and manufacturing processes for high performance parts. SFSA is

  11. High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics

    DOE PAGES

    Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis

    2014-07-28

The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.

  12. Real-time flood forecasting by employing artificial neural network based model with zoning matching approach

    NASA Astrophysics Data System (ADS)

    Sulaiman, M.; El-Shafie, A.; Karim, O.; Basri, H.

    2011-10-01

    Flood forecasting models are a necessity, as they help in planning for flood events, and thus help prevent loss of lives and minimize damage. At present, artificial neural networks (ANN) have been successfully applied in river flow and water level forecasting studies. ANN requires historical data to develop a forecasting model. However, long-term historical water level data, such as hourly data, poses two crucial problems in data training. First is that the high volume of data slows the computation process. Second is that data training reaches its optimal performance within a few cycles of data training, due to there being a high volume of normal water level data in the data training, while the forecasting performance for high water level events is still poor. In this study, the zoning matching approach (ZMA) is used in ANN to accurately monitor flood events in real time by focusing the development of the forecasting model on high water level zones. ZMA is a trial and error approach, where several training datasets using high water level data are tested to find the best training dataset for forecasting high water level events. The advantage of ZMA is that relevant knowledge of water level patterns in historical records is used. Importantly, the forecasting model developed based on ZMA successfully achieves high accuracy forecasting results at 1 to 3 h ahead and satisfactory performance results at 6 h. Seven performance measures are adopted in this study to describe the accuracy and reliability of the forecasting model developed.
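The zoning matching idea can be sketched as a data-selection step (the threshold, lead time, and window length below are hypothetical, not the authors' actual configuration): from a long hourly record, keep only training samples whose target falls in a high-water-level zone, then compare several candidate thresholds by trial and error.

```python
import numpy as np

def zone_training_set(levels, lead, window, threshold):
    """Build (inputs, target) pairs for `lead`-hours-ahead forecasting,
    keeping only samples whose target lies in the high-level zone."""
    X, y = [], []
    for t in range(window, len(levels) - lead):
        target = levels[t + lead]
        if target >= threshold:          # zoning: high water levels only
            X.append(levels[t - window:t])
            y.append(target)
    return np.array(X), np.array(y)

# Synthetic hourly record: a long normal-level stretch, then a rising flood
levels = np.concatenate([np.full(50, 1.0), np.linspace(1.0, 5.0, 50)])
X, y = zone_training_set(levels, lead=1, window=6, threshold=3.0)
```

Filtering out the dominant normal-level data is what keeps training from converging on a model that is accurate only where accuracy matters least.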

  13. Performance Engineering Research Institute SciDAC-2 Enabling Technologies Institute Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Mary

    2014-09-19

Enhancing the performance of SciDAC applications on petascale systems has high priority within DOE SC. As we look to the future, achieving expected levels of performance on high-end computing (HEC) systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges, PERI has implemented a unified, tripartite research plan encompassing: (1) performance modeling and prediction; (2) automatic performance tuning; and (3) performance engineering of high profile applications. The PERI performance modeling and prediction activity is developing and refining performance models, significantly reducing the cost of collecting the data upon which the models are based, and increasing model fidelity, speed and generality. Our primary research activity is automatic tuning (autotuning) of scientific software. This activity is spurred by the strong user preference for automatic tools and is based on previous successful activities such as ATLAS, which has automatically tuned components of the LAPACK linear algebra library, and other recent work on autotuning domain-specific libraries. Our third major component is application engagement, to which we are devoting approximately 30% of our effort to work directly with SciDAC-2 applications. This last activity not only helps DOE scientists meet their near-term performance goals, but also helps keep PERI research focused on the real challenges facing DOE computational scientists as they enter the Petascale Era.

  14. Stutter-Step Models of Performance in School

    ERIC Educational Resources Information Center

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  15. Improved models of cable-to-post attachments for high-tension cable barriers.

    DOT National Transportation Integrated Search

    2012-05-01

    Computer simulation models were developed to analyze and evaluate a new cable-to-post attachment for high-tension cable : barriers. The models replicated the performance of a keyway bolt currently used in the design of a high-tension cable : median b...

  16. Analysis of axial-induction-based wind plant control using an engineering and a high-order wind plant model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter M. O.; Scholbrock, Andrew K.

    2015-08-14

    Wind turbines are typically operated to maximize their performance without considering the impact of wake effects on nearby turbines. Wind plant control concepts aim to increase overall wind plant performance by coordinating the operation of the turbines. This paper focuses on axial-induction-based wind plant control techniques, in which the generator torque or blade pitch degrees of freedom of the wind turbines are adjusted. The paper addresses discrepancies between a high-order wind plant model and an engineering wind plant model. Changes in the engineering model are proposed to better capture the effects of axial-induction-based control shown in the high-order model.

  17. Wind tunnel performance results of an aeroelastically scaled 2/9 model of the PTA flight test prop-fan

    NASA Technical Reports Server (NTRS)

    Stefko, George L.; Rose, Gayle E.; Podboy, Gary G.

    1987-01-01

    High speed wind tunnel aerodynamic performance tests of the SR-7A advanced prop-fan have been completed in support of the Prop-Fan Test Assessment (PTA) flight test program. The tests showed that the SR-7A model performed aerodynamically very well. At the cruise design condition, the SR-7A prop-fan had a high measured net efficiency of 79.3 percent.

  18. Performance Dependences of Multiplication Layer Thickness for InP/InGaAs Avalanche Photodiodes Based on Time Domain Modeling

    NASA Technical Reports Server (NTRS)

    Xiao, Yegao; Bhat, Ishwara; Abedin, M. Nurul

    2005-01-01

    InP/InGaAs avalanche photodiodes (APDs) are widely utilized in optical receivers for modern long-haul, high bit-rate optical fiber communication systems. The separate absorption, grading, charge, and multiplication (SAGCM) structure is an important design consideration for APDs with high performance characteristics. Time domain modeling techniques have previously been developed to provide better understanding and to optimize design, saving time and cost in APD research and development. In this work, the dependence of performance on multiplication layer thickness has been investigated by time domain modeling. The performance characteristics include breakdown field and breakdown voltage, multiplication gain, excess noise factor, and frequency response and bandwidth. The simulations are performed for various multiplication layer thicknesses with the areal charge sheet density fixed and all other structure and material parameters held unchanged. The frequency response is obtained from the impulse response by fast Fourier transformation. The modeling results are presented and discussed, and design considerations, especially for high speed operation at 10 Gbit/s, are further analyzed.
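The frequency-response step described (impulse response, then FFT, then bandwidth) can be illustrated generically, using a synthetic single-pole impulse response rather than an actual APD model:

```python
import numpy as np

def bandwidth_from_impulse(h, dt):
    """Return the -3 dB bandwidth of a sampled impulse response h(t)."""
    H = np.abs(np.fft.rfft(h))          # magnitude of the frequency response
    f = np.fft.rfftfreq(len(h), d=dt)   # matching frequency axis in Hz
    mag = H / H[0]                      # normalize to the DC response
    above = mag >= 1 / np.sqrt(2)       # -3 dB criterion
    return f[above][-1]                 # highest frequency still above -3 dB

# Single-pole response h(t) = exp(-t/tau); theory gives f_3dB = 1/(2*pi*tau)
tau, dt, n = 1e-9, 1e-12, 1 << 20
t = np.arange(n) * dt
f3db = bandwidth_from_impulse(np.exp(-t / tau), dt)
```

For tau = 1 ns the recovered bandwidth lands near the analytical 1/(2*pi*tau), about 159 MHz, which is a convenient sanity check on the transform and normalization.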

  19. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    NASA Astrophysics Data System (ADS)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In this paper, models and an algorithm are developed for forming an optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model represents the optimization procedure as a non-linear discrete programming problem: minimizing the execution time of a set of interrelated works performed by a limited number of partially interchangeable performers, subject to a limit on the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for high-rise construction projects.

  20. Marc Henry de Frahan | NREL

    Science.gov Websites

    Computing Project, Marc develops high-fidelity turbulence models to enhance simulation accuracy and efficient numerical algorithms for future high performance computing hardware architectures. Research interests: high performance computing; high-order numerical methods for computational fluid dynamics; fluid

  1. Development and Validation of High Precision Thermal, Mechanical, and Optical Models for the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles

    2006-01-01

    SIM PlanetQuest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9 meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risk and cost, and to allow for a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of the system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents a description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full-scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.

  2. The Effect of Covert Modeling on Communication Apprehension, Communication Confidence, and Performance.

    ERIC Educational Resources Information Center

    Nimocks, Mittie J.; Bromley, Patricia L.; Parsons, Theron E.; Enright, Corinne S.; Gates, Elizabeth A.

    This study examined the effect of covert modeling on communication apprehension, public speaking anxiety, and communication competence. Students identified as highly communication apprehensive received covert modeling, a technique in which one first observes a model doing a behavior, then visualizes oneself performing the behavior and obtaining a…

  3. Teaching elliptical excision skills to novice medical students: a randomized controlled study comparing low- and high-fidelity bench models.

    PubMed

    Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério

    2014-03-01

    The search for alternative and effective forms of simulation training is needed due to the ethical and medico-legal issues involved in training surgical skills on living patients, human cadavers and living animals. The aim was to evaluate whether bench model fidelity interferes with the acquisition of elliptical excision skills by novice medical students. Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills training (n = 8 each): didactic materials (control); organic bench model (low-fidelity); ethylene-vinyl acetate bench model (low-fidelity); chicken leg skin bench model (high-fidelity); or pig foot skin bench model (high-fidelity). Pre- and post-tests were applied. A global rating scale, effect size, and self-perceived confidence on a Likert scale were used to evaluate all elliptical excision performances. The analysis showed that after training, the students practicing on bench models had better performance on the global rating scale (all P < 0.0000) and felt more confident performing elliptical excision skills (all P < 0.0000) than the control group. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills training) was considered large (>0.80) in all measurements. The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to that after training on high-fidelity bench models, and there was a more substantial increase in the elliptical excision performance of students who trained on any simulator compared with learning from didactic materials.

  4. Thermomechanical simulations and experimental validation for high speed incremental forming

    NASA Astrophysics Data System (ADS)

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by an NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been the design of a FEM model able to reproduce the material behavior during the high speed process by defining a thermomechanical model. An experimental campaign has been performed on a CNC lathe at high speed to test process feasibility. The first results have shown that the material exhibits the same performance as in conventional speed ISF and, in some cases, better behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.

  5. Is it better to be average? High and low performance as predictors of employee victimization.

    PubMed

    Jensen, Jaclyn M; Patel, Pankaj C; Raver, Jana L

    2014-03-01

    Given increased interest in whether targets' behaviors at work are related to their victimization, we investigated employees' job performance level as a precipitating factor for being victimized by peers in one's work group. Drawing on rational choice theory and the victim precipitation model, we argue that perpetrators take into consideration the risks of aggressing against particular targets, such that high performers tend to experience covert forms of victimization from peers, whereas low performers tend to experience overt forms of victimization. We further contend that the motivation to punish performance deviants will be higher when performance differentials are salient, such that the effects of job performance on covert and overt victimization will be exacerbated by group performance polarization, yet mitigated when the target has high equity sensitivity (benevolence). Finally, we investigate whether victimization is associated with future performance impairments. Results from data collected at 3 time points from 576 individuals in 62 work groups largely support the proposed model. The findings suggest that job performance is a precipitating factor to covert victimization for high performers and overt victimization for low performers in the workplace with implications for subsequent performance.

  6. An Approach to Integrate a Space-Time GIS Data Model with High Performance Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Zhao, Ziliang; Shaw, Shih-Lung

    2011-01-01

    In this paper, we describe an approach to integrate a Space-Time GIS data model on a high performance computing platform. The Space-Time GIS data model was developed in a desktop computing environment. We use the Space-Time GIS data model to generate a GIS module, which organizes a series of remote sensing data. We are in the process of porting the GIS module into an HPC environment, in which the GIS modules handle large datasets directly via a parallel file system. Although this is an ongoing project, the authors hope this effort can inspire further discussions on the integration of GIS on high performance computing platforms.

  7. Summary of photovoltaic system performance models

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. J.

    1984-01-01

    A detailed overview of photovoltaics (PV) performance modeling capabilities developed for analyzing PV system and component design and policy issues is provided. A set of 10 performance models are selected which span a representative range of capabilities from generalized first order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. The issues are discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. The models are grouped into categories to illustrate their purposes and perspectives.
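The "generalized first order calculations" at the simple end of that range can be illustrated with a common linear PV performance expression (the rated power and temperature coefficient below are illustrative values, not taken from any of the ten models):

```python
def pv_power(g, t_cell, p_stc=250.0, gamma=-0.004):
    """First-order PV module output: rated power scaled by irradiance
    and derated by a linear temperature coefficient (per deg C above 25 C).

    g      -- plane-of-array irradiance in W/m^2
    t_cell -- cell temperature in deg C
    """
    return p_stc * (g / 1000.0) * (1.0 + gamma * (t_cell - 25.0))

# At standard test conditions (1000 W/m^2, 25 C) the model returns rated power
p_stc_check = pv_power(1000.0, 25.0)
# A hot, partly shaded operating point is derated on both counts
p_hot = pv_power(800.0, 50.0)
```

Models at the other end of the range replace this single expression with full electrical network simulation of the module's I-V characteristics.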

  8. LADAR Performance Simulations with a High Spectral Resolution Atmospheric Transmittance and Radiance Model-LEEDR

    DTIC Science & Technology

    2012-03-01

    such as FASCODE is accomplished. The assessment is limited by the correctness of the models used; validating the models is beyond the scope of this...comparisons with other models and validation against data sets (Snell et al. 2000). 2.3.2 Previous Research Several LADAR simulations have been produced...performance models would better capture the atmosphere physics and climatological effects on these systems. Also, further validation needs to be performed

  9. Modeling Materials: Design for Planetary Entry, Electric Aircraft, and Beyond

    NASA Technical Reports Server (NTRS)

    Thompson, Alexander; Lawson, John W.

    2014-01-01

    NASA missions push the limits of what is possible. The development of high-performance materials must keep pace with the agency's demanding, cutting-edge applications. Researchers at NASA's Ames Research Center are performing multiscale computational modeling to accelerate development times and further the design of next-generation aerospace materials. Multiscale modeling combines several computationally intensive techniques ranging from the atomic level to the macroscale, passing output from one level as input to the next level. These methods are applicable to a wide variety of materials systems. For example: (a) Ultra-high-temperature ceramics for hypersonic aircraft: we utilized the full range of multiscale modeling to characterize thermal protection materials for faster, safer air- and spacecraft. (b) Planetary entry heat shields for space vehicles: we computed thermal and mechanical properties of ablative composites by combining several methods, from atomistic simulations to macroscale computations. (c) Advanced batteries for electric aircraft: we performed large-scale molecular dynamics simulations of advanced electrolytes for ultra-high-energy capacity batteries to enable long-distance electric aircraft service. (d) Shape-memory alloys for high-efficiency aircraft: we used high-fidelity electronic structure calculations to determine phase diagrams in shape-memory transformations. Advances in high-performance computing have been critical to the development of multiscale materials modeling. We used nearly one million processor hours on NASA's Pleiades supercomputer to characterize electrolytes with a fidelity that would be otherwise impossible. For this and other projects, Pleiades enables us to push the physics and accuracy of our calculations to new levels.

  10. High capacity demonstration of honeycomb panel heat pipes

    NASA Technical Reports Server (NTRS)

    Tanzer, H. J.

    1989-01-01

    The feasibility of enhancing the performance of the sandwich panel heat pipe was investigated for moderate temperature range heat rejection radiators on future high-power spacecraft. The hardware development program consisted of performance prediction modeling, fabrication, ground test, and data correlation. Using available sandwich panel materials, a series of subscale test panels were augmented with high-capacity sideflow and temperature control variable conductance features, and test evaluated for correlation with performance prediction codes. Using the correlated prediction model, a 50-kW full size radiator was defined using methanol working fluid and closely spaced sideflows. A new concept called the hybrid radiator individually optimizes heat pipe components. A 2.44-m long hybrid test vehicle demonstrated proof-of-principle performance.

  11. Propulsion and Power Rapid Response R&D Support Delivery Order 0041: Power Dense Solid Oxide Fuel Cell Systems: High Performance, High Power Density Solid Oxide Fuel Cells - Materials and Load Control

    DTIC Science & Technology

    2008-12-01

    respectively. 2.3.1.2 Brushless DC Motor Brushless direct current (BLDC) motors feature high efficiency, ease of control, and astonishingly high power...modeling purposes, we ignore the modeling complexity of the BLDC controller and treat the motor and controller "as commutated", i.e. we assume the...High Performance, High Power Density Solid Oxide Fuel Cells - Materials and Load Control Stephen W. Sofie, Steven R. Shaw, Peter A. Lindahl, and Lee H

  12. High Performance Computing Software Applications for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative deconvolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  13. Advances In High Temperature (Viscoelastoplastic) Material Modeling for Thermal Structural Analysis

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Saleeb, Atef F.

    2005-01-01

    High temperature applications demand high performance materials: (1) complex thermomechanical loading; (2) complex material response requiring time-dependent/hereditary models (viscoelastic/viscoplastic); and (3) comprehensive characterization (tensile, creep, relaxation) for a variety of material systems.

  14. Effects of high-latitude drivers on Ionosphere/Thermosphere parameters

    NASA Astrophysics Data System (ADS)

    Shim, J.; Kuznetsova, M. M.; Rastaetter, L.; Berrios, D.; Codrescu, M.; Emery, B. A.; Fedrizzi, M.; Foerster, M.; Foster, B. T.; Fuller-Rowell, T. J.; Mannucci, A.; Negrea, C.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Coster, A. J.; Goncharenko, L.; Lomidze, L.; Scherliess, L.

    2012-12-01

    In order to study the effects of high-latitude drivers, we compared Ionosphere/Thermosphere (IT) model performance in predicting IT parameters obtained using different models for the high-latitude ionospheric electric potential, including Weimer 2005, AMIE (assimilative mapping of ionospheric electrodynamics) and global magnetosphere models (e.g. the Space Weather Modeling Framework). For this study, the physical parameters selected are Total Electron Content (TEC) obtained by GPS ground stations, and NmF2 and hmF2 from COSMIC LEO satellites in eight selected 5-degree longitude sectors. In addition, Ne, Te, Ti, and Tn at about 300 km height from ISRs are considered. We compared the modeled values with the observations for the 2006 AGU storm period and quantified the performance of the models using skill scores. Furthermore, the skill scores are obtained for three latitude regions (low, middle and high latitudes) in order to investigate the latitudinal dependence of the models' performance. This study is supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. The CCMC converted ionosphere drivers from a variety of sources and developed an interpolation tool that can be employed by any modeler for easy driver swapping. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) as a resource for the space science communities.
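Skill scores of the kind mentioned are commonly defined relative to a reference forecast; one widespread RMSE-based form (the abstract does not specify which metric was used, so this is an illustrative choice) is SS = 1 - RMSE_model / RMSE_reference, where 1 is a perfect score and 0 means no improvement over the reference.

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def skill_score(model_pred, ref_pred, obs):
    """SS = 1 - RMSE_model / RMSE_ref; positive means the model beats
    the reference (e.g. a climatology or persistence forecast)."""
    return 1.0 - rmse(model_pred, obs) / rmse(ref_pred, obs)

obs = [10.0, 12.0, 14.0, 16.0]
model = [11.0, 12.5, 13.5, 16.5]   # close to the observations
ref = [13.0, 13.0, 13.0, 13.0]     # constant "climatology" reference
ss = skill_score(model, ref, obs)
```

Computing the score separately over low-, middle-, and high-latitude subsets of the observations gives the latitudinal breakdown described above.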

  15. Surface knowledge and risks to landing and roving - The scale problem

    NASA Technical Reports Server (NTRS)

    Bourke, Roger D.

    1991-01-01

    The role of surface information in the performance of surface exploration missions is discussed. Accurate surface models based on direct measurements or inference are considered to be an important component in mission risk management. These models can be obtained using high resolution orbital photography or a combination of laser profiling, thermal inertia measurements, and/or radar. It is concluded that strategies for Martian exploration should use high confidence models to achieve maximum performance and low risk.

  16. Influence of Lift Offset on Rotorcraft Performance

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2009-01-01

    The influence of lift offset on the performance of several rotorcraft configurations is explored. A lift-offset rotor, or advancing blade concept, is a hingeless rotor that can attain good efficiency at high speed by operating with more lift on the advancing side than on the retreating side of the rotor disk. The calculated performance capability of modern-technology coaxial rotors utilizing a lift offset is examined, including rotor performance optimized for hover and high-speed cruise. The ideal induced power loss of coaxial rotors in hover and twin rotors in forward flight is presented. The aerodynamic modeling requirements for performance calculations are evaluated, including wake and drag models for the high-speed flight condition. The influence of configuration on the performance of rotorcraft with lift-offset rotors is explored, considering tandem and side-by-side rotorcraft as well as wing-rotor lift share.

  17. Influence of Lift Offset on Rotorcraft Performance

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2008-01-01

    The influence of lift offset on the performance of several rotorcraft configurations is explored. A lift-offset rotor, or advancing blade concept, is a hingeless rotor that can attain good efficiency at high speed by operating with more lift on the advancing side than on the retreating side of the rotor disk. The calculated performance capability of modern-technology coaxial rotors utilizing a lift offset is examined, including rotor performance optimized for hover and high-speed cruise. The ideal induced power loss of coaxial rotors in hover and twin rotors in forward flight is presented. The aerodynamic modeling requirements for performance calculations are evaluated, including wake and drag models for the high-speed flight condition. The influence of configuration on the performance of rotorcraft with lift-offset rotors is explored, considering tandem and side-by-side rotorcraft as well as wing-rotor lift share.

  18. Scaling predictive modeling in drug development with cloud computing.

    PubMed

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and the increasing time needed to analyze them are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared them with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
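
    The parallelized model building described here can be sketched with the standard library. `train_model` is a hypothetical stand-in for a real ligand-based learner, and the thread pool stands in for the cluster nodes or cloud instances that would carry the real workload:

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def train_model(chunk):
    """Hypothetical stand-in for fitting one model on one data chunk:
    here it just 'fits' the mean of the target values."""
    return mean(y for _, y in chunk)

def parallel_fit(dataset, n_workers=4):
    """Split the data set into one chunk per worker and fit in parallel,
    mirroring how independent model builds are farmed out to nodes."""
    size = max(1, len(dataset) // n_workers)
    chunks = [dataset[i:i + size] for i in range(0, len(dataset), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(train_model, chunks))

# Toy data: feature x with target y = 2x.
data = [(x, 2.0 * x) for x in range(100)]
submodels = parallel_fit(data)
```

On a real cluster or cloud deployment each `train_model` call would be a full model build on its own instance; the fan-out/collect pattern is the same.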

  19. Successful physiological aging and episodic memory: a brain stimulation study.

    PubMed

    Manenti, Rosa; Cotelli, Maria; Miniussi, Carlo

    2011-01-01

    Functional neuroimaging studies have shown that younger adults tend to asymmetrically recruit specific regions of one hemisphere in episodic memory tasks (the Hemispheric Encoding/Retrieval Asymmetry (HERA) model). In older adults, this hemispheric asymmetry is generally reduced, as suggested by the Hemispheric Asymmetry Reduction in OLDer adults (HAROLD) model. Recent work suggests that while low-performing older adults do not show this reduced asymmetry, high-performing older adults counteract age-related neural decline through a plastic reorganization of cerebral networks that results in reduced functional asymmetry. However, whether high- and low-performing older adults show different degrees of asymmetry, and the relevance of this process for counteracting aging, have not been clarified. We used transcranial magnetic stimulation (TMS) to transiently interfere with the function of the dorsolateral prefrontal cortex (DLPFC) during encoding or retrieval of associated and non-associated word pairs. A group of healthy older adults was studied during encoding and retrieval of word pairs. The subjects were divided into two subgroups according to their experimental performance (i.e., high- and low-performing). TMS effects on retrieval differed according to the subject's subgroup. In particular, the predominance of left vs. right DLPFC effects during encoding, predicted by the HERA model, was observed only in low-performing older adults, while the asymmetry reduction predicted by the HAROLD model was selectively shown by the high-performing group. The present data confirm that older adults with higher memory performance show less prefrontal asymmetry as an efficient strategy to counteract age-related memory decline. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Mathematical modeling of high and low temperature heat pipes

    NASA Technical Reports Server (NTRS)

    Chi, S. W.

    1971-01-01

    Following a review of heat and mass transfer theory relevant to heat pipe performance, math models are developed for calculating the heat-transfer limitations of high-temperature heat pipes and the heat-transfer limitations and temperature gradient of low-temperature heat pipes. Calculated results are compared with the available experimental data from various sources to increase confidence in the present math models. Complete listings of two computer programs, for high- and low-temperature heat pipes respectively, are included. These programs enable the performance of heat pipes with wrapped-screen, rectangular-groove, or screen-covered rectangular-groove wicks to be predicted.
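
    Among the heat-transfer limitations such programs compute, the capillary limit typically governs. A simplified pressure-balance sketch (standard heat pipe relations with illustrative, water-like property values; the vapor pressure drop is neglected, and this is not the report's actual code):

```python
import math

def capillary_limit(sigma, r_c, mu_l, rho_l, h_fg, K, A_w, L_eff,
                    g=9.81, tilt_rad=0.0, L_total=None):
    """Maximum heat transport before the wick's capillary pumping fails:
    the capillary head 2*sigma/r_c must overcome the Darcy liquid
    pressure drop and the hydrostatic (gravity) pressure drop."""
    if L_total is None:
        L_total = L_eff
    dp_cap = 2.0 * sigma / r_c                       # max capillary head, Pa
    dp_grav = rho_l * g * L_total * math.sin(tilt_rad)
    # Darcy liquid pressure drop per watt of transported heat:
    dp_liq_per_watt = mu_l * L_eff / (K * A_w * rho_l * h_fg)
    return max(0.0, (dp_cap - dp_grav) / dp_liq_per_watt)

# Illustrative water-like properties at moderate temperature.
q_level = capillary_limit(sigma=0.06, r_c=50e-6, mu_l=3e-4, rho_l=960.0,
                          h_fg=2.25e6, K=1e-10, A_w=1e-5, L_eff=0.3)
q_tilted = capillary_limit(sigma=0.06, r_c=50e-6, mu_l=3e-4, rho_l=960.0,
                           h_fg=2.25e6, K=1e-10, A_w=1e-5, L_eff=0.3,
                           tilt_rad=math.radians(10))
```

Tilting the pipe against gravity eats into the available capillary head, so the predicted maximum heat transport drops, which is the qualitative behavior such design programs explore.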

  1. Temperature Dependent Modal Test/Analysis Correlation of X-34 Fastrac Composite Rocket Nozzle

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Brunty, Joseph A. (Technical Monitor)

    2001-01-01

    A unique high-temperature modal test and model correlation/update program has been performed on the composite nozzle of the FASTRAC engine for the NASA X-34 Reusable Launch Vehicle. The program was required to provide an accurate high-temperature model of the nozzle for incorporation into the engine system structural dynamics model for loads calculation; this model differs significantly from the ambient case because composite stiffness properties decrease sharply with heating. The high-temperature modal test was performed during a hot-fire test of the nozzle. Previously, a series of high-fidelity modal tests and finite element model correlations of the nozzle in a free-free configuration had been performed. This model was then attached to a modal-test-verified model of the engine hot-fire test stand, and the ambient system mode shapes were identified. A reduced set of accelerometers was then attached to the nozzle, the engine was fired for its full duration, and the frequency peaks corresponding to the ambient nozzle modes were individually isolated and tracked as they decreased during the test. To update the finite-element model of the nozzle to these frequency curves, the percentage differences of the anisotropic composite moduli due to temperature variation from ambient, which had been used in the initial modeling and which were obtained by small-sample coupon testing, were multiplied by an iteratively determined constant factor. These new properties were used to create high-temperature nozzle models corresponding to 10-second engine operation increments, which were tied into the engine system model for loads determination.

  2. The implementation of sea ice model on a regional high-resolution scale

    NASA Astrophysics Data System (ADS)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data, and access to high-performance computing clusters has provided the capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with high resolution, were used to estimate the sensitivity of the model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) agreed with the simulation for the year 2010-2011.

  3. Model Policies in Support of High Performance School Buildings for All Children

    ERIC Educational Resources Information Center

    21st Century School Fund, 2006

    2006-01-01

    The purpose of "Model Policies in Support of High Performance School Buildings for All Children" is to begin to create a coherent and comprehensive set of state policies that will provide the governmental infrastructure for effective and creative practice in facility management. There are examples of good policy in many states, but no state has a coherent set of…

  4. A Better Leveled Playing Field for Assessing Satisfactory Job Performance of Superintendents on the Basis of High-Stakes Testing Outcomes

    ERIC Educational Resources Information Center

    Young, I. Phillip; Cox, Edward P.; Buckman, David G.

    2014-01-01

    To assess satisfactory job performance of superintendents on the basis of school districts' high-stakes testing outcomes, existing teacher models were reviewed and critiqued as potential options for retrofit. For these models, specific problems were identified relative to the choice of referent groups. An alternate referent group (statewide…

  5. The Significance of the Response to Intervention Model on Elementary Reading Performance in Missouri

    ERIC Educational Resources Information Center

    Harrison, Philip L.

    2017-01-01

    The purpose of this study is to ascertain the essential elements of Response to Intervention programs among 150 high performing Title I schools with high rates of poverty as measured by free/reduced lunch participation rates. Response to Intervention (RTI) is a nationally-known instructional model used to assist students who are struggling to…

  6. Predicting High-Power Performance in Professional Cyclists.

    PubMed

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess whether short-duration (5 to ~300 s) high-power performance can be accurately predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the power output predicted by the APR model. The power output predicted by the model showed very large to nearly perfect correlations with the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the power output predicted by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship, with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
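
    The power-duration relationship described above can be sketched as an exponential decay from sprint peak power toward maximal aerobic power; the rider values and decay constant below are illustrative, not the study's data:

```python
import math

def apr_power(t, map_w, sprint_peak_w, k):
    """Anaerobic power reserve (APR) model: sustainable power decays
    exponentially from sprint peak power toward maximal aerobic power
    (MAP) as the all-out duration t (seconds) grows."""
    apr = sprint_peak_w - map_w          # anaerobic power reserve, W
    return map_w + apr * math.exp(-k * t)

# Illustrative rider: MAP 430 W, sprint peak 1300 W, fitted decay 0.025.
p_15s = apr_power(15, 430, 1300, 0.025)
p_300s = apr_power(300, 430, 1300, 0.025)
```

At short durations the prediction sits near sprint peak power; by ~300 s it has collapsed onto MAP, matching the 5 to ~300 s window the study tested.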

  7. What Matters from Admissions? Identifying Success and Risk Among Canadian Dental Students.

    PubMed

    Plouffe, Rachel A; Hammond, Robert; Goldberg, Harvey A; Chahine, Saad

    2018-05-01

    The aims of this study were to determine whether different student profiles would emerge in terms of high and low GPA performance in each year of dental school and to investigate the utility of preadmission variables in predicting performance and performance stability throughout each year of dental school. Data from 11 graduating cohorts (2004-14) at the Schulich School of Medicine & Dentistry, University of Western Ontario, Canada, were collected and analyzed using bivariate correlations, latent profile analysis, and hierarchical generalized linear models (HGLMs). The data analyzed were for 616 students in total (332 males and 284 females). Four models were developed to predict adequate and poor performance throughout each of four dental school years. An additional model was developed to predict student performance stability across time. Two separate student profiles reflecting high and low GPA performance across each year of dental school were identified, and scores on cognitive preadmission variables differentially predicted the probability of grouping into high and low performance profiles. Students with higher pre-dental GPAs and DAT chemistry scores were most likely to remain stable in a high-performance group across each year of dental school. Overall, the findings suggest that selection committees should consider pre-dental GPA and DAT chemistry scores as important tools for predicting dental school performance and stability across time. This research is important in determining how to better predict success and failure in various areas of preclinical dentistry courses and to provide low-performing students with adequate academic assistance.

  8. Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Yamada, Masako

    The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.

  9. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program United3D that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates the quality scores (Qscore) of predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the QA methods that participated in CASP9. This result indicates that the performance of United3D in identifying the high quality models from the models predicted by CASP9 servers on 116 targets was best among the QA methods that were tested in CASP9. United3D also produced high average Pearson correlation coefficients (0.93) and acceptable Kendall rank correlation coefficients (0.68) between the Qscore and GDT_TS. This performance was competitive with the other top-ranked QA methods that were tested in CASP9. These results indicate that United3D is a useful tool for selecting high quality models from many candidate model structures provided by various modeling methods. United3D will improve the accuracy of protein structure prediction.
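
    The consensus idea behind such clustering-based QA can be sketched as scoring each candidate model by its mean similarity to the rest of the pool. Here a toy inverse-distance similarity on short feature vectors stands in for real pairwise structural comparison (e.g. GDT_TS superposition); United3D's actual scoring is more involved:

```python
def consensus_scores(models):
    """Score each candidate by its mean similarity to all other
    candidates; models near the consensus of the pool score highest."""
    def similarity(a, b):
        # Toy similarity: inverse Euclidean distance on feature vectors.
        d = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + d)
    n = len(models)
    return [
        sum(similarity(m, other) for j, other in enumerate(models) if j != i) / (n - 1)
        for i, m in enumerate(models)
    ]

# Three near-consensus candidates and one outlier.
pool = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [8.0, 9.0]]
scores = consensus_scores(pool)
best = scores.index(max(scores))
```

The outlier gets the lowest score because it resembles nothing else in the pool, which is exactly why consensus methods are robust when most candidate models cluster around the right answer.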

  10. Criteria for assessing problem solving and decision making in complex environments

    NASA Technical Reports Server (NTRS)

    Orasanu, Judith

    1993-01-01

    Training crews to cope with unanticipated problems in high-risk, high-stress environments requires models of effective problem solving and decision making. Existing decision theories use the criteria of logical consistency and mathematical optimality to evaluate decision quality. While these approaches are useful under some circumstances, the assumptions underlying these models frequently are not met in dynamic time-pressured operational environments. Also, applying formal decision models is both labor and time intensive, a luxury often lacking in operational environments. Alternate approaches and criteria are needed. Given that operational problem solving and decision making are embedded in ongoing tasks, evaluation criteria must address the relation between those activities and satisfaction of broader task goals. Effectiveness and efficiency become relevant for judging reasoning performance in operational environments. New questions must be addressed: What is the relation between the quality of decisions and overall performance by crews engaged in critical high risk tasks? Are different strategies most effective for different types of decisions? How can various decision types be characterized? A preliminary model of decision types found in air transport environments will be described along with a preliminary performance model based on an analysis of 30 flight crews. The performance analysis examined behaviors that distinguish more and less effective crews (based on performance errors). Implications for training and system design will be discussed.

  11. Computer-Assisted Decision Support for Student Admissions Based on Their Predicted Academic Performance.

    PubMed

    Muratov, Eugene; Lewis, Margaret; Fourches, Denis; Tropsha, Alexander; Cox, Wendy C

    2017-04-01

    Objective. To develop predictive computational models forecasting the academic performance of students in the didactic-rich portion of a doctor of pharmacy (PharmD) curriculum as admission-assisting tools. Methods. All PharmD candidates over three admission cycles were divided into two groups: those who completed the PharmD program with a GPA ≥ 3, and the remaining candidates. The Random Forest machine learning technique was used to develop a binary classification model based on 11 pre-admission parameters. Results. Robust and externally predictive models were developed that had a particularly high overall accuracy of 77% for candidates with high or low academic performance. These multivariate models were superior in predicting these groups to models obtained using undergraduate GPA and composite PCAT scores only. Conclusion. The models developed in this study can be used to improve the admission process as preliminary filters and thus quickly identify candidates who are likely to be successful in the PharmD curriculum.
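
    The binary classification setup can be illustrated with a deterministic toy voting ensemble of one-feature threshold rules, a heavily simplified stand-in for the Random Forest actually used in the study; the two features and all data below are synthetic:

```python
def fit_stumps(rows, labels):
    """One threshold 'stump' per feature: split at the feature mean and
    let the majority class above the split set the vote. A deterministic
    toy stand-in for a tree ensemble over 11 pre-admission parameters."""
    stumps = []
    for f in range(len(rows[0])):
        thr = sum(r[f] for r in rows) / len(rows)
        above = [lab for r, lab in zip(rows, labels) if r[f] > thr]
        vote_above = 1 if sum(above) * 2 >= len(above) else 0
        stumps.append((f, thr, vote_above))
    return stumps

def predict(stumps, row):
    """Majority vote across the per-feature stumps."""
    votes = sum(v if row[f] > thr else 1 - v for f, thr, v in stumps)
    return 1 if votes * 2 >= len(stumps) else 0

# Synthetic candidates: [undergraduate GPA, composite aptitude score];
# label 1 = finished the program with GPA >= 3.
rows = [[3.8, 85], [3.6, 80], [3.9, 90], [2.6, 55], [2.8, 60], [2.5, 50]]
labels = [1, 1, 1, 0, 0, 0]
model = fit_stumps(rows, labels)
```

A real Random Forest adds bootstrap sampling, random feature subsets, and full decision trees, but the multivariate-voting intuition, i.e. why it can beat a single-variable GPA or PCAT cutoff, is the same.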

  12. Maydays and Murphies: A Study of the Effect of Organizational Design, Task, and Stress on Organizational Performance.

    ERIC Educational Resources Information Center

    Lin, Zhiang; Carley, Kathleen

    How should organizations of intelligent agents be designed so that they exhibit high performance even during periods of stress? A formal model of organizational performance given a distributed decision-making environment in which agents encounter a radar detection task is presented. Using this model the performance of organizations with various…

  13. Modeling of nitrate concentration in groundwater using artificial intelligence approach--a case study of Gaza coastal aquifer.

    PubMed

    Alagha, Jawad S; Said, Md Azlin Md; Mogheir, Yunes

    2014-01-01

    Nitrate concentration in groundwater is influenced by complex and interrelated variables, leading to great difficulty during the modeling process. The objectives of this study are (1) to evaluate the performance of two artificial intelligence (AI) techniques, namely artificial neural networks and support vector machines, in modeling groundwater nitrate concentration using scant input data, as well as (2) to assess the effect of data clustering as a pre-modeling technique on the developed models' performance. The AI models were developed using data from 22 municipal wells of the Gaza coastal aquifer in Palestine from 2000 to 2010. Results indicated high simulation performance, with the correlation coefficient and the mean average percentage error of the best model reaching 0.996 and 7%, respectively. The variables that strongly influenced groundwater nitrate concentration were previous nitrate concentration, groundwater recharge, and on-ground nitrogen load of each land use land cover category in the well's vicinity. The results also demonstrated the merit of performing clustering of input data prior to the application of AI models. With their high performance and simplicity, the developed AI models can be effectively utilized to assess the effects of future management scenarios on groundwater nitrate concentration, leading to more reasonable groundwater resources management and decision-making.

  14. Nonlinear system identification of smart structures under high impact loads

    NASA Astrophysics Data System (ADS)

    Sarp Arsava, Kemal; Kim, Yeesock; El-Korchi, Tahar; Park, Hyo Seon

    2013-05-01

    The main purpose of this paper is to develop numerical models for the prediction and analysis of the highly nonlinear behavior of integrated structure control systems subjected to high impact loading. A time-delayed adaptive neuro-fuzzy inference system (TANFIS) is proposed for modeling the complex nonlinear behavior of smart structures equipped with magnetorheological (MR) dampers under high impact forces. Experimental studies are performed to generate sets of input and output data for training and validation of the TANFIS models. The high impact load and current signals are used as the input disturbance and control signals, while the displacement and acceleration responses from the structure-MR damper system are used as the output signals. The benchmark adaptive neuro-fuzzy inference system (ANFIS) is used as a baseline. Comparisons of the trained TANFIS models with experimental results demonstrate that the TANFIS modeling framework is an effective way to capture the nonlinear behavior of integrated structure-MR damper systems under high impact loading. In addition, the performance of the TANFIS model is much better than that of ANFIS in both the training and the validation processes.

  15. Teaching Elliptical Excision Skills to Novice Medical Students: A Randomized Controlled Study Comparing Low- and High-Fidelity Bench Models

    PubMed Central

    Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério

    2014-01-01

    Background: Alternative and effective forms of training simulation are needed due to the ethical and medico-legal aspects involved in training surgical skills on living patients, human cadavers, and living animals. Aims: To evaluate whether bench model fidelity interferes with the acquisition of elliptical excision skills by novice medical students. Materials and Methods: Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills’ training (n = 8): didactic materials (control); organic bench model (low-fidelity); ethylene-vinyl acetate bench model (low-fidelity); chicken legs’ skin bench model (high-fidelity); or pig foot skin bench model (high-fidelity). Pre- and post-tests were applied. Global rating scale, effect size, and self-perceived confidence based on Likert scale were used to evaluate all elliptical excision performances. Results: The analysis showed that after training, the students practicing on bench models had better performance based on Global rating scale (all P < 0.0000) and felt more confident to perform elliptical excision skills (all P < 0.0000) when compared to the control. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills’ training) was considered large (>0.80) in all measurements. Conclusion: The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to the training on high-fidelity bench models; and there was a more substantial increase in elliptical excision performances of students that trained on all simulators compared to the learning on didactic materials. PMID:24700937

  16. Towards an Automated Full-Turbofan Engine Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Turner, Mark G.; Norris, Andrew; Veres, Joseph P.

    2003-01-01

    The objective of this study was to demonstrate the high-fidelity numerical simulation of a modern high-bypass turbofan engine. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled three-dimensional computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady-state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the three-dimensional component models are integrated into the cycle model via partial performance maps generated automatically from the CFD flow solutions using one-dimensional meanline turbomachinery programs. This paper reports on the progress made towards the full-engine simulation of the GE90-94B engine, highlighting the generation of the high-pressure compressor partial performance map. The ongoing work will provide a system to evaluate the steady and unsteady aerodynamic and mechanical interactions between engine components at design and off-design operating conditions.

  17. Modelling impacts of performance on the probability of reproducing, and thereby on productive lifespan, allow prediction of lifetime efficiency in dairy cows.

    PubMed

    Phuong, H N; Blavy, P; Martin, O; Schmidely, P; Friggens, N C

    2016-01-01

    Reproductive success is a key component of the lifetime efficiency of cows, which is the ratio of energy in milk (MJ) to energy intake (MJ) over the lifespan. At the animal level, breeding and feeding management can substantially impact the milk yield, body condition, and energy balance of cows, which are known as major contributors to reproductive failure in dairy cattle. This study extended an existing lifetime performance model to incorporate the impacts that performance changes due to changing breeding and feeding strategies have on the probability of reproducing, and thereby on the productive lifespan, thus allowing the prediction of a cow's lifetime efficiency. The model is dynamic and stochastic, with an individual cow being the unit modelled and one day being the unit of time. To evaluate the model, data from a French study including Holstein and Normande cows fed high-concentrate diets, and data from a Scottish study including Holstein cows selected for high and average genetic merit for fat plus protein that were fed high- v. low-concentrate diets, were used. Generally, the model consistently simulated the productive and reproductive performance of various genotypes of cows across feeding systems. In the French data, the model adequately simulated the reproductive performance of Holsteins but significantly under-predicted that of Normande cows. In the Scottish data, conception to first service was comparably simulated, whereas interval traits were slightly under-predicted. Selection for greater milk production impaired reproductive performance and lifespan but not lifetime efficiency. The definition of lifetime efficiency used in this model did not include associated costs or herd-level effects. Further work should include such economic indicators to allow more accurate simulation of lifetime profitability in different production scenarios.

  18. Performance Improvement [in HRD].

    ERIC Educational Resources Information Center

    1995

    These four papers are from a symposium that was facilitated by Richard J. Torraco at the 1995 conference of the Academy of Human Resource Development (HRD). "Performance Technology--Isn't It Time We Found Some New Models?" (William J. Rothwell) reviews briefly two classic models, describes criteria for the high performance workplace…

  19. VI-G, Sec. 661, P.L. 91-230. Final Performance Report.

    ERIC Educational Resources Information Center

    1976

    Presented is the final performance report of the CSDC model which is designed to provide services for learning disabled high school students. Sections cover the following program aspects: organizational structure, inservice sessions, identification of students, materials and equipment, evaluation of student performance, evaluation of the model,…

  20. Fundamentals of Modeling, Data Assimilation, and High-performance Computing

    NASA Technical Reports Server (NTRS)

    Rood, Richard B.

    2005-01-01

    This lecture will introduce the concepts of modeling, data assimilation, and high-performance computing as they relate to the study of atmospheric composition. The lecture will work from basic definitions and will strive to provide a framework for thinking about the development and application of models and data assimilation systems. It will not provide technical or algorithmic information, leaving that to textbooks, technical reports, and ultimately scientific journals. References to a number of textbooks and papers will be provided as a gateway to the literature.

  1. High temperature furnace modeling and performance verifications

    NASA Technical Reports Server (NTRS)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high-temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high-temperature furnace using a zirconia ceramic tube as the heating element and on an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the performance of the zirconia furnace as a directional solidification furnace element. The third objective was to establish a database on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One- and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling-media temperatures for steady-state operation of the furnace. Finally, the fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results for these objectives are presented.

  2. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels, such as transcriptomic or proteomic levels, which is critical to understanding the pathogenesis of disease. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs with a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We show that our method possesses the sure screening property and can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.

  3. A two-model hydrologic ensemble prediction of hydrograph: case study from the upper Nysa Klodzka river basin (SW Poland)

    NASA Astrophysics Data System (ADS)

    Niedzielski, Tomasz; Mizinski, Bartlomiej

    2016-04-01

    The HydroProg system was developed within the framework of research project no. 2011/01/D/ST10/04171 of the National Science Centre of Poland and steadily produces multimodel ensemble hydrograph predictions in real time. Although six ensemble members are available at present, the longest record of predictions and their statistics is available for two data-based models (uni- and multivariate autoregressive models). Thus, we consider 3-hour predictions of water levels, with lead times ranging from 15 to 180 minutes, computed every 15 minutes since August 2013 for the Nysa Klodzka basin (SW Poland) using the two approaches and their two-model ensemble. Since the launch of the HydroProg system there have been 12 high flow episodes, and the objective of this work is to present the performance of the two-model ensemble in forecasting these events. For the sake of brevity, we limit our investigation to a single gauge on the Nysa Klodzka river in the town of Klodzko, which is centrally located in the studied basin. We identified certain regular scenarios of how the models perform in predicting the high flows in Klodzko. At the initial phase of a high flow, well before the rising limb of the hydrograph, the two-model ensemble is found to provide the most skilful prognoses of water levels. However, while forecasting the rising limb itself, either the two-model solution or the vector autoregressive model offers the best predictive performance. In addition, it is hypothesized that as the rising limb develops, the vector autoregression becomes the most skilful of the scrutinized approaches. Our simple two-model exercise confirms that multimodel hydrologic ensemble predictions cannot be treated as universal solutions suitable for forecasting an entire high flow event; their superior performance may hold only for certain phases of a high flow.
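
    The core of a two-model ensemble of data-based predictors can be sketched in a few lines. This is a hedged illustration, not HydroProg itself: here both members are univariate AR models of different order and the ensemble is a simple weighted average, whereas the system described above combines uni- and multivariate autoregressions.

```python
# Minimal sketch: average multi-step forecasts from two AR models.
import numpy as np

def fit_ar(y, order):
    """Least-squares AR(order) coefficients (constant + lags)."""
    rows = [np.r_[1.0, y[i - order:i][::-1]] for i in range(order, len(y))]
    coef, *_ = np.linalg.lstsq(np.array(rows), y[order:], rcond=None)
    return coef

def forecast_ar(y, coef, steps):
    """Iterate the fitted recursion `steps` steps past the end of y."""
    order = len(coef) - 1
    hist = list(y[-order:])
    out = []
    for _ in range(steps):
        nxt = coef[0] + np.dot(coef[1:], hist[::-1][:order])
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

def two_model_ensemble(y, steps, w=0.5):
    f1 = forecast_ar(y, fit_ar(y, 1), steps)
    f2 = forecast_ar(y, fit_ar(y, 2), steps)
    return w * f1 + (1.0 - w) * f2
```

    The finding above, that the best member changes with the phase of the high flow, corresponds to making the weight w depend on the situation rather than fixing it at 0.5.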

  4. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to effectively model antenna problems; apply lessons learned from the high-order/spectral solution of swirling 3-D jets to the electromagnetics effort; transition a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  5. Research and development on performance models of thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan

    2009-07-01

    Traditional ACQUIRE models perform the target discrimination tasks (detection, orientation, recognition, and identification) for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). The Johnson criteria are generally pessimistic for performance prediction of sampled imagers, given the development of focal plane array (FPA) detectors and digital image processing technology. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; in particular, the TTP metric provides better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low-SNR TIS; unresolved questions in the performance assessment of TIS are indicated; and, finally, development directions for TIS performance models are discussed.
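
    The Johnson-criteria range prediction the paper takes as its baseline can be illustrated with a back-of-envelope calculation: the predicted range for a task is where the sensor resolves the task's N50 cycle count across the target's critical dimension. The N50 values below are the commonly quoted Johnson numbers, and the sensor and target figures are hypothetical; this is not one of the paper's models.

```python
# Illustrative Johnson-criteria range estimate (commonly quoted N50 values).
JOHNSON_N50 = {"detection": 1.0, "orientation": 1.4,
               "recognition": 4.0, "identification": 6.4}

def johnson_range(f_resolvable_cyc_per_mrad, critical_dim_m, task):
    """Range (m) at which `task` is achieved with 50% probability:
    cycles on target = f * (1000 * h / R), set equal to N50 and solved for R."""
    return 1000.0 * f_resolvable_cyc_per_mrad * critical_dim_m / JOHNSON_N50[task]

# Hypothetical sensor resolving 3 cycles/mrad at the scene Delta-T,
# against a 2.3 m target critical dimension:
r = johnson_range(3.0, 2.3, "recognition")  # 1725 m
```

    TOD, TRM3, and TTP replace this single cycle-count criterion with metrics that account for sampling, which is why they track sampled-imager performance more accurately.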

  6. COOP 3D ARPA Experiment 109 National Center for Atmospheric Research

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Coupled atmospheric and hydrodynamic forecast models were executed on the supercomputing resources of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado and the Ohio Supercomputing Center (OSC) in Columbus, Ohio, respectively. The interoperation of the forecast models on these geographically diverse, high performance Cray platforms required the transfer of large three-dimensional data sets at very high information rates. High capacity, terrestrial fiber optic transmission system technologies were integrated with those of an experimental high speed communications satellite in Geosynchronous Earth Orbit (GEO) to test the integration of the two systems. Operation over a spacecraft in GEO required modification of the standard configuration of legacy data communications protocols to help them perform efficiently in the changing environment characteristic of a hybrid network. The success of this performance tuning enabled the use of such an architecture to provide high data rate, fiber optic quality data communications between high performance systems not accessible to standard terrestrial fiber transmission systems, thus obviating the performance degradation often found in contemporary earth/satellite hybrids.

  7. Cognitive and Neural Bases of Skilled Performance.

    DTIC Science & Technology

    1987-10-04

    advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic... and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing... recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive...

  8. ENVIRONMENTAL RESEARCH BRIEF : ANALYTIC ELEMENT MODELING OF GROUND-WATER FLOW AND HIGH PERFORMANCE COMPUTING

    EPA Science Inventory

    Several advances in the analytic element method have been made to enhance its performance and facilitate three-dimensional ground-water flow modeling in a regional aquifer setting. First, a new public domain modular code (ModAEM) has been developed for modeling ground-water flow ...

  9. A Community Health Worker "logic model": towards a theory of enhanced performance in low- and middle-income countries.

    PubMed

    Naimoli, Joseph F; Frymus, Diana E; Wuliji, Tana; Franco, Lynne M; Newsome, Martha H

    2014-10-02

    There has been a resurgence of interest in national Community Health Worker (CHW) programs in low- and middle-income countries (LMICs). A lack of strong research evidence persists, however, about the most efficient and effective strategies to ensure optimal, sustained performance of CHWs at scale. To facilitate learning and research to address this knowledge gap, the authors developed a generic CHW logic model that proposes a theoretical causal pathway to improved performance. The logic model draws upon available research and expert knowledge on CHWs in LMICs. Construction of the model entailed a multi-stage, inductive, two-year process. It began with the planning and implementation of a structured review of the existing research on community and health system support for enhanced CHW performance. It continued with a facilitated discussion of review findings with experts during a two-day consultation. The process culminated with the authors' review of consultation-generated documentation, additional analysis, and production of multiple iterations of the model. The generic CHW logic model posits that optimal CHW performance is a function of high quality CHW programming, which is reinforced, sustained, and brought to scale by robust, high-performing health and community systems, both of which mobilize inputs and put in place processes needed to fully achieve performance objectives. Multiple contextual factors can influence CHW programming, system functioning, and CHW performance. The model is a novel contribution to current thinking about CHWs. It places CHW performance at the center of the discussion about CHW programming, recognizes the strengths and limitations of discrete, targeted programs, and is comprehensive, reflecting the current state of both scientific and tacit knowledge about support for improving CHW performance. The model is also a practical tool that offers guidance for continuous learning about what works. 
Despite the model's limitations and several challenges in translating the potential for learning into tangible learning, the CHW generic logic model provides a solid basis for exploring and testing a causal pathway to improved performance.

  10. Theory and Modeling of High-Power Gyrotrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nusinovich, Gregory Semeon

    2016-04-29

    This report summarizes results of the work performed at the Institute for Research in Electronics and Applied Physics of the University of Maryland (College Park, MD) in the framework of the DOE Grant “Theory and Modeling of High-Power Gyrotrons”. The report covers the work performed in 2011-2014. The research was performed in three directions: (1) possibilities of stable gyrotron operation in very high-order modes offering output power exceeding the 1 MW level in long-pulse/continuous-wave regimes; (2) the effect of small imperfections in gyrotron fabrication and alignment on gyrotron efficiency and operation; and (3) some issues in the physics of beam-wave interaction in gyrotrons.

  11. Quantum Accelerators for High-performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Britt, Keith A.; Mohiyaddin, Fahd A.

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  12. High performance computing for advanced modeling and simulation of materials

    NASA Astrophysics Data System (ADS)

    Wang, Jue; Gao, Fei; Vazquez-Poletti, Jose Luis; Li, Jianjiang

    2017-02-01

    The First International Workshop on High Performance Computing for Advanced Modeling and Simulation of Materials (HPCMS2015) was held in Austin, Texas, USA, Nov. 18, 2015. HPCMS 2015 was organized by Computer Network Information Center (Chinese Academy of Sciences), University of Michigan, Universidad Complutense de Madrid, University of Science and Technology Beijing, Pittsburgh Supercomputing Center, China Institute of Atomic Energy, and Ames Laboratory.

  13. Using Performance Assessment Model in Physics Laboratory to Increase Students’ Critical Thinking Disposition

    NASA Astrophysics Data System (ADS)

    Emiliannur, E.; Hamidah, I.; Zainul, A.; Wulan, A. R.

    2017-09-01

    A Performance Assessment Model (PAM) has been developed to represent the physics concepts, which can be divided into five experiments: 1) acceleration due to gravity; 2) Hooke’s law; 3) simple harmonic motion; 4) work-energy concepts; and 5) the law of momentum conservation. The aim of this study was to determine the contribution of PAM in the physics laboratory to increasing students’ Critical Thinking Disposition (CTD) at senior high school. The subjects of the study were 32 eleventh-grade students of a senior high school in Lubuk Sikaping, West Sumatera. The research used a one-group pretest-posttest design. Data were collected through an essay test and a questionnaire about CTD, and analyzed quantitatively using the N-gain value. This study concluded that the performance assessment model effectively increases the N-gain, at the medium category: students’ critical thinking disposition increased significantly after implementation of the performance assessment model in the physics laboratory.
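
    The N-gain statistic used above is Hake's normalized gain: the fraction of the possible pre-to-post improvement that students actually achieved. The scores below are hypothetical; the category cuts are the customary low/medium/high thresholds.

```python
# Hake's normalized gain and the customary category thresholds.
def n_gain(pre, post, max_score=100.0):
    """Fraction of the possible improvement achieved."""
    return (post - pre) / (max_score - pre)

def category(g):
    if g >= 0.7:
        return "high"
    if g >= 0.3:
        return "medium"
    return "low"

g = n_gain(pre=45.0, post=75.0)   # 30 of a possible 55 points -> ~0.545, "medium"
```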

  14. Performance and Costs of Ductless Heat Pumps in Marine-Climate High-Performance Homes -- Habitat for Humanity The Woods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael Lubliner; Howard, Luke; Hales, David

    2016-02-23

    This final Building America Partnership report focuses on the results of field testing, modeling, and monitoring of ductless mini-split heat pump hybrid heating systems in seven homes built and first occupied at various times between September 2013 and October 2014. The report also provides WSU documentation of high-performance home observations, lessons learned, and stakeholder recommendations for builders of affordable high-performance housing.

  15. The impact of a freshman academy on science performance of first-time ninth-grade students at one Georgia high school

    NASA Astrophysics Data System (ADS)

    Daniel, Vivian Summerour

    The purpose of this within-group experimental study was to find out to what extent ninth-grade students improved their science performance beyond their middle school science performance at one Georgia high school utilizing a freshman academy model. Freshman academies have been recognized as a useful tool for increasing academic performance among ninth-grade students because they address a range of academic support initiatives tailored to improve academic performance among ninth-grade students. The talent development model developed by Legters, Balfanz, Jordan, and McPartland (2002) has served as a foundational standard for many ninth grade academy programs. A cornerstone feature of this model is the creation of small learning communities used to increase ninth-grade student performance. Another recommendation was to offer credit recovery opportunities for ninth graders along with creating parent and community involvement activities to increase academic success among ninth-grade students. While the site's program included some of the initiatives outlined by the talent development model, it did not utilize all of them. The study concluded that the academy did not show a definitive increase in academic performance among ninth-grade students since most students stayed within their original performance category.

  16. Aztec Middle College: High School Alternatives in Community Colleges

    ERIC Educational Resources Information Center

    Olsen, Lynette

    2010-01-01

    The traditional high school model derived from the factory deficit model of the early 1900s has left many students, mainly minorities and/or low socioeconomic students, disenfranchised. This is evident in the poor school performance and high dropout rates of such students. Whereas the factory deficit model was created to promote only a few high…

  17. Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed

    NASA Astrophysics Data System (ADS)

    Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya

    2017-09-01

    NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high-contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and results of comparisons to the testbed's high-order wavefront correction performance for the shaped pupil coronagraph. Agreement between model prediction and testbed results to better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.

  18. Staff | Computational Science | NREL

    Science.gov Websites

    Staff directory excerpt (truncated): ...develops and leads laboratory-wide efforts in high-performance computing and energy-efficient data centers; IT Professional IV-High Perf Computing, Jim.Albin@nrel.gov, 303-275-4069; Ananthan, Shreyas, Senior Scientist - High-Performance Algorithms and Modeling, Shreyas.Ananthan@nrel.gov, 303-275-4807; Bendl, Kurt, IT Professional IV-High...

  19. Closed Loop System Identification with Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.

    2004-01-01

    High performance control design for a flexible space structure is challenging since high fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identification is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance.
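
    The global-search idea can be sketched with a toy genetic algorithm identifying a scalar plant from input/output data. This is illustrative only, not the paper's algorithm: the real problem is multivariable, uses a minimal state-space parameterization, and fits closed-loop data; here every setting (population size, mutation scale, plant form) is an assumption.

```python
# Toy GA estimating parameters (a, b) of y[k+1] = a*y[k] + b*u[k].
import numpy as np

rng = np.random.default_rng(0)

def simulate(params, u, y0=0.0):
    """Simulate the scalar plant for an input sequence u."""
    a, b = params
    y = [y0]
    for uk in u[:-1]:
        y.append(a * y[-1] + b * uk)
    return np.array(y)

def fitness(params, u, y_meas):
    """Negative mean squared output error (higher is better)."""
    return -np.mean((simulate(params, u) - y_meas) ** 2)

def ga_identify(u, y_meas, pop_size=40, generations=60):
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p, u, y_meas) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
        picks = rng.integers(len(elite), size=pop_size - len(elite))
        children = elite[picks] + rng.normal(0.0, 0.05, (len(picks), 2))  # mutation
        pop = np.vstack([elite, children])
    scores = np.array([fitness(p, u, y_meas) for p in pop])
    return pop[np.argmax(scores)]
```

    Because the GA searches globally, its best estimate can then seed a standard local optimizer, the division of labor the paper describes.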

  20. Engine Performance Improvement for the 378-Foot High Endurance Cutter

    DOT National Transportation Integrated Search

    1978-06-01

    Methods for improving the performance of the main diesel engines of the 378-foot Coast Guard High Endurance Cutter have been investigated. These engines are models FM3W8-l-/8 rated for 3600 hp at 900 RPM. Present engine performance was evaluated t...

  1. Development and application of theoretical models for Rotating Detonation Engine flowfields

    NASA Astrophysics Data System (ADS)

    Fievisohn, Robert

    As turbine and rocket engine technology matures, performance increases between successive generations of engine development are becoming smaller. One means of accomplishing significant gains in thermodynamic performance and power density is to use detonation-based heat release instead of deflagration. This work is focused on developing and applying theoretical models to aid in the design and understanding of Rotating Detonation Engines (RDEs). In an RDE, a detonation wave travels circumferentially along the bottom of an annular chamber where continuous injection of fresh reactants sustains the detonation wave. RDEs are currently being designed, tested, and studied as a viable option for developing a new generation of turbine and rocket engines that make use of detonation heat release. One of the main challenges in the development of RDEs is to understand the complex flowfield inside the annular chamber. While simplified models are desirable for obtaining timely performance estimates for design analysis, one-dimensional models may not be adequate as they do not provide flow structure information. In this work, a two-dimensional physics-based model is developed, which is capable of modeling the curved oblique shock wave, exit swirl, counter-flow, detonation inclination, and varying pressure along the inflow boundary. This is accomplished by using a combination of shock-expansion theory, Chapman-Jouguet detonation theory, the Method of Characteristics (MOC), and other compressible flow equations to create a shock-fitted numerical algorithm and generate an RDE flowfield. This novel approach provides a numerically efficient model that can provide performance estimates as well as details of the large-scale flow structures in seconds on a personal computer. Results from this model are validated against high-fidelity numerical simulations that may require a high-performance computing framework to provide similar performance estimates. 
This work provides a designer a new tool to conduct large-scale parametric studies to optimize a design space before conducting computationally-intensive, high-fidelity simulations that may be used to examine additional effects. The work presented in this thesis not only bridges the gap between simple one-dimensional models and high-fidelity full numerical simulations, but it also provides an effective tool for understanding and exploring RDE flow processes.
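
    The Chapman-Jouguet theory that anchors such a model fixes the detonation speed from the heat release alone. A minimal one-gamma sketch of that textbook relation is shown below; it is not the thesis's model (which couples CJ theory with shock-expansion theory and the MOC), and the heat-release value is hypothetical.

```python
# Idealized one-gamma Chapman-Jouguet estimate: for heat release q into a
# calorically perfect gas, M_CJ^2 = K + sqrt(K^2 - 1) with
# K = 1 + (gamma + 1) * q / (cp * T1); the deflagration branch is 1/M_CJ^2.
import math

def cj_speed(q, T1=300.0, gamma=1.2, R=287.0):
    """CJ detonation speed (m/s) for heat release q (J/kg) in an air-like gas."""
    cp = gamma * R / (gamma - 1.0)
    K = 1.0 + (gamma + 1.0) * q / (cp * T1)
    M_cj2 = K + math.sqrt(K * K - 1.0)       # detonation (supersonic) root
    a1 = math.sqrt(gamma * R * T1)           # upstream sound speed
    return math.sqrt(M_cj2) * a1

# Hypothetical stoichiometric hydrocarbon-air heat release:
D = cj_speed(q=3.0e6)   # roughly 1.7 km/s
```

    In an RDE analysis, this speed sets the wave's circumferential velocity, from which the shock-fitted MOC solution of the rest of the annulus flowfield proceeds.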

  2. Robust modeling and performance analysis of high-power diode side-pumped solid-state laser systems.

    PubMed

    Kashef, Tamer; Ghoniemy, Samy; Mokhtar, Ayman

    2015-12-20

    In this paper, we present an enhanced high-power extrinsic diode side-pumped solid-state laser (DPSSL) model to accurately predict the dynamic operation and pump distribution under different practical conditions. We introduce a new implementation technique for the proposed model that supports the performance assessment and enhancement of high-power diode side-pumped Nd:YAG lasers using cooperative agents and the MATLAB, GLAD, and Zemax ray tracing software packages. A large-signal laser model is presented that includes thermal effects and a modified laser gain formulation and incorporates the geometrical pump distribution for three radially arranged arrays of laser diodes. The design of a customized prototype diode side-pumped high-power laser head fabricated for testing is discussed. A detailed comparative experimental and simulation study of the dynamic operation and beam characteristics, used to verify the accuracy of the proposed model for analyzing the performance of high-power DPSSLs under different conditions, is discussed. The simulated and measured results for power, pump distribution, beam shape, and slope efficiency are shown under different conditions, including a specific case with a targeted output power of 140 W and an input pumping power of 400 W. With a 95% output coupler reflectivity, the measured slope efficiency of approximately 35% agreed well with the simulation, supporting the robustness of the proposed model in predicting the design parameters of practical high-power DPSSLs.

  3. Evaluating the Value of High Spatial Resolution in National Capacity Expansion Models using ReEDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat; Cole, Wesley

    2016-11-14

    Power sector capacity expansion models (CEMs) have a broad range of spatial resolutions. This paper uses the Regional Energy Deployment System (ReEDS) model, a long-term national scale electric sector CEM, to evaluate the value of high spatial resolution for CEMs. ReEDS models the United States with 134 load balancing areas (BAs) and captures the variability in existing generation parameters, future technology costs, performance, and resource availability using very high spatial resolution data, especially for wind and solar, which are modeled at 356 resource regions. In this paper we perform planning studies at three different spatial resolutions--native resolution (134 BAs), state level, and NERC region level--and evaluate how results change under different levels of spatial aggregation in terms of renewable capacity deployment and location, associated transmission builds, and system costs. The results are used to ascertain the value of highly geographically resolved models in terms of their impact on the relative competitiveness among renewable energy resources.

  4. Theoretical Models of Protostellar Binary and Multiple Systems with AMR Simulations

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tomoaki; Tokuda, Kazuki; Onishi, Toshikazu; Inutsuka, Shu-ichiro; Saigo, Kazuya; Takakuwa, Shigehisa

    2017-05-01

    We present theoretical models for protostellar binary and multiple systems based on high-resolution numerical simulations with an adaptive mesh refinement (AMR) code, SFUMATO. The recent ALMA observations have revealed early phases of binary and multiple star formation at high spatial resolution. These observations should be compared with theoretical models of comparably high spatial resolution. We present two theoretical models for (1) a high density molecular cloud core, MC27/L1521F, and (2) a protobinary system, L1551 NE. For the MC27 model, we performed numerical simulations of the gravitational collapse of a turbulent cloud core. The cloud core exhibits fragmentation during the collapse, and dynamical interaction between the fragments produces an arc-like structure, which is one of the prominent structures observed by ALMA. For the L1551 NE model, we performed numerical simulations of gas accretion onto the protobinary. The simulations exhibit asymmetry of a circumbinary disk. Such asymmetry has also been observed by ALMA in the circumbinary disk of L1551 NE.

  5. A diagnostic model for chronic hypersensitivity pneumonitis

    PubMed Central

    Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R

    2017-01-01

    The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (e.g., ground-glass opacity, mosaic perfusion) as well as the radiologist’s diagnostic impression. Candidate models were developed and then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. PMID:27245779
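
    The C-statistic used to assess the models is the probability that a randomly chosen case receives a higher model score than a randomly chosen control (ties counted as half). A minimal sketch, with hypothetical scores rather than the study's data:

```python
# C-statistic (equivalently, AUC) from case and control model scores.
def c_statistic(case_scores, control_scores):
    pairs = concordant = 0.0
    for s_case in case_scores:
        for s_ctrl in control_scores:
            pairs += 1
            if s_case > s_ctrl:
                concordant += 1
            elif s_case == s_ctrl:
                concordant += 0.5
    return concordant / pairs

# Hypothetical scores for CHP cases vs. other-ILD controls:
auc = c_statistic([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])   # 8/9, about 0.89
```

    Cross-validating this statistic, as the study does, means computing it on held-out cases and controls so the estimate is not inflated by fitting and scoring on the same patients.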

  6. The effect of bench model fidelity on fluoroscopy-guided transforaminal epidural injection training: a randomized control study.

    PubMed

    Gonzalez-Cota, Alan; Chiravuri, Srinivas; Stansfield, R Brent; Brummett, Chad M; Hamstra, Stanley J

    2013-01-01

    The purpose of this study was to determine whether high-fidelity simulators provide greater benefit than low-fidelity models in training fluoroscopy-guided transforaminal epidural injection. This educational study was a single-center, prospective, randomized 3-arm pretest-posttest design with a control arm. Eighteen anesthesia and physical medicine and rehabilitation residents were instructed how to perform a fluoroscopy-guided transforaminal epidural injection and assessed by experts on a reusable injectable phantom cadaver. The high- and low-fidelity groups received 30 minutes of supervised hands-on practice according to group assignment, and the control group received 30 minutes of didactic instruction from an expert. We found no differences at posttest between the high- and low-fidelity groups on global ratings of performance (P = 0.17) or checklist scores (P = 0.81). Participants who received either form of hands-on training significantly outperformed the control group on both the global rating of performance (control vs low-fidelity, P = 0.0048; control vs high-fidelity, P = 0.0047) and the checklist (control vs low-fidelity, P = 0.0047; control vs high-fidelity, P = 0.0047). Training an epidural procedure using a low-fidelity model may be equally effective as training on a high-fidelity model. These results are consistent with previous research on a variety of interventional procedures and further demonstrate the potential impact of simple, low-fidelity training models.

  7. Autonomous Aerobraking: Thermal Analysis and Response Surface Development

    NASA Technical Reports Server (NTRS)

    Dec, John A.; Thornblom, Mark N.

    2011-01-01

    A high-fidelity thermal model of the Mars Reconnaissance Orbiter was developed for use in an autonomous aerobraking simulation study. Response surface equations were derived from the high-fidelity thermal model and integrated into the autonomous aerobraking simulation software. The high-fidelity thermal model was developed using the Thermal Desktop software and used in all phases of the analysis. The exclusive use of Thermal Desktop represented a change from previously developed aerobraking thermal analysis methodologies. Comparisons were made between the Thermal Desktop solutions and those developed for the previous aerobraking thermal analyses performed on the Mars Reconnaissance Orbiter during aerobraking operations. A variable sensitivity screening study was performed to reduce the number of variables carried in the response surface equations. Thermal analysis and response surface equation development were performed for autonomous aerobraking missions at Mars and Venus.
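
    A response surface equation of the kind described above is typically a low-order polynomial fitted by least squares to runs of the high-fidelity model. The sketch below illustrates the idea with invented inputs, coefficients, and noise (none of these values come from the record):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical normalized inputs (e.g. atmospheric density, periapsis velocity)
x1, x2 = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
# Stand-in for a "high-fidelity" temperature output [K]
temp = 300 + 80 * x1 + 40 * x2 + 25 * x1 * x2 + rng.normal(0, 0.1, 50)

# Quadratic response surface: T ≈ c0 + c1*x1 + c2*x2 + c3*x1*x2 + c4*x1^2 + c5*x2^2
A = np.column_stack([np.ones(50), x1, x2, x1 * x2, x1**2, x2**2])
c, *_ = np.linalg.lstsq(A, temp, rcond=None)
pred = A @ c
max_err = float(np.abs(pred - temp).max())
print(round(max_err, 2))  # worst-case fit error over the training runs
```

    Once fitted, the cheap polynomial replaces the expensive thermal solve inside the aerobraking simulation loop.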

  8. High-Performance Integrated Control of water quality and quantity in urban water reservoirs

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.; Goedbloed, A.

    2015-11-01

    This paper contributes a novel High-Performance Integrated Control framework to support the real-time operation of urban water supply storages affected by water quality problems. We use a 3-D, high-fidelity simulation model to predict the main water quality dynamics and inform a real-time controller based on Model Predictive Control. The integration of the simulation model into the control scheme is performed by a model reduction process that identifies a low-order, dynamic emulator running 4 orders of magnitude faster. The model reduction, which relies on a semiautomatic procedural approach integrating time series clustering and variable selection algorithms, generates a compact and physically meaningful emulator that can be coupled with the controller. The framework is used to design the hourly operation of Marina Reservoir, a 3.2 Mm3 storm-water-fed reservoir located in the center of Singapore, operated for drinking water supply and flood control. Because of its recent formation from a former estuary, the reservoir suffers from high salinity levels, whose behavior is modeled with Delft3D-FLOW. Results show that our control framework reduces the minimum salinity levels by nearly 40% and cuts the average annual deficit of drinking water supply by about 2 times the active storage of the reservoir (about 4% of the total annual demand).

  9. Controlling Ethylene for Extended Preservation of Fresh Fruits and Vegetables

    DTIC Science & Technology

    2008-12-01

    into a process simulation to determine the effects of key design parameters on the overall performance of the system. Integrating process simulation... [flattened table of ethylene ratings and primary effect by commodity; recoverable entries: Asian pears (high/high, decay), avocados (high/high, decay), bananas (moderate/high, decay), cantaloupe (high/moderate, decay), cherimoya (very high/high, decay)] ...ozonolysis. Process simulation was subsequently used to understand the effect of key system parameters on EEU performance. Using this modeling work

  10. Value-Added Models for Teacher Preparation Programs: Validity and Reliability Threats, and a Manageable Alternative

    ERIC Educational Resources Information Center

    Brady, Michael P.; Heiser, Lawrence A.; McCormick, Jazarae K.; Forgan, James

    2016-01-01

    High-stakes standardized student assessments are increasingly used in value-added evaluation models to connect teacher performance to P-12 student learning. These assessments are also being used to evaluate teacher preparation programs, despite validity and reliability threats. A more rational model linking student performance to candidates who…

  11. A new adaptive L1-norm for optimal descriptor selection of high-dimensional QSAR classification model for anti-hepatitis C virus activity of thiourea derivatives.

    PubMed

    Algamal, Z Y; Lee, M H

    2017-01-01

    A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new design of descriptor selection for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of the stability test and applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
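
    A weighted L1 penalty of the general kind described above can be sketched as an adaptive lasso solved by proximal gradient descent (ISTA). This is a generic illustration with synthetic data and OLS-derived weights, not the paper's specific weight design:

```python
import numpy as np

def adaptive_lasso(X, y, lam, weights, n_iter=500):
    """ISTA for 0.5*||y - X@b||^2 + lam * sum(w_j * |b_j|).
    Larger weights w_j shrink descriptor j harder, zeroing irrelevant ones."""
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz const. of gradient
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y))
        thresh = step * lam * weights
        beta = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # 5 synthetic descriptors
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 0.0]) + 0.01 * rng.normal(size=100)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]  # initial fit
w = 1.0 / np.abs(b_ols)                       # adaptive weights w_j = 1/|b_ols_j|
beta = adaptive_lasso(X, y, lam=1.0, weights=w)
print(np.nonzero(np.abs(beta) > 1e-3)[0])     # indices of selected descriptors
```

    The descriptors with truly zero coefficients receive huge weights and are eliminated, while the informative ones survive nearly unshrunk.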

  12. An Illustrative Case Study of the Heuristic Practices of a High-Performing Research Department: Toward Building a Model Applicable in the Context of Large Urban Districts

    ERIC Educational Resources Information Center

    Munoz, Marco A.; Rodosky, Robert J.

    2011-01-01

    This case study provides an illustration of the heuristic practices of a high-performing research department, which in turn, will help build much needed models applicable in the context of large urban districts. This case study examines the accountability, planning, evaluation, testing, and research functions of a research department in a large…

  13. Advanced Performance Modeling with Combined Passive and Active Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dovrolis, Constantine; Sim, Alex

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.
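
    A prediction from historical performance logs can be as simple as an exponentially weighted moving average over past transfer throughputs. A toy sketch (the throughput values and smoothing factor are made up):

```python
def ewma_forecast(history, alpha=0.3):
    """One-step-ahead throughput forecast: exponentially weighted moving
    average, giving weight alpha to each newer measurement."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

throughput = [9.1, 9.4, 8.8, 9.9, 9.5, 9.7]  # Gb/s from past transfers (made up)
print(round(ewma_forecast(throughput), 2))   # → 9.47
```

    Real frameworks layer richer features (time of day, path, concurrent load) on top of this kind of baseline.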

  14. Evaluation of a numerical model's ability to predict bed load transport observed in braided river experiments

    NASA Astrophysics Data System (ADS)

    Javernick, Luke; Redolfi, Marco; Bertoldi, Walter

    2018-05-01

    New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation to predict the sediment fluxes necessary for morphological simulations. The evaluation is conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load as well as the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
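
    The "critical shear stress exceedance area" metric above compares where modelled bed shear stress exceeds the threshold of motion against observed mobility maps. A minimal sketch using the Shields criterion; the grain size, Shields number, and the 3x3 stress grid are illustrative values, not the study's:

```python
import numpy as np

# Hypothetical parameters: quartz grains in water, 1 mm median grain size
rho, rho_s = 1000.0, 2650.0   # water / sediment density [kg/m^3]
g, d50 = 9.81, 0.001          # gravity [m/s^2], median grain size [m]
theta_c = 0.047               # assumed critical Shields number

tau_c = theta_c * (rho_s - rho) * g * d50   # critical shear stress [Pa]

# Toy grid of modelled bed shear stress [Pa]
tau = np.array([[0.2, 0.9, 1.1],
                [0.5, 0.8, 1.3],
                [0.1, 0.4, 0.9]])
exceed = tau > tau_c
print(round(tau_c, 3), exceed.mean())       # threshold, exceedance area fraction
```

    The exceedance mask is then overlaid on the observed active-area map to compute the percentage agreement reported in the abstract.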

  15. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro-based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which results in workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.

  16. Performance of homeostasis model assessment and serum high-sensitivity C-reactive protein for prediction of isolated post-load hyperglycaemia.

    PubMed

    Lai, Y-C; Li, H-Y; Hung, C-S; Lin, M-S; Shih, S-R; Ma, W-Y; Hua, C-H; Chuang, L-M; Sung, F-C; Wei, J-N

    2013-03-01

    To evaluate whether homeostasis model assessment and high-sensitivity C-reactive protein improve the prediction of isolated post-load hyperglycaemia. The subjects were 1458 adults without self-reported diabetes recruited between 2006 and 2010. Isolated post-load hyperglycaemia was defined as fasting plasma glucose < 7 mmol/l and 2-h post-load plasma glucose ≥ 11.1 mmol/l. Risk scores of isolated post-load hyperglycaemia were constructed by multivariate logistic regression. An independent group (n = 154) was enrolled from 2010 to 2011 to validate the models' performance. One hundred and twenty-three subjects (8.28%) were newly diagnosed as having diabetes mellitus. Among those with undiagnosed diabetes, 64 subjects (52%) had isolated post-load hyperglycaemia. Subjects with isolated post-load hyperglycaemia were older, more centrally obese and had higher blood pressure, HbA(1c), fasting plasma glucose, triglycerides, LDL cholesterol, high-sensitivity C-reactive protein and homeostasis model assessment of insulin resistance and lower homeostasis model assessment of β-cell function than those without diabetes. The risk scores included age, gender, BMI, homeostasis model assessment, high-sensitivity C-reactive protein and HbA(1c). The full model had high sensitivity (84%) and specificity (87%) and area under the receiver operating characteristic curve (0.91), with a cut-off point of 23.81; validation in an independent data set showed 88% sensitivity, 77% specificity and an area under curve of 0.89. Over half of those with undiagnosed diabetes had isolated post-load hyperglycaemia. Homeostasis model assessment and high-sensitivity C-reactive protein are useful to identify subjects with isolated post-load hyperglycaemia, with improved performance over fasting plasma glucose or HbA(1c) alone. © 2012 The Authors. Diabetic Medicine © 2012 Diabetes UK.
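
    The sensitivity and specificity figures quoted above come from applying a cut-off to a continuous risk score. A minimal sketch of that computation with invented scores (the cut-off 23.81 is the one reported in the record):

```python
import numpy as np

def sens_spec(y_true, score, cutoff):
    """Sensitivity and specificity of a risk score at a given cut-off."""
    y_true = np.asarray(y_true, bool)
    pred = np.asarray(score) >= cutoff
    sens = (pred & y_true).sum() / y_true.sum()
    spec = (~pred & ~y_true).sum() / (~y_true).sum()
    return float(sens), float(spec)

# Toy data: 4 subjects with isolated post-load hyperglycaemia, 6 without
y = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
s = [30, 27, 24, 20, 25, 18, 15, 12, 10, 9]
sens, spec = sens_spec(y, s, cutoff=23.81)
print(round(sens, 2), round(spec, 2))  # → 0.75 0.83
```

    Sliding the cut-off traces out the ROC curve whose area is the 0.91 reported for the full model.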

  17. High temperature superconductors applications in telecommunications

    NASA Technical Reports Server (NTRS)

    Kumar, A. Anil; Li, Jiang; Zhang, Ming Fang

    1995-01-01

    The purpose of this paper is twofold: (1) to discuss high temperature superconductors with specific reference to their employment in telecommunications applications; and (2) to discuss a few of the limitations of the normally employed two-fluid model. While the debate on the actual usage of high temperature superconductors in the design of electronic and telecommunications devices - obvious advantages versus practical difficulties - needs to be settled in the near future, it is of great interest to investigate the parameters and the assumptions that will be employed in such designs. This paper deals with the issue of providing the microwave design engineer with performance data for such superconducting waveguides. The values of conductivity and surface resistance, which are the primary determining factors of a waveguide performance, are computed based on the two-fluid model. A comparison between two models - a theoretical one in terms of microscopic parameters (termed Model A) and an experimental fit in terms of macroscopic parameters (termed Model B) - shows the limitations and the resulting ambiguities of the two-fluid model at high frequencies and at temperatures close to the transition temperature. The validity of the two-fluid model is then discussed. Our preliminary results show that the electrical transport description in the normal and superconducting phases as they are formulated in the two-fluid model needs to be modified to incorporate the new and special features of high temperature superconductors. Parameters describing the waveguide performance - conductivity, surface resistance and attenuation constant - will be computed. Potential applications in communications networks and large scale integrated circuits will be discussed. Some of the ongoing work will be reported. 
In particular, a brief proposal is made to investigate the effects of electromagnetic interference and the concomitant notion of electromagnetic compatibility (EMI/EMC) of high T(sub c) superconductors.
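
    The surface resistance that drives waveguide performance can be estimated from the two-fluid model, in which the normal-fluid conductivity scales as (T/Tc)^4 and the penetration depth diverges near Tc (one source of the ambiguities the abstract mentions). A numerical sketch with assumed YBCO-like parameters (σn, λ0, and Tc below are illustrative, not from the paper):

```python
import numpy as np

mu0 = 4e-7 * np.pi           # vacuum permeability [H/m]
sigma_n = 1e6                # assumed normal-state conductivity [S/m]
Tc, T = 92.0, 77.0           # YBCO-like transition temp, operating temp [K]
lam0 = 150e-9                # assumed zero-temperature penetration depth [m]
f = 10e9                     # operating frequency: 10 GHz
omega = 2 * np.pi * f

t4 = (T / Tc) ** 4                      # normal-fluid fraction
lam = lam0 / np.sqrt(1 - t4)            # temperature-dependent penetration depth
sigma1 = sigma_n * t4                   # normal-fluid (lossy) conductivity
Rs = 0.5 * mu0**2 * omega**2 * lam**3 * sigma1  # two-fluid surface resistance [ohm]
print(f"{Rs:.2e} ohm")
```

    Note the λ³ factor: as T approaches Tc the penetration depth and hence Rs blow up, which is why the model becomes unreliable near the transition temperature.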

  18. High temperature superconductors applications in telecommunications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, A.A.; Li, J.; Zhang, M.F.

    1994-12-31

    The purpose of this paper is twofold: to discuss high temperature superconductors with specific reference to their employment in telecommunications applications; and to discuss a few of the limitations of the normally employed two-fluid model. While the debate on the actual usage of high temperature superconductors in the design of electronic and telecommunications devices-obvious advantages versus practical difficulties-needs to be settled in the near future, it is of great interest to investigate the parameters and the assumptions that will be employed in such designs. This paper deals with the issue of providing the microwave design engineer with performance data for such superconducting waveguides. The values of conductivity and surface resistance, which are the primary determining factors of a waveguide performance, are computed based on the two-fluid model. A comparison between two models-a theoretical one in terms of microscopic parameters (termed Model A) and an experimental fit in terms of macroscopic parameters (termed Model B)-shows the limitations and the resulting ambiguities of the two-fluid model at high frequencies and at temperatures close to the transition temperature. The validity of the two-fluid model is then discussed. Our preliminary results show that the electrical transport description in the normal and superconducting phases as they are formulated in the two-fluid model needs to be modified to incorporate the new and special features of high temperature superconductors. Parameters describing the waveguide performance-conductivity, surface resistance and attenuation constant-will be computed. Potential applications in communications networks and large scale integrated circuits will be discussed. Some of the ongoing work will be reported. 
In particular, a brief proposal is made to investigate the effects of electromagnetic interference and the concomitant notion of electromagnetic compatibility (EMI/EMC) of high T{sub c} superconductors.

  19. Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods

    DOE PAGES

    Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.

    2016-09-01

    Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Though GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.
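
    The counter-based power models surveyed above are, at their simplest, linear regressions of measured power on hardware-counter rates. A self-contained sketch on synthetic data (the counter names, coefficients, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Synthetic hardware-counter rates (e.g. instructions, memory accesses, misses)
counters = rng.uniform(0, 1, size=(n, 3))
true_w = np.array([45.0, 30.0, 12.0])  # watts per unit counter rate (made up)
idle = 25.0                            # static/idle power [W] (made up)
power = idle + counters @ true_w + rng.normal(0, 0.5, n)  # "measured" power

# Fit P ≈ w0 + w·c by ordinary least squares
A = np.column_stack([np.ones(n), counters])
coef, *_ = np.linalg.lstsq(A, power, rcond=None)
print(np.round(coef, 1))  # ≈ [25. 45. 30. 12.]
```

    Real efforts select which counters to include, handle their mutual correlation, and validate against sensor measurements, but the regression core is the same.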

  20. Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.

    Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Though GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts for GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulations for GPUs are included along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.

  1. Low and high speed propellers for general aviation: Performance potential and recent wind tunnel test results

    NASA Technical Reports Server (NTRS)

    Jeracki, R. J.; Mitchell, G. A.

    1981-01-01

    The performance of lower-speed, 5-foot-diameter model general aviation propellers was tested in the Lewis wind tunnel. Performance was evaluated for various levels of airfoil technology and activity factor. The difference was associated with inadequate modeling of blade and spinner losses for propeller round-shank blade designs. Suggested concepts for improvement are: (1) advanced blade shapes (airfoils and sweep); (2) tip devices (proplets); (3) integrated propeller/nacelles; and (4) composites. Several advanced aerodynamic concepts were evaluated in the Lewis wind tunnel. Results show that high propeller performance can be obtained to at least Mach 0.8.

  2. Using risk-adjustment models to identify high-cost risks.

    PubMed

    Meenan, Richard T; Goodman, Michael J; Fishman, Paul A; Hornbrook, Mark C; O'Keeffe-Rosetti, Maureen C; Bachman, Donald J

    2003-11-01

    We examine the ability of various publicly available risk models to identify high-cost individuals and enrollee groups using multi-HMO administrative data. Five risk-adjustment models (the Global Risk-Adjustment Model [GRAM], Diagnostic Cost Groups [DCGs], Adjusted Clinical Groups [ACGs], RxRisk, and Prior-expense) were estimated on a multi-HMO administrative data set of 1.5 million individual-level observations for 1995-1996. Models produced distributions of individual-level annual expense forecasts for comparison to actual values. Prespecified "high-cost" thresholds were set within each distribution. The area under the receiver operating characteristic curve (AUC) for "high-cost" prevalences of 1% and 0.5% was calculated, as was the proportion of "high-cost" dollars correctly identified. Results are based on a separate 106,000-observation validation dataset. For "high-cost" prevalence targets of 1% and 0.5%, ACGs, DCGs, GRAM, and Prior-expense are very comparable in overall discrimination (AUCs, 0.83-0.86). Given a 0.5% prevalence target and a 0.5% prediction threshold, DCGs, GRAM, and Prior-expense captured $963,000 (approximately 3%) more "high-cost" sample dollars than other models. DCGs captured the most "high-cost" dollars among enrollees with asthma, diabetes, and depression; predictive performance among demographic groups (Medicaid members, members over 64, and children under 13) varied across models. Risk models can efficiently identify enrollees who are likely to generate future high costs and who could benefit from case management. The dollar value of improved prediction performance of the most accurate risk models should be meaningful to decision-makers and encourage their broader use for identifying high costs.
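
    The headline metrics above (discrimination at a prevalence target, proportion of "high-cost" dollars captured) can be sketched as follows. The expense distribution and forecast noise below are synthetic stand-ins, not the multi-HMO data:

```python
import numpy as np

rng = np.random.default_rng(2)
actual = rng.lognormal(mean=7, sigma=1.2, size=10_000)             # annual expense [$]
forecast = actual * rng.lognormal(mean=0, sigma=0.6, size=10_000)  # imperfect model

k = int(0.005 * len(actual))          # top 0.5% prediction threshold
flagged = np.argsort(forecast)[-k:]   # enrollees the model flags as high-cost
true_high = np.argsort(actual)[-k:]   # enrollees who actually are high-cost

# Fraction of true high-cost enrollees flagged, and fraction of their dollars captured
captured = np.intersect1d(flagged, true_high).size / k
dollars = actual[flagged].sum() / actual[true_high].sum()
print(round(captured, 2), round(dollars, 2))
```

    Comparing `dollars` across risk models (DCGs, ACGs, GRAM, Prior-expense) is exactly the "high-cost dollars correctly identified" comparison the abstract reports.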

  3. High-Power, High-Thrust Ion Thruster (HPHTion)

    NASA Technical Reports Server (NTRS)

    Peterson, Peter Y.

    2015-01-01

    Advances in high-power photovoltaic technology have enabled the possibility of reasonably sized, high-specific power solar arrays. At high specific powers, power levels ranging from 50 to several hundred kilowatts are feasible. Ion thrusters offer long life and overall high efficiency (typically greater than 70 percent efficiency). In Phase I, the team at ElectroDynamic Applications, Inc., built a 25-kW, 50-cm ion thruster discharge chamber and fabricated a laboratory model. This was in response to the need for a single, high-powered engine to fill the gulf between the 7-kW NASA's Evolutionary Xenon Thruster (NEXT) system and a notional 25-kW engine. The Phase II project matured the laboratory model into a protoengineering model ion thruster. This involved the evolution of the discharge chamber to a high-performance thruster by performance testing and characterization via simulated and full beam extraction testing. Through such testing, the team optimized the design and built a protoengineering model thruster. Coupled with gridded ion thruster technology, this technology can enable a wide range of missions, including ambitious near-Earth NASA missions, Department of Defense missions, and commercial satellite activities.

  4. Meshless collocation methods for the numerical solution of elliptic boundary value problems and the rotational shallow water equations on the sphere

    NASA Astrophysics Data System (ADS)

    Blakely, Christopher D.

    This dissertation thesis has three main goals: (1) To explore the anatomy of meshless collocation approximation methods that have recently gained attention in the numerical analysis community; (2) Numerically demonstrate why the meshless collocation method should clearly become an attractive alternative to standard finite-element methods due to the simplicity of its implementation and its high-order convergence properties; (3) Propose a meshless collocation method for large scale computational geophysical fluid dynamics models. We provide numerical verification and validation of the meshless collocation scheme applied to the rotational shallow-water equations on the sphere and demonstrate computationally that the proposed model can compete with existing high performance methods for approximating the shallow-water equations such as the SEAM (spectral-element atmospheric model) developed at NCAR. A detailed analysis of the parallel implementation of the model, along with the introduction of parallel algorithmic routines for the high-performance simulation of the model will be given. We analyze the programming and computational aspects of the model using Fortran 90 and the message passing interface (mpi) library along with software and hardware specifications and performance tests. Details from many aspects of the implementation in regards to performance, optimization, and stabilization will be given. In order to verify the mathematical correctness of the algorithms presented and to validate the performance of the meshless collocation shallow-water model, we conclude the thesis with numerical experiments on some standardized test cases for the shallow-water equations on the sphere using the proposed method.
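
    The core idea of meshless collocation can be shown in one dimension: expand the solution in radial basis functions centred at scattered nodes and enforce the PDE and boundary conditions at those nodes (the Kansa approach). This is a minimal sketch on a model Poisson problem, not the thesis's shallow-water implementation; the shape parameter and node count are arbitrary choices:

```python
import numpy as np

# Solve u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0 (exact u = sin(pi x))
# with Gaussian RBFs centred at the collocation points.
EPS = 5.0  # assumed RBF shape parameter

def phi(r):
    return np.exp(-(EPS * r) ** 2)

def phi_xx(x, c):
    r = x - c
    return (4 * EPS**4 * r**2 - 2 * EPS**2) * np.exp(-(EPS * r) ** 2)

N = 15
nodes = np.linspace(0.0, 1.0, N)
A = np.empty((N, N))
b = np.empty(N)
for i, x in enumerate(nodes):
    if i in (0, N - 1):            # boundary rows enforce u = 0
        A[i] = phi(x - nodes)
        b[i] = 0.0
    else:                          # interior rows enforce the PDE
        A[i] = phi_xx(x, nodes)
        b[i] = -np.pi**2 * np.sin(np.pi * x)
coef = np.linalg.solve(A, b)

u_mid = float(phi(0.5 - nodes) @ coef)  # exact value is sin(pi/2) = 1
print(round(u_mid, 3))
```

    No mesh connectivity is ever built; only node locations and pairwise distances enter, which is what makes the method attractive on the sphere.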

  5. Tropical cyclones over the North Indian Ocean: experiments with the high-resolution global icosahedral grid point model GME

    NASA Astrophysics Data System (ADS)

    Kumkar, Yogesh V.; Sen, P. N.; Chaudhari, Hemankumar S.; Oh, Jai-Ho

    2018-02-01

    In this paper, an attempt has been made to conduct a numerical experiment with the high-resolution global model GME to predict tropical storms in the North Indian Ocean during the year 2007. Numerical integrations using the icosahedral hexagonal grid point global model GME were performed to study the evolution of tropical cyclones, viz., Akash, Gonu, Yemyin and Sidr over the North Indian Ocean during 2007. The GME forecasts underestimate cyclone intensity, but the model captures the evolution of intensity, especially the weakening during landfall, which is primarily due to the cutoff of the water vapor supply in the boundary layer as cyclones approach the coastal region. A series of numerical simulations of tropical cyclones has been performed with GME to examine the model's capability in predicting the intensity and track of the cyclones. The model performance is evaluated by calculating root-mean-square cyclone track errors.
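
    A root-mean-square track error compares forecast and observed cyclone centre positions via great-circle distances. A sketch with made-up positions (not the 2007 cyclone tracks):

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

# Made-up observed vs forecast centre positions at successive times
obs = [(15.0, 88.0), (16.2, 87.1), (17.5, 86.0)]
fcst = [(15.1, 88.3), (16.0, 87.6), (17.9, 85.5)]
errs = [haversine_km(a, b, c, d) for (a, b), (c, d) in zip(obs, fcst)]
rmse = float(np.sqrt(np.mean(np.square(errs))))
print(round(rmse, 1), "km")
```

    Averaging this quantity over forecast lead times gives the track-error curves typically reported for such experiments.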

  6. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.

  7. Ku-Band rendezvous radar performance computer simulation model

    NASA Astrophysics Data System (ADS)

    Magnusson, H. G.; Goff, M. F.

    1984-06-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.

  8. Estimating thermal performance curves from repeated field observations

    USGS Publications Warehouse

    Childress, Evan; Letcher, Benjamin H.

    2017-01-01

    Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
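
    A thermal performance curve like those estimated above is often modelled as a smooth unimodal function of temperature. A minimal sketch with one common functional form (a Gaussian curve; the parameter values are illustrative, not the trout estimates):

```python
import numpy as np

def performance(temp, p_max, t_opt, width):
    """Simple Gaussian thermal performance curve: peak p_max at t_opt,
    falling off with characteristic width on either side."""
    return p_max * np.exp(-((temp - t_opt) / width) ** 2)

temps = np.linspace(0, 30, 301)
growth = performance(temps, p_max=1.2, t_opt=14.0, width=5.0)
t_hat = temps[np.argmax(growth)]
print(round(float(t_hat), 1))  # temperature of peak performance
```

    In the Bayesian field model of the record, the curve parameters (and individual/environmental deviations around them) are estimated jointly by MCMC rather than fixed as here.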

  9. Developing a Differentiated Model for the Teaching of Creative Writing to High Performing Students

    ERIC Educational Resources Information Center

    Ngo, Thu Thi Bich

    2016-01-01

    Differentiating writing instruction has been a puzzling matter for English teachers when it comes to teaching creative writing to high potential and high performing (HPHP) students. The lack of differentiation in creative writing pedagogy for HPHP students in Australia is due to two major issues: (1) teachers' lack of high-level linguistic and…

  10. Scale effect challenges in urban hydrology highlighted with a Fully Distributed Model and High-resolution rainfall data

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2017-04-01

    Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing attention is devoted to the development of fully distributed, grid-based models, following the growth of computational capabilities and the availability of the high-resolution GIS information such models require. However, the choice of an appropriate implementation scale that integrates both the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their impact on modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. Results from this work demonstrate the scale-effect challenges in urban hydrology modeling. The fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change with the observation pixel size.
The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model performance. Results were analyzed at the three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
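    The fractal analysis referred to above typically rests on box counting. The sketch below estimates the box-counting dimension of a binary 2-D field; the filled-square input is only a sanity check (its dimension is exactly 2), not actual catchment data.

```python
import numpy as np

def box_count_dimension(grid, sizes):
    """Estimate the fractal dimension of a binary 2-D field by box counting."""
    counts = []
    for s in sizes:
        n = grid.shape[0] // s
        trimmed = grid[:n * s, :n * s]
        # Collapse each s-by-s box to a single occupied/empty flag.
        blocks = trimmed.reshape(n, s, n, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    # Slope of log(count) vs log(1/size) estimates the dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# A filled square is trivially 2-dimensional, which checks the estimator.
grid = np.ones((128, 128), dtype=bool)
print(round(box_count_dimension(grid, [1, 2, 4, 8, 16]), 2))  # 2.0
```

    Applied to a rasterized sewer network or impervious-area map at several pixel sizes, a non-integer slope (or a slope that changes between scale ranges) is exactly the scale dependency the abstract describes.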

  11. From SED HI concept to Pleiades FM detection unit measurements

    NASA Astrophysics Data System (ADS)

    Renard, Christophe; Dantes, Didier; Neveu, Claude; Lamard, Jean-Luc; Oudinot, Matthieu; Materne, Alex

    2017-11-01

    The first flight model of the PLEIADES high resolution instrument, under Thales Alenia Space development on behalf of CNES, is currently in integration and test phases. Based on the SED HI detection unit concept, the PLEIADES detection unit has been fully qualified before integration at telescope level. The main radiometric performances have been measured on the engineering and first flight models. This paper presents the performance results obtained on both models. After a review of the SED HI concept and the design and performances of the main elements (charge coupled detectors, focal plane and video processing unit), the detection unit radiometric performances are presented and compared to the instrument specifications for the panchromatic and multispectral bands. The performances treated are the following: video signal characteristics; dark signal level and dark signal non-uniformity; photo-response non-uniformity; non-linearity and differential non-linearity; and temporal and spatial noises with respect to the system definitions. The PLEIADES detection unit allows tuning of different functions: reference and sampling time positioning, anti-blooming level, gain value, and TDI line number. These parameters are presented with their associated optimisation criteria for achieving system radiometric performances, along with their sensitivities on radiometric performances. All the results of the measurements performed by Thales Alenia Space on the PLEIADES detection units demonstrate the high potential of the SED HI concept for Earth high-resolution observation systems, allowing optimised performances at instrument and satellite levels.

  12. Finite element analysis of ultra-high performance concrete : modeling structural performance of an AASHTO type II girder and a 2nd generation pi-girder

    DOT National Transportation Integrated Search

    2010-10-01

    Ultra-high performance concrete (UHPC) is an advanced cementitious composite material which has been developed in recent decades. When compared to more conventional cement-based concrete materials, UHPC tends to exhibit superior properties such as in...

  13. Understanding and development of manufacturable screen-printed contacts on high sheet-resistance emitters for low-cost silicon solar cells

    NASA Astrophysics Data System (ADS)

    Hilali, Mohamed M.

    2005-11-01

    A simple cost-effective approach was proposed and successfully employed to fabricate high-quality screen-printed (SP) contacts to high sheet-resistance emitters (100 Ω/sq) to improve the Si solar cell efficiency. Device modeling was used to quantify the performance enhancement possible from the high sheet-resistance emitter for various cell designs. It was found that for performance enhancement from the high sheet-resistance emitter, certain cell design criteria must be satisfied. Model calculations showed that in order to achieve any performance enhancement over the conventional ~40 Ω/sq emitter, the high sheet-resistance emitter solar cell must have a reasonably good (<120,000 cm/s) or low front-surface recombination velocity (FSRV). Model calculations were also performed to establish requirements for high fill factors (FFs). The results showed that the series resistance should be less than 0.8 Ω-cm2, the shunt resistance should be greater than 1000 Ω-cm2, and the junction leakage current should be less than 25 nA/cm2. Analytical microscopy and surface analysis techniques were used to study the Ag-Si contact interface of different SP Ag pastes. Physical and electrical properties of SP Ag thick-film contacts were studied and correlated to understand and achieve good-quality ohmic contacts to high sheet-resistance emitters for solar cells. This information was then used to define the criteria for high-quality screen-printed contacts. The roles of the paste constituents and the firing scheme in contact quality were investigated to tailor the high-quality screen-printed contact interface structure that results in high-performance solar cells. Results indicated that small particle size, high glass transition temperature, rapid firing and less aggressive glass frit help in producing high-quality contacts.
Based on these results, high-quality SP contacts with high FFs (>0.78) on high sheet-resistance emitters were achieved for the first time using a simple single-step firing process. This technology was applied to different substrates (monocrystalline and multicrystalline) and surfaces (textured and planar). Cell efficiencies of ~16.2% on low-cost EFG ribbon substrates were achieved on high sheet-resistance emitters with SP contacts. A record high-efficiency SP solar cell of 19% with a textured high sheet-resistance emitter was also fabricated and modeled.
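    The fill-factor design thresholds quoted above can be captured in a trivial checklist function. The threshold values come from the abstract; the function name and interface are hypothetical.

```python
def meets_high_ff_criteria(r_series, r_shunt, j_leak_na):
    """Check the high-fill-factor design criteria quoted in the abstract.

    r_series  -- series resistance in ohm-cm^2 (must be < 0.8)
    r_shunt   -- shunt resistance in ohm-cm^2 (must be > 1000)
    j_leak_na -- junction leakage current in nA/cm^2 (must be < 25)
    """
    return r_series < 0.8 and r_shunt > 1000 and j_leak_na < 25

print(meets_high_ff_criteria(0.5, 5000, 10))   # True
print(meets_high_ff_criteria(1.2, 5000, 10))   # False: series resistance too high
```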

  14. Multi-Scale Multi-Domain Model | Transportation Research | NREL

    Science.gov Websites

    NREL's Multi-Scale Multi-Domain (MSMD) model quantifies the impacts of the electrical/thermal pathway. Macroscopic design factors and highly dynamic environmental conditions significantly influence the design of affordable, long-lasting, high-performing, and safe large battery systems, which the MSMD framework is designed to address.

  15. Simulator validation results and proposed reporting format from flight testing a software model of a complex, high-performance airplane.

    DOT National Transportation Integrated Search

    2008-01-01

    Computer simulations are often used in aviation studies. These simulation tools may require complex, high-fidelity aircraft models. Since many of the flight models used are third-party developed products, independent validation is desired prior to im...

  16. Finite element analysis of constrained total Condylar Knee Prosthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-07-13

    Exactech, Inc., is a prosthetic joint manufacturer based in Gainesville, FL. The company set the goal of developing a highly effective prosthetic articulation, based on scientific principles, not trial and error. They developed an evolutionary design for a total knee arthroplasty system that promised improved performance. They performed static load tests in the laboratory with similar previous designs, but dynamic laboratory testing was both difficult to perform and prohibitively expensive for a small business to undertake. Laboratory testing also cannot measure stress levels in the interior of the prosthesis where failures are known to initiate. To fully optimize their designs for knee arthroplasty revisions, they needed range-of-motion stress/strain data at interior as well as exterior locations within the prosthesis. LLNL developed computer software (especially NIKE3D) specifically designed to perform stress/strain computations (finite element analysis) for complex geometries in large displacement/large deformation conditions. Additionally, LLNL had developed a high fidelity knee model for other analytical purposes. The analysis desired by Exactech could readily be performed using NIKE3D and a modified version of the high fidelity knee that contained the geometry of the condylar knee components. The LLNL high fidelity knee model was a finite element computer model which would not be transferred to Exactech during the course of this CRADA effort. The previously performed laboratory studies by Exactech were beneficial to LLNL in verifying the analytical capabilities of NIKE3D for human anatomical modeling. This, in turn, gave LLNL further entree to perform work-for-others in the prosthetics field.
There were two purposes to the CRADA: (1) to modify the LLNL High Fidelity Knee Model to accept the geometry of the Exactech Total Knee; and (2) to perform parametric studies of the possible design options in appropriate ranges of motion so that an optimum design could be selected for production. Because of unanticipated delays in the CRADA funding, the knee design had to be finalized before the analysis could be accomplished. Thus, the scope of work was modified by the industrial partner. It was decided that it would be most beneficial to perform FEA that would closely replicate the lab tests that had been done as the basis of the design. Exactech was responsible for transmitting the component geometries to Livermore, as well as providing complete data from the quasi-static laboratory loading tests that were performed on various designs. LLNL was responsible for defining the basic finite element mesh and carrying out the analysis. We performed the initial computer simulation and verified model integrity, using the laboratory data. After performing the parametric studies, the results were reviewed with Exactech. Also, the results were presented at the Orthopedic Research Society meeting in a poster session.

  17. Magnetic Measurements of the First Nb3Sn Model Quadrupole (MQXFS) for the High-Luminosity LHC

    DOE PAGES

    DiMarco, J.; Ambrosio, G.; Chlachidze, G.; ...

    2016-12-12

    The US LHC Accelerator Research Program (LARP) and CERN are developing high-gradient Nb3Sn magnets for the High Luminosity LHC interaction regions. Magnetic measurements of the first 1.5 m long, 150 mm aperture model quadrupole, MQXFS1, were performed during magnet assembly at LBNL, as well as during cryogenic testing at Fermilab’s Vertical Magnet Test Facility. This paper reports on the results of these magnetic characterization measurements, as well as on the performance of new probes developed for the tests.

  18. Translation from UML to Markov Model: A Performance Modeling Framework

    NASA Astrophysics Data System (ADS)

    Khan, Razib Hayat; Heegaard, Poul E.

    Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phase of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication-system applications that proposes a translation process from high-level UML notation to a Continuous Time Markov Chain (CTMC) model and solves the model for relevant performance metrics. The framework utilizes UML collaborations, activity diagrams and deployment diagrams to generate a performance model for a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaboration and activity diagrams show how reusable building blocks can be composed into the service components through input and output pins, capturing the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for relevant performance metrics through our proposed framework. The applicability of our proposed performance modeling framework to performance evaluation is demonstrated in the context of modeling a communication system.
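    Once a UML model has been translated to a CTMC, the core numerical step is solving for the stationary distribution. A minimal sketch, assuming a hypothetical two-state generator matrix whose rates are illustrative and not taken from the framework:

```python
import numpy as np

def ctmc_stationary(Q):
    """Stationary distribution of a CTMC: solve pi @ Q = 0 with sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalization constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical 2-state model of a server that alternates idle/busy.
Q = np.array([[-0.4, 0.4],    # idle -> busy at rate 0.4
              [ 0.6, -0.6]])  # busy -> idle at rate 0.6
pi = ctmc_stationary(Q)
print(pi.round(2))  # [0.6 0.4]
```

    Performance metrics such as utilization or throughput then follow as weighted sums over the stationary probabilities.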

  19. Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright ?? 2008 IAHS Press.

  20. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system ranges from simulating continental flows to modeling the flow over a wind turbine blade, down to the blade boundary layer, covering fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development as well as improvement in wind plant performance and enhancements to the transmission infrastructure will also be discussed.

  1. Innovative GOCI algorithm to derive turbidity in highly turbid waters: a case study in the Zhejiang coastal area.

    PubMed

    Qiu, Zhongfeng; Zheng, Lufei; Zhou, Yan; Sun, Deyong; Wang, Shengqiang; Wu, Wei

    2015-09-21

    An innovative algorithm is developed and validated to estimate the turbidity in the Zhejiang coastal area (highly turbid waters) using data from the Geostationary Ocean Color Imager (GOCI). First, satellite-ground synchronous data (n = 850) were collected from 2014 to 2015 using 11 buoys equipped with a Yellow Spring Instrument (YSI) multi-parameter sonde capable of taking hourly turbidity measurements. The GOCI-derived Rayleigh-corrected reflectance (R(rc)) was used in place of the widely used remote sensing reflectance (R(rs)) to model turbidity. Various band characteristics, including single band, band ratio, band subtraction, and selected band combinations, were analyzed to identify correlations with turbidity. The results indicated that band 6 had the closest relationship to turbidity; however, the combined bands 3 and 6 model simulated turbidity most accurately (R(2) = 0.821, p<0.0001), while the model based on band 6 alone performed almost as well (R(2) = 0.749, p<0.0001). An independent validation data set was used to evaluate the performances of both models, and mean relative error values of 42.5% and 51.2% were obtained for the combined model and the band 6 model, respectively. The accurate performances of the proposed models indicated that the use of R(rc) to model turbidity in highly turbid coastal waters is feasible. As an example, the developed model was applied to 8 hourly GOCI images on 30 December 2014. Three cross sections were selected to identify the spatiotemporal variation of turbidity in the study area. Turbidity generally decreased from near-shore to offshore and from morning to afternoon. Overall, the findings of this study provide a simple and practical method, based on GOCI data, to estimate turbidity in highly turbid coastal waters at high temporal resolutions.
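    A minimal sketch of the kind of empirical band-combination regression described above, run on synthetic data: the functional forms relating reflectance to turbidity are assumptions for illustration, not the paper's fitted GOCI model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "turbidity" and Rayleigh-corrected reflectances. The logarithmic
# band-6 response is a common empirical shape, not the paper's exact formula.
turb = rng.uniform(5, 200, 300)                        # NTU
rrc6 = 0.02 * np.log(turb) + rng.normal(0, 0.002, 300)
rrc3 = 0.01 + 0.00005 * turb + rng.normal(0, 0.001, 300)

# Fit log(turbidity) linearly on the two band predictors (ordinary least squares).
X = np.column_stack([np.ones_like(turb), rrc6, rrc3])
beta, *_ = np.linalg.lstsq(X, np.log(turb), rcond=None)
pred = X @ beta

ss_res = np.sum((np.log(turb) - pred) ** 2)
ss_tot = np.sum((np.log(turb) - np.log(turb).mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 2))  # high R^2 on this synthetic data
```

    In practice the calibration data would be the buoy-satellite match-ups, and model skill would be judged on a held-out validation set, as in the abstract.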

  2. Design logistics performance measurement model of automotive component industry for srengthening competitiveness of dealing AEC 2015

    NASA Astrophysics Data System (ADS)

    Amran, T. G.; Janitra Yose, Mindy

    2018-03-01

    As the ASEAN Economic Community (AEC) free-trade area brings tougher competition, it is important that Indonesia’s automotive industry be highly competitive as well. A model of logistics performance measurement was designed as an evaluation tool for automotive component companies to improve their logistics performance in order to compete in the AEC. The design of the logistics performance measurement model was based on the Logistics Scorecard perspectives and was divided into two stages: identifying the logistics business strategy to obtain the KPIs, and arranging the model. 23 KPIs were obtained. The measurement results can be taken into consideration when determining policies to improve logistics performance and competitiveness.

  3. Evaporating Spray in Supersonic Streams Including Turbulence Effects

    NASA Technical Reports Server (NTRS)

    Balasubramanyam, M. S.; Chen, C. P.

    2006-01-01

    Evaporating spray plays an important role in spray combustion processes. This paper describes the development of a new finite-conductivity evaporation model, based on the two-temperature film theory, for two-phase numerical simulation using Eulerian-Lagrangian method. The model is a natural extension of the T-blob/T-TAB atomization/spray model which supplies the turbulence characteristics for estimating effective thermal diffusivity within the droplet phase. Both one-way and two-way coupled calculations were performed to investigate the performance of this model. Validation results indicate the superiority of the finite-conductivity model in low speed parallel flow evaporating sprays. High speed cross flow spray results indicate the effectiveness of the T-blob/T-TAB model and point to the needed improvements in high speed evaporating spray modeling.
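    For orientation, the classical baseline that finite-conductivity evaporation models refine is the d²-law, in which the squared droplet diameter decays linearly in time. A sketch with an assumed evaporation constant (the value below is illustrative, not from the paper):

```python
import numpy as np

# Classic d^2-law droplet evaporation: d^2(t) = d0^2 - K*t. This is the
# simple (infinite-conductivity) baseline that finite-conductivity models
# such as the one described above improve upon.
d0 = 100e-6          # initial droplet diameter, m
K = 1.0e-7           # evaporation constant, m^2/s (assumed value)

t = np.linspace(0.0, d0**2 / K, 50)              # up to complete evaporation
d = np.sqrt(np.maximum(d0**2 - K * t, 0.0))       # diameter history

print(round(d[0] * 1e6, 1), round(d[-1] * 1e6, 1))  # 100.0 0.0
```

    A finite-conductivity model replaces the uniform-droplet-temperature assumption behind K with a resolved internal temperature profile, which is where the turbulence characteristics supplied by T-blob/T-TAB enter.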

  4. DC and small-signal physical models for the AlGaAs/GaAs high electron mobility transistor

    NASA Technical Reports Server (NTRS)

    Sarker, J. C.; Purviance, J. E.

    1991-01-01

    Analytical and numerical models are developed for the microwave small-signal performance, such as transconductance, gate-to-source capacitance, current gain cut-off frequency and the optimum cut-off frequency, of the AlGaAs/GaAs High Electron Mobility Transistor (HEMT) in both the normal and compressed transconductance regions. The validated I-V characteristics and the small-signal performances of four HEMTs are presented.

  5. High-Achieving High School Students and Not so High-Achieving College Students: A Look at Lack of Self-Control, Academic Ability, and Performance in College

    ERIC Educational Resources Information Center

    Honken, Nora B.; Ralston, Patricia A. S.

    2013-01-01

    This study investigated the relationship among lack of self-control, academic ability, and academic performance for a cohort of freshman engineering students who were, with a few exceptions, extremely high achievers in high school. Structural equation modeling analysis led to the conclusion that lack of self-control in high school, as measured by…

  6. Thermal Testing and Analysis of an Efficient High-Temperature Multi-Screen Internal Insulation

    NASA Technical Reports Server (NTRS)

    Weiland, Stefan; Handrick, Karin; Daryabeigi, Kamran

    2007-01-01

    Conventional multi-layer insulations exhibit excellent insulation performance, but they are limited to the temperature range with which their components, reflective foils and spacer materials, are compatible. For high temperature applications, the internal multi-screen insulation (IMI) has been developed, which utilizes unique ceramic material technology to produce reflective screens with high temperature stability. For analytical insulation sizing, a parametric material model is developed that includes the main contributors to heat flow, radiation and conduction. The adaptation of model parameters based on effective steady-state thermal conductivity measurements performed at NASA Langley Research Center (LaRC) allows for extrapolation to arbitrary stack configurations and temperature ranges beyond those covered in the conductivity measurements. Experimental validation of the parametric material model was performed during the thermal qualification test of the X-38 Chin-panel, where test results and predictions showed good agreement.
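    A parametric model of the kind described, with conduction and radiation contributions, can be sketched as an effective conductivity of the form k_eff(T) = a + b·T³ fitted to steady-state measurements. The functional form and all numbers below are illustrative assumptions, not the qualified IMI model.

```python
import numpy as np

def k_eff(temp_k, a, b):
    """Parametric effective conductivity: a solid-conduction term (a) plus a
    radiation term scaling with T^3 (b). Illustrative form and values only."""
    return a + b * temp_k ** 3

# Fit the two parameters to mock "measured" steady-state conductivities.
temps = np.array([300.0, 600.0, 900.0, 1200.0])       # K
meas = np.array([0.02, 0.035, 0.08, 0.17])            # W/m-K, illustrative
A = np.column_stack([np.ones_like(temps), temps ** 3])
(a, b), *_ = np.linalg.lstsq(A, meas, rcond=None)

# Extrapolate beyond the measured range, as the abstract describes.
print(round(k_eff(1500.0, a, b), 3))
```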

  7. How Many Teachers Does It Take to Support a Student? Examining the Relationship between Teacher Support and Adverse Health Outcomes in High-Performing, Pressure-Cooker High Schools

    ERIC Educational Resources Information Center

    Conner, Jerusha O.; Miles, Sarah B.; Pope, Denise C.

    2014-01-01

    Although considerable research has demonstrated the importance of supportive teacher-student relationships to students' academic and nonacademic outcomes, few studies have explored these relationships in the context of high-performing high schools. Hierarchical linear modeling with a sample of 5,557 students from 14 different high-performing…

  8. The Impact of a Freshman Academy on Science Performance of First-Time Ninth-Grade Students at One Georgia High School

    ERIC Educational Resources Information Center

    Daniel, Vivian Summerour

    2011-01-01

    The purpose of this within-group experimental study was to find out to what extent ninth-grade students improved their science performance beyond their middle school science performance at one Georgia high school utilizing a freshman academy model. Freshman academies have been recognized as a useful tool for increasing academic performance among…

  9. Mechanical Behavior of a Low-Cost Ti-6Al-4V Alloy

    NASA Astrophysics Data System (ADS)

    Casem, D. T.; Weerasooriya, T.; Walter, T. R.

    2018-01-01

    Mechanical compression tests were performed on an economical Ti-6Al-4V alloy over a range of strain-rates and temperatures. Low rate experiments (0.001-0.1/s) were performed with a servo-hydraulic load frame and high rate experiments (1000-80,000/s) were performed with the Kolsky bar (Split Hopkinson pressure bar). Emphasis is placed on the large strain, high-rate, and high temperature behavior of the material in an effort to develop a predictive capability for adiabatic shear bands. Quasi-isothermal experiments were performed with the Kolsky bar to determine the large strain response at elevated rates, and bars with small diameters (1.59 mm and 794 µm, instrumented optically) were used to study the response at the higher strain-rates. Experiments were also conducted at temperatures ranging from 81 to 673 K. Two constitutive models are used to represent the data. The first is the Zerilli-Armstrong recovery strain model and the second is a modified Johnson-Cook model which uses the recovery strain term from the Zerilli-Armstrong model. In both cases, the recovery strain feature is critical for capturing the instability that precedes localization.
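    For reference, the unmodified Johnson-Cook flow stress has the familiar multiplicative form below. The recovery-strain term the authors add is not specified in the abstract, so this sketch uses the standard model with nominal, illustrative Ti-6Al-4V-like constants.

```python
import math

def johnson_cook(params, eps, eps_rate, T, T_ref=298.0, T_melt=1878.0,
                 eps_rate_ref=1.0):
    """Standard Johnson-Cook flow stress (MPa): strain hardening, strain-rate
    hardening, and thermal softening terms. Parameters are illustrative; the
    paper's recovery-strain modification is not reproduced here."""
    A, B, n, C, m = params
    T_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * eps ** n) \
        * (1 + C * math.log(eps_rate / eps_rate_ref)) \
        * (1 - max(T_star, 0.0) ** m)

# Nominal Ti-6Al-4V-like constants, for illustration only.
params = (1000.0, 780.0, 0.47, 0.028, 1.0)
low = johnson_cook(params, 0.1, 0.001, 298.0)    # quasi-static
high = johnson_cook(params, 0.1, 1000.0, 298.0)  # Kolsky-bar rate
print(high > low)  # True: flow stress rises with strain rate
```

    The abstract's point is that such a model, without a recovery term, misses the softening that precedes adiabatic shear localization at large strains and high rates.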

  10. Sparse representations via learned dictionaries for x-ray angiogram image denoising

    NASA Astrophysics Data System (ADS)

    Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu

    2018-03-01

    X-ray angiogram image denoising is an active research topic in the field of computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the wide use of nonlocal similar patches. However, nonlocal self-similar (NSS) patch-based methods can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of the NSS patches to obtain high denoising performance and high-quality images. In order to represent the sparse NSS patches at every location of the image well and to solve the image denoising model more efficiently, we learn dictionaries as a global image prior with the K-SVD algorithm over the image being processed; then the simple and effective alternating direction method of multipliers (ADMM) is used to solve the image denoising model. The results of extensive synthetic experiments demonstrate that, owing to the dictionaries learned by the K-SVD algorithm, the sparsely augmented Lagrangian image denoising (SALID) model performs effectively, obtaining state-of-the-art denoising performance and higher-quality images. Moreover, we also give some denoising results for clinical X-ray angiogram images.
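    The shrinkage (soft-thresholding) step at the heart of ADMM-style sparse-coding solvers can be sketched as follows. The identity "dictionary" and all values are illustrative, not the learned K-SVD dictionaries of the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: the shrinkage step applied to the
    sparse-code variable inside each ADMM iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Sparse coding of a noisy coefficient vector in an orthonormal dictionary
# (identity here, for simplicity): one shrinkage recovers the sparse support.
rng = np.random.default_rng(3)
code = np.zeros(50)
code[[3, 17, 42]] = [4.0, -3.0, 5.0]          # true sparse coefficients
noisy = code + rng.normal(0, 0.3, 50)
denoised = soft_threshold(noisy, 1.0)
print(np.count_nonzero(denoised))             # small: noise is zeroed out
```

    A full solver alternates this shrinkage with a least-squares update against the learned dictionary and a dual-variable update, which is what ADMM contributes beyond plain thresholding.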

  11. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.

  12. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
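    The pipeline described in this and the preceding record can be sketched end-to-end on synthetic data: a one-level Haar DWT compacts a simulated "ERP" into a few coefficients, which a linear regression then maps to a performance score. Signal shapes, sizes, and noise levels are all assumptions for illustration.

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

rng = np.random.default_rng(2)
n_trials, n_samples = 100, 64

# Synthetic "ERPs": a slow component whose amplitude drives "performance".
amp = rng.uniform(0.5, 2.0, n_trials)
t = np.arange(n_samples)
erps = amp[:, None] * np.sin(2 * np.pi * t / n_samples) \
    + rng.normal(0, 0.2, (n_trials, n_samples))
perf = 0.8 * amp + rng.normal(0, 0.05, n_trials)

# Feature set: a few high-power approximation coefficients per trial,
# rather than all 64 raw samples (the energy-compaction idea).
approx, _ = haar_dwt(erps.T)          # transform along the time axis
feats = approx.T[:, :8]

X = np.column_stack([np.ones(n_trials), feats])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
r2 = 1 - np.sum((perf - X @ beta) ** 2) / np.sum((perf - perf.mean()) ** 2)
print(round(r2, 2))
```

    With 8 features instead of 64 raw samples, the fit uses far fewer free parameters, mirroring the report's finding that decimated DWT coefficients predict performance with roughly half the parameters of PCA-based models.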

  13. Behavior of high-performance concrete in structural applications.

    DOT National Transportation Integrated Search

    2007-10-01

    High Performance Concrete (HPC) with improved properties has been developed by obtaining the maximum density of the matrix. Mathematical models developed by J.E. Funk and D.R. Dinger, are used to determine the particle size distribution to achieve th...

  14. Making Progress Toward Graduation: Evidence from the Talent Development High School Model

    ERIC Educational Resources Information Center

    Kemple, James J.; Herlihy, Corinne M.; Smith, Thomas J.

    2005-01-01

    In low-performing public high schools in U.S. cities, high proportions of students drop out, students who stay in school typically do not succeed academically, and efforts to make substantial reforms often meet with little success. The Talent Development High School model is a comprehensive school reform initiative that has been developed to…

  15. CTF (Subchannel) Calculations and Validation L3:VVI.H2L.P15.01

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gordon, Natalie

    The goal of the Verification and Validation Implementation (VVI) High to Low (Hi2Lo) process is utilizing a validated model in a high resolution code to generate synthetic data for improvement of the same model in a lower resolution code. This process is useful in circumstances where experimental data does not exist or it is not sufficient in quantity or resolution. Data from the high-fidelity code is treated as calibration data (with appropriate uncertainties and error bounds) which can be used to train parameters that affect solution accuracy in the lower-fidelity code model, thereby reducing uncertainty. This milestone presents a demonstration of the Hi2Lo process derived in the VVI focus area. The majority of the work performed herein describes the steps of the low-fidelity code used in the process with references to the work detailed in the companion high-fidelity code milestone (Reference 1). The CASL low-fidelity code used to perform this work was Cobra Thermal Fluid (CTF) and the high-fidelity code was STAR-CCM+ (STAR). The master branch version of CTF (pulled May 5, 2017 – Reference 2) was utilized for all CTF analyses performed as part of this milestone. The statistical and VVUQ components of the Hi2Lo framework were performed using Dakota version 6.6 (release date May 15, 2017 – Reference 3). Experimental data from Westinghouse Electric Company (WEC – Reference 4) was used throughout the demonstrated process to compare with the high-fidelity STAR results. A CTF parameter called Beta was chosen as the calibration parameter for this work. By default, Beta is defined as a constant mixing coefficient in CTF and is essentially a tuning parameter for mixing between subchannels. Since CTF does not have turbulence models like STAR, Beta is the parameter that performs the most similar function to the turbulence models in STAR. 
The purpose of the work performed in this milestone is to tune Beta to an optimal value that brings the CTF results closer to those measured in the WEC experiments.

  16. Business Models of High Performance Computing Centres in Higher Education in Europe

    ERIC Educational Resources Information Center

    Eurich, Markus; Calleja, Paul; Boutellier, Roman

    2013-01-01

    High performance computing (HPC) service centres are a vital part of the academic infrastructure of higher education organisations. However, despite their importance for research and the necessary high capital expenditures, business research on HPC service centres is mostly missing. From a business perspective, it is important to find an answer to…

  17. Identifying Aspects of Parental Involvement that Affect the Academic Achievement of High School Students

    ERIC Educational Resources Information Center

    Roulette-McIntyre, Ovella; Bagaka's, Joshua G.; Drake, Daniel D.

    2005-01-01

    This study identified parental practices that relate positively to high school students' academic performance. Parents of 643 high school students participated in the study. Data analysis, using a multiple linear regression model, shows parent-school connection, student gender, and race are significant predictors of student academic performance.…

  18. Optimizing spectral wave estimates with adjoint-based sensitivity maps

    NASA Astrophysics Data System (ADS)

    Orzech, Mark; Veeramony, Jay; Flampouris, Stylianos

    2014-04-01

    A discrete numerical adjoint has recently been developed for the stochastic wave model SWAN. In the present study, this adjoint code is used to construct spectral sensitivity maps for two nearshore domains. The maps display the correlations of spectral energy levels throughout the domain with the observed energy levels at a selected location or region of interest (LOI/ROI), providing a full spectrum of values at all locations in the domain. We investigate the effectiveness of sensitivity maps based on significant wave height (Hs) in determining alternate offshore instrument deployment sites when a chosen nearshore location or region is inaccessible. Wave and bathymetry datasets are employed from one shallower, small-scale domain (Duck, NC) and one deeper, larger-scale domain (San Diego, CA). The effects of seasonal changes in wave climate, errors in bathymetry, and multiple assimilation points on sensitivity map shapes and model performance are investigated. Model accuracy is evaluated by comparing spectral statistics as well as an RMS skill score, which estimates a mean model-data error across all spectral bins. Results indicate that data assimilation from identified high-sensitivity alternate locations consistently improves model performance at nearshore LOIs, while assimilation from low-sensitivity locations results in lesser or no improvement. Use of sub-sampled or alongshore-averaged bathymetry has a domain-specific effect on model performance when assimilating from a high-sensitivity alternate location. When multiple alternate assimilation locations are used from areas of lower sensitivity, model performance may be worse than with a single, high-sensitivity assimilation point.

  19. Looking beyond general metrics for model evaluation - lessons from an international model intercomparison study

    NASA Astrophysics Data System (ADS)

    Bouaziz, Laurène; de Boer-Euser, Tanja; Brauer, Claudia; Drogue, Gilles; Fenicia, Fabrizio; Grelier, Benjamin; de Niel, Jan; Nossent, Jiri; Pereira, Fernando; Savenije, Hubert; Thirel, Guillaume; Willems, Patrick

    2016-04-01

    International collaboration between institutes and universities is a promising way to reach consensus on hydrological model development. Education, experience and expert knowledge of the hydrological community have resulted in the development of a great variety of model concepts, calibration methods and analysis techniques. Although comparison studies are very valuable for international cooperation, they often do not lead to very clear new insights regarding the relevance of the modelled processes. We hypothesise that this is partly caused by model complexity and the comparison methods used, which focus on a good overall performance instead of on specific events. We propose an approach that focuses on the evaluation of specific events. Eight international research groups calibrated their model for the Ourthe catchment in Belgium (1607 km2) and carried out a validation in time for the Ourthe (i.e. on two different periods, one of them in blind mode for the modellers) and a validation in space for nested and neighbouring catchments of the Meuse in completely blind mode. For each model, the same protocol was followed and an ensemble of best performing parameter sets was selected. Signatures were first used to assess model performances in the different catchments during validation. Comparison of the models was then followed by evaluation of selected events, which include: low flows, high flows and the transition from low to high flows. While the models show rather similar performances based on general metrics (i.e. Nash-Sutcliffe Efficiency), clear differences can be observed for specific events. While most models are able to simulate high flows well, large differences are observed during low flows and in the ability to capture the first peaks after drier months. The transferability of model parameters to neighbouring and nested catchments is assessed as an additional measure in the model evaluation. 
This suggested approach helps to select, among competing model alternatives, the most suitable model for a specific purpose.
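As a concrete reference for the "general metrics" critique above, the Nash-Sutcliffe Efficiency the groups reported can be computed in a few lines. This is a minimal sketch with invented toy flow values, not data from the study:

```python
# Minimal sketch of the Nash-Sutcliffe Efficiency (NSE): 1.0 is a
# perfect fit, 0.0 means the model is no better than the mean of the
# observations. Flow values are invented, not from the study.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [3.0, 5.0, 9.0, 4.0, 2.0]   # toy observed discharges
sim = [2.8, 5.3, 8.5, 4.4, 2.1]   # toy simulated discharges
score = nash_sutcliffe(obs, sim)  # close to 1: a good overall fit
```

Because the squared-error sum is dominated by large flows, two models with near-identical NSE can still differ sharply on individual low-flow or transition events, which is exactly the motivation for the event-based evaluation proposed above.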

  20. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    NASA Astrophysics Data System (ADS)

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparison of the performance of several SEB models at the same site is limited. In addition, none of the SEB models have been evaluated for their ability to predict ET in rain-fed high biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), Surface Energy Balance System (SEBS), Simplified Surface Energy Balance Index (S-SEBI), and operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions because they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes when partitioning the available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, seasonal cumulative ET predicted by METRIC, SEBAL, and SEBS was higher than seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and seasonal cumulative ET predicted by S-SEBI and SSEBop was lower than seasonal cumulative ETEC in the relatively wetter 2013 growing season. 
Our results indicate the necessity of including a soil moisture or plant water stress component in SEB models to improve their performance, especially in very dry or very wet environments.
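The dry-condition bias described above follows directly from the single-source energy balance closure, in which latent heat is computed as the residual of the other fluxes: underestimating sensible heat H inflates LE, and hence ET. A minimal sketch (flux values are invented for illustration):

```python
# Hedged sketch of the single-source surface energy balance closure:
# latent heat flux LE is the residual of net radiation Rn minus soil
# heat flux G and sensible heat flux H (all in W/m^2). Any bias that
# underestimates H is passed straight into LE, and hence into ET.
# Flux values are invented for illustration.

LAMBDA_V = 2.45e6  # latent heat of vaporization, J/kg (approx.)

def latent_heat_residual(rn, g, h):
    """LE = Rn - G - H."""
    return rn - g - h

def et_rate_mm_per_hour(le):
    """Convert a latent heat flux to an evapotranspiration rate."""
    kg_per_m2_s = le / LAMBDA_V  # mass flux of evaporated water
    return kg_per_m2_s * 3600.0  # 1 kg/m^2 of water == 1 mm depth

le = latent_heat_residual(rn=500.0, g=50.0, h=150.0)
et = et_rate_mm_per_hour(le)  # about 0.44 mm/h for LE = 300 W/m^2
```

A soil moisture or plant water stress term, as the results above suggest, would cap LE under dry conditions instead of letting the residual absorb the full bias in H.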

  1. Advances in Experiment Design for High Performance Aircraft

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1998-01-01

    A general overview and summary of recent advances in experiment design for high performance aircraft is presented, along with results from flight tests. General theoretical background is included, with some discussion of various approaches to maneuver design. Flight test examples from the F-18 High Alpha Research Vehicle (HARV) are used to illustrate applications of the theory. Input forms are compared using Cramer-Rao bounds for the standard errors of estimated model parameters. Directions for future research in experiment design for high performance aircraft are identified.

  2. Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight

    NASA Technical Reports Server (NTRS)

    Narducci, Robert; Orr, Stanley; Kreeger, Richard E.

    2012-01-01

    An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.

  3. 3D environment modeling and location tracking using off-the-shelf components

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.

    2016-05-01

    The remarkable popularity of smartphones over the past decade has led to a technological race for dominance in market share. This has resulted in a flood of new processors and sensors that are inexpensive, low power and high performance. These sensors include accelerometers, gyroscopes, barometers and, most importantly, cameras. This sensor suite, coupled with multicore processors, allows a new community of researchers to build small, high performance platforms for low cost. This paper describes a system using off-the-shelf components to perform position tracking as well as environment modeling. The system relies on tracking using stereo vision and inertial navigation to determine movement of the system as well as create a model of the environment sensed by the system.

  4. Stage effects on stalling and recovery of a high-speed 10-stage axial-flow compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Copenhaver, W.W.

    1988-01-01

    Results of a high-speed 10-stage axial-flow compressor test involving overall compressor and individual stage performance while stalling and operating in quasi-steady rotating stall are described. Test procedures and data-acquisition methods used to obtain the dynamic stalling and quasi-steady in-stall data are explained. Unstalled and in-stall time-averaged data obtained from the compressor operating at five different shaft speeds and one off-schedule variable vane condition are presented. Effects of compressor speed and variable geometry on overall compressor in-stall pressure rise and hysteresis extent are illustrated through the use of quasi-steady-stage temperature rise and pressure-rise characteristics. Results indicate that individual stage performance during overall compressor rotating stall operation varies considerably throughout the length of the compressor. The measured high-speed 10-stage test compressor individual stage pressure and temperature characteristics were input into a stage-by-stage dynamic compressor performance model. Comparison of the model results and measured pressures provided the additional validation necessary to demonstrate the model's ability to predict high-speed multistage compressor stalling and in-stall performance.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat; Cole, Wesley

    Power sector capacity expansion models (CEMs) have a broad range of spatial resolutions. This paper uses the Regional Energy Deployment System (ReEDS) model, a long-term national scale electric sector CEM, to evaluate the value of high spatial resolution for CEMs. ReEDS models the United States with 134 load balancing areas (BAs) and captures the variability in existing generation parameters, future technology costs, performance, and resource availability using very high spatial resolution data, especially for wind and solar modeled at 356 resource regions. In this paper we perform planning studies at three different spatial resolutions--native resolution (134 BAs), state-level, and NERC region level--and evaluate how results change under different levels of spatial aggregation in terms of renewable capacity deployment and location, associated transmission builds, and system costs. The results are used to ascertain the value of high geographically resolved models in terms of their impact on relative competitiveness among renewable energy resources.

  6. Identifying Chinese Microblog Users With High Suicide Probability Using Internet-Based Profile and Linguistic Features: Classification Model

    PubMed Central

    Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul SF

    2015-01-01

    Background Traditional offline assessment of suicide probability is time consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has an advantage in its efficiency and potential to reach out to hidden individuals, yet little research has focused on this specific field. Objective The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users in China with high suicide possibility through profile and linguistic features extracted from Internet-based data. Methods Nine hundred and nine Chinese microblog users completed an Internet survey; those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean in each of the four subscale scores, were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into two machine learning algorithms (SLR and RF) to train models that aim to identify high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross-validation, in which both the training and test sets were generated under the stratified random sampling rule from the whole sample. Three classic performance metrics (Precision, Recall, F1 measure) and a specifically defined metric "Screening Efficiency" were adopted to evaluate model effectiveness. Results Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2. 
Precision of the models was generally below 30%. Conclusions Individuals in China with high suicide probability are recognizable by profile and text-based information from microblogs. Although there is still much room to improve the performance of the classification models, this study may shed light on preliminary screening of at-risk individuals via machine learning algorithms, which can work side-by-side with expert scrutiny to increase efficiency in large-scale surveillance of suicide probability from online social media. PMID:26543921

  7. Identifying Chinese Microblog Users With High Suicide Probability Using Internet-Based Profile and Linguistic Features: Classification Model.

    PubMed

    Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul Sf; Zhu, Tingshao

    2015-01-01

    Traditional offline assessment of suicide probability is time consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has an advantage in its efficiency and potential to reach out to hidden individuals, yet little research has focused on this specific field. The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users in China with high suicide possibility through profile and linguistic features extracted from Internet-based data. Nine hundred and nine Chinese microblog users completed an Internet survey; those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean in each of the four subscale scores, were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into two machine learning algorithms (SLR and RF) to train models that aim to identify high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross-validation, in which both the training and test sets were generated under the stratified random sampling rule from the whole sample. Three classic performance metrics (Precision, Recall, F1 measure) and a specifically defined metric "Screening Efficiency" were adopted to evaluate model effectiveness. Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2. Precision of the models was generally below 30%. 
Individuals in China with high suicide probability are recognizable by profile and text-based information from microblogs. Although there is still much room to improve the performance of the classification models, this study may shed light on preliminary screening of at-risk individuals via machine learning algorithms, which can work side-by-side with expert scrutiny to increase efficiency in large-scale surveillance of suicide probability from online social media.
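The three classic metrics named in the abstract can be computed directly from labels and predictions. The sketch below uses invented toy labels chosen to echo the reported ranges (recall above 70%, precision below 30%); the study-specific "Screening Efficiency" metric is deliberately not reproduced here:

```python
# Sketch of Precision, Recall, and F1 from binary labels. The toy
# data are invented to echo the reported ranges; they are not the
# study's data.

def precision_recall_f1(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

y_true = [1, 1, 1, 1] + [0] * 16           # 4 true high-risk users
y_pred = [1, 1, 1, 0] + [1] * 8 + [0] * 8  # 3 found, 8 false alarms
p, r, f = precision_recall_f1(y_true, y_pred)  # r = 0.75, p ~ 0.27
```

The asymmetry in the toy numbers mirrors the screening trade-off described in the conclusions: a permissive classifier recovers most true high-risk users at the cost of many false alarms, which is acceptable when expert scrutiny follows the automated pass.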

  8. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation in performance is seen on the CM-5, where applications using the native message-passing library CMMD are also about 4 to 5 times slower than with data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, and invalidating and aligning the data cache) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, IBM-SP1, etc. is presented.

  9. Temperature dependent simulation of diamond depleted Schottky PIN diodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hathwar, Raghuraj; Dutta, Maitreya; Chowdhury, Srabanti

    2016-06-14

    Diamond is considered as an ideal material for high field and high power devices due to its high breakdown field, high lightly doped carrier mobility, and high thermal conductivity. The modeling and simulation of diamond devices are therefore important to predict the performances of diamond based devices. In this context, we use Silvaco® Atlas, a drift-diffusion based commercial software, to model diamond based power devices. The models used in Atlas were modified to account for both variable range and nearest neighbor hopping transport in the impurity bands associated with high activation energies for boron doped and phosphorus doped diamond. The models were fit to experimentally reported resistivity data over a wide range of doping concentrations and temperatures. We compare to recent data on depleted diamond Schottky PIN diodes demonstrating low turn-on voltages and high reverse breakdown voltages, which could be useful for high power rectifying applications due to the low turn-on voltage enabling high forward current densities. Three dimensional simulations of the depleted Schottky PIN diamond devices were performed and the results are verified with experimental data at different operating temperatures.

  10. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    NASA Astrophysics Data System (ADS)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing is in high demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is a technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input. RQL has a set of three basic gates. Series of reciprocal transmission lines are placed between gates to avoid loss of power and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. A major drawback of RQL is area: distributing a proper power supply requires splitters, which occupy a large area. Distributed arithmetic computes a vector-vector multiplication in which one vector is constant and the other is a signed variable; each word is treated as a binary number, and the bits are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
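Distributed arithmetic as summarized above replaces multipliers with a lookup table over the constant vector: the variable inputs are scanned one bit-plane at a time and the table output is shift-accumulated. Below is a minimal unsigned sketch (the helper names are illustrative; production designs handle signed inputs with offset-binary encoding):

```python
# Illustrative sketch of distributed arithmetic: the dot product of a
# constant coefficient vector with a variable input vector is computed
# bit-serially, one input bit-plane at a time, via a lookup table of
# all subset sums of the constants. Unsigned toy version.

def build_lut(constants):
    """Precompute the sum of every subset of the constant coefficients."""
    n = len(constants)
    return [sum(c for i, c in enumerate(constants) if (idx >> i) & 1)
            for idx in range(1 << n)]

def da_dot(constants, xs, nbits=8):
    """Distributed-arithmetic dot product for unsigned nbits inputs."""
    lut = build_lut(constants)
    acc = 0
    for b in range(nbits - 1, -1, -1):   # most significant bit first
        idx = 0
        for i, x in enumerate(xs):
            idx |= ((x >> b) & 1) << i   # gather bit-plane b of inputs
        acc = (acc << 1) + lut[idx]      # shift-and-accumulate
    return acc

result = da_dot([3, 5, 2], [10, 20, 30])  # equals 3*10 + 5*20 + 2*30
```

The table has 2^n entries for n coefficients, so the multiplier-free datapath is traded for memory; this is the same area-versus-logic trade-off the abstract raises for RQL splitters.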

  11. Diagnostic methods for CW laser damage testing

    NASA Astrophysics Data System (ADS)

    Stewart, Alan F.; Shah, Rashmi S.

    2004-06-01

    High performance optical coatings are an enabling technology for many applications - navigation systems, telecom, fusion, advanced measurement systems of many types, as well as directed energy weapons. The results of recent testing of superior optical coatings conducted at high flux levels will be presented. The diagnostics used in this type of nondestructive testing and the analysis of the data demonstrate the evolution of test methodology. Comparison of performance data under load to the predictions of thermal and optical models shows excellent agreement. These tests serve to anchor the models and validate the performance of the materials and coatings.

  12. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: An Earth Modeling System Software Framework Strawman Design that Integrates Cactus and UCLA/UCB Distributed Data Broker

    NASA Technical Reports Server (NTRS)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.

  13. Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.

    PubMed

    Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S

    2015-07-21

    The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.

  14. A High-Performance Neural Prosthesis Incorporating Discrete State Selection With Hidden Markov Models.

    PubMed

    Kao, Jonathan C; Nuyujukian, Paul; Ryu, Stephen I; Shenoy, Krishna V

    2017-04-01

    Communication neural prostheses aim to restore efficient communication to people with motor neurological injury or disease by decoding neural activity into control signals. These control signals are both analog (e.g., the velocity of a computer mouse) and discrete (e.g., clicking an icon with a computer mouse) in nature. Effective, high-performing, and intuitive-to-use communication prostheses should be capable of decoding both analog and discrete state variables seamlessly. However, to date, the highest-performing autonomous communication prostheses rely on precise analog decoding and typically do not incorporate high-performance discrete decoding. In this report, we incorporated a hidden Markov model (HMM) into an intracortical communication prosthesis to enable accurate and fast discrete state decoding in parallel with analog decoding. In closed-loop experiments with nonhuman primates implanted with multielectrode arrays, we demonstrate that incorporating an HMM into a neural prosthesis can increase state-of-the-art achieved bitrate by 13.9% and 4.2% in two monkeys. We found that the transition model of the HMM is critical to achieving this performance increase. Further, we found that using an HMM resulted in the highest achieved peak performance we have ever observed for these monkeys, achieving peak bitrates of 6.5, 5.7, and 4.7 bps in Monkeys J, R, and L, respectively. Finally, we found that this neural prosthesis was robustly controllable for the duration of entire experimental sessions. These results demonstrate that high-performance discrete decoding can be beneficially combined with analog decoding to achieve new state-of-the-art levels of performance.
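The role of the transition model can be illustrated with a textbook Viterbi decoder: a "sticky" transition matrix (high self-transition probability) smooths isolated, noisy per-step evidence, which is the mechanism the abstract credits for the performance gain. All states, probabilities, and observations below are invented for illustration; this is not the authors' decoder:

```python
# Textbook Viterbi decoder with a "sticky" transition matrix: high
# self-transition probabilities suppress isolated, noisy observations.
# States, probabilities, and observations are all invented.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (v[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            v[t][s] = prob
            back[t][s] = prev
    last = max(v[-1], key=v[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ("move", "click")
start = {"move": 0.9, "click": 0.1}
trans = {"move": {"move": 0.95, "click": 0.05},    # sticky transitions
         "click": {"move": 0.05, "click": 0.95}}
emit = {"move": {"m": 0.8, "c": 0.2},              # noisy evidence
        "click": {"m": 0.3, "c": 0.7}}
decoded = viterbi("mmmcmccc", states, start, trans, emit)
```

With these sticky transitions the isolated "m" inside the trailing click run is smoothed into a single sustained click state rather than flipping the decoder back and forth; a uniform transition matrix would instead track every noisy observation.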

  15. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  16. Job stress models, depressive disorders and work performance of engineers in microelectronics industry.

    PubMed

    Chen, Sung-Wei; Wang, Po-Chuan; Hsin, Ping-Lung; Oates, Anthony; Sun, I-Wen; Liu, Shen-Ing

    2011-01-01

    Microelectronic engineers are considered valuable human capital contributing significantly toward economic development, but they may encounter stressful work conditions in the context of a globalized industry. The study aims at identifying risk factors for depressive disorders, primarily based on the Demand-Control-Support and Effort-Reward Imbalance job stress models, and at evaluating whether depressive disorders impair work performance among microelectronics engineers in Taiwan. The case-control study was conducted among 678 microelectronics engineers: 452 controls and 226 cases with depressive disorders, defined by a score of 17 or more on the Beck Depression Inventory and a psychiatrist's diagnosis. The self-administered questionnaires included the Job Content Questionnaire, the Effort-Reward Imbalance Questionnaire, demography, psychosocial factors, health behaviors and work performance. Hierarchical logistic regression was applied to identify risk factors for depressive disorders, and multivariate linear regressions were used to determine factors affecting work performance. By hierarchical logistic regression, the risk factors for depressive disorders were high demands, low work social support, a high effort/reward ratio and a low frequency of physical exercise. Combining the two job stress models may give better predictive power for depressive disorders than adopting either model alone. Three multivariate linear regressions provided similar results, indicating that depressive disorders are associated with impaired work performance in terms of absence, role limitation and social functioning limitation. The results may provide insight into the applicability of job stress models in a globalized high-tech industry largely concentrated in non-Western countries, and into the design of workplace preventive strategies for depressive disorders in Asian electronics engineering populations.

  17. Explaining high and low performers in complex intervention trials: a new model based on diffusion of innovations theory.

    PubMed

    McMullen, Heather; Griffiths, Chris; Leber, Werner; Greenhalgh, Trisha

    2015-05-31

    Complex intervention trials may require health care organisations to implement new service models. In a recent cluster randomised controlled trial, some participating organisations achieved high recruitment, whereas others found it difficult to assimilate the intervention and were low recruiters. We sought to explain this variation and develop a model to inform organisational participation in future complex intervention trials. The trial included 40 general practices in a London borough with high HIV prevalence. The intervention was offering a rapid HIV test as part of the New Patient Health Check. The primary outcome was mean CD4 cell count at diagnosis. The process evaluation consisted of several hundred hours of ethnographic observation, 21 semi-structured interviews and analysis of routine documents (e.g., patient leaflets, clinical protocols) and trial documents (e.g., inclusion criteria, recruitment statistics). Qualitative data were analysed thematically using--and, where necessary, extending--Greenhalgh et al.'s model of diffusion of innovations. Narrative synthesis was used to prepare case studies of four practices representing maximum variety in clinicians' interest in HIV (assessed by level of serological testing prior to the trial) and performance in the trial (high vs. low recruiters). High-recruiting practices were, in general though not invariably, also innovative practices. They were characterised by strong leadership, good managerial relations, readiness for change, a culture of staff training and available staff time ('slack resources'). Their front-line staff believed that patients might benefit from the rapid HIV test ('relative advantage'), were emotionally comfortable administering it ('compatibility'), skilled in performing it ('task issues') and made creative adaptations to embed the test in local working practices ('reinvention'). 
Early experience of a positive HIV test ('observability') appeared to reinforce staff commitment to recruiting more participants. Low-performing practices typically had poorer managerial relations, significant resource constraints, staff discomfort with the test and no positive results early in the trial. An adaptation of the diffusion of innovations model was an effective analytical tool for retrospectively explaining high- and low-performing practices in a complex intervention research trial. Whether the model will work prospectively to predict performance (and hence shape the design of future trials) is unknown. ISRCTN Registry number: ISRCTN63473710. Date assigned: 22 April 2010.

  18. A Bayesian Performance Prediction Model for Mathematics Education: A Prototypical Approach for Effective Group Composition

    ERIC Educational Resources Information Center

    Bekele, Rahel; McPherson, Maggie

    2011-01-01

    This research work presents a Bayesian Performance Prediction Model that was created in order to determine the strength of personality traits in predicting the level of mathematics performance of high school students in Addis Ababa. It is an automated tool that can be used to collect information from students for the purpose of effective group…

  19. Rapid, high-resolution measurement of leaf area and leaf orientation using terrestrial LiDAR scanning data

    USDA-ARS?s Scientific Manuscript database

    The rapid evolution of high performance computing technology has allowed for the development of extremely detailed models of the urban and natural environment. Although models can now represent sub-meter-scale variability in environmental geometry, model users are often unable to specify the geometr...

  20. Artificial intelligence based model for optimization of COD removal efficiency of an up-flow anaerobic sludge blanket reactor in the saline wastewater treatment.

    PubMed

    Picos-Benítez, Alain R; López-Hincapié, Juan D; Chávez-Ramírez, Abraham U; Rodríguez-García, Adrián

    2017-03-01

    The complex non-linear behavior exhibited in the biological treatment of wastewater requires an accurate model to predict system performance. This study evaluates the effectiveness of an artificial intelligence (AI) model, based on the combination of artificial neural networks (ANNs) and genetic algorithms (GAs), in finding the optimum performance of an up-flow anaerobic sludge blanket (UASB) reactor for saline wastewater treatment. Chemical oxygen demand (COD) removal was predicted using conductivity, organic loading rate (OLR) and temperature as input variables. The ANN model was built from experimental data, and its performance was assessed through the maximum mean absolute percentage error (9.226%) computed from the measured and model-predicted values of the COD. The ANN model was then used as a fitness function in a GA to find the best operational condition. In the worst-case scenario (low energy requirements, high OLR and high salinity) this model guaranteed COD removal efficiency values above 70%. This result was validated experimentally, confirming that the ANN-GA model can be used as a tool to achieve the best performance of a UASB reactor with the minimum energy requirement for saline wastewater treatment.
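
    A minimal sketch of the ANN-GA idea, with a stand-in surrogate in place of the trained network: a genetic algorithm searches the input space (conductivity, OLR, temperature) for the operating point that maximises the surrogate's predicted COD removal. The toy response surface, bounds and GA settings below are invented for illustration; only the optimisation loop mirrors the approach described.

```python
import random

def predicted_cod_removal(conductivity, olr, temperature):
    """Stand-in for the trained ANN surrogate (NOT the paper's fitted model):
    a smooth toy response surface peaking at moderate OLR and temperature."""
    return (90
            - 0.002 * (conductivity - 10_000) ** 2 / 1000
            - 4.0 * (olr - 6.0) ** 2
            - 0.05 * (temperature - 32.0) ** 2)

BOUNDS = [(5_000, 20_000), (1.0, 12.0), (20.0, 40.0)]  # conductivity, OLR, temp

def genetic_search(fitness, bounds, pop_size=40, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]    # blend crossover
            for i, (lo, hi) in enumerate(bounds):          # Gaussian mutation
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(*ind))

best = genetic_search(predicted_cod_removal, BOUNDS)
print(best, predicted_cod_removal(*best))
```

    With these settings the search settles near the surrogate's optimum; in the study, the same loop would instead query the trained ANN at each candidate operating condition.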

  1. Thermodynamic analysis of biofuels as fuels for high temperature fuel cells

    NASA Astrophysics Data System (ADS)

    Milewski, Jarosław; Bujalski, Wojciech; Lewandowski, Janusz

    2011-11-01

    Based on mathematical modeling and numerical simulations, the applicability of various biofuels to high temperature fuel cells and their effect on cell performance are presented. Governing equations of high temperature fuel cell modeling are given. Simulators of both the solid oxide fuel cell (SOFC) and the molten carbonate fuel cell (MCFC) have been developed and described. Performance of these fuel cells with different biofuels is shown, and selected characteristics are given and described. Advantages and disadvantages of various biofuels from the system performance point of view are pointed out. An analysis of various biofuels as potential fuels for SOFC and MCFC is presented, with both methane and hydrogen as the reference fuels. The biofuels are characterized by both lower efficiency and lower fuel utilization factors compared with methane. The presented results are based on a 0D mathematical model at the design point; the governing equations of the model are also presented. Technical and financial analyses of high temperature fuel cells (SOFC and MCFC) are shown. High temperature fuel cells can be fed with biofuels such as biogas, bioethanol, and biomethanol. Operational costs and possible incomes of these installation types were estimated and analyzed, and a comparison against classic power generation units is shown. The net present value (NPV), a basic indicator for such projects, was estimated and discussed.

  3. Predictive models for subtypes of autism spectrum disorder based on single-nucleotide polymorphisms and magnetic resonance imaging.

    PubMed

    Jiao, Y; Chen, R; Ke, X; Cheng, L; Chu, K; Lu, Z; Herskovits, E H

    2011-01-01

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder, of which Asperger syndrome and high-functioning autism are subtypes. Our goals were: 1) to determine whether a diagnostic model based on single-nucleotide polymorphisms (SNPs), brain regional thickness measurements, or brain regional volume measurements can distinguish Asperger syndrome from high-functioning autism; and 2) to compare the SNP-, thickness-, and volume-based diagnostic models. Our study included 18 children with ASD: 13 subjects with high-functioning autism and 5 subjects with Asperger syndrome. For each child, we obtained 25 SNPs for 8 ASD-related genes; we also computed regional cortical thicknesses and volumes for 66 brain structures, based on structural magnetic resonance (MR) examination. To generate diagnostic models, we employed five machine-learning techniques: decision stump, alternating decision trees, multi-class alternating decision trees, logistic model trees, and support vector machines. For SNP-based classification, the three decision-tree-based models performed better than the other two machine-learning models. The performance metrics for the three decision-tree-based models were similar: decision stump was modestly better than the other two methods, with accuracy = 90%, sensitivity = 0.95 and specificity = 0.75. All thickness- and volume-based diagnostic models performed poorly. The SNP-based diagnostic models were superior to those based on thickness and volume. For SNP-based classification, rs878960 in GABRB3 (gamma-aminobutyric acid A receptor, beta 3) was selected by all tree-based models. Our analysis demonstrated that SNP-based classification was more accurate than morphometry-based classification in ASD subtype classification. Also, we found that one SNP--rs878960 in GABRB3--distinguishes Asperger syndrome from high-functioning autism.
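
    Of the five techniques, the decision stump is the simplest: a one-level decision tree that classifies on a single feature and threshold. A minimal sketch, using invented 0/1/2-coded genotype vectors rather than the study's SNP data:

```python
def fit_stump(X, y):
    """One-level decision tree: choose the (feature, threshold) split that
    minimises misclassification on the training set.  Labels are 0/1."""
    n_features = len(X[0])
    best = None  # (errors, feature, threshold, label_if_leq, label_if_gt)
    for f in range(n_features):
        for thr in sorted({row[f] for row in X}):
            for leq, gt in ((0, 1), (1, 0)):
                errors = sum(
                    (leq if row[f] <= thr else gt) != label
                    for row, label in zip(X, y)
                )
                if best is None or errors < best[0]:
                    best = (errors, f, thr, leq, gt)
    _, f, thr, leq, gt = best
    return lambda row: leq if row[f] <= thr else gt

# Hypothetical encoded-SNP data: each row is a genotype vector (0/1/2 allele
# counts); label 1 = Asperger syndrome, 0 = high-functioning autism.
X = [[0, 2], [0, 1], [1, 2], [2, 0], [2, 1], [3, 0]]
y = [1, 1, 1, 0, 0, 0]
stump = fit_stump(X, y)
print([stump(row) for row in X])
# → [1, 1, 1, 0, 0, 0]
```

    A single informative feature, such as the rs878960 genotype reported in the abstract, is exactly the situation in which a stump can match far more complex models.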

  4. Modelling invasion for a habitat generalist and a specialist plant species

    USGS Publications Warehouse

    Evangelista, P.H.; Kumar, S.; Stohlgren, T.J.; Jarnevich, C.S.; Crall, A.W.; Norman, J. B.; Barnett, D.T.

    2008-01-01

    Predicting suitable habitat and the potential distribution of invasive species is a high priority for resource managers and systems ecologists. Most models are designed to identify habitat characteristics that define the ecological niche of a species with little consideration to individual species' traits. We tested five commonly used modelling methods on two invasive plant species, the habitat generalist Bromus tectorum and habitat specialist Tamarix chinensis, to compare model performances, evaluate predictability, and relate results to distribution traits associated with each species. Most of the tested models performed similarly for each species; however, the generalist species proved to be more difficult to predict than the specialist species. The highest area under the receiver-operating characteristic curve values with independent validation data sets of B. tectorum and T. chinensis were 0.503 and 0.885, respectively. Similarly, a confusion matrix for B. tectorum had the highest overall accuracy of 55%, while the overall accuracy for T. chinensis was 85%. Models for the generalist species had varying performances, poor evaluations, and inconsistent results. This may be a result of a generalist's capability to persist in a wide range of environmental conditions that are not easily defined by the data, independent variables or model design. Models for the specialist species had consistently strong performances, high evaluations, and similar results among different model applications. This is likely a consequence of the specialist's requirement for explicit environmental resources and ecological barriers that are easily defined by predictive models. Although defining new invaders as generalist or specialist species can be challenging, model performances and evaluations may provide valuable information on a species' potential invasiveness.

  5. Global Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav

    2015-11-01

    Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, to create a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction, while supporting fully-distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.

  6. Principles of Design for High Performing Organizations: An Assessment of the State of the Field of Organizational Design Research

    DTIC Science & Technology

    1994-03-01

    asked whether the planned structure considered (a) all objectives, (b) all functions, (c) all relevant units of analysis such as the plant, the... literature and provides an integrative model of design for high performing organizations. The model is based on an analysis of current theories of... important midrange theories underlie much of the work on organizational analysis. Systems Approaches: these approaches emphasize the rational, goal

  7. High-performance heat pipes for heat recovery applications

    NASA Technical Reports Server (NTRS)

    Saaski, E. W.; Hartl, J. H.

    1980-01-01

    Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.

  8. High-Performance Substrates for SERS Detection via Microphotonic Photopolymer Characterization and Coating With Functionalized Hydrogels

    DTIC Science & Technology

    2006-11-26

    with controlled micro- and nanostructure for highly selective, high-sensitivity assays. The process was modeled and a procedure for fabricating SERS...small volumes with controlled micro- and nanostructure for highly selective, high-sensitivity assays. We proved the feasibility of the technique and...films templated by colloidal crystals. The control over the film structure allowed optimizing their performance for potential sensor applications. The

  9. Spectral-Element Seismic Wave Propagation Codes for both Forward Modeling in Complex Media and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.

    2015-12-01

    We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.

  10. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-07-01

    Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFC_standard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFC_standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
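
    The AUC used here for internal validation can be computed directly from predicted probabilities via the Mann-Whitney rank identity, without tracing the ROC curve. A short sketch with invented patient-level predictions (not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity:
    the probability that a random positive case scores higher than a random
    negative case, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted mucositis probabilities for 8 patients
# (label 1 = severe mucositis observed).
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.65, 0.55, 0.7, 0.2]
print(auc(labels, scores))
```

    An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is why the reported 0.71 reads as modest-to-good.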

  11. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  12. Appraising Teacher Performance: A Quantitative Approach.

    ERIC Educational Resources Information Center

    Wingate, James G.; Bowers, Fred

    Following a brief research review regarding the relationship between teacher behavior and student outcomes, a model is proposed for identifying those teaching behaviors that are significantly related to high-quality student performance. The model's stages include: (1) delineation of questions; (2) establishment of a framework; (3) selection of an…

  13. Modeling Human Performance: Effects of Personal Traits and Transitory States

    DTIC Science & Technology

    2002-06-01

    [Trait dimensions from the model: Self Confidence (low/high), Extroversion/Introversion, External/Internal Locus of Control; Positive Personality Case.] In the...levels, emotions may not have any effect on performance whatsoever. The current model does not recognize that there may be emotion thresholds that must be

  14. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures

  15. Which is the better forecasting model? A comparison between HAR-RV and multifractality volatility

    NASA Astrophysics Data System (ADS)

    Ma, Feng; Wei, Yu; Huang, Dengshi; Chen, Yixiang

    2014-07-01

    In this paper, taking the 5-min high-frequency data of the Shanghai Composite Index as an example, we compare the forecasting performance of HAR-RV with Multifractal volatility, Realized volatility, Realized Bipower Variation and their corresponding short-memory models, using a rolling-window forecasting method and the Model Confidence Set (MCS) test, which has been shown to be superior to the SPA test. The empirical results show that, for six loss functions, HAR-RV outperforms the other models. Moreover, to make the conclusions more precise and robust, we use the MCS test to compare the performance of the models' logarithmic forms, and find that HAR-log(RV) performs better in predicting future volatility. Comparing HAR-RV and HAR-log(RV), we conclude that, in terms of forecasting performance, the HAR-log(RV) model is the best of the models discussed in this paper.
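
    The HAR-RV model regresses next-day realized volatility on the current daily RV and its trailing weekly (5-day) and monthly (22-day) averages. A self-contained sketch, fitting the model by ordinary least squares on a simulated series (not the Shanghai Composite data used in the paper):

```python
import random

def har_features(rv, week=5, month=22):
    """Build HAR-RV regressors: for each day t, the daily RV plus trailing
    weekly and monthly means, paired with the next day's RV as the target."""
    X, y = [], []
    for t in range(month - 1, len(rv) - 1):
        daily = rv[t]
        weekly = sum(rv[t - week + 1 : t + 1]) / week
        monthly = sum(rv[t - month + 1 : t + 1]) / month
        X.append([1.0, daily, weekly, monthly])  # intercept + 3 regressors
        y.append(rv[t + 1])
    return X, y

def ols(X, y):
    """Least squares via normal equations (X'X) b = X'y, solved by
    Gaussian elimination with partial pivoting."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Simulated realized-volatility series generated from a HAR data-generating
# process (illustrative only; NOT the Shanghai Composite data).
rng = random.Random(1)
rv = [1.0] * 22
for _ in range(2000):
    daily = rv[-1]
    weekly = sum(rv[-5:]) / 5
    monthly = sum(rv[-22:]) / 22
    rv.append(0.1 + 0.4 * daily + 0.3 * weekly + 0.2 * monthly + rng.gauss(0, 0.05))

X, y = har_features(rv)
beta = ols(X, y)
print([round(b, 2) for b in beta])
```

    The HAR-log(RV) variant preferred in the paper is obtained by applying the same regression to the logarithm of the realized-volatility series.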

  16. Performance of European chemistry transport models as function of horizontal resolution

    NASA Astrophysics Data System (ADS)

    Schaap, M.; Cuvelier, C.; Hendriks, C.; Bessagnet, B.; Baldasano, J. M.; Colette, A.; Thunis, P.; Karam, D.; Fagerli, H.; Graff, A.; Kranenburg, R.; Nyiri, A.; Pay, M. T.; Rouïl, L.; Schulz, M.; Simpson, D.; Stern, R.; Terrenoire, E.; Wind, P.

    2015-07-01

    Air pollution causes adverse effects on human health as well as ecosystems and crop yields, and also has an impact on climate change through short-lived climate forcers. To design mitigation strategies for air pollution, 3D Chemistry Transport Models (CTMs) have been developed to support the decision process. Increases in model resolution may provide more accurate and detailed information, but will cubically increase computational costs and pose additional challenges concerning high resolution input data. The motivation for the present study was therefore to explore the impact of using finer horizontal grid resolution for policy support applications of the European Monitoring and Evaluation Programme (EMEP) model within the Long Range Transboundary Air Pollution (LRTAP) convention. The goal was to determine the "optimum resolution" at which additional computational efforts do not provide increased model performance using presently available input data. Five regional CTMs performed four runs for 2009 over Europe at different horizontal resolutions. The responses to an increase in resolution are broadly consistent across models. The largest response was found for NO2 followed by PM10 and O3. Model resolution does not impact model performance for rural background conditions. However, increasing model resolution improves the model performance at stations in and near large conglomerations. The statistical evaluation showed that the increased resolution better reproduces the spatial gradients in pollution regimes, but does not help to improve significantly the model performance for reproducing observed temporal variability. This study clearly shows that increasing model resolution is advantageous, and that leaving a resolution of 50 km in favour of a resolution between 10 and 20 km is practical and worthwhile. 
As about 70% of the model response to grid resolution is determined by the difference in the spatial emission distribution, improved emission allocation procedures at high spatial and temporal resolution are a crucial factor for further model resolution improvements.

  17. Three-dimensional modeling of flow through fractured tuff at Fran Ridge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Ho, C.K.; Glass, RJ.

    1996-09-01

    Numerical studies have been made of an infiltration experiment at Fran Ridge using the TOUGH2 code to aid in the selection of computational models for performance assessment. The exercise investigates the capabilities of TOUGH2 to model transient flows through highly fractured tuff and provides a possible means of calibration. Two distinctly different conceptual models were used in the TOUGH2 code: the dual permeability model and the equivalent continuum model. The infiltration test modeled involved the infiltration of dyed ponded water for 36 minutes. The 205-gallon infiltration of water observed in the experiment was subsequently modeled using measured Fran Ridge fracture frequencies and a specified fracture aperture of 285 µm. The dual permeability formulation predicted considerable infiltration along the fracture network, in agreement with the experimental observations. As expected, minimal fracture penetration of the infiltrating water was calculated using the equivalent continuum model, thus demonstrating that this model is not appropriate for modeling the highly transient experiment. It is therefore recommended that the dual permeability model be given priority when computing high-flux infiltration for use in performance assessment studies.

  19. Model-based reinforcement learning with dimension reduction.

    PubMed

    Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi

    2016-12-01

    The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control. Copyright © 2016 Elsevier Ltd. All rights reserved.
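
    The loop the abstract describes — learn a transition model from data, then derive a policy from it — can be sketched generically. The toy example below is not the paper's LSCE method: it fits a linear transition model by ordinary least squares and derives a one-step-lookahead policy; the dynamics and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" dynamics x' = A x + B u, used only to generate data.
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.1]])

# 1) Collect transitions (s, a, s') under random exploration.
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(500):
    u = rng.uniform(-1.0, 1.0, size=1)
    x_next = A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next

# 2) Fit the transition model [A | B] by ordinary least squares.
Z = np.hstack([np.array(X), np.array(U)])            # (500, 3)
W, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
A_hat, B_hat = W.T[:, :2], W.T[:, 2:]

# 3) Derive a policy from the learned model: one-step lookahead that
#    greedily drives the predicted next state toward the origin.
def policy(state, candidates=np.linspace(-1.0, 1.0, 21)):
    costs = [np.linalg.norm(A_hat @ state + B_hat[:, 0] * u) for u in candidates]
    return float(candidates[int(np.argmin(costs))])
```

    The point of the sketch is the data-efficiency tradeoff the paper targets: every sample is spent on estimating the transition model, and the policy is then free to query that model as often as needed.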

  20. Comparative study of turbulence models in predicting hypersonic inlet flows

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1992-01-01

    A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equation, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.

  2. Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.

    PubMed

    Gong, Xiajing; Hu, Meng; Zhao, Liang

    2018-05-01

    Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
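
    The concordance index used above as the prediction-performance yardstick can be computed directly from pairwise comparisons. Below is a minimal sketch of Harrell's C for right-censored data; all data are made up for illustration.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C: among usable pairs (subject i observed to fail before
    subject j's follow-up time), the fraction where the model assigned
    the earlier failure the higher risk; ties in risk count half."""
    num, den = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] == 1 and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# Made-up data: 4 subjects, one censored (event = 0).
time  = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 0, 1])
risk  = np.array([0.9, 0.7, 0.5, 0.1])  # higher risk = predicted earlier failure
print(concordance_index(time, event, risk))  # → 1.0 (perfectly concordant)
```

    A value of 0.5 corresponds to random ranking; the abstract's comparison amounts to asking whether ML risk scores order the simulated failure times better than the Cox linear predictor does.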

  3. Dynamics and control of quadcopter using linear model predictive control approach

    NASA Astrophysics Data System (ADS)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom that include disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories ranging from simple ones such as circular to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective at dealing with different types of nonlinearities and constraints such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective at tracking a given reference trajectory.
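
    A receding-horizon controller of the kind described can be sketched for a single linearized axis. The fragment below is not the paper's model: it stands in a double integrator for one linearized quadcopter axis, computes the horizon-optimal feedback by a backward Riccati recursion, and clips the applied input to represent actuator saturation. All matrices, weights, and limits are illustrative.

```python
import numpy as np

# Stand-in for one linearized quadcopter axis: a double integrator with
# state x = [position, velocity] and input u = commanded acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2)            # state-tracking weight
R = np.array([[0.01]])   # control-effort weight

def horizon_gain(N=20):
    """Feedback gain for the first step of the finite-horizon LQ problem,
    found by a backward Riccati recursion over the prediction horizon."""
    P = Q.copy()
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def mpc_step(x, x_ref, K, u_max=2.0):
    """Apply the first control of the horizon plan, clipped to the
    actuator saturation limit; the plan is recomputed at every step."""
    u = float((-K @ (x - x_ref))[0])
    return max(-u_max, min(u_max, u))

# Track a position setpoint of 1 m from rest.
K = horizon_gain()
x, x_ref = np.array([0.0, 0.0]), np.array([1.0, 0.0])
for _ in range(150):
    u = mpc_step(x, x_ref, K)
    x = A @ x + B[:, 0] * u
```

    The clip inside `mpc_step` is where saturation enters; a full MPC formulation would instead impose the bound inside the optimization, which is one reason the method handles constraints more gracefully than post-hoc clipping.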

  4. Multi-Fidelity Simulation of a Turbofan Engine With Results Zoomed Into Mini-Maps for a Zero-D Cycle Simulation

    NASA Technical Reports Server (NTRS)

    Turner, Mark G.; Reed, John A.; Ryder, Robert; Veres, Joseph P.

    2004-01-01

    A Zero-D cycle simulation of the GE90-94B high bypass turbofan engine has been achieved utilizing mini-maps generated from a high-fidelity simulation. The simulation utilizes the Numerical Propulsion System Simulation (NPSS) thermodynamic cycle modeling system coupled to a high-fidelity full-engine model represented by a set of coupled 3D computational fluid dynamic (CFD) component models. Boundary conditions from the balanced, steady state cycle model are used to define component boundary conditions in the full-engine model. Operating characteristics of the 3D component models are integrated into the cycle model via partial performance maps generated from the CFD flow solutions using one-dimensional mean line turbomachinery programs. This paper highlights the generation of the high-pressure compressor, booster, and fan partial performance maps, as well as turbine maps for the high pressure and low pressure turbine. These are actually "mini-maps" in the sense that they are developed only for a narrow operating range of the component. Results are compared between actual cycle data at a take-off condition and the comparable condition utilizing these mini-maps. The mini-maps are also presented with comparison to actual component data where possible.

  5. Neural correlates of effective learning in experienced medical decision-makers.

    PubMed

    Downar, Jonathan; Bhatt, Meghana; Montague, P Read

    2011-01-01

    Accurate associative learning is often hindered by confirmation bias and success-chasing, which together can conspire to produce or solidify false beliefs in the decision-maker. We performed functional magnetic resonance imaging in 35 experienced physicians while they learned to choose between two treatments in a series of virtual patient encounters. We estimated a learning model for each subject based on their observed behavior, and this model divided the subjects clearly into high performers and low performers. The high performers showed small, but equal learning rates for both successes (positive outcomes) and failures (no response to the drug). In contrast, low performers showed very large and asymmetric learning rates, learning significantly more from successes than failures; a tendency that led to sub-optimal treatment choices. Consistent with these behavioral findings, high performers showed larger, more sustained BOLD responses to failed vs. successful outcomes in the dorsolateral prefrontal cortex and inferior parietal lobule while low performers displayed the opposite response profile. Furthermore, participants' learning asymmetry correlated with anticipatory activation in the nucleus accumbens at trial onset, well before outcome presentation. Subjects with anticipatory activation in the nucleus accumbens showed more success-chasing during learning. These results suggest that high performers' brains achieve better outcomes by attending to informative failures during training, rather than chasing the reward value of successes. The differential brain activations between high and low performers could potentially be developed into biomarkers to identify efficient learners on novel decision tasks, in medical or other contexts.

  6. Performance and Costs of Ductless Heat Pumps in Marine-Climate High-Performance Homes -- Habitat for Humanity The Woods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubliner, Michael; Howard, Luke; Hales, David

    The Woods is a Habitat for Humanity (HFH) community of ENERGY STAR Homes Northwest (ESHNW)-certified homes located in the marine climate of Tacoma/Pierce County, Washington. This research report builds on an earlier preliminary draft 2014 BA report, and includes significant billing analysis and cost-effectiveness research from a collaborative, ongoing ductless heat pump (DHP) research effort for Tacoma Public Utilities (TPU) and Bonneville Power Administration (BPA). This report focuses on the results of field testing, modeling, and monitoring of ductless mini-split heat pump hybrid heating systems in seven homes built and first occupied at various times between September 2013 and October 2014. The report also provides WSU documentation of high-performance home observations, lessons learned, and stakeholder recommendations for builders of affordable high-performance housing such as HFH.

  7. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs on such problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypothesis in high-dimensional feature space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by Adaboost and Naïve Bayes.

  8. Test techniques for model development of repetitive service energy storage capacitors

    NASA Astrophysics Data System (ADS)

    Thompson, M. C.; Mauldin, G. H.

    1984-03-01

    The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature, creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real-time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.

  9. Determination of quantitative retention-activity relationships between pharmacokinetic parameters and biological effectiveness fingerprints of Salvia miltiorrhiza constituents using biopartitioning and microemulsion high-performance liquid chromatography.

    PubMed

    Gao, Haoshi; Huang, Hongzhang; Zheng, Aini; Yu, Nuojun; Li, Ning

    2017-11-01

    In this study, we analyzed danshen (Salvia miltiorrhiza) constituents using biopartitioning and microemulsion high-performance liquid chromatography (MELC). The quantitative retention-activity relationships (QRARs) of the constituents were established to model their pharmacokinetic (PK) parameters and chromatographic retention data, and to generate their biological effectiveness fingerprints. A high-performance liquid chromatography (HPLC) method was established to determine the abundance of the extracted danshen constituents, such as sodium danshensu, rosmarinic acid, salvianolic acid B, protocatechuic aldehyde, cryptotanshinone, and tanshinone IIA. Another HPLC protocol was established to determine the abundance of those constituents in rat plasma samples. An experimental model was built in Sprague Dawley (SD) rats, and the corresponding PK parameters were calculated with the 3P97 software package. Thirty-five model drugs were selected to test the PK parameter prediction capacities of the various MELC systems and to optimize the chromatographic protocols. QRARs were established and PK fingerprints generated. The test included water- and oil-soluble danshen constituents, and the prediction capacity of the regression model was validated. The results showed that the model had good predictability. Copyright © 2017. Published by Elsevier B.V.

  10. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  11. Aeroservoelastic Modeling and Validation of a Thrust-Vectoring F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    1996-01-01

    An F/A-18 aircraft was modified to perform flight research at high angles of attack (AOA) using thrust vectoring and advanced control law concepts for agility and performance enhancement and to provide a testbed for the computational fluid dynamics community. Aeroservoelastic (ASE) characteristics had changed considerably from the baseline F/A-18 aircraft because of structural and flight control system amendments, so analyses and flight tests were performed to verify structural stability at high AOA. Detailed actuator models that consider the physical, electrical, and mechanical elements of actuation and its installation on the airframe were employed in the analysis to accurately model the coupled dynamics of the airframe, actuators, and control surfaces. This report describes the ASE modeling procedure, ground test validation, flight test clearance, and test data analysis for the reconfigured F/A-18 aircraft. Multivariable ASE stability margins are calculated from flight data and compared to analytical margins. Because this thrust-vectoring configuration uses exhaust vanes to vector the thrust, the modeling issues are nearly identical for modern multi-axis nozzle configurations. This report correlates analysis results with flight test data and makes observations concerning the application of the linear predictions to thrust-vectoring and high-AOA flight.

  12. A computational approach to compare regression modelling strategies in prediction research.

    PubMed

    Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H

    2016-08-25

    It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
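
    The two assessment tools named above, the Brier score and the slope from a calibration plot, are straightforward to compute. The sketch below uses made-up outcomes and predicted risks, and fits the logistic recalibration by plain gradient descent purely to stay self-contained; any statistics package would use maximum likelihood directly.

```python
import numpy as np

def brier_score(y, p):
    """Mean squared difference between binary outcome and predicted risk."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return float(np.mean((p - y) ** 2))

def calibration_slope(y, p, iters=5000, lr=0.1):
    """Slope of the logistic recalibration model y ~ logit(p), fitted by
    plain gradient descent; a slope near 1 suggests the spread of the
    predictions is well calibrated."""
    y = np.asarray(y, float)
    lp = np.log(np.asarray(p, float) / (1.0 - np.asarray(p, float)))
    a, b = 0.0, 1.0
    for _ in range(iters):
        q = 1.0 / (1.0 + np.exp(-(a + b * lp)))
        a -= lr * np.mean(q - y)
        b -= lr * np.mean((q - y) * lp)
    return float(b)

# Made-up external-validation data: observed outcomes, predicted risks.
y = np.array([0, 0, 1, 1, 1, 0, 1, 0])
p = np.array([0.1, 0.6, 0.8, 0.4, 0.9, 0.2, 0.7, 0.3])
print(brier_score(y, p))                  # 0.125 here; lower is better
print(round(calibration_slope(y, p), 2))  # near 1 = well calibrated
```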

  13. Modeling climate change impacts on water trading.

    PubMed

    Luo, Bin; Maqsood, Imran; Gong, Yazhen

    2010-04-01

    This paper presents a new method of evaluating the impacts of climate change on the long-term performance of water trading programs, through designing an indicator to measure the mean of periodic water volume that can be released by trading through a water-use system. The indicator is computed with a stochastic optimization model which can reflect the random uncertainty of water availability. The developed method was demonstrated in the Swift Current Creek watershed of Prairie Canada under two future scenarios simulated by a Canadian Regional Climate Model, in which total water availabilities under future scenarios were estimated using a monthly water balance model. Frequency analysis was performed to obtain the best probability distributions for both observed and simulated water quantity data. Results from the case study indicate that the performance of a trading system is highly scenario-dependent in future climate, with trading effectiveness highly optimistic or undesirable under different future scenarios. Trading effectiveness also largely depends on trading costs, with high costs resulting in failure of the trading program. (c) 2010 Elsevier B.V. All rights reserved.

  14. Performance modeling codes for the QuakeSim problem solving environment

    NASA Technical Reports Server (NTRS)

    Parker, J. W.; Donnellan, A.; Lyzenga, G.; Rundle, J.; Tullis, T.

    2003-01-01

    The QuakeSim Problem Solving Environment uses a web-services approach to unify and deploy diverse remote data sources and processing services within a browser environment. Here we focus on the high-performance crustal modeling applications that will be included in this set of remote but interoperable applications.

  15. Chaining for Flexible and High-Performance Key-Value Systems

    DTIC Science & Technology

    2012-09-01

    store that is fault tolerant achieves high performance and availability, and offers strong data consistency? We present a new replication protocol...effective high performance data access and analytics, many sites use simpler data model "NoSQL" systems. These systems store and retrieve data only by...DRAM, Flash, and disk-based storage; can act as an unreliable cache or a durable store; and can offer strong or weak data consistency. The value of

  16. Development of a model to assess environmental performance, concerning HSE-MS principles.

    PubMed

    Abbaspour, M; Hosseinzadeh Lotfi, F; Karbassi, A R; Roayaei, E; Nikoomaram, H

    2010-06-01

    The main objective of the present study was to develop a valid and appropriate model to evaluate companies' efficiency and environmental performance concerning health, safety, and environmental management system principles. The proposed model overcomes the shortcomings of previous models developed in this area. This model has been designed on the basis of a mathematical method known as Data Envelopment Analysis (DEA). In order to differentiate high-performing companies from weak ones, one of the DEA nonradial models, the enhanced Russell graph efficiency measure, has been applied. Since some of the environmental performance indicators cannot be controlled by companies' managers, it was necessary to develop the model in a way that it could be applied when discretionary and/or nondiscretionary factors were involved. The model was then applied to a real case comprising 12 oil and gas general contractors. The results showed the relative efficiency, inefficiency sources, and the rank of the contractors.
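
    As a hedged illustration of the DEA idea only (not the enhanced Russell graph measure used in the study, which requires a full linear-programming formulation): in the degenerate single-input, single-output case, the CCR efficiency score reduces to each unit's output/input ratio normalized by the best observed ratio. All numbers below are hypothetical.

```python
import numpy as np

# Hypothetical data for four contractors: one input (e.g. resource use)
# and one output (e.g. delivered work).
inputs  = np.array([10.0, 12.0, 8.0, 15.0])
outputs = np.array([5.0, 9.0, 4.0, 6.0])

# With a single input and a single output, the CCR efficiency score
# reduces to each unit's output/input ratio, scaled so that the best
# observed ratio defines the efficient frontier (score 1.0).
ratios = outputs / inputs
efficiency = ratios / ratios.max()
ranking = np.argsort(-efficiency)   # contractors, most to least efficient
print(np.round(efficiency, 3))      # the second contractor is efficient
```

    With multiple inputs and outputs, and with nondiscretionary factors as in the study, each unit's score instead comes from its own optimization problem, which is what motivates the LP-based DEA machinery.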

  17. Performance prediction of high Tc superconducting small antennas using a two-fluid-moment method model

    NASA Astrophysics Data System (ADS)

    Cook, G. G.; Khamas, S. K.; Kingsley, S. P.; Woods, R. C.

    1992-01-01

    The radar cross section and Q factors of electrically small dipole and loop antennas made with a YBCO high Tc superconductor are predicted using a two-fluid-moment method model, in order to determine the effects of finite conductivity on the performances of such antennas. The results compare the useful operating bandwidths of YBCO antennas exhibiting varying degrees of impurity with their copper counterparts at 77 K, showing a linear relationship between bandwidth and impurity level.

  18. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 3A: High pressure oxidizer turbo-pump preburner pump housing stress analysis report

    NASA Technical Reports Server (NTRS)

    Shannon, Robert V., Jr.

    1989-01-01

    The model generation and structural analysis performed for the High Pressure Oxidizer Turbopump (HPOTP) preburner pump volute housing located on the main pump end of the HPOTP in the space shuttle main engine are summarized. An ANSYS finite element model of the volute housing was built and executed. A static structural analysis was performed on the Engineering Analysis and Data System (EADS) Cray-XMP supercomputer.

  19. The Processing and Mechanical Properties of High Temperature/High Performance Composites. Book 3. Constituent Properties and Macroscopic Performance: MMCs

    DTIC Science & Technology

    1993-04-01

    [The indexed excerpt for this record is OCR residue of equations (Eqns. C2-C4, B10, eq. 25) and a chart covering measurements, 1-D and 2-D constitutive laws, matrix cracking models, stress redistribution, and numerical calculations; no readable abstract survives.]

  20. Design of high productivity antibody capture by protein A chromatography using an integrated experimental and modeling approach.

    PubMed

    Ng, Candy K S; Osuna-Sanchez, Hector; Valéry, Eric; Sørensen, Eva; Bracewell, Daniel G

    2012-06-15

    An integrated experimental and modeling approach for the design of high productivity protein A chromatography is presented to maximize productivity in bioproduct manufacture. The approach consists of four steps: (1) small-scale experimentation, (2) model parameter estimation, (3) productivity optimization and (4) model validation with process verification. The integrated use of process experimentation and modeling enables fewer experiments to be performed, and thus minimizes the time and materials required in order to gain process understanding, which is of key importance during process development. The application of the approach is demonstrated for the capture of antibody by a novel silica-based high performance protein A adsorbent named AbSolute. In the example, a series of pulse injections and breakthrough experiments were performed to develop a lumped parameter model, which was then used to find the best design that optimizes the productivity of a batch protein A chromatographic process for human IgG capture. An optimum productivity of 2.9 kg L⁻¹ day⁻¹ for a column of 5 mm diameter and 8.5 cm length was predicted, and subsequently verified experimentally, completing the whole process design approach in only 75 person-hours (or approximately 2 weeks). Copyright © 2012 Elsevier B.V. All rights reserved.
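
    As a quick back-of-envelope check on the reported optimum, the volumetric productivity can be converted to an absolute throughput for the stated column geometry (volume = πr²L, and kg L⁻¹ day⁻¹ is numerically g mL⁻¹ day⁻¹):

```python
import math

# Reported optimum: 2.9 kg of IgG per litre of adsorbent per day,
# on a 5 mm diameter x 8.5 cm long column.
radius_cm, length_cm = 0.25, 8.5
volume_ml = math.pi * radius_cm**2 * length_cm  # column volume (1 cm^3 = 1 mL)
daily_mass_g = 2.9 * volume_ml                  # kg/L/day == g/mL/day
print(round(volume_ml, 2), round(daily_mass_g, 2))  # → 1.67 4.84
```

    So the lab-scale column processes roughly 4.8 g of antibody per day; scaling the same volumetric productivity to process columns is what makes the figure meaningful.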

  1. A generic simulation model to assess the performance of sterilization services in health establishments.

    PubMed

    Di Mascolo, Maria; Gouin, Alexia

    2013-03-01

    The work presented here is with a view to improving performance of sterilization services in hospitals. We carried out a survey in a large number of health establishments in the Rhône-Alpes region in France. Based on the results of this survey and a detailed study of a specific service, we have built a generic model. The generic nature of the model relies on a common structure with a high level of detail. This model can be used to improve the performance of a specific sterilization service and/or to dimension its resources. It can also serve for quantitative comparison of performance indicators of various sterilization services.
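
    A generic sterilization-service model of this kind is typically a discrete-event queueing simulation. The fragment below sketches the smallest possible version, a single sterilizer treated as one server with exponential arrivals and service times; all rates are hypothetical, not survey values.

```python
import random

random.seed(1)

def simulate(n_loads=200, arrival_mean=10.0, service_mean=8.0):
    """One sterilizer as a single-server queue: instrument loads arrive
    with exponential inter-arrival times, wait if the machine is busy,
    then are processed for an exponential service time. Returns the
    mean waiting time, a basic performance indicator of the service."""
    t, free_at, waits = 0.0, 0.0, []
    for _ in range(n_loads):
        t += random.expovariate(1.0 / arrival_mean)  # next arrival
        start = max(t, free_at)                      # wait if busy
        waits.append(start - t)
        free_at = start + random.expovariate(1.0 / service_mean)
    return sum(waits) / len(waits)

mean_wait = simulate()
print(round(mean_wait, 1))  # illustrative minutes of waiting per load
```

    A full model in the paper's spirit would add the washing, packing and sterilizing stages with their own resources; dimensioning then amounts to rerunning the simulation while varying the resource counts.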

  2. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require highly powerful systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  3. Magnetosphere simulations with a high-performance 3D AMR MHD Code

    NASA Astrophysics Data System (ADS)

    Gombosi, Tamas; Dezeeuw, Darren; Groth, Clinton; Powell, Kenneth; Song, Paul

    1998-11-01

    BATS-R-US is a high-performance 3D AMR MHD code for space physics applications running on massively parallel supercomputers. In BATS-R-US the electromagnetic and fluid equations are solved with a high-resolution upwind numerical scheme in a tightly coupled manner. The code is very robust and it is capable of spanning a wide range of plasma parameters (such as β, acoustic and Alfvénic Mach numbers). Our code is highly scalable: it achieved a sustained performance of 233 GFLOPS on a Cray T3E-1200 supercomputer with 1024 PEs. This talk reports results from the BATS-R-US code for the GGCM (Geospace General Circulation Model) Phase 1 Standard Model Suite. This model suite contains 10 different steady-state configurations: 5 IMF clock angles (north, south, and three equally spaced angles in between) with 2 IMF field strengths for each angle (5 nT and 10 nT). The other parameters are: solar wind speed = 400 km/sec; solar wind number density = 5 protons/cc; Hall conductance = 0; Pedersen conductance = 5 S; parallel conductivity = ∞.

  4. Multi-threaded parallel simulation of non-local non-linear problems in ultrashort laser pulse propagation in the presence of plasma

    NASA Astrophysics Data System (ADS)

    Baregheh, Mandana; Mezentsev, Vladimir; Schmitz, Holger

    2011-06-01

    We describe a parallel multi-threaded approach for high performance modelling of wide class of phenomena in ultrafast nonlinear optics. Specific implementation has been performed using the highly parallel capabilities of a programmable graphics processor.

  5. Multivariate meta-analysis of individual participant data helped externally validate the performance and implementation of a prediction model.

    PubMed

    Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D

    2016-01-01

    Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
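The "probability of good performance in a new population" can be sketched by Monte Carlo sampling from a bivariate normal predictive distribution of (C statistic, calibration slope), with the paper's criterion C ≥ 0.7 and slope in [0.9, 1.1]. All numeric inputs below are illustrative, not values from the paper:

```python
import random

def prob_good_performance(mean_c, mean_slope, sd_c, sd_slope, rho,
                          n=100_000, seed=1):
    """Monte Carlo P(C >= 0.7 and 0.9 <= slope <= 1.1) under a bivariate
    normal predictive distribution for a new population."""
    rng = random.Random(seed)
    good = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        c = mean_c + sd_c * z1
        # Correlated second draw via the 2x2 Cholesky factor.
        slope = mean_slope + sd_slope * (rho * z1 + (1 - rho**2) ** 0.5 * z2)
        if c >= 0.7 and 0.9 <= slope <= 1.1:
            good += 1
    return good / n

# Hypothetical summary estimates: average C = 0.75, average slope = 1.0,
# modest between-population heterogeneity, weak positive correlation.
p = prob_good_performance(0.75, 1.0, 0.03, 0.05, 0.2)
```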

  6. Modeling of Spark Gap Performance

    DTIC Science & Technology

    1983-06-01

MODELING OF SPARK GAP PERFORMANCE* A. L. Donaldson, R. Ness, M. Hagler, M. Kristiansen, Department of Electrical Engineering, and L. L. Hatfield...gas pressure, and charging rate on the voltage stability of high energy spark gaps is discussed. Implications of the model include changes in...an extremely useful, and physically reasonable, framework from which the properties of spark gaps under a wide variety of experimental conditions

  7. Wind tunnel investigation of a high lift system with pneumatic flow control

    NASA Astrophysics Data System (ADS)

    Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu

    2016-06-01

Next generation passenger aircraft require more efficient high lift systems under size and mass constraints, to achieve better fuel efficiency. This can be approached in various ways: improving or maintaining aerodynamic performance while simplifying the mechanical design of the high lift system by going to a single slotted flap, maintaining complexity while improving the aerodynamics even further, etc. Laminar wings have less efficient leading edge high lift systems, if any, requiring more performance from the trailing edge flap. Pulsed blowing active flow control (AFC) in the gap of a single element flap is investigated for a relatively large model. The wind tunnel model, test campaign, results, and conclusions are presented.

  8. Akuna: An Open Source User Environment for Managing Subsurface Simulation Workflows

    NASA Astrophysics Data System (ADS)

    Freedman, V. L.; Agarwal, D.; Bensema, K.; Finsterle, S.; Gable, C. W.; Keating, E. H.; Krishnan, H.; Lansing, C.; Moeglein, W.; Pau, G. S. H.; Porter, E.; Scheibe, T. D.

    2014-12-01

The U.S. Department of Energy (DOE) is investing in development of a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. ASCEM is an open source and modular computing framework that incorporates new advances and tools for predicting contaminant fate and transport in natural and engineered systems. The ASCEM toolset includes both a Platform with Integrated Toolsets (called Akuna) and a High-Performance Computing multi-process simulator (called Amanzi). The focus of this presentation is on Akuna, an open-source user environment that manages subsurface simulation workflows and associated data and metadata. In this presentation, key elements of Akuna are demonstrated, which include toolsets for model setup, database management, sensitivity analysis, parameter estimation, uncertainty quantification, and visualization of both model setup and simulation results. A key component of the workflow is the automated job launching and monitoring capability, which allows a user to submit and monitor simulation runs on high-performance, parallel computers. Visualization of large outputs can also be performed without moving data back to local resources. These capabilities make high-performance computing accessible to users who might not be familiar with batch queue systems and usage protocols on different supercomputers and clusters.

  9. An Engineering Model of Human Balance Control-Part I: Biomechanical Model.

    PubMed

    Barton, Joseph E; Roy, Anindo; Sorkin, John D; Rogers, Mark W; Macko, Richard

    2016-01-01

    We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task-with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS), an older high fall risk (HFR) subject before participating in a balance training intervention; and in the older subject's performance after training (which improved to the point that his performance approached that of the young subject). This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system; and system adaptations to changes in these elements' performance and capabilities.
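The spectral analysis of each joint's force activations can be illustrated with a direct DFT on a synthetic sum-of-sines signal; the signal, sampling rate, and amplitudes below are invented for illustration, not the study's data:

```python
import math

def dft_magnitudes(signal):
    """Magnitude spectrum of a real signal via a direct DFT
    (O(n^2), fine for short records; illustrative only)."""
    n = len(signal)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

# Synthetic "joint torque": two sines, mimicking a sum-of-sines target.
n, fs = 256, 64.0   # samples, sampling rate in Hz
t = [i / fs for i in range(n)]
torque = [1.0 * math.sin(2 * math.pi * 2.0 * ti)
          + 0.5 * math.sin(2 * math.pi * 7.0 * ti) for ti in t]
mags = dft_magnitudes(torque)
# Frequency resolution is fs/n = 0.25 Hz, so 2 Hz lands in bin 8 and 7 Hz in bin 28.
```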

  10. An Engineering Model of Human Balance Control—Part I: Biomechanical Model

    PubMed Central

    Barton, Joseph E.; Roy, Anindo; Sorkin, John D.; Rogers, Mark W.; Macko, Richard

    2016-01-01

    We developed a balance measurement tool (the balanced reach test (BRT)) to assess standing balance while reaching and pointing to a target moving in three-dimensional space according to a sum-of-sines function. We also developed a three-dimensional, 13-segment biomechanical model to analyze performance in this task. Using kinematic and ground reaction force (GRF) data from the BRT, we performed an inverse dynamics analysis to compute the forces and torques applied at each of the joints during the course of a 90 s test. We also performed spectral analyses of each joint's force activations. We found that the joints act in a different but highly coordinated manner to accomplish the tracking task—with individual joints responding congruently to different portions of the target disk's frequency spectrum. The test and the model also identified clear differences between a young healthy subject (YHS), an older high fall risk (HFR) subject before participating in a balance training intervention; and in the older subject's performance after training (which improved to the point that his performance approached that of the young subject). This is the first phase of an effort to model the balance control system with sufficient physiological detail and complexity to accurately simulate the multisegmental control of balance during functional reach across the spectra of aging, medical, and neurological conditions that affect performance. Such a model would provide insight into the function and interaction of the biomechanical and neurophysiological elements making up this system; and system adaptations to changes in these elements' performance and capabilities. PMID:26328608

  11. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
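For marked-graph models of periodic data-flow execution, one standard performance bound is the maximum, over directed circuits, of total compute time divided by the tokens in the circuit. A sketch under the assumption that circuit enumeration is done elsewhere (the graph and numbers are invented, not from the ATAMM testbed):

```python
from fractions import Fraction

def min_period(cycles):
    """Lower bound on the steady-state period of a marked graph:
    max over directed circuits of (total node compute time) / (tokens).
    `cycles` is a list of (total_time, tokens) pairs."""
    return max(Fraction(t, m) for t, m in cycles)

# Toy algorithm graph with two circuits, e.g. from a control-law data flow graph.
cycles = [(30, 2),   # 30 time units of work, 2 tokens -> 15 per token
          (24, 1)]   # 24 time units, 1 token -> 24 (the binding circuit)
print(min_period(cycles))  # Fraction(24, 1)
```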

  12. Structural-Thermal-Optical-Performance (STOP) Model Development and Analysis of a Field-widened Michelson Interferometer

    NASA Technical Reports Server (NTRS)

    Scola, Salvatore J.; Osmundsen, James F.; Murchison, Luke S.; Davis, Warren T.; Fody, Joshua M.; Boyer, Charles M.; Cook, Anthony L.; Hostetler, Chris A.; Seaman, Shane T.; Miller, Ian J.; hide

    2014-01-01

An integrated Structural-Thermal-Optical-Performance (STOP) model was developed for a field-widened Michelson interferometer which is being built and tested for the High Spectral Resolution Lidar (HSRL) project at NASA Langley Research Center (LaRC). The performance of the interferometer is highly sensitive to thermal expansion, changes in refractive index with temperature, temperature gradients, and deformation due to mounting stresses. Hand calculations can only predict system performance for uniform temperature changes, under the assumption that coefficient of thermal expansion (CTE) mismatch effects are negligible. An integrated STOP model was developed to investigate the effects of design modifications on the performance of the interferometer in detail, including CTE mismatch, and other three-dimensional effects. The model will be used to improve the design for a future spaceflight version of the interferometer. The STOP model was developed using the Comet SimApp™ Authoring Workspace which performs automated integration between Pro-Engineer®, Thermal Desktop®, MSC Nastran™, SigFit™, Code V™, and MATLAB®. This is the first flight project for which LaRC has utilized Comet, and it allows a larger trade space to be studied in a shorter time than would be possible in a traditional STOP analysis. This paper describes the development of the STOP model, presents a comparison of STOP results for simple cases with hand calculations, and presents results of the correlation effort to bench-top testing of the interferometer. A trade study conducted with the STOP model which demonstrates a few simple design changes that can improve the performance seen in the lab is also presented.

  13. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
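The component-wise (stage-wise) boosting idea can be sketched for a linear model: at each step, update only the single coefficient that most reduces the loss, by a small fraction of its fitted value. Squared-error loss stands in here for the likelihood-based criterion, and the data are synthetic:

```python
def componentwise_boost(X, y, steps=100, nu=0.1):
    """Component-wise boosting sketch: at each step, pick the one covariate
    whose univariate least-squares update most reduces the residual sum of
    squares, and move its coefficient by a fraction nu of that update."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(steps):
        resid = [y[i] - sum(beta[j] * X[i][j] for j in range(p)) for i in range(n)]
        best_j, best_b, best_drop = 0, 0.0, -1.0
        for j in range(p):
            xj = [X[i][j] for i in range(n)]
            sxx = sum(v * v for v in xj)
            b = sum(xj[i] * resid[i] for i in range(n)) / sxx
            drop = b * b * sxx  # reduction in residual sum of squares
            if drop > best_drop:
                best_j, best_b, best_drop = j, b, drop
        beta[best_j] += nu * best_b
    return beta

# Toy data: only covariate 0 matters (y = 2 * x0, noise-free).
X = [[1.0, 0.3], [2.0, -0.5], [3.0, 0.1], [4.0, 0.8], [5.0, -0.2]]
y = [2.0, 4.0, 6.0, 8.0, 10.0]
beta = componentwise_boost(X, y, steps=200)
```

Because updates are component-wise and shrunken, irrelevant covariates can stay exactly at zero, which is the implicit variable selection the boosting approach provides.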

  14. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
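The Bayesian updating step itself can be illustrated with a one-parameter grid approximation: posterior ∝ prior × likelihood, renormalised over the grid. This is a sketch of the updating mechanism only, not of the paper's Bayesian network or its adaptive segment selection; all numbers are invented:

```python
import math

def bayes_update_grid(prior, likelihood):
    """Grid-approximation Bayesian update: posterior ∝ prior × likelihood,
    renormalised over the parameter grid."""
    post = [p * l for p, l in zip(prior, likelihood)]
    total = sum(post)
    return [v / total for v in post]

# Parameter grid for a hypothetical model bias term; one observation
# y = 1.2 with Gaussian measurement noise of standard deviation 0.5.
grid = [i * 0.1 for i in range(-30, 31)]          # -3.0 .. 3.0
prior = [1.0 / len(grid)] * len(grid)             # flat prior
lik = [math.exp(-0.5 * ((1.2 - th) / 0.5) ** 2) for th in grid]
posterior = bayes_update_grid(prior, lik)
mean = sum(th * p for th, p in zip(grid, posterior))
```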

  15. Predicting stillbirth in a low resource setting.

    PubMed

    Kayode, Gbenga A; Grobbee, Diederick E; Amoakoh-Coleman, Mary; Adeleke, Ibrahim Taiwo; Ansah, Evelyn; de Groot, Joris A H; Klipstein-Grobusch, Kerstin

    2016-09-20

    Stillbirth is a major contributor to perinatal mortality and it is particularly common in low- and middle-income countries, where annually about three million stillbirths occur in the third trimester. This study aims to develop a prediction model for early detection of pregnancies at high risk of stillbirth. This retrospective cohort study examined 6,573 pregnant women who delivered at Federal Medical Centre Bida, a tertiary level of healthcare in Nigeria from January 2010 to December 2013. Descriptive statistics were performed and missing data imputed. Multivariable logistic regression was applied to examine the associations between selected candidate predictors and stillbirth. Discrimination and calibration were used to assess the model's performance. The prediction model was validated internally and over-optimism was corrected. We developed a prediction model for stillbirth that comprised maternal comorbidity, place of residence, maternal occupation, parity, bleeding in pregnancy, and fetal presentation. As a secondary analysis, we extended the model by including fetal growth rate as a predictor, to examine how beneficial ultrasound parameters would be for the predictive performance of the model. After internal validation, both calibration and discriminative performance of both the basic and extended model were excellent (i.e. C-statistic basic model = 0.80 (95 % CI 0.78-0.83) and extended model = 0.82 (95 % CI 0.80-0.83)). We developed a simple but informative prediction model for early detection of pregnancies with a high risk of stillbirth for early intervention in a low resource setting. Future research should focus on external validation of the performance of this promising model.
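The discrimination measure reported above, the C statistic, is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case (equivalently, the ROC AUC). A minimal sketch on made-up predictions, not the study's data:

```python
def c_statistic(risks, outcomes):
    """Concordance (C) statistic for binary outcomes: fraction of
    case/non-case pairs in which the case has the higher predicted
    risk; ties count one half."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = sum(1.0 if c > d else 0.5 if c == d else 0.0
                     for c in cases for d in controls)
    return concordant / (len(cases) * len(controls))

# Toy predictions: mostly higher risks for stillbirth cases (outcome 1).
risks = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1]
outcomes = [1, 1, 1, 0, 0, 0]
print(c_statistic(risks, outcomes))
```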

  16. HRST architecture modeling and assessments

    NASA Astrophysics Data System (ADS)

    Comstock, Douglas A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented.

  17. Comparison Between Surf and Multi-Shock Forest Fire High Explosive Burn Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenfield, Nicholas Alexander

PAGOSA has several different burn models used to model high explosive detonation. Two of these, Multi-Shock Forest Fire and Surf, are capable of modeling shock initiation. Accurately calculating shock initiation of a high explosive is important because it is a mechanism for detonation in many accident scenarios (i.e. fragment impact). Comparing the models to pop-plot data gives confidence that the models are accurately calculating detonation or lack thereof. To compare the performance of these models, pop-plots were created from simulations in which one two-cm block of PBX 9502 collides with another block of PBX 9502.
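Pop-plot data are conventionally compared on log-log axes, where run distance to detonation versus input pressure follows a straight line. A least-squares fit of that line can be sketched as follows (the data points are synthetic, generated from an assumed power law, not PBX 9502 measurements):

```python
import math

def fit_pop_plot(pressures_gpa, run_distances_mm):
    """Least-squares fit of log10(run distance) = a + b * log10(pressure),
    the straight-line form in which pop-plot data are usually compared."""
    xs = [math.log10(p) for p in pressures_gpa]
    ys = [math.log10(d) for d in run_distances_mm]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Synthetic points generated from run = 100 * P**-2 (so a = 2, b = -2).
pressures = [5.0, 8.0, 11.0, 14.0]
runs = [100.0 * p ** -2 for p in pressures]
a, b = fit_pop_plot(pressures, runs)
```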

  18. Predictive modelling of JT-60SA high-beta steady-state plasma with impurity accumulation

    NASA Astrophysics Data System (ADS)

    Hayashi, N.; Hoshino, K.; Honda, M.; Ide, S.

    2018-06-01

The integrated modelling code TOPICS has been extended to include core impurity transport, and applied to predictive modelling of JT-60SA high-beta steady-state plasma with the accumulation of impurity seeded to reduce the divertor heat load. In the modelling, models and conditions are selected for a conservative prediction, which considers a lower bound of plasma performance with the maximum accumulation of impurity. The conservative prediction shows the compatibility of impurity seeding with core plasma with high-beta (βN > 3.5) and full current drive conditions, i.e. when Ar seeding reduces the divertor heat load below 10 MW m⁻², its accumulation in the core is so moderate that the core plasma performance can be recovered by additional heating within the machine capability to compensate for Ar radiation. Due to the strong dependence of accumulation on the pedestal density gradient, high separatrix density is important for the low accumulation as well as the low divertor heat load. The conservative prediction also shows that JT-60SA has enough capability to explore the divertor heat load control by impurity seeding in high-beta steady-state plasmas.

  19. Generating daily high spatial land surface temperatures by combining ASTER and MODIS land surface temperature products for environmental process monitoring.

    PubMed

    Wu, Mingquan; Li, Hua; Huang, Wenjiang; Niu, Zheng; Wang, Changyao

    2015-08-01

There is a shortage of daily high spatial resolution land surface temperature (LST) data for use in high spatial and temporal resolution environmental process monitoring. To address this shortage, this work used the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), and the Spatial and Temporal Data Fusion Approach (STDFA) to estimate high spatial and temporal resolution LST by combining Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) LST and Moderate Resolution Imaging Spectroradiometer (MODIS) LST products. The actual ASTER LST products were used to evaluate the precision of the combined LST images using the correlation analysis method. This method was tested and validated in study areas located in Gansu Province, China. The results show that all the models can generate daily synthetic LST images with a high correlation coefficient (r) of 0.92 between the synthetic images and the actual ASTER LST observations. The ESTARFM had the best performance, followed by the STDFA and the STARFM. The models performed better in desert areas than in cropland. The STDFA had better noise immunity than the other two models.
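The core fusion idea can be sketched in its most stripped-down form: add the coarse-resolution temporal change to the fine-resolution base image, then validate the prediction with a correlation coefficient. The real STARFM additionally weights neighbouring pixels by spectral, temporal, and spatial similarity; the 1-D "images" below are invented:

```python
def fuse_lst(fine_t0, coarse_t0, coarse_t1):
    """Minimal STARFM-like prediction: fine(t1) ≈ fine(t0) + [coarse(t1)
    - coarse(t0)], pixel by pixel (no similarity weighting)."""
    return [f + (c1 - c0) for f, c0, c1 in zip(fine_t0, coarse_t0, coarse_t1)]

def pearson_r(a, b):
    """Correlation coefficient used to validate synthetic vs actual LST."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Toy 1-D "images" (LST in kelvin): uniform +2 K warming at coarse scale.
fine_t0 = [300.0, 302.5, 305.0, 301.0]
coarse_t0 = [301.0, 301.0, 304.0, 304.0]
coarse_t1 = [303.0, 303.0, 306.0, 306.0]
pred = fuse_lst(fine_t0, coarse_t0, coarse_t1)
actual = [302.0, 304.5, 307.0, 303.0]   # fine_t0 + 2 K
r = pearson_r(pred, actual)
```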

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, D.R.; Hutchinson, J.L.

Eagle II is a prototype analytic model derived from the integration of the low resolution Eagle model with the high resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high and low fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the world-wide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness and low overall cost.

  1. Network Performance Evaluation Model for assessing the impacts of high-occupancy vehicle facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janson, B.N.; Zozaya-Gorostiza, C.; Southworth, F.

    1986-09-01

A model to assess the impacts of major high-occupancy vehicle (HOV) facilities on regional levels of energy consumption and vehicle air pollution emissions in urban areas is developed and applied. This model can be used to forecast and compare the impacts of alternative HOV facility design and operation plans on traffic patterns, travel costs, mode choice, travel demand, energy consumption and vehicle emissions. The model is designed to show differences in the overall impacts of alternative HOV facility types, locations and operation plans rather than to serve as a tool for detailed engineering design and traffic planning studies. The Network Performance Evaluation Model (NETPEM) combines several urban transportation planning models within a multi-modal network equilibrium framework, including modules with which to define the type, location and use policy of the HOV facility to be tested, and to assess the impacts of this facility.
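One of the standard component models inside such an equilibrium framework is a multinomial logit mode-choice model, in which each mode's share falls off exponentially with its generalised cost. A minimal sketch (the costs and scale parameter are made up, and this is not NETPEM's specific formulation):

```python
import math

def logit_shares(costs, theta=1.0):
    """Multinomial logit mode-choice sketch: probability of choosing mode i
    is exp(-theta * cost_i) normalised over all modes."""
    utils = [math.exp(-theta * c) for c in costs]
    total = sum(utils)
    return [u / total for u in utils]

# Hypothetical generalised costs: drive-alone, carpool (HOV lane saves
# time, so lower cost), transit.
shares = logit_shares([2.0, 1.5, 2.5])
```

Lowering the carpool cost (e.g. by adding an HOV lane) shifts share toward carpooling, which is the mechanism through which facility plans change regional energy and emissions totals.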

  2. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
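The baseline the STRE model is compared against, Tikhonov zero-order regularisation, solves min ||Ax − b||² + λ||x||² via the normal equations (AᵀA + λI)x = Aᵀb. A self-contained two-unknown sketch with an explicit 2×2 inverse (the system and λ values are toy numbers):

```python
def tikhonov_2d(A, b, lam):
    """Tikhonov zero-order regularisation for a 2-unknown least-squares
    problem: solve (A^T A + lam*I) x = A^T b with an explicit 2x2 inverse."""
    # Normal-equation matrix M = A^T A + lam*I and right-hand side v = A^T b.
    m00 = sum(row[0] * row[0] for row in A) + lam
    m01 = sum(row[0] * row[1] for row in A)
    m11 = sum(row[1] * row[1] for row in A) + lam
    v0 = sum(row[0] * bi for row, bi in zip(A, b))
    v1 = sum(row[1] * bi for row, bi in zip(A, b))
    det = m00 * m11 - m01 * m01
    return [(m11 * v0 - m01 * v1) / det, (m00 * v1 - m01 * v0) / det]

# Overdetermined toy system with exact solution x = [1, 2] when lam = 0.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x0 = tikhonov_2d(A, b, 0.0)    # ordinary least squares
x1 = tikhonov_2d(A, b, 10.0)   # shrunk toward zero
```

Increasing λ trades fidelity to the data for a smaller-norm (more stable) solution, which is what makes such regularisation essential for ill-posed inverse problems like inverse ECG.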

  3. PyMCT: A Very High Level Language Coupling Tool For Climate System Models

    NASA Astrophysics Data System (ADS)

    Tobis, M.; Pierrehumbert, R. T.; Steder, M.; Jacob, R. L.

    2006-12-01

At the Climate Systems Center of the University of Chicago, we have been examining strategies for applying agile programming techniques to complex high-performance modeling experiments. While the "agile" development methodology differs from a conventional requirements process and its associated milestones, the process remains a formal one. It is distinguished by continuous improvement in functionality, large numbers of small releases, extensive and ongoing testing strategies, and a strong reliance on very high level languages (VHLLs). Here we report on PyMCT, which we intend as a core element in a model ensemble control superstructure. PyMCT is a set of Python bindings for MCT, the Fortran-90 based Model Coupling Toolkit, which forms the infrastructure for the inter-component communication in the Community Climate System Model (CCSM). MCT provides a scalable model communication infrastructure. In order to take maximum advantage of agile software development methodologies, we exposed MCT functionality to Python, a prominent VHLL. We describe how the scalable architecture of MCT allows us to overcome the relatively weak runtime performance of Python, so that the performance of the combined system is not severely impacted. To demonstrate these advantages, we reimplemented the CCSM coupler in Python. While this alone offers no new functionality, it does provide a rigorous test of PyMCT functionality and performance. We reimplemented the CPL6 library, presenting an interesting case study of the comparison between conventional Fortran-90 programming and the higher abstraction level provided by a VHLL. The powerful abstractions provided by Python will allow much more complex experimental paradigms. In particular, we hope to build on the scriptability of our coupling strategy to enable systematic sensitivity tests. Our most ambitious objective is to combine our efforts with Bayesian inverse modeling techniques toward objective tuning at the highest level, across model architectures.
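The coupling pattern such a toolkit exposes can be caricatured in a few lines: components put named fields into a central coupler and get fields exported by their peers. This is a stand-in for illustration only; the real MCT/PyMCT API handles distributed domain decompositions, sparse-matrix regridding, and MPI communication, none of which appear here:

```python
class ToyCoupler:
    """Minimal stand-in for a coupler: named fields are exported by one
    component and imported by another through a central registry."""
    def __init__(self):
        self._fields = {}

    def put(self, name, values):
        # Store a defensive copy so components cannot alias each other's state.
        self._fields[name] = list(values)

    def get(self, name):
        return list(self._fields[name])

# "Atmosphere" exports surface temperature; "ocean" imports it as forcing.
cpl = ToyCoupler()
cpl.put("surface_temp", [288.0, 290.5, 285.2])
sst_forcing = cpl.get("surface_temp")
```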

  4. Ice-sheet modelling accelerated by graphics cards

    NASA Astrophysics Data System (ADS)

    Brædstrup, Christian Fredborg; Damsgaard, Anders; Egholm, David Lundbek

    2014-11-01

    Studies of glaciers and ice sheets have increased the demand for high performance numerical ice flow models over the past decades. When exploring the highly non-linear dynamics of fast flowing glaciers and ice streams, or when coupling multiple flow processes for ice, water, and sediment, researchers are often forced to use super-computing clusters. As an alternative to conventional high-performance computing hardware, the Graphical Processing Unit (GPU) is capable of massively parallel computing while retaining a compact design and low cost. In this study, we present a strategy for accelerating a higher-order ice flow model using a GPU. By applying the newest GPU hardware, we achieve up to 180× speedup compared to a similar but serial CPU implementation. Our results suggest that GPU acceleration is a competitive option for ice-flow modelling when compared to CPU-optimised algorithms parallelised by the OpenMP or Message Passing Interface (MPI) protocols.

  5. A semi-empirical model for the formation and depletion of the high burnup structure in UO₂

    DOE PAGES

    Pizzocri, D.; Cappia, F.; Luzzi, L.; ...

    2017-01-31

In the rim zone of UO₂ nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. To this end, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Moreover, based on these new experimental data, we assume an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
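The assumed exponential reduction of average grain size with local effective burnup above a threshold can be sketched directly; every parameter value below (initial and asymptotic grain size, threshold burnup, rate constant) is illustrative, not the paper's fitted values:

```python
import math

def hbs_grain_size(burnup, d0=10.0, d_inf=0.2, b0=60.0, k=0.05):
    """Average grain size (micrometres) vs local effective burnup (GWd/tHM):
    constant d0 below a threshold b0, then exponential decay toward the
    fully restructured size d_inf. Illustrative parameters only."""
    if burnup <= b0:
        return d0
    return d_inf + (d0 - d_inf) * math.exp(-k * (burnup - b0))

sizes = [hbs_grain_size(b) for b in (50, 60, 80, 120, 200)]
```

An analogous first-order depletion law could describe the intra-granular fission gas inventory, making the two processes share the same burnup dependence, as the model assumes they are inherently related.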

  6. High-latitude filtering in a global grid-point model using model normal modes. [Fourier filters for synoptic weather forecasting]

    NASA Technical Reports Server (NTRS)

    Takacs, L. L.; Kalnay, E.; Navon, I. M.

    1985-01-01

    A normal modes expansion technique is applied to perform high latitude filtering in the GLAS fourth order global shallow water model with orography. The maximum permissible time step in the solution code is controlled by the frequency of the fastest propagating mode, which can be a gravity wave. Numerical methods are defined for filtering the data to identify the number of gravity modes to be included in the computations in order to obtain the appropriate zonal wavenumbers. The performances of the model with and without the filter, and with a time tendency and a prognostic field filter are tested with simulations of the Northern Hemisphere winter. The normal modes expansion technique is shown to leave the Rossby modes intact and permit 3-5 day predictions, a range not possible with the other high-latitude filters.
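
    For comparison, the conventional high-latitude Fourier filter that the normal-mode approach improves upon can be sketched as follows; the grid size, 60-degree cutoff and retained wavenumber count are illustrative choices, not the GLAS model's settings:

    ```python
    import numpy as np

    def zonal_filter(field, lat_deg, n_keep=8):
        """Damp high zonal wavenumbers at high latitudes.

        'field' is (nlat, nlon); poleward of 60 degrees, all zonal
        wavenumbers above n_keep are zeroed via an FFT along the
        longitude axis. This is the classical Fourier filter, in
        schematic form, that normal-mode filtering is compared with.
        """
        out = field.copy()
        for j, lat in enumerate(lat_deg):
            if abs(lat) > 60.0:
                c = np.fft.rfft(out[j])
                c[n_keep + 1:] = 0.0
                out[j] = np.fft.irfft(c, n=out.shape[1])
        return out

    # demo: wavenumber-20 signal at 80N is removed, wavenumber-2 at 30N kept
    x = np.arange(64)
    field = np.stack([np.sin(2 * 2 * np.pi * x / 64),
                      np.sin(20 * 2 * np.pi * x / 64)])
    filtered = zonal_filter(field, [30.0, 80.0])
    ```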

  7. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing.

    PubMed

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2014-10-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA's CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels.
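
    The occupancy-performance connection for a compute-bound kernel can be illustrated with a toy model in which effective throughput scales with the fraction of resident warps, saturating at the device peak. This is a schematic stand-in, not the calibrated model of the paper, and the peak-throughput figure is an arbitrary placeholder:

    ```python
    def predict_runtime(work_flops, occupancy, peak_flops=1e12):
        """Toy performance model for a compute-bound kernel.

        Effective throughput is assumed proportional to occupancy
        (fraction of resident warps), capped at the device peak.
        Illustrative only; a real model would be fitted to profiled
        kernels as in the paper.
        """
        occupancy = min(max(occupancy, 1e-6), 1.0)
        return work_flops / (occupancy * peak_flops)
    ```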

  8. Reduced-order modeling of the flow around a high-lift configuration with unsteady Coanda blowing

    NASA Astrophysics Data System (ADS)

    Semaan, Richard; Cordier, Laurent; Noack, Bernd; Kumar, Pradeep; Burnazzi, Marco; Tissot, Gilles

    2015-11-01

    We propose a low-dimensional POD model for the transient and post-transient flow around a high-lift airfoil with unsteady Coanda blowing over the trailing edge. This model comprises the effect of high-frequency modulated blowing which mitigates vortex shedding and increases lift. The structure of the dynamical system is derived from the Navier-Stokes equations with a Galerkin projection and from subsequent dynamic simplifications. The system parameters are determined with a data assimilation (4D-Var) method. The boundary actuation is incorporated into the model with actuation modes following Graham et al. (1999); Kasnakoğlu et al. (2008). As a novel enabler, we show that the performance of the POD model significantly benefits from employing additional actuation modes for different frequency components associated with the same actuation input. In addition, linear, weakly nonlinear and fully nonlinear models are considered. The current study suggests that separate actuation modes for different actuation frequencies improve Galerkin model performance, in particular with respect to the important base-flow changes. We acknowledge (1) the Collaborative Research Centre (CRC 880) ``Fundamentals of High Lift of Future Civil Aircraft,'' and (2) the Senior Chair of Excellence ``Closed-loop control of turbulent shear flows using reduced-order models'' (TUCOROM).

  9. Impact of organisational characteristics on turnover intention among care workers in nursing homes in Korea: a structural equation model.

    PubMed

    Ha, Jong Goon; Man Kim, Ji; Hwang, Won Ju; Lee, Sang Gyu

    2014-09-01

    The aim of the present study was to analyse the impact of organisational characteristics on the turnover intention of care workers working at nursing homes in Korea. Study participants included 504 care workers at 14 nursing homes in Korea. The variables measured were: high-performance work practices, consisting of five subfactors (official training, employment stability, autonomy, employee participation and group-based payment); organisational commitment, consisting of three subfactors (affective, normative and continuance commitment); organisational support; and turnover intention. The inter-relationships between high-performance work practices, organisational support, organisational commitment and turnover intention, and the fit of the hypothetical model, were analysed using structural equation modelling. According to our analysis, high-performance work practices had not only a direct effect on turnover intention but also an indirect effect mediated by organisational support and commitment. The factor with the largest direct influence on turnover intention was organisational commitment. The results suggest that long-term care facilities should reduce the turnover intention of care workers by increasing their organisational commitment through active implementation of high-performance work practices; this would improve health conditions for frail elderly patients and the efficiency of nursing homes through continuity and quality of nursing service.
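
    The direct-plus-indirect decomposition estimated by the structural model can be illustrated with the product-of-coefficients rule on synthetic data. The variable names and effect sizes below are invented for illustration; only the sample size (504) is taken from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 504                                  # sample size from the study
    hpwp = rng.normal(size=n)                # high-performance work practices
    commit = 0.6 * hpwp + rng.normal(scale=0.5, size=n)            # mediator
    turnover = -0.5 * commit - 0.2 * hpwp + rng.normal(scale=0.5, size=n)

    def slope(x, y):
        """OLS slope of y on x (with intercept)."""
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    a = slope(hpwp, commit)                  # practices -> commitment
    # b: commitment -> turnover, controlling for practices
    X = np.column_stack([np.ones(n), commit, hpwp])
    b = np.linalg.lstsq(X, turnover, rcond=None)[0][1]
    indirect = a * b                         # product-of-coefficients estimate
    ```

    A full SEM additionally estimates measurement models and fit indices; the product `a * b` is only the simplest estimator of the mediated path.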

  10. Multi-Objective Aerodynamic Optimization of the Streamlined Shape of High-Speed Trains Based on the Kriging Model.

    PubMed

    Xu, Gang; Liang, Xifeng; Yao, Shuanbao; Chen, Dawei; Li, Zhiwei

    2017-01-01

    Minimizing the aerodynamic drag and the lift of the train coach remains a key issue for high-speed trains. With the development of computing technology and computational fluid dynamics (CFD) in the engineering field, CFD has been successfully applied to the design process of high-speed trains. However, developing a new streamlined shape for high-speed trains with excellent aerodynamic performance requires huge computational costs. Furthermore, relationships between multiple design variables and the aerodynamic loads are seldom obtained. In the present study, the Kriging surrogate model is used to perform a multi-objective optimization of the streamlined shape of high-speed trains, where the drag and the lift of the train coach are the optimization objectives. To improve the prediction accuracy of the Kriging model, the cross-validation method is used to construct the optimal Kriging model. The optimization results show that the two objectives are efficiently optimized, indicating that the optimization strategy used in the present study can greatly improve the optimization efficiency and meet the engineering requirements.
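
    A Kriging surrogate of the kind used here is, in its simplest form, Gaussian-process interpolation with an RBF correlation model. A minimal NumPy sketch, with one design variable and an illustrative test function standing in for CFD drag data:

    ```python
    import numpy as np

    def rbf(A, B, ell=1.0):
        """Squared-exponential correlation between point sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)

    def kriging_fit(X, y, ell=1.0, noise=1e-6):
        """Precompute weights for a simple-Kriging (GP mean) predictor."""
        K = rbf(X, X, ell) + noise * np.eye(len(X))
        return np.linalg.solve(K, y)

    def kriging_predict(Xs, X, w, ell=1.0):
        """Predict at new points Xs from training inputs X and weights w."""
        return rbf(Xs, X, ell) @ w

    # toy 1-D surrogate: fit a sine and interpolate between samples
    X = np.linspace(0.0, 3.0, 10)[:, None]
    y = np.sin(X[:, 0])
    w = kriging_fit(X, y)
    yhat = kriging_predict(np.array([[1.5]]), X, w)[0]
    ```

    The cross-validation step mentioned above would refit with each sample held out in turn to tune `ell`.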

  11. Using High-Dimensional Image Models to Perform Highly Undetectable Steganography

    NASA Astrophysics Data System (ADS)

    Pevný, Tomáš; Filler, Tomáš; Bas, Patrick

    This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models can be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
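
    The distortion-minimization principle can be sketched with additive per-pixel costs: the embedder changes the pixels whose total cost is smallest for the required payload. The cost values and the greedy selection below are toy placeholders, not HUGO's feature-based weights or its actual syndrome-coding algorithm:

    ```python
    def embedding_distortion(cover, stego, cost):
        """Total distortion of an embedding: sum of per-pixel costs
        over changed pixels. HUGO-style design chooses the changes
        that minimize this sum for a required payload; the costs here
        are placeholders, not HUGO's feature-space weights."""
        return sum(c for x, y, c in zip(cover, stego, cost) if x != y)

    def greedy_embed(cover, cost, n_changes):
        """Toy embedder: modify (+1) the n_changes cheapest pixels."""
        order = sorted(range(len(cover)), key=lambda i: cost[i])
        stego = list(cover)
        for i in order[:n_changes]:
            stego[i] += 1
        return stego
    ```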

  12. Performance and Reliability of Bonded Interfaces for High-Temperature Packaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVoto, Douglas

    2016-06-08

    This is a technical review of the DOE VTO EDT project EDT063, Performance and Reliability of Bonded Interfaces for High-Temperature Packaging. A procedure for analyzing the reliability of sintered-silver through experimental thermal cycling and crack propagation modeling has been outlined and results have been presented.

  13. High Performance Work Organizations. Myths and Realities.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    Organizations are being urged to become "high performance work organizations" (HPWOs) and vocational teachers have begun considering how best to prepare workers for them. Little consensus exists as to what HPWOs are. Several common characteristics of HPWOs have been identified, and two distinct models of HPWOs are emerging in the United…

  14. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

    In optical flow for motion allocation, the reliability of the resulting motion vectors (MVs) is an important issue, and noisy conditions can make the output of optical flow algorithms unreliable. We find that many classical optical flow algorithms produce better results under noisy conditions when combined with a modern optimization model. This paper introduces robust optical flow models that apply an adaptive Lorentzian-norm influence function to simple spatial-temporal optical flow algorithms. Experiments on the proposed models confirm better noise tolerance in the optical flow MVs under noisy conditions when the models are applied over simple spatial-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of our models on several typical sequences with different foreground and background movement speeds, where the test sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The results, measured by peak signal-to-noise ratio (PSNR), show the high noise tolerance of the proposed models.
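
    The Lorentzian norm, its influence function, and the PSNR metric used for evaluation have standard closed forms that can be sketched directly (σ is the scale parameter; all values below are illustrative):

    ```python
    import math

    def lorentzian_rho(x, sigma=1.0):
        """Lorentzian robust penalty: grows only logarithmically,
        so large residuals (outliers) are downweighted."""
        return math.log(1.0 + 0.5 * (x / sigma) ** 2)

    def lorentzian_psi(x, sigma=1.0):
        """Influence function (d rho / dx): redescends toward zero
        for large |x|, which is what rejects noisy outliers."""
        return 2.0 * x / (2.0 * sigma**2 + x**2)

    def psnr(mse, peak=255.0):
        """Peak signal-to-noise ratio in dB from mean squared error
        for 8-bit imagery."""
        return 10.0 * math.log10(peak**2 / mse)
    ```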

  15. General job performance of first-line supervisors: the role of conscientiousness in determining its effects on subordinate exhaustion.

    PubMed

    Perry, Sara Jansen; Rubino, Cristina; Witt, L A

    2011-04-01

    In an integrated test of the job demands-resources model and trait activation theory, we predicted that the general job performance of employees who also hold supervisory roles may act as a demand on subordinates, depending on levels of subordinate conscientiousness. In a sample of 313 customer service call centre employees, we found that as the general job performance of their supervisor improved, high-conscientiousness individuals became more likely, and low-conscientiousness individuals less likely, to experience emotional exhaustion. The results were curvilinear, such that high-conscientiousness individuals' exhaustion levelled off with very high supervisor performance (two standard deviations above the mean), and low-conscientiousness individuals' exhaustion levelled off as supervisor performance improved from moderate to high. These findings suggest that high-conscientiousness employees may efficiently handle demands presented by a low-performing coworker who is their boss, but when performance expectations are high (i.e. a high-performing boss), these achievement-oriented employees may direct their resources (i.e. energy and time) towards performance-related efforts at the expense of their well-being. Conversely, low-conscientiousness employees suffer when paired with a low-performing boss, but benefit from a supervisor who demonstrates at least moderate job performance.

  16. Damage-mitigating control of aircraft for high performance and life extension

    NASA Astrophysics Data System (ADS)

    Caplin, Jeffrey

    1998-12-01

    A methodology is proposed for the synthesis of a Damage-Mitigating Control System for a high-performance fighter aircraft. The design of such a controller involves consideration of damage to critical points of the structure, as well as the performance requirements of the aircraft. This research is interdisciplinary, and brings existing knowledge in the fields of unsteady aerodynamics, structural dynamics, fracture mechanics, and control theory together to formulate a new approach towards aircraft flight controller design. A flexible wing model is formulated using the Finite Element Method, and the important mode shapes and natural frequencies are identified. The Doublet Lattice Method is employed to develop an unsteady flow model for computation of the unsteady aerodynamic loads acting on the wing due to rigid-body maneuvers and structural deformation. These two models are subsequently incorporated into a pre-existing nonlinear rigid-body aircraft flight-dynamic model. A family of robust Damage-Mitigating Controllers is designed using the H-infinity optimization and mu-synthesis method. In addition to weighting the error between the ideal performance and the actual performance of the aircraft, weights are also placed on the strain amplitude at the root of each wing. The results show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.

  17. Robust Damage-Mitigating Control of Aircraft for High Performance and Structural Durability

    NASA Technical Reports Server (NTRS)

    Caplin, Jeffrey; Ray, Asok; Joshi, Suresh M.

    1999-01-01

    This paper presents the concept and a design methodology for robust damage-mitigating control (DMC) of aircraft. The goal of DMC is to simultaneously achieve high performance and structural durability. The controller design procedure involves consideration of damage at critical points of the structure, as well as the performance requirements of the aircraft. An aeroelastic model of the wings has been formulated and is incorporated into a nonlinear rigid-body model of aircraft flight-dynamics. Robust damage-mitigating controllers are then designed using the H(infinity)-based structured singular value (mu) synthesis method based on a linearized model of the aircraft. In addition to penalizing the error between the ideal performance and the actual performance of the aircraft, frequency-dependent weights are placed on the strain amplitude at the root of each wing. Using each controller in turn, the control system is put through an identical sequence of maneuvers, and the resulting (varying amplitude cyclic) stress profiles are analyzed using a fatigue crack growth model that incorporates the effects of stress overload. Comparisons are made to determine the impact of different weights on the resulting fatigue crack damage in the wings. The results of simulation experiments show significant savings in fatigue life of the wings while retaining the dynamic performance of the aircraft.

  18. Nonlinearity analysis of measurement model for vision-based optical navigation system

    NASA Astrophysics Data System (ADS)

    Li, Jianguo; Cui, Hutao; Tian, Yang

    2015-02-01

    In the autonomous optical navigation system based on line-of-sight vector observation, the nonlinearity of the measurement model is highly correlated with navigation performance. By quantitatively calculating the degree of nonlinearity of the focal plane model and the unit vector model, this paper focuses on determining which optical measurement model performs better. Firstly, measurement equations and measurement noise statistics of these two line-of-sight measurement models are established based on the perspective-projection co-linearity equation. Then the nonlinear effects of the measurement model on filter performance are analyzed within the framework of the Extended Kalman filter, and the degrees of nonlinearity of the two measurement models are compared using curvature measure theory from differential geometry. Finally, a simulation of star-tracker-based attitude determination is presented to confirm the superiority of the unit vector measurement model. Simulation results show that the magnitude of the curvature nonlinearity measure is consistent with the filter performance, and the unit vector measurement model yields higher estimation precision and faster convergence properties.

  19. Performance analysis of high-concentrated multi-junction solar cells in hot climate

    NASA Astrophysics Data System (ADS)

    Ghoneim, Adel A.; Kandil, Kandil M.; Alzanki, Talal H.; Alenezi, Mohammad R.

    2018-03-01

    Multi-junction concentrator solar cells are a promising technology, as they can help meet increasing energy demand from renewable sources. Focusing sunlight upon the aperture of multi-junction photovoltaic (PV) cells can generate much greater power densities than conventional PV cells, so concentrated multi-junction PV cells offer a promising route towards a minimum cost per kilowatt-hour. However, several issues must be addressed before these cells are feasible for large-scale energy generation. In this work, a model is developed to analyze the impact of various atmospheric factors on concentrator PV performance. A single-diode equivalent-circuit model is developed to examine multi-junction cell performance in hot weather conditions, considering the impacts of both temperature and concentration ratio. The impacts of spectral variations of irradiance on the annual performance of various high-concentrated photovoltaic (HCPV) panels are examined, adapting spectra simulations using the SMARTS model. The diode shunt resistance, neglected in existing models, is also considered in the present model. The predictions are validated against measurements from published data to within 2% accuracy, and show that the single-diode model considering the shunt resistance gives accurate and reliable results. Aerosol optical depth (AOD) and air mass are found to be the atmospheric parameters with the most significant impact on HCPV cell performance. In addition, the electrical efficiency (η) is observed to increase with concentration up to a certain concentration ratio, after which it decreases. Based on these predictions, we conclude that the present model can be applied to examine HCPV cell performance over a broad range of operating conditions.
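
    The single-diode equation with the shunt-resistance term reads I = Iph − I0·(exp((V + I·Rs)/(n·Vt)) − 1) − (V + I·Rs)/Rsh and is implicit in I. A sketch of solving it by bisection, with illustrative single-junction parameter values rather than fitted multi-junction cell data:

    ```python
    import math

    def diode_current(V, Iph=5.0, I0=1e-10, Rs=0.01, Rsh=100.0,
                      n=1.2, Vt=0.025):
        """Solve the implicit single-diode equation for cell current I:
            I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
        The shunt term (V + I*Rs)/Rsh is the one often neglected.
        Parameter values are illustrative. f(I) is monotonically
        decreasing in I, so bisection on a bracketing interval works.
        """
        def f(I):
            return (Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1.0)
                    - (V + I * Rs) / Rsh - I)

        lo, hi = -Iph, 2.0 * Iph          # bracket for operating voltages
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    ```

    At short circuit (V = 0) the current is close to the photocurrent, reduced slightly by the shunt leakage; as V rises toward the open-circuit voltage, the diode term takes over and the current falls.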

  20. Application of Classification Models to Pharyngeal High-Resolution Manometry

    ERIC Educational Resources Information Center

    Mielens, Jason D.; Hoffman, Matthew R.; Ciucci, Michelle R.; McCulloch, Timothy M.; Jiang, Jack J.

    2012-01-01

    Purpose: The authors present 3 methods of performing pattern recognition on spatiotemporal plots produced by pharyngeal high-resolution manometry (HRM). Method: Classification models, including the artificial neural networks (ANNs) multilayer perceptron (MLP) and learning vector quantization (LVQ), as well as support vector machines (SVM), were…

  1. Investigation of arterial gas occlusions. [effect of noncondensable gases on high performance heat pipes

    NASA Technical Reports Server (NTRS)

    Saaski, E. W.

    1974-01-01

    The effect of noncondensable gases on high-performance arterial heat pipes was investigated both analytically and experimentally. Models have been generated which characterize the dissolution of gases in condensate, and the diffusional loss of dissolved gases from condensate in arterial flow. These processes, and others, were used to postulate stability criteria for arterial heat pipes under isothermal and non-isothermal condensate flow conditions. A rigorous second-order gas-loaded heat pipe model, incorporating axial conduction and one-dimensional vapor transport, was produced and used for thermal and gas studies. A Freon-22 (CHClF2) heat pipe was used with helium and xenon to validate the modeling. With helium, experimental data compared well with theory. Unusual gas-control effects with xenon were attributed to its high solubility.

  2. Development of a high-velocity free-flight launcher : the Ames light-gas gun

    NASA Technical Reports Server (NTRS)

    Charters, A C; Denardo, B Pat; Rossow, Vernon J

    1955-01-01

    Recent interest in long-range missiles has stimulated a search for new experimental techniques which can reproduce in the laboratory the high temperatures and Mach numbers associated with the missiles' flight. One promising possibility lies in free-flight testing of laboratory models which are flown at the full velocity of the missile. In this type of test, temperatures are approximated and aerodynamic heating of the model is representative of that experienced by the missile in high-velocity flight. A prime requirement of the free-flight test technique is a device which has the capacity for launching models at the velocities desired. In response to this need, a gun firing light models at velocities up to 15,000 feet per second has been developed at the Ames Aeronautical Laboratory. The design of this gun, the analysis of its performance, and the results of the initial firing trials are described in this paper. The firing trials showed that the measured velocities and pressures agreed well with the predicted values. Also, the erosion of the launch tube was very small for the eleven rounds fired. The performance of the gun suggests that it will prove to be a satisfactory launcher for high-velocity free-flight tests. However, it should be mentioned that only the gross performance has been evaluated so far, and, consequently, the operation of the gun must be investigated in further detail before its performance can be reliably predicted over its full operating range.

  3. The active learning hypothesis of the job-demand-control model: an experimental examination.

    PubMed

    Häusser, Jan Alexander; Schulz-Hardt, Stefan; Mojzisch, Andreas

    2014-01-01

    The active learning hypothesis of the job-demand-control model [Karasek, R. A. 1979. "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign." Administrative Science Quarterly 24: 285-307] proposes positive effects of high job demands and high job control on performance. We conducted a 2 (demands: high vs. low) × 2 (control: high vs. low) experimental office workplace simulation to examine this hypothesis. Since performance during a work simulation is confounded by the boundaries of the demands and control manipulations (e.g. time limits), we used a post-test, in which participants continued working at their task, but without any manipulation of demands and control. This post-test allowed for examining active learning (transfer) effects in an unconfounded fashion. Our results revealed that high demands had a positive effect on quantitative performance, without affecting task accuracy. In contrast, high control resulted in a speed-accuracy tradeoff, that is, participants in the high control conditions worked slower but with greater accuracy than participants in the low control conditions.

  4. Gender Consequences of a National Performance-Based Funding Model: New Pieces in an Old Puzzle

    ERIC Educational Resources Information Center

    Nielsen, Mathias Wullum

    2017-01-01

    This article investigates the extent to which the Danish "Bibliometric Research Indicator" (BRI) reflects the performance of men and women differently. The model is based on a differentiated counting of peer-reviewed publications, awarding three and eight points for contributions to "well-regarded" and highly selective journals…

  5. 40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Control System Performance Warranty Short Tests § 85.2203 Short test standards for 1981 and later model... 1982 and later model year vehicles at high altitude to which high altitude certification standards of 1... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Short test standards for 1981 and...

  6. 40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Control System Performance Warranty Short Tests § 85.2203 Short test standards for 1981 and later model... 1982 and later model year vehicles at high altitude to which high altitude certification standards of 1... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Short test standards for 1981 and...

  7. 40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Control System Performance Warranty Short Tests § 85.2203 Short test standards for 1981 and later model... 1982 and later model year vehicles at high altitude to which high altitude certification standards of 1... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Short test standards for 1981 and...

  8. 40 CFR 85.2204 - Short test standards for 1981 and later model year light-duty trucks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Control System Performance Warranty Short Tests § 85.2204 Short test standards for 1981 and later model... later model year trucks at high altitude to which high altitude certification standards of 2.0 g/mile HC... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Short test standards for 1981 and...

  9. 40 CFR 85.2204 - Short test standards for 1981 and later model year light-duty trucks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Control System Performance Warranty Short Tests § 85.2204 Short test standards for 1981 and later model... later model year trucks at high altitude to which high altitude certification standards of 2.0 g/mile HC... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Short test standards for 1981 and...

  10. 40 CFR 85.2204 - Short test standards for 1981 and later model year light-duty trucks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Control System Performance Warranty Short Tests § 85.2204 Short test standards for 1981 and later model... later model year trucks at high altitude to which high altitude certification standards of 2.0 g/mile HC... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Short test standards for 1981 and...

  11. 40 CFR 85.2203 - Short test standards for 1981 and later model year light-duty vehicles.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Control System Performance Warranty Short Tests § 85.2203 Short test standards for 1981 and later model... 1982 and later model year vehicles at high altitude to which high altitude certification standards of 1... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Short test standards for 1981 and...

  12. 40 CFR 85.2204 - Short test standards for 1981 and later model year light-duty trucks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Control System Performance Warranty Short Tests § 85.2204 Short test standards for 1981 and later model... later model year trucks at high altitude to which high altitude certification standards of 2.0 g/mile HC... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Short test standards for 1981 and...

  13. A diagnostic model for chronic hypersensitivity pneumonitis.

    PubMed

    Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R

    2016-10-01

    The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist's diagnostic impression. Candidate models were developed then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. Published by the BMJ Publishing Group Limited.
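
    The C-statistic used to assess the candidate models is the concordance probability: the chance that a randomly chosen case receives a higher model score than a randomly chosen non-case, counting ties as one half. A minimal sketch of computing it (the cross-validated version would average this over held-out folds):

    ```python
    def c_statistic(scores_pos, scores_neg):
        """C-statistic (equivalently, the ROC AUC): probability that a
        randomly chosen case scores higher than a randomly chosen
        non-case, with ties counted as 1/2."""
        wins = 0.0
        for p in scores_pos:
            for q in scores_neg:
                if p > q:
                    wins += 1.0
                elif p == q:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))
    ```

    A C-statistic of 1.0 means perfect separation of cases from non-cases; 0.5 is no better than chance.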

  14. Disk flexibility effects on the rotordynamics of the SSME high pressure turbopumps

    NASA Technical Reports Server (NTRS)

    Flowers, George T.

    1990-01-01

    Rotordynamical analyses are typically performed using rigid-disk models. Studies of rotor models in which the effects of disk flexibility were included indicate that it may be an important effect for many systems. This issue is addressed with respect to the Space Shuttle Main Engine high-pressure turbopumps. Finite element analyses were performed for a simplified free-free flexible-disk rotor model, and the modes and frequencies were compared to those of a rigid-disk model. Equations were developed to account for disk flexibility in rotordynamical analysis. Simulation studies were conducted to assess the influence of disk flexibility on the HPOTP. Some recommendations are given as to the importance of disk flexibility and how this project should proceed.

  15. A Preliminary Assessment of the SURF Reactive Burn Model Implementation in FLAG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Carl Edward; McCombe, Ryan Patrick; Carver, Kyle

    Properly validated and calibrated reactive burn models (RBM) can be useful engineering tools for assessing high explosive performance and safety. Experiments with high explosives are expensive, so inexpensive RBM calculations are increasingly relied on for predictive analysis of performance and safety. This report discusses the validation of Menikoff and Shaw's SURF reactive burn model, which has recently been implemented in the FLAG code. The LANL Gapstick experiment is discussed, as is its utility in reactive burn model validation. Data obtained from pRad for the LT-63 series are also presented, along with FLAG simulations using SURF for both PBX 9501 and PBX 9502. Calibration parameters for both explosives are presented.

  16. High-performance multiprocessor architecture for a 3-D lattice gas model

    NASA Technical Reports Server (NTRS)

    Lee, F.; Flynn, M.; Morf, M.

    1991-01-01

    The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.
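    The alternating collision/propagation structure described above can be sketched in a few lines. The full FCHC model uses 24 bits per site on a 4-D lattice with isometric collision rules; the toy below uses the much simpler 4-channel 2-D HPP rules purely to illustrate the two-phase update that ALGE decomposes, and none of the names in it come from the paper.

```python
import numpy as np

# Toy 2-D HPP lattice gas illustrating the two alternating phases
# (collision, then propagation). The actual FCHC model is a 24-bit 4-D
# lattice; this sketch only shows the computational structure.

E, N, W, S = 0, 1, 2, 3                      # channel indices
SHIFTS = {E: (0, 1), N: (-1, 0), W: (0, -1), S: (1, 0)}

def collide(state):
    """Head-on pairs (E,W) or (N,S) scatter into the perpendicular pair."""
    ew = state[E] & state[W] & ~state[N] & ~state[S]
    ns = state[N] & state[S] & ~state[E] & ~state[W]
    out = state.copy()
    for d in (E, W):
        out[d] = (state[d] & ~ew) | ns
    for d in (N, S):
        out[d] = (state[d] & ~ns) | ew
    return out

def propagate(state):
    """Move each particle one site along its velocity (periodic walls)."""
    out = np.empty_like(state)
    for d, (dy, dx) in SHIFTS.items():
        out[d] = np.roll(np.roll(state[d], dy, axis=0), dx, axis=1)
    return out

def step(state):
    return propagate(collide(state))

rng = np.random.default_rng(0)
state = rng.random((4, 32, 32)) < 0.2        # 4 channels on a 32x32 grid
n0 = state.sum()
for _ in range(100):
    state = step(state)
assert state.sum() == n0                     # particle number is conserved
```

Because both phases are purely local (collision per site, propagation to nearest neighbours), the grid can be partitioned across processors with only halo exchanges, which is what makes linear speedup plausible.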

  17. Small-volume, ultrahigh-vacuum-compatible high-pressure reaction cell for combined kinetic and in situ IR spectroscopic measurements on planar model catalysts

    NASA Astrophysics Data System (ADS)

    Zhao, Z.; Diemant, T.; Häring, T.; Rauscher, H.; Behm, R. J.

    2005-12-01

    We describe the design and performance of a high-pressure reaction cell for simultaneous kinetic and in situ infrared reflection (IR) spectroscopic measurements on model catalysts at elevated pressures, between 10^-3 and 10^3 mbar, which can be operated both as a batch reactor and as a flow reactor with defined gas flow. The cell is attached to an ultrahigh-vacuum (UHV) system, which is used for sample preparation and also contains facilities for sample characterization. Specific to this design is the combination of a small cell volume, which allows kinetic measurements with high sensitivity under batch or continuous-flow conditions, the complete isolation of the cell from the UHV part during UHV measurements, continuous temperature control during both UHV and high-pressure operation, and rapid transfer between the UHV and high-pressure stages. Gas dosing is performed by a specially designed gas-handling system, which allows operation as a flow reactor with calibrated gas flows at adjustable pressures. To study the kinetics of reactions on the model catalysts, a quadrupole mass spectrometer is connected to the high-pressure cell. IR measurements are possible in situ by polarization-modulation infrared reflection-absorption spectroscopy, which also allows measurements at elevated pressures. The performance of the setup is demonstrated by test measurements on the kinetics of CO oxidation and CO adsorption on a Au/TiO2/Ru(0001) model catalyst film at 1-50 mbar total pressure.

  18. A deep convolutional neural network with new training methods for bearing fault diagnosis under noisy environment and different working load

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Li, Chuanhao; Peng, Gaoliang; Chen, Yuanhang; Zhang, Zhujun

    2018-02-01

    In recent years, intelligent fault diagnosis algorithms using machine learning techniques have achieved much success. However, because in real-world industrial applications the working load is changing all the time and noise from the working environment is inevitable, the performance of intelligent fault diagnosis methods can degrade seriously. In this paper, a new model based on deep learning is proposed to address this problem. Our contributions include the following. First, we propose an end-to-end method that takes raw temporal signals as inputs and thus does not need any time-consuming denoising preprocessing; the model can achieve high accuracy in a noisy environment. Second, the model does not rely on any domain adaptation algorithm or require information about the target domain; it can achieve high accuracy when the working load is changed. To understand the proposed model, we visualize the learned features and analyze the reasons behind its high performance.
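    The end-to-end idea of feeding raw temporal signals straight into convolutional feature extractors can be sketched as follows. The abstract does not give the network architecture, so the kernel count, width and stride below are invented for illustration, and random kernels stand in for trained weights; only the raw-signal-in, features-out structure reflects the described approach.

```python
import numpy as np

# Minimal sketch of a 1-D conv feature extractor operating directly on a
# raw vibration signal. Layer sizes are illustrative, not the authors'.

rng = np.random.default_rng(42)

def conv1d(x, kernels, stride):
    """Valid cross-correlation of a 1-D signal with a bank of kernels."""
    k = kernels.shape[1]
    n = (x.size - k) // stride + 1
    windows = np.stack([x[i * stride:i * stride + k] for i in range(n)])
    return windows @ kernels.T                 # shape (n, n_kernels)

signal = rng.standard_normal(2048)             # raw time-domain sample
kernels = rng.standard_normal((16, 64))        # wide first-layer kernels
                                               # help suppress noise
feat = np.maximum(conv1d(signal, kernels, stride=8), 0.0)   # ReLU
pooled = feat.max(axis=0)                      # global max pooling
assert pooled.shape == (16,)
```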

  19. Generalized internal model robust control for active front steering intervention

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Zhao, Youqun; Ji, Xuewu; Liu, Yahui; Zhang, Lipeng

    2015-03-01

    Because of tire nonlinearity and uncertainties in the vehicle's parameters, robust control methods based on worst cases, such as H∞ and µ synthesis, have been widely used in active front steering (AFS) control. However, to guarantee the stability of the AFS controller, the robustness comes at the cost of performance, so the robust controller is somewhat conservative and delivers low performance for AFS control. In this paper, a generalized internal model robust control (GIMC) that can overcome the contradiction between performance and stability is applied to AFS control. In GIMC, the Youla parameterization is used in an improved way, and the GIMC controller comprises two parts: a high-performance controller designed for the nominal vehicle model and a robust controller compensating for uncertainties in the vehicle parameters and some external disturbances. Simulations of a double lane change (DLC) maneuver and of braking on a split-µ road are conducted to compare the performance and stability of the GIMC controller, a nominal-performance PID controller, and an H∞ controller. Simulation results show that the high-performance nominal PID controller becomes unstable under some extreme situations because of large variations in the vehicle's parameters, that the H∞ controller is conservative and therefore somewhat low-performing, and that only the GIMC controller overcomes the contradiction between performance and robustness, both ensuring the stability of the AFS controller and guaranteeing its high performance. Therefore, the proposed GIMC method can overcome some disadvantages of the control methods used by current AFS systems; that is, it can resolve the instability of PID or LQP control methods and the low performance of the standard H∞ controller.

  20. Exploring discrepancies between quantitative validation results and the geomorphic plausibility of statistical landslide susceptibility maps

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2016-06-01

    Empirical models are frequently applied to produce landslide susceptibility maps for large areas. Subsequent quantitative validation results are routinely used as the primary criteria to infer the validity and applicability of the final maps or to select one of several models. This study hypothesizes that such direct deductions can be misleading. The main objective was to explore discrepancies between the predictive performance of a landslide susceptibility model and the geomorphic plausibility of the resulting landslide susceptibility maps, with particular emphasis on the influence of incomplete landslide inventories on modelling and validation results. The study was conducted within the Flysch Zone of Lower Austria (1,354 km2), which is known to be highly susceptible to landslides of the slide-type movement. Sixteen susceptibility models were generated by applying two statistical classifiers (logistic regression and generalized additive model) and two machine learning techniques (random forest and support vector machine) separately for two landslide inventories of differing completeness and two predictor sets. The results were validated quantitatively by estimating the area under the receiver operating characteristic curve (AUROC) with single holdout and spatial cross-validation techniques. The heuristic evaluation of the geomorphic plausibility of the final results was supported by findings of an exploratory data analysis, an estimation of odds ratios and an evaluation of the spatial structure of the final maps. The results showed that maps generated with different inventories, classifiers and predictors looked quite different, while holdout validation revealed similarly high predictive performances. Spatial cross-validation proved useful for exposing spatially varying inconsistencies in the modelling results while additionally providing evidence for slightly overfitted machine learning-based models. However, the highest predictive performances were obtained for maps that explicitly expressed geomorphically implausible relationships, indicating that the predictive performance of a model can be misleading when a predictor systematically relates to a spatially consistent bias in the inventory. Furthermore, we observed that random forest-based maps displayed spatial artifacts. The most plausible susceptibility map of the study area showed smooth prediction surfaces, while the underlying model revealed a high predictive capability and was generated with an accurate landslide inventory and predictors that did not directly describe a bias. However, none of the presented models was found to be completely unbiased. This study showed that high predictive performance cannot be equated with high plausibility and applicability of the resulting landslide susceptibility maps. We suggest that greater emphasis be placed on identifying confounding factors and biases in landslide inventories. A joint discussion in the field between modelers and decision makers of the spatial pattern of the final susceptibility maps might increase their acceptance and applicability.
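    The AUROC used for quantitative validation above can be computed directly from its rank-sum (Mann-Whitney) equivalence: it is the probability that a randomly chosen positive outranks a randomly chosen negative. The sketch below uses simulated susceptibility scores, not the study's data; the point is only to make the validation metric concrete.

```python
import numpy as np

# AUROC via the Mann-Whitney rank-sum identity. Simulated scores only.

def auroc(scores, labels):
    """P(score of a random positive > score of a random negative)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    ranks = np.argsort(np.argsort(scores)) + 1.0   # 1-based ranks (no ties)
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    rank_sum = ranks[labels].sum()
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
labels = rng.random(1000) < 0.3                 # landslide presence/absence
scores = labels + rng.normal(0.0, 1.0, 1000)    # imperfect susceptibility
a = auroc(scores, labels)
assert 0.70 < a < 0.85                          # discriminates, imperfectly
```

Spatial cross-validation differs from the random holdout shown here only in how the folds are formed: observations are split into spatially contiguous blocks instead of random subsets, which exposes spatially varying model error.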

  1. Practical Techniques for Modeling Gas Turbine Engine Performance

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, the demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
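    As a concrete instance of the thermodynamic relationships the paper reviews, the ideal Brayton cycle ties thermal efficiency to the overall pressure ratio alone for a calorically perfect gas. The numbers below are illustrative, not taken from the paper's example turbojet.

```python
# Worked instance of the ideal Brayton cycle relations, for a
# calorically perfect gas. Values are illustrative only.

gamma = 1.4          # ratio of specific heats for air
cp = 1004.5          # specific heat at constant pressure, J/(kg K)

def brayton_efficiency(pr, gamma=1.4):
    """Ideal-cycle thermal efficiency: eta = 1 - pr**(-(gamma-1)/gamma)."""
    return 1.0 - pr ** (-(gamma - 1.0) / gamma)

# Ideal (isentropic) compressor temperature rise and specific work:
T1 = 288.15          # inlet total temperature, K (sea-level standard)
pr = 10.0            # overall pressure ratio
T2 = T1 * pr ** ((gamma - 1.0) / gamma)      # ~556 K
w_comp = cp * (T2 - T1)                      # specific work, J/kg of air

assert 0.48 < brayton_efficiency(pr) < 0.49
```

Real-cycle analysis layers component efficiencies and performance maps on top of these ideal relations, which is where the map scaling and design-point criteria mentioned above come in.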

  2. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    NASA Astrophysics Data System (ADS)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by the need for a study assessing the process improvement, quality management, and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science degree programs (e.g., software engineering, computer science, and information technology) at U.S. colleges and universities that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The study identifies and discusses in detail the gap-analysis findings on the process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis that identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models.
The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
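    A Monte Carlo simulation of the kind described, drawing uncertain driver scores and propagating them through a weighted index to predict a distribution of outcomes, can be sketched as follows. The drivers, weights and distributions are invented for illustration; the actual ACSI weighting scheme is not reproduced here.

```python
import numpy as np

# Monte Carlo sketch: propagate uncertain driver scores through a
# weighted index and summarize the predicted outcome distribution.
# Drivers, weights and distributions are invented, not the ACSI model.

rng = np.random.default_rng(7)
n = 100_000

# Hypothetical driver scores on a 0-100 scale with assumed uncertainty.
quality = rng.normal(78, 4, n)
expectations = rng.normal(74, 5, n)
value = rng.normal(71, 6, n)

weights = np.array([0.5, 0.2, 0.3])          # assumed, sums to 1
index = weights @ np.vstack([quality, expectations, value])

baseline = index.mean()                      # predicted baseline score
low, high = np.percentile(index, [5, 95])    # 90% prediction interval
assert low < baseline < high
```

Sensitivity analysis then amounts to perturbing one driver's distribution at a time and observing the shift in the index distribution.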

  3. Modeling and experimental performance of an intermediate temperature reversible solid oxide cell for high-efficiency, distributed-scale electrical energy storage

    NASA Astrophysics Data System (ADS)

    Wendel, Christopher H.; Gao, Zhan; Barnett, Scott A.; Braun, Robert J.

    2015-06-01

    Electrical energy storage is expected to be a critical component of the future world energy system, performing load-leveling operations to enable increased penetration of renewable and distributed generation. Reversible solid oxide cells, operating sequentially between power-producing fuel cell mode and fuel-producing electrolysis mode, have the capability to provide highly efficient, scalable electricity storage. However, challenges ranging from cell performance and durability to system integration must be addressed before widespread adoption. One central challenge of the system design is establishing effective thermal management in the two distinct operating modes. This work leverages an operating strategy to use carbonaceous reactant species and operate at intermediate stack temperature (650 °C) to promote exothermic fuel-synthesis reactions that thermally self-sustain the electrolysis process. We present performance of a doped lanthanum-gallate (LSGM) electrolyte solid oxide cell that shows high efficiency in both operating modes at 650 °C. A physically based electrochemical model is calibrated to represent the cell performance and used to simulate roundtrip operation for conditions unique to these reversible systems. Design decisions related to system operation are evaluated using the cell model including current density, fuel and oxidant reactant compositions, and flow configuration. The analysis reveals tradeoffs between electrical efficiency, thermal management, energy density, and durability.

  4. Summary of directional divergence characteristics of several high performance aircraft configurations

    NASA Technical Reports Server (NTRS)

    Greer, H. D.

    1972-01-01

    The present paper summarizes the high-angle-of-attack characteristics of a number of high-performance aircraft as determined from model force tests and free-flight model tests and correlates these characteristics with the dynamic directional-stability parameter, which is shown to correlate fairly well with directional divergence. Data are also presented to show the effect of some airframe modifications on the directional divergence potential of the configurations. These results show that leading-edge slats appear to be the most effective airframe modification for reducing or eliminating the directional divergence potential of aircraft with moderately swept wings.

  5. Prediction of the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient-elution conditions.

    PubMed

    D'Archivio, Angelo Antonio; Maggi, Maria Anna; Ruggieri, Fabrizio

    2014-08-01

    In this paper, a multilayer artificial neural network is used to model simultaneously the effect of solute structure and eluent concentration profile on the retention of s-triazines in reversed-phase high-performance liquid chromatography under linear gradient elution. The retention data of 24 triazines, including common herbicides and their metabolites, are collected under 13 different elution modes, covering the following experimental domain: starting acetonitrile volume fraction ranging between 40 and 60% and gradient slope ranging between 0 and 1% acetonitrile/min. The gradient parameters together with five selected molecular descriptors, identified by quantitative structure-retention relationship modelling applied to individual separation conditions, are the network inputs. Predictive performance of this model is evaluated on six external triazines and four unseen separation conditions. For comparison, retention of triazines is modelled by both quantitative structure-retention relationships and response surface methodology, which describe separately the effect of molecular structure and gradient parameters on the retention. Although applied to a wider variable domain, the network provides a performance comparable to that of the above "local" models and retention times of triazines are modelled with accuracy generally better than 7%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Modeling and Prediction of Solvent Effect on Human Skin Permeability using Support Vector Regression and Random Forest.

    PubMed

    Baba, Hiromi; Takahara, Jun-ichi; Yamashita, Fumiyoshi; Hashida, Mitsuru

    2015-11-01

    The solvent effect on skin permeability is important for assessing the effectiveness and toxicological risk of new dermatological formulations in pharmaceuticals and cosmetics development. The solvent effect occurs by diverse mechanisms, which could be elucidated by efficient and reliable prediction models. However, such prediction models have been hampered by the small variety of permeants and mixture components archived in databases and by low predictive performance. Here, we propose a solution to both problems. We first compiled a novel large database of 412 samples from 261 structurally diverse permeants and 31 solvents reported in the literature. The data were carefully screened to ensure their collection under consistent experimental conditions. To construct a high-performance predictive model, we then applied support vector regression (SVR) and random forest (RF) with greedy stepwise descriptor selection to our database. The models were internally and externally validated. The SVR achieved higher performance statistics than RF. The (externally validated) determination coefficient, root mean square error, and mean absolute error of SVR were 0.899, 0.351, and 0.268, respectively. Moreover, because all descriptors are fully computational, our method can predict as-yet unsynthesized compounds. Our high-performance prediction model offers an attractive alternative to permeability experiments for pharmaceutical and cosmetic candidate screening and optimizing skin-permeable topical formulations.
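    The greedy stepwise descriptor selection mentioned above can be sketched generically. The authors wrapped it around SVR and random forest; to keep this sketch self-contained, an ordinary least-squares fit stands in for the regressor and the data are synthetic, so only the selection loop itself reflects the described method.

```python
import numpy as np

# Greedy forward descriptor selection driven by cross-validated error.
# A least-squares fit stands in for the SVR/RF regressor; data synthetic.

rng = np.random.default_rng(3)
n, p = 120, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] \
    + 0.1 * rng.standard_normal(n)

def cv_rmse(X, y, k=5):
    """k-fold cross-validated RMSE of a least-squares fit with intercept."""
    idx = np.arange(len(y))
    err = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        A = np.column_stack([np.ones(len(tr)), X[tr]])
        beta, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        pred = np.column_stack([np.ones(len(fold)), X[fold]]) @ beta
        err.append(np.mean((y[fold] - pred) ** 2))
    return np.sqrt(np.mean(err))

selected, remaining, best = [], list(range(p)), np.inf
while remaining:
    trials = {j: cv_rmse(X[:, selected + [j]], y) for j in remaining}
    j, s = min(trials.items(), key=lambda kv: kv[1])
    if s >= best:
        break                      # no remaining descriptor helps
    selected.append(j)
    remaining.remove(j)
    best = s

assert set(selected) >= {0, 3}     # the strongest descriptors are found
```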

  7. R&D of high reliable refrigeration system for superconducting generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosoya, T.; Shindo, S.; Yaguchi, H.

    1996-12-31

    Super-GM carries out R&D on 70 MW class superconducting generators (model machines), refrigeration systems, and superconducting wires to apply superconducting technology to electric power apparatus. The helium refrigeration system for keeping the field windings of a superconducting generator (SCG) in a cryogenic environment must meet the requirement of high reliability for uninterrupted long-term operation of the SCG. In FY 1992, a highly reliable conventional refrigeration system for the model machines was integrated by combining components such as a compressor unit and higher- and lower-temperature cold boxes, which were manufactured using various fundamental technologies developed in the early stage of the project since 1988. Since FY 1993, its performance tests have been carried out. It has been confirmed that its performance fulfilled the development targets of a liquefaction capacity of 100 L/h and removal of impurities in the helium gas to < 0.1 ppm. Furthermore, its operation method and performance were clarified for all the different modes, such as how to control the liquefaction rate and how to supply liquid helium from a dewar to the model machine. In addition, the authors have performed performance tests and system performance analyses of oil-free screw-type and turbo-type compressors, which greatly improve the reliability of conventional refrigeration systems. The operating performance and operational control method of the compressors have been clarified through the tests and analysis.

  8. Predictive genetic testing for the identification of high-risk groups: a simulation study on the impact of predictive ability

    PubMed Central

    2011-01-01

    Background Genetic risk models could potentially be useful in identifying high-risk groups for the prevention of complex diseases. We investigated the performance of this risk stratification strategy by examining epidemiological parameters that impact the predictive ability of risk models. Methods We assessed sensitivity, specificity, and positive and negative predictive value for all possible risk thresholds that can define high-risk groups and investigated how these measures depend on the frequency of disease in the population, the frequency of the high-risk group, and the discriminative accuracy of the risk model, as assessed by the area under the receiver-operating characteristic curve (AUC). In a simulation study, we modeled genetic risk scores of 50 genes with equal odds ratios and genotype frequencies, and varied the odds ratios and the disease frequency across scenarios. We also performed a simulation of age-related macular degeneration risk prediction based on published odds ratios and frequencies for six genetic risk variants. Results We show that when the frequency of the high-risk group was lower than the disease frequency, positive predictive value increased with the AUC but sensitivity remained low. When the frequency of the high-risk group was higher than the disease frequency, sensitivity was high but positive predictive value remained low. When both frequencies were equal, both positive predictive value and sensitivity increased with increasing AUC, but higher AUC was needed to maximize both measures. Conclusions The performance of risk stratification is strongly determined by the frequency of the high-risk group relative to the frequency of disease in the population. The identification of high-risk groups with appreciable combinations of sensitivity and positive predictive value requires higher AUC. PMID:21797996
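    The dependence of sensitivity and positive predictive value on how large the high-risk group is, relative to the disease frequency, can be reproduced with a few lines of simulation. The risk-score distribution below is an invented stand-in for the paper's 50-gene model.

```python
import numpy as np

# Risk stratification sketch: define a "high-risk" group by a score
# threshold, then compute sensitivity and PPV. Simulated scores only.

rng = np.random.default_rng(11)
n = 200_000
score = rng.normal(0, 1, n)                        # aggregated risk score
disease = rng.random(n) < 1 / (1 + np.exp(-(score - 2.2)))  # ~10-15% freq

def stratify(top_fraction):
    """Sensitivity and PPV when the top fraction is called high-risk."""
    high = score >= np.quantile(score, 1 - top_fraction)
    sensitivity = disease[high].sum() / disease.sum()
    ppv = disease[high].mean()
    return sensitivity, ppv

sens_small, ppv_small = stratify(0.05)   # high-risk group rarer than disease
sens_large, ppv_large = stratify(0.30)   # high-risk group more common
assert ppv_small > ppv_large             # small group: concentrated risk
assert sens_large > sens_small           # large group: catches more cases
```

This mirrors the paper's conclusion: shrinking the high-risk group trades sensitivity for positive predictive value, and only a sufficiently discriminating score (high AUC) makes both acceptable at once.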

  9. Rate and timing cues associated with the cochlear amplifier: level discrimination based on monaural cross-frequency coincidence detection.

    PubMed

    Heinz, M G; Colburn, H S; Carney, L H

    2001-10-01

    The perceptual significance of the cochlear amplifier was evaluated by predicting level-discrimination performance based on stochastic auditory-nerve (AN) activity. Performance was calculated for three models of processing: the optimal all-information processor (based on discharge times), the optimal rate-place processor (based on discharge counts), and a monaural coincidence-based processor that uses a non-optimal combination of rate and temporal information. An analytical AN model included compressive magnitude and level-dependent-phase responses associated with the cochlear amplifier, and high-, medium-, and low-spontaneous-rate (SR) fibers with characteristic frequencies (CFs) spanning the AN population. The relative contributions of nonlinear magnitude and nonlinear phase responses to level encoding were compared by using four versions of the model, which included and excluded the nonlinear gain and phase responses in all possible combinations. Nonlinear basilar-membrane (BM) phase responses are robustly encoded in near-CF AN fibers at low frequencies. Strongly compressive BM responses at high frequencies near CF interact with the high thresholds of low-SR AN fibers to produce large dynamic ranges. Coincidence performance based on a narrow range of AN CFs was robust across a wide dynamic range at both low and high frequencies, and matched human performance levels. Coincidence performance based on all CFs demonstrated the "near-miss" to Weber's law at low frequencies and the high-frequency "mid-level bump." Monaural coincidence detection is a physiologically realistic mechanism that is extremely general in that it can utilize AN information (average-rate, synchrony, and nonlinear-phase cues) from all SR groups.

  10. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model that simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. Having a parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a case study of a flood that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.

  11. Performance assessment of nitrate leaching models for highly vulnerable soils used in low-input farming based on lysimeter data.

    PubMed

    Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco

    2014-11-15

    The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices and provide decision-making support tools for planners. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for the simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) against lysimeter data of the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well on all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee a good predictive power of the model. Nevertheless, all models were able to identify years and crops with high and low leaching rates. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Thermal signature identification system (TheSIS): a spread spectrum temperature cycling method

    NASA Astrophysics Data System (ADS)

    Merritt, Scott

    2015-03-01

    NASA GSFC's Thermal Signature Identification System (TheSIS) 1) measures the high-order dynamic responses of optoelectronic components to direct-sequence spread-spectrum temperature cycling, 2) estimates the parameters of multiple autoregressive moving average (ARMA) or other models of the responses, and 3) selects the most appropriate model using the Akaike Information Criterion (AIC). Using the AIC-tested model and parameter vectors from TheSIS, one can 1) select high-performing components on a multivariate basis, i.e., with multivariate Figures of Merit (FOMs), 2) detect subtle reversible shifts in performance, and 3) investigate irreversible changes in component or subsystem performance, e.g., aging. We show examples of the TheSIS methodology for passive and active components and systems, e.g., fiber Bragg gratings (FBGs) and DFB lasers with coupled temperature control loops, respectively.
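    The AIC model-selection step can be sketched as follows. For self-containedness, the candidate models here are pure AR(p) fits obtained by least squares rather than full ARMA estimates, and the data are synthetic; only the comparison logic (penalized fit across candidate orders) reflects the described method.

```python
import numpy as np

# AIC-based order selection over least-squares AR(p) fits to a
# synthetic AR(2) series. Full ARMA estimation is not reproduced here.

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients, residual variance."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.var()

def aic(n, sigma2, k):
    """Gaussian log-likelihood AIC up to a constant: n*ln(sigma2) + 2k."""
    return n * np.log(sigma2) + 2 * k

rng = np.random.default_rng(5)
n = 2000
x = np.zeros(n)
for t in range(2, n):                      # true model: AR(2)
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

scores = {p: aic(n - p, fit_ar(x, p)[1], p) for p in range(1, 6)}
best = min(scores, key=scores.get)
assert scores[2] < scores[1]               # the AR(1) underfit is rejected
```

The `2k` penalty is what lets AIC reject both the underfitting AR(1) and, usually, the needlessly large orders.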

  13. Limits of performance: CW laser damage

    NASA Astrophysics Data System (ADS)

    Shah, Rashmi S.; Rey, Justin J.; Stewart, Alan F.

    2007-01-01

    High performance optical coatings are an enabling technology for many applications: navigation systems, telecom, fusion, advanced measurement systems of many types, as well as directed energy weapons. The results of recent testing of superior optical coatings conducted at high flux levels are presented. Failure of these coatings was rare; however, the damage that was induced was not expected from simple thermal models relating flux loading to induced temperatures. Clearly, other mechanisms must play a role in the occurrence of laser damage. Contamination, both particulate and molecular, is an obvious mechanism. Less obvious are structural defects and the role of induced stresses. These mechanisms are examined through simplified models and finite element analysis. The results of the models are compared to experiment for induced temperatures and observed stress levels. The role of each mechanism is described and the limiting performance is determined.

  14. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    NASA Astrophysics Data System (ADS)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is applied using fine grids, its discretization leads to a huge computational problem, which implies that a model such as DEM can only be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) are presented. The main idea in the parallel version of DEM is a domain partitioning approach. Effective use of the cache and hierarchical memories of modern computers is discussed, as are the performance, speed-ups and efficiency achieved. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.

  15. Terrestrial Planet Finder Coronagraph Optical Modeling

    NASA Technical Reports Server (NTRS)

    Basinger, Scott A.; Redding, David C.

    2004-01-01

    The Terrestrial Planet Finder Coronagraph will rely heavily on modeling and analysis throughout its mission lifecycle. Optical modeling is especially important, since the tolerances on the optics as well as scattered light suppression are critical for the mission's success. The high contrast imaging necessary to observe a planet orbiting a distant star requires new and innovative technologies to be developed and tested, and detailed optical modeling provides predictions for evaluating design decisions. It also provides a means to develop and test algorithms designed to actively suppress scattered light via deformable mirrors and other techniques. The optical models are used in conjunction with structural and thermal models to create fully integrated optical/structural/thermal models that are used to evaluate dynamic effects of disturbances on the overall performance of the coronagraph. The optical models we have developed have been verified on the High Contrast Imaging Testbed. Results of the optical modeling verification and the methods used to perform full three-dimensional near-field diffraction analysis are presented.

  16. Links among high-performance work environment, service quality, and customer satisfaction: an extension to the healthcare sector.

    PubMed

    Scotti, Dennis J; Harmon, Joel; Behson, Scott J

    2007-01-01

    Healthcare managers must deliver high-quality patient services that generate highly satisfied and loyal customers. In this article, we examine how a high-involvement approach to the work environment of healthcare employees may lead to exceptional service quality, satisfied patients, and ultimately to loyal customers. Specifically, we investigate the chain of events through which high-performance work systems (HPWS) and customer orientation influence employee and customer perceptions of service quality and patient satisfaction in a national sample of 113 Veterans Health Administration ambulatory care centers. We present a conceptual model for linking work environment to customer satisfaction and test this model using structural equation modeling. The results suggest that (1) HPWS is linked to employee perceptions of their ability to deliver high-quality customer service, both directly and through their perceptions of customer orientation; (2) employee perceptions of customer service are linked to customer perceptions of high-quality service; and (3) perceived service quality is linked with customer satisfaction. Theoretical and practical implications of our findings, including suggestions of how healthcare managers can implement changes to their work environments, are discussed.

  17. Simulation and analysis of atmospheric transmission performance in airborne Terahertz communication

    NASA Astrophysics Data System (ADS)

    Pan, Chengsheng; Shi, Xin; Liu, Chengyang; Wang, Xue; Ding, Yuanming

    2018-02-01

    For the special meteorological conditions of high-altitude transmission, the influence of atmospheric turbulence on Terahertz wireless communication is first analyzed, and a height-dependent model of the atmospheric constants is given. On this basis, the relationship between the flicker (scintillation) index and the high-altitude horizontal transmission distance of the Terahertz wave is analyzed by simulation. Then, through the analysis of high-altitude path loss and noise, the high-altitude wireless link model is built. Finally, the link loss budget is given according to current Terahertz device parameters, and the bit error rate (BER) performance of on-off keying (OOK) and pulse position modulation (PPM) in four Terahertz frequency bands is compared and analyzed. Together, these results provide a theoretical reference for high-altitude Terahertz wireless communication transmission.

  18. Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution

    ERIC Educational Resources Information Center

    Subramanian, Venkat R.

    2006-01-01

    High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…

  19. Morbidity Rate Prediction of Dengue Hemorrhagic Fever (DHF) Using the Support Vector Machine and the Aedes aegypti Infection Rate in Similar Climates and Geographical Areas

    PubMed Central

    Kesorn, Kraisak; Ongruk, Phatsavee; Chompoosri, Jakkrawarn; Phumee, Atchara; Thavara, Usavadee; Tawatsin, Apiwat; Siriyasatien, Padet

    2015-01-01

    Background: In the past few decades, several researchers have proposed highly accurate prediction models that have typically relied on climate parameters. However, climate factors can be unreliable and can lower the effectiveness of prediction when they are applied in locations where climate factors do not differ significantly. The purpose of this study was to improve a dengue surveillance system in areas with similar climate by exploiting the infection rate in the Aedes aegypti mosquito and using the support vector machine (SVM) technique for forecasting the dengue morbidity rate. Methods and Findings: Areas with a high incidence of dengue outbreaks in central Thailand were studied. The proposed framework consisted of the following three major parts: 1) data integration, 2) model construction, and 3) model evaluation. We discovered that the Ae. aegypti female and larvae mosquito infection rates were significantly positively associated with the morbidity rate. Thus, the increasing infection rate of female mosquitoes and larvae led to a higher number of dengue cases, and the prediction performance increased when those predictors were integrated into a predictive model. In this research, we applied the SVM with the radial basis function (RBF) kernel to forecast the high morbidity rate and take precautions to prevent the development of pervasive dengue epidemics. The experimental results showed that the introduced parameters significantly increased the prediction accuracy to 88.37% when used on the test set data, and these parameters led to the highest performance compared to state-of-the-art forecasting models. Conclusions: The infection rates of the Ae. aegypti female mosquitoes and larvae improved the morbidity rate forecasting efficiency better than the climate parameters used in classical frameworks. We demonstrated that the SVM-R-based model has high generalization performance and obtained the highest prediction performance compared to classical models as measured by the accuracy, sensitivity, specificity, and mean absolute error (MAE). PMID:25961289
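
    The paper's trained model is not reproduced here, but the functional form of an RBF-kernel SVM predictor can be sketched. The support vectors, coefficients and feature values below are invented for illustration; a real model would come from training on the scaled infection-rate data.

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def svm_decision(x, support_vectors, dual_coefs, intercept, gamma=0.5):
    """Decision value of a trained SVM, f(x) = sum_i a_i K(sv_i, x) + b,
    where each a_i already folds in the class label (as in scikit-learn's
    dual_coef_ convention)."""
    return sum(a * rbf_kernel(sv, x, gamma)
               for a, sv in zip(dual_coefs, support_vectors)) + intercept

# Hypothetical support vectors (e.g. scaled female-mosquito and larvae
# infection rates) and dual coefficients:
svs = [np.array([0.2, 0.1]), np.array([0.8, 0.9])]
coefs = [1.0, -1.0]
score = svm_decision(np.array([0.25, 0.15]), svs, coefs, intercept=0.0)
label = 1 if score > 0 else -1   # e.g. low- vs. high-morbidity class
```

    The query point lies close to the first support vector, so its kernel similarity dominates and the decision value is positive.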

  20. Influence of Cleats-Surface Interaction on the Performance and Risk of Injury in Soccer: A Systematic Review

    PubMed Central

    Macedo, Rui; Montes, António Mesquita

    2017-01-01

    Objective: To review the influence of cleats-surface interaction on the performance and risk of injury in soccer athletes. Design: Systematic review. Data Sources: Scopus, Web of Science, PubMed, and B-on. Eligibility Criteria: Full experimental and original papers, written in English, that studied the influence of soccer cleats on sports performance and injury risk on artificial or natural grass. Results: Twenty-three articles were included in this review: nine related to performance and fourteen to injury risk. On artificial grass, the soft ground model in dry and wet conditions and the turf model in wet conditions are related to worse performance. Compared to rounded studs, bladed ones improve performance during changes of direction on both natural and synthetic grass. Cleat models presenting better traction on the stance leg improve ball velocity while those presenting a homogeneous pressure across the foot promote better kicking accuracy. Bladed studs can be considered less secure by increasing plantar pressure on the lateral border. The turf model decreases peak plantar pressure compared to other studded models. Conclusion: The soft ground model provides lower performance especially on artificial grass, while the turf model provides a high protective effect on both fields. PMID:28684897

  1. Influence of Cleats-Surface Interaction on the Performance and Risk of Injury in Soccer: A Systematic Review.

    PubMed

    Silva, Diogo C F; Santos, Rubim; Vilas-Boas, João Paulo; Macedo, Rui; Montes, António Mesquita; Sousa, Andreia S P

    2017-01-01

    To review the influence of cleats-surface interaction on the performance and risk of injury in soccer athletes. Systematic review. Scopus, Web of Science, PubMed, and B-on. Full experimental and original papers, written in English, that studied the influence of soccer cleats on sports performance and injury risk on artificial or natural grass. Twenty-three articles were included in this review: nine related to performance and fourteen to injury risk. On artificial grass, the soft ground model in dry and wet conditions and the turf model in wet conditions are related to worse performance. Compared to rounded studs, bladed ones improve performance during changes of direction on both natural and synthetic grass. Cleat models presenting better traction on the stance leg improve ball velocity while those presenting a homogeneous pressure across the foot promote better kicking accuracy. Bladed studs can be considered less secure by increasing plantar pressure on the lateral border. The turf model decreases peak plantar pressure compared to other studded models. The soft ground model provides lower performance especially on artificial grass, while the turf model provides a high protective effect on both fields.

  2. Integrated modeling of plasma ramp-up in DIII-D ITER-like and high bootstrap current scenario discharges

    NASA Astrophysics Data System (ADS)

    Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team

    2018-04-01

    Time-dependent integrated modeling of DIII-D ITER-like and high bootstrap current plasma ramp-up discharges has been performed with the equilibrium code EFIT and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model has been demonstrated to accurately model low Ip discharges from the EAST tokamak. Time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with Motional Stark effect constraints. The predicted evolution of βN, li, and βP also agrees well with the experiments. For the ITER-like cases, the predicted electron and ion temperature profiles using TGLF_Sat0 agree closely with the experimentally measured profiles, and are demonstrably better than those from other proposed transport models. For the high bootstrap current case, the electron and ion temperature profiles are better predicted by the VX model. It is found that the SAT0 model works well at high IP (>0.76 MA) while the VX model covers a wider range of plasma current (IP > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.

  3. Relationship of nurses' intrapersonal characteristics with work performance and caring behaviors: A cross-sectional study.

    PubMed

    Geyer, Nelouise-Marié; Coetzee, Siedine K; Ellis, Suria M; Uys, Leana R

    2018-02-28

    This study aimed to describe intrapersonal characteristics (professional values, personality, empathy, and job involvement), work performance as perceived by nurses, and caring behaviors as perceived by patients, and to examine the relationships among these variables. A cross-sectional design was employed. A sample of 218 nurses and 116 patients was recruited from four private hospitals and four public hospitals. Data were collected using self-report measures. Data analysis included descriptive statistics, exploratory and confirmatory factor analyses, hierarchical linear modelling, correlations, and structural equation modeling. Nurses perceived their work performance to be of high quality. Among the intrapersonal characteristics, nurses had high scores for professional values, and moderately high scores for personality, empathy and job involvement. Patients perceived nurses' caring behaviors as moderately high. Professional values of nurses were the only selected intrapersonal characteristic with a statistically significant positive relationship, of practical importance, with work performance as perceived by nurses and with caring behaviors as perceived by patients at ward level. Managers can enhance nurses' work performance and caring behaviors through provision of in-service training that focuses on the development of professional values. © 2018 John Wiley & Sons Australia, Ltd.

  4. 3D Structural Model of High-Performance Non-Fullerene Polymer Solar Cells as Revealed by High-Resolution AFM.

    PubMed

    Shi, Shaowei; Chen, Xiaofeng; Liu, Xubo; Wu, Xuefei; Liu, Feng; Zhang, Zhi-Guo; Li, Yongfang; Russell, Thomas P; Wang, Dong

    2017-07-26

    Rapid improvements in nonfullerene polymer solar cells (PSCs) have brought power conversion efficiencies to greater than 12%. To further improve device performance, a fundamental understanding of the correlations between structure and performance is essential. In this paper, based on a typical high-performance system consisting of J61 (a donor-acceptor (D-A) copolymer of benzodithiophene and fluorine-substituted benzotriazole) and ITIC (3,9-bis(2-methylene-(3-(1,1-dicyanomethylene)-indanone)-5,5,11,11-tetrakis(4-hexylphenyl)-dithieno[2,3-d:2',3'-d']-s-indaceno[1,2-b:5,6-b']-dithiophene), a 3D structural model is directly imaged by employing high-resolution atomic force microscopy (AFM). Hierarchical morphologies ranging from fiberlike crystallites, several nanometers in size, to a bicontinuous morphology, having domains tens of nanometers in size, are observed. A fibrillar interpenetrating network of J61-rich domains embedded in a matrix comprised of J61/ITIC is seen, reflecting the partial miscibility of J61 with ITIC. These hierarchical nanostructural characteristics are coupled to significantly enhanced exciton dissociation, and further contribute to the photocurrent and final device performance.

  5. Study of a pursuit-evasion guidance law for high performance aircraft

    NASA Technical Reports Server (NTRS)

    Williams, Peggy S.; Menon, P. K. A.; Antoniewicz, Robert F.; Duke, Eugene L.

    1989-01-01

    The study of a one-on-one aircraft pursuit-evasion guidance scheme for high-performance aircraft is discussed. The research objective is to implement a guidance law derived earlier using differential game theory in conjunction with the theory of feedback linearization. Unlike earlier research in this area, the present formulation explicitly recognizes the two-sided nature of the pursuit-evasion scenario. The present research implements the guidance law in a realistic model of a modern high-performance fighter aircraft. Also discussed are the details of the guidance law, its implementation in a highly detailed simulation of a high-performance fighter, and numerical results for two engagement geometries. Modifications of the guidance law for onboard implementation are also discussed.

  6. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  7. Performance Analysis of Transposition Models Simulating Solar Radiation on Inclined Surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Yu; Sengupta, Manajit

    2016-06-02

    Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined photovoltaic panels. Following numerous studies comparing the performance of transposition models, this work aims to understand the quantitative uncertainty in state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two highly used isotropic transposition models, with one substantially underestimating the diffuse plane-of-array irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of the empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for the future development of physics-based transposition models and evaluations of system performance.
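
    For reference, the classic isotropic-sky (Liu-Jordan) transposition, one of the simplest members of the model family studied here, can be written in a few lines; the irradiance numbers in the example are hypothetical.

```python
import math

def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    """Plane-of-array irradiance under the isotropic-sky (Liu-Jordan)
    transposition model. Irradiances in W/m^2, angles in degrees."""
    beta = math.radians(tilt_deg)
    beam = dni * max(math.cos(math.radians(aoi_deg)), 0.0)  # direct beam
    sky_diffuse = dhi * (1 + math.cos(beta)) / 2            # isotropic sky dome
    ground = ghi * albedo * (1 - math.cos(beta)) / 2        # ground reflection
    return beam + sky_diffuse + ground

# A 30-degree tilt with the sun 20 degrees off the panel normal:
poa = poa_isotropic(dni=700, dhi=100, ghi=650, tilt_deg=30, aoi_deg=20)
```

    Empirical models such as Perez add anisotropic circumsolar and horizon-brightening terms on top of this baseline, which is where the coefficient-selection uncertainty discussed above enters.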

  8. Flexible Fabrics with High Thermal Conductivity for Advanced Spacesuits

    NASA Technical Reports Server (NTRS)

    Trevino, Luis A.; Bue, Grant; Orndoff, Evelyne; Kesterson, Matt; Connel, John W.; Smith, Joseph G., Jr.; Southward, Robin E.; Working, Dennis; Watson, Kent A.; Delozier, Donovan M.

    2006-01-01

    This paper describes the effort and accomplishments for developing flexible fabrics with high thermal conductivity (FFHTC) for spacesuits to improve thermal performance, lower weight and reduce complexity. Commercial and additional space exploration applications that require substantial performance enhancements in removal and transport of heat away from equipment as well as from the human body can benefit from this technology. Improvements in thermal conductivity were achieved through the use of modified polymers containing thermally conductive additives. The objective of the FFHTC effort is to significantly improve the thermal conductivity of the liquid cooled ventilation garment by improving the thermal conductivity of the subcomponents (i.e., fabric and plastic tubes). This paper presents the initial system modeling studies, including a detailed liquid cooling garment model incorporated into the Wissler human thermal regulatory model, to quantify the necessary improvements in thermal conductivity and garment geometries needed to affect system performance. In addition, preliminary results of thermal conductivity improvements of the polymer components of the liquid cooled ventilation garment are presented. By improving thermal garment performance, major technology drivers will be addressed for lightweight, high thermal conductivity, flexible materials for spacesuits that are strategic technical challenges of the Exploration

  9. Polarization-mediated Debye-screening of surface potential fluctuations in dual-channel AlN/GaN high electron mobility transistors

    NASA Astrophysics Data System (ADS)

    Deen, David A.; Miller, Ross A.; Osinsky, Andrei V.; Downey, Brian P.; Storm, David F.; Meyer, David J.; Scott Katzer, D.; Nepal, Neeraj

    2016-12-01

    A dual-channel AlN/GaN/AlN/GaN high electron mobility transistor (HEMT) architecture is proposed, simulated, and demonstrated that suppresses gate lag due to surface-originated trapped charge. Dual two-dimensional electron gas (2DEG) channels are utilized such that the top 2DEG serves as an equipotential that screens potential fluctuations resulting from surface trapped charge. The bottom channel serves as the transistor's modulated channel. Two device modeling approaches have been performed as a means to guide the device design and to elucidate the relationship between the design and performance metrics. The modeling efforts include a self-consistent Poisson-Schrodinger solution for electrostatic simulation as well as hydrodynamic three-dimensional device modeling for three-dimensional electrostatics, steady-state, and transient simulations. Experimental results validated the HEMT design whereby homo-epitaxial growth on free-standing GaN substrates and fabrication of the same-wafer dual-channel and recessed-gate AlN/GaN HEMTs have been demonstrated. Notable pulsed-gate performance has been achieved by the fabricated HEMTs through a gate lag ratio of 0.86 with minimal drain current collapse while maintaining high levels of dc and rf performance.

  10. Automating the generation of finite element dynamical cores with Firedrake

    NASA Astrophysics Data System (ADS)

    Ham, David; Mitchell, Lawrence; Homolya, Miklós; Luporini, Fabio; Gibson, Thomas; Kelly, Paul; Cotter, Colin; Lange, Michael; Kramer, Stephan; Shipton, Jemma; Yamazaki, Hiroe; Paganini, Alberto; Kärnä, Tuomas

    2017-04-01

    The development of a dynamical core is an increasingly complex software engineering undertaking. As the equations become more complete, the discretisations more sophisticated and the hardware acquires ever more fine-grained parallelism and deeper memory hierarchies, the problem of building, testing and modifying dynamical cores becomes increasingly complex. Here we present Firedrake, a code generation system for the finite element method with specialist features designed to support the creation of geoscientific models. Using Firedrake, the dynamical core developer writes the partial differential equations in weak form in a high level mathematical notation. Appropriate function spaces are chosen and time stepping loops are written at the same high level. When the programme is run, Firedrake generates high performance C code for the resulting numerics, which is executed in parallel. Models in Firedrake typically take a tiny fraction of the lines of code required by traditional hand-coding techniques. They support more sophisticated numerics than are easily achieved by hand, and the resulting code is frequently higher performance. Critically, debugging, modifying and extending a model written in Firedrake is vastly easier than by traditional methods due to the small, highly mathematical code base. Firedrake supports a wide range of key features for dynamical core creation:
    - a vast range of discretisations, including both continuous and discontinuous spaces and mimetic (C-grid-like) elements which optimally represent force balances in geophysical flows;
    - high aspect ratio layered meshes suitable for ocean and atmosphere domains;
    - curved elements for high accuracy representations of the sphere;
    - support for non-finite element operators, such as parametrisations;
    - access to PETSc, a world-leading library of programmable linear and nonlinear solvers;
    - high performance adjoint models generated automatically by symbolically reasoning about the forward model.
    This poster will present the key features of the Firedrake system, as well as those of Gusto, an atmospheric dynamical core, and Thetis, a coastal ocean model, both of which are written in Firedrake.
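
    Firedrake generates this kind of machinery automatically from the weak form. As a rough, hand-written illustration of what is being automated, the sketch below assembles and solves a one-dimensional Poisson problem with piecewise-linear elements (a toy example, not Firedrake code).

```python
import numpy as np

def solve_poisson_1d(n_elems):
    """Solve -u'' = 1 on (0, 1) with u(0) = u(1) = 0 using piecewise-linear
    finite elements: the kind of element-by-element assembly loop that a
    code generator like Firedrake emits from the weak form."""
    h = 1.0 / n_elems
    n = n_elems + 1
    K = np.zeros((n, n))           # global stiffness matrix
    f = np.zeros(n)                # global load vector
    for e in range(n_elems):       # assemble element contributions
        i, j = e, e + 1
        K[i, i] += 1 / h; K[j, j] += 1 / h
        K[i, j] -= 1 / h; K[j, i] -= 1 / h
        f[i] += h / 2; f[j] += h / 2
    # Apply the Dirichlet boundary conditions u(0) = u(1) = 0
    K[0, :] = K[-1, :] = 0
    K[0, 0] = K[-1, -1] = 1
    f[0] = f[-1] = 0
    return np.linalg.solve(K, f)

u = solve_poisson_1d(8)   # exact solution is u(x) = x(1 - x)/2
```

    In Firedrake the same problem is a few lines of UFL, and the assembly loop, parallelisation and solver calls are all generated; here every step is spelled out by hand.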

  11. On-Board Propulsion System Analysis of High Density Propellants

    NASA Technical Reports Server (NTRS)

    Schneider, Steven J.

    1998-01-01

    The impact of the performance and density of on-board propellants on the science payload mass of Discovery Program class missions is evaluated. A propulsion system dry mass model, anchored on flight-weight system data from the Near Earth Asteroid Rendezvous mission, is used. This model is used to evaluate the performance of liquid oxygen, hydrogen peroxide, hydroxylammonium nitrate, and oxygen difluoride oxidizers with hydrocarbon and metal hydride fuels. Results for the propellants evaluated indicate that the state-of-the-art Earth-storable propellants with high performance rhenium engine technology in both the axial and attitude control systems have performance capabilities that can only be exceeded by liquid oxygen/hydrazine, liquid oxygen/diborane and oxygen difluoride/diborane propellant combinations. Potentially lower ground operations costs are the incentive for working with nontoxic propellant combinations.

  12. Accumulative Probability Model for Automated Network Traffic Analyses

    DOT National Transportation Integrated Search

    1972-10-01

    The report presents an illustration of the accumulative probability model, which is applicable to ground transportation systems where high speed and close headways are a performance requirement. The paper describes the model, illustrates it with a hyp...

  13. Sensor fusion display evaluation using information integration models in enhanced/synthetic vision applications

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1993-01-01

    Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for the determination as to when a sensor fusion system leads to: poorer performance than one of the original sensor displays, clearly an undesirable system in which the fused sensor system causes some distortion or interference; better performance than with either single sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or, super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays.
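
    One concrete normative baseline of the kind described, drawn from the psychophysics literature, is probability summation over independent detections; the grading logic below is a simplified sketch of the framework's four outcome categories, with made-up detection probabilities.

```python
def probability_summation(p1, p2):
    """Independent-detections baseline: predicted detection probability
    when the observer can exploit either sensor image independently."""
    return 1 - (1 - p1) * (1 - p2)

def classify_fusion_outcome(p_fused_observed, p1, p2):
    """Grade an observed fused-display score against the single-sensor
    scores and the normative model prediction."""
    predicted = probability_summation(p1, p2)
    if p_fused_observed < max(p1, p2):
        return "interference"        # worse than the best single sensor
    if p_fused_observed < predicted:
        return "sub-optimal"
    if p_fused_observed > predicted:
        return "super-optimal"       # emergent features exploited
    return "optimal"

# Hypothetical scores: 0.70 and 0.80 with the component sensors alone.
outcome = classify_fusion_outcome(0.95, p1=0.70, p2=0.80)
```

    With these numbers the baseline predicts 0.94, so an observed fused score of 0.95 would be graded super-optimal, suggesting emergent features in the fused image.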

  14. Users matter : multi-agent systems model of high performance computing cluster users.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Hood, C. S.; Decision and Information Sciences

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  15. Antihypoxants, thiasolo[5,4-b]indole derivatives, increase exercise performance in rats and mice.

    PubMed

    Marysheva, V V; Shabanov, P D

    2009-01-01

    The actoprotective activity of 12 new antihypoxants of the thiasolo[5,4-b]indole series was studied on the model of treadmill running until exhaustion 1 and 24 h after intraperitoneal injection. Highly active compounds, more effective than the reference drugs bemithyl and phenamine, were found. They increased exercise performance 1 or 24 h after injection or maintained high performance throughout 24 h.

  16. A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System

    NASA Astrophysics Data System (ADS)

    Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.

    2017-10-01

    A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines like physics, biology, and electrical engineering, as well as in the social sciences like economics, sociology and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high performance processors and advanced mathematical computations, it is possible to develop high-performing simulators for complicated Multi Input Multi Output (MIMO) systems like quadruple tank systems, aircraft, boilers, etc. This paper presents the development of the mathematical model of a 500 MW utility boiler, which is a highly complex system. A synergistic combination of operational experience, system identification and lower order modeling philosophy has been effectively used to develop a simplified but accurate model of the circulation system of a utility boiler, which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through the design procedure. The model obtained can be directly used for control system studies and to realize hardware simulators for boiler testing and operator training.

  17. GOCO05c: A New Combined Gravity Field Model Based on Full Normal Equations and Regionally Varying Weighting

    NASA Astrophysics Data System (ADS)

    Fecher, T.; Pail, R.; Gruber, T.

    2017-05-01

    GOCO05c is a gravity field model computed as a combined solution of a satellite-only model and a global data set of gravity anomalies. It is resolved up to degree and order 720. It is the first model applying regionally varying weighting. Since this causes strong correlations among all gravity field parameters, the resulting full normal equation system with a size of 2 TB had to be solved rigorously by applying high-performance computing. GOCO05c is the first combined gravity field model independent of EGM2008 that contains GOCE data of the whole mission period. The performance of GOCO05c is externally validated by GNSS-levelling comparisons, orbit tests, and computation of the mean dynamic topography, achieving at least the quality of existing high-resolution models. Results show that the additional GOCE information is highly beneficial in insufficiently observed areas, and that due to the weighting scheme of individual data the spectral and spatial consistency of the model is significantly improved. Due to usage of fill-in data in specific regions, the model cannot be used for physical interpretations in these regions.

  18. High-frequency analysis of Earth gravity field models based on terrestrial gravity and GPS/levelling data: a case study in Greece

    NASA Astrophysics Data System (ADS)

    Papanikolaou, T. D.; Papadopoulos, N.

    2015-06-01

    The present study aims at the validation of global gravity field models through numerical investigation of gravity field functionals, based on spherical harmonic synthesis of the geopotential models and the analysis of terrestrial data. We examine gravity models produced according to the latest approaches for gravity field recovery based on the principles of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) and Gravity Recovery And Climate Experiment (GRACE) satellite missions. Furthermore, we evaluate the overall spectrum of the ultra-high-degree combined gravity models EGM2008 and EIGEN-6C3stat. The terrestrial data consist of gravity and collocated GPS/levelling data in the overall Hellenic region. The software presented here implements the algorithm of spherical harmonic synthesis in a degree-wise cumulative sense. This approach may quantify the band-limited performance of the individual models by monitoring the degree-wise computed functionals against the terrestrial data. The degree-wise analysis yields insight into the short wavelengths of the Earth's gravity field as these are expressed by the high-degree harmonics.

  19. Physical fitness predicts technical-tactical and time-motion profile in simulated Judo and Brazilian Jiu-Jitsu matches.

    PubMed

    Coswig, Victor S; Gentil, Paulo; Bueno, João C A; Follmer, Bruno; Marques, Vitor A; Del Vecchio, Fabrício B

    2018-01-01

    Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) present elevated physical fitness demands from the high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, in the second, simulated fights were performed for later notational analysis. The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis in a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variances in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explain time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights.

  20. Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.

    PubMed

    Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders

    2018-05-02

    Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias for the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
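    The paper's central point about global metrics can be reproduced with a toy calculation: when a small minority of highly toxic compounds is predicted poorly, the global RMSE stays low while the per-bin RMSE for that minority is much worse. All numbers below are synthetic.

    ```python
    import numpy as np

    # Sketch: why a global RMSE can hide poor performance on highly toxic
    # compounds when the sample distribution is uneven (values illustrative).
    rng = np.random.default_rng(2)
    # 950 low-toxicity compounds predicted well, 50 high-toxicity ones predicted poorly
    y_low  = rng.normal(2.0, 0.5, 950); pred_low  = y_low  + rng.normal(0, 0.2, 950)
    y_high = rng.normal(6.0, 0.5, 50);  pred_high = y_high + rng.normal(0, 1.5, 50)

    y_all    = np.concatenate([y_low, y_high])
    pred_all = np.concatenate([pred_low, pred_high])

    def rmse(y, p):
        return float(np.sqrt(np.mean((y - p) ** 2)))

    global_rmse = rmse(y_all, pred_all)     # dominated by the populous low-toxicity bin
    high_rmse   = rmse(y_high, pred_high)   # far worse than the global metric suggests
    ```

    Evaluating per activity bin, as the authors do, exposes the weakness that a single global number averages away.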

  1. High Stakes Testing in Lower-Performing High Schools: Mathematics Teachers' Perceptions of Burnout and Retention

    ERIC Educational Resources Information Center

    Kirtley, Karmen

    2012-01-01

    This dissertation grows from a concern that the current public school accountability model, designed ostensibly to increase achievement in lower-performing schools, may be creating unidentified negative consequences for teachers and students within those schools. This hermeneutical phenomenological study features the perceptions of seventeen…

  2. Strategic Culture Change: The Door to Achieving High Performance and Inclusion.

    ERIC Educational Resources Information Center

    Miller, Frederick A.

    1998-01-01

    Presents diversity as a resource to create a high performing work culture that enables all employees to do their best work. Distinguishes between diversity and inclusion, describes a model for diagnosing an organization's culture, sets forth steps for implementing an organizational change, and discusses the human resource professional's role.…

  3. Performance-Based Task Assessment of Higher-Order Proficiencies in Redesigned STEM High Schools

    ERIC Educational Resources Information Center

    Ernst, Jeremy V.; Glennie, Elizabeth; Li, Songze

    2017-01-01

    This study explored student abilities in applying conceptual knowledge when presented with structured performance tasks. Specifically, the study gauged proficiency in higher-order applications of students enrolled in earth and environmental science or biology. The student sample was drawn from a Redesigned STEM high school model where a tested…

  4. Performance and Reliability of Bonded Interfaces for High-temperature Packaging: Annual Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVoto, Douglas J.

    2017-10-19

    As maximum device temperatures approach 200 °C under continuous operation, sintered silver materials promise to maintain bonds at these high temperatures without excessive degradation rates. A detailed characterization of the thermal performance and reliability of sintered silver materials and processes has been initiated for the next year. Future steps in crack modeling include efforts to simulate crack propagation directly using the extended finite element method (X-FEM), a numerical technique that uses the partition of unity method to model discontinuities such as cracks in a system.

  5. Performance and Reliability of Bonded Interfaces for High-Temperature Packaging; NREL (National Renewable Energy Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVoto, Douglas

    2015-06-10

    This is a technical review of the DOE VTO EDT project EDT063, Performance and Reliability of Bonded Interfaces for High-Temperature Packaging. A procedure for analyzing the reliability of sintered-silver through experimental thermal cycling and crack propagation modeling has been outlined and results have been presented.

  6. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are costly in labour time or money, a choice has to be made between high sampling resolution at small scales with low spatial coverage of the study area, or lower small-scale sampling resolution, with the resulting local data uncertainties, but better coverage of the whole area. This dilemma is often faced when designing field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence in a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. We built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. The high small-scale variability was then added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points obtain, on average, an explained deviance of 0.20 and a correlation coefficient of 0.64.
    For larger numbers of sampling points per field, we averaged the measured abundances within each field to obtain a more representative field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38, correlation coefficient 0.73); with 50 sampling points per field the criteria reached 0.91 and 0.97, respectively. The relationship between the number of samplings and the performance criteria can be described by a saturation curve, and beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance, and the implications for sampling design, the assessment of model results, and ecological inferences.
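    A minimal sketch of this virtual-sampling effect: with high small-scale variability, the error of a field-mean estimate shrinks roughly as sigma/sqrt(n) with the number of sample points n, which produces exactly the kind of saturating improvement the authors report. Abundance and variability values below are hypothetical.

    ```python
    import numpy as np

    # Sketch: averaging n random sample points per field shrinks the error of the
    # estimated field mean roughly as sigma/sqrt(n), so model improvement
    # saturates beyond a few samples (numbers are illustrative).
    rng = np.random.default_rng(3)
    true_field_mean = 12.0       # hypothetical anecic earthworm abundance per field
    sigma_small_scale = 6.0      # hypothetical high small-scale variability

    def field_estimate_error(n_samples, reps=2000):
        """Monte-Carlo std. dev. of the field-mean estimate from n sample points."""
        samples = rng.normal(true_field_mean, sigma_small_scale, (reps, n_samples))
        return float(np.std(samples.mean(axis=1)))

    errors = {n: field_estimate_error(n) for n in (1, 2, 5, 50)}
    ```

    The error drops steeply from one to five samples and only marginally thereafter, mirroring the saturation curve in the abstract.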

  7. CAD and CAE Analysis for Siphon Jet Toilet

    NASA Astrophysics Data System (ADS)

    Wang, Yuhua; Xiu, Guoji; Tan, Haishu

    A high-precision 3D laser scanner with dual-CCD technology was used to measure the original design sample of a siphon jet toilet. The digital toilet model was constructed from the measured cloud data with curve- and surface-fitting technology and the CAD/CAE systems. The Realizable k-ε two-equation eddy-viscosity turbulence model and the VOF multiphase flow model were used to simulate the flushing flow in the digital toilet model. By simulating and analyzing the distribution of the flushing flow's total pressure and the flow speed at the toilet-basin surface and the siphoning bent tube, the toilet performance can be evaluated efficiently and conveniently. The method of "establishing the digital model, simulating the flushing flow, evaluating performance, modifying the functional shape" provides a highly efficient approach to developing new water-saving toilets.

  8. A brief dataset on the model-based evaluation of the growth performance of Bacillus coagulans and l-lactic acid production in a lignin-supplemented medium.

    PubMed

    Glaser, Robert; Venus, Joachim

    2017-04-01

    The data presented in this article are related to the research article entitled "Model-based characterization of growth performance and l-lactic acid production with high optical purity by thermophilic Bacillus coagulans in a lignin-supplemented mixed substrate medium (R. Glaser and J. Venus, 2016) [1]". This data survey provides the information on characterization of three Bacillus coagulans strains. Information on cofermentation of lignocellulose-related sugars in lignin-containing media is given. Basic characterization data are supported by optical-density high-throughput screening and parameter adjustment to logistic growth models. Lab scale fermentation procedures are examined by model adjustment of a Monod kinetics-based growth model. Lignin consumption is analyzed using the data on decolorization of a lignin-supplemented minimal medium.
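    The logistic-growth parameter adjustment mentioned in the abstract can be sketched as follows: with the capacity K treated as known, the logistic solution linearizes under a log transform, and the growth rate can be recovered by a straight-line fit. All values below are synthetic, not from the dataset.

    ```python
    import numpy as np

    # Sketch: adjusting a logistic growth model to optical-density-style data,
    # in the spirit of the paper's parameter adjustment (synthetic data; the
    # capacity K is assumed known here for the linearization).
    K, N0, r = 1.8, 0.05, 0.6            # capacity (OD), inoculum, growth rate (1/h)
    t = np.linspace(0, 20, 50)
    N = K / (1 + (K - N0) / N0 * np.exp(-r * t))   # logistic solution

    # With K fixed, ln((K - N)/N) is linear in t with slope -r
    mask = N < K * 0.999                 # drop points at saturation (log of ~0)
    slope, intercept = np.polyfit(t[mask], np.log((K - N[mask]) / N[mask]), 1)
    r_hat = -slope
    ```

    With noisy screening data the same transform still gives a good starting estimate before a full nonlinear (e.g. Monod-kinetics-based) fit.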

  9. Synergia: an accelerator modeling tool with 3-D space charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amundson, James F.; Spentzouris, P.; /Fermilab

    2004-07-01

    High precision modeling of space-charge effects, together with accurate treatment of single-particle dynamics, is essential for designing future accelerators as well as optimizing the performance of existing machines. We describe Synergia, a high-fidelity parallel beam dynamics simulation package with fully three dimensional space-charge capabilities and a higher order optics implementation. We describe the computational techniques, the advanced human interface, and the parallel performance obtained using large numbers of macroparticles. We also perform code benchmarks comparing to semi-analytic results and other codes. Finally, we present initial results on particle tune spread, beam halo creation, and emittance growth in the Fermilab booster accelerator.

  10. A Stewart isolator with high-static-low-dynamic stiffness struts based on negative stiffness magnetic springs

    NASA Astrophysics Data System (ADS)

    Zheng, Yisheng; Li, Qingpin; Yan, Bo; Luo, Yajun; Zhang, Xinong

    2018-05-01

    In order to improve the isolation performance of passive Stewart platforms, the negative stiffness magnetic spring (NSMS) is employed to construct high-static-low-dynamic-stiffness (HSLDS) struts. With the NSMS, the resonance frequencies of the platform can be reduced effectively without deteriorating its load-bearing capacity. The model of the Stewart isolation platform with HSLDS struts is presented and the stiffness characteristic of its struts is studied first. Then the nonlinear dynamic model of the platform, including both geometric nonlinearity and stiffness nonlinearity, is established, and its simplified dynamic model is derived under the condition of small vibration. The effect of nonlinearity on the isolation performance is also evaluated. Finally, a prototype is built and the isolation performance is tested. Both simulated and experimental results demonstrate that, by using the NSMS, the resonance frequencies of the Stewart isolator are reduced and the isolation performance in all six directions is improved: the isolation frequency band is increased and extended to a lower-frequency level.
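    The HSLDS principle behind the struts can be illustrated with a back-of-the-envelope calculation: a negative-stiffness magnetic spring in parallel with a positive mechanical spring leaves the static load path of the positive spring intact while reducing the dynamic stiffness, and hence the resonance frequency. Spring constants and payload mass below are hypothetical.

    ```python
    import math

    # Sketch of the HSLDS idea: parallel combination of a positive mechanical
    # spring and a negative-stiffness magnetic spring (all values illustrative).
    k_pos = 4.0e4        # mechanical spring stiffness (N/m), carries the static load
    k_neg = -3.0e4       # negative stiffness of the magnetic spring (N/m)
    m = 25.0             # payload mass per strut (kg)

    k_dynamic = k_pos + k_neg                       # effective dynamic stiffness
    f_without = math.sqrt(k_pos / m) / (2 * math.pi)    # resonance, plain spring
    f_with    = math.sqrt(k_dynamic / m) / (2 * math.pi)  # resonance, HSLDS strut
    ```

    Here the strut resonance drops from about 6.4 Hz to about 3.2 Hz while the mechanical spring still carries the full static load, which is the trade-off the paper exploits in all six directions of the platform.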

  11. Staggered-grid finite-difference acoustic modeling with the Time-Domain Atmospheric Acoustic Propagation Suite (TDAAPS).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldridge, David Franklin; Collier, Sandra L.; Marlin, David H.

    2005-05-01

    This document is intended to serve as a user's guide for the Time-Domain Atmospheric Acoustic Propagation Suite (TDAAPS) program developed as part of the Department of Defense High-Performance Modernization Office (HPCMP) Common High-Performance Computing Scalable Software Initiative (CHSSI). TDAAPS performs staggered-grid finite-difference modeling of the acoustic velocity-pressure system with the incorporation of spatially inhomogeneous winds. Wherever practical, the control structure of the code is written in C++ using an object-oriented design. Sections of code where a large number of calculations are required are written in C or F77 to enable better compiler optimization. The TDAAPS program conforms to a UNIX-style calling interface, and most of its actions are controlled by adding flags to the invoking command line. This document presents a large number of examples and provides new users with the necessary background to perform acoustic modeling with TDAAPS.
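    The staggered-grid velocity-pressure scheme that TDAAPS implements can be sketched in one dimension, without the wind terms, as a leapfrog update of pressure on integer grid points and velocity on half-integer points. Grid, time step, and material values below are illustrative.

    ```python
    import numpy as np

    # Sketch of a 1-D staggered-grid velocity-pressure update (no wind terms):
    #   dp/dt = -rho*c^2 * dv/dx ,   dv/dt = -(1/rho) * dp/dx
    # Grid sizes and material values are illustrative; CFL = c*dt/dx = 0.34 < 1.
    nx, dx, dt = 200, 1.0, 1e-3
    rho, c = 1.2, 340.0                  # air density (kg/m^3), sound speed (m/s)
    p = np.zeros(nx)                     # pressure at integer grid points
    v = np.zeros(nx + 1)                 # velocity at half-integer (staggered) points
    p[nx // 2] = 1.0                     # initial pressure pulse

    for _ in range(100):
        v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])     # velocity half-step
        p -= dt * rho * c**2 / dx * (v[1:] - v[:-1])      # pressure step
    ```

    The real suite adds inhomogeneous wind advection terms and three dimensions, but the staggered layout and leapfrog time stepping are the same idea.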

  12. Toward a Model-Based Predictive Controller Design in Brain–Computer Interfaces

    PubMed Central

    Kamrunnahar, M.; Dias, N. S.; Schiff, S. J.

    2013-01-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain–computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, non-model-based filter applications. The parameters for designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive (AR) model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor imagery task discrimination, with performance (8–23% task discrimination errors) comparable to that of commonly used features such as frequency-specific band powers and directly used AR model parameters. An optimal MPC has significant implications for high-performance BCI applications. PMID:21267657
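    The AR-model feature extraction the article relies on can be sketched as a least-squares fit of autoregressive coefficients to a single EEG-like channel; the coefficient vector then serves as the feature input for task discrimination. The signal, sampling rate, and model order below are illustrative, not the study's actual settings.

    ```python
    import numpy as np

    # Sketch: extract AR coefficients from an EEG-like signal as features
    # (synthetic signal; order and sampling rate are illustrative).
    rng = np.random.default_rng(4)
    t = np.arange(1000) / 250.0                      # 250 Hz sampling, 4 s trial
    x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(1000)  # 10 Hz rhythm + noise

    order = 6
    # Fit x[k] = sum_{i=1..order} a_i * x[k-i] by least squares
    N = len(x)
    y = x[order:]
    Phi = np.column_stack([x[order - i: N - i] for i in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    features = a    # AR coefficient vector, one feature set per trial/channel
    ```

    Repeating this per channel and per trial gives the feature matrix that a classifier, or the model-based predictive controller, would consume.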

  13. Toward a model-based predictive controller design in brain-computer interfaces.

    PubMed

    Kamrunnahar, M; Dias, N S; Schiff, S J

    2011-05-01

    A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to the performance achieved by current ad hoc, non-model-based filter applications. The parameters for designing the controller were extracted as model-based features from motor imagery task-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive (AR) model that has well-established applications in BCI task discrimination. It was shown that the parameters generated for the controller design can also be used for motor imagery task discrimination, with performance (8-23% task discrimination errors) comparable to that of commonly used features such as frequency-specific band powers and directly used AR model parameters. An optimal MPC has significant implications for high-performance BCI applications.

  14. Progress with variable cycle engines

    NASA Technical Reports Server (NTRS)

    Westmoreland, J. S.

    1980-01-01

    The evaluation of components of an advanced propulsion system for a future supersonic cruise vehicle is discussed. These components, a high-performance duct burner for thrust augmentation and a low-jet-noise coannular exhaust nozzle, are part of the variable stream control engine. An experimental test program involving both isolated component and complete engine tests was conducted for the high-performance, low-emissions duct burner with excellent results. Nozzle model tests were completed which substantiate the inherent jet noise benefit associated with the unique velocity profile made possible by a coannular exhaust nozzle system on a variable stream control engine. Additional nozzle model performance tests have established high thrust efficiency levels at takeoff and supersonic cruise for this nozzle system. Large-scale testing of these two critical components is conducted using an F100 engine as the testbed for simulating the variable stream control engine.

  15. Evaluation of global climate model on performances of precipitation simulation and prediction in the Huaihe River basin

    NASA Astrophysics Data System (ADS)

    Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi

    2017-06-01

    Using climate models with high performance to predict future climate change increases the reliability of results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared with measured data during the baseline period (1960-2000) to evaluate their precipitation simulation performance. Since the results of single climate models are often biased and highly uncertain, we examine the back-propagation (BP) neural network and the arithmetic mean method for assembling the precipitation of multiple models. The delta method was used to calibrate the results of the single models and of the multimodel ensemble by arithmetic mean (MME-AM) during the validation period (2001-2010) and the prediction period (2011-2100). We then used the single models and the multimodel ensembles to predict the future precipitation process and its spatial distribution. The results show that the BNU-ESM model has the best simulation performance among the single models. The multimodel ensemble assembled by the BP neural network (MME-BP) simulates the annual average precipitation process well, with a deterministic coefficient of 0.814 during the validation period. The simulation capability for the spatial distribution of precipitation ranks: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase over time, and the average increase amplitude by season ranks: winter > spring > summer > autumn. These findings can provide useful information for decision makers preparing climate-related disaster mitigation plans.
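    The delta-method calibration mentioned above can be sketched in its common multiplicative form: future model precipitation is scaled by the ratio of observed to modelled baseline climatology. The monthly values below are invented for illustration.

    ```python
    import numpy as np

    # Sketch: multiplicative delta-method bias correction of model precipitation
    # (one common form of the method; all monthly values are illustrative).
    obs_baseline   = np.array([45.0, 60.0, 110.0, 80.0])    # observed mean precip (mm)
    model_baseline = np.array([50.0, 50.0, 100.0, 100.0])   # GCM mean, same period
    model_future   = np.array([55.0, 65.0, 120.0,  90.0])   # raw GCM projection (mm)

    correction = obs_baseline / model_baseline              # per-month scaling factor
    corrected_future = model_future * correction            # calibrated projection
    ```

    The same per-month (or per-season) factors derived over the baseline period are applied unchanged to the projection period, which is the method's key assumption.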

  16. Improving low-performing high schools: searching for evidence of promise.

    PubMed

    Fleischman, Steve; Heppen, Jessica

    2009-01-01

    Noting that many of the nation's high schools are beset with major problems, such as low student reading and math achievement, high dropout rates, and an inadequate supply of effective teachers, Steve Fleischman and Jessica Heppen survey a range of strategies that educators have used to improve low-performing high schools. The authors begin by showing how the standards-based school reform movement, together with the No Child Left Behind Act requirement that underperforming schools adopt reforms supported by scientifically based research, spurred policy makers, educators, and researchers to create and implement a variety of approaches to attain improvement. Fleischman and Heppen then review a number of widely adopted reform models that aim to change "business as usual" in low-performing high schools. The models include comprehensive school reform programs, dual enrollment and early college high schools, smaller learning communities, specialty (for example, career) academies, charter high schools, and education management organizations. In practice, say the authors, many of these improvement efforts overlap, defying neat distinctions. Often, reforms are combined to reinforce one another. The authors explain the theories that drive the reforms, review evidence of their reforms' effectiveness to date, and suggest what it will take to make them work well. Although the reforms are promising, the authors say, few as yet have solid evidence of systematic or sustained success. In concluding, Fleischman and Heppen emphasize that the reasons for a high school's poor performance are so complex that no one reform model or approach, no matter how powerful, can turn around low-performing schools. They also stress the need for educators to implement each reform program with fidelity to its requirements and to support it for the time required for success. 
Looking to the future, the authors suggest steps that decision makers, researchers, and sponsors of research can take to promote evidence-based progress in education.

  17. DMI's Baltic Sea Coastal operational forecasting system

    NASA Astrophysics Data System (ADS)

    Murawski, Jens; Berg, Per; Weismann Poulsen, Jacob

    2017-04-01

    Operational forecasting is challenged with bridging the gap between the large scales of the driving weather systems and the local, human scales of the model applications. The limit of what can be represented by a local model has been shifted continuously toward higher spatial resolution, with the aim of better resolving the local dynamics and describing processes that could only be parameterised in older versions, and with the ultimate goal of improving the quality of the forecast. Current hardware trends demand a stronger focus on the development of efficient, highly parallelised software and require a refactoring of the code with a solid focus on portable performance. The gained performance can be used for running high-resolution models with larger coverage. Together with the development of efficient two-way nesting routines, this has made it possible to approach the near-coastal zone with model applications that can run in a time-effective way. Denmark's Meteorological Institute uses the HBM(1) ocean circulation model for applications covering the entire Baltic Sea and North Sea with an integrated model set-up that spans horizontal resolutions from 1 nm for the entire Baltic Sea to approximately 200 m in local fjords (Limfjord). For the next model generation, the high-resolution set-ups are being extended, and new high-resolution domains in coastal zones are either implemented or being tested for operational use. For the first time it will be possible to cover large stretches of the Baltic coastal zone with sufficiently high resolution to model the local hydrodynamics adequately. (1) HBM stands for HIROMB-BOOS Model, where HIROMB stands for "High Resolution Model for the Baltic Sea" and BOOS stands for "Baltic Operational Oceanography System".

  18. Argobots: A Lightweight Low-Level Threading and Tasking Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seo, Sangmin; Amer, Abdelhalim; Balaji, Pavan

    In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to particular applications or architectures, or not sufficiently powerful or flexible. In this paper, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with a rich set of controls allowing specialization by end users or high-level programming models. We describe the design, implementation, and performance characterization of Argobots and present integrations with three high-level models: OpenMP, MPI, and colocated I/O services. Evaluations show that (1) Argobots, while providing richer capabilities, is competitive with existing simpler generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency-hiding capabilities; and (4) I/O services with Argobots reduce interference with colocated applications while achieving performance competitive with that of a Pthreads approach.

  19. Monte-Carlo modelling to determine optimum filter choices for sub-microsecond optical pyrometry.

    PubMed

    Ota, Thomas A; Chapman, David J; Eakins, Daniel E

    2017-04-01

    When designing a spectral-band pyrometer for use at high time resolutions (sub-μs), there is ambiguity regarding the optimum characteristics of the spectral filter(s). In particular, while prior work has discussed uncertainties in spectral-band pyrometry, there has been little discussion of the effects of noise, which is an important consideration in time-resolved, high-speed experiments. Using a Monte-Carlo process to simulate the effects of noise, a model of collection from a black body has been developed to give insight into the optimum choices of centre wavelength and passband width. The model was validated and then used to explore the effects of centre wavelength and passband width on measurement uncertainty. This reveals a transition centre wavelength below which uncertainties in calculated temperature are high. To further investigate system performance, simultaneous variation of the centre wavelength and passband width of a filter is investigated. Using data reduction, the effects of temperature and noise levels are illustrated and an empirical approximation is determined. The results presented show that filter choice can significantly affect instrument performance and, while best practice requires detailed modelling to achieve optimal performance, the expression presented can be used to aid filter selection.
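    The core of such a Monte-Carlo model can be sketched as follows: integrate Planck's law over a filter passband to get the collected signal, perturb it with noise, and invert the (monotone) signal-temperature curve to see how noise maps into temperature uncertainty for a given centre wavelength and passband width. All instrument parameters and the noise level below are illustrative.

    ```python
    import numpy as np

    # Sketch: single-band pyrometer Monte-Carlo (grey detector, no optics losses;
    # centre wavelength, passband, temperature, and 1% noise are illustrative).
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

    def planck(lam, T):
        """Spectral radiance (W sr^-1 m^-3) at wavelength lam (m), temperature T (K)."""
        return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * kB * T)) - 1)

    def band_signal(T, centre, width, n=200):
        """Radiance integrated over a top-hat filter passband (simple Riemann sum)."""
        lam = np.linspace(centre - width / 2, centre + width / 2, n)
        return float(np.sum(planck(lam, T)) * (lam[1] - lam[0]))

    rng = np.random.default_rng(5)
    T_true, centre, width = 3000.0, 700e-9, 50e-9
    clean = band_signal(T_true, centre, width)

    # Invert noisy signals through the monotone signal-vs-temperature curve
    Ts = np.linspace(2000.0, 4000.0, 4001)
    curve = np.array([band_signal(T, centre, width) for T in Ts])
    noisy = clean * (1 + 0.01 * rng.standard_normal(500))   # 1% multiplicative noise
    T_est = np.interp(noisy, curve, Ts)
    T_spread = float(np.std(T_est))    # temperature uncertainty from signal noise
    ```

    Sweeping `centre` and `width` and recording `T_spread` reproduces the kind of uncertainty map the paper uses to identify the transition centre wavelength.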

  20. Organization-based Model-driven Development of High-assurance Multiagent Systems

    DTIC Science & Technology

    2009-02-27

    This record covers the project "Organization-based Model-driven Development of High-assurance Multiagent Systems" performed by Dr. Scott A. DeLoach and Dr. Robby at Kansas State University. Related publications include: A Capabilities Based Model for Artificial Organizations, Journal of Autonomous Agents and Multiagent Systems, vol. 16, no. 1, February 2008; and Matson, E. T. (2007), A capabilities based theory of artificial organizations, Journal of Autonomous Agents and Multiagent Systems.

  1. DO3SE model applicability and O3 flux performance compared to AOT40 for an O3-sensitive tropical tree species (Psidium guajava L. 'Paluma').

    PubMed

    Assis, Pedro I L S; Alonso, Rocío; Meirelles, Sérgio T; Moraes, Regina M

    2015-07-01

    Phytotoxic ozone (O3) levels have been recorded in the Metropolitan Region of São Paulo (MRSP). Flux-based critical levels for O3 through stomata have been adopted for some northern hemisphere species, showing better accuracy than the accumulated ozone exposure above a threshold of 40 ppb (AOT40). In Brazil, critical levels for vegetation protection against O3 adverse effects do not exist. The study aimed to investigate the applicability of an O3 deposition model (Deposition of Ozone for Stomatal Exchange (DO3SE)) to an O3-sensitive tropical tree species (Psidium guajava L. 'Paluma') under the MRSP environmental conditions, which are very unstable, and to assess the performance of O3 flux and AOT40 in relation to O3-induced leaf injuries. Stomatal conductance (gs) parameterization for 'Paluma' was carried out and used to calculate different flux thresholds (from 0 to 5 nmol O3 m(-2) projected leaf area (PLA) s(-1)) for the phytotoxic ozone dose (POD). The model performance was assessed through the relationship between the measured and modeled gsto. Leaf injuries were analyzed and associated with POD and AOT40. The model performance was satisfactory and significant (R(2) = 0.56; P < 0.0001; root-mean-square error (RMSE) = 116). As expected, high AOT40 values did not result in high POD values. Although high POD values do not always account for more injuries, POD0 showed better performance than AOT40 and the other flux thresholds for POD. Further investigation is necessary to improve our model and to check whether there is a critical level of ozone at which leaf injuries arise. The conclusion is that the DO3SE model for 'Paluma' is applicable in the MRSP as well as in temperate regions and may contribute to future directives.
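    The two metrics the study compares can be written down directly: AOT40 accumulates hourly concentrations above 40 ppb, while the phytotoxic ozone dose POD_Y accumulates stomatal flux above a threshold Y. The short hourly series below is invented for illustration.

    ```python
    import numpy as np

    # Sketch: AOT40 vs POD_Y from hourly data (values illustrative, not from
    # the paper; the flux series would come from the DO3SE stomatal model).
    conc_ppb = np.array([25.0, 42.0, 55.0, 70.0, 38.0, 61.0])   # hourly O3 (ppb)
    flux = np.array([0.5, 2.1, 4.0, 6.5, 1.2, 5.0])             # nmol O3 m^-2 PLA s^-1

    # AOT40: sum of hourly exceedances above 40 ppb, expressed in ppm·h
    aot40 = float(np.sum(np.maximum(conc_ppb - 40.0, 0.0)) / 1000.0)

    # POD_Y: hourly flux exceedances above Y, integrated over each hour (3600 s)
    # and converted from nmol to mmol per m^2
    Y = 1.0
    pod_y = float(np.sum(np.maximum(flux - Y, 0.0)) * 3600 * 1e-6)
    ```

    Because the flux depends on stomatal conductance rather than on concentration alone, hours with high AOT40 need not contribute to POD, which is the decoupling the paper reports.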

  2. Low Cost High Performance Nanostructured Spectrally Selective Coating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Sungho

    2017-04-05

    Sunlight-absorbing coating is a key enabling technology for high-temperature, high-efficiency concentrating solar power operation. A high-performance solar absorbing material must simultaneously meet three stringent requirements: high thermal efficiency (usually measured by a figure of merit), high-temperature durability, and oxidation resistance. The objective of this research is to employ a highly scalable process to fabricate and coat black oxide nanoparticles onto the solar absorber surface to achieve ultra-high thermal efficiency. Black oxide nanoparticles have been synthesized using a facile process and coated onto the absorber metal surface. The material composition, size distribution, and morphology of the nanoparticles are guided by numerical modeling. Optical and thermal properties have been both modeled and measured. High-temperature durability has been achieved by using nanocomposites and high-temperature annealing. Mechanical durability under thermal cycling has also been investigated and optimized. This technology is promising for commercial applications in next-generation high-temperature concentrating solar power (CSP) plants.

  3. Mathematical modeling of high and low temperature heat pipes

    NASA Technical Reports Server (NTRS)

    Chi, S. W.

    1971-01-01

    Mathematical models are developed for calculating heat-transfer limitations of high-temperature heat pipes and heat-transfer limitations and temperature gradient of low temperature heat pipes. Calculated results are compared with the available experimental data from various sources to increase confidence in the present math models. Complete listings of two computer programs for high- and low-temperature heat pipes respectively are appended. These programs enable the performance of heat pipes with wrapped-screen, rectangular-groove or screen-covered rectangular-groove wick to be predicted.
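    As a concrete example of the kind of heat-transfer limitation these models compute, the classical capillary limit of a wicked heat pipe balances the maximum capillary head against the Darcy pressure drop of the returning liquid (a textbook relation, sketched here with hypothetical property values not taken from the report):

    ```python
    # Capillary heat-transport limit Q_cap [W]: dry-out occurs when the
    # liquid-return pressure drop exceeds the maximum capillary pressure
    # 2*sigma/r_c that the wick pores can sustain.
    def capillary_limit(sigma, r_c, rho_l, mu_l, h_fg, K, A_w, L_eff):
        """sigma: surface tension [N/m];  r_c: effective pore radius [m]
        rho_l: liquid density [kg/m^3];  mu_l: liquid viscosity [Pa s]
        h_fg: latent heat [J/kg];        K: wick permeability [m^2]
        A_w: wick cross-section [m^2];   L_eff: effective length [m]"""
        dp_cap_max = 2.0 * sigma / r_c
        return dp_cap_max * rho_l * K * A_w * h_fg / (mu_l * L_eff)

    # Water near 100 C in a screen wick (illustrative numbers)
    q = capillary_limit(sigma=0.059, r_c=50e-6, rho_l=958.0, mu_l=2.8e-4,
                        h_fg=2.26e6, K=1e-10, A_w=3e-5, L_eff=0.5)
    print(round(q, 1))  # on the order of 1e2 W
    ```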

  4. The Preliminary Investigation of the Factors that Influence the E-Learning Adoption in Higher Education Institutes: Jordan Case Study

    ERIC Educational Resources Information Center

    Al-hawari, Maen; Al-halabi, Sanaa

    2010-01-01

    Creativity and high performance in learning processes are the main concerns of educational institutions. E-learning contributes to the creativity and performance of these institutions and reproduces a traditional learning model based primarily on knowledge transfer into more innovative models based on collaborative learning. In this paper, the…

  5. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    ERIC Educational Resources Information Center

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement related to academic performance. Participants were 671 9th- and 10th-grade students who completed two scales, "parental involvement" and "school engagement," in their regular classrooms. Results of the path analysis suggested that the…

  6. Reducing Avoidable Deaths Among Veterans: Directing Private-Sector Surgical Care to High-Performance Hospitals

    PubMed Central

    Weeks, William B.; West, Alan N.; Wallace, Amy E.; Lee, Richard E.; Goodman, David C.; Dimick, Justin B.; Bagian, James P.

    2007-01-01

    Objectives. We quantified older (65 years and older) Veterans Health Administration (VHA) patients’ use of the private sector to obtain 14 surgical procedures and assessed the potential impact of directing that care to high-performance hospitals. Methods. Using a merged VHA–Medicare inpatient database for 2000 and 2001, we determined where older VHA enrollees obtained 6 cardiovascular surgeries and 8 cancer resections and whether private-sector care was obtained in high- or low-performance hospitals (based on historical performance and determined 2 years in advance of the service year). We then modeled the mortality and travel burden effect of directing private-sector care to high-performance hospitals. Results. Older veterans obtained most of their procedures in the private sector, but that care was equally distributed across high- and low-performance hospitals. Directing private-sector care to high-performance hospitals could have led to the avoidance of 376 to 584 deaths, most through improved cardiovascular care outcomes. Using historical mortality to define performance would produce better outcomes with lower travel time. Conclusions. Policy that directs older VHA enrollees’ private-sector care to high-performance hospitals promises to reduce mortality for VHA’s service population and warrants further exploration. PMID:17971543

  7. Integrated modeling and robust control for full-envelope flight of robotic helicopters

    NASA Astrophysics Data System (ADS)

    La Civita, Marco

    Robotic helicopters have attracted a great deal of interest from universities, industry, and the military. They are versatile machines, and there is a large number of important missions that they could accomplish. Nonetheless, there are only a handful of documented examples of robotic-helicopter applications in real-world scenarios. This situation is mainly due to the poor flight performance that can be achieved and---more importantly---guaranteed under automatic control. Given the maturity of control theory, and given the large body of knowledge in helicopter dynamics, it seems that the lack of success in flying high-performance controllers for robotic helicopters, especially by academic groups and by small industries, has nothing to do with helicopters or control theory as such. The problem lies instead in the large amount of time and resources needed to synthesize, test, and implement new control systems with the approach normally followed in the aeronautical industry. This thesis attempts to provide a solution by presenting a modeling and control framework that minimizes the time, cost, and both human and physical resources necessary to design high-performance flight controllers. The work is divided into two main parts. The first consists of the development of a modeling technique that allows the designer to obtain a high-fidelity model adequate for both real-time simulation and controller design, with few flight, ground, and wind-tunnel tests and a modest level of complexity in the dynamic equations. The second consists of the exploitation of the predictive capabilities of the model and of the robust stability and performance guarantees of the Hinfinity loop-shaping control theory to reduce the number of iterations of the design/simulated-evaluation/flight-test-evaluation procedure.
The effectiveness of this strategy is demonstrated by designing and flight testing a wide-envelope high-performance controller for the Carnegie Mellon University robotic helicopter.

  8. The use of animal models in homeopathic research--a review of 2010-2014 PubMed indexed papers.

    PubMed

    Bonamin, Leoni Villano; Cardoso, Thayná Neves; de Carvalho, Aloísio Cunha; Amaral, Juliana Gimenez

    2015-10-01

    In the 1990s, a study was performed on the effects of highly diluted thyroxine on frog metamorphosis. This model represented one of the most discussed examples of the biological effects of high dilutions over the next two decades. In 2010, another critical conceptual review of the use of animal models in homeopathy and high-dilution research was published. The main contribution of these studies was the elucidation of the biological features and phenomenology of the effects of high dilutions on living systems, representing an important step forward in our understanding of the mechanisms of action of homeopathic medicines. We performed a further review of this line of investigation using the same methods. Fifty-three articles that were indexed in the PubMed database and used 12 different animal species were systematically evaluated. Only a fraction of the studies (29/53) reported herein were performed with "ultra high" dilutions. The other studies were performed with dilutions in ranges below 10^-23 (14/53 articles) or commercial complexes (10/53 articles). Only two articles reported negative results; both used in vivo protocols to test commercial complexes, one in fish and one in bees. The quality of the employed techniques improved in 2010-2014 compared with the studies that were reviewed previously in 2010, with the inclusion of more ethically refined protocols, including in vitro primary cell cultures and ex vivo studies (10/53 articles), often with three or more replicates and analyses of epigenetic mechanisms that were previously unknown in 2010. In our updated review of the past 5 years, we found further demonstrations of the biological effects of homeopathy using more refined animal models and in vitro techniques. Copyright © 2015 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  9. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    USDA-ARS?s Scientific Manuscript database

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  10. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE PAGES

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...

    2017-01-26

    Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.

  11. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.

    Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.

  12. CFD Analysis of Emissions for a Candidate N+3 Combustor

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud

    2015-01-01

    An effort was undertaken to analyze the performance of a model Lean-Direct Injection (LDI) combustor designed to meet emissions and performance goals for NASA's N+3 program. Computational predictions of Emissions Index (EINOx) and combustor exit temperature were obtained for operation at typical power conditions expected of a small-core, high pressure-ratio (greater than 50), high T3 inlet temperature (greater than 950 K) N+3 combustor. Reacting-flow computations were performed with the National Combustion Code (NCC) for a model N+3 LDI combustor, which consisted of a nine-element LDI flame-tube derived from a previous-generation (N+2) thirteen-element LDI design. A consistent approach to mesh optimization, spray modeling, and kinetics modeling was used, in order to leverage the lessons learned from previous N+2 flame-tube analysis with the NCC. The NCC predictions for the current, non-optimized N+3 combustor indicated a 74% increase in NOx emissions compared with that of the emissions-optimized parent N+2 LDI combustor.

  13. Planar junctionless phototransistor: A potential high-performance and low-cost device for optical-communications

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new junctionless optically controlled field-effect transistor (JL-OCFET) and a comprehensive theoretical model for it are proposed to achieve high optical performance with a low-cost fabrication process. An exhaustive study of the device characteristics and a comparison between the proposed junctionless design and the conventional inversion-mode structure (IM-OCFET) for similar dimensions are performed. Our investigation reveals that the proposed design has an outstanding capability to serve as an alternative to the IM-OCFET due to its high performance and the weak-signal detection benefit it offers. Moreover, the developed analytical expressions are exploited to formulate objective functions for optimizing the device performance using a Genetic Algorithm (GA) approach. The optimized JL-OCFET not only demonstrates good performance in terms of drain current and responsivity, but also exhibits superior signal-to-noise ratio, low power consumption, high sensitivity, high ION/IOFF ratio, and high detectivity compared to the conventional IM-OCFET counterpart. These characteristics make the optimized JL-OCFET potentially suitable for developing low-cost, ultrasensitive photodetectors for high-performance inter-chip data communication applications.
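    The GA optimization step described above can be sketched generically (a toy genetic algorithm over a stand-in objective; the paper's actual objective functions come from its analytical device model and are not reproduced here):

    ```python
    import random

    random.seed(0)

    def fitness(x):
        # Hypothetical stand-in objective with a known maximum at (1.0, 2.0);
        # in the paper this role is played by the device figures of merit.
        return -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)

    def ga(pop_size=40, generations=60, sigma=0.1, bounds=(-5.0, 5.0)):
        lo, hi = bounds
        pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:pop_size // 2]          # truncation selection
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = random.sample(elite, 2)
                child = [(a[i] + b[i]) / 2.0 for i in range(2)]  # crossover
                child = [g + random.gauss(0.0, sigma) for g in child]  # mutation
                children.append(child)
            pop = elite + children
        return max(pop, key=fitness)

    best = ga()
    print(best)  # converges near the optimum [1.0, 2.0]
    ```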

  14. Performance Effects of Measurement and Analysis: Perspectives from CMMI High Maturity Organizations and Appraisers

    DTIC Science & Technology

    2010-06-01

    …models. The Chi-Square test fails to reject the null hypothesis that there is no difference between 2008 and 2009 data (p-value = 0.601). This…attributed to process performance modeling… Table 4: Relationships between data quality and integrity activities and overall value attributed to…data quality and integrity; staffing and resources devoted to the work; pertinent training and coaching; and the alignment of the models with…

  15. Regional patterns of future runoff changes from Earth system models constrained by observation

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Zhou, Feng; Piao, Shilong; Huang, Mengtian; Chen, Anping; Ciais, Philippe; Li, Yue; Lian, Xu; Peng, Shushi; Zeng, Zhenzhong

    2017-06-01

    In the recent Intergovernmental Panel on Climate Change assessment, multimodel ensembles (arithmetic model averaging, AMA) were constructed with equal weights given to Earth system models, without considering the performance of each model at reproducing current conditions. Here we use Bayesian model averaging (BMA) to construct a weighted model ensemble for runoff projections. Higher weights are given to models with better performance in estimating historical decadal mean runoff. Using the BMA method, we find that by the end of this century, the increase of global runoff (9.8 ± 1.5%) under Representative Concentration Pathway 8.5 is significantly lower than estimated from AMA (12.2 ± 1.3%). BMA presents a less severe runoff increase than AMA at northern high latitudes and a more severe decrease in Amazonia. The runoff decrease in Amazonia is stronger than the intermodel difference. The intermodel difference in runoff changes is caused mainly by precipitation differences among the models, but also by evapotranspiration differences at high northern latitudes.
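    The contrast between AMA and BMA can be sketched in a few lines (an illustrative Gaussian-likelihood weighting; the authors' actual weighting scheme and their numbers are not reproduced here):

    ```python
    import math

    # Weight each model by a Gaussian likelihood of its historical error, so
    # models that better reproduce historical decadal-mean runoff count more.
    def bma_weights(historical_errors, sigma=1.0):
        lik = [math.exp(-0.5 * (e / sigma) ** 2) for e in historical_errors]
        total = sum(lik)
        return [l / total for l in lik]

    def ensemble_means(projections, errors):
        ama = sum(projections) / len(projections)           # equal weights
        w = bma_weights(errors)
        bma = sum(wi * p for wi, p in zip(w, projections))  # skill weights
        return ama, bma

    # Three hypothetical models: projected % runoff change, historical bias
    ama, bma = ensemble_means(projections=[14.0, 12.0, 8.0],
                              errors=[2.0, 1.0, 0.2])
    print(round(ama, 2), round(bma, 2))  # BMA pulled toward the skilled model
    ```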

  16. More oxygen during development enhanced flight performance but not thermal tolerance of Drosophila melanogaster.

    PubMed

    Shiehzadegan, Shayan; Le Vinh Thuy, Jacqueline; Szabla, Natalia; Angilletta, Michael J; VandenBrooks, John M

    2017-01-01

    High temperatures can stress animals by raising the oxygen demand above the oxygen supply. Consequently, animals under hypoxia could be more sensitive to heating than those exposed to normoxia. Although support for this model has been limited to aquatic animals, oxygen supply might limit the heat tolerance of terrestrial animals during energetically demanding activities. We evaluated this model by studying the flight performance and heat tolerance of flies (Drosophila melanogaster) acclimated and tested at different concentrations of oxygen (12%, 21%, and 31%). We expected that flies raised at hypoxia would develop into adults that were more likely to fly under hypoxia than would flies raised at normoxia or hyperoxia. We also expected flies to benefit from greater oxygen supply during testing. These effects should have been most pronounced at high temperatures, which impair locomotor performance. Contrary to our expectations, we found little evidence that flies raised at hypoxia flew better when tested at hypoxia or tolerated extreme heat better than did flies raised at normoxia or hyperoxia. Instead, flies raised at higher oxygen levels performed better at all body temperatures and oxygen concentrations. Moreover, oxygen supply during testing had the greatest effect on flight performance at low temperature, rather than high temperature. Our results poorly support the hypothesis that oxygen supply limits performance at high temperatures, but do support the idea that hyperoxia during development improves performance of flies later in life.

  17. Numerical Stability and Control Analysis Towards Falling-Leaf Prediction Capabilities of Splitflow for Two Generic High-Performance Aircraft Models

    NASA Technical Reports Server (NTRS)

    Charlton, Eric F.

    1998-01-01

    Aerodynamic analyses are performed using the Lockheed-Martin Tactical Aircraft Systems (LMTAS) Splitflow computational fluid dynamics code to investigate the computational prediction capabilities for vortex-dominated flow fields of two different tailless aircraft models at large angles of attack and sideslip. These computations are performed with the goal of providing useful stability and control data to designers of high-performance aircraft. Appropriate metrics for accuracy, time, and ease of use are determined in consultation with both the LMTAS Advanced Design and Stability and Control groups. Results are obtained and compared to wind-tunnel data for all six components of forces and moments. The moment data are combined to form a "falling-leaf" stability analysis. Finally, a handful of viscous simulations were also performed to further investigate nonlinearities and possible viscous effects underlying the differences between the accumulated inviscid computational and experimental data.

  18. A performance comparison of scalar, vector, and concurrent vector computers including supercomputers for modeling transport of reactive contaminants in groundwater

    NASA Astrophysics Data System (ADS)

    Tripathi, Vijay S.; Yeh, G. T.

    1993-06-01

    Sophisticated and highly computation-intensive models of transport of reactive contaminants in groundwater have been developed in recent years. Application of such models to real-world contaminant transport problems, e.g., simulation of groundwater transport of 10-15 chemically reactive elements (e.g., toxic metals) and relevant complexes and minerals in two and three dimensions over a distance of several hundred meters, requires high-performance computers including supercomputers. Although not widely recognized as such, the computational complexity and demand of these models compare with well-known computation-intensive applications including weather forecasting and quantum chemical calculations. A survey of the performance of a variety of available hardware, as measured by the run times for a reactive transport model HYDROGEOCHEM, showed that while supercomputers provide the fastest execution times for such problems, relatively low-cost reduced instruction set computer (RISC) based scalar computers provide the best performance-to-price ratio. Because supercomputers like the Cray X-MP are inherently multiuser resources, often the RISC computers also provide much better turnaround times. Furthermore, RISC-based workstations provide the best platforms for "visualization" of groundwater flow and contaminant plumes. The most notable result, however, is that current workstations costing less than $10,000 provide performance within a factor of 5 of a Cray X-MP.

  19. Assessment of the performance of CORDEX-SA experiments in simulating seasonal mean temperature over the Himalayan region for the present climate: Part I

    NASA Astrophysics Data System (ADS)

    Nengker, T.; Choudhary, A.; Dimri, A. P.

    2018-04-01

    The ability of an ensemble of five regional climate models (RCMs) under the Coordinated Regional Climate Downscaling Experiment-South Asia (CORDEX-SA) to simulate the key features of the present-day near-surface mean air temperature (Tmean) climatology (1970-2005) over the Himalayan region is studied. The purpose of this paper is to understand the consistency in the performance of the models across the ensemble, space, and seasons. For this, a number of statistical measures, such as trend, correlation, variance, and probability distribution function, are applied to evaluate the performance of the models against observation and, simultaneously, the underlying uncertainties between them for four different seasons. The most evident finding from the study is the presence of a large cold bias (-6 to -8 °C) that is seen systematically across all the models and across space and time over the Himalayan region. However, these RCMs, with their fine resolution, perform extremely well in capturing the spatial distribution of the temperature features, as indicated by a consistently high spatial correlation (greater than 0.9) with the observation in all seasons. In spite of the underestimation in simulated temperature and the general intensification of cold bias with increasing elevation, the models show a greater rate of warming than the observation throughout the entire altitudinal stretch of the study region. During winter, the simulated rate of warming gets even higher at high altitudes. Moreover, a seasonal response of model performance and its spatial variability to elevation is found.

  20. Development of the NTF-117S Semi-Span Balance

    NASA Technical Reports Server (NTRS)

    Lynn, Keith C.

    2010-01-01

    A new high-capacity semi-span force and moment balance has recently been developed for use at the National Transonic Facility (NTF) at the NASA Langley Research Center. This new semi-span balance provides the NTF a new measurement capability that will support testing of semi-span models in transonic high-lift testing regimes. Future testing utilizing this new balance capability will include active circulation control and propulsion simulation testing of semi-span transonic wing models. The NTF has recently implemented a new high-pressure air delivery station that provides both high and low mass flow pressure lines routed out to the semi-span models via a set of high/low-pressure bellows that are indirectly linked to the metric end of the NTF-117S balance. A new check-load stand is currently being developed to provide the NTF with an in-house capability for performing check-loads on the NTF-117S balance in order to determine the pressure tare effects on the overall performance of the balance. An experimental design is being developed that will allow the static pressure tare effects on balance performance to be assessed experimentally.

  1. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. These two features pose new problems in control system design. Since large-scale structures are not testable in ground-based facilities, decisions on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at the problem that the modeling knowledge available prior to operation is inadequate to achieve peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter adaptive control is a promising method that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? And how well must the mathematical model used in on-board analytic redundancy be known, and what are reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed-component environment?

  2. Non-isothermal electrochemical model for lithium-ion cells with composite cathodes

    NASA Astrophysics Data System (ADS)

    Basu, Suman; Patil, Rajkumar S.; Ramachandran, Sanoop; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin; Yeo, Taejung; Doo, Seokgwang

    2015-06-01

    Transition metal oxide cathodes for Li-ion batteries offer high energy density and high voltage. Composites of these materials have shown excellent life expectancy and improved thermal performance. In the present work, a comprehensive non-isothermal electrochemical model for a lithium-ion cell with a composite cathode is developed. The work builds on lithium concentration-dependent diffusivity and the thermal gradient of cathode potential, both obtained from experiments. Model validation is performed over a wide range of temperatures and discharge rates. Excellent agreement is found at high and room temperature, with moderate success at low temperatures, which can be attributed to the low fidelity of material properties at low temperature. Although cell operation is limited by the electronic conductivity of NCA at room temperature, at low temperatures a shift in the controlling process is seen, and operation is limited by electrolyte transport. At room temperature, lithium transport in the cathode appears to be the main source of heat generation, with entropic heat as the primary contributor at low discharge rates and ohmic heat at high discharge rates. Improvement in the electronic conductivity of the cathode is expected to improve the performance of these composite cathodes and pave the way for their wider commercialization.
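    The low-rate/high-rate heat-generation claim follows the standard lumped (Bernardi-type) balance, sketched here with illustrative numbers that are not from the paper:

    ```python
    # Total cell heat = irreversible polarization/ohmic term I*(U_ocv - V)
    # plus reversible entropic term I*T*dU/dT. The overpotential (U_ocv - V)
    # grows with current, so the irreversible term dominates at high rates,
    # while the entropic term can dominate at low rates.
    def heat_generation(current_a, u_ocv, v_cell, temp_k, du_dt):
        q_irrev = current_a * (u_ocv - v_cell)
        q_rev = current_a * temp_k * du_dt
        return q_irrev, q_rev

    low = heat_generation(1.0, 3.7, 3.68, 298.0, -2e-4)    # 1 A, small overpotential
    high = heat_generation(10.0, 3.7, 3.40, 298.0, -2e-4)  # 10 A, large overpotential
    print(low, high)
    ```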

  3. LL13-MatModelRadDetect-PD2Jf Final Report: Materials Modeling for High-Performance Radiation Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lordi, Vincenzo

    The aims of this project are to enable rational materials design for select high-payoff challenges in radiation detection materials by using state-of-the-art predictive atomistic modeling techniques. Three specific high-impact challenges are addressed: (i) design and optimization of electrical contact stacks for TlBr detectors to stabilize temporal response at room temperature; (ii) identification of chemical design principles of host glass materials for large-volume, low-cost, high-performance glass scintillators; and (iii) determination of the electrical impacts of dislocation networks in Cd1-xZnxTe (CZT) that limit its performance and usable single-crystal volume. The specific goals are to establish design and process strategies to achieve improved materials for high-performance detectors. Each of the major tasks is discussed below in three sections, which include the goals for the task and a summary of the major results, followed by a listing of publications that contain the full details, including details of the methodologies used. The appendix lists 12 conference presentations given for this project, including 1 invited talk and 1 invited poster.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabiti, Cristian; Alfonsi, Andrea; Huang, Dongli

    This report collects the work performed to improve the reliability analysis capabilities of the RAVEN code and to explore new opportunities in the use of surrogate models, extending the current RAVEN capabilities to multiphysics surrogate models and to the construction of surrogate models for high-dimensionality fields.

  5. Developing Staffing Models to Support Population Health Management and Quality Outcomes in Ambulatory Care Settings.

    PubMed

    Haas, Sheila A; Vlasses, Frances; Havey, Julia

    2016-01-01

    There are multiple demands and challenges inherent in establishing staffing models in ambulatory health care settings today. If health care administrators establish a supportive physical and interpersonal health care environment and develop high-performing interprofessional teams, staffing models, and electronic documentation systems that track performance, patients will have more opportunities to receive safe, high-quality, evidence-based care that encourages their participation in decision making as well as in the provision of their care. The health care organization must be aligned with and responsive to the community within which it resides, fully invested in population health management, and continuously scanning the environment for competitive, regulatory, and external environmental risks. All of these challenges require highly competent providers willing to change attitudes and culture, such as movement toward collaborative practice among the interprofessional team, including the patient.

  6. NASA High-Reynolds Number Circulation Control Research - Overview of CFD and Planned Experiments

    NASA Technical Reports Server (NTRS)

    Milholen, W. E., II; Jones, Greg S.; Cagle, Christopher M.

    2010-01-01

    A new capability to test active flow control concepts and propulsion simulations at high Reynolds numbers in the National Transonic Facility at the NASA Langley Research Center is being developed. This technique is focused on the use of semi-span models due to their increased model size and the relative ease of routing high-pressure air to the model. A new dual flow-path high-pressure air delivery station has been designed, along with a new high-performance transonic semi-span wing model. The modular wind tunnel model is designed for testing circulation control concepts at both transonic cruise and low-speed high-lift conditions. The ability of the model to test other active flow control techniques will be highlighted. In addition, a new higher-capacity semi-span force and moment wind tunnel balance has been completed and calibrated to enable testing at transonic conditions.

  7. Cryogenic, high speed, turbopump bearing cooling requirements

    NASA Technical Reports Server (NTRS)

    Dolan, Fred J.; Gibson, Howard G.; Cannon, James L.; Cody, Joe C.

    1988-01-01

    Although the Space Shuttle Main Engine (SSME) has repeatedly demonstrated the capability to perform during launch, the High Pressure Oxidizer Turbopump (HPOTP) main shaft bearings have not met their 7.5 hour life requirement. A tester is being employed to provide the capability of subjecting full scale bearings and seals to speeds, loads, propellants, temperatures, and pressures which simulate engine operating conditions. The tester design permits much more elaborate instrumentation and diagnostics than could be accommodated in an SSME turbopump. Tests were made to demonstrate the capabilities of the facility and the tester, to verify the instrumentation in its operating environment, and to establish a performance baseline for the flight-type SSME HPOTP turbine bearing design. Bearing performance data from tests are being utilized to generate: (1) a high-speed, cryogenic turbopump bearing computer mechanical model, and (2) a much-improved, very detailed thermal model to better understand bearing internal operating conditions. Parametric tests were also made to determine the effects of speed, axial loads, coolant flow rate, and surface finish degradation on bearing performance.

  8. A unified thermal and vertical trajectory model for the prediction of high altitude balloon performance

    NASA Technical Reports Server (NTRS)

    Carlson, L. A.; Horn, W. J.

    1981-01-01

    A computer model for the prediction of the trajectory and thermal behavior of zero-pressure high-altitude balloons was developed. In accord with flight data, the model permits radiative emission and absorption by the lifting gas and daytime gas temperatures above that of the balloon film. It also includes ballasting, venting, and valving. Predictions obtained with the model are compared with data from several flights, and newly discovered features are discussed.

  9. Evaluation in Appalachian pasture systems of the 1996 (update 2000) National Research Council model for weaning cattle.

    PubMed

    Whetsell, M S; Rayburn, E B; Osborne, P I

    2006-05-01

    This study was conducted to evaluate the accuracy of the National Research Council's (2000) Nutrient Requirements of Beef Cattle computer model when used to predict calf performance during on-farm pasture or dry-lot weaning and backgrounding. Calf performance was measured on 22 farms in 2002 and 8 farms in 2003 that participated in West Virginia Beef Quality Assurance Sale marketing pools. Calves were weaned on pasture (25 farms) or dry-lot (5 farms) and fed supplemental hay, haylage, ground shell corn, soybean hulls, or a commercial concentrate. Concentrates were fed at a rate of 0.0 to 1.5% of BW. The National Research Council (2000) model was used to predict ADG of each group of calves observed on each farm. The model error was measured by calculating residuals (predicted ADG minus observed ADG). Predicted animal performance was determined using level 1 of the model. Results show that, when normal on-farm pasture sampling and forage analysis methods are used, the model error for ADG is high, and the model did not accurately predict the performance of steers or heifers fed high-forage pasture-based diets; the predicted ADG was lower (P < 0.05) than the observed ADG. The estimated intake of low-producing animals was similar to the expected DMI, but for the higher-producing animals it was not. The NRC (2000) beef model may more accurately predict on-farm animal performance in pastured situations if feed analysis values reflect the energy value of the feed, account for selective grazing, and relate empty BW and shrunk BW to NDF.
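
    The residual-based error measure described above can be sketched in a few lines of Python. The per-farm ADG values below are made-up illustrations, not the study's data; a negative mean residual corresponds to the paper's finding that predicted ADG was lower than observed ADG.

```python
def adg_residuals(predicted, observed):
    """Model error as residuals: predicted ADG minus observed ADG (kg/day)."""
    return [p - o for p, o in zip(predicted, observed)]

def mean_bias(residuals):
    """Negative mean bias indicates the model under-predicts gain."""
    return sum(residuals) / len(residuals)

# Hypothetical per-farm ADG values (kg/day), not the study's data
predicted = [0.62, 0.55, 0.70, 0.48]
observed  = [0.78, 0.74, 0.81, 0.66]
res = adg_residuals(predicted, observed)
print(mean_bias(res))  # negative here: the model under-predicts gain
```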

  10. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    PubMed

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that anesthetic brain state tracking performance of linear models is comparable to that of a high performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average-ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA (2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistically significant difference was observed. For the same ARMA case, there was no statistical difference when medians were used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
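
    The sliding-window idea can be illustrated with a much-simplified sketch: fit AR(2) coefficients per window by least squares and classify each window's parameter vector by nearest class centroid. This is a stand-in for the paper's comparison of full parameter distributions, and the two synthetic "brain states" below are invented AR processes, not EEG data.

```python
import numpy as np

def ar2_fit(x):
    """Least-squares AR(2) coefficients for one window of samples."""
    X = np.column_stack([x[1:-1], x[:-2]])   # lag-1 and lag-2 regressors
    y = x[2:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def window_params(signal, width, step):
    """AR(2) parameter vectors from a sliding window over the signal."""
    return np.array([ar2_fit(signal[i:i + width])
                     for i in range(0, len(signal) - width + 1, step)])

def classify(params, centroids):
    """Nearest-centroid label for each window's parameter vector."""
    d = np.linalg.norm(params[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)

def simulate(a1, a2, n=2000):
    """Synthetic stationary AR(2) process standing in for one brain state."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t-1] + a2 * x[t-2] + rng.standard_normal()
    return x

state0, state1 = simulate(0.5, -0.3), simulate(1.2, -0.5)
c0 = window_params(state0, 200, 100).mean(axis=0)
c1 = window_params(state1, 200, 100).mean(axis=0)
labels = classify(window_params(state1, 200, 100), np.array([c0, c1]))
# Most windows of state1 should be assigned to centroid 1
```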

  11. Groundwater Remediation using Bayesian Information-Gap Decision Theory

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2016-12-01

    Probabilistic analyses of groundwater remediation scenarios frequently fail because the probability of an adverse, unanticipated event occurring is often high. In general, models of flow and transport in contaminated aquifers are always simpler than reality. Further, when a probabilistic analysis is performed, probability distributions are usually chosen more for convenience than correctness. The Bayesian Information-Gap Decision Theory (BIGDT) was designed to mitigate the shortcomings of the models and probabilistic decision analyses by leveraging a non-probabilistic decision theory - information-gap decision theory. BIGDT considers possible models that have not been explicitly enumerated and does not require us to commit to a particular probability distribution for model and remediation-design parameters. Both the set of possible models and the set of possible probability distributions grow as the degree of uncertainty increases. The fundamental question that BIGDT asks is "How large can these sets be before a particular decision results in an undesirable outcome?". The decision that allows these sets to be the largest is considered to be the best option. In this way, BIGDT enables robust decision-support for groundwater remediation problems. Here we apply BIGDT to a representative groundwater remediation scenario where different options for hydraulic containment and pump & treat are being considered. BIGDT requires many model runs, and for complex models high-performance computing resources are needed. These analyses are carried out on synthetic problems, but are applicable to real-world problems such as LANL site contaminations. BIGDT is implemented in Julia (a high-level, high-performance dynamic programming language for technical computing) and is part of the MADS framework (http://mads.lanl.gov/ and https://github.com/madsjulia/Mads.jl).
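
    The core info-gap question ("how large can the uncertainty sets grow before the decision fails?") can be sketched in a drastically simplified scalar form: the robustness of a decision is the largest uncertainty horizon h for which the worst-case outcome still meets a performance threshold. The toy performance function below is purely illustrative, not BIGDT or the MADS implementation.

```python
def robustness(decision_cost, nominal, performance, threshold, h_max=10.0, tol=1e-6):
    """Largest horizon h such that the worst case over [nominal-h, nominal+h]
    still meets the threshold. Bisection on h; assumes the worst case sits at
    an interval endpoint (true for monotone performance functions)."""
    def ok(h):
        worst = min(performance(decision_cost, nominal - h),
                    performance(decision_cost, nominal + h))
        return worst >= threshold
    if not ok(0.0):
        return 0.0
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo

# Toy remediation model: spending more tolerates more parameter error
perf = lambda cost, k: cost - abs(k)   # hypothetical performance measure
h_cheap = robustness(decision_cost=2.0, nominal=0.0, performance=perf, threshold=1.0)
h_rich  = robustness(decision_cost=4.0, nominal=0.0, performance=perf, threshold=1.0)
# The costlier design is the more robust one (larger horizon)
```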

  12. The High-Foot Implosion Campaign on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Hurricane, Omar

    2013-10-01

    The `High-Foot' platform manipulates the laser pulse-shape coming from the National Ignition Facility (NIF) laser to create an indirect drive 3-shock implosion that is significantly more robust against instability growth involving the ablator and also modestly reduces implosion convergence ratio. This tactic gives up on theoretical high-gain in an inertial confinement fusion implosion in order to obtain better control of the implosion and bring experimental performance in-line with calculated performance, yet keeps the absolute capsule performance relatively high. This approach is generally consistent with the philosophy laid out in a recent international workshop on the topic of ignition science on NIF [``Workshop on the Science of Fusion Ignition on NIF,'' Lawrence Livermore National Laboratory Report, LLNL-TR-570412 (2012). Op cit. V. Gocharov and O.A. Hurricane, ``Panel 3 Report: Implosion Hydrodynamics,'' LLNL-TR-562104 (2012)]. Side benefits of the High-Foot pulse-shape modification appear to be improvements in hohlraum behavior--less wall motion achieved through higher pressure He gas fill and improved inner cone laser beam propagation. Another consequence of the `High-Foot' is a higher fuel adiabat, so there is some relation to direct-drive experiments performed at the Laboratory for Laser Energetics (LLE). In this talk, we will cover the various experimental and theoretical motivations for the High-Foot drive as well as cover the experimental results that have come out of the High-Foot experimental campaign. Most notably, at the time of this writing, the campaign has achieved record DT layer implosion performance with record low levels of inferred mix and excellent agreement with one-dimensional implosion models without the aid of mix models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  13. In situ observations of the isotopic composition of methane at the Cabauw tall tower site

    NASA Astrophysics Data System (ADS)

    Röckmann, Thomas; Eyer, Simon; van der Veen, Carina; Popa, Maria E.; Tuzson, Béla; Monteil, Guillaume; Houweling, Sander; Harris, Eliza; Brunner, Dominik; Fischer, Hubertus; Zazzeri, Giulia; Lowry, David; Nisbet, Euan G.; Brand, Willi A.; Necki, Jaroslav M.; Emmenegger, Lukas; Mohn, Joachim

    2016-08-01

    High-precision analyses of the isotopic composition of methane in ambient air can potentially be used to discriminate between different source categories. Due to the complexity of isotope ratio measurements, such analyses have generally been performed in the laboratory on air samples collected in the field. This poses a limitation on the temporal resolution at which the isotopic composition can be monitored with reasonable logistical effort. Here we present the performance of a dual isotope ratio mass spectrometric system (IRMS) and a quantum cascade laser absorption spectroscopy (QCLAS)-based technique for in situ analysis of the isotopic composition of methane under field conditions. Both systems were deployed at the Cabauw Experimental Site for Atmospheric Research (CESAR) in the Netherlands and performed in situ, high-frequency (approx. hourly) measurements for a period of more than 5 months. The IRMS and QCLAS instruments were in excellent agreement with a slight systematic offset of (+0.25 ± 0.04) ‰ for δ13C and (-4.3 ± 0.4) ‰ for δD. This was corrected for, yielding a combined dataset with more than 2500 measurements of both δ13C and δD. The high-precision and high-temporal-resolution dataset not only reveals the overwhelming contribution of isotopically depleted agricultural CH4 emissions from ruminants at the Cabauw site but also allows the identification of specific events with elevated contributions from more enriched sources such as natural gas and landfills. The final dataset was compared to model calculations using the global model TM5 and the mesoscale model FLEXPART-COSMO. The results of both models agree better with the measurements when the TNO-MACC emission inventory is used in the models than when the EDGAR inventory is used. This suggests that high-resolution isotope measurements have the potential to further constrain the methane budget when they are performed at multiple sites that are representative for the entire European domain.
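
    Source attribution from ambient time series like these is commonly done with a Keeling plot: during an enhancement event, δ13C plotted against 1/[CH4] is linear, and the intercept estimates the source isotopic signature. A minimal sketch of this standard technique (the background and source values below are synthetic illustrations, not the Cabauw data):

```python
import numpy as np

def keeling_intercept(ch4_ppb, delta13c):
    """Source δ13C signature from the intercept of δ vs 1/[CH4]."""
    slope, intercept = np.polyfit(1.0 / np.asarray(ch4_ppb),
                                  np.asarray(delta13c), 1)
    return intercept

# Synthetic event: background air (1900 ppb, -47.5 per mil) mixed with a
# ruminant-like depleted source at -62 per mil (illustrative values)
bg_c, bg_d, src_d = 1900.0, -47.5, -62.0
added = np.linspace(0.0, 400.0, 20)          # ppb of source CH4 added
ch4 = bg_c + added
delta = (bg_c * bg_d + added * src_d) / ch4  # isotope mass balance
print(round(keeling_intercept(ch4, delta), 1))  # recovers ≈ -62.0
```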

  14. Rotor design for maneuver performance

    NASA Technical Reports Server (NTRS)

    Berry, John D.; Schrage, Daniel

    1986-01-01

    A method of determining the sensitivity of helicopter maneuver performance to changes in basic rotor design parameters is developed. Maneuver performance is measured by the time required, based on a simplified rotor/helicopter performance model, to perform a series of specified maneuvers. Because of the inherent simplicity of the rotor performance model used, this method quickly identifies parameter values which result in minimum time. For the specific case studied, this method predicts that the minimum time required is obtained with a low disk loading and a relatively high rotor solidity. The method was developed as part of the winning design effort for the American Helicopter Society student design competition for 1984/1985.

  15. High-throughput micro-scale cultivations and chromatography modeling: Powerful tools for integrated process development.

    PubMed

    Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen

    2015-10-01

    Upstream processes are rather complex to design and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimized system need not be the one with the highest product titers, but the one resulting in an overall superior process performance in up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capturing were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.

  16. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Bartos, Karen F.; Fite, E. Brian; Shalkhauser, Kurt A.; Sharp, G. Richard

    1991-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during the earlier cold compression, or 'coining,' fabrication technique, a technique that nevertheless shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  17. A three-dimensional finite-element thermal/mechanical analytical technique for high-performance traveling wave tubes

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Kurt A.; Bartos, Karen F.; Fite, E. B.; Sharp, G. R.

    1992-01-01

    Current research in high-efficiency, high-performance traveling wave tubes (TWT's) has led to the development of novel thermal/mechanical computer models for use with helical slow-wave structures. A three-dimensional, finite element computer model and analytical technique were used to study the structural integrity and thermal operation of a high-efficiency, diamond-rod, K-band TWT designed for use in advanced space communications systems. This analysis focused on the slow-wave circuit in the radiofrequency section of the TWT, where an inherent localized heating problem existed and where failures were observed during the earlier cold compression, or 'coining,' fabrication technique, a technique that nevertheless shows great potential for future TWT development efforts. For this analysis, a three-dimensional, finite element model was used along with MARC, a commercially available finite element code, to simulate the fabrication of a diamond-rod TWT. This analysis was conducted by using component and material specifications consistent with actual TWT fabrication and was verified against empirical data. The analysis is nonlinear owing to material plasticity introduced by the forming process and also to geometric nonlinearities presented by the component assembly configuration. The computer model was developed by using the high-efficiency, K-band TWT design but is general enough to permit similar analyses to be performed on a wide variety of TWT designs and styles. The results of the TWT operating condition and structural failure mode analysis, as well as a comparison of analytical results to test data, are presented.

  18. Overview of High-Fidelity Modeling Activities in the Numerical Propulsion System Simulations (NPSS) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2002-01-01

    A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.

  19. Nonlinear Constitutive Relations for High Temperature Application, 1984

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Nonlinear constitutive relations for high temperature applications were discussed. The state of the art in nonlinear constitutive modeling of high temperature materials was reviewed and the need for future research and development efforts in this area was identified. Considerable research efforts are urgently needed in the development of nonlinear constitutive relations for high temperature applications prompted by recent advances in high temperature materials technology and new demands on material and component performance. Topics discussed include: constitutive modeling, numerical methods, material testing, and structural applications.

  20. Spatio-temporal modeling and optimization of a deformable-grating compressor for short high-energy laser pulses

    DOE PAGES

    Qiao, Jie; Papa, J.; Liu, X.

    2015-09-24

    Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energy, but the surface deformation of these diffraction gratings induces spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable grating has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.

  1. Aerodynamic Performance of a 0.27-Scale Model of an AH-64 Helicopter with Baseline and Alternate Rotor Blade Sets

    NASA Technical Reports Server (NTRS)

    Kelley, Henry L.

    1990-01-01

    Performance of a 27 percent scale model rotor designed for the AH-64 helicopter (alternate rotor) was measured in hover and forward flight and compared against an AH-64 baseline rotor model. Thrust, rotor tip Mach number, advance ratio, and ground proximity were varied. In hover, at a nominal thrust coefficient of 0.0064, the power savings was about 6.4 percent for the alternate rotor compared to the baseline. The corresponding thrust increase at this condition was approximately 4.5 percent, which represents an equivalent full-scale increase in lift capability of about 660 lbs. Comparable results were noted in forward flight except for the high-thrust, high-speed cases investigated, where the baseline rotor was slightly superior. Reduced performance at the higher thrusts and speeds was likely due to Reynolds number effects and blade elasticity differences.

  2. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.

  3. High-performing trauma teams: frequency of behavioral markers of a shared mental model displayed by team leaders and quality of medical performance.

    PubMed

    Johnsen, Bjørn Helge; Westli, Heidi Kristina; Espevik, Roar; Wisborg, Torben; Brattebø, Guttorm

    2017-11-10

    High-quality team leadership is important for the outcome of medical emergencies. However, the behavioral markers of leadership are not well defined. The present study investigated the effect of the frequency of behavioral markers of shared mental models (SMM) on the quality of medical management. Training video recordings of 27 trauma teams simulating emergencies were analyzed according to the team leader's frequency of shared-mental-model behavioral markers. The results showed a positive correlation of quality of medical management with leaders sharing information without an explicit demand for the information ("push" of information) and with leaders communicating their situational awareness (SA) and demonstrating implicit supporting behavior. When the sample was separated into higher- versus lower-performing teams, the higher-performing teams had leaders who displayed a greater frequency of "push" of information and communication of SA and supportive behavior. No difference was found for the behavioral marker of team initiative, measured as bringing up suggestions to other team members. The results of this study emphasize the team leader's role in initiating and updating a team's shared mental model. Team leaders should also set expectations for acceptable interaction patterns (e.g., promoting information exchange) and create a team climate that encourages behaviors, such as mutual performance monitoring, backup behavior, and adaptability, to enhance SMM.

  4. Performance Modeling in CUDA Streams - A Means for High-Throughput Data Processing

    PubMed Central

    Li, Hao; Yu, Di; Kumar, Anand; Tu, Yi-Cheng

    2015-01-01

    A push-based database management system (DBMS) is a new type of data processing software that streams large volumes of data to concurrent query operators. The high data rate of such systems requires large computing power provided by the query engine. In our previous work, we built a push-based DBMS named G-SDMS to harness the unrivaled computational capabilities of modern GPUs. A major design goal of G-SDMS is to support concurrent processing of heterogeneous query processing operations and enable resource allocation among such operations. Understanding the performance of operations as a result of resource consumption is thus a premise in the design of G-SDMS. With NVIDIA’s CUDA framework as the system implementation platform, we present our recent work on performance modeling of CUDA kernels running concurrently under a runtime mechanism named CUDA stream. Specifically, we explore the connection between performance and resource occupancy of compute-bound kernels and develop a model that can predict the performance of such kernels. Furthermore, we provide an in-depth anatomy of the CUDA stream mechanism and summarize the main kernel scheduling disciplines in it. Our models and derived scheduling disciplines are verified by extensive experiments using synthetic and real-world CUDA kernels. PMID:26566545
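
    The occupancy-to-performance connection can be illustrated with a deliberately simple analytical model (not the paper's model): a compute-bound kernel's runtime scales with the number of "waves" of thread blocks the GPU must run sequentially, given how many blocks fit concurrently. The GPU figures below are hypothetical.

```python
import math

def predict_kernel_time(n_blocks, blocks_per_sm, n_sms, time_per_wave):
    """Toy performance model for a compute-bound kernel: execution time is
    the number of sequential 'waves' of resident blocks times the time per
    wave. blocks_per_sm captures the occupancy limit per multiprocessor."""
    concurrent = blocks_per_sm * n_sms        # blocks resident at once
    waves = math.ceil(n_blocks / concurrent)  # sequential waves of blocks
    return waves * time_per_wave

# Hypothetical GPU: 16 SMs, 8 resident blocks per SM, 2.0 ms per wave
t_large = predict_kernel_time(n_blocks=1000, blocks_per_sm=8, n_sms=16, time_per_wave=2.0)
t_small = predict_kernel_time(n_blocks=128,  blocks_per_sm=8, n_sms=16, time_per_wave=2.0)
print(t_large, t_small)  # 8 waves vs 1 wave: 16.0 2.0
```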

  5. Achieving High Performance on the i860 Microprocessor

    NASA Technical Reports Server (NTRS)

    Lee, King; Kutler, Paul (Technical Monitor)

    1998-01-01

    The i860 is a high performance microprocessor used in the Intel Touchstone project. This paper proposes a paradigm for programming the i860 that is modelled on the vector instructions of the Cray computers. Fortran-callable assembler subroutines were written that mimic the concurrent vector instructions of the Cray. Cache takes the place of vector registers. Using this paradigm we have achieved twice the performance of compiled code on a traditional solver.

  6. Evaluating Multi-Input/Multi-Output Digital Control Systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek

    1994-01-01

    A controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems was developed. The procedures identify potentially destabilizing controllers and confirm satisfactory performance of stabilizing ones. The methodology is generic and can be used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. It is also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.

  7. Application of Wavelet Filters in an Evaluation of Photochemical Model Performance

    EPA Science Inventory

    Air quality model evaluation can be enhanced with time-scale-specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time-scale information in observed ozone is not well captured by deterministic models, and its incorporation into model performance evaluation...

  8. First-Order Model Management With Variable-Fidelity Physics Applied to Multi-Element Airfoil Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, N. M.; Nielsen, E. J.; Lewis, R. M.; Anderson, W. K.

    2000-01-01

    First-order approximation and model management is a methodology for a systematic use of variable-fidelity models or approximations in optimization. The intent of model management is to attain convergence to high-fidelity solutions with minimal expense in high-fidelity computations. The savings in terms of computationally intensive evaluations depend on the ability of the available lower-fidelity model or a suite of models to predict the improvement trends for the high-fidelity problem. Variable-fidelity models can be represented by data-fitting approximations, variable-resolution models, variable-convergence models, or variable-physical-fidelity models. The present work considers the use of variable-fidelity physics models. We demonstrate the performance of model management on an aerodynamic optimization of a multi-element airfoil designed to operate in the transonic regime. Reynolds-averaged Navier-Stokes equations represent the high-fidelity model, while the Euler equations represent the low-fidelity model. An unstructured mesh-based analysis code FUN2D evaluates functions and sensitivity derivatives for both models. Model management for the present demonstration problem yields fivefold savings in terms of high-fidelity evaluations compared to optimization done with high-fidelity computations alone.
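
    A key ingredient in first-order model management is a correction that forces the low-fidelity model to match the high-fidelity value and gradient at the current iterate (first-order consistency). A sketch of the common additive-correction form, with two toy analytic models standing in for the CFD codes:

```python
import numpy as np

def additive_correction(f_hi, f_lo, g_hi, g_lo, x0):
    """Corrected low-fidelity model a(x) = lo(x) + [hi(x0) - lo(x0)]
    + [grad_hi(x0) - grad_lo(x0)] . (x - x0), which matches the
    high-fidelity value and gradient at x0 (first-order consistency)."""
    c0 = f_hi(x0) - f_lo(x0)
    c1 = g_hi(x0) - g_lo(x0)
    return lambda x: f_lo(x) + c0 + c1 @ (x - x0)

# Toy 'high-fidelity' quadratic vs a cheap 'low-fidelity' surrogate
f_hi = lambda x: float(x @ x + x[0])
g_hi = lambda x: 2.0 * x + np.array([1.0, 0.0])
f_lo = lambda x: float(0.8 * (x @ x))
g_lo = lambda x: 1.6 * x

x0 = np.array([1.0, 2.0])
a = additive_correction(f_hi, f_lo, g_hi, g_lo, x0)
print(a(x0), f_hi(x0))  # corrected model matches the hi-fi value at x0
```

A trust-region loop then optimizes a(x) near x0, re-centers the correction at the new iterate, and repeats; the consistency conditions are what guarantee convergence to a high-fidelity solution.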

  9. Compound activity prediction using models of binding pockets or ligand properties in 3D

    PubMed Central

    Kufareva, Irina; Chen, Yu-Chen; Ilatovskiy, Andrey V.; Abagyan, Ruben

    2014-01-01

    Transient interactions of endogenous and exogenous small molecules with flexible binding sites in proteins or macromolecular assemblies play a critical role in all biological processes. Current advances in high-resolution protein structure determination, database development, and docking methodology make it possible to design three-dimensional models for prediction of such interactions with increasing accuracy and specificity. Using the data collected in the Pocketome encyclopedia, we here provide an overview of two types of the three-dimensional ligand activity models, pocket-based and ligand property-based, for two important classes of proteins, nuclear and G-protein coupled receptors. For half the targets, the pocket models discriminate actives from property matched decoys with acceptable accuracy (the area under ROC curve, AUC, exceeding 84%) and for about one fifth of the targets with high accuracy (AUC > 95%). The 3D ligand property field models performed better than 95% in half of the cases. The high performance models can already become a basis of activity predictions for new chemicals. Family-wide benchmarking of the models highlights strengths of both approaches and helps identify their inherent bottlenecks and challenges. PMID:23116466
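
    The discrimination metric used above, the area under the ROC curve, is equivalent to the probability that a randomly chosen active scores above a randomly chosen decoy. A minimal sketch with hypothetical scores (not the Pocketome scoring function):

```python
def roc_auc(active_scores, decoy_scores):
    """AUC = P(active score > decoy score), counting ties as 1/2."""
    wins = sum((a > d) + 0.5 * (a == d)
               for a in active_scores for d in decoy_scores)
    return wins / (len(active_scores) * len(decoy_scores))

# Hypothetical docking scores (higher = predicted more likely active)
actives = [9.1, 8.4, 7.9, 6.5]
decoys  = [7.0, 6.1, 5.8, 5.2, 4.9]
print(roc_auc(actives, decoys))  # 19 of 20 pairs ranked correctly: 0.95
```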

  10. Comparison of Thermodynamic and Transport Property Models for Computing Equilibrium High Enthalpy Flows

    NASA Astrophysics Data System (ADS)

    Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik

    2017-11-01

    To study the flow of high temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and of two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver was developed to simulate high enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady state solutions from the state equation model and curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.

  11. Performance of the high-resolution atmospheric model HRRR-AK for correcting geodetic observations from spaceborne radars

    PubMed Central

    Gong, W; Meyer, F J; Webley, P; Morton, D

    2013-01-01

    Atmospheric phase delays are considered to be one of the main performance limitations for high-quality satellite radar techniques, especially when applied to ground deformation monitoring. Numerical weather prediction (NWP) models are widely seen as a promising tool for the mitigation of atmospheric delays, as they can provide knowledge of the atmospheric conditions at the time of Synthetic Aperture Radar data acquisition. However, a thorough statistical analysis of the performance of NWP products in radar signal correction has been missing to date. This study provides a quantitative analysis of the accuracy of using operational NWP products for signal delay correction in satellite radar geodetic remote sensing. The study focuses on the temperate, subarctic, and Arctic climate regions due to a prevalence of relevant geophysical signals in these areas. In this study, the operational High Resolution Rapid Refresh over the Alaska region (HRRR-AK) model is used and evaluated. Five test sites were selected over Alaska (AK), USA, covering a wide range of climatic regimes that are commonly encountered in high-latitude regions. The performance of the HRRR-AK NWP model for correcting absolute atmospheric range delays of radar signals is assessed by comparison to radiosonde observations. The average estimation accuracy for the one-way zenith total atmospheric delay from 24 h simulations was calculated to be better than ∼14 mm. This suggests that the HRRR-AK operational products are a good data source for atmospheric delay correction of spaceborne geodetic radar observations, if the geophysical signal to be observed is larger than 20 mm. PMID:25973360

  12. Human performance modeling for system of systems analytics: soldier fatigue.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations), and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network-centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  13. Head-target tracking control of well drilling

    NASA Astrophysics Data System (ADS)

    Agzamov, Z. V.

    2018-05-01

    The method of directional drilling trajectory control for oil and gas wells using predictive models is considered in the paper. The developed method does not apply optimization, and therefore there is no need for high-performance computing. Nevertheless, it allows following the well-plan with high precision, taking into account process input saturation. The controller output is calculated both from the present target reference point of the well-plan and from a well trajectory prediction obtained with an analytical model. This method allows following a well-plan not only in angular but also in Cartesian coordinates. Simulation of the control system has confirmed high precision and good operating performance under a wide range of random disturbances.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marion, Bill; Smith, Benjamin

    Using performance data from some of the millions of installed photovoltaic (PV) modules with micro-inverters may afford the opportunity to provide ground-based solar resource data critical for developing PV projects. The method used back-solves for the direct normal irradiance (DNI) and the diffuse horizontal irradiance (DHI) from the micro-inverter ac production data. When the derived values of DNI and DHI were then used to model the performance of other PV systems, the annual mean bias deviations were within +/- 4%, and only 1% greater than when the PV performance was modeled using high quality irradiance measurements. An uncertainty analysis shows the method is better suited for modeling PV performance than using satellite-based global horizontal irradiance.

  15. Performance Analysis of GFDL's GCM Line-By-Line Radiative Transfer Model on GPU and MIC Architectures

    NASA Astrophysics Data System (ADS)

    Menzel, R.; Paynter, D.; Jones, A. L.

    2017-12-01

    Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high resolution line-by-line radiative transfer models may soon approach that of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.

  16. Development of novel hybrid flexure-based microgrippers for precision micro-object manipulation.

    PubMed

    Mohd Zubir, Mohd Nashrul; Shirinzadeh, Bijan; Tian, Yanling

    2009-06-01

    This paper describes the process of developing a microgripper that is capable of high-precision, high-fidelity manipulation of micro-objects. The design adopts the concept of flexure-based hinges at its joints to provide the rotational motion, thus eliminating the inherent nonlinearities associated with the application of conventional rigid hinges. A combination of two modeling techniques, namely the pseudo-rigid-body model and finite element analysis, was utilized to expedite the prototyping procedure, leading to the establishment of a high performance mechanism. A new hybrid compliant structure integrating cantilever beam and flexural hinge configurations within the microgripper mechanism mainframe has been developed. This concept provides a novel approach to harness the advantages of each individual configuration while mutually compensating for the limitations inherent in them. A wire electrodischarge machining technique was utilized to fabricate the gripper out of high grade aluminum alloy (Al 7075-T6). Experimental studies were conducted on the model to obtain various correlations governing the gripper performance as well as for model verification. The experimental results demonstrate a high level of compliance in comparison to the computational results. A high amplification characteristic and a maximum achievable stroke of 100 μm were achieved.

  18. Integrated water system simulation by considering hydrological and biogeochemical processes: model development, with parameter sensitivity and autocalibration

    NASA Astrophysics Data System (ADS)

    Zhang, Y. Y.; Shao, Q. X.; Ye, A. Z.; Xing, H. T.; Xia, J.

    2016-02-01

    Integrated water system modeling is a feasible approach to understanding severe water crises in the world and promoting the implementation of integrated river basin management. In this study, a classic hydrological model (the time variant gain model: TVGM) was extended to an integrated water system model by coupling multiple water-related processes in hydrology, biogeochemistry, water quality, and ecology, and by considering the interference of human activities. A parameter analysis tool, which included sensitivity analysis, autocalibration and model performance evaluation, was developed to improve modeling efficiency. To demonstrate the model performance, the Shaying River catchment, which is the largest highly regulated and heavily polluted tributary of the Huai River basin in China, was selected as the case study area. The model performance was evaluated on the key water-related components including runoff, water quality, diffuse pollution load (or nonpoint sources) and crop yield. Results showed that our proposed model simulated most components reasonably well. The simulated daily runoff at most regulated and less-regulated stations matched well with the observations; the average correlation coefficient and Nash-Sutcliffe efficiency were 0.85 and 0.70, respectively. Both the simulated low and high flows at most stations were improved when dam regulation was considered. The daily ammonium-nitrogen (NH4-N) concentration was also well captured, with an average correlation coefficient of 0.67. Furthermore, the diffuse source load of NH4-N and the corn yield were reasonably simulated at the administrative region scale. This integrated water system model is expected to improve simulation performance as more model functionalities are added, and to provide a scientific basis for implementation in integrated river basin management.
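
    The two runoff scores reported above are standard and easy to state precisely. The sketch below gives their textbook definitions (generic code, not part of the TVGM implementation): Nash-Sutcliffe efficiency compares the model's squared error against that of always predicting the observed mean.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect match; <= 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def correlation(obs, sim):
    """Pearson correlation coefficient between observed and simulated series."""
    return float(np.corrcoef(obs, sim)[0, 1])
```

    On this scale, the reported NSE of 0.70 means the model removes 70% of the squared error that the observed-mean baseline would leave.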

  19. Study on Practical Application of Turboprop Engine Condition Monitoring and Fault Diagnostic System Using Fuzzy-Neuro Algorithms

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong; Kim, Keunwoo

    2013-03-01

    Neural networks are widely used in engine fault diagnostic systems due to their good learning performance, but they suffer from low accuracy and long learning times when building the learning database. This work inversely builds a base performance model of a turboprop engine for a high-altitude operation UAV using measured performance data, and proposes a fault diagnostic system using the base performance model and artificial intelligence methods such as fuzzy logic and neural networks. Each real engine performance model, named the base performance model because it can simulate a new engine's performance, is built inversely from its performance test data. The condition monitoring of each engine can therefore be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained on a fault learning database obtained from the developed base performance model. In learning the measured performance data of the faulted components, feed-forward back-propagation (FFBP) is used. For ease of use, the proposed diagnostic program is implemented with a GUI in MATLAB.

  20. Development of Fully-Integrated Micromagnetic Actuator Technologies

    DTIC Science & Technology

    2015-07-13

    nonexistent because of certain design and fabrication challenges, primarily the inability to integrate high-performance, permanent-magnet (magnetically ... efficiency necessary for certain applications. To enable the development of high-performance magnetic actuator technologies, the original research plan... developed permanent-magnet materials in more complex microfabrication process flows. Objective 2: Design, model, and optimize a novel multi-magnet

  1. Space Shuttle Main Engine structural analysis and data reduction/evaluation. Volume 3B: High pressure fuel turbo-pump preburner pump bearing assembly analysis

    NASA Technical Reports Server (NTRS)

    Power, Gloria B.; Violett, Rebeca S.

    1989-01-01

    The analysis performed on the High Pressure Oxidizer Turbopump (HPOTP) preburner pump bearing assembly located on the Space Shuttle Main Engine (SSME) is summarized. An ANSYS finite element model for the inlet assembly was built and executed. Thermal and static analyses were performed.

  2. Metabolic profiling of Hoodia, Chamomile, Terminalia Species and evaluation of commercial preparations using Ultra-High Performance Quadrupole Time of Flight-Mass Spectrometry

    USDA-ARS?s Scientific Manuscript database

    Ultra-High Performance Quadrupole Time of Flight Mass Spectrometry (UHPLC-QToF-MS) profiling has become an important tool for identification of marker compounds and generation of metabolic patterns that can be interrogated using chemometric modeling software. Chemometric approaches can be used to ana...

  3. A Study of a High Performing, High Poverty Elementary School on the Texas-Mexico Border

    ERIC Educational Resources Information Center

    Lopez, Cynthia Iris

    2012-01-01

    Transforming low performing schools to ensure the academic success of Hispanic children situated in poverty remains an educational challenge. External factors impacting student learning are often targeted as the main reasons for poor academic achievement, thereby advancing the culturally deficit model. This study is about an elementary school that…

  4. Academically Buoyant Students Are Less Anxious about and Perform Better in High-Stakes Examinations

    ERIC Educational Resources Information Center

    Putwain, David W.; Daly, Anthony L.; Chamberlain, Suzanne; Sadreddini, Shireen

    2015-01-01

    Background: Prior research has shown that test anxiety is negatively related to academic buoyancy, but it is not known whether test anxiety is an antecedent or outcome of academic buoyancy. Furthermore, it is not known whether academic buoyancy is related to performance on high-stakes examinations. Aims: To test a model specifying reciprocal…

  5. Test Scores, Dropout Rates, and Transfer Rates as Alternative Indicators of High School Performance

    ERIC Educational Resources Information Center

    Rumberger, Russell W.; Palardy, Gregory J.

    2005-01-01

    This study investigated the relationships among several different indicators of high school performance: test scores, dropout rates, transfer rates, and attrition rates. Hierarchical linear models were used to analyze panel data from a sample of 14,199 students who took part in the National Education Longitudinal Survey of 1988. The results…

  6. Performance variation due to stiffness in a tuna-inspired flexible foil model.

    PubMed

    Rosic, Mariel-Luisa N; Thornycroft, Patrick J M; Feilich, Kara L; Lucas, Kelsey N; Lauder, George V

    2017-01-17

    Tuna are fast, economical swimmers in part due to their stiff, high aspect ratio caudal fins and streamlined bodies. Previous studies using passive caudal fin models have suggested that while high aspect ratio tail shapes such as a tuna's generally perform well, tail performance cannot be determined from shape alone. In this study, we analyzed the swimming performance of tuna-tail-shaped hydrofoils of a wide range of stiffnesses, heave amplitudes, and frequencies to determine how stiffness and kinematics affect multiple swimming performance parameters for a single foil shape. We then compared the foil models' kinematics with published data from a live swimming tuna to determine how well the hydrofoil models could mimic fish kinematics. Foil kinematics over a wide range of motion programs generally showed a minimum lateral displacement at the narrowest part of the foil, and, immediately anterior to that, a local area of large lateral body displacement. These two kinematic patterns may enhance thrust in foils of intermediate stiffness. Stiffness and kinematics exhibited subtle interacting effects on hydrodynamic efficiency, with no one stiffness maximizing both thrust and efficiency. Foils of intermediate stiffnesses typically had the greatest coefficients of thrust at the highest heave amplitudes and frequencies. The comparison of foil kinematics with tuna kinematics showed that tuna motion is better approximated by a zero angle of attack foil motion program than by programs that do not incorporate pitch. These results indicate that open questions in biomechanics may be well served by foil models, given appropriate choice of model characteristics and control programs. Accurate replication of biological movements will require refinement of motion control programs and physical models, including the creation of models of variable stiffness.

  7. Cycle analysis of MCFC/gas turbine system

    NASA Astrophysics Data System (ADS)

    Musa, Abdullatif; Alaktiwi, Abdulsalam; Talbi, Mosbah

    2017-11-01

    High temperature fuel cells such as the solid oxide fuel cell (SOFC) and the molten carbonate fuel cell (MCFC) are considered extremely suitable for electrical power plant applications. The molten carbonate fuel cell (MCFC) performance is evaluated using a validated model for the internally reformed (IR) fuel cell. This model is integrated in Aspen Plus™. Several MCFC/gas turbine systems are therefore introduced and investigated. One of these, a new cycle, is called the heat recovery (HR) cycle. In the HR cycle, a regenerator is used to preheat water with the compressor outlet air, so the waste heat of the compressor outlet air and of the turbine exhaust gases is recovered and used to produce steam. This steam is injected into the gas turbine, resulting in a high specific power and a high thermal efficiency. The cycles are simulated in order to evaluate and compare their performances. Moreover, the effects of important parameters such as the ambient air temperature on cycle performance are evaluated. The simulation results show that the HR cycle has high efficiency.

  8. High performance addition-type thermoplastics (ATTs) - Evidence for the formation of a Diels-Alder adduct in the reaction of an acetylene-terminated material and a bismaleimide

    NASA Technical Reports Server (NTRS)

    Pater, R. H.; Soucek, M. D.; Chang, A. C.; Partos, R. D.

    1991-01-01

    Recently, the concept and demonstration of a new versatile synthetic reaction for making a large number of high-performance addition-type thermoplastics (ATTs) were reported. The synthesis shows promise for providing polymers having an attractive combination of easy processability, good toughness, respectable high temperature mechanical performance, and excellent thermo-oxidative stability. The new chemistry involves the reaction of an acetylene-terminated material with a bismaleimide or benzoquinone. In order to clarify the reaction mechanism, model compound studies were undertaken in solutions as well as in the solid state. The reaction products were purified by flash chromatography and characterized by conventional analytical techniques including NMR, FT-IR, UV-visible, mass spectroscopy, and high pressure liquid chromatography. The results are presented of the model compound studies which strongly support the formation of a Diels-Alder adduct in the reaction of an acetylene-terminated compound and a bismaleimide or benzoquinone.

  9. Monte Carlo Analysis of the Battery-Type High Temperature Gas Cooled Reactor

    NASA Astrophysics Data System (ADS)

    Grodzki, Marcin; Darnowski, Piotr; Niewiński, Grzegorz

    2017-12-01

    The paper presents a neutronic analysis of a battery-type 20 MWth high-temperature gas cooled reactor. The developed reactor model is based on publicly available data for an `early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas cooled reactor with a graphite reflector. Two alternative core designs were investigated. The first has a central reflector and 30×4 prismatic fuel blocks; the second has no central reflector and 37×4 blocks. The SERPENT Monte Carlo reactor physics computer code, with ENDF and JEFF nuclear data libraries, was applied. Several nuclear design static criticality calculations were performed and compared with available reference results. The analysis covered single assembly models and full core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis of the reflector graphite density was performed. An acceptable agreement between the calculations and the reference design was obtained. All calculations were performed for the fresh core state.

  10. Exciton delocalization incorporated drift-diffusion model for bulk-heterojunction organic solar cells

    NASA Astrophysics Data System (ADS)

    Wang, Zi Shuai; Sha, Wei E. I.; Choy, Wallace C. H.

    2016-12-01

    Modeling the charge-generation process is highly important to understand device physics and optimize power conversion efficiency of bulk-heterojunction organic solar cells (OSCs). Free carriers are generated by both ultrafast exciton delocalization and slow exciton diffusion and dissociation at the heterojunction interface. In this work, we developed a systematic numerical simulation to describe the charge-generation process by a modified drift-diffusion model. The transport, recombination, and collection of free carriers are incorporated to fully capture the device response. The theoretical results match well with the state-of-the-art high-performance organic solar cells. It is demonstrated that the increase of exciton delocalization ratio reduces the energy loss in the exciton diffusion-dissociation process, and thus, significantly improves the device efficiency, especially for the short-circuit current. By changing the exciton delocalization ratio, OSC performances are comprehensively investigated under the conditions of short-circuit and open-circuit. Particularly, bulk recombination dependent fill factor saturation is unveiled and understood. As a fundamental electrical analysis of the delocalization mechanism, our work is important to understand and optimize the high-performance OSCs.

  11. Assessment of the NeQuick-2 and IRI-Plas 2017 models using global and long-term GNSS measurements

    NASA Astrophysics Data System (ADS)

    Okoh, Daniel; Onwuneme, Sylvester; Seemala, Gopi; Jin, Shuanggen; Rabiu, Babatunde; Nava, Bruno; Uwamahoro, Jean

    2018-05-01

    The global ionospheric models NeQuick and IRI-Plas have been widely used; however, their uncertainties are not well characterized at global scale and over the long term. In this paper, a climatological assessment of the NeQuick and IRI-Plas models is conducted at a global scale using global navigation satellite system (GNSS) observations. GNSS observations from 36 globally distributed locations were used to evaluate the performance of both the NeQuick-2 and IRI-Plas 2017 models from January 2006 to July 2017, covering more than the 11-year period of a solar cycle. Diurnal profiles at hourly intervals, computed on a monthly basis, were used to measure deviations of the model estimations from the corresponding GNSS VTEC observations. Results show that both models fairly accurately follow the trends of the GNSS measurements. The NeQuick predictions were generally better than the IRI-Plas predictions at most stations and times. The mean annual prediction errors for the IRI-Plas model typically varied from about 3 TECU at the high latitude stations to about 12 TECU at the low latitude stations, while for the NeQuick the corresponding values are about 2-7 TECU. Out of a total of 4497 months in which GNSS data were available for all the stations put together for the entire period covered in this work, the NeQuick model performed better in about 83% of the months while the IRI-Plas performed better in about 17%. The IRI-Plas generally performed better than the NeQuick at certain locations (e.g. DAV1, KERG, and ADIS). For both models, most of the deviations occurred during local daytime and during seasons that receive maximum solar radiation at the various locations. In particular, the IRI-Plas model predictions improved during periods of increased solar activity at the low latitude stations. The IRI-Plas model overestimates the GNSS VTEC values, except during high solar activity years at some high latitude stations. The NeQuick underestimates the TEC values during the high solar activity years and overestimates them during local daytime in low and moderate solar activity years, but not as much as the IRI-Plas does.
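
    The deviation statistics behind such an assessment reduce to a few lines. The sketch below is illustrative (the function name and sample values are invented, and this is not the authors' code): it compares a model's VTEC series against the GNSS-derived series and reports mean bias in TECU (positive when the model overestimates), RMSE, and trend correlation.

```python
import numpy as np

def assess_vtec(gnss_vtec, model_vtec):
    g = np.asarray(gnss_vtec, float)
    m = np.asarray(model_vtec, float)
    bias = float(np.mean(m - g))                  # TECU; > 0: overestimation
    rmse = float(np.sqrt(np.mean((m - g) ** 2)))  # TECU
    corr = float(np.corrcoef(g, m)[0, 1])         # trend agreement
    return bias, rmse, corr
```

    Applied per station and per month, statistics of this kind yield the 2-7 TECU (NeQuick) versus 3-12 TECU (IRI-Plas) comparison reported above.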

  12. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    NASA Astrophysics Data System (ADS)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNNs) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as its mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with high rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates.
By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process and in sample creation.
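
    Affinity propagation, used above to group the commission-error tiles, chooses exemplars by passing "responsibility" and "availability" messages between points until a few points elect themselves as cluster centers. The following is a compact from-scratch sketch of Frey and Dueck's standard update rules; the damping value, the median-similarity preference, and any data fed to it are illustrative choices, not the authors' configuration.

```python
import numpy as np

def affinity_propagation(X, damping=0.7, iters=100):
    X = np.asarray(X, float)
    n = len(X)
    # Similarity = negative squared distance; the diagonal ("preference")
    # is set to the median similarity, which controls how many exemplars emerge.
    S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(S, np.median(S[~np.eye(n, dtype=bool)]))
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibility r(i,k): evidence that k should be i's exemplar,
        # relative to i's best alternative candidate.
        AS = A + S
        idx = AS.argmax(1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(1)
        R_new = S - first[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * R_new
        # Availability a(i,k): accumulated support for k being an exemplar.
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(0)[None, :] - Rp
        dA = A_new.diagonal().copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    exemplars = np.flatnonzero((R + A).diagonal() > 0)
    labels = exemplars[S[:, exemplars].argmax(1)]
    return exemplars, labels
```

    Each exemplar is itself a data point, which is what makes the method attractive here: the cluster centers are real tiles that can be inspected and used directly for training sample creation.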

  13. Performance Benefits for Wave Rotor-Topped Gas Turbine Engines

    NASA Technical Reports Server (NTRS)

    Jones, Scott M.; Welch, Gerard E.

    1996-01-01

    The benefits of wave rotor-topping in turboshaft engines, subsonic high-bypass turbofan engines, auxiliary power units, and ground power units are evaluated. The thermodynamic cycle performance is modeled using a one-dimensional steady-state code; wave rotor performance is modeled using one-dimensional design/analysis codes. Design and off-design engine performance is calculated for baseline engines and wave rotor-topped engines, where the wave rotor acts as a high pressure spool. The wave rotor-enhanced engines are shown to have benefits in specific power and specific fuel flow over the baseline engines without increasing turbine inlet temperature. The off-design steady-state behavior of a wave rotor-topped engine is shown to be similar to a conventional engine. Mission studies are performed to quantify aircraft performance benefits for various wave rotor cycle and weight parameters. Gas turbine engine cycles most likely to benefit from wave rotor-topping are identified. Issues of practical integration and the corresponding technical challenges with various engine types are discussed.

  14. Performance model-directed data sieving for high-performance I/O

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yong; Lu, Yin; Amritkar, Prathamesh

    2014-09-10

    Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique to improve the performance of noncontiguous I/O accesses by combining small and noncontiguous requests into a large and contiguous request. It has been proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach, namely performance model-directed data sieving, or PMD data sieving in short. It improves the existing data sieving approach in two aspects: (1) it dynamically determines when it is beneficial to perform data sieving; and (2) it dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces the memory consumption, as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing the memory pressure in a large-scale system, the proposed PMD data sieving approach holds great promise and will have an impact on high-performance I/O systems.
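As a hedged illustration of the two decisions PMD data sieving makes (whether to sieve, and how), the sketch below substitutes a simple coverage-ratio heuristic for the paper's performance model; the function name and threshold are hypothetical:

```python
import io

def sieved_read(fileobj, requests, hole_threshold=0.5):
    """Serve noncontiguous (offset, length) requests from a file.
    When the requested bytes cover enough of the spanning extent
    (few holes), issue ONE contiguous read and slice the pieces out
    in memory; otherwise read each piece individually."""
    requests = sorted(requests)
    start = requests[0][0]
    end = max(off + length for off, length in requests)
    wanted = sum(length for _, length in requests)
    if wanted / (end - start) >= hole_threshold:  # sieving judged beneficial
        fileobj.seek(start)
        buf = fileobj.read(end - start)           # one large contiguous read
        return [buf[off - start:off - start + length] for off, length in requests]
    pieces = []                                   # too sparse: plain reads
    for off, length in requests:
        fileobj.seek(off)
        pieces.append(fileobj.read(length))
    return pieces
```

Dense request lists take the single-read path (at the cost of reading the holes), while sparse lists fall back to individual reads, which is the trade-off the performance model in the paper quantifies dynamically.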

  15. Validation of High Frequency (HF) Propagation Prediction Models in the Arctic region

    NASA Astrophysics Data System (ADS)

    Athieno, R.; Jayachandran, P. T.

    2014-12-01

    Despite the emergence of modern techniques for long-distance communication, ionospheric communication in the high frequency (HF) band (3-30 MHz) remains significant to both civilian and military users. However, the efficient use of the ever-varying ionosphere as a propagation medium depends on the reliability of ionospheric and HF propagation prediction models. Most available models are empirical, implying that the underlying data collection has to be sufficiently large to provide good results. The models considered here were developed with little data from the high latitudes, which necessitates their validation. This paper presents the validation of three long-term HF propagation prediction models over a path within the Arctic region. Measurements of the Maximum Usable Frequency for a 3000 km range (MUF(3000)F2) for Resolute, Canada (74.75° N, 265.00° E), are obtained from hand-scaled ionograms generated by the Canadian Advanced Digital Ionosonde (CADI). The observations have been compared with predictions obtained from the Ionospheric Communication Enhanced Profile Analysis Program (ICEPAC), the Voice of America Coverage Analysis Program (VOACAP), and International Telecommunication Union Recommendation 533 (ITU-REC533) for 2009, 2011, 2012 and 2013. A statistical analysis shows that the monthly predictions reproduce the general features of the observations throughout the year, though this is more evident in the winter and equinox months. Both predictions and observations show diurnal and seasonal variation. The analysed models did not show large differences in overall performance. However, there are noticeable differences across seasons for the entire period analysed: REC533 performs better in winter months, while VOACAP performs better in both equinox and summer months.
VOACAP also gives better daily predictions than ICEPAC, though, in general, the monthly predictions agree with the observations more closely than the daily predictions do.

  16. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  17. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    PubMed Central

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, implemented on a graphics processing unit (GPU) for high-performance, scalable parallel model fitting. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit in applications to superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
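The serial core that GPU-LMFit parallelizes per pixel is the classic Levenberg-Marquardt update. A minimal CPU sketch follows, fitted to a synthetic single-exponential decay of the kind encountered in lifetime imaging; the model and numbers are illustrative, not taken from the paper:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: solve
    (J^T J + lam * diag(J^T J)) dp = -J^T r, and adapt lam."""
    p = np.asarray(p0, dtype=float)
    cost = float(np.sum(residual(p) ** 2))
    for _ in range(iters):
        J = jacobian(p)
        r = residual(p)
        JTJ = J.T @ J
        dp = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), -J.T @ r)
        new_cost = float(np.sum(residual(p + dp) ** 2))
        if new_cost < cost:   # good step: accept, trust Gauss-Newton more
            p, cost, lam = p + dp, new_cost, lam * 0.3
        else:                 # bad step: keep p, damp toward gradient descent
            lam *= 10.0
    return p

# Illustrative pixel model: single-exponential decay, as in lifetime imaging
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-t / 1.3)          # synthetic noiseless "pixel" data

def residual(p):
    amp, tau = p
    return amp * np.exp(-t / tau) - y

def jacobian(p):
    amp, tau = p
    e = np.exp(-t / tau)
    return np.column_stack([e, amp * e * t / tau ** 2])

amp_fit, tau_fit = levenberg_marquardt(residual, jacobian, [1.0, 1.0])
```

In a parametric imaging setting, this whole loop runs independently for every pixel, which is exactly the data-parallel structure a GPU implementation exploits.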

  18. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    NASA Astrophysics Data System (ADS)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting the reference evapotranspiration (ET0) process is essential for several agricultural and hydrological applications, especially rural water resource systems, water use allocations, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small number of climatic inputs derived from the minimum and maximum air temperatures and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), integrated adaptive neuro-fuzzy inference systems with grid partitioning and subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The monthly time scale data set was collected at the Antalya and Isparta stations, which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. Validation accuracy was assessed with multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R²), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the models were highly practical and reliable for the investigated case studies. At the Antalya station, the GEP and GRNN models performed better than the other investigated models, while the RBNN and ANFIS-SC models performed best at the Isparta station. Except for the MLP model, all the investigated models were more accurate than the HS and CHS empirical models when applied in a cross-station scenario.
A cross-station scenario means predicting the ET0 of a station using the input data of a nearby station. The CHS model outperformed the original HS equation in all cases.
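For reference, the widely used form of the Hargreaves-Samani equation needs only the inputs listed above (air temperature extremes and extraterrestrial radiation). The sketch below uses the standard coefficients; the calibrated CHS variant refits the coefficient and exponent to local data:

```python
def hargreaves_samani(tmax_c, tmin_c, ra_mj, c=0.0023, e=0.5):
    """Hargreaves-Samani reference evapotranspiration (mm/day).

    tmax_c, tmin_c : daily max/min air temperature (deg C)
    ra_mj          : extraterrestrial radiation (MJ m-2 day-1);
                     0.408 converts it to equivalent mm/day
    c, e           : HS coefficient/exponent; the calibrated (CHS)
                     variant refits these to local station data
    """
    tmean = (tmax_c + tmin_c) / 2.0
    return c * 0.408 * ra_mj * (tmean + 17.8) * (tmax_c - tmin_c) ** e
```

For example, `hargreaves_samani(30.0, 20.0, 25.0)` evaluates to roughly 3.2 mm/day, a plausible magnitude for a warm Mediterranean day.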

  19. A psychoecological model of academic performance among Hispanic adolescents.

    PubMed

    Chun, Heejung; Dickson, Ginger

    2011-12-01

    Although the number of students who complete high school continues to rise, dramatic differences in school success remain across racial/ethnic groups. The current study addressed Hispanic adolescents' academic performance by investigating the relationships of parental involvement, culturally responsive teaching, sense of school belonging, and academic self-efficacy and academic performance. Participants were 478 (51.5% female) Hispanic 7th graders in the US-Mexico borderlands. Based on Bronfenbrenner's ecological systems theory, a structural model was tested. Results showed that the proposed model was supported by demonstrating significant indirect effects of parental involvement, culturally responsive teaching, and sense of school belonging on academic performance. Furthermore, academic self-efficacy was found to mediate the relationships between parental involvement, culturally responsive teaching, and sense of school belonging and academic performance. The current study provides a useful psychoecological model to inform educators and psychologists who seek to meet the needs of Hispanic students.

  20. HRST architecture modeling and assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comstock, D.A.

    1997-01-01

    This paper presents work supporting the assessment of advanced concept options for the Highly Reusable Space Transportation (HRST) study. It describes the development of computer models as the basis for creating an integrated capability to evaluate the economic feasibility and sustainability of a variety of system architectures. It summarizes modeling capabilities for use on the HRST study to perform sensitivity analysis of alternative architectures (consisting of different combinations of highly reusable vehicles, launch assist systems, and alternative operations and support concepts) in terms of cost, schedule, performance, and demand. In addition, the identification and preliminary assessment of alternative market segments for HRST applications, such as space manufacturing, space tourism, etc., is described. Finally, the development of an initial prototype model that can begin to be used for modeling alternative HRST concepts at the system level is presented. © 1997 American Institute of Physics.

  1. Nonlinear stability and control study of highly maneuverable high performance aircraft, phase 2

    NASA Technical Reports Server (NTRS)

    Mohler, R. R.

    1992-01-01

    This research should lead to the development of new nonlinear methodologies for the adaptive control and stability analysis of high angle-of-attack aircraft such as the F18 (HARV). The emphasis has been on nonlinear adaptive control, but associated model development, system identification, stability analysis, and simulation are performed in some detail as well. Various models under investigation for different purposes are summarized in tabular form. Models and simulation for the longitudinal dynamics have been developed for all types except the nonlinear ordinary differential equation model. Briefly, completed studies indicate that nonlinear adaptive control can outperform linear adaptive control for rapid maneuvers with large changes in alpha. Transient responses are compared for a maneuver in which the desired alpha varies from 5 degrees to 60 degrees, then to 30 degrees, and back to 5 degrees, all in about 16 sec. Here, the horizontal stabilator is the only control used, with an assumed first-order linear actuator with a 1/30 sec time constant.

  2. Predicting the breakdown strength and lifetime of nanocomposites using a multi-scale modeling approach

    NASA Astrophysics Data System (ADS)

    Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.

    2017-08-01

    It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.

  3. Development, Analysis and Testing of the High Speed Research Flexible Semispan Model

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Spain, Charles V.; Turnock, David L.; Rausch, Russ D.; Hamouda, M-Nabil; Vogler, William A.; Stockwell, Alan E.

    1999-01-01

    This report presents the work performed by Lockheed Martin Engineering and Sciences (LMES) in support of the High Speed Research (HSR) Flexible Semispan Model (FSM) wind-tunnel test. The test was conducted in order to assess the aerodynamic and aeroelastic character of a flexible high speed civil transport wing. Data was acquired for the purpose of code validation and trend evaluation for this type of wing. The report describes a number of activities in preparing for and conducting the wind-tunnel test. These included coordination of the design and fabrication, development of analytical models, analysis/hardware correlation, performance of laboratory tests, monitoring of model safety issues, and wind-tunnel data acquisition and reduction. Descriptions and relevant evaluations associated with the pretest data are given in sections 1 through 6, followed by pre- and post-test flutter analysis in section 7, and the results of the aerodynamics/loads test in section 8. Finally, section 9 provides some recommendations based on lessons learned throughout the FSM program.

  4. FE Simulation Models for Hot Stamping an Automobile Component with Tailor-Welded High-Strength Steels

    NASA Astrophysics Data System (ADS)

    Tang, Bingtao; Wang, Qiaoling; Wei, Zhaohui; Meng, Xianju; Yuan, Zhengjun

    2016-05-01

    Ultra-high strength in sheet metal parts can be achieved with the hot stamping process. To improve crash performance and save vehicle weight, it is necessary to produce components with tailored properties. The use of tailor-welded high-strength steel is a relatively new hot stamping process for saving weight and obtaining the desired local stiffness and crash performance. The simulation of hot stamping boron steel, especially tailor-welded blank (TWB) stamping, is more complex and challenging. Detailed information is required about the thermal/mechanical properties of the tools and sheet materials, the heat transfer, and the friction between the deforming material and the tools. In this study, the boron-manganese steel B1500HS and the high-strength low-alloy steel B340LA are tailor welded and hot stamped. In order to precisely simulate the hot stamping process, the modeling and simulation of hot stamping tailor-welded high-strength steels, including phase transformation modeling, thermal modeling, and thermal-mechanical modeling, is investigated. Meanwhile, the model of the welding zone of the tailor-welded blanks should be sufficiently accurate to describe its thermal, mechanical, and metallurgical parameters. An FE simulation model using TWBs with the thickness combination of 1.6 mm boron steel and 1.2 mm low-alloy steel is established. In order to evaluate the mechanical properties of the hot stamped automotive component (a mini b-pillar), the hardness and microstructure at each region are investigated. The comparisons between simulated results and experimental observations show the reliability of the thermo-mechanical and metallurgical modeling strategies for the TWB hot stamping process.

  5. A Framework for Widespread Replication of a Highly Spatially Resolved Childhood Lead Exposure Risk Model

    PubMed Central

    Kim, Dohyeong; Galeano, M. Alicia Overstreet; Hull, Andrew; Miranda, Marie Lynn

    2008-01-01

    Background Preventive approaches to childhood lead poisoning are critical for addressing this longstanding environmental health concern. Moreover, increasing evidence of cognitive effects of blood lead levels < 10 μg/dL highlights the need for improved exposure prevention interventions. Objectives Geographic information system–based childhood lead exposure risk models, especially if executed at highly resolved spatial scales, can help identify children most at risk of lead exposure, as well as prioritize and direct housing and health-protective intervention programs. However, developing highly resolved spatial data requires labor- and time-intensive geocoding and analytical processes. In this study we evaluated the benefit of increased effort spent geocoding in terms of improved performance of lead exposure risk models. Methods We constructed three childhood lead exposure risk models based on established methods but using different levels of geocoded data from blood lead surveillance, county tax assessors, and the 2000 U.S. Census for 18 counties in North Carolina. We used the results to predict lead exposure risk levels mapped at the individual tax parcel unit. Results The models performed well enough to identify high-risk areas for targeted intervention, even with a relatively low level of effort on geocoding. Conclusions This study demonstrates the feasibility of widespread replication of highly spatially resolved childhood lead exposure risk models. The models guide resource-constrained local health and housing departments and community-based organizations on how best to expend their efforts in preventing and mitigating lead exposure risk in their communities. PMID:19079729

  6. Evaluation of a black-footed ferret resource utilization function model

    USGS Publications Warehouse

    Eads, D.A.; Millspaugh, J.J.; Biggins, D.E.; Jachowski, D.S.; Livieri, T.M.

    2011-01-01

    Resource utilization function (RUF) models permit evaluation of potential habitat for endangered species; ideally such models should be evaluated before use in management decision-making. We evaluated the predictive capabilities of a previously developed black-footed ferret (Mustela nigripes) RUF. Using the population-level RUF, generated from ferret observations at an adjacent yet distinct colony, we predicted the distribution of ferrets within a black-tailed prairie dog (Cynomys ludovicianus) colony in the Conata Basin, South Dakota, USA. We evaluated model performance, using data collected during post-breeding spotlight surveys (2007-2008) by assessing model agreement via weighted compositional analysis and count-metrics. Compositional analysis of home range use and colony-level availability, and core area use and home range availability, demonstrated ferret selection of the predicted Very high and High occurrence categories in 2007 and 2008. Simple count-metrics corroborated these findings and suggested selection of the Very high category in 2007 and the Very high and High categories in 2008. Collectively, these results suggested that the RUF was useful in predicting occurrence and intensity of space use of ferrets at our study site, the 2 objectives of the RUF. Application of this validated RUF would increase the resolution of habitat evaluations, permitting prediction of the distribution of ferrets within distinct colonies. Additional model evaluation at other sites, on other black-tailed prairie dog colonies of varying resource configuration and size, would increase understanding of influences upon model performance and the general utility of the RUF. © 2011 The Wildlife Society.

  7. Mechanical Performance of Asphalt Mortar Containing Hydrated Lime and EAFSS at Low and High Temperatures.

    PubMed

    Moon, Ki Hoon; Falchetto, Augusto Cannone; Wang, Di; Riccardi, Chiara; Wistuba, Michael P

    2017-07-03

    In this paper, the possibility of improving the global response of asphalt materials for pavement applications through the use of hydrated lime and Electric Arc-Furnace Steel Slag (EAFSS) was investigated. For this purpose, a set of asphalt mortars was prepared by mixing two different asphalt binders with fine granite aggregate together with hydrated lime or EAFSS at three different percentages. Bending Beam Rheometer (BBR) creep tests and Dynamic Shear Rheometer (DSR) complex modulus tests were performed to evaluate the material response both at low and high temperature. Then, the rheological Huet model was fitted to the BBR creep results for estimating the impact of filler content on the model parameters. It was found that an addition of hydrated lime and EAFSS up to 10% and 5%, respectively, results in satisfactory low-temperature performance with a substantial improvement of the high-temperature behavior.

  8. High-precision buffer circuit for suppression of regenerative oscillation

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Hare, David A.; Tcheng, Ping

    1995-01-01

    Precision analog signal conditioning electronics have been developed for wind tunnel model attitude inertial sensors. This application requires low-noise, stable, microvolt-level DC performance and a high-precision buffered output. Capacitive loading of the operational amplifier output stages due to the wind tunnel analog signal distribution facilities caused regenerative oscillation and consequent rectification bias errors. Oscillation suppression techniques commonly used in audio applications were inadequate to maintain the performance requirements for the measurement of attitude for wind tunnel models. Feedback control theory is applied to develop a suppression technique based on a known compensation (snubber) circuit, which provides superior oscillation suppression with high output isolation and preserves the low-noise low-offset performance of the signal conditioning electronics. A practical design technique is developed to select the parameters for the compensation circuit to suppress regenerative oscillation occurring when typical shielded cable loads are driven.

  9. Mechanical Performance of Asphalt Mortar Containing Hydrated Lime and EAFSS at Low and High Temperatures

    PubMed Central

    Moon, Ki Hoon; Wang, Di; Riccardi, Chiara; Wistuba, Michael P.

    2017-01-01

    In this paper, the possibility of improving the global response of asphalt materials for pavement applications through the use of hydrated lime and Electric Arc-Furnace Steel Slag (EAFSS) was investigated. For this purpose, a set of asphalt mortars was prepared by mixing two different asphalt binders with fine granite aggregate together with hydrated lime or EAFSS at three different percentages. Bending Beam Rheometer (BBR) creep tests and Dynamic Shear Rheometer (DSR) complex modulus tests were performed to evaluate the material response both at low and high temperature. Then, the rheological Huet model was fitted to the BBR creep results for estimating the impact of filler content on the model parameters. It was found that an addition of hydrated lime and EAFSS up to 10% and 5%, respectively, results in satisfactory low-temperature performance with a substantial improvement of the high-temperature behavior. PMID:28773100

  10. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; Dawson, Andrew

    2017-03-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelization to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. In this paper, we present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform model simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13 % for the shallow water model.
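The backup-grid mechanism can be sketched in a few lines: keep a coarse copy of each prognostic field, periodically compare the field's coarse average against it, and restore any fine-grid patch that has drifted implausibly far. This is a schematic reconstruction of the idea, not the paper's shallow-water code; function names and the tolerance are placeholders:

```python
import numpy as np

def restrict(field, factor):
    """Average the fine-grid field onto the coarse backup grid."""
    n = field.shape[0] // factor
    m = field.shape[1] // factor
    return field.reshape(n, factor, m, factor).mean(axis=(1, 3))

def prolong(coarse, factor):
    """Copy each coarse cell back over its fine-grid patch."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def check_and_restore(field, backup, factor, tol):
    """Compare the field's coarse restriction against the stored backup;
    where they disagree beyond tol (a presumed hardware fault), overwrite
    the affected fine-grid patch from the backup copy."""
    bad = np.abs(restrict(field, factor) - backup) > tol
    if bad.any():
        mask = prolong(bad, factor)
        field[mask] = prolong(backup, factor)[mask]
    return field, bool(bad.any())
```

The restored patch is only a coarse approximation of the lost fine-grid values, which matches the paper's finding that simulations continue with acceptable quality rather than bit-identical state.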

  11. Assessing effects of variation in global climate data sets on spatial predictions from climate envelope models

    USGS Publications Warehouse

    Romañach, Stephanie; Watling, James I.; Fletcher, Robert J.; Speroterra, Carolina; Bucklin, David N.; Brandt, Laura A.; Pearlstine, Leonard G.; Escribano, Yesenia; Mazzotti, Frank J.

    2014-01-01

    Climate change poses new challenges for natural resource managers. Predictive modeling of species–environment relationships using climate envelope models can enhance our understanding of climate change effects on biodiversity, assist in assessment of invasion risk by exotic organisms, and inform life-history understanding of individual species. While increasing interest has focused on the role of uncertainty in future conditions on model predictions, models also may be sensitive to the initial conditions on which they are trained. Although climate envelope models are usually trained using data on contemporary climate, we lack systematic comparisons of model performance and predictions across alternative climate data sets available for model training. Here, we seek to fill that gap by comparing variability in predictions between two contemporary climate data sets to variability in spatial predictions among three alternative projections of future climate. Overall, correlations between monthly temperature and precipitation variables were very high for both contemporary and future data. Model performance varied across algorithms, but not between two alternative contemporary climate data sets. Spatial predictions varied more among alternative general-circulation models describing future climate conditions than between contemporary climate data sets. However, we did find that climate envelope models with low Cohen's kappa scores made more discrepant spatial predictions between climate data sets for the contemporary period than did models with high Cohen's kappa scores. We suggest conservation planners evaluate multiple performance metrics and be aware of the importance of differences in initial conditions for spatial predictions from climate envelope models.
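Cohen's kappa, used above to stratify the models, is agreement between predicted and observed presence/absence corrected for the agreement expected by chance. A minimal sketch for binary maps:

```python
def cohens_kappa(pred, obs):
    """Cohen's kappa for binary presence/absence vectors:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(pred)
    po = sum(p == o for p, o in zip(pred, obs)) / n       # observed agreement
    p1 = sum(pred) / n                                    # predicted 'present' rate
    o1 = sum(obs) / n                                     # observed 'present' rate
    pe = p1 * o1 + (1 - p1) * (1 - o1)                    # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and near 0 when the maps agree no more than chance, which is why low-kappa envelope models flagging discrepant predictions is an informative diagnostic.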

  12. In use performance of catalytic converters on properly maintained high mileage vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabourin, M.A.; Larson, R.E.; Donahue, K.S.

    1986-01-01

    A test program to evaluate the performance of catalytic converters from fifty-six properly maintained, high-mileage 1981 and 1982 model year in-use vehicles (from 21 engine families) was performed by the Certification Division of the Office of Mobile Sources (EPA). The program is called the Catalyst Change Program. All program vehicles were screened for proper maintenance and for mileages ranging from 35,000 to 60,000 miles. Among vehicles belonging to 21 high sales volume and high technology engine and emission control system designs tested, poor catalyst performance was determined to be a significant contributor to emissions failure of properly maintained vehicles at or near their warranted useful life mileage.

  13. High-performance web services for querying gene and variant annotation.

    PubMed

    Xin, Jiwen; Mark, Adam; Afrasiabi, Cyrus; Tsueng, Ginger; Juchler, Moritz; Gopal, Nikhil; Stupp, Gregory S; Putman, Timothy E; Ainscough, Benjamin J; Griffith, Obi L; Torkamani, Ali; Whetzel, Patricia L; Mungall, Christopher J; Mooney, Sean D; Su, Andrew I; Wu, Chunlei

    2016-05-06

    Efficient tools for data management and integration are essential for many aspects of high-throughput biology. In particular, annotations of genes and human genetic variants are commonly used but highly fragmented across many resources. Here, we describe MyGene.info and MyVariant.info, high-performance web services for querying gene and variant annotation information. These web services are currently accessed more than three million times per month. They also demonstrate a generalizable cloud-based model for organizing and querying biological annotation information. MyGene.info and MyVariant.info are provided as high-performance web services, accessible at http://mygene.info and http://myvariant.info. Both are offered free of charge to the research community.
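A query against such a service is a plain HTTP GET. The sketch below only constructs the request URL (no network call is made); the parameter names follow the public MyGene.info v3 API, though the particular fields requested here are illustrative:

```python
from urllib.parse import urlencode

# Build (but do not send) a MyGene.info v3 query for a gene symbol.
# A live GET on this URL would return JSON hits (gene id, symbol, name)
# for genes matching the query.
base = "http://mygene.info/v3/query"
params = {"q": "symbol:CDK2", "species": "human", "fields": "symbol,name"}
url = base + "?" + urlencode(params)
```

Any HTTP client (a browser, curl, or the requests library) can then fetch the URL and parse the JSON response.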

  14. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
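The flow-down idea can be made concrete with the common assumption that independent error contributors combine in root-sum-square (RSS); the proportional weighting below is a generic sketch, not the paper's allocation method:

```python
import math

def flow_down(total_allowable, weights):
    """Allocate a top-level allowable error to sub-terms so that the
    root-sum-square of the allocations equals the total, split in
    proportion to the given weights (independent-error assumption)."""
    norm = math.sqrt(sum(w * w for w in weights))
    return [total_allowable * w / norm for w in weights]

def rss(errors):
    """Root-sum-square combination of independent error terms."""
    return math.sqrt(sum(e * e for e in errors))
```

For instance, flowing a 10-unit requirement down with weights [3, 4] yields allocations of 6 and 8 units, whose RSS recovers the original 10-unit budget.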

  15. NEXT Ion Thruster Performance Dispersion Analyses

    NASA Technical Reports Server (NTRS)

    Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The NEXT ion thruster is a low specific mass, high performance thruster with a nominal throttling range of 0.5 to 7 kW. Numerous engineering model and one prototype model thrusters have been manufactured and tested. Of significant importance to propulsion system performance is thruster-to-thruster performance dispersions. This type of information can provide a bandwidth of expected performance variations both on a thruster and a component level. Knowledge of these dispersions can be used to more conservatively predict thruster service life capability and thruster performance for mission planning, facilitate future thruster performance comparisons, and verify power processor capabilities are compatible with the thruster design. This study compiles the test results of five engineering model thrusters and one flight-like thruster to determine unit-to-unit dispersions in thruster performance. Component level performance dispersion analyses will include discharge chamber voltages, currents, and losses; accelerator currents, electron backstreaming limits, and perveance limits; and neutralizer keeper and coupling voltages and the spot-to-plume mode transition flow rates. Thruster level performance dispersion analyses will include thrust efficiency.

  16. Probabilistic material strength degradation model for Inconel 718 components subjected to high temperature, high-cycle and low-cycle mechanical fatigue, creep and thermal fatigue effects

    NASA Technical Reports Server (NTRS)

    Bast, Callie C.; Boyce, Lola

    1995-01-01

    This report presents the results of both the fifth and sixth year efforts of a research program conducted for NASA-LeRC by The University of Texas at San Antonio (UTSA). The research included on-going development of methodology for a probabilistic material strength degradation model. The probabilistic model, in the form of a postulated randomized multifactor equation, provides for quantification of uncertainty in the lifetime material strength of aerospace propulsion system components subjected to a number of diverse random effects. This model is embodied in the computer program entitled PROMISS, which can include up to eighteen different effects. Presently, the model includes five effects that typically reduce lifetime strength: high temperature, high-cycle mechanical fatigue, low-cycle mechanical fatigue, creep and thermal fatigue. Statistical analysis was conducted on experimental Inconel 718 data obtained from the open literature. This analysis provided regression parameters for use as the model's empirical material constants, thus calibrating the model specifically for Inconel 718. Model calibration was carried out for five variables, namely, high temperature, high-cycle and low-cycle mechanical fatigue, creep and thermal fatigue. Methodology to estimate standard deviations of these material constants for input into the probabilistic material strength model was developed. Using an updated version of PROMISS, entitled PROMISS93, a sensitivity study for the combined effects of high-cycle mechanical fatigue, creep and thermal fatigue was performed. Then using the current version of PROMISS, entitled PROMISS94, a second sensitivity study including the effect of low-cycle mechanical fatigue, as well as the three previous effects, was performed. Results, in the form of cumulative distribution functions, illustrated the sensitivity of lifetime strength to any current value of an effect.
In addition, verification studies comparing a combination of high-cycle mechanical fatigue and high temperature effects by model to the combination by experiment were conducted. Thus, for Inconel 718, the basic model assumption of independence between effects was evaluated. Results from this limited verification study strongly supported this assumption.
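    The multifactor model described above lends itself to a short numerical sketch. The block below is an illustration only: it assumes a generic multiplicative form in which each random effect contributes a fractional strength-reduction factor, with all effect values, spreads, and exponents invented for demonstration; these are not the PROMISS equations nor the Inconel 718 calibration constants.

```python
import random

def lifetime_strength(effects):
    """Generic multiplicative multifactor degradation: each effect
    (current value, ultimate value, empirical exponent) contributes a
    fractional strength-reduction factor in (0, 1]."""
    s = 1.0
    for current, ultimate, exponent in effects:
        s *= max(0.0, 1.0 - current / ultimate) ** exponent
    return s

def monte_carlo_cdf(n=10000, seed=0):
    """Sample the random effect variables and return sorted strengths,
    i.e. an empirical CDF of normalized lifetime strength."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Placeholder effect distributions (illustrative only).
        effects = [
            (rng.gauss(600, 30), 1200, 0.5),   # temperature, K
            (rng.gauss(1e5, 1e4), 1e7, 0.3),   # high-cycle fatigue cycles
            (rng.gauss(1e3, 1e2), 1e5, 0.4),   # low-cycle fatigue cycles
        ]
        samples.append(lifetime_strength(effects))
    return sorted(samples)

cdf = monte_carlo_cdf()
median = cdf[len(cdf) // 2]
```

    Sorting the sampled strengths yields an empirical cumulative distribution function of lifetime strength, analogous in spirit to the CDFs produced by PROMISS.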

  17. Scale model performance test investigation of mixed flow exhaust systems for an energy efficient engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1983-01-01

    As part of the NASA Energy Efficient Engine program, scale-model performance tests of a mixed flow exhaust system were conducted. The tests were used to evaluate the performance of exhaust system mixers for high-bypass, mixed-flow turbofan engines. The tests indicated that: (1) mixer penetration has the most significant effect on both mixing effectiveness and mixer pressure loss; (2) increased mixing/tailpipe length improves mixing effectiveness; (3) reducing the gap between the mixer and centerbody increases mixing effectiveness; (4) mixer cross-sectional shape influences mixing effectiveness; (5) lobe number affects the degree of mixing; and (6) mixer aerodynamic pressure losses are a function of secondary flows inherent to the lobed mixer concept.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat; Cole, Wesley

    This poster is based on the paper of the same name, presented at the IEEE Power & Energy Society General Meeting, July 18, 2016. Power sector capacity expansion models (CEMs) have a broad range of spatial resolutions. This paper uses the Regional Energy Deployment System (ReEDS) model, a long-term national scale electric sector CEM, to evaluate the value of high spatial resolution for CEMs. ReEDS models the United States with 134 load balancing areas (BAs) and captures the variability in existing generation parameters, future technology costs, performance, and resource availability using very high spatial resolution data, especially for wind and solar, modeled at 356 resource regions. In this paper we perform planning studies at three different spatial resolutions - native resolution (134 BAs), state-level, and NERC region level - and evaluate how results change under different levels of spatial aggregation in terms of renewable capacity deployment and location, associated transmission builds, and system costs. The results are used to ascertain the value of highly geographically resolved models in terms of their impact on relative competitiveness among renewable energy resources.

  19. Capsule modeling of high foot implosion experiments on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, D. S.; Kritcher, A. L.; Milovich, J. L.

    This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.

  20. Laser Lightcraft Performance

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Wei, Hong

    2000-01-01

    The purpose of this study is to establish the technical ground for modeling the physics of the laser-powered pulse detonation phenomenon. The principle of laser-powered propulsion is that when a high-powered laser is focused on a small area near the surface of a thruster, the intense energy causes electrical breakdown of the working fluid (e.g., air), forming a high-speed plasma (through the inverse Bremsstrahlung, IB, effect). The intense heat and high pressure created in the plasma then cause the surrounding gas to heat up and expand until thrust-producing shock waves are formed. This complex process of gas ionization, increased radiation absorption, and the formation of plasma and shock waves will be investigated in the development of the present numerical model. In the first phase of this study, laser light focusing, radiation absorption and shock wave propagation over the entire pulsed cycle are modeled. The model geometry and test conditions of known benchmark experiments, such as those in Myrabo's experiment, will be employed in the numerical model validation simulations. The calculated performance data will be compared to the test data.

  1. Structure-based capacitance modeling and power loss analysis for the latest high-performance slant field-plate trench MOSFET

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kenya; Sudo, Masaki; Omura, Ichiro

    2018-04-01

    Field-plate trench MOSFETs (FP-MOSFETs), with the features of ultralow on-resistance and very low gate–drain charge, are currently the mainstream of high-performance applications and their advancement is continuing as low-voltage silicon power devices. However, owing to their structure, their output capacitance (Coss), which is a main source of power loss, remains a problem, especially in megahertz switching. In this study, we propose a structure-based capacitance model of FP-MOSFETs for calculating power loss easily under various conditions. Appropriate equations were modeled for the Coss curves as three divided components. The output charge (Qoss) and stored energy (Eoss) calculated using the model corresponded well to technology computer-aided design (TCAD) simulation, and we validated the accuracy of the model quantitatively. In the power loss analysis of FP-MOSFETs, turn-off loss was sufficiently suppressed; however, Qoss loss increased with switching frequency. This analysis reveals that Qoss may become a significant issue in next-generation high-efficiency FP-MOSFETs.
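    The frequency dependence of the Qoss-related loss can be sketched numerically. The snippet below integrates a hypothetical nonlinear output-capacitance curve to obtain Qoss and Eoss, then scales the stored energy by switching frequency; the Coss(v) expression and all numbers are invented placeholders, not the structure-based model or device data from the paper.

```python
def q_oss(c_oss, v_ds, dv=0.1):
    """Charge stored in the nonlinear output capacitance up to v_ds:
    Q_oss = integral of C_oss(v) dv (trapezoidal rule)."""
    q = 0.0
    for i in range(int(v_ds / dv)):
        v0, v1 = i * dv, (i + 1) * dv
        q += 0.5 * (c_oss(v0) + c_oss(v1)) * dv
    return q

def e_oss(c_oss, v_ds, dv=0.1):
    """Stored energy E_oss = integral of C_oss(v) * v dv (trapezoidal rule)."""
    e = 0.0
    for i in range(int(v_ds / dv)):
        v0, v1 = i * dv, (i + 1) * dv
        e += 0.5 * (c_oss(v0) * v0 + c_oss(v1) * v1) * dv
    return e

# Hypothetical C_oss(v) curve: large at low V_DS, rolling off as the drift
# region depletes. Not fitted to any real FP-MOSFET.
c_oss = lambda v: 2e-9 / (1.0 + v / 5.0)

f_sw = 1e6                            # 1 MHz switching frequency
p_qoss = e_oss(c_oss, 30.0) * f_sw    # watts dissipated per cycle x frequency
```

    At a fixed bus voltage the stored energy per cycle is constant, so this loss component scales linearly with switching frequency, which is why it comes to dominate in megahertz operation.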

  2. Capsule modeling of high foot implosion experiments on the National Ignition Facility

    DOE PAGES

    Clark, D. S.; Kritcher, A. L.; Milovich, J. L.; ...

    2017-03-21

    This study summarizes the results of detailed, capsule-only simulations of a set of high foot implosion experiments conducted on the National Ignition Facility (NIF). These experiments span a range of ablator thicknesses, laser powers, and laser energies, and modeling these experiments as a set is important to assess whether the simulation model can reproduce the trends seen experimentally as the implosion parameters were varied. Two-dimensional (2D) simulations have been run including a number of effects—both nominal and off-nominal—such as hohlraum radiation asymmetries, surface roughness, the capsule support tent, and hot electron pre-heat. Selected three-dimensional simulations have also been run to assess the validity of the 2D axisymmetric approximation. As a composite, these simulations represent the current state of understanding of NIF high foot implosion performance using the best and most detailed computational model available. While the most detailed simulations show approximate agreement with the experimental data, it is evident that the model remains incomplete and further refinements are needed. Nevertheless, avenues for improved performance are clearly indicated.

  3. Modeling the behavior of an earthquake base-isolated building.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coveney, V. A.; Jamil, S.; Johnson, D. E.

    1997-11-26

    Protecting a structure against earthquake excitation by supporting it on laminated elastomeric bearings has become a widely accepted practice. The ability to perform accurate simulation of the system, including FEA of the bearings, would be desirable--especially for key installations. In this paper attempts to model the behavior of elastomeric earthquake bearings are outlined. Attention is focused on modeling highly-filled, low-modulus, high-damping elastomeric isolator systems; comparisons are made between standard triboelastic solid model predictions and test results.

  4. Early experiences in developing and managing the neuroscience gateway.

    PubMed

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T

    2015-02-01

    The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, with the complex user interfaces of these machines, and with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway.

  5. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  6. Low Noise Exhaust Nozzle Technology Development

    NASA Technical Reports Server (NTRS)

    Majjigi, R. K.; Balan, C.; Mengle, V.; Brausch, J. F.; Shin, H.; Askew, J. W.

    2005-01-01

    NASA and the U.S. aerospace industry have been assessing the economic viability and environmental acceptability of a second-generation supersonic civil transport, or High Speed Civil Transport (HSCT). Development of a propulsion system that satisfies strict airport noise regulations and provides high levels of cruise and transonic performance with adequate takeoff performance, at an acceptable weight, is critical to the success of any HSCT program. The principal objectives were to: 1. Develop a preliminary design of an innovative 2-D exhaust nozzle with the goal of meeting FAR 36 Stage III noise levels and providing high levels of cruise performance with a high specific thrust for a Mach 2.4 HSCT with a range of 5000 nmi and a payload of 51,900 lbm, 2. Employ advanced acoustic and aerodynamic codes during preliminary design, 3. Develop a comprehensive acoustic and aerodynamic database through scale-model testing of low-noise, high-performance, 2-D nozzle configurations, based on the preliminary design, and 4. Verify acoustic and aerodynamic predictions by means of scale-model testing. The results were: 1. The preliminary design of a 2-D, convergent/divergent suppressor ejector nozzle for a variable-cycle engine powered, Mach 2.4 HSCT was evolved, 2. Noise goals were predicted to be achievable for three takeoff scenarios, and 3. The impact of noise suppression, nozzle aerodynamic performance, and nozzle weight on HSCT takeoff gross weight was assessed.

  7. Early experiences in developing and managing the neuroscience gateway

    PubMed Central

    Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T.

    2015-01-01

    SUMMARY The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, with the complex user interfaces of these machines, and with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares the early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use this for computational neuroscience research using high performance computing at the back end. We also look at parallel scaling of some publicly available neuronal models and analyze the recent usage data of the neuroscience gateway. PMID:26523124

  8. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  9. A comparative assessment of GIS-based data mining models and a novel ensemble model in groundwater well potential mapping

    NASA Astrophysics Data System (ADS)

    Naghibi, Seyed Amir; Moghaddam, Davood Davoodi; Kalantar, Bahareh; Pradhan, Biswajeet; Kisi, Ozgur

    2017-05-01

    In recent years, the application of ensemble models has increased tremendously in various types of natural hazard assessment such as landslides and floods. However, the application of this kind of robust model in groundwater potential mapping is relatively new. This study applied four data mining algorithms including AdaBoost, Bagging, generalized additive model (GAM), and Naive Bayes (NB) models to map groundwater potential. Then, a novel frequency ratio data mining ensemble model (FREM) was introduced and evaluated. For this purpose, eleven groundwater conditioning factors (GCFs), including altitude, slope aspect, slope angle, plan curvature, stream power index (SPI), river density, distance from rivers, topographic wetness index (TWI), land use, normalized difference vegetation index (NDVI), and lithology were mapped. A total of 281 well locations with high potential were selected. Wells were randomly partitioned into two classes for training the models (70% or 197) and validating them (30% or 84). AdaBoost, Bagging, GAM, and NB algorithms were employed to produce groundwater potential maps (GPMs). The GPMs were categorized into potential classes using the natural break classification scheme. In the next stage, frequency ratio (FR) values were calculated for the outputs of the four aforementioned models and summed, and finally a GPM was produced using FREM. For validating the models, the area under the receiver operating characteristic (ROC) curve was calculated. The area under the curve for the prediction dataset was 94.8, 93.5, 92.6, 92.0, and 84.4% for the FREM, Bagging, AdaBoost, GAM, and NB models, respectively. The results indicated that FREM had the best performance among all the models. The better performance of the FREM model could be related to reduced overfitting and possible errors. Other models such as AdaBoost, Bagging, GAM, and NB also produced acceptable performance in groundwater modelling.
The GPMs produced in the current study may facilitate groundwater exploitation by determining high and very high groundwater potential zones.
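    The FREM combination step admits a compact sketch. Below, frequency ratio values are computed for each member model's potential classes from training well locations and then summed pixel by pixel; the two toy class maps and well positions are invented for illustration, whereas a real application would use the four trained models and the 197 training wells.

```python
def frequency_ratio(class_map, wells):
    """FR per class = (fraction of wells in class) / (fraction of area in class)."""
    n = len(class_map)
    area, hits = {}, {}
    for i, c in enumerate(class_map):
        area[c] = area.get(c, 0) + 1
        if i in wells:
            hits[c] = hits.get(c, 0) + 1
    return {c: (hits.get(c, 0) / len(wells)) / (area[c] / n) for c in area}

def frem(model_maps, wells):
    """Sum the FR value of each pixel's class across all member models."""
    n = len(model_maps[0])
    frs = [frequency_ratio(m, wells) for m in model_maps]
    return [sum(fr[m[i]] for fr, m in zip(frs, model_maps)) for i in range(n)]

# Toy example: two member GPMs over 8 pixels with 'low'/'high' classes,
# and training wells at pixels 5-7 (all illustrative).
maps = [
    ['low', 'low', 'low', 'low', 'high', 'high', 'high', 'high'],
    ['low', 'low', 'low', 'high', 'high', 'high', 'high', 'high'],
]
wells = {5, 6, 7}
scores = frem(maps, wells)
```

    Pixels whose classes concentrate the training wells across several member models accumulate higher summed FR values, which is the sense in which the ensemble rewards agreement among models.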

  10. Evaluation of CMIP5 twentieth century rainfall simulation over the equatorial East Africa

    NASA Astrophysics Data System (ADS)

    Ongoma, Victor; Chen, Haishan; Gao, Chujie

    2018-02-01

    This study assesses the performance of 22 Coupled Model Intercomparison Project Phase 5 (CMIP5) historical simulations of rainfall over East Africa (EA) against reanalyzed datasets during 1951-2005. The datasets were sourced from the Global Precipitation Climatology Centre (GPCC) and the Climate Research Unit (CRU). The metrics used to rank the CMIP5 Global Circulation Models (GCMs) based on their performance in reproducing the observed rainfall include correlation coefficient, standard deviation, bias, percentage bias, root mean square error, and trend. Performances of individual models vary widely. The overall performance of the models over EA is generally low. The models reproduce the observed bimodal rainfall over EA. However, the majority of them overestimate the October-December (OND) rainfall and underestimate the March-May (MAM) rainfall. The monthly (inter-annual) correlation between the models and the reanalyzed data is high (low). More than a third of the models show a positive bias in annual rainfall. High standard deviation in rainfall is recorded in the Lake Victoria Basin, central Kenya, and eastern Tanzania. A number of models reproduce the spatial standard deviation of rainfall better during the MAM season than during OND. The top eight models that reproduce rainfall over EA relatively well are as follows: CanESM2, CESM1-CAM5, CMCC-CESM, CNRM-CM5, CSIRO-Mk3-6-0, EC-EARTH, INMCM4, and MIROC5. Although these results form a fairly good basis for selecting GCMs for climate projections and downscaling over EA, it is evident that there is still need for critical improvement in rainfall-related processes in the models assessed. Therefore, climate users are advised to use the projections of rainfall from CMIP5 models over EA cautiously when making decisions on adaptation to or mitigation of climate change.
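    The ranking metrics named above are standard and easy to compute. The sketch below evaluates correlation, bias, percentage bias, and RMSE for one model series against a reference; the two twelve-month series are invented for illustration, not real GPCC/CRU or CMIP5 data.

```python
import math

def metrics(model, obs):
    """Skill metrics for ranking a GCM series against a reference series."""
    n = len(model)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs)) / n
    sm = math.sqrt(sum((m - mm) ** 2 for m in model) / n)
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    return {
        "corr": cov / (sm * so),                 # correlation coefficient
        "bias": mm - mo,                          # mean bias
        "pbias": 100.0 * (mm - mo) / mo,          # percentage bias
        "rmse": math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n),
    }

# Illustrative monthly rainfall climatologies (mm/day), invented values.
obs   = [1.2, 1.5, 3.8, 5.1, 3.0, 1.0, 0.8, 0.9, 1.4, 3.6, 4.9, 2.2]
model = [1.0, 1.4, 3.2, 4.5, 3.4, 1.3, 1.0, 1.1, 1.6, 4.4, 5.6, 2.8]
m = metrics(model, obs)
```

    A model can score a high correlation (it reproduces the bimodal cycle) while still carrying a positive bias, which is exactly the pattern the assessment reports for many of the CMIP5 members.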

  11. Nonlinear autoregressive neural networks with external inputs for forecasting of typhoon inundation level.

    PubMed

    Ouyang, Huei-Tau

    2017-08-01

    Accurate inundation level forecasting during typhoon invasion is crucial for organizing response actions such as the evacuation of people from areas that could potentially flood. This paper explores the ability of nonlinear autoregressive neural networks with exogenous inputs (NARX) to predict inundation levels induced by typhoons. Two types of NARX architecture were employed: series-parallel (NARX-S) and parallel (NARX-P). Based on cross-correlation analysis of rainfall and water-level data from historical typhoon records, 10 NARX models (five of each architecture type) were constructed. The forecasting ability of each model was assessed by considering the coefficient of efficiency (CE), relative time shift error (RTS), and peak water-level error (PE). The results revealed that high CE performance could be achieved by employing more model input variables. Comparisons of the two types of model demonstrated that the NARX-S models outperformed the NARX-P models in terms of CE and RTS, whereas both performed exceptionally in terms of PE, with no significant difference between them. The NARX-S and NARX-P models with the highest overall performance were identified and their predictions were compared with those of traditional ARX-based models. The NARX-S model outperformed the ARX-based models in all three indexes, whereas the NARX-P model exhibited comparable CE performance and superior RTS and PE performance.
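    The architectural difference between the two NARX variants can be made concrete. In the sketch below, the series-parallel form predicts one step ahead from measured past water levels, while the parallel form feeds its own predictions back for multi-step forecasting; a fixed linear map stands in for the trained network, and all weights and series are invented, not the paper's models or typhoon records.

```python
def narx_series_parallel(model, y_meas, u, lags=2):
    """NARX-S: one-step-ahead prediction from *measured* past outputs."""
    preds = []
    for t in range(lags, len(u)):
        x = y_meas[t - lags:t] + u[t - lags:t]
        preds.append(model(x))
    return preds

def narx_parallel(model, y_init, u, lags=2):
    """NARX-P: multi-step prediction feeding back the model's *own* outputs."""
    y = list(y_init[:lags])
    for t in range(lags, len(u)):
        x = y[t - lags:t] + u[t - lags:t]
        y.append(model(x))
    return y[lags:]

# Placeholder 'trained network': a fixed linear map over the lagged inputs.
weights = [0.5, 0.3, 0.1, 0.1]
model = lambda x: sum(w * v for w, v in zip(weights, x))

rain  = [0, 2, 8, 12, 6, 3, 1, 0]                   # exogenous input (rainfall)
level = [1.0, 1.0, 1.4, 2.6, 3.8, 3.5, 2.9, 2.2]    # measured inundation level
onestep   = narx_series_parallel(model, level, rain)
multistep = narx_parallel(model, level, rain)
```

    Because NARX-P compounds its own prediction errors over the horizon while NARX-S is re-anchored to measurements at every step, the reported advantage of NARX-S in CE and RTS is the behavior one would expect from the two architectures.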

  12. A study on high subsonic airfoil flows in relatively high Reynolds number by using OpenFOAM

    NASA Astrophysics Data System (ADS)

    Nakao, Shinichiro; Kashitani, Masashi; Miyaguni, Takeshi; Yamaguchi, Yutaka

    2014-04-01

    In the present study, numerical calculations of the flow field around the airfoil model are performed using OpenFOAM for high subsonic flows. The airfoil model is NACA 64A010; the maximum thickness is 10% of the chord length. The SonicFOAM and RhoCentralFOAM solvers are selected for the high subsonic flows. The grid has 158,000 points, and the Mach numbers are 0.277 and 0.569, respectively. The CFD data are compared with experimental data obtained previously in a cryogenic wind tunnel. The results are as follows. The pressure coefficient distribution on the model surface calculated by the SonicFOAM solver showed good agreement with the experimental data measured in the cryogenic wind tunnel, and the SonicFOAM results are suitable for quantitative comparison with the experimental data at low angles of attack.

  13. Structural modeling and optimization of a joined-wing configuration of a High-Altitude Long-Endurance (HALE) aircraft

    NASA Astrophysics Data System (ADS)

    Kaloyanova, Valentina B.

    Recent research trends have indicated an interest in High-Altitude, Long-Endurance (HALE) aircraft as a low-cost alternative to certain space missions, such as telecommunication relay, environmental sensing and military reconnaissance. HALE missions require a light vehicle flying at low speed in the stratosphere at altitudes of 60,000-80,000 ft, with a continuous loiter time of up to several days. To provide high lift and low drag at these high altitudes, where the air density is low, the wing area should be increased, i.e., high-aspect-ratio wings are necessary. Due to its large span and light weight, the wing structure is very flexible. To reduce the structural deformation and increase the total lift in a long-spanned wing, a sensorcraft model with a joined-wing configuration, proposed by AFRL, is employed. The joined wing encompasses a forward wing, which is swept back with a positive dihedral angle, and connected with an aft wing, which is swept forward. The joined-wing design combines structural strength, high aerodynamic performance and efficiency. As a first step in studying the joined-wing structural behavior, a 1-D approximation model is developed. The 1-D approximation is a simple structural model created using ANSYS BEAM4 elements to present a possible approach for the aerodynamics-structure coupling. The pressure loads from the aerodynamic analysis are integrated numerically to obtain the resultant aerodynamic forces and moments (spanwise lift and pitching moment distributions, acting at the aerodynamic center). These are applied on the 1-D structural model. A linear static analysis is performed under this equivalent load, and the deformed shape of the 1-D model is used to obtain the deformed shape of the actual 3-D joined wing, i.e. the deformed aerodynamic surface grid. To date, existing studies have examined only simplified structural models.
    In the present work, in addition to the simple 1-D beam model, a semi-monocoque structural model is developed. All stringers, skin panels, ribs and spars are represented by appropriate elements in a finite-element model. Also, the model accounts for the fuel weight and sensorcraft antennae housed within the wings. Linear and nonlinear static analyses under the aerodynamic load are performed. The stress distribution in the wing as well as its deformation is explored. Starting with a structural model with uniform mass distribution, a design optimization is performed to achieve a fully stressed design. As the joined-wing structure is prone to buckling, after the design optimization is complete, linear and nonlinear buckling analyses are performed to study the global joined-wing structural instability, the load magnitude at which it is expected to occur, and the buckling mode. The buckled shape of the aft wing (which is subjected to compression) is found to resemble that of a fixed-pinned column. The linear buckling analysis overestimates the buckling load. However, even the nonlinear buckling analysis results in a load factor higher than 3, i.e. the wing structure is buckling safe under its current loading conditions. As the region of the joint has a very complicated geometry that has adverse effects on the flow and stress behavior, an independent, more finely meshed model (submodel) of the joint region is generated and analyzed. A detailed discussion of the stress distribution obtained in the joint region via the submodeling technique is presented in this study as well. It is found that, compared with its structural response, the joint's adverse effects are much more pronounced in its aerodynamic response, so it is suggested that future studies optimize the joint geometry based on its aerodynamic performance.
As this design and analysis study is aimed towards developing a realistic structural representation of the innovative joined-wing configuration, in addition to the "global", or upper-level optimization, a local level design optimization is performed as well. At the lower (local) level detailed models of wing structural panels are used to compute more complex failure modes and to design the details that are not included in the upper (global) level model. Proper coordination between local skin-stringer panel models and the global joined-wing model prevents inconsistency between the upper- (global) and lower- (local) level design models. (Abstract shortened by UMI.)

  14. Selective Dry Etch for Defining Ohmic Contacts for High Performance ZnO TFTs

    DTIC Science & Technology

    2014-03-27

    scale, high-frequency ZnO thin-film transistors (TFTs) could be fabricated. Molybdenum, tantalum, titanium tungsten 10-90, and tungsten metallic contact... thin-film transistor layout utilized in the thesis research...

  15. Department of Defense In-House RDT and E Activities: Management Analysis Report for Fiscal Year 1993

    DTIC Science & Technology

    1994-11-01

    A worldwide unique lab because it houses a high-speed modeling and simulation system, a prototype... E Division, San Diego, CA: High Performance Computing Laboratory providing a wide range of advanced computer systems for the scientific investigation... Machines CM-200 and a 256-node Thinking Machines CM-5. The CM-5 is in a very large memory, high-performance (32 Gbytes, >40 GFlops) configuration,

  16. High pressure common rail injection system modeling and control.

    PubMed

    Wang, H P; Zheng, D; Tian, Y

    2016-07-01

    In this paper, the modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the high pressure common rail injection system, which contains three sub-models (a high pressure pump sub-model, a common rail sub-model and an injector sub-model), is a relatively complicated nonlinear system. The mathematical model is validated with Matlab and a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, called the Extended State Observer-based intelligent Proportional Integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO and a time delay estimation based iPI controller. Finally, to demonstrate the performance of the proposed controller, the ESO-based iPI controller is compared with a conventional PID controller and ADRC. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
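    The model-free idea behind an intelligent PI controller can be sketched briefly. The block below uses time-delay estimation (which the paper pairs with the ESO) to estimate the unknown lumped dynamics F in a local model dy/dt = F + alpha*u, cancels it, and closes the loop with a PI term; the plant equation, gains, and set point are invented placeholders, not the validated HPCRIS model.

```python
def simulate(alpha=2.0, kp=2.0, ki=0.5, dt=0.01, steps=2000):
    """Model-free intelligent PI: assume a local model dy/dt = F + alpha*u,
    estimate the unknown dynamics F by time-delay estimation, cancel it,
    and apply PI action on the tracking error."""
    y_ref = 100.0                      # rail-pressure set point (arbitrary units)
    y = y_prev = u = ie = 0.0
    for _ in range(steps):
        f_hat = (y - y_prev) / dt - alpha * u      # time-delay estimate of F
        e = y_ref - y
        ie += e * dt
        u = (-f_hat + kp * e + ki * ie) / alpha    # cancel F, then PI action
        # 'True' plant, unknown to the controller (illustrative only).
        dydt = -0.5 * y + alpha * u + 5.0
        y_prev, y = y, y + dydt * dt
    return y

final_pressure = simulate()
```

    Because the controller never uses the plant equation itself, only the one-step-delayed estimate of F, it tracks the set point without an explicit HPCRIS model, which is the appeal of the model-free approach.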

  17. A Hybrid Actuation System Demonstrating Significantly Enhanced Electromechanical Performance

    NASA Technical Reports Server (NTRS)

    Su, Ji; Xu, Tian-Bing; Zhang, Shujun; Shrout, Thomas R.; Zhang, Qiming

    2004-01-01

    A hybrid actuation system (HYBAS) has been developed that utilizes the combined electromechanical responses of an electroactive polymer (EAP), an electrostrictive copolymer, and an electroactive ceramic single crystal, PZN-PT. The system employs the contributions of the actuation elements cooperatively and exhibits significantly enhanced electromechanical performance compared to devices made of each constituent material, the electroactive polymer or the ceramic single crystal, individually. Theoretical modeling of the performance of the HYBAS is in good agreement with experimental observation. The consistency between the theoretical modeling and the experimental tests makes the design concept an effective route for the development of high-performance actuating devices for many applications. The theoretical modeling, the fabrication of the HYBAS and the initial experimental results will be presented and discussed.

  18. Boosting drug named entity recognition using an aggregate classifier.

    PubMed

    Korkontzelos, Ioannis; Piliouras, Dimitrios; Dowsey, Andrew W; Ananiadou, Sophia

    2015-10-01

    Drug named entity recognition (NER) is a critical step for complex biomedical NLP tasks such as the extraction of pharmacogenomic, pharmacodynamic and pharmacokinetic parameters. Large quantities of high quality training data are almost always a prerequisite for employing supervised machine-learning techniques to achieve high classification performance. However, the human labour needed to produce and maintain such resources is a significant limitation. In this study, we improve the performance of drug NER without relying exclusively on manual annotations. We perform drug NER using either a small gold-standard corpus (120 abstracts) or no corpus at all. In our approach, we develop a voting system to combine a number of heterogeneous models, based on dictionary knowledge, gold-standard corpora and silver annotations, to enhance performance. To improve recall, we employed genetic programming to evolve 11 regular-expression patterns that capture common drug suffixes and used them as an extra means for recognition. Our approach uses a dictionary of drug names, i.e. DrugBank, a small manually annotated corpus, i.e. the pharmacokinetic corpus, and a part of the UKPMC database, as raw biomedical text. Gold-standard and silver annotated data are used to train maximum entropy and multinomial logistic regression classifiers. Aggregating drug NER methods based on gold-standard annotations, dictionary knowledge and patterns improved the performance of models trained on gold-standard annotations alone, achieving a maximum F-score of 95%. In addition, combining models trained on silver annotations, dictionary knowledge and patterns is shown to achieve performance comparable to models trained exclusively on gold-standard data. The main reason appears to be the morphological similarities shared among drug names. We conclude that gold-standard data are not a hard requirement for drug NER. Combining heterogeneous models built on dictionary knowledge can achieve classification performance similar or comparable to that of the best-performing model trained on gold-standard annotations. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
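    The core of such a voting system can be sketched as a token-level majority vote over the tag sequences produced by the heterogeneous models. The tag names, the `min_votes` threshold, and the toy taggers below are illustrative assumptions, not the authors' actual configuration.

```python
from collections import Counter

def vote_tags(predictions, min_votes=2):
    """Majority-vote aggregation of token-level NER tags.

    predictions: list of tag sequences (one per model), all the same length.
    A token keeps the winning tag only if at least `min_votes` models agree;
    otherwise it falls back to the outside tag 'O'.
    """
    n_tokens = len(predictions[0])
    aggregated = []
    for i in range(n_tokens):
        counts = Counter(seq[i] for seq in predictions)
        tag, votes = counts.most_common(1)[0]
        aggregated.append(tag if votes >= min_votes else "O")
    return aggregated

# Three hypothetical taggers disagreeing on a four-token sentence
model_a = ["O", "DRUG", "DRUG", "O"]
model_b = ["O", "DRUG", "O", "O"]
model_c = ["DRUG", "DRUG", "DRUG", "O"]
print(vote_tags([model_a, model_b, model_c]))  # ['O', 'DRUG', 'DRUG', 'O']
```

    The same scheme extends naturally to the paper's mix of dictionary matchers, classifiers, and suffix patterns: each contributes one tag sequence per sentence.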

  19. Interactive Correlation Analysis and Visualization of Climate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, the number of different simulations performed with a climate model, or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is a need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.
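    The kind of correlation analysis described here typically means a per-grid-cell Pearson correlation between two fields over time. A minimal vectorized sketch, assuming toy synthetic fields in place of actual climate model output:

```python
import numpy as np

def pointwise_correlation(field_a, field_b):
    """Per-grid-cell Pearson correlation over the time axis.

    field_a, field_b: arrays of shape (time, lat, lon). Returns a
    (lat, lon) map of correlation coefficients, computed in one
    vectorized pass rather than looping over cells.
    """
    a = field_a - field_a.mean(axis=0)
    b = field_b - field_b.mean(axis=0)
    cov = (a * b).mean(axis=0)
    return cov / (a.std(axis=0) * b.std(axis=0))

# Hypothetical toy fields: 120 monthly steps on a 3x4 grid
rng = np.random.default_rng(0)
t = rng.normal(size=(120, 3, 4))
corr = pointwise_correlation(t, 2.0 * t + 0.1)  # perfectly linearly related
print(np.allclose(corr, 1.0))  # True
```

    The vectorized form matters at scale: the same expression applies unchanged whether the grid has a dozen cells or millions, which is where the parallel rendering and analysis methods of the project come in.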

  20. Cable testing for Fermilab's high field magnets using small racetrack coils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feher, S.; Ambrosio, G.; Andreev, N.

    As part of the High Field Magnet program at Fermilab, simple magnets have been designed utilizing small racetrack coils, based on a sound mechanical structure and the bladder technique developed by LBNL. Two of these magnets have been built in order to test Nb3Sn cables used in cos-theta dipole models. The powder-in-tube strand based cable exhibited excellent performance: it reached its critical current limit within 14 quenches. Modified jelly roll strand based cable performance was limited by magnetic instabilities at low fields, as were previously tested dipole models that used similar cable.

  1. Distributed multi-criteria model evaluation and spatial association analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Laura; Pfister, Stephan

    2015-04-01

    Model performance, if evaluated, is often communicated by a single indicator and at an aggregated level; however, this does not embrace the trade-offs between different indicators and the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations. Its time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration, which was available as a continuous raster based on remote sensing. The model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), as well as 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Conditions that might lead to poor model performance include aridity, very flat or steep relief, snowfall and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure which combines Pearson's correlation coefficient and Moran's index for spatial autocorrelation. In order to assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, and 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to the efficiency criterion.
    The model performed much better in the humid Eastern region than in the arid Western region, which was confirmed by the high spatial association with the aridity index (ratio of mean annual precipitation to mean annual potential evapotranspiration). This association remained significant when controlling for slope, which showed the second-highest spatial association. In line with these findings, the overall model efficiency of the entire Mississippi watershed appeared better when weighted by mean observed river discharge. Furthermore, the model received the highest rating with regard to PBIAS and was judged worst when considering NSE as the most comprehensive indicator. No universal performance indicator exists that considers all aspects of a hydrograph; therefore, sound model evaluation must take into account multiple criteria. Since model efficiency varies in space, and this variability is masked by aggregated ratings, spatially explicit model goodness should be communicated as standard practice, at least as a measure of the spatial variability of indicators. Furthermore, transparent documentation of the evaluation procedure, including the weighting used to aggregate model performance, is crucial but often lacking in published research. Finally, the high spatial association between model performance and aridity highlights the need to improve modelling schemes for arid conditions as a priority over other aspects that might weaken model goodness.
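    The first three evaluation criteria named above have closed-form definitions that are easy to state in code. The sketch below follows the standard formulas (note that the PBIAS sign convention varies between references; here positive values indicate underestimation), and the toy discharge series is an illustrative assumption:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean predictor."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation by the model."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

# Toy monthly discharge observations; a perfect simulation for illustration
obs = np.array([10.0, 12.0, 8.0, 15.0, 11.0])
print(nse(obs, obs), pbias(obs, obs), rsr(obs, obs))  # 1.0 0.0 0.0
```

    Computing all three per sub-watershed, as the study does, makes the trade-offs visible: a model can score well on PBIAS (small net volume error) while scoring poorly on NSE (poor timing of the hydrograph).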

  2. Physical fitness predicts technical-tactical and time-motion profile in simulated Judo and Brazilian Jiu-Jitsu matches

    PubMed Central

    Gentil, Paulo; Bueno, João C.A.; Follmer, Bruno; Marques, Vitor A.; Del Vecchio, Fabrício B.

    2018-01-01

    Background Among combat sports, Judo and Brazilian Jiu-Jitsu (BJJ) place elevated physical fitness demands on athletes through high-intensity intermittent efforts. However, information regarding how metabolic and neuromuscular physical fitness is associated with technical-tactical performance in Judo and BJJ fights is not available. This study aimed to relate indicators of physical fitness with combat performance variables in Judo and BJJ. Methods The sample consisted of Judo (n = 16) and BJJ (n = 24) male athletes. At the first meeting, the physical tests were applied and, at the second, simulated fights were performed for later notational analysis. Results The main findings indicate: (i) high reproducibility of the proposed instrument and protocol used for notational analysis on a mobile device; (ii) differences in the technical-tactical and time-motion patterns between modalities; (iii) performance-related variables are different in Judo and BJJ; and (iv) regression models based on metabolic fitness variables may account for up to 53% of the variance in technical-tactical and/or time-motion variables in Judo and up to 31% in BJJ, whereas neuromuscular fitness models can reach values of up to 44 and 73% of prediction in Judo and BJJ, respectively. When all components are combined, they can explain up to 90% of high intensity actions in Judo. Discussion In conclusion, performance prediction models in simulated combat indicate that anaerobic, aerobic and neuromuscular fitness variables contribute to explaining time-motion variables associated with high intensity and technical-tactical variables in Judo and BJJ fights. PMID:29844991

  3. Methodology for comparing worldwide performance of diverse weight-constrained high energy laser systems

    NASA Astrophysics Data System (ADS)

    Bartell, Richard J.; Perram, Glen P.; Fiorino, Steven T.; Long, Scott N.; Houle, Marken J.; Rice, Christopher A.; Manning, Zachary P.; Bunch, Dustin W.; Krizo, Matthew J.; Gravley, Liesebet E.

    2005-06-01

    The Air Force Institute of Technology's Center for Directed Energy has developed a software model, the High Energy Laser End-to-End Operational Simulation (HELEEOS), under the sponsorship of the High Energy Laser Joint Technology Office (JTO), to facilitate worldwide comparisons of the expected performance of a diverse range of weight-constrained high energy laser system types across a broad range of expected engagement scenarios. HELEEOS has been designed to meet the JTO's goals of supporting a broad range of analyses applicable to the operational requirements of all the military services, of constraining weapon effectiveness estimates through accurate engineering performance assessments (allowing its use as an investment strategy tool), and of establishing trust among military leaders. HELEEOS is anchored to respected wave optics codes, and all significant degradation effects, including thermal blooming and optical turbulence, are represented in the model. The model features operationally oriented performance metrics, e.g. the dwell time required to achieve a prescribed probability of kill, and effective range. Key features of HELEEOS include estimation of the level of uncertainty in the calculated Pk and generation of interactive nomographs that allow the user to further explore a desired parameter space. Worldwide analyses are enabled at five wavelengths via recently available databases capturing climatological, seasonal, diurnal, and geographical spatial-temporal variability in atmospheric parameters, including molecular and aerosol absorption and scattering profiles and optical turbulence strength. Examples are provided of the impact of uncertainty in weight-power relationships, coupled with operating condition variability, on the results of performance comparisons between chemical and solid state lasers.

  4. A high-throughput screening approach to discovering good forms of biologically inspired visual representation.

    PubMed

    Pinto, Nicolas; Doukhan, David; DiCarlo, James J; Cox, David D

    2009-11-01

    While many models of biological object recognition share a common set of "broad-stroke" properties, the performance of any one model depends strongly on the choice of parameters in a particular instantiation of that model--e.g., the number of units per layer, the size of pooling kernels, exponents in normalization operations, etc. Since the number of such parameters (explicit or implicit) is typically large and the computational cost of evaluating one particular parameter set is high, the space of possible model instantiations goes largely unexplored. Thus, when a model fails to approach the abilities of biological visual systems, we are left uncertain whether this failure is because we are missing a fundamental idea or because the correct "parts" have not been tuned correctly, assembled at sufficient scale, or provided with enough training. Here, we present a high-throughput approach to the exploration of such parameter sets, leveraging recent advances in stream processing hardware (high-end NVIDIA graphic cards and the PlayStation 3's IBM Cell Processor). In analogy to high-throughput screening approaches in molecular biology and genetics, we explored thousands of potential network architectures and parameter instantiations, screening those that show promising object recognition performance for further analysis. We show that this approach can yield significant, reproducible gains in performance across an array of basic object recognition tasks, consistently outperforming a variety of state-of-the-art purpose-built vision systems from the literature. As the scale of available computational power continues to expand, we argue that this approach has the potential to greatly accelerate progress in both artificial vision and our understanding of the computational underpinning of biological vision.
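    The screening strategy described here amounts to sampling many random parameter instantiations, scoring each, and keeping the top performers for further analysis. A minimal sketch, assuming a toy discrete parameter space and a made-up scoring function standing in for an actual object recognition benchmark:

```python
import random

def random_screen(param_space, evaluate, n_candidates=1000, keep=5, seed=0):
    """High-throughput-style screening: sample many random parameter
    instantiations, score each, and keep the most promising few.

    param_space: dict mapping parameter name -> list of allowed values.
    evaluate: callable(params) -> score, where higher is better.
    """
    rng = random.Random(seed)
    scored = []
    for _ in range(n_candidates):
        params = {name: rng.choice(values) for name, values in param_space.items()}
        scored.append((evaluate(params), params))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:keep]

# Hypothetical toy space standing in for layer sizes, pooling kernels, etc.
space = {"units": [32, 64, 128, 256], "pool": [2, 3, 4], "exponent": [1, 2]}
best = random_screen(space,
                     lambda p: p["units"] / 256 - abs(p["pool"] - 3) * 0.1,
                     n_candidates=200)
print(best[0][1]["units"])  # 256
```

    In the paper the evaluate step is the expensive part, which is why the authors offload it to stream processing hardware; the surrounding screening loop stays this simple.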

  5. Conditional High-Order Boltzmann Machines for Supervised Relation Learning.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

    2017-09-01

    Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machines and their variants have shown great potential in learning various types of data relation in a range of tasks. However, most of these models are learned in an unsupervised way, i.e., without using relation class labels, which makes them not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relations, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels, and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using the joint likelihood to provide good parameter initialization, and then fine-tuning them using the conditional likelihood to enhance the discriminative ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance.
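    The parameter-reduction step, factorizing a third-order interaction tensor into per-mode matrices, can be sketched directly. The dimensions, random weights, and variable names below are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

def factored_score(x, y, Wx, Wy, Wl):
    """Relation-label scores from a factorized three-way interaction.

    A full third-order tensor W[i, j, k] over inputs x, y and labels k is
    approximated as W[i, j, k] = sum_f Wx[i, f] * Wy[j, f] * Wl[k, f], so
    the parameter count grows linearly in each dimension instead of
    multiplicatively.
    """
    fx = x @ Wx                 # project each input onto the factors
    fy = y @ Wy
    return (fx * fy) @ Wl.T     # one score per relation label

rng = np.random.default_rng(1)
dim, n_factors, n_labels = 5, 4, 2
Wx = rng.normal(size=(dim, n_factors))
Wy = rng.normal(size=(dim, n_factors))
Wl = rng.normal(size=(n_labels, n_factors))
x, y = rng.normal(size=dim), rng.normal(size=dim)
s = factored_score(x, y, Wx, Wy, Wl)
print(s.shape)  # (2,)
```

    The factorized score is exactly equal to contracting the pairwise inputs against the reconstructed rank-limited tensor, but it never materializes the dim × dim × n_labels array.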

  6. Aerosol Impacts on Cirrus Clouds and High-Power Laser Transmission: A Combined Satellite Observation and Modeling Approach

    DTIC Science & Technology

    2009-03-22

    indirect effect (AIE) index determined from the slope of the fitted linear equation involving cloud particle size vs. aerosol optical depth is about a... raindrop. The model simulations were performed for a 48-hour period, starting at 00Z on 29 March 2007, about 20 hours prior to ABL test flight time...

  7. Paradigm of pretest risk stratification before coronary computed tomography.

    PubMed

    Jensen, Jesper Møller; Ovrehus, Kristian A; Nielsen, Lene H; Jensen, Jesper K; Larsen, Henrik M; Nørgaard, Bjarne L

    2009-01-01

    The optimal method of determining the pretest risk of coronary artery disease as a patient selection tool before coronary multidetector computed tomography (MDCT) is unknown. We investigated the ability of 3 different clinical risk scores to predict the outcome of coronary MDCT. This was a retrospective study of 551 patients consecutively referred for coronary MDCT on a suspicion of coronary artery disease. The Diamond-Forrester, Duke, and Morise risk models were used to predict coronary artery stenosis (>50%) as assessed by coronary MDCT. The models were compared by receiver operating characteristic analysis. The distribution of low-, intermediate-, and high-risk persons was established and compared for each of the 3 risk models. Overall, all risk prediction models performed equally well. However, the Duke risk model classified low-risk patients more correctly than did the other models (P < 0.01). In patients without coronary artery calcification (CAC), the predictive value of the Duke risk model was superior to the other risk models (P < 0.05). Currently available risk prediction models seem to perform better in patients without CAC. Between the risk prediction models, there was a significant discrepancy in the distribution of patients at low, intermediate, or high risk (P < 0.01). The 3 risk prediction models perform equally well, although the Duke risk score may have advantages in subsets of patients. The choice of risk prediction model affects the referral pattern to MDCT. Copyright (c) 2009 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
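    Receiver operating characteristic comparisons of this kind reduce, for each risk score, to an AUC statistic. A small sketch using the rank (Mann-Whitney) formulation; the score and outcome vectors are invented for illustration, not data from the study:

```python
def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability that
    a randomly chosen positive case outscores a randomly chosen negative
    one, with ties counting one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented pretest risk scores vs. observed >50% stenosis on MDCT
risk_score = [0.9, 0.8, 0.4, 0.3, 0.2]
stenosis   = [1,   1,   0,   1,   0]
print(roc_auc(risk_score, stenosis))  # 0.8333... (5 of 6 pairs ranked correctly)
```

    Comparing three models then means computing this statistic for each score vector against the same outcome vector (formal comparison of correlated AUCs additionally needs a paired test such as DeLong's).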

  8. An approach to secure weather and climate models against hardware faults

    NASA Astrophysics Data System (ADS)

    Düben, Peter; Dawson, Andrew

    2017-04-01

    Enabling Earth System models to run efficiently on future supercomputers is a serious challenge for model development. Many publications study efficient parallelisation to allow better scaling of performance on an increasing number of computing cores. However, one of the most alarming threats for weather and climate predictions on future high performance computing architectures is widely ignored: the presence of hardware faults that will frequently hit large applications as we approach exascale supercomputing. Changes in the structure of weather and climate models that would allow them to be resilient against hardware faults are hardly discussed in the model development community. We present an approach to secure the dynamical core of weather and climate models against hardware faults using a backup system that stores coarse resolution copies of prognostic variables. Frequent checks of the model fields on the backup grid allow the detection of severe hardware faults, and prognostic variables that are changed by hardware faults on the model grid can be restored from the backup grid to continue model simulations with no significant delay. To justify the approach, we perform simulations with a C-grid shallow water model in the presence of frequent hardware faults. As long as the backup system is used, simulations do not crash and a high level of model quality can be maintained. The overhead due to the backup system is reasonable and additional storage requirements are small. Runtime is increased by only 13% for the shallow water model.
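    The backup-grid idea can be illustrated in one dimension: keep a block-averaged copy of a prognostic field, compare the field's restriction against it at each check interval, and overwrite corrupted cells from the coarse copy. The grid size, blocking factor, tolerance, and fault model below are illustrative assumptions, not the paper's C-grid shallow water configuration:

```python
import numpy as np

def restrict(field, factor=4):
    """Coarse backup copy of a 1-D field: block averages over `factor` cells."""
    n = field.size // factor
    return field[:n * factor].reshape(n, factor).mean(axis=1)

def check_and_restore(field, backup, factor=4, tol=1.0):
    """Compare the field's restriction with the stored backup; overwrite any
    block whose mismatch exceeds `tol` with the coarse backup value."""
    mismatch = np.abs(restrict(field, factor) - backup) > tol
    repaired = field.copy()
    for i in np.where(mismatch)[0]:
        repaired[i * factor:(i + 1) * factor] = backup[i]
    return repaired, bool(mismatch.any())

h = np.linspace(1.0, 2.0, 16)   # prognostic variable on the fine model grid
backup = restrict(h)            # coarse copy stored as the backup system
h[5] = 1e9                      # simulated severe hardware fault (bit flip)
h, detected = check_and_restore(h, backup)
print(detected, float(h.max()) < 10.0)  # True True
```

    The restored block is only a coarse approximation of the lost fine-grid values, which matches the paper's trade-off: a small, cheap backup buys resilience at the cost of briefly reduced local accuracy.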

  9. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
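    The VIF screening step mentioned above has a simple definition: regress each explanatory variable on the others and invert the unexplained variance. A minimal sketch with synthetic data (the threshold of 5-10 is a common rule of thumb, not a value from this study):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the design matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept). Values above roughly
    5-10 are a common screening threshold for harmful multicollinearity.
    """
    X = np.asarray(X, float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(2)
a = rng.normal(size=100)
b = a + 0.05 * rng.normal(size=100)     # nearly collinear with a
c = rng.normal(size=100)                # independent of both
v = vif(np.column_stack([a, b, c]))
print(v)  # roughly [large, large, ~1]
```

    Screening then means dropping (or combining) columns whose VIF exceeds the chosen threshold before refitting the OLS model.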

  10. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    NASA Astrophysics Data System (ADS)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales that cold regions hydrological processes operate at to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely, the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR with very low computational cost was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  11. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4 × 0.4 × 2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Hybrid ray-FDTD model for the simulation of the ultrasonic inspection of CFRP parts

    NASA Astrophysics Data System (ADS)

    Jezzine, Karim; Ségur, Damien; Ecault, Romain; Dominguez, Nicolas; Calmon, Pierre

    2017-02-01

    Carbon Fiber Reinforced Polymers (CFRP) are commonly used in structural parts in the aeronautic industry to reduce the weight of aircraft while maintaining high mechanical performance. Simulation of the ultrasonic inspection of these parts must contend with the highly heterogeneous and anisotropic characteristics of these materials. To model the propagation of ultrasound in these composite structures, we propose two complementary approaches. The first one is based on a ray model predicting the propagation of the ultrasound in an anisotropic effective medium obtained from a homogenization of the material. The ray model is designed to deal with possibly curved parts and the resulting continuously varying anisotropic orientations. The second approach is based on the coupling of the ray model and a finite difference scheme in the time domain (FDTD). The ray model handles the ultrasonic propagation between the transducer and the FDTD computation zone that surrounds the composite part. In this way, the computational efficiency is preserved and the ultrasound scattering by the composite structure can be predicted. Inspections of flat or curved composite panels, as well as stiffeners, can be performed. The models have been implemented in the CIVA software platform and compared to experiments. We also present an application of the simulation to the performance demonstration of the adaptive inspection technique SAUL (Surface Adaptive Ultrasound).

  13. Research and development of energy-efficient appliance motor-compressors. Volume IV. Production demonstration and field test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, M.G.; Sauber, R.S.

    Two models of a high-efficiency compressor were manufactured in a pilot production run. These compressors were for low back-pressure applications. While based on a production compressor, there were many changes that required production process changes; some were performed within our company and others were made by outside vendors. The compressors were used in top-mount refrigerator-freezers and sold in normal distribution channels. Forty units were placed in residences for a one-year field test. Additional compressors were built so that a life test program could be performed. The results of the field test reveal a 27.0% improvement in energy consumption for the 18 ft³ high-efficiency model and a 15.6% improvement for the 21 ft³ high-efficiency model as compared to the standard production unit.

  14. Oral diseases associated with condition-specific oral health-related quality of life and school performance of Thai primary school children: A hierarchical approach.

    PubMed

    Kaewkamnerdpong, Issarapong; Krisdapong, Sudaduang

    2018-06-01

    To assess the hierarchical associations between children's school performance and condition-specific (CS) oral health-related quality of life (OHRQoL), school absence, oral status, sociodemographic and economic status (SDES) and social capital; and to investigate the associations between CS OHRQoL and related oral status, adjusting for SDES and social capital. Data on 925 sixth grade children in Sakaeo province, Thailand, were collected through oral examinations for dental caries and oral hygiene, social capital questionnaires, OHRQoL interviews using the Child-Oral Impacts on Daily Performances index, parental self-administered questionnaires and school documents. A hierarchical conceptual framework was developed, and independent variables were hierarchically entered into multiple logistic models for CS OHRQoL and linear regression models for school performance. After adjusting for SDES and social capital, children with high DMFT or DT scores were significantly threefold more likely to have CS impacts attributed to dental caries. However, poor oral hygiene was not significantly associated with CS impacts attributed to gingival disease. High DMFT scores were significantly associated with lower school performance, whereas high Simplified Oral Hygiene Index scores were not. The final model showed that CS impacts attributed to dental caries and school absence accounted for the association between DMFT score and school performance. Dental caries was associated with CS impacts on OHRQoL, and exerted its effect on school performance through the CS impacts and school absence. There was no association between oral hygiene and CS impacts on OHRQoL or school performance. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Integration of car-body flexibility into train-track coupling system dynamics analysis

    NASA Astrophysics Data System (ADS)

    Ling, Liang; Zhang, Qing; Xiao, Xinbiao; Wen, Zefeng; Jin, Xuesong

    2018-04-01

    The resonance vibration of flexible car-bodies greatly affects the dynamic performance of high-speed trains. In this paper, we report a three-dimensional train-track model that captures the flexible vibration features of high-speed train carriages based on the flexible multi-body dynamics approach. The flexible car-body is modelled using both the finite element method (FEM) and the multi-body dynamics (MBD) approach: the rigid motions are obtained from MBD theory, and the structural deformation is calculated with the FEM and the modal superposition method. The proposed model is applied to investigate the influence of flexible car-body vibration on the dynamic performance of train-track systems. The dynamic performance of a high-speed train running on a slab track, including the car-body vibration behaviour, ride comfort, and running safety, is calculated with both rigid and flexible car-body models and compared in detail. The results show that car-body flexibility not only significantly affects the vibration behaviour and ride comfort of rail carriages, but can also have an important influence on the running safety of trains. The rigid car-body model underestimates the vibration level and ride comfort of rail vehicles, and ignoring carriage torsional flexibility in the curving safety evaluation of trains is conservative.
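
The hybrid FEM/MBD representation described above can be sketched as a rigid-body displacement plus a modal-superposition reconstruction of the elastic deformation. The mode shapes and modal coordinates below are hypothetical illustrations, not the paper's car-body model:

```python
import numpy as np

# Sketch of the hybrid rigid/flexible car-body representation:
# total displacement = rigid-body motion (from MBD) plus elastic
# deformation reconstructed by modal superposition of FEM mode
# shapes. All numbers are hypothetical.

x = np.linspace(0.0, 25.0, 101)         # points along a 25 m car body

# Two hypothetical vertical bending mode shapes (FEM output)
phi = np.vstack([
    np.sin(np.pi * x / x[-1]),          # first bending mode
    np.sin(2 * np.pi * x / x[-1]),      # second bending mode
])

def carbody_displacement(u_rigid, q):
    """u(x, t) = u_rigid(x, t) + sum_i phi_i(x) * q_i(t)."""
    return u_rigid + q @ phi

u_rigid = 0.002 * np.ones_like(x)       # 2 mm rigid bounce
q = np.array([0.001, 0.0005])           # modal coordinates at time t
u_total = carbody_displacement(u_rigid, q)
print(u_total.shape)                    # (101,)
```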

  16. High-performance wireless powering for peripheral nerve neuromodulation systems.

    PubMed

    Tanabe, Yuji; Ho, John S; Liu, Jiayin; Liao, Song-Yan; Zhen, Zhe; Hsu, Stephanie; Shuto, Chika; Zhu, Zi-Yi; Ma, Andrew; Vassos, Christopher; Chen, Peter; Tse, Hung Fat; Poon, Ada S Y

    2017-01-01

    Neuromodulation of peripheral nerves with bioelectronic devices is a promising approach for treating a wide range of disorders. Wireless powering could enable long-term operation of these devices, but achieving high performance for miniaturized and deeply placed devices remains a technological challenge. We report the miniaturized integration of a wireless powering system in a soft neuromodulation device (15 mm length, 2.7 mm diameter) and demonstrate high performance (about 10%) during in vivo wireless stimulation of the vagus nerve in a porcine animal model. The increased performance is enabled by the generation of a focused and circularly polarized field that enhances efficiency and provides immunity to polarization misalignment. These performance characteristics establish the clinical potential of wireless powering for emerging therapies based on neuromodulation.
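
The immunity to polarization misalignment can be illustrated with a textbook simplification: a linearly polarized source couples in proportion to the squared cosine of the misalignment angle, while an ideal circularly polarized field couples equally at any rotation about the propagation axis. This toy model is an assumption for illustration, not the paper's field analysis:

```python
import numpy as np

# Toy comparison of coupled power vs. misalignment angle.
# Linear polarization: Malus-law-like cos^2 falloff, with total
# dropout at 90 degrees. Ideal circular polarization: half the
# power at every rotation angle, i.e. misalignment-immune.

theta = np.deg2rad(np.arange(0, 91, 15))     # misalignment angles

linear_power = np.cos(theta) ** 2            # angle-dependent
circular_power = np.full_like(theta, 0.5)    # angle-independent

for t, lp, cp in zip(np.rad2deg(theta), linear_power, circular_power):
    print(f"{t:5.1f} deg  linear={lp:.3f}  circular={cp:.3f}")
```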

  17. High-performance wireless powering for peripheral nerve neuromodulation systems

    PubMed Central

    Liu, Jiayin; Liao, Song-Yan; Zhen, Zhe; Hsu, Stephanie; Shuto, Chika; Zhu, Zi-Yi; Ma, Andrew; Vassos, Christopher; Chen, Peter; Tse, Hung Fat; Poon, Ada S. Y.

    2017-01-01

    Neuromodulation of peripheral nerves with bioelectronic devices is a promising approach for treating a wide range of disorders. Wireless powering could enable long-term operation of these devices, but achieving high performance for miniaturized and deeply placed devices remains a technological challenge. We report the miniaturized integration of a wireless powering system in a soft neuromodulation device (15 mm length, 2.7 mm diameter) and demonstrate high performance (about 10%) during in vivo wireless stimulation of the vagus nerve in a porcine animal model. The increased performance is enabled by the generation of a focused and circularly polarized field that enhances efficiency and provides immunity to polarization misalignment. These performance characteristics establish the clinical potential of wireless powering for emerging therapies based on neuromodulation. PMID:29065141

  18. MrBayes tgMC3++: A High Performance and Resource-Efficient GPU-Oriented Phylogenetic Analysis Method.

    PubMed

    Ling, Cheng; Hamada, Tsuyoshi; Gao, Jingyang; Zhao, Guoguang; Sun, Donghong; Shi, Weifeng

    2016-01-01

    MrBayes is a widely used phylogenetic inference tool that combines empirical evolutionary models with Bayesian statistics. However, the computational cost of likelihood estimation is very high, resulting in undesirably long execution times. Although a number of multi-threaded optimizations have been proposed to speed up MrBayes, bottlenecks remain that severely limit the GPU thread-level parallelism of likelihood estimations. This study proposes a high-performance, resource-efficient method for GPU-oriented parallelization of likelihood estimations. Rather than relying on empirical programming, the proposed decomposition storage model implements high-performance data transfers implicitly. In terms of performance improvement, a speedup factor of up to 178 can be achieved in the analysis of simulated datasets with four Tesla K40 cards. Compared with the other publicly available GPU-oriented versions of MrBayes, the tgMC3++ method (proposed herein) outperforms the tgMC3 (v1.0), nMC3 (v2.1.1) and oMC3 (v1.00) methods by speedup factors of up to 1.6, 1.9 and 2.9, respectively. Moreover, tgMC3++ supports more evolutionary models and gamma categories, which previous GPU-oriented methods do not.
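
The per-site, per-branch likelihood computations that these GPU methods parallelize reduce, for the simplest substitution model, to evaluating a closed-form transition probability matrix. The sketch below uses the Jukes-Cantor (JC69) model purely for illustration; MrBayes itself supports far richer models:

```python
import numpy as np

# Sketch of the per-branch transition probabilities at the core of
# phylogenetic likelihood estimation, shown for the Jukes-Cantor
# (JC69) nucleotide model, which has a closed form. GPU methods
# like those above parallelize this kind of computation across
# alignment sites.

def jc69_transition_matrix(t):
    """4x4 nucleotide transition probability matrix P(t) under JC69,
    where t is the branch length in expected substitutions per site."""
    e = np.exp(-4.0 * t / 3.0)
    p_same = 0.25 + 0.75 * e   # probability of observing the same base
    p_diff = 0.25 - 0.25 * e   # probability of each of the 3 other bases
    P = np.full((4, 4), p_diff)
    np.fill_diagonal(P, p_same)
    return P

P = jc69_transition_matrix(0.1)
print(P.sum(axis=1))   # each row sums to 1
```

As the branch length grows, every entry approaches 1/4, the stationary base frequency of the JC69 model.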

  19. High-performance computing on GPUs for resistivity logging of oil and gas wells

    NASA Astrophysics Data System (ADS)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs of oil and gas wells using heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving the system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The computation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data for realistic models.
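
The Cholesky factor-and-solve step at the core of the forward problem can be sketched on the CPU with NumPy; the paper's implementation performs these steps on the GPU with CUDA libraries, and the small random symmetric positive definite matrix below is a stand-in for a real FEM stiffness matrix:

```python
import numpy as np

# CPU-side sketch of solving A x = b by Cholesky decomposition:
# factor the symmetric positive definite matrix A as L L^T, then
# solve by forward and backward substitution. The matrix here is
# a random SPD stand-in, not a real finite-element system.

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)

L = np.linalg.cholesky(A)            # A = L @ L.T
y = np.linalg.solve(L, b)            # forward substitution: L y = b
x = np.linalg.solve(L.T, y)          # backward substitution: L^T x = y

print(np.allclose(A @ x, b))         # True
```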

  20. The Pitch-Matching Ability of High School Choral Students: A Justification for Continued Direct Instruction

    ERIC Educational Resources Information Center

    Riegle, Aaron M.; Gerrity, Kevin W.

    2011-01-01

    The purpose of this study was to determine the pitch-matching ability of high school choral students. Years of piano experience, middle school performance experience, and model were considered as variables that might affect pitch-matching ability. Gender of participants was also considered when identifying the effectiveness of each model.…
