Sample records for scale high performance

  1. Performance Under Stress Conditions During Multidisciplinary Team Immersive Pediatric Simulations.

    PubMed

    Ghazali, Daniel Aiham; Darmian-Rafei, Ivan; Ragot, Stéphanie; Oriot, Denis

    2018-06-01

    The primary objective was to determine whether technical and nontechnical performance were in some way correlated during immersive simulation. Performance was measured among French Emergency Medical Service workers at an individual and a team level. Secondary objectives were to assess the stress response through collection of physiologic markers (salivary cortisol, heart rate, the proportion derived by dividing the number of interval differences of successive normal-to-normal intervals > 50 ms by the total number of normal-to-normal intervals [pNN50], low- and high-frequency ratio) and affective data (self-reported stress, confidence, and dissatisfaction), and to correlate them with performance scores. Prospective observational study performed as part of a larger randomized controlled trial. Medical simulation laboratory. Forty-eight participants distributed among 12 Emergency Medical System teams. Individual and team performance measures and individual stress response were assessed during a high-fidelity simulation. Technical performance was assessed by the intraosseous access performance scale and the Team Average Performance Assessment Scale; nontechnical performance by the Behavioral Assessment Tool for leaders and the Clinical Teamwork Scale. Stress markers (salivary cortisol, heart rate, pNN50, low- and high-frequency ratio) were measured both before (T1) and after the session (T2). Participants self-reported stress before and during the simulation, self-confidence, and perception of dissatisfaction with team performance, rated on a scale from 0 to 10. Scores (out of 100 total points, mean ± SD) were: intraosseous access 65.6 ± 14.4, Team Average Performance Assessment Scale 44.6 ± 18.1, Behavioral Assessment Tool 49.5 ± 22.0, and Clinical Teamwork Scale 50.3 ± 18.5. There was a strong correlation between the Behavioral Assessment Tool and the Clinical Teamwork Scale (Rho = 0.97; p = 0.001), and between the Behavioral Assessment Tool and the Team Average Performance Assessment Scale (Rho = 0.73; p = 0.02). From T1 to T2, all stress markers (salivary cortisol, heart rate, pNN50, and low- and high-frequency ratio) displayed an increase in stress level (p < 0.001 for all). Self-confidence was positively correlated with performance (Clinical Teamwork Scale: Rho = 0.47; p = 0.001; Team Average Performance Assessment Scale: Rho = 0.46; p = 0.001). Dissatisfaction was negatively correlated with performance (Rho = -0.49; p = 0.0008 with Behavioral Assessment Tool; Rho = -0.47; p = 0.001 with Clinical Teamwork Scale; Rho = -0.51; p = 0.0004 with Team Average Performance Assessment Scale). No correlation between stress response and performance was found. There was a positive correlation between leader (Behavioral Assessment Tool) and team (Clinical Teamwork Scale and Team Average Performance Assessment Scale) performances. These performance scores were positively correlated with self-confidence and negatively correlated with dissatisfaction.
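
    As a worked illustration of the rank-correlation analysis reported above, the sketch below computes a Spearman Rho with SciPy. The data are invented stand-ins for the study's self-confidence and Clinical Teamwork Scale scores, purely to show the mechanics:

    ```python
    # Hypothetical sketch of the rank-correlation analysis reported above
    # (variable names and data are illustrative, not the study's dataset).
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_participants = 48
    self_confidence = rng.uniform(0, 10, n_participants)  # 0-10 self-report
    cts_score = 40 + 3.0 * self_confidence + rng.normal(0, 8, n_participants)  # /100

    rho, p = spearmanr(self_confidence, cts_score)
    print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
    ```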

  2. Evaluation of high fidelity patient simulator in assessment of performance of anaesthetists.

    PubMed

    Weller, J M; Bloch, M; Young, S; Maze, M; Oyesola, S; Wyner, J; Dob, D; Haire, K; Durbridge, J; Walker, T; Newble, D

    2003-01-01

    There is increasing emphasis on performance-based assessment of clinical competence. The High Fidelity Patient Simulator (HPS) may be useful for assessment of clinical practice in anaesthesia, but needs formal evaluation of validity, reliability, feasibility and effect on learning. We set out to assess the reliability of a global rating scale for scoring simulator performance in crisis management. Using a global rating scale, three judges independently rated videotapes of anaesthetists in simulated crises in the operating theatre. Five anaesthetists then independently rated subsets of these videotapes. There was good agreement between raters for medical management, behavioural attributes and overall performance. Agreement was high for both the initial judges and the five additional raters. Using a global scale to assess simulator performance, we found good inter-rater reliability for scoring performance in a crisis. We estimate that two judges should provide a reliable assessment. High fidelity simulation should be studied further for assessing clinical performance.

  3. Children of psychotic mothers. An evaluation of 1-year-olds on a test of object permanence.

    PubMed

    Gamer, E; Gallant, D; Grunebaum, H

    1976-03-01

    Fifteen 1-year-old infants at high risk for later psychopathologic behavior were tested on the Piaget Object Scale. Their performance was compared to that of a matched group of controls at low risk. Results indicate a trend in the high-risk group toward lowered object scale performance. Affective styles were found to vary between the groups. The high-risk infants, particularly those with low scores on the object scale, demonstrated more intense anxiety.

  4. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the ultrahigh volume fraction regime is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual runtimes from numerical tests.
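
    The speed-up and scalability figures described above follow from the standard strong-scaling definitions: speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p. A minimal sketch with invented timings:

    ```python
    # Strong-scaling bookkeeping; the timings are made up for illustration and
    # the paper's own runtime-prediction model is not reproduced here.
    timings = {1: 5120.0, 8: 680.0, 64: 95.0, 512: 14.0}  # processes -> seconds

    t1 = timings[1]
    for p, tp in sorted(timings.items()):
        speedup = t1 / tp
        efficiency = speedup / p
        print(f"p={p:4d}  speedup={speedup:7.1f}  efficiency={efficiency:.2f}")
    ```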

  5. High-Stakes Accountability: Student Anxiety and Large-Scale Testing

    ERIC Educational Resources Information Center

    von der Embse, Nathaniel P.; Witmer, Sara E.

    2014-01-01

    This study examined the relationship between student anxiety about high-stakes testing and their subsequent test performance. The FRIEDBEN Test Anxiety Scale was administered to 1,134 11th-grade students, and data were subsequently collected on their statewide assessment performance. Test anxiety was a significant predictor of test performance…

  6. Controlled crystallization and granulation of nano-scale β-Ni(OH)2 cathode materials for high power Ni-MH batteries

    NASA Astrophysics Data System (ADS)

    He, Xiangming; Li, Jianjun; Cheng, Hongwei; Jiang, Changyin; Wan, Chunrong

    A novel synthesis route of controlled crystallization and granulation was attempted to prepare nano-scale β-Ni(OH)2 cathode materials for high power Ni-MH batteries. Nano-scale β-Ni(OH)2 and Co(OH)2 with a diameter of 20 nm were prepared by controlled crystallization, mixed by ball milling, and granulated into spherical grains of about 5 μm by spray-drying granulation. Both the addition of nano-scale Co(OH)2 and the granulation significantly enhanced the electrochemical performance of nano-scale Ni(OH)2. XRD and TEM analyses showed a large number of defects in the crystal lattice of the as-prepared nano-scale Ni(OH)2, and DTA-TG analysis showed that it had both a lower decomposition temperature and a higher decomposition reaction rate than conventional micro-scale Ni(OH)2, indicating lower thermal stability and pointing to higher electrochemical performance. The granulated grains of nano-scale Ni(OH)2 mixed with nano-scale Co(OH)2 at Co/Ni = 1/20 presented the highest specific capacity, reaching the theoretical value of 289 mAh g⁻¹ at 1 C, and also exhibited much improved electrochemical performance at discharge rates up to 10 C. Granulated grains of nano-scale β-Ni(OH)2 mixed with nano-scale Co(OH)2 are a promising cathode active material for high power Ni-MH batteries.

  7. Label reading, numeracy and food & nutrition involvement.

    PubMed

    Mulders, Maria Dgh; Corneille, O; Klein, O

    2018-06-07

    The purpose of this study was to investigate objective performance on a nutrition label comprehension task, and the influence of numeracy and food-related involvement on this performance level. A pilot study (n = 45) was run to prepare the scales in French. For the main study (n = 101), participants provided demographic information and answered the nutrition label survey, the short numeracy scale and two different food-related involvement scales (i.e. the food involvement scale and the nutrition involvement scale). Both studies were conducted online, and consent was obtained from all participants. Participants answered correctly only two-thirds of the nutrition label task items. Numeracy and food involvement scores were positively correlated with performance on this task. Finally, food involvement interacted with numeracy. Specifically, people scoring low in numeracy performed generally more poorly on the task, but if they had high food involvement scores, their performance increased. This suggests that high food-related motivation may compensate for poor numeracy skills when dealing with nutrition labels. Copyright © 2018. Published by Elsevier Ltd.

  8. A new deadlock resolution protocol and message matching algorithm for the extreme-scale simulator

    DOE PAGES

    Engelmann, Christian; Naughton, III, Thomas J.

    2016-03-22

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different HPC architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement. The simulation overhead for running the NAS Parallel Benchmark suite was reduced from 102% to 0% for the embarrassingly parallel (EP) benchmark and from 1,020% to 238% for the conjugate gradient (CG) benchmark. xSim offers a highly accurate simulation mode for better tracking of injected MPI process failures. Furthermore, with highly accurate simulation, the overhead was reduced from 3,332% to 204% for EP and from 37,511% to 13,808% for CG.
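
    One plausible reading of overhead percentages like those quoted above is relative slowdown versus native execution (the paper may define them differently); the arithmetic is simply:

    ```python
    # overhead = (simulated_runtime / native_runtime - 1) * 100
    # Runtimes below are invented purely to illustrate the arithmetic.
    def overhead_pct(native_s: float, simulated_s: float) -> float:
        return (simulated_s / native_s - 1.0) * 100.0

    print(f"{overhead_pct(100.0, 202.0):.0f}%")  # -> 102%
    print(f"{overhead_pct(100.0, 100.0):.0f}%")  # -> 0%
    ```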

  9. Visual/motion cue mismatch in a coordinated roll maneuver

    NASA Technical Reports Server (NTRS)

    Shirachi, D. K.; Shirley, R. S.

    1981-01-01

    The effects of bandwidth differences between visual and motion cueing systems on pilot performance in a coordinated roll task were investigated. Acceptable visual and motion cue configurations and the effects of reduced motion cue scaling on pilot performance were studied to determine the scale-reduction threshold at which pilot performance differed significantly from full-scale performance. It is concluded that: (1) the presence or absence of high frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) attenuating motion scaling while holding other display dynamic characteristics constant affects pilot performance.

  10. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O'Neill, B. J.; Nolting, C.; Edmon, P.; Donnert, J. M. F.; Jones, T. W.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
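
    As an illustration of the one-sided (MPI-RMA) communication style this design is built on, here is a minimal mpi4py sketch in which one rank writes directly into another rank's exposed memory with no matching receive. WOMBAT itself is not Python; this only gestures at the RMA pattern:

    ```python
    # One-sided put via MPI-RMA. Run with: mpiexec -n 2 python rma_demo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    buf = np.zeros(4, dtype='d')
    win = MPI.Win.Create(buf, comm=comm)   # expose local memory for RMA

    if rank == 1:
        win.Lock(0)                        # passive-target epoch on rank 0
        payload = np.arange(4, dtype='d')
        win.Put(payload, target_rank=0)    # one-sided put, no receive posted
        win.Unlock(0)

    comm.Barrier()                         # make the put visible before reading
    if rank == 0:
        print("rank 0 buffer:", buf)
    win.Free()
    ```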

  11. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models at the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  12. Scale effect challenges in urban hydrology highlighted with a Fully Distributed Model and High-resolution rainfall data

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2017-04-01

    Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing interest is devoted to the development of fully distributed, grid-based models, following the growth of computational capabilities and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their impact on modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed, physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high resolution X band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results demonstrate the scale effect challenges in urban hydrology modeling. The fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change with the observation pixel size. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model performance. Results were analyzed at three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
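
    Fractal scale-dependency analyses of the kind described above are commonly done by box counting. A self-contained sketch on a synthetic binary occupancy grid (standing in for the impervious-area GIS layer, which is not available here):

    ```python
    # Box-counting dimension estimate on a synthetic binary grid.
    import numpy as np

    def box_count(grid: np.ndarray, box: int) -> int:
        """Number of box x box cells containing at least one occupied pixel."""
        n = grid.shape[0] // box
        trimmed = grid[:n * box, :n * box]
        blocks = trimmed.reshape(n, box, n, box).any(axis=(1, 3))
        return int(blocks.sum())

    rng = np.random.default_rng(1)
    grid = rng.random((512, 512)) < 0.2          # synthetic occupancy map

    sizes = np.array([1, 2, 4, 8, 16, 32])
    counts = np.array([box_count(grid, s) for s in sizes])
    # Fractal dimension = -slope of log(count) vs log(box size)
    dim = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    print(f"estimated box-counting dimension: {dim:.2f}")
    ```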

  13. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.

  14. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE PAGES

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    2017-01-16

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
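
    Flux balance analysis itself is a linear program: maximize c^T v subject to S v = 0 and lb ≤ v ≤ ub. DistributedFBA.jl is a Julia package; the toy Python sketch below only mirrors the underlying math on a three-reaction network:

    ```python
    # FBA as a linear program, solved with SciPy on a toy network.
    import numpy as np
    from scipy.optimize import linprog

    # One metabolite A, three reactions:
    # R1: uptake -> A,  R2: A -> biomass,  R3: A -> byproduct
    S = np.array([[1.0, -1.0, -1.0]])        # stoichiometric matrix
    bounds = [(0, 10), (0, None), (0, 5)]    # flux bounds per reaction
    c = np.array([0.0, 1.0, 0.0])            # objective: maximize biomass flux v2

    # linprog minimizes, so negate the objective to maximize c^T v
    res = linprog(-c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
    print("optimal fluxes:", res.x, " biomass flux:", -res.fun)
    ```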

  15. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  16. High Host Specificity in Encarsia diaspidicola (Hymenoptera: Aphelinidae), a Biological Control Candidate Against the White Peach Scale in Hawaii

    USDA-ARS?s Scientific Manuscript database

    Pre-introductory host specificity tests were performed with Encarsia diaspidicola, a biological control candidate against the invasive white peach scale, Pseudaulacaspis pentagona. False oleander scale, P. cockerelli, coconut scale, Aspidiotus destructor, cycad scale, Aulacaspis yasumatsui, greenh...

  17. Ultra-high-Q phononic resonators on-chip at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Kharel, Prashanta; Chu, Yiwen; Power, Michael; Renninger, William H.; Schoelkopf, Robert J.; Rakich, Peter T.

    2018-06-01

    Long-lived, high-frequency phonons are valuable for applications ranging from optomechanics to emerging quantum systems. For scientific as well as technological impact, we seek high-performance oscillators that offer a path toward chip-scale integration. Confocal bulk acoustic wave resonators have demonstrated an immense potential to support long-lived phonon modes in crystalline media at cryogenic temperatures. So far, these devices have been macroscopic with cm-scale dimensions. However, as we push these oscillators to high frequencies, we have an opportunity to radically reduce the footprint as a basis for classical and emerging quantum technologies. In this paper, we present novel design principles and simple microfabrication techniques to create high performance chip-scale confocal bulk acoustic wave resonators in a wide array of crystalline materials. We tailor the acoustic modes of such resonators to efficiently couple to light, permitting us to perform a non-invasive laser-based phonon spectroscopy. Using this technique, we demonstrate an acoustic Q-factor of 2.8 × 10⁷ (6.5 × 10⁶) for chip-scale resonators operating at 12.7 GHz (37.8 GHz) in crystalline z-cut quartz (x-cut silicon) at cryogenic temperatures.
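
    The quoted Q-factors imply phonon lifetimes through the standard relation Q = ωτ, i.e. τ = Q/(2πf); a quick back-of-envelope check:

    ```python
    # Phonon lifetime implied by a quality factor: tau = Q / (2 * pi * f).
    import math

    for Q, f_hz in [(2.8e7, 12.7e9), (6.5e6, 37.8e9)]:
        tau = Q / (2 * math.pi * f_hz)
        print(f"Q = {Q:.1e} at {f_hz / 1e9:.1f} GHz  ->  tau ~ {tau * 1e6:.0f} us")
    ```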

  18. Wafer scale fabrication of carbon nanotube thin film transistors with high yield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tian, Boyuan; Liang, Xuelei; Yan, Qiuping

    Carbon nanotube thin film transistors (CNT-TFTs) are promising candidates for future high performance and low cost macro-electronics. However, most of the reported CNT-TFTs are fabricated in small quantities on relatively small substrates. The yield of large scale fabrication and the performance uniformity of devices on large substrates must be improved before CNT-TFTs reach real products. In this paper, 25 200 devices with various geometries (channel width and channel length) were fabricated on 4-in. rigid and flexible substrates. Almost 100% device yield was obtained on a rigid substrate with high output current (>8 μA/μm), high on/off current ratio (>10⁵), and high mobility (>30 cm²/V·s). More importantly, uniform performance over the 4-in. area was achieved, and the fabrication process can be scaled up. The results give us more confidence in the real application of CNT-TFT technology in the near future.

  19. A high performance lithium–sulfur battery enabled by a fish-scale porous carbon/sulfur composite and symmetric fluorinated diethoxyethane electrolyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Mengyao; Su, ChiCheung; He, Meinan

    A high performance lithium–sulfur (Li–S) battery comprising a symmetric fluorinated diethoxyethane electrolyte coupled with a fish-scale porous carbon/S composite electrode was demonstrated. 1,2-Bis(1,1,2,2-tetrafluoroethoxy)ethane (TFEE) was first studied as a new electrolyte solvent for Li–S chemistry. When co-mixed with 1,3-dioxolane (DOL), the DOL/TFEE electrolyte suppressed the polysulfide dissolution and shuttling reaction. Lastly, when coupled with a fish-scale porous carbon/S composite electrode, the Li–S cell exhibited a significantly high capacity retention of 99.5% per cycle for 100 cycles, which is far superior to the reported numerous systems.

  20. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely generation and assembly of system matrices, solution of systems of equations, and calculation of eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (e.g., models of the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.
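
    The eigenvalue task mentioned above is, for vibration analysis, the generalized symmetric eigenproblem Kx = λMx with λ = ω². A serial toy two-degree-of-freedom example (not the paper's parallel solver):

    ```python
    # Natural frequencies of a 2-DOF spring-mass chain via K x = lambda M x.
    import numpy as np
    from scipy.linalg import eigh

    k, m = 1000.0, 2.0
    K = np.array([[2 * k, -k], [-k, k]])   # stiffness matrix
    M = np.diag([m, m])                    # lumped mass matrix

    eigvals, eigvecs = eigh(K, M)          # generalized symmetric eigenproblem
    freqs_hz = np.sqrt(eigvals) / (2 * np.pi)
    print("natural frequencies [Hz]:", np.round(freqs_hz, 2))
    ```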

  1. A high performance lithium–sulfur battery enabled by a fish-scale porous carbon/sulfur composite and symmetric fluorinated diethoxyethane electrolyte

    DOE PAGES

    Gao, Mengyao; Su, ChiCheung; He, Meinan; ...

    2017-03-07

    A high performance lithium–sulfur (Li–S) battery comprising a symmetric fluorinated diethoxyethane electrolyte coupled with a fish-scale porous carbon/S composite electrode was demonstrated. 1,2-Bis(1,1,2,2-tetrafluoroethoxy)ethane (TFEE) was first studied as a new electrolyte solvent for Li–S chemistry. When co-mixed with 1,3-dioxolane (DOL), the DOL/TFEE electrolyte suppressed the polysulfide dissolution and shuttling reaction. Lastly, when coupled with a fish-scale porous carbon/S composite electrode, the Li–S cell exhibited a significantly high capacity retention of 99.5% per cycle for 100 cycles, which is far superior to the reported numerous systems.

  2. Heat Transfer Enhancement in High Performance Heat Sink Channels by Autonomous, Aero-Elastic Reed Fluttering

    NASA Astrophysics Data System (ADS)

    Jha, Sourabh; Crittenden, Thomas; Glezer, Ari

    2016-11-01

    Heat transport within high aspect ratio, rectangular mm-scale channels that model segments of a high-performance, air-cooled heat sink is enhanced by the formation of unsteady small-scale vortical motions induced by autonomous, aeroelastic fluttering of cantilevered planar thin-film reeds. The flow mechanisms and scaling of the interactions between the reed and the channel flow are explored to overcome the limits of forced convection heat transport from air-side heat exchangers. High-resolution PIV measurements in a testbed model show that undulations of the reed's surface lead to the formation and advection of vorticity concentrations, and to alternate shedding of spanwise CW and CCW vortices. These vortices scale with the reed motion amplitude, and ultimately result in motions of decreasing scales and enhanced dissipation that are reminiscent of a turbulent flow. The vorticity shedding leads to a strong enhancement in heat transfer that increases with the Reynolds number of the base flow (e.g., the channel's thermal coefficient of performance is enhanced by 2.4-fold and 9-fold for base flow Re = 4,000 and 17,400, respectively, with corresponding decreases of 50% and 77% in the required channel flow rates). This is demonstrated in heat sinks for improving the thermal performance of low-Re thermoelectric power plant air-cooled condensers, where the global air-side pressure losses can be significantly reduced by lowering the required air volume flow rate at a given heat flux and surface temperature. Supported by AFOSR and NSF-EPRI.

  3. Semiconductor Materials for High Frequency Solid State Sources.

    DTIC Science & Technology

    1985-01-18

    saturation on near and submicron-scale device performance. The motivation for this is as follows: Presently, individual semiconductors are accepted or rejected as candidate device materials based, in ... basis of all FET scaling procedures; and is a major motivating factor for going to submicron structures. This scaling was tested with the 4 following ...

  4. RAID-2: Design and implementation of a large scale disk array controller

    NASA Technical Reports Server (NTRS)

    Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.

    1992-01-01

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
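
    The XOR calculation engine mentioned above computes the parity that lets a RAID array rebuild any single failed disk: parity is the XOR of all data blocks, and a lost block is the XOR of the parity with the survivors. A minimal sketch of that arithmetic (not the controller's implementation):

    ```python
    # RAID-style XOR parity and single-disk reconstruction.
    import numpy as np

    stripes = [np.frombuffer(b"disk0blk", dtype=np.uint8),
               np.frombuffer(b"disk1blk", dtype=np.uint8),
               np.frombuffer(b"disk2blk", dtype=np.uint8)]

    parity = np.bitwise_xor.reduce(stripes)

    # Simulate losing disk 1 and rebuilding it from parity + survivors
    rebuilt = np.bitwise_xor.reduce([parity, stripes[0], stripes[2]])
    print(rebuilt.tobytes())  # b'disk1blk'
    ```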

  5. A Short History of Performance Assessment: Lessons Learned.

    ERIC Educational Resources Information Center

    Madaus, George F.; O'Dwyer, Laura M.

    1999-01-01

    Places performance assessment in the context of high-stakes uses, describes underlying technologies, and outlines the history of performance testing from 210 B.C.E. to the present. Historical issues of fairness, efficiency, cost, and infrastructure influence contemporary efforts to use performance assessments in large-scale, high-stakes testing…

  6. Comparison of sub-scaled to full-scaled aircrafts in simulation environment for air traffic management

    NASA Astrophysics Data System (ADS)

    Elbakary, Mohamed I.; Iftekharuddin, Khan M.; Papelis, Yiannis; Newman, Brett

    2017-05-01

    Air Traffic Management (ATM) concepts are commonly tested in simulation to obtain preliminary results and validate the concepts before adoption. Recently, researchers have found that simulation alone is not enough because of the complexity associated with ATM concepts. In other words, full-scale tests must eventually take place to provide compelling performance evidence before adopting full implementation. Testing with full-scale aircraft is a high-cost approach that yields high-confidence results, whereas simulation provides a low-risk, low-cost approach with reduced confidence in the results. One possible approach to increase the confidence in the results while simultaneously reducing risk and cost is to use unmanned sub-scale aircraft for testing new ATM concepts. This paper presents simulation results for unmanned sub-scale aircraft implementing ATM concepts, compared to full-scale aircraft. The simulation results show that the performance of the sub-scale aircraft is quite comparable to that of the full-scale aircraft, which validates the use of sub-scale aircraft in testing new ATM concepts.

  7. High Performance Nanofiltration Membrane for Effective Removal of Perfluoroalkyl Substances at High Water Recovery.

    PubMed

    Boo, Chanhee; Wang, Yunkun; Zucker, Ines; Choo, Youngwoo; Osuji, Chinedum O; Elimelech, Menachem

    2018-05-31

    We demonstrate the fabrication of a loose, negatively charged nanofiltration (NF) membrane with tailored selectivity for the removal of perfluoroalkyl substances with reduced scaling potential. A selective polyamide layer was fabricated on top of a polyethersulfone support via interfacial polymerization of trimesoyl chloride and a mixture of piperazine and bipiperidine. Incorporating high molecular weight bipiperidine during the interfacial polymerization enables the formation of a loose, nanoporous selective layer structure. The fabricated NF membrane possessed a negative surface charge and had a pore diameter of ~1.2 nm, much larger than a widely used commercial NF membrane (i.e., NF270 with pore diameter of ~0.8 nm). We evaluated the performance of the fabricated NF membrane for the rejection of different salts (i.e., NaCl, CaCl2, and Na2SO4) and perfluorooctanoic acid (PFOA). The fabricated NF membrane exhibited a high retention of PFOA (~90%) while allowing high passage of scale-forming cations (i.e., calcium). We further performed gypsum scaling experiments to demonstrate lower scaling potential of the fabricated loose porous NF membrane compared to NF membranes having a dense selective layer under solution conditions simulating high water recovery. Our results demonstrate that properly designed NF membranes are a critical component of a high recovery NF system, which provide an efficient and sustainable solution for remediation of groundwater contaminated with perfluoroalkyl substances.
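
    Figures such as the ~90% PFOA retention above follow from the standard observed-rejection definition R = 1 − C_permeate/C_feed; the concentrations below are illustrative only, not the study's measurements:

    ```python
    # Observed rejection: R = 1 - C_permeate / C_feed (illustrative values).
    def rejection(c_feed: float, c_permeate: float) -> float:
        return 1.0 - c_permeate / c_feed

    print(f"PFOA:    {rejection(100.0, 10.0):.0%}")   # high retention
    print(f"calcium: {rejection(100.0, 80.0):.0%}")   # high passage, low scaling
    ```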

  8. Factor- and Item-Level Analyses of the 38-Item Activities Scale for Kids-Performance

    ERIC Educational Resources Information Center

    Bagley, Anita M.; Gorton, George E.; Bjornson, Kristie; Bevans, Katherine; Stout, Jean L.; Narayanan, Unni; Tucker, Carole A.

    2011-01-01

    Aim: Children and adolescents highly value their ability to participate in relevant daily life and recreational activities. The Activities Scale for Kids-performance (ASKp) instrument measures the frequency of performance of 30 common childhood activities, and has been shown to be valid and reliable. A revised and expanded 38-item ASKp (ASKp38)…

  9. Multi-scale gyrokinetic simulations of an Alcator C-Mod, ELM-y H-mode plasma

    NASA Astrophysics Data System (ADS)

    Howard, N. T.; Holland, C.; White, A. E.; Greenwald, M.; Rodriguez-Fernandez, P.; Candy, J.; Creely, A. J.

    2018-01-01

    High fidelity, multi-scale gyrokinetic simulations capable of capturing both ion-scale (kθρs ∼ O(1.0)) and electron-scale (kθρe ∼ O(1.0)) turbulence were performed in the core of an Alcator C-Mod ELM-y H-mode discharge which exhibits reactor-relevant characteristics. These simulations, performed with all experimental inputs and a realistic ion-to-electron mass ratio ((mi/me)^(1/2) = 60.0), provide insight into the physics fidelity that may be needed for accurate simulation of the core of fusion reactor discharges. Three multi-scale simulations and a series of separate ion- and electron-scale simulations performed using the GYRO code (Candy and Waltz 2003 J. Comput. Phys. 186 545) are presented. As with earlier multi-scale results in L-mode conditions (Howard et al 2016 Nucl. Fusion 56 014004), both ion-scale and multi-scale simulation results are compared with experimentally inferred ion and electron heat fluxes, as well as the measured values of electron incremental thermal diffusivities, which are indicative of the experimental electron temperature profile stiffness. Consistent with the L-mode results, cross-scale coupling is found to play an important role in the simulation of these H-mode conditions. Extremely stiff ion-scale transport is observed in these high-performance conditions, which is shown to likely play an important role in the reproduction of measurements of perturbative transport. These results provide important insight into the role of multi-scale plasma turbulence in the core of reactor-relevant plasmas and establish important constraints on the fidelity of models needed for predictive simulations.

  10. Conceptual design and analysis of a dynamic scale model of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.

    1994-01-01

    This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, conducting scale factor trade studies, and performing design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive design, performance and cost of a SSF dynamic scale model. Four scale model options were estimated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale size is recommended based on the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower fidelity dynamic similarity scaling can be used for non-critical components.

  11. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    PubMed

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need to run different process conditions for multiple process parameters, process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system, the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between the ambr and manufacturing scales, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study, and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data to set action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.
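
    Matching volumetric sparge rates across scales, as done above, means holding vvm (gas volume per liquid volume per minute) constant, so the required gas flow is Q = vvm × V. The setpoint below is an assumed value, not the study's:

    ```python
    # Gas flow needed at each scale to hold vvm constant (assumed setpoint).
    vvm = 0.02  # L gas / L culture / min (illustrative)

    for name, volume_l in [("ambr15", 0.015), ("bench", 5.0), ("manufacturing", 15000.0)]:
        q = vvm * volume_l
        print(f"{name:>13}: {q * 1000:12.2f} mL/min gas flow")
    ```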

  12. A HUMAN AUTOMATION INTERACTION CONCEPT FOR A SMALL MODULAR REACTOR CONTROL ROOM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Blanc, Katya; Spielman, Zach; Hill, Rachael

    Many advanced nuclear power plant (NPP) designs incorporate higher degrees of automation than the existing fleet of NPPs. Automation is being introduced or proposed in NPPs through a wide variety of systems and technologies, such as advanced displays, computer-based procedures, advanced alarm systems, and computerized operator support systems. Additionally, many new reactor concepts, both full scale and small modular reactors, are proposing increased automation and reduced staffing as part of their concept of operations. However, research consistently finds that there is a fundamental tradeoff between system performance with increased automation and reduced human performance. There is a need to address the question of how to achieve high performance and efficiency of high levels of automation without degrading human performance. One example of a new NPP concept that will utilize greater degrees of automation is the SMR concept from NuScale Power. The NuScale Power design requires 12 modular units to be operated in one single control room, which leads to a need for higher degrees of automation in the control room. Idaho National Laboratory (INL) researchers and NuScale Power human factors and operations staff are working on a collaborative project to address the human performance challenges of increased automation and to determine the principles that lead to optimal performance in highly automated systems. This paper will describe this concept in detail and will describe an experimental test of the concept. The benefits and challenges of the approach will be discussed.

  13. A FRAMEWORK FOR FINE-SCALE COMPUTATIONAL FLUID DYNAMICS AIR QUALITY MODELING AND ANALYSIS

    EPA Science Inventory

    Fine-scale Computational Fluid Dynamics (CFD) simulation of pollutant concentrations within roadway and building microenvironments is feasible using high performance computing. Unlike currently used regulatory air quality models, fine-scale CFD simulations are able to account rig...

  14. Lamellae spatial distribution modulates fracture behavior and toughness of african pangolin scales

    DOE PAGES

    Chon, Michael J.; Daly, Matthew; Wang, Bin; ...

    2017-06-10

    Pangolin scales form a durable armor whose hierarchical structure offers an avenue towards high performance bio-inspired materials design. In this paper, the fracture resistance of African pangolin scales is examined using single edge crack three-point bend fracture testing in order to understand toughening mechanisms arising from the structures of natural mammalian armors. In these mechanical tests, the influence of material orientation and hydration level are examined. The fracture experiments reveal an exceptional fracture resistance due to crack deflection induced by the internal spatial orientation of lamellae. An order of magnitude increase in the measured fracture resistance due to scale hydration, reaching up to ~25 kJ/m², was measured. Post-mortem analysis of the fracture samples was performed using a combination of optical and electron microscopy, and X-ray computerized tomography. Interestingly, the crack profile morphologies are observed to follow paths outlined by the keratinous lamellae structure of the pangolin scale. Most notably, the inherent structure of pangolin scales offers a pathway for crack deflection and fracture toughening. Finally, the results of this study are expected to be useful as design principles for high performance biomimetic applications.

  15. Lamellae spatial distribution modulates fracture behavior and toughness of african pangolin scales.

    PubMed

    Chon, Michael J; Daly, Matthew; Wang, Bin; Xiao, Xianghui; Zaheri, Alireza; Meyers, Marc A; Espinosa, Horacio D

    2017-12-01

    Pangolin scales form a durable armor whose hierarchical structure offers an avenue towards high performance bio-inspired materials design. In this study, the fracture resistance of African pangolin scales is examined using single edge crack three-point bend fracture testing in order to understand toughening mechanisms arising from the structures of natural mammalian armors. In these mechanical tests, the influence of material orientation and hydration level are examined. The fracture experiments reveal an exceptional fracture resistance due to crack deflection induced by the internal spatial orientation of lamellae. An order of magnitude increase in the measured fracture resistance due to scale hydration, reaching up to ~25 kJ/m², was measured. Post-mortem analysis of the fracture samples was performed using a combination of optical and electron microscopy, and X-ray computerized tomography. Interestingly, the crack profile morphologies are observed to follow paths outlined by the keratinous lamellae structure of the pangolin scale. Most notably, the inherent structure of pangolin scales offers a pathway for crack deflection and fracture toughening. The results of this study are expected to be useful as design principles for high performance biomimetic applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
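
    For context on how a bend-test fracture resistance like ~25 kJ/m² is typically estimated: one standard (ASTM E1820-style) estimate for a single-edge-notch bend specimen is J = ηA/(Bb₀) with η = 2, where A is the area under the load-displacement curve, B the thickness, and b₀ the uncracked ligament. All specimen numbers below are invented for illustration; this is not the paper's data reduction:

    ```python
    # SENB J-estimate: J = 2 * A / (B * b0), illustrative numbers only.
    def j_senb(area_j: float, thickness_m: float, ligament_m: float) -> float:
        return 2.0 * area_j / (thickness_m * ligament_m)

    A = 0.006      # J, area under load-displacement curve (assumed)
    B = 0.0008     # m, specimen thickness (assumed)
    b0 = 0.0006    # m, uncracked ligament (assumed)
    print(f"J ~ {j_senb(A, B, b0) / 1000:.0f} kJ/m^2")
    ```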

  16. Lamellae spatial distribution modulates fracture behavior and toughness of african pangolin scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chon, Michael J.; Daly, Matthew; Wang, Bin

    Pangolin scales form a durable armor whose hierarchical structure offers an avenue towards high performance bio-inspired materials design. In this paper, the fracture resistance of African pangolin scales is examined using single edge crack three-point bend fracture testing in order to understand toughening mechanisms arising from the structures of natural mammalian armors. In these mechanical tests, the influence of material orientation and hydration level are examined. The fracture experiments reveal an exceptional fracture resistance due to crack deflection induced by the internal spatial orientation of lamellae. An order of magnitude increase in the measured fracture resistance due to scale hydration, reaching up to ~25 kJ/m², was measured. Post-mortem analysis of the fracture samples was performed using a combination of optical and electron microscopy, and X-ray computerized tomography. Interestingly, the crack profile morphologies are observed to follow paths outlined by the keratinous lamellae structure of the pangolin scale. Most notably, the inherent structure of pangolin scales offers a pathway for crack deflection and fracture toughening. Finally, the results of this study are expected to be useful as design principles for high performance biomimetic applications.

  17. Development of combinatorial chemistry methods for coatings: high-throughput adhesion evaluation and scale-up of combinatorial leads.

    PubMed

    Potyrailo, Radislav A; Chisholm, Bret J; Morris, William G; Cawse, James N; Flanagan, William P; Hassib, Lamyaa; Molaison, Chris A; Ezbiansky, Karin; Medford, George; Reitz, Hariklia

    2003-01-01

    Coupling of combinatorial chemistry methods with high-throughput (HT) performance testing and measurements of resulting properties has provided a powerful set of tools for the 10-fold accelerated discovery of new high-performance coating materials for automotive applications. Our approach replaces labor-intensive steps with automated systems for evaluation of adhesion of 8 x 6 arrays of coating elements that are discretely deposited on a single 9 x 12 cm plastic substrate. Performance of coatings is evaluated with respect to their resistance to adhesion loss, because this parameter is one of the primary considerations in end-use automotive applications. Our HT adhesion evaluation provides previously unavailable capabilities: high speed and reproducibility of testing through robotic automation, an expanded range of testable coating types through a coating tagging strategy, and improved quantitation through high signal-to-noise automatic imaging. Upon testing, the coatings undergo changes that are impossible to predict quantitatively using existing knowledge. Using our HT methodology, we have developed several coating leads. The HT screening results for the best coating compositions have been validated at the traditional scales of coating formulation and adhesion loss testing. These validation results have confirmed the superb performance of combinatorially developed coatings over conventional coatings at the traditional scale.

  18. Effects of Rotor Blade Scaling on High-Pressure Turbine Unsteady Loading

    NASA Astrophysics Data System (ADS)

    Lastiwka, Derek; Chang, Dongil; Tavoularis, Stavros

    2013-03-01

    The present work is a study of the effects of rotor blade scaling of a single-stage high pressure turbine on the time-averaged turbine performance and on parameters that influence vibratory stresses on the rotor blades and stator vanes. Three configurations have been considered: a reference case with 36 rotor blades and 24 stator vanes, a case with blades upscaled by 12.5%, and a case with blades downscaled by 10%. The present results demonstrate that blade scaling effects were essentially negligible on the time-averaged turbine performance, but measurable on the unsteady surface pressure fluctuations, which were intensified as blade size was increased. In contrast, blade torque fluctuations increased significantly as blade size decreased. Blade scaling effects were also measurable on the vanes.

  19. Non-parametric early seizure detection in an animal model of temporal lobe epilepsy

    NASA Astrophysics Data System (ADS)

    Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.

    2008-03-01

    The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited better overall performance in terms of its ability to detect a seizure with a high optimality index value and high statistics in terms of sensitivity and specificity.
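
    Sensitivity and specificity above have their standard definitions; the paper's optimality index combines them with detection lag, but its exact form is not given here, so the combined index in this sketch is a made-up placeholder, not the paper's formula:

    ```python
    # Detector evaluation sketch; the "optimality" formula is an assumed
    # placeholder, NOT the paper's definition.
    def sensitivity(tp: int, fn: int) -> float:
        return tp / (tp + fn)

    def specificity(tn: int, fp: int) -> float:
        return tn / (tn + fp)

    sens = sensitivity(tp=18, fn=2)        # illustrative counts
    spec = specificity(tn=90, fp=10)
    lag_s = 4.0                            # mean detection lag after onset (s)

    optimality = sens * spec / (1.0 + lag_s / 10.0)  # reward stats, penalize lag
    print(f"sens={sens:.2f} spec={spec:.2f} index={optimality:.2f}")
    ```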

  20. A low cost, high energy density, and long cycle life potassium-sulfur battery for grid-scale energy storage.

    PubMed

    Lu, Xiaochuan; Bowden, Mark E; Sprenkle, Vincent L; Liu, Jun

    2015-10-21

    A potassium-sulfur battery using K⁺-conducting beta-alumina as the electrolyte to separate a molten potassium metal anode and a sulfur cathode is presented. The results indicate that the battery can operate at temperatures as low as 150 °C with excellent performance. This study demonstrates a new type of high-performance metal-sulfur battery that is ideal for grid-scale energy-storage applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Scaling of Ion Thrusters to Low Power

    NASA Technical Reports Server (NTRS)

    Patterson, Michael J.; Grisnik, Stanley P.; Soulas, George C.

    1998-01-01

    Analyses were conducted to examine ion thruster scaling relationships in detail to determine performance limits, and lifetime expectations for thruster input power levels below 0.5 kW. This was motivated by mission analyses indicating the potential advantages of high performance, high specific impulse systems for small spacecraft. The design and development status of a 0.1-0.3 kW prototype small thruster and its components are discussed. Performance goals include thruster efficiencies on the order of 40% to 54% over a specific impulse range of 2000 to 3000 seconds, with a lifetime in excess of 8000 hours at full power. Thruster technologies required to achieve the performance and lifetime targets are identified.

  2. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545
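
    In the spirit of the performance-aware co-scheduling described above, here is a toy greedy scheduler that places each task on whichever device (CPU or GPU) would finish it earliest, given per-device cost estimates. The costs are invented and the paper's actual scheduler is more sophisticated; this only illustrates the core idea of using measured per-device performance to assign work:

    ```python
    # Greedy earliest-finish-time assignment across a CPU and a GPU.
    tasks = [(8.0, 1.0), (6.0, 2.5), (4.0, 4.0), (2.0, 3.0)]  # (cpu_s, gpu_s)
    free = {"cpu": 0.0, "gpu": 0.0}                            # next-free time

    for i, (cpu_s, gpu_s) in enumerate(tasks):
        finish = {"cpu": free["cpu"] + cpu_s, "gpu": free["gpu"] + gpu_s}
        device = min(finish, key=finish.get)   # earliest completion wins
        free[device] = finish[device]
        print(f"task {i} -> {device}, done at t={finish[device]:.1f}s")
    ```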

  3. Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Lauer, Frank

    This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging, as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.

  4. Prospects for spinel-stabilized, high-capacity lithium-ion battery cathodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croy, Jason R.; Park, Joong Sun; Shin, Youngho

    Herein we report early results on efforts to optimize the electrochemical performance of a cathode composed of a lithium- and manganese-rich “layered-layered-spinel” material for lithium-ion battery applications. Pre-pilot scale synthesis leads to improved particle properties compared with lab-scale efforts, resulting in high capacities (≳200 mAh/g) and good energy densities (>700 Wh/kg) in tests with lithium-ion cells. Subsequent surface modifications give further improvements in rate capabilities and high-voltage stability. These results bode well for advances in the performance of this class of lithium- and manganese-rich cathode materials.

  5. Prospects for spinel-stabilized, high-capacity lithium-ion battery cathodes

    DOE PAGES

    Croy, Jason R.; Park, Joong Sun; Shin, Youngho; ...

    2016-10-13

    Herein we report early results on efforts to optimize the electrochemical performance of a cathode composed of a lithium- and manganese-rich “layered-layered-spinel” material for lithium-ion battery applications. Pre-pilot scale synthesis leads to improved particle properties compared with lab-scale efforts, resulting in high capacities (≳200 mAh/g) and good energy densities (>700 Wh/kg) in tests with lithium-ion cells. Subsequent surface modifications give further improvements in rate capabilities and high-voltage stability. These results bode well for advances in the performance of this class of lithium- and manganese-rich cathode materials.
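
    The quoted figures are mutually consistent under the standard relation energy density [Wh/kg] ≈ specific capacity [mAh/g] × average voltage [V], since mAh/g × V = mWh/g = Wh/kg; the 3.5 V average below is inferred from the numbers, not stated in the abstract:

    ```python
    # Energy density from specific capacity and average voltage.
    capacity_mah_g = 200.0
    avg_voltage_v = 3.5                 # implied by ~700 Wh/kg (assumption)
    energy_wh_kg = capacity_mah_g * avg_voltage_v
    print(f"{energy_wh_kg:.0f} Wh/kg")  # -> 700
    ```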

  6. The Effect of Home-based Daily Journal Writing in Korean Adolescents with Smartphone Addiction.

    PubMed

    Lee, Hyuk; Seo, Min Jae; Choi, Tae Young

    2016-05-01

    Despite the benefits of smartphones, many adverse effects have emerged. To date, however, there has been no established approach to treating or preventing smartphone addiction. The aim of this study was to evaluate the therapeutic effectiveness of a home-based daily journal of smartphone use (HDJ-S) in Korean adolescents. Three hundred thirty-five middle school students participated in this study. The severity of smartphone addiction was measured using the Korean Smartphone Addiction Proneness Scale. The ability to control smartphone use was evaluated with the Motive Scale for Smartphone Regulation. We used the Parents' Concerns for Children's Smartphone Activities Scale to measure parental monitoring and supervision of adolescents' smartphone activities. The Korean Smartphone Addiction Proneness Scale classified subjects as high risk or non-high risk for smartphone addiction, according to total scores. Forty-six participants (14%) were at high risk for smartphone addiction. The high-risk group performed the HDJ-S for two weeks, and the same scales were subsequently assessed. After performing the HDJ-S, the total scores of the Korean Smartphone Addiction Proneness Scale decreased significantly in the high-risk group (P < 0.001). There was a significant increase in the total scores of the Parents' Concerns for Children's Smartphone Activities Scale in the high-risk group between baseline and two weeks of treatment (P < 0.05). The HDJ-S was effective for adolescents with smartphone addiction and increased parents' concern for their children's smartphone activities. We suggest that the HDJ-S could be considered as a treatment for, and a means of preventing, smartphone addiction.

  7. The Effect of Home-based Daily Journal Writing in Korean Adolescents with Smartphone Addiction

    PubMed Central

    2016-01-01

    Despite the benefits of smartphones, many adverse effects have emerged. To date, however, there has been no established approach to treating or preventing smartphone addiction. The aim of this study was to evaluate the therapeutic effectiveness of a home-based daily journal of smartphone use (HDJ-S) in Korean adolescents. Three hundred thirty-five middle school students participated in this study. The severity of smartphone addiction was measured using the Korean Smartphone Addiction Proneness Scale. The ability to control smartphone use was evaluated with the Motive Scale for Smartphone Regulation. We used the Parents' Concerns for Children's Smartphone Activities Scale to measure parental monitoring and supervision of adolescents' smartphone activities. The Korean Smartphone Addiction Proneness Scale classified subjects as high risk or non-high risk for smartphone addiction, according to total scores. Forty-six participants (14%) were at high risk for smartphone addiction. The high-risk group performed the HDJ-S for two weeks, and the same scales were subsequently assessed. After performing the HDJ-S, the total scores of the Korean Smartphone Addiction Proneness Scale decreased significantly in the high-risk group (P < 0.001). There was a significant increase in the total scores of the Parents' Concerns for Children's Smartphone Activities Scale in the high-risk group between baseline and two weeks of treatment (P < 0.05). The HDJ-S was effective for adolescents with smartphone addiction and increased parents' concern for their children's smartphone activities. We suggest that the HDJ-S could be considered as a treatment for, and a means of preventing, smartphone addiction. PMID:27134499

  8. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    PubMed

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

    In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench scale and the clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model for purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Comparison of NASA-TLX scale, Modified Cooper-Harper scale and mean inter-beat interval as measures of pilot mental workload during simulated flight tasks.

    PubMed

    Mansikka, Heikki; Virtanen, Kai; Harris, Don

    2018-04-30

    The sensitivity of the NASA-TLX scale, the modified Cooper-Harper (MCH) scale, and the mean inter-beat interval (IBI) of successive heart beats as measures of pilot mental workload (MWL) was evaluated in a flight training device (FTD). Operational F/A-18C pilots flew instrument approaches with varying task loads. Pilots' performance, subjective MWL ratings, and IBI were measured. Based on the pilots' performance, three performance categories were formed: high-, medium-, and low-performance. Values of the subjective rating scales and IBI were compared between categories. It was found that all measures were able to differentiate most task conditions, and there was a strong, positive correlation between the NASA-TLX and MCH scales. An explicit link between IBI, NASA-TLX, MCH, and performance was demonstrated. While NASA-TLX, MCH, and IBI have all been used previously to measure MWL, this study is the first to investigate their association in a modern FTD using a realistic flying mission and operational pilots.
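
    For readers unfamiliar with the measures, a toy computation of the mean IBI from R-R intervals, plus a rank correlation between two subjective workload ratings (as the study reports for NASA-TLX and MCH), might look as follows. All numbers are fabricated for illustration.

        import numpy as np
        from scipy.stats import spearmanr

        # Fabricated R-R intervals in milliseconds; higher workload typically
        # shortens the inter-beat interval (IBI).
        rr_ms = np.array([810, 795, 780, 760, 755, 770, 740, 730])
        mean_ibi = rr_ms.mean()

        nasa_tlx = [35, 48, 52, 60, 71, 80]   # hypothetical ratings per condition
        mch      = [3, 4, 4, 6, 7, 9]
        rho, p = spearmanr(nasa_tlx, mch)     # rank correlation between the scales
        print(f"mean IBI = {mean_ibi:.1f} ms, rho = {rho:.2f}, p = {p:.3f}")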

  10. Children Born at Risk: What's Happening in Kindergarten?

    ERIC Educational Resources Information Center

    Reich, Jill N.; And Others

    1993-01-01

    Compared the performance of high-risk (n=20) and low-risk (n=15) children on the Wechsler Preschool and Primary Scale of Intelligence at pre- and postkindergarten. The high-risk group had, at birth, experienced prematurity and/or illness. Both groups demonstrated increases in performance; high-risk children showed increases predominantly in…

  11. DOE High Performance Computing Operational Review (HPCOR): Enabling Data-Driven Scientific Discovery at HPC Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerber, Richard; Allcock, William; Beggio, Chris

    2014-10-17

    U.S. Department of Energy (DOE) High Performance Computing (HPC) facilities are on the verge of a paradigm shift in the way they deliver systems and services to science and engineering teams. Research projects are producing a wide variety of data at unprecedented scale and level of complexity, with community-specific services that are part of the data collection and analysis workflow. On June 18-19, 2014 representatives from six DOE HPC centers met in Oakland, CA at the DOE High Performance Operational Review (HPCOR) to discuss how they can best provide facilities and services to enable large-scale data-driven scientific discovery at the DOE national laboratories. The report contains findings from that review.

  12. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require only a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  13. Building and measuring a high performance network architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. The testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies, along with accumulated measurements that offer insights into the networks of the future.

  14. Simplified Summative Temporal Bone Dissection Scale Demonstrates Equivalence to Existing Measures.

    PubMed

    Pisa, Justyn; Gousseau, Michael; Mowat, Stephanie; Westerberg, Brian; Unger, Bert; Hochman, Jordan B

    2018-01-01

    Emphasis on patient safety has created the need for quality assessment of fundamental surgical skills. Existing temporal bone rating scales are laborious, subject to evaluator fatigue, and contain inconsistencies when conferring points. To address these deficiencies, a novel binary assessment tool was designed and validated against a well-established rating scale. Residents completed a mastoidectomy with posterior tympanotomy on identical 3D-printed temporal bone models. Four neurotologists evaluated each specimen using a validated scale (Welling) and a newly developed "CanadaWest" scale, with scoring repeated after a 4-week interval. Nineteen participants were clustered into junior, intermediate, and senior cohorts. An ANOVA found significant differences between the performance of the junior-intermediate and junior-senior cohorts for both the Welling and CanadaWest scales (P < .05). Neither scale found a significant difference between intermediate and senior resident performance (P > .05). Cohen's kappa showed strong intrarater reliability (0.711) and a high degree of interrater reliability (0.858) for the CanadaWest scale, similar to the corresponding Welling scale values of 0.713 and 0.917. The CanadaWest scale was facile and delineated performance by experience level with strong intrarater reliability. Comparable to the validated Welling Scale, it distinguished junior from senior trainees but was challenged in differentiating intermediate and senior trainee performance.

  15. Impact of spatial variability and sampling design on model performance

    NASA Astrophysics Data System (ADS)

    Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes

    2017-04-01

    Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are costly in labour time or money, a choice has to be made between high sampling resolution at small scales with low spatial coverage of the study area, and lower sampling resolution at small scales, which yields local data uncertainties but better spatial coverage of the whole area. This dilemma is often faced in the design of field sampling campaigns for large scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. We built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field scale abundance of earthworms was estimated using a catchment scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design, to estimate the nugget, range, and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64. With increasing numbers of sampling points per field, we averaged the measured abundance within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field, the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between the number of samplings and the performance criteria can be described by a saturation curve: beyond five samples per field, the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance and the implications for sampling design, assessment of model results, and ecological inferences.
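
    The saturation effect described here is easy to reproduce in a toy setting: give each virtual field a true mean plus large small-scale noise, then watch the correlation with truth rise as more samples per field are averaged. All distribution parameters below are invented, not the study's.

        import numpy as np

        rng = np.random.default_rng(0)

        # 65 virtual "fields", each with a true mean abundance; every sample of
        # a field adds large small-scale noise around that mean.
        true_mean = rng.gamma(shape=2.0, scale=10.0, size=65)
        for n in (1, 2, 5, 10, 50):
            sampled = np.array([rng.normal(m, 15.0, size=n).mean() for m in true_mean])
            r = np.corrcoef(true_mean, sampled)[0, 1]
            print(f"{n:2d} samples/field: r = {r:.2f}")

    The printed correlations climb steeply from one to five samples per field and then flatten, mirroring the saturation curve the abstract describes.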

  16. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses, and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervasive obstacles are the lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  17. High performance cellular level agent-based simulation with FLAME for the GPU.

    PubMed

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and the ability to simulate a biological scale of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template-driven framework for agent-based modelling (ABM) on parallel architectures, ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but also avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvements in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
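
    As a generic illustration of the agent-based pattern such frameworks target (this is not the FLAME/FLAME GPU API, which instead uses a formal model specification), a cellular-scale step loop might look like the following sketch.

        import numpy as np

        rng = np.random.default_rng(1)

        # Generic cellular-scale agent step: random motility plus a density-
        # dependent division rule. All rates and the carrying capacity are
        # invented for illustration.
        pos = rng.uniform(0, 100, size=(200, 2))       # 200 cells in a 100x100 box
        for step in range(50):
            pos += rng.normal(0, 0.5, size=pos.shape)  # random motility
            crowding = len(pos) / 1e4                  # assumed carrying capacity
            divide = rng.random(len(pos)) < 0.02 * max(0.0, 1 - crowding)
            pos = np.vstack([pos, pos[divide]])        # daughters at parent site
        print("cells after 50 steps:", len(pos))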

  18. An experimental investigation of the flow physics of high-lift systems

    NASA Technical Reports Server (NTRS)

    Thomas, Flint O.; Nelson, R. C.

    1995-01-01

    This progress report is a series of overviews outlining experiments on the flow physics of confluent boundary layers for high-lift systems. The research objectives include establishing the role of confluent boundary layer flow physics in high-lift production; contrasting confluent boundary layer structures for optimum and non-optimum C(sub L) cases; forming a high-quality, detailed archival database for CFD/modelling; and examining the role of relaminarization and streamline curvature. Goals of this research include completing an LDV study of an optimum C(sub L) case; performing detailed LDV confluent boundary layer surveys for multiple non-optimum C(sub L) cases; obtaining skin friction distributions for both optimum and non-optimum C(sub L) cases for scaling purposes; data analysis and inner- and outer-variable scaling; setting up and performing relaminarization experiments; and a final report establishing the role of leading-edge confluent boundary layer flow physics on high-lift performance.

  19. Large-scale synthesis of high-quality hexagonal boron nitride nanosheets for large-area graphene electronics.

    PubMed

    Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo

    2012-02-08

    Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, a lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is critical for the formation of pure h-BN nanosheets as well as for the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased threefold compared with that of a device without the h-BN nanosheets. The on-off ratio of the drain current is twice that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society

  20. Recent advances in micro-scale and nano-scale high-performance liquid-phase chromatography for proteome research.

    PubMed

    Tao, Dingyin; Zhang, Lihua; Shan, Yichu; Liang, Zhen; Zhang, Yukui

    2011-01-01

    High-performance liquid chromatography-electrospray ionization tandem mass spectrometry (HPLC-ESI-MS-MS) is regarded as one of the most powerful techniques for separation and identification of proteins. Recently, much effort has been made to improve the separation capacity, detection sensitivity, and analysis throughput of micro- and nano-HPLC, by increasing column length, reducing column internal diameter, and using integrated techniques. Development of HPLC columns has also been rapid, as a result of the use of submicrometer packing materials and monolithic columns. All these innovations result in clearly improved performance of micro- and nano-HPLC for proteome research.

  1. Assessing Technical Performance and Determining the Learning Curve in Cleft Palate Surgery Using a High-Fidelity Cleft Palate Simulator.

    PubMed

    Podolsky, Dale J; Fisher, David M; Wong Riff, Karen W; Szasz, Peter; Looi, Thomas; Drake, James M; Forrest, Christopher R

    2018-06-01

    This study assessed technical performance in cleft palate repair using a newly developed assessment tool and a high-fidelity cleft palate simulator through a longitudinal simulation training exercise. Three residents each performed five, and one resident performed nine, consecutive endoscopically recorded cleft palate repairs using a cleft palate simulator. Two fellows in pediatric plastic surgery and two expert cleft surgeons also performed recorded simulated repairs. The Cleft Palate Objective Structured Assessment of Technical Skill (CLOSATS) and end-product scales were developed to assess performance. Two blinded cleft surgeons assessed the recordings and the final repairs using the CLOSATS, the end-product scale, and a previously developed global rating scale. The average procedure-specific (CLOSATS), global rating, and end-product scores increased logarithmically over successive simulation sessions for the residents. Reliability of the CLOSATS (average item intraclass correlation coefficient [ICC], 0.85 ± 0.093) and of the global ratings (average item ICC, 0.91 ± 0.02) among the raters was high. Reliability of the end-product assessments was lower (average item ICC, 0.66 ± 0.15). Standard setting using linear regression with an overall cutoff score of 7 of 10 corresponded to pass scores of 44 (maximum, 60) for the CLOSATS and 23 (maximum, 30) for the global score. Using logarithmic best-fit curves, 6.3 simulation sessions are required to reach the minimum standard. A high-fidelity cleft palate simulator has been developed that improves technical performance in cleft palate repair. The simulator and technical assessment scores can be used to determine performance before operating on patients.
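
    The logarithmic learning-curve fit is straightforward to reproduce: regress score on ln(sessions) and invert at the pass mark. In the sketch below only the 44-point cutoff mirrors the text; the per-session scores are invented.

        import numpy as np

        # Fit score = a*ln(n) + b to hypothetical per-session scores, then
        # solve for the session count n at which the 44-point pass mark is hit.
        sessions = np.array([1, 2, 3, 4, 5, 6, 7])
        scores   = np.array([28, 34, 38, 41, 42, 43, 44])   # made-up scores
        a, b = np.polyfit(np.log(sessions), scores, 1)      # linear in ln(n)
        n_pass = np.exp((44 - b) / a)
        print(f"score ~ {a:.1f}*ln(n) + {b:.1f}; pass mark near n = {n_pass:.1f}")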

  2. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two ordinal-variable clustering examples, ordinal variables being more challenging to analyze, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using a clustering algorithm appropriate to the measurement scale of the variables in the study, high performance is attained. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
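
    A hedged sketch of the clustering-versus-gold-standard workflow, using synthetic stand-in data rather than the actual WBCD, and k-means as an example algorithm that implicitly treats ordinal levels as interval values (exactly the scale decision the review discusses):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)

        # Synthetic stand-in: nine 10-level ordinal features, two latent groups
        # (444 "benign" + 239 "malignant" = 683, matching the abstract's count).
        benign    = rng.integers(1, 5,  size=(444, 9))
        malignant = rng.integers(5, 11, size=(239, 9))
        X = np.vstack([benign, malignant]).astype(float)
        y = np.array([0] * 444 + [1] * 239)

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        acc = max((labels == y).mean(), (labels != y).mean())  # labels may be swapped
        print(f"clustering accuracy vs. gold standard: {acc:.2%}")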

  3. Predicting the breakdown strength and lifetime of nanocomposites using a multi-scale modeling approach

    NASA Astrophysics Data System (ADS)

    Huang, Yanhui; Zhao, He; Wang, Yixing; Ratcliff, Tyree; Breneman, Curt; Brinson, L. Catherine; Chen, Wei; Schadler, Linda S.

    2017-08-01

    It has been found that doping dielectric polymers with a small amount of nanofiller or molecular additive can stabilize the material under a high field and lead to increased breakdown strength and lifetime. Choosing appropriate fillers is critical to optimizing the material performance, but current research largely relies on experimental trial and error. The employment of computer simulations for nanodielectric design is rarely reported. In this work, we propose a multi-scale modeling approach that employs ab initio, Monte Carlo, and continuum scales to predict the breakdown strength and lifetime of polymer nanocomposites based on the charge trapping effect of the nanofillers. The charge transfer, charge energy relaxation, and space charge effects are modeled in respective hierarchical scales by distinctive simulation techniques, and these models are connected together for high fidelity and robustness. The preliminary results show good agreement with the experimental data, suggesting its promise for use in the computer aided material design of high performance dielectrics.

  4. Opportunities for nonvolatile memory systems in extreme-scale high-performance computing

    DOE PAGES

    Vetter, Jeffrey S.; Mittal, Sparsh

    2015-01-12

    For extreme-scale high-performance computing systems, system-wide power consumption has been identified as one of the key constraints moving forward, where DRAM main memory systems account for about 30 to 50 percent of a node's overall power consumption. As the benefits of device scaling for DRAM memory slow, it will become increasingly difficult to keep memory capacities balanced with increasing computational rates offered by next-generation processors. However, several emerging memory technologies related to nonvolatile memory (NVM) devices are being investigated as an alternative for DRAM. Moving forward, NVM devices could offer solutions for HPC architectures. Researchers are investigating how to integrate these emerging technologies into future extreme-scale HPC systems and how to expose these capabilities in the software stack and applications. In addition, current results show several of these strategies could offer high-bandwidth I/O, larger main memory capacities, persistent data structures, and new approaches for application resilience and output postprocessing, such as transaction-based incremental checkpointing and in situ visualization, respectively.

  5. Investigating the Potential of Deep Neural Networks for Large-Scale Classification of Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.

    2017-05-01

    Semantic classification is a core remote sensing task, as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore perfectly tailored to urban areas over restricted extents, but they are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed that efficiently discriminates the main classes of interest (namely buildings, roads, water, crops, and vegetated areas) by exploiting existing VHR land-cover maps for training.
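
    As a rough illustration of what a "simple yet effective" patch-classifying DCNN could look like: the band count, layer sizes, and patch size below are assumptions for the sketch, not the authors' architecture.

        import torch
        import torch.nn as nn

        # A deliberately small patch classifier over multispectral patches.
        model = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),   # 4 bands, e.g. R,G,B,NIR
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 5),     # buildings, roads, water, crops, vegetation
        )
        logits = model(torch.randn(8, 4, 65, 65))         # a batch of 65x65 patches
        print(logits.shape)                               # torch.Size([8, 5])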

  6. A Fault-Tolerant Radiation-Robust Mass Storage Concept for Highly Scaled Flash Memory

    NASA Astrophysics Data System (ADS)

    Fuchs, Cristian M.; Trinitis, Carsten; Appel, Nicolas; Langer, Martin

    2015-09-01

    Future space missions will require vast amounts of data to be stored and processed aboard spacecraft. While satisfying operational mission requirements, storage systems must guarantee data integrity and recover damaged data throughout the mission. NAND flash memories have become popular for space-borne high performance mass memory scenarios, though future storage concepts will rely upon highly scaled flash or other memory technologies. With modern flash memory, single-bit erasure coding and RAID-based concepts are insufficient. Thus, a fully run-time-configurable, high performance, dependable storage concept is needed that requires only a minimal set of logic or software. The solution is based on composite erasure coding and can be adjusted for altered mission durations or changing environmental conditions.
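
    The base mechanism behind erasure-coded storage can be shown with the simplest possible code: a single XOR parity page protecting k equal-length data pages, from which any one lost page is recoverable. Composite schemes layer stronger codes on top of such primitives; the sketch below is purely illustrative.

        # XOR parity: the parity page is the XOR of all data pages, so XOR-ing
        # the survivors with the parity reconstructs a single erased page.
        def xor_pages(pages):
            out = bytearray(len(pages[0]))
            for page in pages:
                for i, b in enumerate(page):
                    out[i] ^= b
            return bytes(out)

        data = [b"page-one", b"page-two", b"pageiii!"]   # equal-length "flash pages"
        parity = xor_pages(data)

        lost = 1                                         # pretend page 1 was erased
        survivors = [p for i, p in enumerate(data) if i != lost]
        recovered = xor_pages(survivors + [parity])
        assert recovered == data[lost]
        print("recovered:", recovered)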

  7. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.

  8. Microfluidic biolector-microfluidic bioprocess control in microtiter plates.

    PubMed

    Funke, Matthias; Buchenauer, Andreas; Schnakenberg, Uwe; Mokwa, Wilfried; Diederichs, Sylvia; Mertens, Alan; Müller, Carsten; Kensy, Frank; Büchs, Jochen

    2010-10-15

    In industrial-scale biotechnological processes, the active control of the pH value combined with the controlled feeding of substrate solutions (fed-batch) is the standard strategy to cultivate both prokaryotic and eukaryotic cells. In contrast, for small-scale cultivations, much simpler batch experiments with no process control are performed. This lack of process control often hinders researchers from scaling fermentation experiments up or down, because the microbial metabolism, and thereby the growth and production kinetics, changes drastically depending on the cultivation strategy applied. While small-scale batches are typically performed in a highly parallel fashion and in high throughput, large-scale cultivations demand sophisticated equipment for process control, which is in most cases costly and difficult to handle. Currently, there is no technical system on the market that realizes simple process control in high throughput. The novel concept of a microfermentation system described in this work combines a fiber-optic online-monitoring device for microtiter plates (MTPs)--the BioLector technology--with microfluidic control of cultivation processes in volumes below 1 mL. In the microfluidic chip, a micropump is integrated to realize distinct substrate flow rates during fed-batch cultivation at microscale. Hence, a cultivation system with several distinct advantages could be established: (1) high information output on a microscale; (2) many experiments can be performed in parallel and be automated using MTPs; (3) the system is user-friendly and can easily be transferred to a disposable single-use system. This article elucidates this new concept and illustrates applications in fermentations of Escherichia coli under pH-controlled and fed-batch conditions in shaken MTPs. Copyright 2010 Wiley Periodicals, Inc.

  9. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least-squares method. Numerical simulations have been performed on multiple-degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithms. Tests were performed using the IBM SP2 at NASA Ames Research Center. The least-squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
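
    The memory argument can be made concrete: a direct solve materializes a dense n x n system (memory ~ n^2), while an iterative Krylov method such as conjugate gradients needs only sparse matrix-vector products (memory ~ n). A small sketch under those assumptions, not the report's code:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg

        # Well-conditioned sparse test system; both solvers reach the same answer,
        # but only the dense path allocates the full n x n matrix.
        n = 500
        A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        x_dense = np.linalg.solve(A.toarray(), b)   # O(n^2) memory
        x_iter, info = cg(A, b, atol=1e-10)         # O(n) memory, matvecs only
        print(info == 0, np.allclose(x_dense, x_iter, atol=1e-3))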

  10. Aerodynamic Performance of a 0.27-Scale Model of an AH-64 Helicopter with Baseline and Alternate Rotor Blade Sets

    NASA Technical Reports Server (NTRS)

    Kelley, Henry L.

    1990-01-01

    Performance of a 27-percent-scale model rotor designed for the AH-64 helicopter (alternate rotor) was measured in hover and forward flight and compared against an AH-64 baseline rotor model. Thrust, rotor tip Mach number, advance ratio, and ground proximity were varied. In hover, at a nominal thrust coefficient of 0.0064, the power savings was about 6.4 percent for the alternate rotor compared to the baseline. The corresponding thrust increase at this condition was approximately 4.5 percent, which represents an equivalent full-scale increase in lift capability of about 660 lb. Comparable results were noted in forward flight, except for the high-thrust, high-speed cases investigated, where the baseline rotor was slightly superior. The reduced performance at the higher thrusts and speeds was likely due to Reynolds number effects and blade elasticity differences.

  11. LLMapReduce: Multi-Level Map-Reduce for High Performance Data Analysis

    DTIC Science & Technology

    2016-05-23

    LLMapReduce works with several schedulers, such as SLURM, Grid Engine, and LSF. Keywords: LLMapReduce; map-reduce; performance; scheduler; Grid Engine; SLURM; LSF. From the introduction: "Large scale computing is currently dominated by four ecosystems: supercomputing, database, enterprise, and big data [1] … interconnects [6]), high performance math libraries (e.g., BLAS [7, 8], LAPACK [9], ScaLAPACK [10]) designed to exploit special processing hardware, high …"

  12. Parallel Scaling Characteristics of Selected NERSC User ProjectCodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skinner, David; Verdier, Francesca; Anand, Harsh

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale of the workload, in terms of concurrency and problem size, is analyzed. Drawing on batch queue logs, performance data and feedback from researchers, we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  13. High-throughput micro-scale cultivations and chromatography modeling: Powerful tools for integrated process development.

    PubMed

    Baumann, Pascal; Hahn, Tobias; Hubbuch, Jürgen

    2015-10-01

    Upstream processes are rather complex to design, and the productivity of cells under suitable cultivation conditions is hard to predict. The method of choice for examining the design space is to execute high-throughput cultivation screenings in micro-scale format. Various predictive in silico models have been developed for many downstream processes, leading to a reduction of time and material costs. This paper presents a combined optimization approach based on high-throughput micro-scale cultivation experiments and chromatography modeling. The overall optimized system need not be the one with the highest product titers, but the one resulting in superior overall process performance in up- and downstream. The methodology is presented in a case study for the Cherry-tagged enzyme Glutathione-S-Transferase from Escherichia coli SE1. The Cherry-Tag™ (Delphi Genetics, Belgium), which can be fused to any target protein, allows for direct product analytics by simple VIS absorption measurements. High-throughput cultivations were carried out in a 48-well format in a BioLector micro-scale cultivation system (m2p-Labs, Germany). The downstream process optimization for a set of randomly picked upstream conditions producing high yields was performed in silico using a chromatography modeling software developed in-house (ChromX). The suggested in silico-optimized operational modes for product capture were validated subsequently. The overall best system was chosen based on a combination of excellent up- and downstream performance. © 2015 Wiley Periodicals, Inc.

  14. Need for cognition and cognitive performance from a cross-cultural perspective: examples of academic success and solving anagrams.

    PubMed

    Gülgöz, S

    2001-01-01

    The cross-cultural validity of the Need for Cognition Scale and its relationship with cognitive performance were investigated in two studies. In the first study, the relationships between the scale and university entrance scores, course grades, study skills, and social desirability were examined. Using the short form of the Turkish version of the Need for Cognition Scale (S. Gülgöz & C. J. Sadowski, 1995), no correlation with academic performance was found, but there were significant correlations with a study skills scale and a social desirability scale created for this study. When regression analysis was used to predict grade point average, the Need for Cognition Scale was a significant predictor. In the second study, participants low or high in need for cognition solved multiple-solution anagrams. The instructions preceding the task set the participants' expectations regarding task difficulty. An interaction between expectation and need for cognition indicated that participants with low need for cognition performed worse when they expected difficult problems. The results of the two studies showed that need for cognition has cross-cultural validity and that its effect on cognitive performance is mediated by other variables.

  15. Monthly streamflow forecasting using continuous wavelet and multi-gene genetic programming combination

    NASA Astrophysics Data System (ADS)

    Hadi, Sinan Jasim; Tombul, Mustafa

    2018-06-01

    Streamflow is an essential component of the hydrologic cycle at regional and global scales and the main source of fresh water supply. It is highly associated with natural disasters, such as droughts and floods. Therefore, accurate streamflow forecasting is essential. Forecasting streamflow in general, and monthly streamflow in particular, is a complex process that cannot be handled by data-driven models (DDMs) alone and requires pre-processing. Wavelet transformation is one such pre-processing technique; however, application of continuous wavelet transformation (CWT) produces many scales that degrade the performance of any DDM because of the large number of redundant variables. This study proposes multi-gene genetic programming (MGGP) as a selection tool: after the CWT analysis, it selects the important scales to be fed into the artificial neural network (ANN). A basin located in the southeast of Turkey is selected as a case study to demonstrate the forecasting ability of the proposed model. One-month-ahead downstream flow is used as the output, and downstream flow, upstream flow, rainfall, temperature, and potential evapotranspiration, with associated lags, are used as inputs. Before modeling, wavelet coherence transformation (WCT) analysis was conducted to analyze the relationships between the variables in the time-frequency domain. Several combinations were developed to investigate the effect of the variables on streamflow forecasting. The results indicated a highly localized correlation between streamflow and the other variables, especially the upstream flow. In the standalone layout, where the data were fed to the ANN and MGGP without CWT, performance was poor. In the best-scale layout, where the single CWT scale with the highest correlation is chosen and fed to the ANN and MGGP, performance increased slightly. With the proposed model, performance improved dramatically, particularly in forecasting peak values, because of the inclusion of several scales in which seasonality and irregularity can be captured. Using hydrological and meteorological variables also improved the ability to forecast streamflow.
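
    A stripped-down sketch of the pipeline, with synthetic series in place of the basin data and a simple correlation ranking standing in for the paper's MGGP scale selection:

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)

        # Toy monthly driver and target series (20 years, seasonal plus noise).
        upstream = np.sin(np.arange(240) * 2 * np.pi / 12) + rng.normal(0, 0.3, 240)
        flow = 0.8 * np.roll(upstream, -1) + rng.normal(0, 0.2, 240)

        # CWT the driver, then keep the scales most correlated with the target
        # (a crude stand-in for MGGP-based selection).
        coeffs, _ = pywt.cwt(upstream, scales=np.arange(1, 33), wavelet="morl")
        corr = [abs(np.corrcoef(c[:-1], flow[:-1])[0, 1]) for c in coeffs]
        best = np.argsort(corr)[-4:]

        X, y = coeffs[best].T[:-1], flow[:-1]
        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        ann.fit(X[:200], y[:200])
        print("held-out R^2:", round(ann.score(X[200:], y[200:]), 2))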

  16. Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).

    PubMed

    Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando

    2014-07-02

    Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
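
    The internal-consistency statistic reported here, Cronbach's alpha, is simple to compute from an items-by-subjects score matrix; the ratings below are fabricated for illustration (the study reports alpha = 0.918 for the scale itself).

        import numpy as np

        # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
        def cronbach_alpha(items):
            items = np.asarray(items, dtype=float)   # rows = items, cols = subjects
            k = items.shape[0]
            item_vars = items.var(axis=1, ddof=1).sum()
            total_var = items.sum(axis=0).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        ratings = [[4, 5, 3, 4, 2, 5],
                   [4, 4, 3, 5, 2, 5],
                   [5, 5, 2, 4, 3, 4]]
        print(round(cronbach_alpha(ratings), 3))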

  17. Fruit fly scale robots can hover longer with flapping wings than with spinning wings.

    PubMed

    Hawkes, Elliot W; Lentink, David

    2016-10-01

    Hovering flies generate exceptionally high lift, because their wings generate a stable leading edge vortex. Micro flying robots with a similar wing design can generate similar high lift by either flapping or spinning their wings. While it requires less power to spin a wing, the overall efficiency also depends on the actuator system driving the wing. Here, we present the first holistic analysis to calculate how long a fly-inspired micro robot can hover with flapping versus spinning wings across scales. We integrate aerodynamic data with data-driven scaling laws for actuator, electronics and mechanism performance from fruit fly to hummingbird scales. Our analysis finds that spinning wings driven by rotary actuators are superior for robots with wingspans similar to hummingbirds, yet flapping wings driven by oscillatory actuators are superior at fruit fly scale. This crossover is driven by the reduction in performance of rotary compared with oscillatory actuators at smaller scale. Our calculations emphasize that a systems-level analysis is essential for trading off flapping versus spinning wings for micro flying robots. © 2016 The Author(s).

  18. Acoustic Treatment Design Scaling Methods. Volume 5; Analytical and Experimental Data Correlation

    NASA Technical Reports Server (NTRS)

    Chien, W. E.; Kraft, R. E.; Syed, A. A.

    1999-01-01

    The primary purpose of the study presented in this volume is to present the results and data analysis of in-duct transmission loss measurements. Transmission loss testing was performed on full-scale, 1/2-scale, and 1/5-scale treatment panel samples. The objective of the study was to compare predicted and measured transmission loss for full-scale and subscale panels in an attempt to evaluate the variations in suppression between full- and subscale panels which were ostensibly of equivalent design. Generally, the results indicated an unsatisfactory agreement between measurement and prediction, even for full-scale panels. This was attributable to difficulties encountered in obtaining sufficiently accurate test results, even with extraordinary care in calibrating the instrumentation and performing the test. Test difficulties precluded the ability to make measurements at frequencies high enough to be representative of subscale liners. It is concluded that transmission loss measurements without ducts and data acquisition facilities specifically designed to operate with the precision and complexity required for high subscale frequency ranges are inadequate for evaluation of subscale treatment effects.

  19. Fruit fly scale robots can hover longer with flapping wings than with spinning wings

    PubMed Central

    Lentink, David

    2016-01-01

    Hovering flies generate exceptionally high lift, because their wings generate a stable leading edge vortex. Micro flying robots with a similar wing design can generate similar high lift by either flapping or spinning their wings. While it requires less power to spin a wing, the overall efficiency also depends on the actuator system driving the wing. Here, we present the first holistic analysis to calculate how long a fly-inspired micro robot can hover with flapping versus spinning wings across scales. We integrate aerodynamic data with data-driven scaling laws for actuator, electronics and mechanism performance from fruit fly to hummingbird scales. Our analysis finds that spinning wings driven by rotary actuators are superior for robots with wingspans similar to hummingbirds, yet flapping wings driven by oscillatory actuators are superior at fruit fly scale. This crossover is driven by the reduction in performance of rotary compared with oscillatory actuators at smaller scale. Our calculations emphasize that a systems-level analysis is essential for trading off flapping versus spinning wings for micro flying robots. PMID:27707903

  20. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
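
    A classic example of such a collective is allreduce via recursive doubling: with p = 2^k processes, every rank holds the global sum after log2(p) pairwise exchange steps. The sketch below simulates the algorithm with plain lists rather than real message passing.

        # Recursive-doubling allreduce: at step s, rank r exchanges its partial
        # sum with partner r XOR s, doubling the span of each partial sum.
        def allreduce_sum(values):
            p, vals = len(values), list(values)
            step = 1
            while step < p:                    # log2(p) rounds
                nxt = [vals[rank] + vals[rank ^ step] for rank in range(p)]
                vals, step = nxt, step * 2
            return vals                        # every rank now holds the total

        print(allreduce_sum([1, 2, 3, 4, 5, 6, 7, 8]))   # [36, 36, ..., 36]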

  1. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
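
    The asynchronous double-buffering idea generalizes beyond CUDA: one buffer is filled while the other is consumed, hiding transfer time behind compute. A GPU-free sketch of the pattern using threads and a bounded queue (not the authors' CUDA code):

        import queue
        import threading

        # A two-slot bounded queue lets the producer ("host copy") stage the next
        # buffer while the consumer ("kernel") drains the current one.
        todo = queue.Queue(maxsize=2)          # two in-flight buffers

        def producer(num_chunks):
            for i in range(num_chunks):
                todo.put([i] * 1024)           # stage the next data chunk
            todo.put(None)                     # sentinel: no more work

        def consumer():
            total = 0
            while (chunk := todo.get()) is not None:
                total += sum(chunk)            # process the current buffer
            print("processed sum:", total)

        t = threading.Thread(target=producer, args=(8,))
        t.start()
        consumer()
        t.join()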

  2. High-Performance Complementary Transistors and Medium-Scale Integrated Circuits Based on Carbon Nanotube Thin Films.

    PubMed

    Yang, Yingjun; Ding, Li; Han, Jie; Zhang, Zhiyong; Peng, Lian-Mao

    2017-04-25

    Solution-derived carbon nanotube (CNT) network films with high semiconducting purity are suitable materials for the wafer-scale fabrication of field-effect transistors (FETs) and integrated circuits (ICs). However, it is challenging to realize high-performance complementary metal-oxide semiconductor (CMOS) FETs with high yield and stability on such CNT network films, and this difficulty hinders the development of CNT-film-based ICs. In this work, we developed a doping-free process for the fabrication of CMOS FETs based on solution-processed CNT network films, in which the polarity of the FETs was controlled using Sc or Pd as the source/drain contacts to selectively inject carriers into the channels. The fabricated top-gated CMOS FETs showed high symmetry between the characteristics of n- and p-type devices and exhibited high-performance uniformity and excellent scalability down to a gate length of 1 μm. Many common types of CMOS ICs, including typical logic gates, sequential circuits, and arithmetic units, were constructed based on CNT films, and the fabricated ICs exhibited rail-to-rail outputs because of the high noise margin of CMOS circuits. In particular, 4-bit full adders consisting of 132 CMOS FETs were realized with 100% yield, thereby demonstrating that this CMOS technology shows the potential to advance the development of medium-scale CNT-network-film-based ICs.

  3. Fabrication of coronagraph masks and laboratory scale star-shade masks: characteristics, defects, and performance

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Kunjithapatham; Riggs, A. J. Eldorado; Cady, Eric; White, Victor; Yee, Karl; Wilson, Daniel; Echternach, Pierre; Muller, Richard; Mejia Prada, Camilo; Seo, Byoung-Joon; Shi, Fang; Ryan, Daniel; Fregoso, Santos; Metzman, Jacob; Wilson, Robert Casey

    2017-09-01

    The NASA WFIRST mission plans to include a coronagraph instrument to find and characterize exoplanets. Masks are needed to suppress the host star light to better than the 10^-8 to 10^-9 contrast level over a broad bandwidth to enable the coronagraph mission objectives. Such masks for high-contrast coronagraphic imaging require various fabrication technologies to meet a wide range of specifications, including precise shapes, micron-scale island features, ultra-low-reflectivity regions, uniformity, wavefront quality, etc. We present the technologies employed at JPL to produce these pupil-plane and image-plane coronagraph masks, and lab-scale external occulter masks, highlighting accomplishments from the high contrast imaging testbed (HCIT) at JPL and from the high contrast imaging lab (HCIL) at Princeton University. Inherent systematic and random errors in fabrication and their impact on coronagraph performance are discussed with model predictions and measurements.

  4. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into the resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is used in many mission-critical systems today to provide resilience, such as aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of an HPC system, and the respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability, as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
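
    The availability arithmetic behind this argument is short: a node is available with probability A = MTTF / (MTTF + MTTR), and a redundant pair is unavailable only when both replicas are down. A minimal sketch with hypothetical hour values:

        # Availability of one node vs. a dual-redundant pair of the same node.
        def availability(mttf_h, mttr_h, replicas=1):
            a = mttf_h / (mttf_h + mttr_h)      # single-node availability
            return 1 - (1 - a) ** replicas      # pair fails only if both are down

        single = availability(1_000, 10)                # one less-reliable node
        paired = availability(1_000, 10, replicas=2)    # the same node, duplicated
        print(f"single: {single:.4f}  dual-redundant: {paired:.6f}")

    Even a node that is down 1% of the time yields roughly 99.99% availability when duplicated, which is the sense in which cheaper, less reliable nodes can still be highly available.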

  5. WRF Test on IBM BG/L:Toward High Performance Application to Regional Climate Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, H S

    The effects of climate change will mostly be felt on local to regional scales (Solomon et al., 2007). To develop better forecast skill for regional climate change, an integrated multi-scale modeling capability (i.e., a pair of global and regional climate models) becomes crucially important in understanding and preparing for the impacts of climate change on the temporal and spatial scales that are critical to California's and the nation's future environmental quality and economic prosperity. Accurate knowledge of the detailed local impact of climate change on the water management system requires a resolution of 1 km or so. To this end, a high-performance computing platform at the petascale appears to be an essential tool for providing such local-scale information to formulate high-quality adaptation strategies for local and regional climate change. As a key component of this modeling system at LLNL, the Weather Research and Forecast (WRF) model is implemented and tested on the IBM BG/L machine. The objective of this study is to examine the scaling behavior of WRF on BG/L for optimal performance, and to assess the numerical accuracy of the WRF solution on BG/L.

  6. The high velocity, high adiabat, ``Bigfoot'' campaign and tests of indirect-drive implosion scaling

    NASA Astrophysics Data System (ADS)

    Casey, Daniel

    2017-10-01

    To achieve hotspot ignition, inertial confinement fusion (ICF) implosions must achieve high hotspot internal energy that is inertially confined by a dense shell of DT fuel. To accomplish this, implosions are designed to achieve high peak implosion velocity, good energy coupling between the hotspot and imploding shell, and high areal-density at stagnation. However, experiments have shown that achieving these simultaneously is extremely challenging, partly because of inherent tradeoffs between these three interrelated requirements. The Bigfoot approach is to intentionally trade off high convergence, and therefore areal-density, in favor of high implosion velocity and good coupling between the hotspot and shell. This is done by intentionally colliding the shocks in the DT ice layer. This results in a short laser pulse, which improves hohlraum symmetry and predictability, while the reduced compression improves hydrodynamic stability. The results of this campaign will be reviewed and include demonstrated low-mode symmetry control at two different hohlraum geometries (5.75 mm and 5.4 mm diameters) and at two different target scales (5.4 mm and 6.0 mm hohlraum diameters) spanning 300-430 TW in laser power and 0.8-1.7 MJ in laser energy. Results of the 10% scaling between these designs for the hohlraum and capsule will be presented. Hydrodynamic instability growth from engineering features like the capsule fill tube is currently thought to be a significant perturbation to the target and a major factor in reducing its performance compared to calculations. Evidence supporting this hypothesis as well as plans going forward will be presented. Ongoing experiments are attempting to measure the impact on target performance from an increase in target scale, and the preliminary results will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Anaerobic co-digestion of high-strength organic wastes pretreated by thermal hydrolysis.

    PubMed

    Choi, Gyucheol; Kim, Jaai; Lee, Seungyong; Lee, Changsoo

    2018-06-01

    Thermal hydrolysis (TH) pretreatment was investigated for the anaerobic digestion (AD) of a mixture of high-strength organic wastes (i.e., dewatered human feces, dewatered sewage sludge, and food wastewater) at laboratory scale to simulate a full-scale plant and evaluate its feasibility. The reactors maintained efficient and stable performance at a hydraulic retention time of 20 days, which may not be sufficient for the mesophilic AD of high-suspended-solid wastes, despite the temporal variations in organic load. The addition of FeCl₃ was effective in controlling H₂S and resulted in significant changes in the microbial community structure, particularly among the methanogens. A temporary interruption in feeding or temperature control led to immediate performance deterioration, but performance recovered rapidly when normal operations were resumed. The overall results suggest that the AD process coupled with TH pretreatment can provide an efficient, robust, and resilient system for managing high-suspended-solid wastes, supporting the feasibility of its full-scale implementation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Performance estimation for a highly loaded eight-blade propeller combined with an advanced technology turboshaft engine

    NASA Technical Reports Server (NTRS)

    Morris, S. J., Jr.

    1979-01-01

    Performance estimation, weights, and scaling laws for an eight-blade, highly loaded propeller combined with an advanced turboshaft engine are presented. The data are useful for planned aircraft mission studies using the turboprop propulsion system. Comparisons are made between the performance of the 1990+ technology turboprop propulsion system and the performance of both a current technology turbofan and a 1990+ technology turbofan.

  9. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community, not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. On the path towards Exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
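
    As a rough sketch of the provisioning layer described above (assuming only a stock libvirt/QEMU install; the domain XML, VM name, image path, and bridge name below are hypothetical and not taken from the paper):

        # provision_vm.py - define and boot one transient KVM guest via libvirt.
        import libvirt

        DOMAIN_XML = """
        <domain type='kvm'>
          <name>vc-node-0</name>                          <!-- hypothetical name -->
          <memory unit='GiB'>4</memory>
          <vcpu>4</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <driver name='qemu' type='qcow2'/>
              <source file='/scratch/images/vc-node-0.qcow2'/>  <!-- hypothetical path -->
              <target dev='vda' bus='virtio'/>
            </disk>
            <interface type='bridge'>
              <source bridge='br0'/>  <!-- e.g., an Ethernet-over-Aries bridge -->
            </interface>
          </devices>
        </domain>
        """

        conn = libvirt.open("qemu:///system")  # connect to the local QEMU/KVM driver
        dom = conn.createXML(DOMAIN_XML, 0)    # transient domain: defined and started
        print(dom.name(), "active:", dom.isActive() == 1)
        conn.close()

    Repeating this across compute nodes, with the guests attached to a common bridged network, is the basic pattern by which a virtual cluster of the kind benchmarked above could be assembled.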

  10. Spatially-Resolved Characterization Techniques to Investigate Impact Damage in Ultra-High Performance Concretes

    DTIC Science & Technology

    2013-04-01

    Report by Robert D. Moser, Paul G. Allison, and Mei Q. Chandler, Geotechnical and Structures Laboratory, April 2013. The report describes spatially-resolved characterization techniques used to investigate damage in Ordinary Portland Cement concrete (OPC) and Ultra-High Performance Concretes (UHPCs) under high-strain impact and penetration loads at lower length scales.

  11. Reading Fluency as a Predictor of Reading Proficiency in Low-Performing, High-Poverty Schools

    ERIC Educational Resources Information Center

    Baker, Scott K.; Smolkowski, Keith; Katz, Rachell; Fien, Hank; Seeley, John R.; Kame'enui, Edward J.; Beck, Carrie Thomas

    2008-01-01

    The purpose of this study was to examine oral reading fluency (ORF) in the context of a large-scale federal reading initiative conducted in low-performing, high-poverty schools. The objectives were to (a) investigate the relation between ORF and comprehensive reading tests, (b) examine whether slope of performance over time on ORF predicted…

  12. Bioreactor Scalability: Laboratory-Scale Bioreactor Design Influences Performance, Ecology, and Community Physiology in Expanded Granular Sludge Bed Bioreactors

    PubMed Central

    Connelly, Stephanie; Shin, Seung G.; Dillon, Robert J.; Ijaz, Umer Z.; Quince, Christopher; Sloan, William T.; Collins, Gavin

    2017-01-01

    Studies investigating the feasibility of new, or improved, biotechnologies, such as wastewater treatment digesters, inevitably start with laboratory-scale trials. However, it is rarely determined whether laboratory-scale results reflect full-scale performance or microbial ecology. The Expanded Granular Sludge Bed (EGSB) bioreactor, which is a high-rate anaerobic digester configuration, was used as a model to address that knowledge gap in this study. Two laboratory-scale idealizations of the EGSB—a one-dimensional and a three-dimensional scale-down of a full-scale design—were built and operated in triplicate under near-identical conditions to a full-scale EGSB. The laboratory-scale bioreactors were seeded using biomass obtained from the full-scale bioreactor, and spent water from the distillation of whisky from maize was applied as substrate at both scales. Over 70 days, bioreactor performance, microbial ecology, and microbial community physiology were monitored at various depths in the sludge-beds using 16S rRNA gene sequencing (V4 region), specific methanogenic activity (SMA) assays, and a range of physical and chemical monitoring methods. SMA assays indicated dominance of the hydrogenotrophic pathway at full-scale whilst a more balanced activity profile developed during the laboratory-scale trials. At each scale, Methanobacterium was the dominant methanogenic genus present. Bioreactor performance overall was better at laboratory-scale than full-scale. We observed that bioreactor design at laboratory-scale significantly influenced spatial distribution of microbial community physiology and taxonomy in the bioreactor sludge-bed, with 1-D bioreactor types promoting stratification of each. In the 1-D laboratory bioreactors, increased abundance of Firmicutes was associated with both granule position in the sludge bed and increased activity against acetate and ethanol as substrates. We further observed that stratification in the sludge-bed in 1-D laboratory-scale bioreactors was associated with increased richness in the underlying microbial community at species (OTU) level and improved overall performance. PMID:28507535

  13. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    PubMed

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
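
    To make "directed communication among compute nodes" concrete, here is a toy sketch (my illustration, not NEST kernel code) in which each MPI rank routes spike records only toward the ranks hosting their targets; the spike ids and the neuron-to-rank routing rule are made up:

        # directed_spikes.py - run with: mpiexec -n 4 python directed_spikes.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        spikes = [rank * 100 + i for i in range(3)]   # fake spike ids on this rank
        target_rank = lambda gid: gid % size          # fake neuron-to-rank routing

        # Per-destination buffers; in a truly sparse network most stay nearly empty.
        outgoing = [[] for _ in range(size)]
        for gid in spikes:
            outgoing[target_rank(gid)].append(gid)

        # Personalized exchange: each rank receives only spikes that target it.
        incoming = comm.alltoall(outgoing)
        print(f"rank {rank} received {[g for buf in incoming for g in buf]}")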

  14. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    PubMed Central

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  15. Computational study of 3-D hot-spot initiation in shocked insensitive high-explosive

    NASA Astrophysics Data System (ADS)

    Najjar, F. M.; Howard, W. M.; Fried, L. E.; Manaa, M. R.; Nichols, A., III; Levesque, G.

    2012-03-01

    High-explosive (HE) material consists of large grains with micron-sized embedded impurities and pores. Under various mechanical/thermal insults, these pores collapse, generating high-temperature regions that lead to ignition. A hydrodynamic study has been performed to investigate the mechanisms of pore collapse and hot-spot initiation in TATB crystals, employing the multiphysics code ALE3D coupled to the chemistry module Cheetah. This computational study includes reactive dynamics. Two-dimensional, high-resolution, large-scale meso-scale simulations have been performed. The parameter space is systematically studied by considering various shock strengths, pore diameters, and multiple pore configurations. Preliminary 3-D simulations are undertaken to quantify the 3-D dynamics.

  16. Atomic-Scale Origin of Long-Term Stability and High Performance of p-GaN Nanowire Arrays for Photocatalytic Overall Pure Water Splitting.

    PubMed

    Kibria, Md Golam; Qiao, Ruimin; Yang, Wanli; Boukahil, Idris; Kong, Xianghua; Chowdhury, Faqrul Alam; Trudeau, Michel L; Ji, Wei; Guo, Hong; Himpsel, F J; Vayssieres, Lionel; Mi, Zetian

    2016-10-01

    The atomic-scale origin of the unusually high performance and long-term stability of wurtzite p-GaN oriented nanowire arrays is revealed. Nitrogen termination of both the polar (0001̄) top face and the nonpolar (101̄0) side faces of the nanowires is essential for long-term stability and high efficiency. Such a distinct atomic configuration ensures not only stability against (photo)oxidation in air and in water/electrolyte but, as importantly, also provides the necessary overall reverse crystal polarization needed for efficient hole extraction in p-GaN. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Cross-sectional fluctuation scaling in the high-frequency illiquidity of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Cai, Qing; Gao, Xing-Lu; Zhou, Wei-Xing; Stanley, H. Eugene

    2018-03-01

    Taylor's law of temporal and ensemble fluctuation scaling has been ubiquitously observed in diverse complex systems including financial markets. Stock illiquidity is an important nonadditive financial quantity, which is found to comply with Taylor's temporal fluctuation scaling law. In this paper, we perform the cross-sectional analysis of the 1 min high-frequency illiquidity time series of Chinese stocks and unveil the presence of Taylor's law of ensemble fluctuation scaling. The estimated daily Taylor scaling exponent fluctuates around 1.442. We find that Taylor's scaling exponents of stock illiquidity do not relate to the ensemble mean and ensemble variety of returns. Our analysis uncovers a new scaling law of financial markets and might stimulate further investigations for a better understanding of financial markets' dynamics.
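
    Taylor's law of ensemble fluctuation scaling states that the cross-sectional variance grows as a power of the cross-sectional mean, Var ≈ a · Mean^α, so the exponent reported above (fluctuating around 1.442) is the slope of a log-log regression. A minimal sketch on synthetic data (numpy assumed; the data-generating rule is invented for illustration, not the paper's estimator):

        # taylor_scaling.py - recover a Taylor exponent by log-log regression.
        import numpy as np

        rng = np.random.default_rng(42)
        alpha_true = 1.44
        means = rng.uniform(0.5, 50.0, 300)   # hypothetical per-stock mean illiquidity

        # Draw per-stock samples whose variance follows mean**alpha (synthetic).
        samples = [rng.normal(m, np.sqrt(m ** alpha_true), 240) for m in means]
        m_hat = np.array([s.mean() for s in samples])
        v_hat = np.array([s.var(ddof=1) for s in samples])

        # The Taylor exponent is the slope of log(variance) against log(mean).
        alpha_hat = np.polyfit(np.log(m_hat), np.log(v_hat), 1)[0]
        print(f"estimated alpha = {alpha_hat:.3f} (true value {alpha_true})")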

  18. Design of an omnidirectional single-point photodetector for large-scale spatial coordinate measurement

    NASA Astrophysics Data System (ADS)

    Xie, Hongbo; Mao, Chensheng; Ren, Yongjie; Zhu, Jigui; Wang, Chao; Yang, Lei

    2017-10-01

    In high precision and large-scale coordinate measurement, one commonly used approach to determining the coordinates of a target point utilizes the spatial trigonometric relationships between multiple laser transmitter stations and the target point. A light receiving device at the target point is the key element in large-scale coordinate measurement systems. To ensure high-resolution and highly sensitive spatial coordinate measurement, a high-performance and miniaturized omnidirectional single-point photodetector (OSPD) is greatly desired. We report a design of an OSPD using an aspheric lens, which achieves an enhanced reception angle of -5 deg to 45 deg in the vertical and 360 deg in the horizontal. As the heart of our OSPD, the aspheric lens is designed in a geometric model and optimized by LightTools Software, which enables the reflection of a wide-angle incident light beam into the single-point photodiode. The performance of the home-made OSPD is characterized at working distances from 1 to 13 m and further analyzed using the developed geometric model. The experimental and analytic results verify that our device is highly suitable for large-scale coordinate metrology. The developed device also holds great potential in various applications such as omnidirectional vision sensors, indoor global positioning systems, and optical wireless communication systems.

  19. Diagnosing isopycnal diffusivity in an eddying, idealized midlatitude ocean basin via Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT)

    DOE PAGES

    Wolfram, Phillip J.; Ringler, Todd D.; Maltrud, Mathew E.; ...

    2015-08-01

    Isopycnal diffusivity due to stirring by mesoscale eddies in an idealized, wind-forced, eddying, midlatitude ocean basin is computed using Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT). Simulation is performed via LIGHT within the Model for Prediction across Scales Ocean (MPAS-O). Simulations are performed at 4-, 8-, 16-, and 32-km resolution, where the first Rossby radius of deformation (RRD) is approximately 30 km. Scalar and tensor diffusivities are estimated at each resolution based on 30 ensemble members using particle cluster statistics. Each ensemble member is composed of 303,665 particles distributed across five potential density surfaces. Diffusivity dependence upon model resolution, velocity spatial scale, and buoyancy surface is quantified and compared with mixing length theory. The spatial structure of diffusivity ranges over approximately two orders of magnitude, with values of O(10⁵) m² s⁻¹ in the region of western boundary current separation to O(10³) m² s⁻¹ in the eastern region of the basin. Dominant mixing occurs at scales twice the size of the first RRD. Model resolution at scales finer than the RRD is necessary to obtain sufficient model fidelity at scales between one and four RRD to accurately represent mixing. Mixing length scaling with eddy kinetic energy and the Lagrangian time scale yield mixing efficiencies that typically range between 0.4 and 0.8. In conclusion, a reduced mixing length in the eastern region of the domain relative to the west suggests there are different mixing regimes outside the baroclinic jet region.
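
    As a schematic of the particle-cluster idea (an illustration under simplified 1-D random-walk assumptions, not the LIGHT implementation), eddy diffusivity can be estimated from the growth rate of a cluster's position variance, κ ≈ (1/2) d Var(x)/dt:

        # cluster_diffusivity.py - estimate kappa from synthetic particle tracks.
        import numpy as np

        rng = np.random.default_rng(0)
        n_particles, n_steps, dt = 2000, 500, 3600.0   # hypothetical: 1-h steps
        kappa_true = 1.0e3                              # m^2/s, target of the estimate

        # Random-walk positions whose variance grows as 2*kappa*t in 1-D.
        steps = rng.normal(0.0, np.sqrt(2 * kappa_true * dt), (n_steps, n_particles))
        x = np.cumsum(steps, axis=0)

        # Fit Var(x) against t; the slope divided by 2 recovers the diffusivity.
        t = dt * np.arange(1, n_steps + 1)
        slope = np.polyfit(t, x.var(axis=1), 1)[0]
        print(f"estimated kappa = {slope / 2:.1f} m^2/s (true {kappa_true:.1f})")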

  20. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. Performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
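
    For readers unfamiliar with the bucketing idea underlying barrel-sort, a serial sketch follows (my simplification; the published algorithm distributes the buckets, or "barrels", across processors rather than holding them in one array):

        # bucket_sort.py - serial sketch of the bucketing idea behind barrel-sort.
        def bucket_sort(keys, n_buckets, key_max):
            """Sort non-negative integers by scattering into range buckets."""
            buckets = [[] for _ in range(n_buckets)]
            width = key_max // n_buckets + 1
            for k in keys:                     # scatter: in barrel-sort this is the
                buckets[k // width].append(k)  # message-passing phase between nodes
            out = []
            for b in buckets:                  # each bucket is sorted independently,
                out.extend(sorted(b))          # which parallelizes naturally
            return out

        print(bucket_sort([42, 7, 99, 7, 63, 0, 15], n_buckets=4, key_max=100))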

  1. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator.

    PubMed

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane; Crochet, Patrice

    2018-01-01

    Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate its feasibility of use and validity in a virtual reality setting. The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for the assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators with different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates, and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. A total of 76 discrete steps were identified by the hierarchical task analysis. Fourteen experts completed the two rounds of the Delphi questionnaire. Sixty-four steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate, and experienced groups in total H-OSATS scores (133, 155.9, and 178.25, respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p < 0.001) and test-retest reliability (ICC = 0.877; p < 0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p < 0.001). The H-OSATS scale displayed evidence of validity for the assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room.

  2. Chance performance and floor effects: threats to the validity of the Wechsler Memory Scale--fourth edition designs subtest.

    PubMed

    Martin, Phillip K; Schroeder, Ryan W

    2014-06-01

    The Designs subtest allows for the accumulation of raw score points by chance alone, creating the potential for artificially inflated performances, especially in older patients. A random number generator was used to simulate the random selection and placement of cards by 100 test-naive participants, resulting in a mean raw score of 36.26 (SD = 3.86). This resulted in relatively high scaled scores in the 45-54, 55-64, and 65-69 age groups on Designs II. In the latter age group in particular, the mean simulated performance resulted in a scaled score of 7, with scores 1 SD below and above the performance mean translating to scaled scores of 5 and 8, respectively. The findings indicate that clinicians should use caution when interpreting Designs II performance in these age groups, as our simulations demonstrated that low average to average range scores occur frequently when patients are relying solely on chance performance. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
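
    The chance-performance simulation follows a generic Monte Carlo pattern, sketched below; the item counts, hit probability, and scoring rule are hypothetical stand-ins, since the actual Designs scoring rubric is not reproduced in the abstract:

        # chance_designs.py - Monte Carlo estimate of raw scores earned by guessing.
        import random
        import statistics

        def random_examinee_score(n_items=6, cards_per_item=8, pts_per_hit=2):
            """Hypothetical rule: a randomly placed card scores with probability 0.4."""
            score = 0
            for _ in range(n_items):
                for _ in range(cards_per_item):
                    if random.random() < 0.4:   # assumed chance-hit probability
                        score += pts_per_hit
            return score

        random.seed(1)
        scores = [random_examinee_score() for _ in range(100)]  # 100 simulated takers
        print(f"mean = {statistics.mean(scores):.2f}, sd = {statistics.stdev(scores):.2f}")

    Comparing the resulting chance-only mean and SD against the test's age-normed score tables is exactly the kind of check the study performs.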

  3. Performance estimation for highly loaded six and ten blade propellers combined with an advanced technology turboshaft engine

    NASA Technical Reports Server (NTRS)

    Morris, S. J., Jr.

    1980-01-01

    Performance estimations, weights, and scaling laws for six-blade and ten-blade highly loaded propellers combined with an advanced turboshaft engine are presented. These data are useful for aircraft mission studies using the turboprop system. Comparisons are made between the performance of post-1980 technology turboprop propulsion systems and the performance of both a current technology turbofan and a post-1990 technology turbofan.

  4. Fabrication and performance analysis of 4-sq cm indium tin oxide/InP photovoltaic solar cells

    NASA Technical Reports Server (NTRS)

    Gessert, T. A.; Li, X.; Phelps, P. W.; Coutts, T. J.; Tzafaras, N.

    1991-01-01

    Large-area photovoltaic solar cells based on direct-current magnetron sputter deposition of indium tin oxide (ITO) onto single-crystal p-InP substrates demonstrated both the radiation hardness and high performance necessary for extraterrestrial applications. A small-scale production project was initiated in which approximately 50 ITO/InP cells are being produced. The procedures used in this small-scale production of 4-sq cm ITO/InP cells are presented and discussed. The discussion includes analyses of the performance range of all available production cells and device performance data for the best cells produced thus far. Additionally, processing experience gained from the production of these cells is discussed, indicating other issues that may be encountered when large-scale production begins.

  5. Biotic homogenization can decrease landscape-scale forest multifunctionality.

    PubMed

    van der Plas, Fons; Manning, Pete; Soliveres, Santiago; Allan, Eric; Scherer-Lorenzen, Michael; Verheyen, Kris; Wirth, Christian; Zavala, Miguel A; Ampoorter, Evy; Baeten, Lander; Barbaro, Luc; Bauhus, Jürgen; Benavides, Raquel; Benneter, Adam; Bonal, Damien; Bouriaud, Olivier; Bruelheide, Helge; Bussotti, Filippo; Carnol, Monique; Castagneyrol, Bastien; Charbonnier, Yohan; Coomes, David Anthony; Coppi, Andrea; Bastias, Cristina C; Dawud, Seid Muhie; De Wandeler, Hans; Domisch, Timo; Finér, Leena; Gessler, Arthur; Granier, André; Grossiord, Charlotte; Guyot, Virginie; Hättenschwiler, Stephan; Jactel, Hervé; Jaroszewicz, Bogdan; Joly, François-Xavier; Jucker, Tommaso; Koricheva, Julia; Milligan, Harriet; Mueller, Sandra; Muys, Bart; Nguyen, Diem; Pollastrini, Martina; Ratcliffe, Sophia; Raulund-Rasmussen, Karsten; Selvi, Federico; Stenlid, Jan; Valladares, Fernando; Vesterdal, Lars; Zielínski, Dawid; Fischer, Markus

    2016-03-29

    Many experiments have shown that local biodiversity loss impairs the ability of ecosystems to maintain multiple ecosystem functions at high levels (multifunctionality). In contrast, the role of biodiversity in driving ecosystem multifunctionality at landscape scales remains unresolved. We used a comprehensive pan-European dataset, including 16 ecosystem functions measured in 209 forest plots across six European countries, and performed simulations to investigate how local plot-scale richness of tree species (α-diversity) and their turnover between plots (β-diversity) are related to landscape-scale multifunctionality. After accounting for variation in environmental conditions, we found that relationships between α-diversity and landscape-scale multifunctionality varied from positive to negative depending on the multifunctionality metric used. In contrast, when significant, relationships between β-diversity and landscape-scale multifunctionality were always positive, because a high spatial turnover in species composition was closely related to a high spatial turnover in functions that were supported at high levels. Our findings have major implications for forest management and indicate that biotic homogenization can have previously unrecognized and negative consequences for large-scale ecosystem multifunctionality.

  6. Biotic homogenization can decrease landscape-scale forest multifunctionality

    PubMed Central

    van der Plas, Fons; Manning, Pete; Soliveres, Santiago; Allan, Eric; Scherer-Lorenzen, Michael; Verheyen, Kris; Wirth, Christian; Zavala, Miguel A.; Ampoorter, Evy; Baeten, Lander; Barbaro, Luc; Bauhus, Jürgen; Benavides, Raquel; Benneter, Adam; Bonal, Damien; Bouriaud, Olivier; Bruelheide, Helge; Bussotti, Filippo; Carnol, Monique; Castagneyrol, Bastien; Charbonnier, Yohan; Coppi, Andrea; Bastias, Cristina C.; Dawud, Seid Muhie; De Wandeler, Hans; Domisch, Timo; Finér, Leena; Granier, André; Grossiord, Charlotte; Guyot, Virginie; Hättenschwiler, Stephan; Jactel, Hervé; Jaroszewicz, Bogdan; Joly, François-xavier; Jucker, Tommaso; Koricheva, Julia; Milligan, Harriet; Mueller, Sandra; Muys, Bart; Nguyen, Diem; Pollastrini, Martina; Ratcliffe, Sophia; Raulund-Rasmussen, Karsten; Selvi, Federico; Stenlid, Jan; Valladares, Fernando; Vesterdal, Lars; Zielínski, Dawid; Fischer, Markus

    2016-01-01

    Many experiments have shown that local biodiversity loss impairs the ability of ecosystems to maintain multiple ecosystem functions at high levels (multifunctionality). In contrast, the role of biodiversity in driving ecosystem multifunctionality at landscape scales remains unresolved. We used a comprehensive pan-European dataset, including 16 ecosystem functions measured in 209 forest plots across six European countries, and performed simulations to investigate how local plot-scale richness of tree species (α-diversity) and their turnover between plots (β-diversity) are related to landscape-scale multifunctionality. After accounting for variation in environmental conditions, we found that relationships between α-diversity and landscape-scale multifunctionality varied from positive to negative depending on the multifunctionality metric used. In contrast, when significant, relationships between β-diversity and landscape-scale multifunctionality were always positive, because a high spatial turnover in species composition was closely related to a high spatial turnover in functions that were supported at high levels. Our findings have major implications for forest management and indicate that biotic homogenization can have previously unrecognized and negative consequences for large-scale ecosystem multifunctionality. PMID:26979952

  7. Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight

    NASA Technical Reports Server (NTRS)

    Narducci, Robert; Orr, Stanley; Kreeger, Richard E.

    2012-01-01

    An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.

  8. Scale model performance test investigation of mixed flow exhaust systems for an energy efficient engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1983-01-01

    As part of the NASA Energy Efficient Engine program, scale-model performance tests of a mixed flow exhaust system were conducted. The tests were used to evaluate the performance of exhaust system mixers for high-bypass, mixed-flow turbofan engines. The tests indicated that: (1) mixer penetration has the most significant effect on both mixing effectiveness and mixer pressure loss; (2) mixing/tailpipe length improves mixing effectiveness; (3) gap reduction between the mixer and centerbody increases mixing effectiveness; (4) mixer cross-sectional shape influences mixing effectiveness; (5) lobe number affects the degree of mixing; and (6) mixer aerodynamic pressure losses are a function of secondary flows inherent to the lobed mixer concept.

  9. Expanding the Scope of High-Performance Computing Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uram, Thomas D.; Papka, Michael E.

    The high-performance computing centers of the future will expand their roles as service providers, and as the machines scale up, so should the sizes of the communities they serve. National facilities must cultivate their users as much as they focus on operating machines reliably. The authors present five interrelated topic areas that are essential to expanding the value provided to those performing computational science.

  10. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    NASA Astrophysics Data System (ADS)

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  11. Effect of color visualization and display hardware on the visual assessment of pseudocolor medical images

    PubMed Central

    Zabala-Travers, Silvina; Choi, Mina; Cheng, Wei-Chung

    2015-01-01

    Purpose: Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions with a negative impact on patient treatment and prognosis. The purpose of this study is to determine if the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images. Methods: Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing experiments. Synthetic images resembling brain dynamic contrast-enhanced MRI, consisting of scaled mixtures of white, lumpy, and clustered backgrounds, were used to assess the performance of a rainbow ("jet"), a heated black-body ("hot"), and a gray ("gray") color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative, forced-choice design where readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern flipped along the vertical axis with a small difference in intensity. Readers were asked to select the image with the highest intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone. Results: The estimates of percent correct show that jet outperformed hot and gray in the high and low range of the color scales for all devices with a maximum difference in performance of 18% (confidence intervals: 6%, 30%). Performance with hot was different for high and low intensity, comparable to jet for the high range, and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better for handheld devices. Performance time was shorter with jet. Conclusions: Our findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images. PMID:26127048

  12. The Fundamentals of Laparoscopic Surgery and LapVR evaluation metrics may not correlate with operative performance in a novice cohort

    PubMed Central

    Steigerwald, Sarah N.; Park, Jason; Hardy, Krista M.; Gillman, Lawrence; Vergis, Ashley S.

    2015-01-01

    Background: Considerable resources have been invested in both low- and high-fidelity simulators in surgical training. The purpose of this study was to investigate whether the Fundamentals of Laparoscopic Surgery (FLS, low-fidelity box trainer) and LapVR (high-fidelity virtual reality) training systems correlate with operative performance on the Global Operative Assessment of Laparoscopic Skills (GOALS) global rating scale, using a porcine cholecystectomy model in a novice surgical group with minimal laparoscopic experience. Methods: Fourteen postgraduate year 1 surgical residents with minimal laparoscopic experience performed tasks from the FLS program and the LapVR simulator as well as a live porcine laparoscopic cholecystectomy. Performance was evaluated using standardized FLS metrics, automatic computer evaluations, and a validated global rating scale. Results: Overall FLS score did not show an association with GOALS global rating scale score on the porcine cholecystectomy. None of the five LapVR task scores were significantly associated with GOALS score on the porcine cholecystectomy. Conclusions: Neither the low-fidelity box trainer nor the high-fidelity virtual simulator demonstrated significant correlation with GOALS operative scores. These findings offer caution against the use of these modalities for brief assessments of novice surgical trainees, especially for predictive or selection purposes. PMID:26641071

  13. High-performance parallel processors based on star-coupled wavelength division multiplexing optical interconnects

    DOEpatents

    Deri, Robert J.; DeGroot, Anthony J.; Haigh, Ronald E.

    2002-01-01

    As the performance of individual elements within parallel processing systems increases, increased communication capability between distributed processor and memory elements is required. There is great interest in using fiber optics to improve interconnect communication beyond that attainable using electronic technology. Several groups have considered WDM, star-coupled optical interconnects. The invention uses a fiber optic transceiver to provide low-latency, high-bandwidth channels for such interconnects using a robust multimode fiber technology. Instruction-level simulation is used to quantify the bandwidth, latency, and concurrency required for such interconnects to scale to 256 nodes, each operating at 1 GFLOPS performance. Performance has been shown to scale to ≈100 GFLOPS for scientific application kernels using a small number of wavelengths (8 to 32), only one wavelength received per node, and achievable optoelectronic bandwidth and latency.

  14. High Temperature Perforating System for Geothermal Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smart, Moises E.

    The objective of this project is to develop a perforating system, consisting of all the explosive components and hardware, capable of reliable performance in high-temperature geothermal wells (>200 °C). In this light, we focus on the engineering development of these components, characterization of the explosive raw powder, and development of the internal infrastructure to scale production of the explosive from laboratory to industrial scale.

  15. Ionic liquid-mediated synthesis of meso-scale porous lanthanum-transition-metal perovskites with high CO oxidation performance

    DOE PAGES

    Lu, Hanfeng; Zhang, Pengfei; Qiao, Zhen-An; ...

    2015-02-19

    Lanthanum-transition-metal perovskites with robust meso-scale porous frameworks (meso-LaMO₃) are synthesized through the use of ionic liquids. The resultant samples demonstrate rather high activity for CO oxidation by taking advantage of unique nanostructure-derived benefits. This synthesis strategy opens up a new opportunity for preparing functional mesoporous complex oxides of various compositions.

  16. Ionic liquid-mediated synthesis of meso-scale porous lanthanum-transition-metal perovskites with high CO oxidation performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Hanfeng; Zhang, Pengfei; Qiao, Zhen-An

    Lanthanum-transition-metal perovskites with robust meso-scale porous frameworks (meso-LaMO₃) are synthesized through the use of ionic liquids. The resultant samples demonstrate rather high activity for CO oxidation by taking advantage of unique nanostructure-derived benefits. This synthesis strategy opens up a new opportunity for preparing functional mesoporous complex oxides of various compositions.

  17. A Large-Scale Inquiry-Based Astronomy Intervention Project: Impact on Students' Content Knowledge Performance and Views of Their High School Science Classroom

    ERIC Educational Resources Information Center

    Fitzgerald, Michael; McKinnon, David H.; Danaia, Lena; Deehan, James

    2016-01-01

    In this paper, we present the results from a study of the impact on students involved in a large-scale inquiry-based astronomical high school education intervention in Australia. Students in this intervention were led through an educational design allowing them to undertake an investigative approach to understanding the lifecycle of stars more…

  18. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to mark up and annotate images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, and propose and evaluate a solution using hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database-based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems.
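
    One way to picture the database-backed versioning scheme for audit trails (a minimal sketch with a hypothetical schema, not the authors' system) is an append-only revision table keyed by image id and version number:

        # image_versions.py - toy append-only versioning store for image markups.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE image_version (
            image_id TEXT, version INTEGER, author TEXT,
            markup TEXT, created TEXT DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (image_id, version))""")

        def save_revision(image_id, author, markup):
            """Append a new revision; old versions are never overwritten."""
            cur = conn.execute("SELECT COALESCE(MAX(version), 0) FROM image_version "
                               "WHERE image_id = ?", (image_id,))
            next_ver = cur.fetchone()[0] + 1
            conn.execute("INSERT INTO image_version (image_id, version, author, markup) "
                         "VALUES (?, ?, ?, ?)", (image_id, next_ver, author, markup))
            return next_ver

        save_revision("scan-001", "reader_a", "lesion at (120, 88)")
        save_revision("scan-001", "reader_b", "lesion at (122, 90), margin unclear")
        for row in conn.execute("SELECT * FROM image_version ORDER BY version"):
            print(row)

    Because revisions are only ever appended, every annotation state remains reconstructible, which is the property an audit trail requires.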

  19. Virtual Reality Exposure Training for Musicians: Its Effect on Performance Anxiety and Quality.

    PubMed

    Bissonnette, Josiane; Dubé, Francis; Provencher, Martin D; Moreno Sala, Maria T

    2015-09-01

    Music performance anxiety affects numerous musicians, with many of them reporting impairment of performance due to this problem. This exploratory study investigated the effects of virtual reality exposure training on students with music performance anxiety. Seventeen music students were randomly assigned to a control group (n=8) or a virtual training group (n=9). Participants were asked to play a musical piece from memory in two separate recitals within a 3-week interval. Anxiety was then measured with the Personal Report of Confidence as a Performer Scale and the S-Anxiety scale from the State-Trait Anxiety Inventory (STAI-Y). Between pre- and post-tests, the virtual training group took part in virtual reality exposure training consisting of six 1-hour sessions of virtual exposure. The results indicate a significant decrease in performance anxiety for musicians in the treatment group with a high level of state anxiety, those with a high level of trait anxiety, women, and musicians with high immersive tendencies. Finally, between the pre- and post-tests, we observed a significant increase in performance quality for the experimental group, but not for the control group.

  20. High-Performance Computing Unlocks Innovation at NREL - Video Text Version

    Science.gov Websites

    Data visualizations and large-scale modeling at NREL provide insights and test new ideas. NREL and Hewlett-Packard won an R&D 100 award for the most energy-efficient data center in the world.

  1. Investigation of varying gray scale levels for remote manipulation

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Stuart, Mark A.; Sampaio, Carlos E.

    1991-01-01

    A study was conducted to investigate the effects of varying monitor gray scale levels and workplace illumination levels on operators' ability to discriminate between different colors on a monochrome monitor. It was determined that 8-gray-scale viewing resulted in significantly worse discrimination performance than 16- and 32-gray-scale viewing, and that there was only a negligible difference between 16 and 32 shades of gray. Therefore, it is recommended that monitors used for remote manipulation tasks provide 16 or more shades of gray, since this evaluation found lower levels to be unacceptable for color discrimination tasks. There was no significant performance difference found between high and low workplace illumination conditions. Further analysis was conducted to determine which specific combinations of colors can be used in conjunction with each other to ensure error-free color coding/brightness discrimination performance while viewing a monochrome monitor. It was found that 92 three-color combinations and 9 four-color combinations could be used with 100 percent accuracy. The results can help determine which gray scale levels should be provided on monochrome monitors, as well as which colors to use to ensure maximal performance of remotely viewed color discrimination/coding tasks.

  2. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
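
    Schematically (in my notation, consistent with that literature but not copied from the paper), the non-negative methodology replaces the unconstrained Galerkin solve Kc = f with a bound-constrained quadratic program over the nodal concentrations c:

        \min_{c \,\in\, \mathbb{R}^{n}} \; \tfrac{1}{2}\, c^{\mathsf{T}} K c \;-\; c^{\mathsf{T}} f
        \quad \text{subject to} \quad c \geq 0

    where K is the stiffness matrix assembled from the anisotropic diffusion operator and f is the load vector; TAO's bound-constrained optimization solvers then enforce the non-negativity that the plain Galerkin solution can violate.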

  3. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  4. Demonstration-scale evaluation of a novel high-solids anaerobic digestion process for converting organic wastes to fuel gas and compost.

    PubMed

    Rivard, C J; Duff, B W; Dickow, J H; Wiles, C C; Nagle, N J; Gaddy, J L; Clausen, E C

    1998-01-01

    Early evaluations of the bioconversion potential for combined wastes such as tuna sludge and sorted municipal solid waste (MSW) were conducted at laboratory scale and compared conventional low-solids, stirred-tank anaerobic systems with the novel high-solids anaerobic digester (HSAD) design. Enhanced feedstock conversion rates and yields were determined for the HSAD system. In addition, the HSAD system demonstrated superior resiliency to process failure. Utilizing relatively dry feedstocks, the HSAD system is approximately one-tenth the size of conventional low-solids systems. In addition, the HSAD system is capable of organic loading rates (OLRs) on the order of 20-25 g volatile solids per liter digester volume per day (g VS/L/d), roughly 4-5 times those of conventional systems. Current efforts involve developing a demonstration-scale (pilot-scale) HSAD system. A two-ton/d plant has been constructed in Stanton, CA, and is currently in the commissioning/startup phase. The purposes of the project are to verify laboratory- and intermediate-scale process performance; test the performance of large-scale prototype mechanical systems; demonstrate the long-term reliability of the process; and generate the process and economic data required for the design, financing, and construction of full-scale commercial systems. This study presents confirmatory fermentation data obtained at intermediate scale and a snapshot of the pilot-scale project.

  5. Italian version of the organic brain syndrome and the depression scales from the CARE: evaluation of their performance in geriatric institutions.

    PubMed

    Spagnoli, A; Foresti, G; MacDonald, A; Williams, P

    1987-05-01

    The Organic Brain Syndrome (OBS) and the Depression (D) scales derived from the Comprehensive Assessment and Referral Evaluation (CARE) were translated into Italian and used in a survey of geriatric institutions in Milan. During the survey validity and reliability tests of the scales were conducted. Inter-rater reliability (total score weighted kappa) was highly satisfactory for both scales (0.96 for OBS and 0.83 for D scale). Reliability was assessed three times during the survey and showed good stability for both scales, with a slight but significant trend towards reduction over time for the D scale. Reliability of the D scale was significantly lower when the subjects interviewed scored highly on the OBS scale (severe cognitive impairment). Criterion validity was highly satisfactory both for the OBS scale (cut-off point 4/5: sensitivity 77%, specificity 96%, positive predictive value 91%) and the D scale (cut-off point 10/11: sensitivity 95%, specificity 92%, positive predictive value 84%). Results are discussed with special reference to longitudinal assessment of reliability, the choice of the cut-off point, and the context-dependent properties of questionnaires.
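
    For reference, the criterion-validity figures quoted above follow from the usual 2x2 confusion counts (TP, FP, TN, FN):

        \mathrm{sensitivity} = \frac{TP}{TP + FN}, \qquad
        \mathrm{specificity} = \frac{TN}{TN + FP}, \qquad
        \mathrm{PPV} = \frac{TP}{TP + FP}

    So the D scale's values at the 10/11 cut-off mean that 95% of truly depressed subjects scored above the cut-off, 92% of non-depressed subjects scored below it, and 84% of those above it were truly depressed.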

  6. Investigation of the role of flocculation conditions in recuperative thickening on dewatering performance and biogas production.

    PubMed

    Cobbledick, Jeffrey; Zhang, Victor; Rollings-Scattergood, Sasha; Latulippe, David R

    2017-11-01

    There is considerable interest in recuperative thickening (RT), the recycling of partially digested solids in an anaerobic digester outlet stream back into the incoming feed, as a 'high-performance' process to increase biogas production, increase system capacity, and improve biosolids stabilization. While polymer flocculation is commonly used in full-scale RT operations, no studies have investigated the effect of flocculation conditions on RT process performance. Our goal was to investigate the effect of polymer type and dosage conditions on dewatering performance and biogas production in a lab-scale RT system. The type of polymer flocculant significantly affected dewatering performance. For example, the 440 LH polymer (low molecular weight (MW) polyacrylamide) demonstrated lower capillary suction time (CST) and filtrate total suspended solids (TSS) values than the C-6267 polymer (high MW polyacrylamide). An examination of the dewatering performance of RT digesters with different polymers found a strong correlation between CST and filtrate TSS. The type of polymer flocculant had no significant effect on biogas productivity or composition; the methane content was greater than 60% in good agreement with typical results. The optimization of the polymer flocculation conditions is a critical task for which the lab-scale RT system used in this work is ideally suited.

  7. Improved uniformity in high-performance organic photovoltaics enabled by (3-aminopropyl)triethoxysilane cathode functionalization.

    PubMed

    Luck, Kyle A; Shastry, Tejas A; Loser, Stephen; Ogien, Gabriel; Marks, Tobin J; Hersam, Mark C

    2013-12-28

    Organic photovoltaics have the potential to serve as lightweight, low-cost, mechanically flexible solar cells. However, losses in efficiency as laboratory cells are scaled up to the module level have to date impeded large-scale deployment. Here, we report that a 3-aminopropyltriethoxysilane (APTES) cathode interfacial treatment significantly enhances performance reproducibility in inverted high-efficiency PTB7:PC71BM organic photovoltaic cells, as demonstrated by the fabrication of 100 APTES-treated devices versus 100 untreated controls. The APTES-treated devices achieve a power conversion efficiency of 8.08 ± 0.12% with histogram skewness of -0.291, whereas the untreated controls achieve 7.80 ± 0.26% with histogram skewness of -1.86. By substantially suppressing the interfacial origins of underperforming cells, the APTES treatment offers a pathway for fabricating large-area modules with high spatial performance uniformity.

  8. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.
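
    The wavelength-routing property the switch exploits can be sketched as the AWGR's cyclic port mapping (a common idealization of AWGR behavior; the modulo rule below is illustrative rather than taken from the paper):

        # awgr_routing.py - idealized cyclic routing of an N x N AWGR.
        N = 8  # ports, matching the 8 x 8 prototype

        def output_port(input_port, wavelength_index, n_ports=N):
            """In an ideal AWGR, input i on wavelength w exits port (i + w) mod N."""
            return (input_port + wavelength_index) % n_ports

        # A node reaches any destination by choosing its transmit wavelength, so
        # contention is resolved in the wavelength domain rather than in time.
        for w in range(N):
            print(f"input 2, wavelength {w} -> output {output_port(2, w)}")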

  9. Evolutionary conservation of the polyproline II conformation surrounding intrinsically disordered phosphorylation sites.

    PubMed

    Elam, W Austin; Schrank, Travis P; Campagnolo, Andrew J; Hilser, Vincent J

    2013-04-01

    Intrinsically disordered (ID) proteins function in the absence of a unique stable structure and appear to challenge the classic structure-function paradigm. The extent to which ID proteins take advantage of subtle conformational biases to perform functions, and whether signals for such mechanisms can be identified in proteome-wide studies, is not well understood. Of particular interest is the polyproline II (PII) conformation, suggested to be highly populated in unfolded proteins. We experimentally determine a complete calorimetric propensity scale for the PII conformation. Projection of the scale onto representative eukaryotic proteomes reveals significant PII bias in regions coding for ID proteins. Importantly, enrichment of PII in ID proteins, or protein segments, is also captured by other PII scales, indicating that this enrichment is robustly encoded and universally detectable regardless of the method of PII propensity determination. Gene ontology (GO) terms obtained using our PII scale and other scales demonstrate a consensus for molecular functions performed by high-PII proteins across the proteome. Perhaps the most striking result of the GO analysis is the conserved enrichment (P < 10⁻⁸) of phosphorylation sites in high-PII regions found by all PII scales. Subsequent conformational analysis reveals a phosphorylation-dependent modulation of PII, suggestive of a conserved "tunability" within these regions. In summary, the application of an experimentally determined PII propensity scale to proteome-wide sequence analysis and gene ontology reveals an enrichment of PII bias near disordered phosphorylation sites that is conserved throughout eukaryotes. Copyright © 2013 The Protein Society.
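
    A minimal sketch of how a per-residue propensity scale can be projected onto a sequence is shown below; the propensity values, default score, and window length are placeholders, not the calorimetric PII scale determined in the study.

        # Sliding-window projection of a per-residue PII propensity scale.
        # The numeric values here are hypothetical placeholders.
        PII_SCALE = {"P": 0.62, "A": 0.37, "Q": 0.28, "S": 0.24, "G": 0.13}

        def mean_pii(seq, window=7):
            """Mean PII propensity per window; high values flag PII-biased regions."""
            scores = [PII_SCALE.get(aa, 0.20) for aa in seq]  # default for other residues
            return [sum(scores[i:i + window]) / window
                    for i in range(len(scores) - window + 1)]

        print(mean_pii("MQPPASGQPSAPG"))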

  10. Quality of Voluntary Medical Male Circumcision Services during Scale-Up: A Comparative Process Evaluation in Kenya, South Africa, Tanzania and Zimbabwe

    PubMed Central

    Jennings, Larissa; Bertrand, Jane; Rech, Dino; Harvey, Steven A.; Hatzold, Karin; Samkange, Christopher A.; Omondi Aduda, Dickens S.; Fimbo, Bennett; Cherutich, Peter; Perry, Linnea; Castor, Delivette; Njeuhmeli, Emmanuel

    2014-01-01

    Background The rapid expansion of voluntary medical male circumcision (VMMC) has raised concerns about whether health systems can deliver and sustain VMMC according to minimum quality criteria. Methods and Findings A comparative process evaluation was used to examine data from SYMMACS, the Systematic Monitoring of the Voluntary Medical Male Circumcision Scale-Up, among health facilities providing VMMC across two years of program scale-up. Site-level assessments examined the availability of guidelines, supplies and equipment, infection control, and continuity of care services. Direct observation of VMMC surgeries was used to assess care quality. Two-sample tests of proportions and t-tests were used to examine differences in the percent of facilities meeting requisite preparedness standards and the mean number of directly observed surgical tasks performed correctly. Results showed that safe, high quality VMMC can be implemented and sustained at scale, although substantial variability was observed over time. In some settings, facility preparedness and VMMC service quality improved as the number of VMMC facilities increased. Yet, lapses in high performance and expansion of considerably deficient services were also observed. Surgical tasks had the highest quality scores, with lower performance levels in infection control, pre-operative examinations, and post-operative patient monitoring and counseling. The range of scale-up models used across countries additionally underscored the complexity of delivering high quality VMMC. Conclusions Greater efforts are needed to integrate VMMC scale-up and quality improvement processes in sub-Saharan African settings. Monitoring of service quality, not just adverse events reporting, will be essential in realizing the full health impact of VMMC for HIV prevention. PMID:24801073

  11. Toward high-energy-density, high-efficiency, and moderate-temperature chip-scale thermophotovoltaics

    PubMed Central

    Chan, Walker R.; Bermel, Peter; Pilawa-Podgurski, Robert C. N.; Marton, Christopher H.; Jensen, Klavs F.; Senkevich, Jay J.; Joannopoulos, John D.; Soljačić, Marin; Celanovic, Ivan

    2013-01-01

    The challenging problem of ultra-high-energy-density, high-efficiency, and small-scale portable power generation is addressed here using a distinctive thermophotovoltaic energy conversion mechanism and chip-based system design, which we name the microthermophotovoltaic (μTPV) generator. The approach is predicted to be capable of up to 32% efficient heat-to-electricity conversion within a millimeter-scale form factor. Although considerable technological barriers need to be overcome to reach full performance, we have performed a robust experimental demonstration that validates the theoretical framework and the key system components. Even with a much-simplified μTPV system design with theoretical efficiency prediction of 2.7%, we experimentally demonstrate 2.5% efficiency. The μTPV experimental system that was built and tested comprises a silicon propane microcombustor, an integrated high-temperature photonic crystal selective thermal emitter, four 0.55-eV GaInAsSb thermophotovoltaic diodes, and an ultra-high-efficiency maximum power-point tracking power electronics converter. The system was demonstrated to operate up to 800 °C (silicon microcombustor temperature) with an input thermal power of 13.7 W, generating 344 mW of electric power over a 1-cm2 area. PMID:23440220
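
    The quoted efficiency follows directly from the reported powers:

        \eta = \frac{P_{\text{elec}}}{P_{\text{thermal}}} = \frac{0.344\ \text{W}}{13.7\ \text{W}} \approx 2.5\%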

  12. Examining the Psychometric Properties of the Infant-Toddler Environment Rating Scale-Revised Edition in a High-Stakes Context

    ERIC Educational Resources Information Center

    Bisceglia, Rossana; Perlman, Michal; Schaack, Diana; Jenkins, Jennifer

    2009-01-01

    The psychometric properties of the Infant-Toddler Environment Rating Scale-Revised Edition (ITERS-R) were examined using 153 classrooms from child-care centers where resources were tied to center performance. An exploratory factor analysis revealed that the scale measures one global aspect of quality. To decrease redundancy, subsets of items were…

  13. Spherical roller bearing analysis. SKF computer program SPHERBEAN. Volume 3: Program correlation with full scale hardware tests

    NASA Technical Reports Server (NTRS)

    Kleckner, R. J.; Rosenlieb, J. W.; Dyba, G.

    1980-01-01

    The results of a series of full scale hardware tests comparing predictions of the SPHERBEAN computer program with measured data are presented. The SPHERBEAN program predicts the thermomechanical performance characteristics of high speed lubricated double row spherical roller bearings. The degree of correlation between performance predicted by SPHERBEAN and measured data is demonstrated. Experimental and calculated performance data are compared over a range of speeds up to 19,400 rpm (0.8 MDN) under pure radial, pure axial, and combined loads.

  14. Accelerating research into bio-based FDCA-polyesters by using small scale parallel film reactors.

    PubMed

    Gruter, Gert-Jan M; Sipos, Laszlo; Adrianus Dam, Matheus

    2012-02-01

    High-throughput experimentation is well established today as a tool in early-stage catalyst development and in catalyst and process scale-up. One of the more challenging areas of catalytic research is polymer catalysis. The main difference from most non-polymer catalytic conversions is that the product is not a well-defined molecule, and catalytic performance cannot be easily expressed only in terms of catalyst activity and selectivity. In polymerization reactions, the polymer chains formed can have various lengths (resulting in a molecular weight distribution rather than a defined molecular weight), different compositions (when random or block co-polymers are produced), cross-linking (often significantly affecting physical properties), different endgroups (often affecting subsequent processing steps), and several other variations. In addition, for polyolefins, mass and heat transfer, oxygen and moisture sensitivity, stereoregularity, and many other intrinsic features make relevant high-throughput screening in this field an incredible challenge. For polycondensation reactions performed in the melt, the viscosity often becomes high even at modest molecular weights, which greatly influences mass transfer of the condensation product (often water or methanol). When reactions become mass-transfer limited, catalyst performance comparison is often no longer relevant. This does not mean, however, that relevant experiments for these application areas cannot be performed on a small scale. Relevant catalyst screening experiments for polycondensation reactions can be performed in very efficient small-scale parallel equipment. Both transesterification and polycondensation, as well as post-condensation through solid-stating, have been developed in parallel equipment. Next to polymer synthesis, polymer characterization also needs to be accelerated, without concessions to quality, in order to draw relevant conclusions.

  15. Assessing and mapping spatial associations among oral cancer mortality rates, concentrations of heavy metals in soil, and land use types based on multiple scale data.

    PubMed

    Lin, Wei-Chih; Lin, Yu-Pin; Wang, Yung-Chieh; Chang, Tsun-Kuo; Chiang, Li-Chi

    2014-02-21

    In this study, a deconvolution procedure was used to create a variogram of oral cancer (OC) mortality rates. Based on the variogram, area-to-point (ATP) Poisson kriging and p-field simulation were used to downscale and simulate, respectively, the OC rate data for Taiwan from the district scale to a 1 km × 1 km grid scale. Local cluster analysis (LCA) of OC mortality rates was then performed to identify OC mortality rate hot spots based on the downscaled and the p-field-simulated OC mortality maps. The relationship between OC mortality and land use was studied by overlaying the maps of the downscaled OC mortality, the LCA results, and the land uses. One thousand simulations were performed to quantify local and spatial uncertainties in the LCA used to identify OC mortality hot spots. Scatter plots and Spearman's rank correlation were used to relate OC mortality to the concentrations of the seven metals on the 1 km grid. The correlation analysis at the 1 km scale revealed only a weak correlation between the OC mortality rate and the concentrations of the seven studied heavy metals in soil. Accordingly, the heavy metal concentrations in soil are not major determinants of OC mortality rates at the 1 km scale at which soils were sampled. The LCA statistical results for the local indicator of spatial association (LISA) revealed that sites with a high probability of high-high (a high value surrounded by high values) OC mortality at the 1 km grid scale were clustered in southern, eastern, and mid-western Taiwan. The number of such sites was also significantly higher on agricultural land and in urban regions than on land with other uses. The proposed approach can be used to downscale mortality data from a coarse scale to a fine scale, and to evaluate the associated uncertainty, yielding useful additional information for assessing and managing land use and risk.
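
    The correlation step can be sketched as below; the mortality rates and soil concentrations are synthetic stand-ins for the downscaled 1 km grid data.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        oc_rate = rng.gamma(2.0, 1.5, size=500)          # downscaled OC mortality per grid cell
        soil = {"As": rng.lognormal(1.0, 0.5, 500),      # hypothetical soil metal
                "Cd": rng.lognormal(0.1, 0.4, 500)}      # concentrations (mg/kg)

        for metal, conc in soil.items():
            rho, p = spearmanr(oc_rate, conc)
            print(f"{metal}: rho = {rho:.2f}, p = {p:.3f}")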

  16. Diagnostic Accuracy of Rating Scales for Attention-Deficit/Hyperactivity Disorder: A Meta-analysis.

    PubMed

    Chang, Ling-Yin; Wang, Mei-Yeh; Tsai, Pei-Shan

    2016-03-01

    The Child Behavior Checklist-Attention Problem (CBCL-AP) scale and the Conners Rating Scale-Revised (CRS-R) are commonly used behavioral rating scales for diagnosing attention-deficit/hyperactivity disorder (ADHD) in children and adolescents. Our objective was to evaluate and compare the diagnostic performance of the CBCL-AP and CRS-R in diagnosing ADHD in children and adolescents. PubMed, Ovid Medline, and other relevant electronic databases were searched for articles published up to May 2015. We included studies evaluating the diagnostic performance of either the CBCL-AP scale or the CRS-R for diagnosing ADHD in pediatric populations against a defined reference standard. Bivariate random-effects models were used for pooling and comparing diagnostic performance. We identified and evaluated 14 and 11 articles on the CBCL-AP and CRS-R, respectively. The results revealed pooled sensitivities of 0.77, 0.75, 0.72, and 0.83 and pooled specificities of 0.73, 0.75, 0.84, and 0.84 for the CBCL-AP, Conners Parent Rating Scale-Revised, Conners Teacher Rating Scale-Revised, and Conners Abbreviated Symptom Questionnaire (ASQ), respectively. No difference was observed in the diagnostic performance of the various scales. Study location, age of participants, and percentage of female participants explained the heterogeneity in the specificity of the CBCL-AP. The CBCL-AP and CRS-R both yielded moderate sensitivity and specificity in diagnosing ADHD. Given the comparable diagnostic performance of all examined scales, the ASQ may be the most effective diagnostic tool for assessing ADHD because of its brevity and high diagnostic accuracy. The CBCL is recommended for more comprehensive assessments. Copyright © 2016 by the American Academy of Pediatrics.

  17. Psychological variables and Wechsler Adult Intelligence Scale-IV performance.

    PubMed

    Gass, Carlton S; Gutierrez, Laura

    2017-01-01

    The MMPI-2 and WAIS-IV are commonly used together in neuropsychological evaluations, yet little is known about their interrelationships. This study explored the potential influence of psychological factors on WAIS-IV performance in a sample of 180 predominantly male veteran referrals who underwent a comprehensive neuropsychological examination in a VA Medical Center. Exclusionary criteria included failed performance validity testing and self-report distortion on the MMPI-2. A Principal Components Analysis was performed on the 15 MMPI-2 content scales, yielding three broader higher-order psychological dimensions: Internalized Emotional Dysfunction (IED), Externalized Emotional Dysfunction (EED), and Fear. Level of IED was not related to performance on the WAIS-IV Full Scale IQ or its four indexes (Verbal Comprehension, Perceptual Reasoning, Working Memory, and Processing Speed). EED was likewise not related to WAIS-IV performance. Level of Fear, which encompasses health preoccupations (HEA) and distorted perceptions (BIZ), was significantly related to WAIS-IV Full Scale IQ and Verbal Comprehension. These results challenge the common use of high scores on the MMPI-2 IED measures (chiefly depression and anxiety) to explain deficient WAIS-IV performance. In addition, they provide impetus for further investigation of the relation between verbal intelligence and Fear.
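
    The dimension-reduction step can be illustrated as follows; the data are random stand-ins for the 180 x 15 matrix of content-scale scores, and sklearn's unrotated PCA is used here, whereas the study may have applied a rotated solution.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        scores = rng.normal(50, 10, size=(180, 15))   # 180 referrals x 15 MMPI-2 content scales

        pca = PCA(n_components=3)                     # three higher-order dimensions
        components = pca.fit_transform(scores)        # per-patient analogues of IED/EED/Fear
        print(pca.explained_variance_ratio_)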

  18. Math anxiety differentially affects WAIS-IV arithmetic performance in undergraduates.

    PubMed

    Buelow, Melissa T; Frakey, Laura L

    2013-06-01

    Previous research has shown that math anxiety can influence math performance; however, to date, it is unknown whether math anxiety influences performance on working memory tasks during neuropsychological evaluation. In the present study, 172 undergraduate students completed measures of math achievement (the Math Computation subtest from the Wide Range Achievement Test-IV), math anxiety (the Math Anxiety Rating Scale-Revised), general test anxiety (from the Adult Manifest Anxiety Scale-College version), and the three Working Memory Index tasks from the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Digit Span [DS], Arithmetic, Letter-Number Sequencing [LNS]). Results indicated that math anxiety predicted performance on Arithmetic, but not DS or LNS, above and beyond the effects of gender, general test anxiety, and math achievement. Our findings suggest that math anxiety can negatively influence WAIS-IV working memory subtest scores. Implications for clinical practice include the utilization of LNS in individuals expressing high math anxiety.

  19. Miniature ion thruster ring-cusp discharge performance and behavior

    NASA Astrophysics Data System (ADS)

    Dankongkakul, Ben; Wirz, Richard E.

    2017-12-01

    Miniature ion thrusters are an attractive option for a wide range of space missions due to their low power levels and high specific impulse. Thrusters using ring-cusp plasma discharges promise the highest performance, but are still limited by the challenges of efficiently maintaining a plasma discharge at such small scales (typically 1-3 cm diameter). This effort significantly advances the understanding of miniature-scale plasma discharges by comparing the performance and xenon plasma confinement behavior of 3-ring, 4-ring, and 5-ring cusp configurations, using the 3 cm Miniature Xenon Ion thruster as a modifiable platform. By measuring and comparing the plasma and electron energy distribution maps throughout the discharge, we find that miniature ring-cusp plasma behavior is dominated by the high magnetic fields from the cusps; this can lead to high loss rates of high-energy primary electrons to the anode walls. However, primary electron confinement was shown to improve considerably by imposing an axial magnetic field or by using cathode-terminating cusps, which led to increases in discharge efficiency of up to 50%. Even though these design modifications still present some challenges, they show promise for bypassing what were previously seen as inherent limitations to ring-cusp discharge efficiency at miniature scales.

  20. Accelerating DNA analysis applications on GPU clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste

    DNA analysis is an emerging application of high-performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biology scientists. Aho-Corasick is an exact, multiple-pattern matching algorithm often at the base of this application. High performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high performance systems also include heterogeneous processing elements, such as Graphic Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, the number of patterns to search, and the number of matches, and poses significant challenges for current high performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughputs. Load balancing also plays a crucial role given the limited bandwidth among the nodes of these systems. In this paper we present an efficient implementation of the Aho-Corasick algorithm for high performance clusters accelerated with GPUs. We discuss how we partitioned and adapted the algorithm to fit the Tesla C1060 GPU and then present an MPI-based implementation for a heterogeneous high performance cluster. We compare this implementation to MPI and MPI-with-pthreads implementations for a homogeneous cluster of x86 processors, discussing stability versus performance and the scaling of the solutions, taking into consideration aspects such as the bandwidth among the different nodes.
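
    For reference, a compact serial version of the Aho-Corasick automaton is sketched below; the GPU and MPI implementations discussed in the paper parallelize this same goto/fail/output machinery.

        from collections import deque

        def build_automaton(patterns):
            """Build the Aho-Corasick goto/fail/output tables for a pattern set."""
            goto, fail, out = [{}], [0], [set()]
            for pat in patterns:                        # phase 1: trie of patterns
                state = 0
                for ch in pat:
                    if ch not in goto[state]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[state][ch] = len(goto) - 1
                    state = goto[state][ch]
                out[state].add(pat)
            queue = deque(goto[0].values())
            while queue:                                # phase 2: failure links via BFS
                s = queue.popleft()
                for ch, t in goto[s].items():
                    queue.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f].get(ch, 0)
                    out[t] |= out[fail[t]]              # inherit suffix matches
            return goto, fail, out

        def search(text, patterns):
            """Return (start_index, pattern) for every match in a single pass."""
            goto, fail, out = build_automaton(patterns)
            state, hits = 0, []
            for i, ch in enumerate(text):
                while state and ch not in goto[state]:
                    state = fail[state]
                state = goto[state].get(ch, 0)
                for pat in out[state]:
                    hits.append((i - len(pat) + 1, pat))
            return hits

        # [(0, 'GATT'), (3, 'TACA'), (5, 'CAG'), (7, 'GATT')]
        print(search("GATTACAGATT", ["GATT", "TACA", "CAG"]))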

  1. Scaling laws for testing of high lift airfoils under heavy rainfall

    NASA Technical Reports Server (NTRS)

    Bilanin, A. J.

    1985-01-01

    The results of studies regarding the effect of rainfall on aircraft are briefly reviewed. It is found that performance penalties on airfoils have been identified in subscale tests. For this reason, it is of great importance that scaling laws be developed to aid in the extrapolation of these data to full scale. The present investigation represents an attempt to develop scaling laws for testing subscale airfoils under heavy rain conditions. Attention is given to rain statistics, airfoil operation in heavy rain, scaling laws, thermodynamics of condensation and/or evaporation, rainfall and airfoil scaling, aspects of splash back, film thickness, rivulets, and flap slot blockage. It is concluded that the extrapolation of airfoil performance data taken at subscale under simulated heavy rain conditions to full scale must be undertaken with caution.

  2. Rational and Efficient Preparative Isolation of Natural Products by MPLC-UV-ELSD based on HPLC to MPLC Gradient Transfer.

    PubMed

    Challal, Soura; Queiroz, Emerson Ferreira; Debrus, Benjamin; Kloeti, Werner; Guillarme, Davy; Gupta, Mahabir Prashad; Wolfender, Jean-Luc

    2015-11-01

    In natural product research, the isolation of biomarkers or bioactive compounds from complex natural extracts represents an essential step for de novo identification and bioactivity assessment. When pure natural products have to be obtained in milligram quantities, the chromatographic steps are generally laborious and time-consuming. In this respect, an efficient method has been developed for reversed-phase gradient transfer from high-performance liquid chromatography to medium-performance liquid chromatography for the isolation of pure natural products, at the level of tens of milligrams, from complex crude natural extracts. The proposed method provides a rational way to predict retention behaviour and resolution at the analytical scale prior to medium-performance liquid chromatography, and guarantees similar performance at both analytical and preparative scales. Optimisation of the high-performance liquid chromatography separation and system characterisation allow for prediction of the gradient at the medium-performance liquid chromatography scale when identical stationary phase chemistries are used. The samples were introduced into the medium-performance liquid chromatography system using a pressure-resistant aluminium dry-load cell especially designed for this study, allowing high sample loading while maintaining the maximum achievable flow rate for the separation. The method was validated with a mixture of eight natural product standards. Ultraviolet and evaporative light scattering detection were used in parallel for comprehensive monitoring. In addition, post-chromatographic mass spectrometry detection was provided by high-throughput ultrahigh-performance liquid chromatography time-of-flight mass spectrometry analyses of all fractions. Processing all liquid chromatography-mass spectrometry data in the form of a medium-performance liquid chromatography × ultrahigh-performance liquid chromatography time-of-flight mass spectrometry matrix enabled efficient localisation of the compounds of interest in the generated fractions. The methodology was successfully applied to the separation of three different plant extracts that contain many diverse secondary metabolites. The advantages and limitations of this approach, and the theoretical chromatographic background that governs such liquid chromatography gradient transfers, are presented from a practical viewpoint. Georg Thieme Verlag KG Stuttgart · New York.
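
    The geometric transfer rule that underlies this kind of method scaling can be written as follows (a standard formulation, not necessarily the authors' exact equations): the gradient time is scaled so that its length remains constant in column volumes,

        t_{G,2} = t_{G,1} \cdot \frac{V_{c,2}}{V_{c,1}} \cdot \frac{F_1}{F_2},
        \qquad V_c \propto L \, d_c^2

    where t_G is the gradient time, V_c the column volume (length L, internal diameter d_c), and F the flow rate at the analytical (1) and preparative (2) scales.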

  3. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
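
    ADIFOR itself transforms Fortran source code, but the chain-rule bookkeeping it automates can be illustrated with a minimal forward-mode sketch using dual numbers; the response function here is an invented example, not a structural sensitivity.

        # Forward-mode automatic differentiation via dual numbers (value, derivative).
        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val,
                            self.der * o.val + self.val * o.der)  # product rule
            __rmul__ = __mul__

        def f(x):                     # toy response function
            return 3 * x * x + 2 * x + 1

        x = Dual(2.0, 1.0)            # seed derivative dx/dx = 1
        y = f(x)
        print(y.val, y.der)           # 17.0 and the exact df/dx = 6x + 2 = 14.0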

  4. Systems Engineering Provides Successful High Temperature Steam Electrolysis Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles V. Park; Emmanuel Ohene Opare, Jr.

    2011-06-01

    This paper describes two Systems Engineering Studies completed at the Idaho National Laboratory (INL) to support development of the High Temperature Steam Electrolysis (HTSE) process. HTSE produces hydrogen from water using nuclear power and was selected by the Department of Energy (DOE) for integration with the Next Generation Nuclear Plant (NGNP). The first study was a reliability, availability and maintainability (RAM) analysis to identify critical areas for technology development based on available information regarding expected component performance. An HTSE process baseline flowsheet at commercial scale was used as a basis. The NGNP project also established a process and capability to perform future RAM analyses. The analysis identified which components had the greatest impact on HTSE process availability and indicated that the HTSE process could achieve over 90% availability. The second study developed a series of life-cycle cost estimates for the various scale-ups required to demonstrate the HTSE process. Both studies were useful in identifying near- and long-term efforts necessary for successful HTSE process deployment. The size of demonstrations to support scale-up was refined, which is essential to estimating near- and long-term cost and schedule. The life-cycle funding profile, with high-level allocations, was identified as the program transitions from experiment-scale R&D to engineering-scale demonstration.

  5. Field of genes: using Apache Kafka as a bioinformatic data repository.

    PubMed

    Lawlor, Brendan; Lynch, Richard; Mac Aogáin, Micheál; Walsh, Paul

    2018-04-01

    Bioinformatic research is increasingly dependent on large-scale datasets, accessed either from private or public repositories. An example of a public repository is National Center for Biotechnology Information's (NCBI's) Reference Sequence (RefSeq). These repositories must decide in what form to make their data available. Unstructured data can be put to almost any use but are limited in how access to them can be scaled. Highly structured data offer improved performance for specific algorithms but limit the wider usefulness of the data. We present an alternative: lightly structured data stored in Apache Kafka in a way that is amenable to parallel access and streamed processing, including subsequent transformations into more highly structured representations. We contend that this approach could provide a flexible and powerful nexus of bioinformatic data, bridging the gap between low structure on one hand, and high performance and scale on the other. To demonstrate this, we present a proof-of-concept version of NCBI's RefSeq database using this technology. We measure the performance and scalability characteristics of this alternative with respect to flat files. The proof of concept scales almost linearly as more compute nodes are added, outperforming the standard approach using files. Apache Kafka merits consideration as a fast and more scalable but general-purpose way to store and retrieve bioinformatic data, for public, centralized reference datasets such as RefSeq and for private clinical and experimental data.
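
    A minimal sketch of the pattern, assuming the kafka-python client and a broker on localhost; the topic name and record layout are illustrative, not the proof-of-concept's actual schema.

        from kafka import KafkaProducer, KafkaConsumer

        # Write a sequence record keyed by accession (hypothetical topic/format).
        producer = KafkaProducer(bootstrap_servers="localhost:9092")
        producer.send("refseq-records", key=b"NC_000913.3",
                      value=b">NC_000913.3 E. coli K-12\nAGCTTTTCATTCT...")
        producer.flush()

        # Stream records back; consumers can be scaled across partitions.
        consumer = KafkaConsumer("refseq-records",
                                 bootstrap_servers="localhost:9092",
                                 auto_offset_reset="earliest",
                                 consumer_timeout_ms=5000)
        for record in consumer:
            print(record.key, record.value[:30])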

  6. Short-Channel Tunneling Field-Effect Transistor with Drain-Overlap and Dual-Metal Gate Structure for Low-Power and High-Speed Operations.

    PubMed

    Yoon, Young Jun; Eun, Hye Rim; Seo, Jae Hwa; Kang, Hee-Sung; Lee, Seong Min; Lee, Jeongmin; Cho, Seongjae; Tae, Heung-Sik; Lee, Jung-Hee; Kang, In Man

    2015-10-01

    We have investigated and proposed a highly scaled tunneling field-effect transistor (TFET) based on a Ge/GaAs heterojunction with a drain overlap to suppress drain-induced barrier thinning (DIBT) and improve low-power (LP) performance. The highly scaled TFET with a drain overlap achieves lower leakage tunneling current because of the decrease in tunneling events between the source and drain, whereas a typical short-channel TFET suffers from a great deal of tunneling leakage current due to DIBT at the off-state. However, the drain overlap inevitably increases the gate-to-drain capacitance (Cgd) because of the increase in the overlap capacitance (Cov) and inversion capacitance (Cinv). Thus, in this work, a dual-metal gate structure is additionally applied along with the drain overlap. The current performance and the total gate capacitance (Cgg) of a device with a dual-metal gate can be controlled by adjusting the metal gate workfunction (φ_gate) and the workfunction in the overlapping region (φ_overlap-gate). As a result, the intrinsic delay time (τ) is greatly reduced by lowering Cgg relative to the on-state current (Ion), i.e., Cgg/Ion. We have successfully demonstrated excellent LP and high-speed performance of a highly scaled TFET by adopting both a drain overlap and a dual-metal gate with DIBT minimization.
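
    For context, the intrinsic delay figure of merit is conventionally written with the supply voltage included (the abstract quotes the ratio Cgg/Ion),

        \tau = \frac{C_{gg} V_{DD}}{I_{on}}

    so reducing C_gg via the dual-metal gate while preserving I_on lowers τ directly.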

  7. Scaled centrifugal compressor, collector and running gear program

    NASA Technical Reports Server (NTRS)

    Kenehan, J. G.

    1983-01-01

    The Scaled Centrifugal Compressor, Collector and Running Gear Program was conducted in support of an overall NASA strategy to improve small-compressor performance, durability, and reliability while reducing initial and life-cycle costs. Accordingly, Garrett designed and provided a test rig, gearbox coupling, and facility collector for a new NASA facility, and provided a scaled model of an existing, high-performance impeller for evaluating scaling effects on aerodynamic performance and for obtaining other performance data. Test-rig shafting was designed to operate smoothly throughout a speed range up to 60,000 rpm. Pressurized components were designed to operate at pressures up to 300 psia and at temperatures to 1000 F. Nonrotating components were designed to provide a margin-of-safety of 0.05 or greater; rotating components, for a margin-of-safety based on allowable yield and ultimate strengths. Design activities were supported by complete design analysis, and the finished hardware was subjected to check-runs to confirm proper operation. The test rig will support a wide range of compressor tests and evaluations.

  8. Research and Development of High-performance Explosives

    PubMed Central

    Cornell, Rodger; Wrobel, Erik; Anderson, Paul E.

    2016-01-01

    Developmental testing of high explosives for military applications involves small-scale formulation, safety testing, and finally detonation performance tests to verify theoretical calculations. For newly developed formulations, the process begins with small-scale mixes, thermal testing, and impact and friction sensitivity. Only then do subsequent larger scale formulations proceed to detonation testing, which is covered in this paper. Recent advances in characterization techniques have led to unparalleled precision in the characterization of the early-time evolution of detonations. The new technique of photonic Doppler velocimetry (PDV) for the measurement of detonation pressure and velocity will be shared and compared with traditional fiber-optic detonation velocity and plate-dent calculation of detonation pressure. In particular, the role of aluminum in explosive formulations will be discussed. Recent efforts have led to explosive formulations in which the aluminum reacts very early in the detonation product expansion. This enhanced reaction changes the detonation velocity and pressure due to reaction of the aluminum with oxygen in the expanding gas products. PMID:26966969

  9. Nursing performance under high workload: a diary study on the moderating role of selection, optimization and compensation strategies.

    PubMed

    Baethge, Anja; Müller, Andreas; Rigotti, Thomas

    2016-03-01

    The aim of this study was to investigate whether selective optimization with compensation constitutes an individualized action strategy for nurses wanting to maintain job performance under high workload. High workload is a major threat to healthcare quality and performance. Selective optimization with compensation is considered to enhance the efficient use of intra-individual resources and, therefore, is expected to act as a buffer against the negative effects of high workload. The study applied a diary design. Over five consecutive workday shifts, self-report data on workload were collected at three randomized occasions during each shift. Self-reported job performance was assessed in the evening. Self-reported selective optimization with compensation was assessed prior to the diary reporting. Data were collected in 2010. Overall, 136 nurses from 10 German hospitals participated. Selective optimization with compensation was assessed with a nine-item scale that was specifically developed for nursing. The NASA-TLX scale indicating the pace of task accomplishment was used to measure workload. Job performance was assessed with one item each concerning performance quality and forgetting of intentions. There was a weaker negative association between workload and both indicators of job performance in nurses with a high level of selective optimization with compensation, compared with nurses with a low level. Considering the separate strategies, selection and compensation turned out to be effective. The use of selective optimization with compensation is conducive to nurses' job performance under high workload levels. This finding is in line with calls to empower nurses' individual decision-making. © 2015 John Wiley & Sons Ltd.

  10. Study of LANDSAT-D thematic mapper performance as applied to hydrocarbon exploration

    NASA Technical Reports Server (NTRS)

    Everett, J. R. (Principal Investigator)

    1983-01-01

    Two fully processed test tapes were enhanced and evaluated at scales up to 1:10,000, using both hardcopy output and interactive screen display. At large scale, the Detroit, Michigan scene shows evidence of an along-line data slip every sixteenth line in TM channel 2. Very large scale products generated in false color using channels 1, 3, and 4 should be very acceptable for interpretation at scales up to 1:50,000 and useful for change mapping probably up to scale 1:24,000. Striping visible in water bodies in both natural color and false color products indicates that the detector calibration is probably performing below preflight specification. For a set of 512 x 512 windows within the NE Arkansas scene, the variance-covariance matrices were computed and principal component analyses performed. Initial analysis suggests that the shortwave infrared TM 5 and 6 channels are a highly significant data source. The thermal channel (TM 7) shows negative correlation with TM 1 and 4.

  11. Micro-sized microbial fuel cell: a mini-review.

    PubMed

    Wang, Hsiang-Yu; Bernarda, Angela; Huang, Chih-Yung; Lee, Duu-Jong; Chang, Jo-Shu

    2011-01-01

    This review presents the development of micro-sized microbial fuel cells (including mL-scale and μL-scale setups), with a summary of their advantageous characteristics, fabrication methods, performance, potential applications, and possible future directions. The performance of microbial fuel cells (MFCs) is affected by issues such as mass transport, reaction kinetics, and ohmic resistance. These factors are manipulated in micro-sized MFCs using specially allocated electrodes constructed with specified materials having physically or chemically modified surfaces. Both two-chamber and air-breathing cathode configurations are promising for mL-scale MFCs. However, most existing μL-scale MFCs generate significantly lower volumetric power density than their mL-scale counterparts because of high internal resistance. Although μL-scale MFCs have yet to provide sufficient power for operating conventional equipment, they show great potential for rapid screening of electrochemically active microbes and electrode performance. Additional possible applications and future directions are also provided for the development of micro-sized MFCs. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Functional Outcome Trajectories After Out-of-Hospital Pediatric Cardiac Arrest.

    PubMed

    Silverstein, Faye S; Slomine, Beth S; Christensen, James; Holubkov, Richard; Page, Kent; Dean, J Michael; Moler, Frank W

    2016-12-01

    To analyze functional performance measures collected prospectively during the conduct of a clinical trial that enrolled children (up to age 18 yr old), resuscitated after out-of-hospital cardiac arrest, who were at high risk of poor outcomes. Children with Glasgow Motor Scale score less than 5, within 6 hours of resuscitation, were enrolled in a clinical trial that compared two targeted temperature management interventions (THAPCA-OH, NCT00878644). The primary outcome, 12-month survival with Vineland Adaptive Behavior Scale, second edition, score greater or equal to 70, did not differ between groups. Thirty-eight North American PICUs. Two hundred ninety-five children were enrolled; 270 of 295 had baseline Vineland Adaptive Behavior Scale, second edition, scores greater or equal to 70; 87 of 270 survived 1 year. Targeted temperatures were 33.0°C and 36.8°C for hypothermia and normothermia groups. Baseline measures included Vineland Adaptive Behavior Scale, second edition, Pediatric Cerebral Performance Category, and Pediatric Overall Performance Category. Pediatric Cerebral Performance Category and Pediatric Overall Performance Category were rescored at hospital discharges; all three were scored at 3 and 12 months. In survivors with baseline Vineland Adaptive Behavior Scale, second edition scores greater or equal to 70, we evaluated relationships of hospital discharge Pediatric Cerebral Performance Category with 3- and 12-month scores and between 3- and 12-month Vineland Adaptive Behavior Scale, second edition, scores. Hospital discharge Pediatric Cerebral Performance Category scores strongly predicted 3- and 12-month Pediatric Cerebral Performance Category (r = 0.82 and 0.79; p < 0.0001) and Vineland Adaptive Behavior Scale, second edition, scores (r = -0.81 and -0.77; p < 0.0001). Three-month Vineland Adaptive Behavior Scale, second edition, scores strongly predicted 12-month performance (r = 0.95; p < 0.0001). Hypothermia treatment did not alter these relationships. In comatose children, with Glasgow Motor Scale score less than 5 in the initial hours after out-of-hospital cardiac arrest resuscitation, function scores at hospital discharge and at 3 months predicted 12-month performance well in the majority of survivors.

  13. Screening for prenatal substance use: development of the Substance Use Risk Profile-Pregnancy scale.

    PubMed

    Yonkers, Kimberly A; Gotman, Nathan; Kershaw, Trace; Forray, Ariadna; Howell, Heather B; Rounsaville, Bruce J

    2010-10-01

    To report on the development of a questionnaire to screen for hazardous substance use in pregnant women and to compare the performance of the questionnaire with other drug and alcohol measures. Pregnant women were administered a modified TWEAK (Tolerance, Worried, Eye-openers, Amnesia, K[C] Cut Down) questionnaire, the 4Ps Plus questionnaire, items from the Addiction Severity Index, and two questions about domestic violence (N = 2,684). The sample was divided into "training" (n = 1,610) and "validation" (n = 1,074) subsamples. We applied recursive partitioning class analysis to the responses from individuals in the training subsample, which resulted in a three-item Substance Use Risk Profile-Pregnancy scale. We examined sensitivity, specificity, and the fit of logistic regression models in the validation subsample to compare the performance of the Substance Use Risk Profile-Pregnancy scale with the modified TWEAK and various scoring algorithms of the 4Ps. The Substance Use Risk Profile-Pregnancy scale comprises three informative questions that can be scored for high- or low-risk populations. The Substance Use Risk Profile-Pregnancy scale algorithm for low-risk populations was most highly predictive of substance use in the validation subsample (Akaike's Information Criterion = 579.75, Nagelkerke R² = 0.27), with high sensitivity (91%) and adequate specificity (67%). The high-risk algorithm had lower sensitivity (57%) but higher specificity (88%). The Substance Use Risk Profile-Pregnancy scale is simple and flexible, with good sensitivity and specificity, and can potentially detect a range of substances that may be abused. Clinicians need to further assess women with a positive screen to identify those who require treatment for alcohol or illicit substance use in pregnancy. III.

  14. High power Nb-doped LiFePO4 Li-ion battery cathodes; pilot-scale synthesis and electrochemical properties

    NASA Astrophysics Data System (ADS)

    Johnson, Ian D.; Blagovidova, Ekaterina; Dingwall, Paul A.; Brett, Dan J. L.; Shearing, Paul R.; Darr, Jawwad A.

    2016-09-01

    High power, phase-pure Nb-doped LiFePO4 (LFP) nanoparticles are synthesised using a pilot-scale continuous hydrothermal flow synthesis process (production rate of 6 kg per day) in the range 0.01-2.00 at% Nb with respect to total transition metal content. EDS analysis suggests that Nb is homogeneously distributed throughout the structure. The addition of fructose as a reagent in the hydrothermal flow process, followed by a post-synthesis heat treatment, affords a continuous graphitic carbon coating on the particle surfaces. Electrochemical testing reveals that cycling performance improves with increasing dopant concentration, up to a maximum of 1.0 at% Nb, at which point a specific capacity of 110 mAh g⁻¹ is obtained at 10 C (6 min for a full charge or discharge). This is an excellent result for a high-power LFP-based cathode material, particularly considering that the synthesis was performed on large pilot-scale apparatus.
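
    As a quick check of the quoted rate, a C-rate of n C corresponds to a full charge or discharge in 1/n hours, so at 10 C:

        t = \frac{60\ \text{min}}{10} = 6\ \text{min}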

  15. Van der Waals epitaxial growth and optoelectronics of large-scale WSe2/SnS2 vertical bilayer p-n junctions.

    PubMed

    Yang, Tiefeng; Zheng, Biyuan; Wang, Zhen; Xu, Tao; Pan, Chen; Zou, Juan; Zhang, Xuehong; Qi, Zhaoyang; Liu, Hongjun; Feng, Yexin; Hu, Weida; Miao, Feng; Sun, Litao; Duan, Xiangfeng; Pan, Anlian

    2017-12-04

    High-quality two-dimensional atomic layered p-n heterostructures are essential for high-performance integrated optoelectronics. The studies to date have been largely limited to exfoliated and restacked flakes, and the controlled growth of such heterostructures remains a significant challenge. Here we report the direct van der Waals epitaxial growth of large-scale WSe2/SnS2 vertical bilayer p-n junctions on SiO2/Si substrates, with the lateral sizes reaching up to millimeter scale. Multi-electrode field-effect transistors have been integrated on a single heterostructure bilayer. Electrical transport measurements indicate that the field-effect transistors of the junction show an ultra-low off-state leakage current of 10⁻¹⁴ A and a highest on-off ratio of up to 10⁷. Optoelectronic characterizations show prominent photoresponse, with a fast response time of 500 μs, faster than all the directly grown vertical 2D heterostructures. The direct growth of high-quality van der Waals junctions marks an important step toward high-performance integrated optoelectronic devices and systems.

  16. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
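
    The underlying problem can be illustrated with a toy netlist bisection using networkx's Kernighan-Lin heuristic; the Multi-Personality and Information-Aware methods described above are custom extensions of this basic cut-minimization problem, and the graph below is invented.

        import networkx as nx
        from networkx.algorithms.community import kernighan_lin_bisection

        # Toy circuit netlist: two tightly connected clusters plus one cross wire.
        g = nx.Graph()
        g.add_edges_from([("mux0", "alu0"), ("alu0", "reg0"), ("reg0", "mux0"),
                          ("mux1", "alu1"), ("alu1", "reg1"), ("reg1", "mux1"),
                          ("reg0", "mux1")])

        part_a, part_b = kernighan_lin_bisection(g, seed=0)
        cut = nx.cut_size(g, part_a, part_b)   # inter-chip connections to minimize
        print(sorted(part_a), sorted(part_b), "cut edges:", cut)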

  17. Classification of Suicide Attempts through a Machine Learning Algorithm Based on Multiple Systemic Psychiatric Scales.

    PubMed

    Oh, Jihoon; Yun, Kyongsik; Hwang, Ji-Hyun; Chae, Jeong-Ho

    2017-01-01

    Classification and prediction of suicide attempts in high-risk groups is important for preventing suicide. The purpose of this study was to investigate whether the information from multiple clinical scales has classification power for identifying actual suicide attempts. Patients with depression and anxiety disorders (N = 573) were included, and each participant completed 31 self-report psychiatric scales and questionnaires about their history of suicide attempts. We then trained an artificial neural network classifier with 41 variables (31 psychiatric scales and 10 sociodemographic elements) and ranked the contribution of each variable to the classification of suicide attempts. To evaluate the clinical applicability of our model, we measured classification performance with the top-ranked predictors. Our model had an overall accuracy of 93.7% in detecting 1-month, 90.8% in detecting 1-year, and 87.4% in detecting lifetime suicide attempts. The area under the receiver operating characteristic curve (AUROC) was highest for 1-month suicide attempt detection (0.93), followed by lifetime (0.89) and 1-year detection (0.87). Among all variables, the Emotion Regulation Questionnaire had the highest contribution, and the positive and negative characteristics of the scales contributed similarly to classification performance. Performance on suicide attempt classification was largely maintained when we used only the top five ranked variables for training (AUROC: 1-month, 0.75; 1-year, 0.85; lifetime, 0.87). Our findings indicate that information from self-report clinical scales can be useful for the classification of suicide attempts. Based on the reliable performance of the top five predictors alone, this machine learning approach could help clinicians identify high-risk patients in clinical settings.
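
    A schematic re-creation of the pipeline on synthetic data is sketched below; sklearn's MLPClassifier stands in for the study's artificial neural network, and the feature matrix is random rather than real scale scores.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(573, 41))        # 31 scales + 10 sociodemographic variables
        # Synthetic label loosely tied to the first two features.
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=573) > 1).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                            random_state=0).fit(X_tr, y_tr)
        print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))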

  18. Extreme-scale Algorithms and Solver Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dongarra, Jack

    A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers, due to the following: extreme levels of parallelism from multicore processors; an increase in system fault rates, requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.

  19. Carbon and Carbon Hybrid Materials as Anodes for Sodium-Ion Batteries.

    PubMed

    Zhong, Xiongwu; Wu, Ying; Zeng, Sifan; Yu, Yan

    2018-02-12

    Sodium-ion batteries (SIBs) have attracted much attention for application in large-scale grid energy storage owing to the abundance and low cost of sodium sources. However, low energy density and poor cycling life hinder practical application of SIBs. Recently, substantial efforts have been made to develop electrode materials to push forward large-scale practical applications. Carbon materials can be directly used as anode materials, and they show excellent sodium storage performance. Additionally, designing and constructing carbon hybrid materials is an effective strategy to obtain high-performance anodes for SIBs. In this review, we summarize recent research progress on carbon and carbon hybrid materials as anodes for SIBs. Nanostructural design to enhance the sodium storage performance of anode materials is discussed, and we offer some insight into the potential directions of and future high-performance anode materials for SIBs. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Application of Wavelet Filters in an Evaluation of Photochemical Model Performance

    EPA Science Inventory

    Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...
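
    The kind of time-scale decomposition described can be sketched with PyWavelets; the synthetic ozone series, wavelet choice, and decomposition depth below are all illustrative.

        import numpy as np
        import pywt

        t = np.arange(24 * 30)                              # 30 days of hourly data
        ozone = (40 + 10 * np.sin(2 * np.pi * t / 24)       # diurnal cycle
                 + 5 * np.sin(2 * np.pi * t / (24 * 7))     # weekly variation
                 + np.random.default_rng(4).normal(0, 2, t.size))

        coeffs = pywt.wavedec(ozone, "db4", level=5)
        # coeffs[0] holds the slow baseline; the higher-index detail coefficients
        # carry the hours-to-one-day fluctuations compared against model output.
        for i, c in enumerate(coeffs):
            print(f"level {i}: {c.size} coefficients")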

  1. ClearFuels-Rentech Integrated Biorefinery Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pearson, Joshua

    This Final Report describes the validated performance of two technologies, each previously proven individually at pilot scale, integrated and demonstrated together as a pilot-scale integrated biorefinery. The integration combined a larger-scale ClearFuels (CF) advanced flexible biomass-to-syngas thermochemical high efficiency hydrothermal reformer (HEHTR) with Rentech's (RTK) existing synthetic gas-to-liquids (GTL) technology.

  2. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing have made it possible to access large-scale computational resources completely on demand, in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile and cost-effective, yet scalable and delivers similar performance.
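
    Schematically, each screened compound's stability is judged by its formation enthalpy relative to the elemental references:

        \Delta H_f(\mathrm{A}_x\mathrm{B}_y\mathrm{C}_z) = E(\mathrm{A}_x\mathrm{B}_y\mathrm{C}_z) - x\,E(\mathrm{A}) - y\,E(\mathrm{B}) - z\,E(\mathrm{C})

    with negative values indicating stability against decomposition into the pure elements.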

  3. Results from core-edge experiments in high Power, high performance plasmas on DIII-D

    DOE PAGES

    Petrie, T. W.; Fenstermacher, M. E.; Holcomb, C. T.; ...

    2016-12-24

    Here, significant challenges to reducing divertor heat flux in highly powered near-double null divertor (DND) hybrid plasmas, while still maintaining both high performance metrics and low enough density for application of RF heating, are identified. For these DNDs on DIII-D, the peak heat flux at the outer target scales as q⊥,peak ∝ [P_SOL × I_P]^0.92 for P_SOL = 8-19 MW and I_P = 1.0-1.4 MA, consistent with standard ITPA scaling for single-null H-mode plasmas. Two divertor heat flux reduction methods were tested. First, applying the puff-and-pump radiating divertor to DIII-D plasmas may be problematic at high power and H98 (≥ 1.5) due to the improvement in confinement time with deuterium gas puffing, which can lead to unacceptably high core density under certain conditions. Second, q⊥,peak for these high performance DNDs was reduced by ≈35% when an open divertor was closed on the common flux side of the outer divertor target ("semi-slot"), but heating near the slot opening was also found to be a significant source of impurity contamination of the core.

  4. Adaptive temperature-accelerated dynamics

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2011-02-01

    We present three adaptive methods for optimizing the high temperature T_high on the fly in temperature-accelerated dynamics (TAD) simulations. In all three methods, the high temperature is adjusted periodically in order to maximize the performance. While in the first two methods the adjustment depends on the number of observed events, the third method depends on the minimum activation barrier observed so far and requires a priori knowledge of the optimal high temperature T_high^opt(E_a) as a function of the activation barrier E_a for each accepted event. In order to determine the functional form of T_high^opt(E_a), we have carried out extensive simulations of submonolayer annealing on the (100) surface for a variety of metals (Ag, Cu, Ni, Pd, and Au). While the results for all five metals are different, when they are scaled with the melting temperature T_m, we find that they all lie on a single scaling curve. Similar results have also been obtained for (111) surfaces, although in this case the scaling function is slightly different. In order to test the performance of all three methods, we have also carried out adaptive TAD simulations of Ag/Ag(100) annealing and growth at T = 80 K and compared with fixed high-temperature TAD simulations for different values of T_high. We find that the performance of all three adaptive methods is typically as good as or better than that obtained in fixed high-temperature TAD simulations carried out using the effective optimal fixed high temperature. In addition, we find that the final high temperatures obtained in our adaptive TAD simulations are very close to our results for T_high^opt(E_a). The applicability of the adaptive methods to a variety of TAD simulations is also briefly discussed.

  5. Velocity-Resolved LES (VR-LES) technique for simulating turbulent transport of high Schmidt number passive scalars

    NASA Astrophysics Data System (ADS)

    Verma, Siddhartha; Blanquart, Guillaume; P. K. Yeung Collaboration

    2011-11-01

    Accurate simulation of high Schmidt number scalar transport in turbulent flows is essential to studying pollutant dispersion, weather, and several oceanic phenomena. Batchelor's theory governs scalar transport in such flows, but requires further validation at high Schmidt and high Reynolds numbers. To this end, we use a new approach in which the velocity field is fully resolved but the scalar field is only partially resolved. The grid used is fine enough to resolve scales up to the viscous-convective subrange, where the decaying slope of the scalar spectrum becomes constant. This places the cutoff wavenumber between the Kolmogorov scale and the Batchelor scale. The subgrid-scale terms, which affect transport at the supergrid scales, are modeled under the assumption that velocity fluctuations are negligible beyond this cutoff wavenumber. To ascertain the validity of this technique, we performed a priori testing on existing DNS data. This Velocity-Resolved LES (VR-LES) technique significantly reduces the computational cost of turbulent simulations of high Schmidt number scalars, yet provides valuable information on the scalar spectrum in the viscous-convective subrange.
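
    The placement of the cutoff relies on the scale separation at high Schmidt number: the Batchelor scale lies below the Kolmogorov scale η by a factor of √Sc,

        \lambda_B = \eta \, Sc^{-1/2}

    so for Sc ≫ 1 the velocity field can be fully resolved on a grid that only partially resolves the scalar.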

  6. Reliable and valid assessment of point-of-care ultrasonography.

    PubMed

    Todsen, Tobias; Tolsgaard, Martin Grønnebæk; Olsen, Beth Härstedt; Henriksen, Birthe Merete; Hillingsø, Jens Georg; Konge, Lars; Jensen, Morten Lind; Ringsted, Charlotte

    2015-02-01

    To explore the reliability and validity of the Objective Structured Assessment of Ultrasound Skills (OSAUS) scale for point-of-care ultrasonography (POC US) performance. POC US is increasingly used by clinicians and is an essential part of the management of acute surgical conditions. However, the quality of performance is highly operator-dependent. Therefore, reliable and valid assessment of trainees' ultrasonography competence is needed to ensure patient safety. Twenty-four physicians, representing novices, intermediates, and experts in POC US, scanned 4 different surgical patient cases in a controlled set-up. All ultrasound examinations were video-recorded and assessed by 2 blinded radiologists using OSAUS. Reliability was examined using generalizability theory. Construct validity was examined by comparing performance scores between the groups and by correlating physicians' OSAUS scores with diagnostic accuracy. The generalizability coefficient was high (0.81) and a D-study demonstrated that 1 assessor and 5 cases would result in similar reliability. The construct validity of the OSAUS scale was supported by a significant difference in the mean scores between the novice group (17.0; SD 8.4) and the intermediate group (30.0; SD 10.1), P = 0.007, as well as between the intermediate group and the expert group (72.9; SD 4.4), P = 0.04, and by a high correlation between OSAUS scores and diagnostic accuracy (Spearman ρ correlation coefficient = 0.76; P < 0.001). This study demonstrates high reliability as well as evidence of construct validity of the OSAUS scale for assessment of POC US competence. Hence, the OSAUS scale may be suitable for both in-training as well as end-of-training assessment.

  7. Wind tunnel performance results of an aeroelastically scaled 2/9 model of the PTA flight test prop-fan

    NASA Technical Reports Server (NTRS)

    Stefko, George L.; Rose, Gayle E.; Podboy, Gary G.

    1987-01-01

    High speed wind tunnel aerodynamic performance tests of the SR-7A advanced prop-fan have been completed in support of the Prop-Fan Test Assessment (PTA) flight test program. The test showed that the SR-7A model performed aerodynamically very well. At the cruise design condition, the SR-7A prop fan had a high measured net efficiency of 79.3 percent.

  8. Turbulence measurements in high Reynolds number boundary layers

    NASA Astrophysics Data System (ADS)

    Vallikivi, Margit; Smits, Alexander

    2013-11-01

    Measurements are conducted in zero pressure gradient turbulent boundary layers for Reynolds numbers from Reθ = 9,000 to 225,000. The experiments were performed in the High Reynolds number Test Facility (HRTF) at Princeton University, which uses compressed air as the working fluid. Nano-Scale Thermal Anemometry Probes (NSTAPs) are used to acquire data with very high spatial and temporal precision. These new data are used to study the scaling behavior of the streamwise velocity fluctuations in the boundary layer and make comparisons with the scaling of other wall-bounded turbulent flows. Supported under ONR Grant N00014-09-1-0263 (program manager Ron Joslin) and NSF Grant CBET-1064257 (program manager Henning Winter).

  9. Smile line assessment comparing quantitative measurement and visual estimation.

    PubMed

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
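
    A minimal sketch of the agreement statistic used above, Cohen's kappa, computed here with scikit-learn on toy 3-grade ratings rather than the study's data:

```python
# Minimal sketch of the inter-rater agreement statistic reported above:
# Cohen's kappa on two raters' 3-grade smile-line estimates (toy data).
from sklearn.metrics import cohen_kappa_score

rater_a = ["low", "average", "high", "high", "average", "low", "average"]
rater_b = ["low", "average", "high", "average", "average", "low", "high"]

kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected agreement
print(f"kappa = {kappa:.2f}")                # 1.0 = perfect, 0 = chance level
```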

  10. Performance Status and Change--Measuring Education System Effectiveness with Data from PISA 2000-2009

    ERIC Educational Resources Information Center

    Lenkeit, Jenny; Caro, Daniel H.

    2014-01-01

    Reports of international large-scale assessments tend to evaluate and compare education system performance based on absolute scores. And policymakers refer to high-performing and economically prosperous education systems to enhance their own systemic features. But socioeconomic differences between systems compromise the plausibility of those…

  11. Defining Administrative Tasks, Evaluating Performance, and Developing Skills.

    ERIC Educational Resources Information Center

    Herman, Janice L.; Herman, Jerry J.

    1995-01-01

    To ensure high performance, administrators should develop an articulated structure and process systems approach that identifies the critical success factors (CSFs) of performance for each position; appropriate indicators and scales; and a personal-improvement plan based on last year's evaluation. Once CSFs are identified and written into the…

  12. Approval Motive and Academic Behaviors: The Self Reinforcement Hypothesis

    ERIC Educational Resources Information Center

    Matell, Michael S.; Smith, Ronald E.

    1970-01-01

    Testing of college students in differing conditions as to performance being relevant to academic achievement goals revealed that under high relevance conditions scores on the Marlowe Crowne Social Desirability Scale were unrelated to test performance. Under low relevance conditions, the need for approval was highly related to performance in high…

  13. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  14. A modeling understanding on the phosphorous removal performances of A2O and reversed A2O processes in a full-scale wastewater treatment plant.

    PubMed

    Xie, Wen-Ming; Zeng, Raymond J; Li, Wen-Wei; Wang, Guo-Xiang; Zhang, Li-Min

    2018-05-31

    Reversed A2O process (anoxic-anaerobic-aerobic) and conventional A2O process (anaerobic-anoxic-aerobic) are widely used in wastewater treatment plants (WWTPs) in Asia. However, there is still no consensus on which process achieves better total phosphorus (TP) removal, and the mechanism behind the difference remains unclear. In this study, the treatment performances of both processes were compared in the same full-scale WWTP, and the TP removal dynamics were analyzed by a modeling method. The full-scale results showed that TP removal in the reversed A2O process was more efficient than in the conventional A2O process. The modeling results further reveal that TP removal depends highly on the concentration and composition of the influent COD; the reversed A2O process removed TP more efficiently than the conventional A2O process only under conditions of sufficient influent COD and high fermentation-product content. This study may lay a foundation for the appropriate selection and optimization of treatment processes to suit practical wastewater properties.

  15. Dynamic Performance of High Bypass Ratio Turbine Engines With Water Ingestion

    NASA Technical Reports Server (NTRS)

    Murthy, S. N. B.

    1996-01-01

    The research on dynamic performance of high bypass turbofan engines includes studies on inlets, turbomachinery and the total engine system operating with air-water mixture; the water may be in vapor, droplet, or film form, and their combinations. Prediction codes (WISGS, WINCOF, WINCOF-1, WINCLR, and Transient Engine Performance Code) for performance changes, as well as changes in blade-casing clearance, have been established and demonstrated in application to actual, generic engines. In view of the continuous changes in water distribution in turbomachinery, the performance of both components and the total engine system must be determined in a time-dependent mode; hence, the determination of clearance changes also requires a time-dependent approach. In general, the performance and clearances changes cannot be scaled either with respect to operating or ingestion conditions. Removal of water prior to phase change is the most effective means of avoiding ingestion effects. Sufficient background has been established to perform definitive, full scale tests on a set of components and a complete engine to establish engine control and operability with various air-water vapor-water mixtures.

  16. Classroom Environment as Related to Contest Ratings among High School Performing Ensembles.

    ERIC Educational Resources Information Center

    Hamann, Donald L.; And Others

    1990-01-01

    Examines influence of classroom environments, measured by the Classroom Environment Scale, Form R (CESR), on vocal and instrumental ensembles' musical achievement at festival contests. Using random sample, reveals subjects with higher scores on CESR scales of involvement, affiliation, teacher support, and organization received better contest…

  17. SOLVENT EXTRACTION AND SOIL WASHING TREATMENT OF CONTAMINATED SOILS FROM WOOD PRESERVING SITES: BENCH SCALE STUDIES

    EPA Science Inventory

    Bench-scale solvent extraction and soil washing studies were performed on soil samples obtained from three abandoned wood preserving sites that included in the NPL. The soil samples from these sites were contaminated with high levels of polyaromatic hydrocarbons (PAHs), pentachlo...

  18. Optimizing hydraulic retention times in denitrifying woodchip bioreactors treating recirculating aquaculture system wastewater

    USDA-ARS?s Scientific Manuscript database

    The performance of wood-based denitrifying bioreactors to treat high-nitrate wastewaters from aquaculture systems has not previously been demonstrated. Four pilot-scale woodchip bioreactors (approximately 1:10 scale) were constructed and operated for 268 d to determine the optimal range of design hy...

  19. Los Alamos Explosives Performance Key to Stockpile Stewardship

    ScienceCinema

    Dattelbaum, Dana

    2018-02-14

    As the U.S. Nuclear Deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives that are used in most weapons. At Los Alamos National Laboratory explosives research includes a wide variety of both large- and small-scale experiments that include small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and at the Nevada Nuclear Security Site, underground sub-critical experiments.

  20. Dimensionality and predictive validity of the HAM-Nat, a test of natural sciences for medical school admission

    PubMed Central

    2011-01-01

    Background Knowledge in natural sciences generally predicts study performance in the first two years of the medical curriculum. In order to reduce delay and dropout in the preclinical years, Hamburg Medical School decided to develop a natural science test (HAM-Nat) for student selection. In the present study, two different approaches to scale construction are presented: a unidimensional scale and a scale composed of three subject specific dimensions. Their psychometric properties and relations to academic success are compared. Methods 334 first year medical students of the 2006 cohort responded to 52 multiple choice items from biology, physics, and chemistry. For the construction of scales we generated two random subsamples, one for development and one for validation. In the development sample, unidimensional item sets were extracted from the item pool by means of weighted least squares (WLS) factor analysis, and subsequently fitted to the Rasch model. In the validation sample, the scales were subjected to confirmatory factor analysis and, again, Rasch modelling. The outcome measure was academic success after two years. Results Although the correlational structure within the item set is weak, a unidimensional scale could be fitted to the Rasch model. However, psychometric properties of this scale deteriorated in the validation sample. A model with three highly correlated subject specific factors performed better. All summary scales predicted academic success with an odds ratio of about 2.0. Prediction was independent of high school grades and there was a slight tendency for prediction to be better in females than in males. Conclusions A model separating biology, physics, and chemistry into different Rasch scales seems to be more suitable for item bank development than a unidimensional model, even when these scales are highly correlated and enter into a global score. When such a combination scale is used to select the upper quartile of applicants, the proportion of successful completion of the curriculum after two years is expected to rise substantially. PMID:21999767

  1. Dimensionality and predictive validity of the HAM-Nat, a test of natural sciences for medical school admission.

    PubMed

    Hissbach, Johanna C; Klusmann, Dietrich; Hampe, Wolfgang

    2011-10-14

    Knowledge in natural sciences generally predicts study performance in the first two years of the medical curriculum. In order to reduce delay and dropout in the preclinical years, Hamburg Medical School decided to develop a natural science test (HAM-Nat) for student selection. In the present study, two different approaches to scale construction are presented: a unidimensional scale and a scale composed of three subject specific dimensions. Their psychometric properties and relations to academic success are compared. 334 first year medical students of the 2006 cohort responded to 52 multiple choice items from biology, physics, and chemistry. For the construction of scales we generated two random subsamples, one for development and one for validation. In the development sample, unidimensional item sets were extracted from the item pool by means of weighted least squares (WLS) factor analysis, and subsequently fitted to the Rasch model. In the validation sample, the scales were subjected to confirmatory factor analysis and, again, Rasch modelling. The outcome measure was academic success after two years. Although the correlational structure within the item set is weak, a unidimensional scale could be fitted to the Rasch model. However, psychometric properties of this scale deteriorated in the validation sample. A model with three highly correlated subject specific factors performed better. All summary scales predicted academic success with an odds ratio of about 2.0. Prediction was independent of high school grades and there was a slight tendency for prediction to be better in females than in males. A model separating biology, physics, and chemistry into different Rasch scales seems to be more suitable for item bank development than a unidimensional model, even when these scales are highly correlated and enter into a global score. When such a combination scale is used to select the upper quartile of applicants, the proportion of successful completion of the curriculum after two years is expected to rise substantially.
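
    As a worked illustration of the odds ratios of about 2.0 reported above: in logistic regression, the odds ratio per unit of a predictor is exp(β). A sketch on synthetic data, not the HAM-Nat cohort:

```python
# Sketch of how an odds ratio like the ~2.0 reported above arises from
# logistic regression: the OR per unit (here, per SD) of a test score is
# exp(beta). Toy data only; not the HAM-Nat cohort.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
score = rng.standard_normal(300)                   # z-scored test score
p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.7 * score)))    # true success probability
success = rng.binomial(1, p)                       # passed after two years?

model = sm.Logit(success, sm.add_constant(score)).fit(disp=0)
print(f"odds ratio per SD = {np.exp(model.params[1]):.2f}")
```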

  2. Topological Properties of Some Integrated Circuits for Very Large Scale Integration Chip Designs

    NASA Astrophysics Data System (ADS)

    Swanson, S.; Lanzerotti, M.; Vernizzi, G.; Kujawski, J.; Weatherwax, A.

    2015-03-01

    This talk presents topological properties of integrated circuits for Very Large Scale Integration chip designs. These circuits can be implemented in very large scale integrated circuits, such as those in high performance microprocessors. Prior work considered basic combinational logic functions and produced a mathematical framework based on algebraic topology for integrated circuits composed of logic gates. Prior work also produced an historically-equivalent interpretation of Mr. E. F. Rent's work for today's complex circuitry in modern high performance microprocessors, where a heuristic linear relationship was observed between the number of connections and number of logic gates. This talk will examine topological properties and connectivity of more complex functionally-equivalent integrated circuits. The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the U.S. Government.

  3. A high-performance dual-scale porous electrode for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    Zhou, X. L.; Zeng, Y. K.; Zhu, X. B.; Wei, L.; Zhao, T. S.

    2016-09-01

    In this work, we present a simple and cost-effective method to form a dual-scale porous electrode by KOH activation of the fibers of carbon papers. The large pores (∼10 μm), formed between carbon fibers, serve as the macroscopic pathways for high electrolyte flow rates, while the small pores (∼5 nm), formed on carbon fiber surfaces, act as active sites for rapid electrochemical reactions. It is shown that the Brunauer-Emmett-Teller specific surface area of the carbon paper is increased by a factor of 16 while maintaining the same hydraulic permeability as that of the original carbon paper electrode. We then apply the dual-scale electrode to a vanadium redox flow battery (VRFB) and demonstrate an energy efficiency ranging from 82% to 88% at current densities of 200-400 mA cm-2, which is record breaking as the highest performance of VRFB in the open literature.

  4. PetIGA: A framework for high-performance isogeometric analysis

    DOE PAGES

    Dalcin, Lisandro; Collier, Nathaniel; Vignal, Philippe; ...

    2016-05-25

    We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. Lastly, we show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large scale simulations.

  5. Users matter : multi-agent systems model of high performance computing cluster users.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Hood, C. S.; Decision and Information Sciences

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  6. Precarious employment in Chile: psychometric properties of the Chilean version of Employment Precariousness Scale in private sector workers.

    PubMed

    Vives-Vergara, Alejandra; González-López, Francisca; Solar, Orielle; Bernales-Baksai, Pamela; González, María José; Benach, Joan

    2017-04-20

    The purpose of this study is to perform a psychometric analysis (acceptability, reliability and factor structure) of the Chilean version of the new Employment Precariousness Scale (EPRES). The data is drawn from a sample of 4,248 private salaried workers with a formal contract from the first Chilean Employment Conditions, Work, Health and Quality of Life (ENETS) survey, applied to a nationally representative sample of the Chilean workforce in 2010. Item and scale-level statistics were performed to assess scaling properties, acceptability and reliability. The six-dimensional factor structure was examined with confirmatory factor analysis. The scale exhibited high acceptability (roughly 80%) and reliability (Cronbach's alpha 0.83) and the factor structure was confirmed. One subscale (rights) demonstrated poorer metric properties without compromising the overall scale. The Chilean version of the Employment Precariousness Scale (EPRES-Ch) demonstrated good metric properties, pointing to its suitability for use in epidemiologic and public health research.
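
    A minimal sketch of the reliability coefficient reported above (Cronbach's alpha = 0.83), computed from its standard formula on toy item scores rather than the EPRES data:

```python
# Minimal sketch of Cronbach's alpha from an items-by-respondents score
# matrix, using the standard formula. Toy data only, not the EPRES survey.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.standard_normal((200, 1))                  # common factor
scores = latent + 0.8 * rng.standard_normal((200, 6))   # 6 correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```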

  7. Validation of the group nuclear safety climate questionnaire.

    PubMed

    Navarro, M Felisa Latorre; Gracia Lerín, Francisco J; Tomás, Inés; Peiró Silla, José María

    2013-09-01

    Group safety climate is a leading indicator of safety performance in high reliability organizations. Zohar and Luria (2005) developed a Group Safety Climate scale (ZGSC) and found it to have a single factor. The ZGSC scale was used as a basis in this study with the researchers rewording almost half of the items on this scale, changing the referents from the leader to the group, and trying to validate a two-factor scale. The sample was composed of 566 employees in 50 groups from a Spanish nuclear power plant. Item analysis, reliability, correlations, aggregation indexes and CFA were performed. Results revealed that the construct was shared by each unit, and our reworded Group Safety Climate (GSC) scale showed a one-factor structure and correlated to organizational safety climate, formalized procedures, safety behavior, and time pressure. This validation of the one-factor structure of the Zohar and Luria (2005) scale could strengthen and spread this scale and measure group safety climate more effectively. Copyright © 2013 National Safety Council and Elsevier Ltd. All rights reserved.

  8. Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.

    PubMed

    Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L

    2015-07-01

    Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless system for the objective assessment of surgical performance has proven valid for assessing arthroscopic skills. The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  9. Structural performance of light-frame roof assemblies. I, Truss assemblies designed for high variability and wood failure

    Treesearch

    R.W. Wolfe; Monica McCarthy

    1989-01-01

    The first report of a three-part series that covers results of a full-scale roof assemblies research program. The focus of this report is the structural performance of truss assemblies comprising trusses with abnormally high stiffness variability and critical joint strength. Results discussed include properties of truss members and connections. individual truss...

  10. High subsonic flow tests of a parallel pipe followed by a large area ratio diffuser

    NASA Technical Reports Server (NTRS)

    Barna, P. S.

    1975-01-01

    Experiments were performed on a pilot model duct system in order to explore its aerodynamic characteristics. The model was scaled from a design projected for the high speed operation mode of the Aircraft Noise Reduction Laboratory. The test results show that the model performed satisfactorily and therefore the projected design will most likely meet the specifications.

  11. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    PubMed Central

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-01-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner. PMID:26169570

  12. Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale

    NASA Astrophysics Data System (ADS)

    Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv

    2015-07-01

    X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner.
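
    The core numerical step of a wave-optics XPCI simulator of this kind is free-space propagation of a complex field. A generic sketch using the textbook angular spectrum method, not the authors' code:

```python
# Generic sketch of the core wave-optics step behind phase-contrast
# simulation: free-space propagation of a complex field by the angular
# spectrum method. A textbook technique, not the authors' simulator.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex field a distance z (units matching dx)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                 # spatial frequencies
    fy = np.fft.fftfreq(ny, d=dx)
    fx2, fy2 = np.meshgrid(fx**2, fy**2, indexing="xy")
    arg = 1.0 - (wavelength**2) * (fx2 + fy2)
    # Propagating waves get a phase factor; evanescent waves are dropped.
    h = np.where(arg > 0,
                 np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Example: a weak random phase object at ~30 keV (wavelength ~0.41 Angstrom)
rng = np.random.default_rng(2)
field = np.exp(1j * 0.1 * rng.standard_normal((256, 256)))
intensity = np.abs(angular_spectrum_propagate(field, 4.1e-11, 1e-6, 0.5))**2
```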

  13. Situation Awareness and Workload Measures for SAFOR

    NASA Technical Reports Server (NTRS)

    DeMaio, Joe; Hart, Sandra G.; Allen, Ed (Technical Monitor)

    1999-01-01

    The present research was conducted in support of the NASA Safe All-Weather Flight Operations for Rotorcraft (SAFOR) program. The purpose of the work was to investigate the utility of two measurement tools developed by the British Defense Evaluation Research Agency. These tools were a subjective workload assessment scale, the DRA Workload Scale (DRAWS), and a situation awareness measurement tool in which the crew's self-evaluation of performance is compared against actual performance. These two measurement tools were evaluated in the context of a test of an innovative approach to alerting the crew by way of a helmet-mounted display. The DRAWS was found to be usable, but it offered no advantages over extant scales, and it had only limited resolution. The performance self-evaluation metric of situation awareness was found to be highly effective.

  14. Development of an objective assessment tool for total laparoscopic hysterectomy: A Delphi method among experts and evaluation on a virtual reality simulator

    PubMed Central

    Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane

    2018-01-01

    Introduction Total Laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific for the assessment of technical skills for LH (H-OSATS) and to demonstrate feasibility of use and validity in a virtual reality setting. Material and methods The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on relevant steps that should be included into the H-OSATS scale for assessment of operative performances. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. Results A total of 76 discrete steps were identified by the hierarchical task analysis. 14 experts completed the two rounds of the Delphi questionnaire. 64 steps reached consensus and were integrated in the scale. During the validation process, median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced group for total H-OSATS scores (133, 155.9 and 178.25 respectively; p = 0.002). H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). Conclusion The H-OSATS scale displayed evidence of validity for assessment of technical performances for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice. Next steps should focus on evaluating the validity of the scale in the operating room. PMID:29293635

  15. Performance of a commercial industrial-scale UF-based process for treatment of oily wastewaters.

    PubMed

    Karhu, M; Kuokkanen, T; Rämö, J; Mikola, M; Tanskanen, J

    2013-10-15

    An evaluation was made of the performance of a commercial industrial-scale ultrafiltration (UF)-based process for treatment of highly concentrated oily wastewaters. Wastewater samples were gathered from two plants treating industrial wastewaters in 2008, and in 2011 (only from one of the plants), from three points of a UF-based treatment train. The wastewater samples were analyzed by measuring the BOD7, COD, TOC and total surface charge (TSC). The inorganic content and zeta potentials of the samples were analyzed and GC-FID/MS analyses were performed. The removal performances of BOD7, COD, TOC and TSC in 2008 and 2011 for both plants were very high. Initial concentrations of contaminants in 2011 were lower than in 2008, therefore the COD and TSC reductions were also lower in 2011 than three years before. Regardless of the high performance of UF-based processes in both plants, at times the residual concentrations were considerable. This could be explained by the high initial concentrations and also by the presence of the dissolved compounds that were characterized. Linear correlation was observed between COD and TOC, and between COD and TSC. The correlation between COD and TSC could be utilized for process control purposes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Peter Andrew

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This Report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  17. High nitrogen-containing cotton derived 3D porous carbon frameworks for high-performance supercapacitors

    NASA Astrophysics Data System (ADS)

    Fan, Li-Zhen; Chen, Tian-Tian; Song, Wei-Li; Li, Xiaogang; Zhang, Shichao

    2015-10-01

    Supercapacitors fabricated from 3D porous carbon frameworks, such as graphene- and carbon nanotube (CNT)-based aerogels, have been highly attractive due to their various advantages. However, their high cost and insufficient yield have inhibited their large-scale application. Here we demonstrate a facile and easily scalable approach for the large-scale preparation of novel 3D nitrogen-containing porous carbon frameworks from ultralow-cost commercial cotton. Electrochemical measurements suggest that the optimal nitrogen-containing cotton-derived carbon frameworks, with a high nitrogen content (12.1 mol%) and a low surface area of 285 m2 g-1, present high specific capacities of 308 and 200 F g-1 in KOH electrolyte at current densities of 0.1 and 10 A g-1, respectively, with very limited capacitance loss over 10,000 cycles in both aqueous and gel electrolytes. Moreover, the electrode exhibits its highest capacitance, up to 220 F g-1 at 0.1 A g-1, and excellent flexibility (with negligible capacitance loss at different bending angles) in the polyvinyl alcohol/KOH gel electrolyte. The observed excellent performance competes well with that of electrodes based on similar 3D frameworks formed from graphene or CNTs. The ultralow-cost and simple strategy demonstrated here therefore shows great potential for the scalable production of high-performance carbon-based supercapacitors in industry.

  18. High nitrogen-containing cotton derived 3D porous carbon frameworks for high-performance supercapacitors.

    PubMed

    Fan, Li-Zhen; Chen, Tian-Tian; Song, Wei-Li; Li, Xiaogang; Zhang, Shichao

    2015-10-16

    Supercapacitors fabricated from 3D porous carbon frameworks, such as graphene- and carbon nanotube (CNT)-based aerogels, have been highly attractive due to their various advantages. However, their high cost and insufficient yield have inhibited their large-scale application. Here we demonstrate a facile and easily scalable approach for the large-scale preparation of novel 3D nitrogen-containing porous carbon frameworks from ultralow-cost commercial cotton. Electrochemical measurements suggest that the optimal nitrogen-containing cotton-derived carbon frameworks, with a high nitrogen content (12.1 mol%) and a low surface area of 285 m(2) g(-1), present high specific capacities of 308 and 200 F g(-1) in KOH electrolyte at current densities of 0.1 and 10 A g(-1), respectively, with very limited capacitance loss over 10,000 cycles in both aqueous and gel electrolytes. Moreover, the electrode exhibits its highest capacitance, up to 220 F g(-1) at 0.1 A g(-1), and excellent flexibility (with negligible capacitance loss at different bending angles) in the polyvinyl alcohol/KOH gel electrolyte. The observed excellent performance competes well with that of electrodes based on similar 3D frameworks formed from graphene or CNTs. The ultralow-cost and simple strategy demonstrated here therefore shows great potential for the scalable production of high-performance carbon-based supercapacitors in industry.

  19. High nitrogen-containing cotton derived 3D porous carbon frameworks for high-performance supercapacitors

    PubMed Central

    Fan, Li-Zhen; Chen, Tian-Tian; Song, Wei-Li; Li, Xiaogang; Zhang, Shichao

    2015-01-01

    Supercapacitors fabricated from 3D porous carbon frameworks, such as graphene- and carbon nanotube (CNT)-based aerogels, have been highly attractive due to their various advantages. However, their high cost and insufficient yield have inhibited their large-scale application. Here we demonstrate a facile and easily scalable approach for the large-scale preparation of novel 3D nitrogen-containing porous carbon frameworks from ultralow-cost commercial cotton. Electrochemical measurements suggest that the optimal nitrogen-containing cotton-derived carbon frameworks, with a high nitrogen content (12.1 mol%) and a low surface area of 285 m2 g−1, present high specific capacities of 308 and 200 F g−1 in KOH electrolyte at current densities of 0.1 and 10 A g−1, respectively, with very limited capacitance loss over 10,000 cycles in both aqueous and gel electrolytes. Moreover, the electrode exhibits its highest capacitance, up to 220 F g−1 at 0.1 A g−1, and excellent flexibility (with negligible capacitance loss at different bending angles) in the polyvinyl alcohol/KOH gel electrolyte. The observed excellent performance competes well with that of electrodes based on similar 3D frameworks formed from graphene or CNTs. The ultralow-cost and simple strategy demonstrated here therefore shows great potential for the scalable production of high-performance carbon-based supercapacitors in industry. PMID:26472144

  20. Selective Dry Etch for Defining Ohmic Contacts for High Performance ZnO TFTs

    DTIC Science & Technology

    2014-03-27

    scale, high-frequency ZnO thin-film transistors (TFTs) could be fabricated. Molybdenum, tantalum, titanium tungsten 10-90, and tungsten metallic contact… [only this fragment of the abstract is recoverable; the remainder of the record is front-matter residue from the thesis: a figure list, an acronym glossary, and the title page]

  1. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DOE PAGES

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian; ...

    2015-02-25

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.

  2. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
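
    For reference, the big-leaf light-use-efficiency logic of a MOD17-type model multiplies a maximum efficiency by temperature and humidity scalars and by absorbed radiation. A sketch with illustrative ramp parameters, not the biome-calibrated MOD17 values:

```python
# Sketch of the big-leaf light-use-efficiency logic behind MOD17-type
# models discussed above: GPP = eps_max * f(Tmin) * f(VPD) * fPAR * PAR.
# Ramp endpoints below are illustrative, not MOD17 biome parameters.

def ramp(x, x0, x1):
    """Linear scalar: 0 at/below x0, 1 at/above x1."""
    return min(max((x - x0) / (x1 - x0), 0.0), 1.0)

def gpp_big_leaf(par, fpar, tmin_c, vpd_pa, eps_max=1.0):
    """GPP [g C m-2] per timestep; par in MJ m-2, eps_max in g C MJ-1."""
    t_scalar = ramp(tmin_c, -8.0, 10.0)              # cold-temperature limit
    vpd_scalar = 1.0 - ramp(vpd_pa, 650.0, 4000.0)   # atmospheric dryness limit
    return eps_max * t_scalar * vpd_scalar * fpar * par

print(gpp_big_leaf(par=10.0, fpar=0.7, tmin_c=12.0, vpd_pa=900.0))
```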

  3. Suggestibility and signal detection performance in hallucination-prone students.

    PubMed

    Alganami, Fatimah; Varese, Filippo; Wagstaff, Graham F; Bentall, Richard P

    2017-03-01

    Auditory hallucinations are associated with signal detection biases. We examine the extent to which suggestions influence performance on a signal detection task (SDT) in highly hallucination-prone and low hallucination-prone students. We also explore the relationship between trait suggestibility, dissociation and hallucination proneness. In two experiments, students completed on-line measures of hallucination proneness (the revised Launay-Slade Hallucination Scale; LSHS-R), trait suggestibility (Inventory of Suggestibility) and dissociation (Dissociative Experiences Scale-II). Students in the upper and lower tertiles of the LSHS-R performed an auditory SDT. Prior to the task, suggestions were made pertaining to the number of expected targets (Experiment 1, N = 60: high vs. low suggestions; Experiment 2, N = 62, no suggestion vs. high suggestion vs. no voice suggestion). Correlational and regression analyses indicated that trait suggestibility and dissociation predicted hallucination proneness. Highly hallucination-prone students showed a higher SDT bias in both studies. In Experiment 1, both bias scores were significantly affected by suggestions to the same degree. In Experiment 2, highly hallucination-prone students were more reactive to the high suggestion condition than the controls. Suggestions may affect source-monitoring judgments, and this effect may be greater in those who have a predisposition towards hallucinatory experiences.
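
    A minimal sketch of the standard signal detection indices behind the bias measures discussed above, sensitivity d′ and the criterion c, computed from hit and false-alarm rates on toy counts:

```python
# Standard signal detection indices: d' = z(H) - z(FA) and criterion
# c = -(z(H) + z(FA)) / 2, from hit and false-alarm rates (toy numbers).
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    h = hits / (hits + misses)                        # hit rate
    fa = false_alarms / (false_alarms + correct_rejections)  # false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(fa)
    criterion = -(norm.ppf(h) + norm.ppf(fa)) / 2.0   # < 0 = liberal bias
    return d_prime, criterion

d, c = sdt_indices(hits=38, misses=12, false_alarms=14, correct_rejections=36)
print(f"d' = {d:.2f}, c = {c:.2f}")
```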

  4. Multi-Scale Microstructural Thermoelectric Materials: Transport Behavior, Non-Equilibrium Preparation, and Applications.

    PubMed

    Su, Xianli; Wei, Ping; Li, Han; Liu, Wei; Yan, Yonggao; Li, Peng; Su, Chuqi; Xie, Changjun; Zhao, Wenyu; Zhai, Pengcheng; Zhang, Qingjie; Tang, Xinfeng; Uher, Ctirad

    2017-05-01

    Considering that only about one third of the world's energy consumption is effectively utilized for functional uses, while the remainder is dissipated as waste heat, thermoelectric (TE) materials, which offer a direct and clean thermal-to-electric conversion pathway, have generated tremendous worldwide interest. The last two decades have witnessed remarkable development in TE materials. This Review summarizes the efforts devoted to the study of non-equilibrium synthesis of TE materials with multi-scale structures, their transport behavior, and their areas of application. Studies that work towards the ultimate goal of developing highly efficient TE materials possessing multi-scale architectures are highlighted, encompassing the optimization of TE performance via engineering of structures with different dimensional aspects, spanning from the atomic and molecular scales, to nanometer sizes, and to the mesoscale. In consideration of the practical applications of high-performance TE materials, the non-equilibrium approaches offer fast and controllable fabrication of multi-scale microstructures, and their scale-up to industrial-size manufacturing is emphasized here. Finally, the designs of two integrated power-generating TE systems are described: a solar thermoelectric-photovoltaic hybrid system and a vehicle waste-heat harvesting system, which represent perhaps the most important applications of thermoelectricity in the energy conversion area. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
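
    The "TE performance" being optimized above is conventionally quantified by the dimensionless figure of merit ZT = S²σT/κ (a standard definition, not stated in the abstract). A one-function sketch with illustrative values:

```python
# Conventional thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa
# (S: Seebeck coefficient, sigma: electrical conductivity, kappa: thermal
# conductivity). The example values below are illustrative only.

def zt(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
    return seebeck_v_per_k**2 * sigma_s_per_m * temp_k / kappa_w_per_mk

# e.g. a Bi2Te3-like material near room temperature
print(f"ZT ~ {zt(220e-6, 1.0e5, 1.5, 300.0):.2f}")
```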

  5. Heavy hydrocarbon main injector technology

    NASA Technical Reports Server (NTRS)

    Fisher, S. C.; Arbit, H. A.

    1988-01-01

    One of the key components of the Advanced Launch System (ALS) is a large liquid-rocket booster engine. To keep the overall vehicle size and cost down, this engine will probably use liquid oxygen (LOX) and a heavy hydrocarbon, such as RP-1, as propellants and operate at relatively high chamber pressures to increase overall performance. A technology program (Heavy Hydrocarbon Main Injector Technology) is being conducted. The main objective of this effort is to develop a logic plan and supporting experimental database to reduce the risk of developing a large scale (approximately 750,000 lb thrust), high performance main injector system. The overall approach and program plan, from initial analyses to large scale, two dimensional combustor design and test, and the current status of the program are discussed. Progress includes performance and stability analyses, cold flow tests of an injector model, design and fabrication of subscale injectors and calorimeter combustors for performance, heat transfer, and dynamic stability tests, and preparation of hot fire test plans. Related, current, high pressure, LOX/RP-1 injector technology efforts are also briefly discussed.

  6. Nitrification performance and microbial ecology of nitrifying bacteria in a full-scale membrane bioreactor treating TFT-LCD wastewater.

    PubMed

    Whang, Liang-Ming; Wu, Yi-Ju; Lee, Ya-Chin; Chen, Hong-Wei; Fukushima, Toshikazu; Chang, Ming-Yu; Cheng, Sheng-Shung; Hsu, Shu-Fu; Chang, Cheng-Huey; Shen, Wason; Huang, Chung Kai; Fu, Ryan; Chang, Barkley

    2012-10-01

    This study investigated nitrification performance and nitrifying community in one full-scale membrane bioreactor (MBR) treating TFT-LCD wastewater. For the A/O MBR system treating monoethanolamine (MEA) and dimethyl sulfoxide (DMSO), no nitrification was observed, due presumably to high organic loading, high colloidal COD, low DO, and low hydraulic retention time (HRT) conditions. By including additional A/O or O/A tanks, the A/O/A/O MBR and the O/A/O MBR were able to perform successful nitrification. The real-time PCR results for quantification of nitrifying populations showed a high correlation to nitrification performance, and can be a good indicator of stable nitrification. Terminal restriction fragment length polymorphism (T-RFLP) results of functional gene, amoA, suggest that Nitrosomonas oligotropha-like AOB seemed to be important to a good nitrification in the MBR system. In the MBR system, Nitrobacter- and Nitrospira-like NOB were both abundant, but the low nitrite environment is likely to promote the growth of Nitrospira-like NOB. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Field of genes: using Apache Kafka as a bioinformatic data repository

    PubMed Central

    Lynch, Richard; Walsh, Paul

    2018-01-01

    Background Bioinformatic research is increasingly dependent on large-scale datasets, accessed either from private or public repositories. An example of a public repository is National Center for Biotechnology Information's (NCBI's) Reference Sequence (RefSeq). These repositories must decide in what form to make their data available. Unstructured data can be put to almost any use but are limited in how access to them can be scaled. Highly structured data offer improved performance for specific algorithms but limit the wider usefulness of the data. We present an alternative: lightly structured data stored in Apache Kafka in a way that is amenable to parallel access and streamed processing, including subsequent transformations into more highly structured representations. We contend that this approach could provide a flexible and powerful nexus of bioinformatic data, bridging the gap between low structure on one hand, and high performance and scale on the other. To demonstrate this, we present a proof-of-concept version of NCBI's RefSeq database using this technology. We measure the performance and scalability characteristics of this alternative with respect to flat files. Results The proof of concept scales almost linearly as more compute nodes are added, outperforming the standard approach using files. Conclusions Apache Kafka merits consideration as a fast and more scalable but general-purpose way to store and retrieve bioinformatic data, for public, centralized reference datasets such as RefSeq and for private clinical and experimental data. PMID:29635394
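
    A minimal sketch of the idea, using the kafka-python client with a local broker and a hypothetical topic name: one lightly structured record per message, keyed by accession, so downstream consumers can read in parallel and stream transformations:

```python
# Minimal sketch of the paper's idea: sequence records stored as messages
# in a Kafka topic for parallel, streamed access. The broker address and
# topic name are placeholders; this uses the kafka-python client.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# One lightly structured record per message: accession as key, sequence as value.
producer.send("refseq", key=b"NC_000913.3", value=b"AGCTTTTCATTCTGACTGCA...")
producer.flush()

consumer = KafkaConsumer("refseq",
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest")
for msg in consumer:
    print(msg.key, len(msg.value))  # downstream transforms stream from here
    break
```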

  8. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  9. Low Noise Exhaust Nozzle Technology Development

    NASA Technical Reports Server (NTRS)

    Majjigi, R. K.; Balan, C.; Mengle, V.; Brausch, J. F.; Shin, H.; Askew, J. W.

    2005-01-01

    NASA and the U.S. aerospace industry have been assessing the economic viability and environmental acceptability of a second-generation supersonic civil transport, or High Speed Civil Transport (HSCT). Development of a propulsion system that satisfies strict airport noise regulations and provides high levels of cruise and transonic performance with adequate takeoff performance, at an acceptable weight, is critical to the success of any HSCT program. The principal objectives were to: 1. Develop a preliminary design of an innovative 2-D exhaust nozzle with the goal of meeting FAR36 Stage III noise levels and providing high levels of cruise performance with high specific thrust for a Mach 2.4 HSCT with a range of 5000 nmi and a payload of 51,900 lbm; 2. Employ advanced acoustic and aerodynamic codes during preliminary design; 3. Develop a comprehensive acoustic and aerodynamic database through scale-model testing of low-noise, high-performance 2-D nozzle configurations, based on the preliminary design; and 4. Verify acoustic and aerodynamic predictions by means of scale-model testing. The results were: 1. The preliminary design of a 2-D, convergent/divergent suppressor ejector nozzle for a variable-cycle-engine-powered, Mach 2.4 HSCT was evolved; 2. Noise goals were predicted to be achievable for three takeoff scenarios; and 3. The impact of noise suppression, nozzle aerodynamic performance, and nozzle weight on HSCT takeoff gross weight was assessed.

  10. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance, high-reliability computing system for processing large-scale data with common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, which resemble mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration under the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system that accounts for node diversity, algorithm characteristics, etc. The results show that the performance of the algorithms increased significantly, by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell respectively with the TLPOM, and that the RGCA ensures our IoT computing system provides low-cost and high-reliability services.

  11. High-Performance Cryogenic Designs for OMEGA and the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Goncharov, V. N.; Collins, T. J. B.; Marozas, J. A.; Regan, S. P.; Betti, R.; Boehly, T. R.; Campbell, E. M.; Froula, D. H.; Igumenshchev, I. V.; McCrory, R. L.; Myatt, J. F.; Radha, P. B.; Sangster, T. C.; Shvydky, A.

    2016-10-01

    The main advantage of laser symmetric direct drive (SDD) is significantly higher coupling of drive laser energy into hot-spot internal energy at stagnation compared with laser indirect drive. Because of coupling losses resulting from cross-beam energy transfer (CBET), however, reaching ignition conditions on the NIF with SDD requires designs with excessively large in-flight aspect ratios (~30). Results of cryogenic implosions performed on OMEGA show that such designs are unstable to short-scale nonuniformity growth during shell implosion. Several CBET-reduction strategies have been proposed in the past. This talk will discuss high-performing designs using several CBET-mitigation techniques, including drive laser beams smaller than the target size and wavelength detuning. Designs that are predicted to reach alpha-burning regimes as well as a gain of 10 to 40 at the NIF scale will be presented. Hydrodynamically scaled OMEGA designs with similar CBET-reduction techniques will also be discussed. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  12. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition.

    PubMed

    Wang, Runchun; Thakur, Chetan Singh; Cohen, Gregory; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, Andre

    2017-06-01

    We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for massively parallel real-time pattern recognition. The NEF is a framework capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was based on a compact digital neural core consisting of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores. As a proof of concept, we developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a high-speed, resource-efficient means of performing neuromorphic, massively parallel pattern recognition and classification tasks.
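
    The time-multiplexing idea, one physical neuron circuit updating many stored neuron states in turn, can be sketched in software. The leaky integrate-and-fire update below is a hypothetical stand-in for the paper's NEF neuron core; the point is only that 64 stored states share a single update path, as in the compact digital core described.

        import numpy as np

        N = 64                      # neuron states sharing one physical update path
        v = np.zeros(N)             # stored membrane potentials
        decay, threshold = 0.9, 1.0

        def physical_neuron(v_i, input_i):
            """One shared update circuit: leaky integration plus threshold/reset."""
            v_i = decay * v_i + input_i
            spike = v_i >= threshold
            return (0.0 if spike else v_i), spike

        rng = np.random.default_rng(0)
        for tick in range(100):
            inputs = rng.normal(0.05, 0.1, N)
            for i in range(N):      # time-multiplex: visit each stored state in turn
                v[i], spiked = physical_neuron(v[i], inputs[i])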

  13. Hybrid fuel formulation and technology development

    NASA Technical Reports Server (NTRS)

    Dean, D. L.

    1995-01-01

    The objective was to develop an improved hybrid fuel with a higher regression rate, a regression rate expression exponent close to 0.5, lower cost, and higher density. The approach was to formulate candidate fuels based on promising concepts, perform thermomechanical analyses to select the most promising candidates, develop laboratory processes to fabricate fuel grains as needed, fabricate fuel grains and test them in a small lab-scale motor, select the best candidate, and then scale up and validate performance in a 2500-lbf-class, 11-inch-diameter motor. The characteristics of a high-performance fuel have been verified in 11-inch motor testing. The advanced fuel exhibits a 15% increase in density over an all-hydrocarbon formulation accompanied by a 50% increase in regression rate, which when multiplied by the increase in density yields a 70% increase in fuel mass flow rate; has a significantly lower oxidizer-to-fuel (O/F) ratio requirement at 1.5; has a significantly decreased axial regression rate variation, making for more uniform propellant flow throughout motor operation; is very clean burning; extinguishes cleanly and quickly; and burns with high combustion efficiency.

  14. Large voltage modulation in superconducting quantum interference devices with submicron-scale step-edge junctions

    NASA Astrophysics Data System (ADS)

    Lam, Simon K. H.

    2017-09-01

    A promising direction for improving the sensitivity of a SQUID is to increase its junction normal resistance value, Rn, as the SQUID modulation voltage scales linearly with Rn. As a first step toward developing highly sensitive single-layer SQUIDs, submicron-scale YBCO grain boundary step-edge junctions and SQUIDs with large Rn were fabricated and studied. The step-edge junctions were reduced to submicron scale to increase their Rn values using a focused ion beam (FIB), and transport properties were measured from 4.3 to 77 K. The FIB-induced deposition layer proved effective in minimizing Ga ion contamination during the FIB milling process. The critical current-normal resistance product of the submicron junctions at 4.3 K was found to be 1-3 mV, comparable to the value for the same type of junction at the micron scale. The submicron junction Rn value is in the range of 35-100 Ω, resulting in a large SQUID modulation voltage over a wide temperature range. This performance motivates further investigation of cryogen-free, high-field-sensitivity SQUID applications at moderately low temperatures, e.g., 40-60 K.

  15. Rotor Hover Performance and Flowfield Measurements with Untwisted and Highly-Twisted Blades

    NASA Technical Reports Server (NTRS)

    Ramasamy, Manikandan; Gold, Nili P.; Bhagwat, Mahendra J.

    2010-01-01

    The flowfield and performance characteristics of highly-twisted blades were analyzed at various thrust conditions to improve the fundamental understanding relating the wake effects on rotor performance. Similar measurements made using untwisted blades served as the baseline case. Twisted blades are known to give better hover performance than untwisted blades at high thrust coefficients typical of those found in full-scale rotors. However, the present experiments were conducted at sufficiently low thrust (beginning from zero thrust), where the untwisted blades showed identical, if not better, performance when compared with the highly-twisted blades. The flowfield measurements showed some key wake differences between the two rotors, as well. These observations when combined with simple blade element momentum theory (also called annular disk momentum theory) helped further the understanding of rotor performance characteristics.

  16. Manganese oxides-based composite electrodes for supercapacitors

    NASA Astrophysics Data System (ADS)

    Su, Dongyun; Ma, Jun; Huang, Mingyu; Liu, Feng; Chen, Taizhou; Liu, Chao; Ni, Hongjun

    2017-06-01

    Recently, nanostructured transition metal oxides have attracted wide attention as a new class of energy storage materials due to their excellent electrochemical performance for supercapacitors. This review focuses on MnO2-based transition metal oxides and their composite electrode materials for supercapacitor applications. Research on different nanostructures of manganese oxides, such as nanorods, nanosheets, nanowires, and nanotubes, is surveyed, together with brief explanations of their properties. Because research on enhancing material properties by combining different materials at the micron or nano scale remains limited, we discuss the effects of component sizes and their synergy on performance. Moreover, low-cost, large-scale fabrication of high-performance flexible supercapacitors (high energy density and cycling stability) is highlighted and discussed.

  17. Edge-localized mode avoidance and pedestal structure in I-mode plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walk, J. R., E-mail: jrwalk@psfc.mit.edu; Hughes, J. W.; Hubbard, A. E.

    I-mode is a high-performance tokamak regime characterized by the formation of a temperature pedestal and enhanced energy confinement, without an accompanying density pedestal or drop in particle and impurity transport. I-mode operation appears to have naturally occurring suppression of large Edge-Localized Modes (ELMs) in addition to its highly favorable scalings of pedestal structure and overall performance. Extensive study of the ELMy H-mode has led to the development of the EPED model, which utilizes calculations of coupled peeling-ballooning MHD modes and kinetic-ballooning mode (KBM) stability limits to predict the pedestal structure preceding an ELM crash. We apply similar tools to the structure and ELM stability of I-mode pedestals. Analysis of I-mode discharges prepared with high-resolution pedestal data from the most recent C-Mod campaign reveals favorable pedestal scalings for extrapolation to large machines: pedestal temperature scales strongly with power per particle P_net/n̄_e, and likewise pedestal pressure scales as the net heating power (consistent with weak degradation of confinement with heating power). Matched discharges in current, field, and shaping demonstrate the decoupling of energy and particle transport in I-mode, increasing fueling to span nearly a factor of two in density while maintaining matched temperature pedestals with consistent levels of P_net/n̄_e. This is consistent with targets for increased performance in I-mode, elevating pedestal β_p and global performance with matched increases in density and heating power. MHD calculations using the ELITE code indicate that I-mode pedestals are strongly stable to edge peeling-ballooning instabilities. Likewise, numerical modeling of the KBM turbulence onset, as well as scalings of the pedestal width with poloidal beta, indicates that I-mode pedestals are not limited by KBM turbulence, both features identified with the trigger for large ELMs, consistent with the observed suppression of large ELMs in I-mode.

  18. Edge-localized mode avoidance and pedestal structure in I-mode plasmas

    NASA Astrophysics Data System (ADS)

    Walk, J. R.; Hughes, J. W.; Hubbard, A. E.; Terry, J. L.; Whyte, D. G.; White, A. E.; Baek, S. G.; Reinke, M. L.; Theiler, C.; Churchill, R. M.; Rice, J. E.; Snyder, P. B.; Osborne, T.; Dominguez, A.; Cziegler, I.

    2014-05-01

    I-mode is a high-performance tokamak regime characterized by the formation of a temperature pedestal and enhanced energy confinement, without an accompanying density pedestal or drop in particle and impurity transport. I-mode operation appears to have naturally occurring suppression of large Edge-Localized Modes (ELMs) in addition to its highly favorable scalings of pedestal structure and overall performance. Extensive study of the ELMy H-mode has led to the development of the EPED model, which utilizes calculations of coupled peeling-ballooning MHD modes and kinetic-ballooning mode (KBM) stability limits to predict the pedestal structure preceding an ELM crash. We apply similar tools to the structure and ELM stability of I-mode pedestals. Analysis of I-mode discharges prepared with high-resolution pedestal data from the most recent C-Mod campaign reveals favorable pedestal scalings for extrapolation to large machines: pedestal temperature scales strongly with power per particle P_net/n̄_e, and likewise pedestal pressure scales as the net heating power (consistent with weak degradation of confinement with heating power). Matched discharges in current, field, and shaping demonstrate the decoupling of energy and particle transport in I-mode, increasing fueling to span nearly a factor of two in density while maintaining matched temperature pedestals with consistent levels of P_net/n̄_e. This is consistent with targets for increased performance in I-mode, elevating pedestal β_p and global performance with matched increases in density and heating power. MHD calculations using the ELITE code indicate that I-mode pedestals are strongly stable to edge peeling-ballooning instabilities. Likewise, numerical modeling of the KBM turbulence onset, as well as scalings of the pedestal width with poloidal beta, indicates that I-mode pedestals are not limited by KBM turbulence, both features identified with the trigger for large ELMs, consistent with the observed suppression of large ELMs in I-mode.

  19. Multifunctional picoliter droplet manipulation platform and its application in single cell analysis.

    PubMed

    Gu, Shu-Qing; Zhang, Yun-Xia; Zhu, Ying; Du, Wen-Bin; Yao, Bo; Fang, Qun

    2011-10-01

    We developed an automated and multifunctional microfluidic platform based on DropLab to perform flexible generation and complex manipulations of picoliter-scale droplets. Multiple manipulations, including precise droplet generation, sequential reagent merging, and multistep solid-phase extraction, could be performed on picoliter-scale droplets in the present platform. The system precision in generating picoliter-scale droplets was significantly improved by minimizing the thermally induced fluctuation of flow rate. A novel droplet fusion technique based on the difference of droplet interfacial tensions was developed without the need for special microchannel networks or external devices. It enabled sequential addition of reagents to droplets on demand for multistep reactions. We also developed an effective picoliter-scale droplet splitting technique with magnetic actuation. The difficulty in separating magnetic beads from picoliter-scale droplets due to the high interfacial tension was overcome by using ferromagnetic particles to carry the magnetic beads across the phase interface. With this technique, multistep solid-phase extraction was achieved among picoliter-scale droplets. The present platform is able to perform complex multistep manipulations on picoliter-scale droplets, which is particularly required for single cell analysis. Its utility and potential in single cell analysis were preliminarily demonstrated by achieving high-efficiency single-cell encapsulation, enzyme activity assay at the single cell level, and, especially, single cell DNA purification based on solid-phase extraction.

  20. External validation of the ability of the DRAGON score to predict outcome after thrombolysis treatment.

    PubMed

    Ovesen, C; Christensen, A; Nielsen, J K; Christensen, H

    2013-11-01

    Easy-to-perform and valid assessment scales for the effect of thrombolysis are essential in hyperacute stroke settings. We therefore performed an external validation of the DRAGON scale proposed by Strbian et al. in a Danish cohort. All patients treated with intravenous recombinant plasminogen activator between 2009 and 2011 were included. Upon admission, all patients underwent physical and neurological examination using the National Institutes of Health Stroke Scale along with non-contrast CT scans and CT angiography. Patients were followed up through the outpatient clinic and their modified Rankin Scale (mRS) score was assessed after 3 months. Three hundred and three patients were included in the analysis. The DRAGON scale proved to have good discriminative ability for predicting highly unfavourable outcome (mRS 5-6) (area under the receiver operating characteristic curve [AUC-ROC]: 0.89; 95% confidence interval [CI] 0.81-0.96; p<0.001) and good outcome (mRS 0-2) (AUC-ROC: 0.79; 95% CI 0.73-0.85; p<0.001). When only patients with M1 occlusions were selected, the DRAGON scale provided good discriminative capability (AUC-ROC: 0.89; 95% CI 0.78-1.0; p=0.003) for highly unfavourable outcome. We confirmed the validity of the DRAGON scale in predicting outcome after thrombolysis treatment. Copyright © 2013 Elsevier Ltd. All rights reserved.
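
    The discriminative ability reported here is the area under the ROC curve. A minimal sketch of how such a validation statistic is computed, with synthetic scores standing in for DRAGON scores and a simple bootstrap for the confidence interval; the data are illustrative, not the study's:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Synthetic cohort: 1 = highly unfavourable outcome (mRS 5-6), 0 = otherwise.
        outcome = rng.integers(0, 2, size=303)
        score = outcome * rng.normal(6, 2, 303) + (1 - outcome) * rng.normal(3, 2, 303)

        auc = roc_auc_score(outcome, score)
        # Bootstrap 95% CI by resampling patients with replacement.
        boot = []
        for _ in range(1000):
            idx = rng.integers(0, 303, 303)
            boot.append(roc_auc_score(outcome[idx], score[idx]))
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"AUC {auc:.2f}, 95% CI {lo:.2f}-{hi:.2f}")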

  1. Multi-scale image segmentation and numerical modeling in carbonate rocks

    NASA Astrophysics Data System (ADS)

    Alves, G. C.; Vanorio, T.

    2016-12-01

    Numerical methods based on computational simulations can be an important tool for estimating the physical properties of rocks. They can complement experimental results, especially when time constraints and sample availability are a problem. However, computational models created at different scales can yield results that conflict with the physical laboratory. This problem is exacerbated in carbonate rocks due to their heterogeneity at all scales. We developed a multi-scale approach performing segmentation of the rock images and numerical modeling across several scales, accounting for those heterogeneities. As a first step, we measured the porosity and the elastic properties of a group of carbonate samples with varying micrite content. Then, samples were imaged by scanning electron microscope (SEM) as well as optical microscope at different magnifications. We applied three different image segmentation techniques to create numerical models from the SEM images and performed numerical simulations of the elastic wave equation. Our results show that a multi-scale approach can efficiently account for micro-porosity in tight micrite-supported samples, yielding acoustic velocities comparable to those obtained experimentally. Nevertheless, in high-porosity samples characterized by a larger grain/micrite ratio, results show that SEM-scale images tend to overestimate velocities, mostly due to their inability to capture macro- and/or intragranular porosity. This suggests that, for high-porosity carbonate samples, optical microscope images would be better suited for numerical simulations.
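
    Segmentation of this kind reduces a grayscale micrograph to phase labels (pore vs. mineral) that a wave-equation solver can consume, and the segmented porosity is the natural first check against the laboratory value. A minimal sketch of one common technique, Otsu thresholding via scikit-image, on a synthetic image; this is an illustrative stand-in, not necessarily one of the paper's three techniques:

        import numpy as np
        from skimage.filters import threshold_otsu

        rng = np.random.default_rng(0)
        # Synthetic SEM-like image: dark pores in a brighter mineral matrix, plus noise.
        image = rng.normal(0.7, 0.05, (256, 256))
        mask = rng.random((256, 256)) < 0.15
        image[mask] = rng.normal(0.2, 0.05, mask.sum())

        t = threshold_otsu(image)
        pores = image < t            # boolean phase map for the numerical model
        print(f"segmented porosity: {pores.mean():.3f}")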

  2. Integrated low emissions cleanup system for direct coal-fueled turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lippert, T.E.; Newby, R.A.; Alvin, M.A.

    1992-01-01

    The Westinghouse Electric Corporation, Science & Technology Center (W-STC) is developing an Integrated Low Emissions Cleanup (ILEC) concept for high-temperature gas cleaning to meet environmental standards, as well as to ensure economical gas turbine life. The ILEC concept simultaneously controls sulfur, particulate, and alkali contaminants in high-pressure fuel gases or combustion gases at temperatures up to 1850 °F for advanced power generation systems (PFBC, APFBC, IGCC, DCFT). The objective of this program is to demonstrate, at a bench scale, the conceptual, technical feasibility of the ILEC concept. The ILEC development program has a three-phase structure: Phase 1, laboratory-scale testing; Phase 2, bench-scale equipment design and fabrication; and Phase 3, bench-scale testing. Phase 1 laboratory testing has been completed. In Phase 1, entrained sulfur and alkali sorbent kinetics were measured and evaluated, and commercial-scale performance was projected. Related cold flow model testing has shown that gas-particle contacting within the ceramic barrier filter vessel will provide a good reactor environment. The Phase 1 test results and the commercial evaluation conducted in the Phase 1 program support the bench-scale facility testing to be performed in Phase 3. Phase 2 is nearing completion with the design and assembly of a modified, bench-scale test facility to demonstrate the technical feasibility of the ILEC features. This feasibility testing will be conducted in Phase 3.

  3. Integrated low emissions cleanup system for direct coal-fueled turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lippert, T.E.; Newby, R.A.; Alvin, M.A.

    1992-12-31

    The Westinghouse Electric Corporation, Science & Technology Center (W-STC) is developing an Integrated Low Emissions Cleanup (ILEC) concept for high-temperature gas cleaning to meet environmental standards, as well as to ensure economical gas turbine life. The ILEC concept simultaneously controls sulfur, particulate, and alkali contaminants in high-pressure fuel gases or combustion gases at temperatures up to 1850 °F for advanced power generation systems (PFBC, APFBC, IGCC, DCFT). The objective of this program is to demonstrate, at a bench scale, the conceptual, technical feasibility of the ILEC concept. The ILEC development program has a three-phase structure: Phase 1, laboratory-scale testing; Phase 2, bench-scale equipment design and fabrication; and Phase 3, bench-scale testing. Phase 1 laboratory testing has been completed. In Phase 1, entrained sulfur and alkali sorbent kinetics were measured and evaluated, and commercial-scale performance was projected. Related cold flow model testing has shown that gas-particle contacting within the ceramic barrier filter vessel will provide a good reactor environment. The Phase 1 test results and the commercial evaluation conducted in the Phase 1 program support the bench-scale facility testing to be performed in Phase 3. Phase 2 is nearing completion with the design and assembly of a modified, bench-scale test facility to demonstrate the technical feasibility of the ILEC features. This feasibility testing will be conducted in Phase 3.

  4. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2018-01-01

    Hydrological models are extensively used in urban water management, development and evaluation of future scenarios, and research activities. There is a growing interest in the development of fully distributed, grid-based models. However, some complex questions related to scale effects are not yet fully understood and remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling exercise is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 m down to 5 m. Results clearly exhibit scale-effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data: patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper support replacing traditional methods of model calibration with methods of model resolution alteration based on spatial data variability and the scaling of flows in urban hydrology.
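
    The fractal tools referred to are typically box-counting analyses: cover a binary map (for example, of impervious surface) with boxes of side s and fit the slope of log N(s) against log s. A minimal sketch under that assumption, on a synthetic square grid; Multi-Hydro's actual fractal toolkit may differ:

        import numpy as np

        def box_count(grid, size):
            """Count boxes of side `size` containing at least one occupied cell (square grid assumed)."""
            k = grid.shape[0] // size
            trimmed = grid[:k * size, :k * size]
            return trimmed.reshape(k, size, k, size).any(axis=(1, 3)).sum()

        def fractal_dimension(grid, sizes=(1, 2, 4, 8, 16, 32)):
            counts = [box_count(grid, s) for s in sizes]
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return -slope    # N(s) ~ s^(-D)

        rng = np.random.default_rng(0)
        land_use = rng.random((256, 256)) < 0.3   # synthetic impervious-surface map
        print(f"box-counting dimension: {fractal_dimension(land_use):.2f}")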

  5. [Quality of sleep and academic performance in high school students].

    PubMed

    Bugueño, Maithe; Curihual, Carolina; Olivares, Paulina; Wallace, Josefa; López-AlegrÍa, Fanny; Rivera-López, Gonzalo; Oyanedel, Juan Carlos

    2017-09-01

    Sleeping and studying are the day-to-day activities of a teenager attending school. To determine the quality of sleep and its relationship to academic performance among students attending morning and afternoon shifts in a public high school. Students of the first and second year of high school answered an interview about socio-demographic background, academic performance, student activities, and subjective sleep quality; they were evaluated using the Pittsburgh Sleep Quality Index (PSQI). The interview was answered by 322 first-year students aged 15 ± 5 years attending the morning shift and 364 second-year students, aged 16 ± 0.5 years, attending the afternoon shift. The components sleep latency, habitual sleep efficiency, sleep disturbance, drug use, and daytime dysfunction were similar and classified as good in both school shifts. The components subjective sleep quality and duration of sleep had higher scores among students of the morning shift. The mean grades during the first semester of the students attending morning and afternoon shifts were 5.9 and 5.8, respectively (on a scale from 1 to 7). Among students of both shifts, the PSQI score was associated inversely and significantly with academic performance. Poor sleep quality influences academic performance in these students.

  6. Propulsion engineering study for small-scale Mars missions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, J.

    1995-09-12

    Rocket propulsion options for small-scale Mars missions are presented and compared, particularly for the terminal landing maneuver and for sample return. Mars landing has a low propulsive Δv requirement on a roughly 1-minute time scale, but at a high acceleration. High thrust/weight liquid rocket technologies, or advanced pulse-capable solids, developed during the past decade for missile defense, are therefore more appropriate for small Mars landers than are conventional space propulsion technologies. The advanced liquid systems are characterized by compact lightweight thrusters having high chamber pressures and short lifetimes. Blowdown or regulated pressure-fed operation can satisfy the Mars landing requirement, but hardware mass can be reduced by using pumps. Aggressive terminal landing propulsion designs can enable post-landing hop maneuvers for some surface mobility. The Mars sample return mission requires a small, high-performance launcher having either solid motors or miniature pump-fed engines. Terminal propulsion for 100 kg Mars landers is within the realm of flight-proven thruster designs, but custom tankage is desirable. Landers on a 10 kg scale are also feasible, using technology that has been demonstrated but not previously flown in space. The number of sources and the selection of components are extremely limited on this smallest scale, so some customized hardware is required. A key characteristic of kilogram-scale propulsion is that gas jets are much lighter than liquid thrusters for reaction control. The mass and volume of tanks for inert gas can be eliminated by systems which generate gas as needed from a liquid or a solid, but these have virtually no space flight history. Mars return propulsion is a major engineering challenge; Earth launch is the only previously-solved propulsion problem requiring similar or greater performance.

  7. Screening for Prenatal Substance Use

    PubMed Central

    Yonkers, Kimberly A.; Gotman, Nathan; Kershaw, Trace; Forray, Ariadna; Howell, Heather B.; Rounsaville, Bruce J.

    2011-01-01

    OBJECTIVE To report on the development of a questionnaire to screen for hazardous substance use in pregnant women and to compare the performance of the questionnaire with other drug and alcohol measures. METHODS Pregnant women were administered a modified TWEAK (Tolerance, Worried, Eye-openers, Amnesia, K[C] Cut Down) questionnaire, the 4Ps Plus questionnaire, items from the Addiction Severity Index, and two questions about domestic violence (N=2,684). The sample was divided into "training" (n=1,610) and "validation" (n=1,074) subsamples. We applied recursive partitioning class analysis to the responses from individuals in the training subsample, which resulted in a three-item Substance Use Risk Profile-Pregnancy scale. We examined sensitivity, specificity, and the fit of logistic regression models in the validation subsample to compare the performance of the Substance Use Risk Profile-Pregnancy scale with the modified TWEAK and various scoring algorithms of the 4Ps. RESULTS The Substance Use Risk Profile-Pregnancy scale consists of three informative questions that can be scored for high- or low-risk populations. The Substance Use Risk Profile-Pregnancy scale algorithm for low-risk populations was the most highly predictive of substance use in the validation subsample (Akaike's Information Criterion=579.75, Nagelkerke R2=0.27), with high sensitivity (91%) and adequate specificity (67%). The high-risk algorithm had lower sensitivity (57%) but higher specificity (88%). CONCLUSION The Substance Use Risk Profile-Pregnancy scale is simple and flexible, with good sensitivity and specificity, and can potentially detect a range of substances that may be abused. Clinicians need to further assess women with a positive screen to identify those who require treatment for alcohol or illicit substance use in pregnancy. PMID:20859145
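
    Recursive partitioning of item responses, as used to derive the three-item scale, can be sketched with a shallow decision tree; sensitivity and specificity then follow from the confusion counts. The three items, their weights, and the labels below are synthetic stand-ins, not the study's data:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Synthetic yes/no answers to three screening items; true substance-use labels.
        X = rng.integers(0, 2, size=(1000, 3))
        y = ((X @ np.array([1.0, 0.8, 0.6]) + rng.normal(0, 0.7, 1000)) > 1.2).astype(int)

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        pred = tree.predict(X)

        sensitivity = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
        specificity = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
        print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")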

  8. Scale model testing of drogues for free drifting buoys

    NASA Technical Reports Server (NTRS)

    Vachon, W. A.

    1973-01-01

    Instrumented model drogue tests were conducted in a ship model towing tank. The purpose of the tests was to observe and measure the deployment and drag characteristics of such shapes as parachutes, crossed vanes, and window shades, which may be employed in conjunction with free drifting buoys. Both Froude and Reynolds scaling laws were applied while scaling to full-scale relative velocities from 0 to 0.2 knots. A weighted window shade drogue is recommended because of its performance, high drag coefficient, simplicity, and low cost. Detailed theoretical performance curves are presented for parachute, crossed vane, and window shade drogues. Theoretical estimates of depth-locking accuracy and buoy-induced dynamic loads pertinent to window shade drogues are presented as a design aid. An example of a window shade drogue design is presented.
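
    For the Froude scaling mentioned, equality of model and prototype Froude numbers fixes the towing speed. A worked example in LaTeX, in which the 1:25 model scale ratio is an illustrative assumption, not the report's value:

        \mathrm{Fr} = \frac{V}{\sqrt{gL}}, \qquad
        \frac{V_m}{\sqrt{g L_m}} = \frac{V_p}{\sqrt{g L_p}}
        \;\Rightarrow\;
        V_m = V_p \sqrt{\frac{L_m}{L_p}}
            = (0.2\ \mathrm{kn}) \sqrt{\tfrac{1}{25}} = 0.04\ \mathrm{kn}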

  9. Novel nano materials for high performance logic and memory devices

    NASA Astrophysics Data System (ADS)

    Das, Saptarshi

    After decades of relentless progress, the silicon CMOS industry is approaching a stall in device performance for both logic and memory devices due to fundamental scaling limitations. In order to reinforce the accelerating pace, novel materials with unique properties are being proposed on an urgent basis. This list includes one-dimensional nanotubes, quasi-one-dimensional nanowires, and two-dimensional atomistically thin layered materials like graphene, hexagonal boron nitride and, more recently, the rich family of transition metal dichalcogenides comprising MoS2, WSe2, WS2 and many more for logic applications, as well as organic and inorganic ferroelectrics, phase change materials and magnetic materials for memory applications. Only time will tell who will win, but exploring these novel materials allows us to revisit the fundamentals and strengthen our understanding, which will ultimately be beneficial for high-performance device design. While there has been growing interest in two-dimensional (2D) crystals other than graphene, evaluating their potential usefulness for electronic applications is still in its infancy due to the lack of a complete picture of their performance potential. The fact that the 2D layered semiconducting dichalcogenides need to be connected to the "outside" world in order to capitalize on their ultimate potential immediately emphasizes the importance of a thorough understanding of the contacts. This thesis demonstrates that through a proper understanding and design of source/drain contacts and the right choice of the number of MoS2 layers, the excellent intrinsic properties of this 2D material can be harvested. A comprehensive experimental study on the dependence of carrier mobility on the layer thickness of back-gated multilayer MoS2 field effect transistors is also provided. A resistor network model that comprises Thomas-Fermi charge screening and interlayer coupling is used to explain the non-monotonic trend in the extracted field-effect mobility with layer thickness. The non-monotonic trend suggests that in order to harvest the maximum potential of MoS2 for high-performance device applications, a layer thickness in the range of 6-12 nm would be ideal. Finally, using scandium contacts on 10 nm thick exfoliated MoS2 flakes that are covered by a 15 nm ALD-grown Al2O3 film, a record-high mobility of 700 cm2/Vs is achieved at room temperature, which is extremely encouraging for the design of high-performance logic devices. The destructive nature of the readout process in Ferroelectric Random Access Memories (FeRAMs) is one of the major limiting factors for their wide-scale commercialization. Utilizing a Ferroelectric Field-Effect Transistor RAM (FeTRAM) instead solves the destructive readout problem, but at the expense of introducing crystalline ferroelectrics that are hard to integrate into CMOS. In order to address these challenges, a novel, fully functional, CMOS-compatible, One-Transistor-One-Transistor (1T1T) memory cell architecture using an organic ferroelectric, PVDF-TrFE, as the memory storage unit (gate oxide) and a silicon nanowire as the memory readout unit (channel material) is proposed and experimentally demonstrated. While evaluating the scaling potential of the above-mentioned organic FeTRAM, it is found that the switching time and switching voltage of the organic copolymer PVDF-TrFE exhibit an unexpected scaling behavior as a function of the lateral device dimensions.
The phenomenological theory that explains this abnormal scaling trend involves in-plane interchain and intrachain interactions of the copolymer, resulting in a power-law dependence of the switching field on the device area (E_SW ∝ A_CH^0.1) that is ultimately responsible for the decrease in the switching time and switching voltage. These findings are encouraging since they indicate that scaling the switching voltage and switching time without aggressively scaling the copolymer thickness occurs naturally while scaling the device area, in this way ultimately improving the packing density and leading toward high-performance memory devices.

  10. Performance Characteristics of a New Generation Pressure Microsensor for Physiologic Applications

    PubMed Central

    Cottler, Patrick S.; Karpen, Whitney R.; Morrow, Duane A.; Kaufman, Kenton R.

    2009-01-01

    A next-generation fiber-optic microsensor based on the extrinsic Fabry-Perot interferometric (EFPI) technique has been developed for pressure measurements. The basic physics governing the operation of these sensors makes them relatively tolerant of, or immune to, the effects of high-temperature, high-EMI, and highly corrosive environments. This pressure microsensor represents a significant improvement in size and performance over previous-generation sensors. To achieve the desired overall size and sensitivity, numerical modeling of diaphragm deflection was incorporated in the design, with the desired dimensions and calculated material properties. With an outer diameter of approximately 250 µm, a dynamic operating range of over 250 mmHg, and a sampling frequency of 960 Hz, this sensor is ideal for the minimally invasive measurement of physiologic pressures and incorporation in catheter-based instrumentation. Nine individual sensors were calibrated and characterized by comparing their output to a U.S. National Institute of Standards and Technology (NIST) traceable reference pressure over the range of 0-250 mmHg. The microsensor demonstrated accuracy better than 2% of full-scale output, and repeatability and hysteresis better than 1% of full-scale output. Additionally, fatigue effects on five additional sensors were 0.25% of full-scale output after more than 10,000 pressure cycles. PMID:19495983

  11. Application of Open Source Technologies for Oceanographic Data Analysis

    NASA Astrophysics Data System (ADS)

    Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.

    2015-12-01

    NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project that uses NEXUS as the core of an oceanographic anomaly detection service and web portal, which we call OceanXtremes.
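
    The chunking strategy described, dividing a geo-referenced array into keyed tiles suitable for a horizontally scaled NoSQL store, can be sketched as follows. The key layout, tile sizes, and dataset name are illustrative assumptions, not NEXUS's actual schema:

        import numpy as np

        def tile_array(data, tile_rows, tile_cols, dataset_id):
            """Split a 2-D geo-referenced array into keyed tiles for a NoSQL store."""
            tiles = {}
            rows, cols = data.shape
            for r0 in range(0, rows, tile_rows):
                for c0 in range(0, cols, tile_cols):
                    key = f"{dataset_id}:{r0}:{c0}"  # offsets act as a spatial index
                    tiles[key] = data[r0:r0 + tile_rows, c0:c0 + tile_cols]
            return tiles

        # Hypothetical 1800 x 3600 global grid (0.1-degree resolution).
        sst = np.random.rand(1800, 3600).astype(np.float32)
        tiles = tile_array(sst, 180, 360, "SMAP_L3_example")
        print(len(tiles), "tiles")  # each tile can be written/read independently

    Because each tile is independent, time-series or histogram requests over a spatial box reduce to fetching and reducing only the keys that intersect the box, which is what lets the analysis scale out with the cluster.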

  12. Changes in Biology Self-Efficacy during a First-Year University Course

    PubMed Central

    Ainscough, Louise; Foulis, Eden; Colthorpe, Kay; Zimbardi, Kirsten; Robertson-Dean, Melanie; Chunduri, Prasad; Lluka, Lesley

    2016-01-01

    Academic self-efficacy encompasses judgments regarding one's ability to perform academic tasks and is correlated with achievement and persistence. This study describes changes in biology self-efficacy during a first-year course. Students (n = 614) were given the Biology Self-Efficacy Scale at the beginning and end of the semester. The instrument consisted of 21 questions ranking confidence in performing biology-related tasks on a scale from 1 (not at all confident) to 5 (totally confident). The results demonstrated that students increased in self-efficacy during the semester. High school biology and chemistry contributed to self-efficacy at the beginning of the semester; however, this relationship was lost by the end of the semester, when experience within the course became a significant contributing factor. A proportion of high- and low-achieving students (24% and 40%, respectively) had inaccurate self-efficacy judgments of their ability to perform well in the course. In addition, female students were significantly less confident than males overall, and high-achieving female students were more likely than males to underestimate their academic ability. These results suggest that the Biology Self-Efficacy Scale may be a valuable resource for tracking changes in self-efficacy in first-year students and for identifying students with poorly calibrated self-efficacy perceptions. PMID:27193290

  13. Metal Hydrides for High-Temperature Power Generation

    DOE PAGES

    Ronnebro, Ewa; Whyatt, Greg A.; Powell, Michael R.; ...

    2015-08-10

    Metal hydrides can be utilized for hydrogen storage and for thermal energy storage (TES) applications. By pairing TES with solar technologies, heat from the sun can be stored for later use, enabling continuous power generation. We are developing a TES technology based on a dual-bed metal hydride system, which has a high-temperature (HT) metal hydride operating reversibly at 600-800 °C to generate heat, as well as a low-temperature (LT) hydride near room temperature that is used for hydrogen storage during sun hours until there is a need to produce electricity, such as during night time, a cloudy day, or peak hours. We proceeded from selecting a high-energy-density, low-cost HT hydride based on performance characterization of gram-size samples, to scale-up to kilogram quantities, and on to the design, fabrication and testing of a 1.5 kWh, 200 kWh/m3 bench-scale TES prototype based on an HT bed of titanium hydride and a hydrogen gas store instead of an LT hydride. COMSOL Multiphysics was used to make performance predictions for cylindrical hydride beds with varying diameters and thermal conductivities. Based on experimental and modeling results, a bench-scale prototype was designed and fabricated, and we successfully demonstrated the feasibility of meeting or exceeding all performance targets.

  14. Influence of 2D electrostatic effects on the high-frequency noise behavior of sub-100-nm scaled MOSFETs

    NASA Astrophysics Data System (ADS)

    Rengel, Raul; Pardo, Daniel; Martin, Maria J.

    2004-05-01

    In this work, we have investigated the consequences of downscaling the bulk MOSFET beyond the 100 nm range by means of a particle-based Monte Carlo simulator. Taking a 250 nm gate-length ideal structure as the starting point, the constant-field scaling rules (also known as "classical" scaling) are considered, and the high-frequency dynamic and noise performance of transistors with 130 nm, 90 nm and 60 nm gate lengths is studied in depth. The analysis of internal quantities such as electric fields, velocity and energy of carriers, or conduction band profiles shows the increasing importance of two-dimensional electrostatic effects due to the proximity of the source and drain regions, even when the most ideal bias conditions are imposed. As a consequence, a loss of transistor action for the smallest MOSFET and a degradation of the most important high-frequency figures of merit are observed. Whereas the comparative values of the intrinsic noise sources (SID, SIG) improve when reducing the dimensions and the bias voltages, the poor dynamic performance yields overall worse noise behaviour than expected (especially for Rn and Gass), limiting at the same time the useful bias ranges and conditions for a proper low-noise configuration.

  15. Teaching elliptical excision skills to novice medical students: a randomized controlled study comparing low- and high-fidelity bench models.

    PubMed

    Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério

    2014-03-01

    The search for alternative and effective forms of training simulation is needed due to the ethical and medico-legal aspects involved in training surgical skills on living patients, human cadavers, and living animals. To evaluate whether bench model fidelity interferes with the acquisition of elliptical excision skills by novice medical students. Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills training (n = 8): didactic materials (control); organic bench model (low fidelity); ethylene-vinyl acetate bench model (low fidelity); chicken legs' skin bench model (high fidelity); or pig foot skin bench model (high fidelity). Pre- and post-tests were applied. A global rating scale, effect size, and self-perceived confidence based on a Likert scale were used to evaluate all elliptical excision performances. The analysis showed that after training, the students practicing on bench models had better performance based on the global rating scale (all P < 0.0001) and felt more confident in performing elliptical excision skills (all P < 0.0001) when compared to the control. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills training) was considered large (>0.80) in all measurements. The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to that from training on high-fidelity bench models, and there was a more substantial increase in the elliptical excision performance of students who trained on all simulators compared to learning from didactic materials.

  16. Permeability from complex conductivity: an evaluation of polarization magnitude versus relaxation time based geophysical length scales

    NASA Astrophysics Data System (ADS)

    Slater, L. D.; Robinson, J.; Weller, A.; Keating, K.; Robinson, T.; Parker, B. L.

    2017-12-01

    Geophysical length scales determined from complex conductivity (CC) measurements can be used to estimate permeability k when the electrical formation factor F describing the ratio between tortuosity and porosity is known. Two geophysical length scales have been proposed: [1] the imaginary conductivity σ″ normalized by the specific polarizability cp; [2] the time constant τ multiplied by a diffusion coefficient D+. The parameters cp and D+ account for the control of fluid chemistry and/or varying mineralogy on the geophysical length scale. We evaluated the predictive capability of two recently presented CC permeability models: [1] an empirical formulation based on σ″; [2] a mechanistic formulation based on τ. The performance of the CC models was evaluated against measured permeability; this performance was also compared against that of well-established k estimation equations that use geometric length scales to represent the pore-scale properties controlling fluid flow. Both CC models predict permeability within one order of magnitude for a database of 58 sandstone samples, with the exception of samples characterized by high pore-volume-normalized surface area Spor and more complex mineralogy including significant dolomite. Variations in cp and D+ likely contribute to the poor performance of the models for these high-Spor samples. The ultimate value of such geophysical models for permeability prediction lies in their application to field-scale geophysical datasets. Two observations favor the implementation of the σ″-based model over the τ-based model for field-scale estimation: [1] the limited range of variation in cp relative to D+; [2] σ″ is readily measured using field geophysical instrumentation (at a single frequency), whereas τ requires broadband spectral measurements that are extremely challenging and time-consuming to perform accurately in the field. However, the need for a reliable estimate of F remains a major obstacle to the field-scale implementation of either CC permeability model for k estimation.
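
    A hedged sketch, in LaTeX, of how such length scales typically enter a k estimate: a CC-derived length stands in for the hydraulic length Λ in a Johnson-Koplik-Schwartz-style relation. The 1/8 prefactor is the classic JKS value and the pairing shown is illustrative, not the fitted models evaluated in the paper:

        l_{\sigma''} \sim \frac{\sigma''}{c_p}, \qquad
        l_{\tau} \sim \sqrt{D_{+}\,\tau}, \qquad
        k \approx \frac{\Lambda^{2}}{8F}
        \quad\text{with}\quad
        \Lambda \to l_{\sigma''} \ \text{or}\ l_{\tau}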

  17. Using Discrete Event Simulation for Programming Model Exploration at Extreme-Scale: Macroscale Components for the Structural Simulation Toolkit (SST).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilke, Jeremiah J; Kenny, Joseph P.

    2015-02-01

    Discrete event simulation provides a powerful mechanism for designing and testing new extreme-scale programming models for high-performance computing. Rather than debug, run, and wait for results on an actual system, design can first iterate through a simulator. This is particularly useful when test beds cannot be used, i.e. to explore hardware or scales that do not yet exist or are inaccessible. Here we detail the macroscale components of the structural simulation toolkit (SST). Instead of depending on trace replay or state machines, the simulator is architected to execute real code on real software stacks. Our particular user-space threading framework allows massive scales to be simulated even on small clusters. The link between the discrete event core and the threading framework allows interesting performance metrics like call graphs to be collected from a simulated run. Performance analysis via simulation can thus become an important phase in extreme-scale programming model and runtime system design via the SST macroscale components.
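
    At its core, a discrete event simulator of this kind is a time-ordered event queue whose handlers may schedule further events; simulated time jumps from event to event rather than advancing in fixed steps. A minimal sketch of that core, illustrative only; SST's macroscale components layer user-space threading, skeleton applications, and hardware models on top:

        import heapq

        def run(initial_events, until=100.0):
            """Minimal discrete-event loop: pop the earliest event, run its handler."""
            queue, seq = [], 0

            def schedule(t, fn):
                nonlocal seq
                heapq.heappush(queue, (t, seq, fn))  # seq breaks time ties deterministically
                seq += 1

            for t, fn in initial_events:
                schedule(t, fn)
            while queue:
                now, _, fn = heapq.heappop(queue)
                if now > until:
                    break
                fn(now, schedule)                    # handlers may schedule new events

        # Toy model: a "send" completes after a 2.5 time-unit network latency.
        def send(now, schedule):
            print(f"{now:.1f}: send")
            schedule(now + 2.5, lambda t, s: print(f"{t:.1f}: recv"))

        run([(0.0, send)])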

  18. Affordable, automatic quantitative fall risk assessment based on clinical balance scales and Kinect data.

    PubMed

    Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S

    2014-01-01

    The problem of correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits of these scales, i.e., limited granularity and limited inter-/intra-examiner reliability, to obtain objective scores and more detailed information allowing fall risk to be predicted. We used a Microsoft Kinect to record subjects' movements while performing challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained good accuracy (~82%) and especially high sensitivity (~83%).
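
    The pipeline described, Kinect-derived movement parameters fed to a supervised classifier scored against clinical labels, can be sketched as below. The six features and the synthetic labels are hypothetical stand-ins for the paper's exercise parameters and clinical scores:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        # Hypothetical per-exercise features: sway amplitude, completion time, jerk, ...
        X = rng.normal(size=(120, 6))
        risk = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=120) > 0).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, risk, cv=5)   # accuracy per fold
        print(f"mean cross-validated accuracy: {scores.mean():.2f}")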

  19. Performance assessment and calibration of a profiling lab-scale acoustic Doppler velocimeter for application over mixed sand-gravel beds

    USDA-ARS?s Scientific Manuscript database

    Acoustic Doppler velocimetry has made high-resolution turbulence measurements in sediment-laden flows possible. Recent developments have resulted in a commercially available lab-scale acoustic Doppler profiling device, a Nortek Vectrino II, that allows for three-dimensional velocity data to be colle...

  20. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    The record abstract is unavailable; the available text consists of citation fragments from the report, including "Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS)," ICCS 2015, Reykjavik, Iceland, June 2015, and "The Mahali project," Communications Magazine, vol. 52, pp. 111-133, Aug 2014 (approved for public release, Distribution A).

  1. A PILOT-SCALE STUDY ON THE COMBUSTION OF WASTE ...

    EPA Pesticide Factsheets

    Symposium Paper. Post-consumer carpet is a potential substitute fuel for high-temperature thermal processes such as cement kilns and boilers. This paper reports results examining emissions of PCDDs/Fs from a series of pilot-scale experiments performed on the EPA's rotary kiln incinerator simulator facility in Research Triangle Park, NC.

  2. The effect of cerium oxide argon-annealed coatings on the high temperature oxidation of a FeCrAl alloy

    NASA Astrophysics Data System (ADS)

    Nguyen, C. T.; Buscail, H.; Cueff, R.; Issartel, C.; Riffard, F.; Perrier, S.; Poble, O.

    2009-09-01

    Ceria coatings were applied in order to improve the adherence of the alumina scales developed on a model Fe-20Cr-5Al alloy during oxidation at high temperature. The coatings were prepared by argon annealing of a ceria sol-gel coating at temperatures ranging between 600 and 1000 °C. The influence of these coatings on the alloy's oxidation behaviour was studied at 1100 °C. In situ X-ray diffraction (XRD) was performed to characterize the crystallographic nature of the coating after annealing and during the oxidation process. The alumina scale morphologies were studied by means of scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS). The present work shows that the alumina scale morphology observed on the cerium sol-gel coated alloy was very convoluted. On the cerium sol-gel coated alloy, argon annealing results in an increase of the oxidation rate in air at 1100 °C. The 600 °C argon annealing temperature results in good alumina scale adherence under thermal cycling conditions at 1100 °C.

  3. Criticality calculations of the Very High Temperature reactor Critical Assembly benchmark with Serpent and SCALE/KENO-VI

    DOE PAGES

    Bostelmann, Friederike; Hammer, Hans R.; Ortensi, Javier; ...

    2015-12-30

    Within the framework of the IAEA Coordinated Research Project on HTGR Uncertainty Analysis in Modeling, criticality calculations of the Very High Temperature Critical Assembly (VHTRC) experiment were performed as the validation reference for the prismatic MHTGR-350 lattice calculations. Criticality measurements performed at several temperature points at this Japanese graphite-moderated facility were recently included in the International Handbook of Evaluated Reactor Physics Benchmark Experiments, and represent one of the few data sets available for the validation of HTGR lattice physics. This work compares VHTRC criticality simulations utilizing the Monte Carlo codes Serpent and SCALE/KENO-VI. Reasonable agreement was found between Serpent and KENO-VI, but only the use of the latest ENDF cross-section library release, the ENDF/B-VII.1 library, led to an improved match with the measured data. Furthermore, the fourth beta release of SCALE 6.2/KENO-VI showed significant improvements over the current SCALE 6.1.2 version when compared with the experimental values and Serpent.

  4. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing, and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition, and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  5. Convective dynamics - Panel report

    NASA Technical Reports Server (NTRS)

    Carbone, Richard; Foote, G. Brant; Moncrieff, Mitch; Gal-Chen, Tzvi; Cotton, William; Heymsfield, Gerald

    1990-01-01

    Aspects of highly organized forms of deep convection at midlatitudes are reviewed. Past emphasis in field work and cloud modeling has been directed toward severe weather, as evidenced by research on tornadoes, hail, and strong surface winds. A number of specific issues concerning future thrusts, tactics, and techniques in convective dynamics are presented. These subjects include: convective modes and parameterization, global structure and scale interaction, convective energetics, transport studies, anvils and scale interaction, and scale selection. Also discussed are analysis workshops, four-dimensional data assimilation, matching models with observations, network Doppler analyses, mesoscale variability, and high-resolution/high-performance Doppler. It is also noted that classical surface measurements and soundings, flight-level research aircraft data, passive satellite data, and traditional photogrammetric studies are examples of datasets that require assimilation and integration.

  6. Water Flow Testing and Unsteady Pressure Analysis of a Two-Bladed Liquid Oxidizer Pump Inducer

    NASA Technical Reports Server (NTRS)

    Schwarz, Jordan B.; Mulder, Andrew; Zoladz, Thomas

    2011-01-01

    The unsteady fluid dynamic performance of a cavitating two-bladed oxidizer turbopump inducer was characterized through sub-scale water flow testing. While testing a novel inlet duct design that included a cavitation suppression groove, unusual high-frequency pressure oscillations were observed. With potential implications for inducer blade loads, these high-frequency components were analyzed extensively in order to understand their origins and impacts on blade loading. Water flow testing provides a technique to determine pump performance without the costs and hazards associated with handling cryogenic propellants; water has a density and Reynolds number similar to liquid oxygen. In a 70%-scale water flow test, the inducer-only pump performance was evaluated. Over a range of flow rates, the pump inlet pressure was gradually reduced, causing the flow to cavitate near the pump inducer. A nominal, smooth inducer inlet was tested, followed by an inlet duct with a circumferential groove designed to suppress cavitation. A subsequent 52%-scale water flow test in another facility evaluated the combined inducer-impeller pump performance. With the nominal inlet design, the inducer showed traditional cavitation and surge characteristics. Significant bearing loads were created by large side loads on the inducer during synchronous cavitation. The grooved inlet successfully mitigated these loads by greatly reducing synchronous cavitation; however, high-frequency pressure oscillations were observed over a range of frequencies. Analytical signal processing techniques showed these oscillations to be created by a rotating, multi-celled train of pressure pulses, and subsequent CFD analysis suggested that such pulses could be created by the interaction of rotating inducer blades with fluid trapped in a cavitation suppression groove. Despite their relatively low amplitude, these high-frequency pressure oscillations posed a design concern due to their sensitivity to flow conditions and test scale. The amplitude and frequency of the oscillations varied considerably over the pump's operating space, making it difficult to predict blade loads.
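
    The signal-processing step described here typically identifies a rotating, multi-celled pressure pattern by comparing the phase of the same spectral peak at two circumferentially spaced transducers. Below is a minimal sketch of that idea on a synthetic signal; the sample rate, rotation rate, cell count, and transducer spacing are illustrative assumptions, not values from the test.

    ```python
    import numpy as np

    fs = 10_000.0                    # sample rate, Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)
    n_cells, f_rot = 4, 90.0         # 4-cell pattern rotating at 90 Hz (assumed)
    sep = np.deg2rad(30.0)           # angular separation of the two transducers

    # A rotating n-cell pattern appears at n*f_rot in each wall-pressure signal,
    # with an inter-transducer phase offset of n*sep.
    p1 = np.sin(2 * np.pi * n_cells * f_rot * t)
    p2 = np.sin(2 * np.pi * n_cells * f_rot * t - n_cells * sep)

    spec1, spec2 = np.fft.rfft(p1), np.fft.rfft(p2)
    peak = int(np.argmax(np.abs(spec1)))
    freq = np.fft.rfftfreq(t.size, 1 / fs)[peak]
    dphi = np.angle(spec2[peak] / spec1[peak])
    print(f"peak at {freq:.0f} Hz, phase offset {np.degrees(dphi):.0f} deg "
          f"-> inferred cell count ~ {abs(dphi) / sep:.1f}")
    ```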

  7. Development of high performance refractory fibers with enhanced insulating properties and longer service lifetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, P.C.; DePoorter, G.L.; Munoz, D.R.

    1991-02-01

    We have initiated a three-phase investigation of the development of high performance refractory fibers with enhanced insulating properties and longer usable lifetimes. This report presents the results of the first phase of the study, performed from Aug. 1989 through Feb. 1991, which shows that significant energy savings are possible through the use of high temperature insulating fibers that better retain their efficient insulating properties during the service lifetime of the fibers. The remaining phases of this program include pilot-scale development and then full-scale production feasibility development and evaluation of enhanced high temperature refractory insulating fibers. This first proof-of-principle phase of the program presents a summary of the current use patterns of refractory fibers, a laboratory evaluation of the high temperature performance characteristics of selected typical refractory fibers, and an analysis of the potential energy savings through the use of enhanced refractory fibers. The current use patterns of refractory fibers span a wide range of industries and high temperature furnaces within those industries. The majority of high temperature fiber applications are in furnaces operating between 2000 and 2600 °F. The fibers used in furnaces operating within this range provide attractive thermal resistance and low thermal storage at reasonable cost. A series of heat treatment studies performed for this phase of the program has shown that the refractory fibers, as initially manufactured, have attractive thermal conductivities for high temperature applications, but the fibers go through rapid devitrification and subsequent crystal growth upon high temperature exposure. Development of improved fibers, maintaining the favorable characteristics of the existing as-manufactured fibers, could save between 1 and 4% of the energy consumed in high temperature furnaces using refractory fibers.

  8. The Europa Imaging System (EIS): High-Resolution, 3-D Insight into Europa's Geology, Ice Shell, and Potential for Current Activity

    NASA Astrophysics Data System (ADS)

    Turtle, E. P.; McEwen, A. S.; Collins, G. C.; Fletcher, L. N.; Hansen, C. J.; Hayes, A.; Hurford, T., Jr.; Kirk, R. L.; Barr, A.; Nimmo, F.; Patterson, G.; Quick, L. C.; Soderblom, J. M.; Thomas, N.

    2015-12-01

    The Europa Imaging System will transform our understanding of Europa through global decameter-scale coverage, three-dimensional maps, and unprecedented meter-scale imaging. EIS combines narrow-angle and wide-angle cameras (NAC and WAC) designed to address high-priority Europa science and reconnaissance goals. It will: (A) Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar; (B) Constrain formation processes of surface features and the potential for current activity by characterizing endogenic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure, and by searching for evidence of recent activity, including potential plumes; and (C) Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. The NAC provides very high-resolution, stereo reconnaissance, generating 2-km-wide swaths at 0.5-m pixel scale from 50-km altitude, and uses a gimbal to enable independent targeting. NAC observations also include: near-global (>95%) mapping of Europa at ≤50-m pixel scale (to date, only ~14% of Europa has been imaged at ≤500 m/pixel, with best pixel scale 6 m); regional and high-resolution stereo imaging at <1-m/pixel; and high-phase-angle observations for plume searches. The WAC is designed to acquire pushbroom stereo swaths along flyby ground-tracks, generating digital topographic models with 32-m spatial scale and 4-m vertical precision from 50-km altitude. These data support characterization of cross-track clutter for radar sounding. The WAC also performs pushbroom color imaging with 6 broadband filters (350-1050 nm) to map surface units and correlations with geologic features and topography. EIS will provide comprehensive data sets essential to fulfilling the goal of exploring Europa to investigate its habitability and perform collaborative science with other investigations, including cartographic and geologic maps, regional and high-resolution digital topography, GIS products, color and photometric data products, a geodetic control network tied to radar altimetry, and a database of plume-search observations.

  9. Adventure Behavior Seeking Scale

    PubMed Central

    Próchniak, Piotr

    2017-01-01

    This article presents a new tool—the Adventure Behavior Seeking Scale (ABSS). The Adventure Behavior Seeking Scale was developed to assess individuals’ highly stimulating behaviors in natural environments. An exploratory factor analysis was conducted with 466 participants and resulted in one factor. The internal consistency was 0.80. A confirmatory factor analysis was performed using another sample of 406 participants, and results verified the one-factor structure. The findings indicate that people with a lot of experience in outdoor adventure have a higher score on the ABSS scale than control groups without such experience. The results also suggest that the 8-item ABSS scores were highly related to sensation seeking. The author discusses findings in regard to the ABSS as an instrument to measure outdoor adventure. However, further studies need to be carried out in other sample groups to further validate the scale. PMID:28555018

  10. The effect of primary sedimentation on full-scale WWTP nutrient removal performance.

    PubMed

    Puig, S; van Loosdrecht, M C M; Flameling, A G; Colprim, J; Meijer, S C F

    2010-06-01

    Traditionally, the performance of full-scale wastewater treatment plants (WWTPs) is measured based on influent and/or effluent and waste sludge flows and concentrations. Full-scale WWTP data typically have a high variance and often contain (large) measurement errors, which makes a sound process-engineering evaluation of WWTP performance difficult. It also makes it difficult to evaluate the effect of process changes in a plant or to compare plants with each other. In this paper we used a case study of a full-scale nutrient-removing WWTP. The plant normally treats presettled wastewater; as a means to increase nutrient removal, the plant was operated for a period with raw wastewater (27% of the influent flow) by-passing primary sedimentation. The effect of raw wastewater addition was evaluated by different approaches: (i) influent characteristics, (ii) design retrofit, (iii) effluent quality, (iv) removal efficiencies, (v) activated sludge characteristics, (vi) microbial activity tests and FISH analysis and, (vii) performance assessment based on mass balance evaluation. This paper demonstrates that the mass-balance evaluation approach helps WWTP engineers to distinguish between and quantify different strategies where the other approaches could not. In the studied case, by-passing raw wastewater (27% of the influent flow) directly to the biological reactor did not improve the effluent quality or the nutrient removal efficiency of the WWTP. The increase of the influent C/N and C/P ratios was associated with particulate compounds with a low COD/VSS ratio and a high non-biodegradable COD fraction. Copyright 2010 Elsevier Ltd. All rights reserved.
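
    The mass-balance assessment the authors advocate amounts to checking that independently measured fluxes close around the plant: influent COD, for example, must be recovered in the effluent, the waste sludge, and the oxygen consumed by oxidation. A minimal closure-check sketch follows; all flows and concentrations are illustrative assumptions, not data from this plant.

    ```python
    # COD mass balance around an activated-sludge plant; a large closure gap
    # flags measurement error before any performance comparison is attempted.
    influent_cod = 20_000 * 0.600      # m3/d * kg COD/m3 = 12,000 kg/d (assumed)
    effluent_cod = 20_000 * 0.045      # 900 kg/d (assumed)
    waste_sludge_cod = 300 * 8.0       # m3/d * kg COD/m3 = 2,400 kg/d (assumed)
    oxygen_consumed = 8_500            # kg O2/d, equivalent to COD oxidized (assumed)

    recovered = effluent_cod + waste_sludge_cod + oxygen_consumed
    gap = (influent_cod - recovered) / influent_cod
    print(f"COD balance gap: {gap:.1%}")   # gaps beyond ~5-10% suggest bad data
    ```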

  11. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover, educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., the formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.

  12. Los Alamos Explosives Performance Key to Stockpile Stewardship

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dattelbaum, Dana

    2014-11-03

    As the U.S. nuclear deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives used in most weapons. At Los Alamos National Laboratory, explosives research includes a wide variety of both large- and small-scale experiments, including small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and, at the Nevada National Security Site, underground sub-critical experiments.

  13. High Performance Nano-Crystalline Oxide Fuel Cell Materials. Defects, Structures, Interfaces, Transport, and Electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, Scott; Poeppelmeier, Ken; Mason, Tom

    This project addresses fundamental materials challenges in solid oxide electrochemical cells, devices that have a broad range of important energy applications. Although nano-scale mixed ionically and electronically conducting (MIEC) materials provide an important opportunity to improve performance and reduce device operating temperature, durability issues threaten to limit their utility and have remained largely unexplored. Our work has focused on both (1) understanding the fundamental processes related to oxygen transport and surface-vapor reactions in nano-scale MIEC materials, and (2) determining and understanding the key factors that control their long-term stability. Furthermore, materials stability has been explored under the “extreme” conditions encountered in many solid oxide cell applications, i.e., very high or very low effective oxygen pressures, and high current density.

  14. Cl-Assisted Large Scale Synthesis of Cm-Scale Buckypapers of Fe₃C-Filled Carbon Nanotubes with Pseudo-Capacitor Properties: The Key Role of SBA-16 Catalyst Support as Synthesis Promoter.

    PubMed

    Boi, Filippo S; He, Yi; Wen, Jiqiu; Wang, Shanling; Yan, Kai; Zhang, Jingdong; Medranda, Daniel; Borowiec, Joanna; Corrias, Anna

    2017-10-23

    We present a novel chemical vapour deposition (CVD) approach in which the large-scale fabrication of ferromagnetically-filled cm-scale buckypapers is achieved through the deposition of a mesoporous supported catalyst (SBA-16) on a silicon substrate. We demonstrate that SBA-16 has the crucial role of promoting the growth of carbon nanotubes (CNTs) on a horizontal plane with random orientation rather than in a vertical direction, therefore allowing facile fabrication of cm-scale CNT buckypapers free from the onion-crust by-product observed on the buckypaper surface in previous reports. The morphology and composition of the obtained CNT buckypapers are analyzed in detail by scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), high resolution TEM (HRTEM), and thermogravimetric analysis (TGA), while structural analysis is performed by Rietveld refinement of XRD data. The room temperature magnetic properties of the produced buckypapers are also investigated and reveal a high coercivity of 650 Oe. Additionally, the electrochemical performances of these buckypapers are demonstrated and reveal behavior compatible with that of a pseudo-capacitor (resistive-capacitor), with better performance than previously studied layered buckypapers of Fe-filled CNTs obtained by pyrolysis of dichlorobenzene-ferrocene mixtures. These measurements indicate that these materials show promise for applications in energy storage systems as flexible electrodes.

  15. High-Throughput Microbore UPLC-MS Metabolic Phenotyping of Urine for Large-Scale Epidemiology Studies.

    PubMed

    Gray, Nicola; Lewis, Matthew R; Plumb, Robert S; Wilson, Ian D; Nicholson, Jeremy K

    2015-06-05

    A new generation of metabolic phenotyping centers are being created to meet the increasing demands of personalized healthcare, and this has resulted in a major requirement for economical, high-throughput metabonomic analysis by liquid chromatography-mass spectrometry (LC-MS). Meeting these new demands represents an emerging bioanalytical problem that must be solved if metabolic phenotyping is to be successfully applied to large clinical and epidemiological sample sets. Ultraperformance (UP)LC-MS-based metabolic phenotyping, based on 2.1 mm i.d. LC columns, enables comprehensive metabolic phenotyping but, when employed for the analysis of thousands of samples, results in high solvent usage. The use of UPLC-MS employing 1 mm i.d. columns for metabolic phenotyping rather than the conventional 2.1 mm i.d. methodology shows that the resulting optimized microbore method provided equivalent or superior performance in terms of peak capacity, sensitivity, and robustness. On average, we also observed, when using the microbore scale separation, an increase in response of 2-3 fold over that obtained with the standard 2.1 mm scale method. When applied to the analysis of human urine, the 1 mm scale method showed no decline in performance over the course of 1000 analyses, illustrating that microbore UPLC-MS represents a viable alternative to conventional 2.1 mm i.d. formats for routine large-scale metabolic profiling studies while also resulting in a 75% reduction in solvent usage. The modest increase in sensitivity provided by this methodology also offers the potential to either reduce sample consumption or increase the number of metabolite features detected with confidence due to the increased signal-to-noise ratios obtained. Implementation of this miniaturized UPLC-MS method of metabolic phenotyping results in clear analytical, economic, and environmental benefits for large-scale metabolic profiling studies with similar or improved analytical performance compared to conventional UPLC-MS.
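
    The reported ~75% cut in solvent usage follows directly from column geometry: at constant linear velocity, volumetric flow scales with the square of the column internal diameter. A quick arithmetic check using the two diameters quoted in the abstract:

    ```python
    # Volumetric flow (hence solvent use) scales with column cross-section.
    d_conventional_mm = 2.1    # conventional UPLC column i.d. (from the abstract)
    d_microbore_mm = 1.0       # microbore column i.d. (from the abstract)

    flow_ratio = (d_microbore_mm / d_conventional_mm) ** 2
    print(f"relative flow: {flow_ratio:.3f}")            # ~0.227
    print(f"solvent reduction: {1 - flow_ratio:.0%}")    # ~77%, matching the ~75% reported
    ```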

  16. Wide range scaling laws for radiation driven shock speed, wall albedo and ablation parameters for high-Z materials

    NASA Astrophysics Data System (ADS)

    Mishra, Gaurav; Ghosh, Karabi; Ray, Aditi; Gupta, N. K.

    2018-06-01

    Radiation hydrodynamic (RHD) simulations for four different potential high-Z hohlraum materials, namely Tungsten (W), Gold (Au), Lead (Pb), and Uranium (U), are performed in order to investigate their performance with respect to x-ray absorption, re-emission, and ablation properties when irradiated by constant temperature drives. A universal functional form is derived for estimating the time-dependent wall albedo of high-Z materials. Among the high-Z materials studied, it is observed that for a fixed simulation time the albedo is maximum for Au below 250 eV, whereas it is maximum for U above 250 eV. New scaling laws for shock speed vs drive temperature, applicable over a wide temperature range of 100 eV to 500 eV, are proposed based on the physics of x-ray driven stationary ablation. The resulting scaling relation for a reference material, Aluminium (Al), shows good agreement with Kauffman's power law for temperatures ranging from 100 eV to 275 eV. New scaling relations are also obtained for the temperature-dependent mass ablation rate and ablation pressure through RHD simulation. Finally, our study reveals that for temperatures above 250 eV, U serves as the better hohlraum material since it offers maximum re-emission of x-rays along with a comparable mass ablation rate; the traditional choice, Au, works well for temperatures below 250 eV. Besides inertial confinement fusion (ICF), the new scaling relations may find application in view-factor codes, which generally ignore atomic physics calculations of opacities and emissivities, details of laser-plasma interaction, and hydrodynamic motions.
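
    The proposed scaling relations are power laws in the drive temperature, which are conveniently fit in log-log space. A minimal sketch of such a fit, assuming the form u_s = a·T^b; the data points below are placeholders standing in for RHD simulation output, not values from the paper.

    ```python
    import numpy as np

    # Hypothetical (drive temperature [eV], shock speed) pairs standing in for
    # RHD simulation output; a power law is linear in log space.
    T = np.array([100.0, 150.0, 200.0, 300.0, 400.0, 500.0])
    u = np.array([20.0, 33.0, 47.0, 78.0, 110.0, 145.0])   # placeholder values

    b, log_a = np.polyfit(np.log(T), np.log(u), 1)
    print(f"fitted law: u_s = {np.exp(log_a):.3g} * T^{b:.3f}")
    ```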

  17. Study of multi-functional precision optical measuring system for large scale equipment

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters such as size, attitude, and position therefore requires a measurement system with high precision, multiple functions, portability, and other characteristics. However, existing measuring instruments, such as the laser tracker, total station, and photogrammetry system, mostly have a single function and require station moving, among other shortcomings. A laser tracker must work with a cooperative target and can hardly meet measurement requirements in extreme environments. A total station is mainly used for outdoor surveying and mapping and hardly achieves the accuracy demanded in industrial measurement. A photogrammetry system can achieve wide-range multi-point measurement, but its measuring range is limited and the station must be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning the measurement path and by tracking and measuring a cooperative target. The system is based on several key technologies, such as absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale and high-end equipment.

  18. Scale effects and a method for similarity evaluation in micro electrical discharge machining

    NASA Astrophysics Data System (ADS)

    Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua

    2016-08-01

    Electrical discharge machining (EDM) is a promising non-traditional micro machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performances of micro EDM. A new concept of "scale effects" in micro EDM is proposed; these scale effects reveal the difference in machining performances between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single-factor experiments are conducted and the experimental results are analyzed by discussing the similarity difference and similarity precision. The results show that the output measures affected by scale effects in micro EDM do not change linearly with the discharge parameters. The values of similarity precision of machining time significantly increase when scaling down the capacitance or open-circuit voltage. It is indicated that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, which means that a micro EDM system with lower discharge energy experiences more scale effects. The largest similarity difference is 5.34, while the largest similarity precision can be as high as 114.03. It is suggested that similarity precision is more effective than similarity difference in reflecting the scale effects and their fluctuation. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. This research offers engineering value for optimizing the machining parameters and improving the machining performances of micro EDM.

  19. Development of a performance anxiety scale for music students.

    PubMed

    Çirakoğlu, Okan Cem; Şentürk, Gülce Çoskun

    2013-12-01

    In the present research, the Performance Anxiety Scale for Music Students (PASMS) was developed in three successive studies. In Study 1, the factor structure of PASMS was explored and three components were found: fear of stage (FES), avoidance (AVD) and symptoms (SMP). The internal consistency of the subscales of PASMS, which consisted of 27 items, varied between 0.89 and 0.91. The internal consistency for the whole scale was found to be 0.95. The correlations among PASMS and other anxiety-related measures were significant and in the expected direction, indicating that the scale has convergent validity. The construct validity of the scale was assessed in Study 2 by confirmatory factor analysis. After several revisions, the final tested model achieved acceptable fits. In Study 3, the 14-day test-retest reliability of the final 24-item version of PASMS was tested and found to be extremely high (0.95). In all three studies, the whole scale and subscale scores of females were significantly higher than for males.
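
    The internal-consistency figures quoted (0.89-0.95) are Cronbach's alpha values. A minimal sketch of the standard computation on a respondents-by-items score matrix follows; the data here are synthetic, purely to exercise the formula.

    ```python
    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for a (respondents x items) score matrix."""
        k = scores.shape[1]
        item_var_sum = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                 # shared anxiety trait
    items = latent + 0.7 * rng.normal(size=(200, 24))  # 24 items, as in the final PASMS
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```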

  20. Shock-induced mechanochemistry in heterogeneous reactive powder mixtures

    NASA Astrophysics Data System (ADS)

    Gonzales, Manny; Gurumurthy, Ashok; Kennedy, Gregory; Neel, Christopher; Gokhale, Arun; Thadhani, Naresh

    The bulk response of compacted powder mixtures subjected to high-strain-rate loading conditions in various configurations is a manifestation of behavior at the meso-scale. Simulations at the meso-scale can provide additional confirmation of the possible origins of the observed response. This work investigates the bulk dynamic response of Ti+B+Al reactive powder mixtures under two extreme loading configurations - uniaxial stress and strain loading - leveraging highly-resolved in-situ measurements and meso-scale simulations. Modified rod-on-anvil impact tests on a reactive pellet demonstrate an optimized stoichiometry promoting reaction in Ti+B+Al. Encapsulated powders subjected to shock compression via flyer plate tests provide possible evidence of a shock-induced reaction at high pressures. Meso-scale simulations of the direct experimental configurations employing highly-resolved microstructural features of the Ti+B compacted mixture show complex inhomogeneous deformation responses and reveal the importance of meso-scale features such as particle size and morphology and their effects on the measured response. Funding is generously provided by DTRA through Grant No. HDTRA1-10-1-0038 (Dr. Su Peiris - Program Manager) and by the SMART (AFRL Wright Patterson AFB) and NDSEG fellowships (High Performance Computing and Modernization Office).

  1. High-Performance and Omnidirectional Thin-Film Amorphous Silicon Solar Cell Modules Achieved by 3D Geometry Design.

    PubMed

    Yu, Dongliang; Yin, Min; Lu, Linfeng; Zhang, Hanzhong; Chen, Xiaoyuan; Zhu, Xufei; Che, Jianfei; Li, Dongdong

    2015-11-01

    High-performance thin-film hydrogenated amorphous silicon solar cells are achieved by combining macroscale 3D tubular substrates and nanoscaled 3D cone-like antireflective films. The tubular geometry delivers a series of advantages for large-scale deployment of photovoltaics, such as omnidirectional performance, easier encapsulation, decreased wind resistance, and easy integration with a second device inside the glass tube. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Development and performance evaluation of frustum cone shaped churn for small scale production of butter.

    PubMed

    Kalla, Adarsh M; Sahu, C; Agrawal, A K; Bisen, P; Chavhan, B B; Sinha, Geetesh

    2016-05-01

    The present research was intended to develop a small-scale butter churn and evaluate its performance by altering churning temperature and churn speed during butter making. In the present study, cream was churned at different temperatures (8, 10 and 12 °C) and churn speeds (35, 60 and 85 rpm). The optimum values of churning time (40 min), moisture content (16 %) and overrun (19.42 %) were obtained when cream was churned at a churning temperature of 10 °C and a churn speed of 60 rpm. Using appropriate churning temperature and churn speed, high quality butter can be produced at cottage scale.

  3. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
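
    A minimal sketch of the bookkeeping behind a three-dimensional domain decomposition of the kind described: each rank owns one box of the simulation cell, and atoms lying within the interaction cutoff of a box face must additionally be sent to the neighbouring rank as halo copies. This is a serial illustration only; the cell size, process grid, and cutoff are assumptions, and BOPfox's actual scheme is more involved.

    ```python
    import numpy as np

    def owner_rank(pos, box, dims):
        """Rank owning a position in a dims = (px, py, pz) process grid."""
        ijk = np.minimum((pos / box * dims).astype(int), np.array(dims) - 1)
        return ijk[0] * dims[1] * dims[2] + ijk[1] * dims[2] + ijk[2]

    box = np.array([40.0, 40.0, 40.0])   # simulation cell, arbitrary units (assumed)
    dims = (2, 2, 2)                     # 8 subdomains (assumed)
    cutoff = 4.0                         # BOP interaction range (assumed)

    atoms = np.random.default_rng(1).uniform(0.0, 40.0, size=(1000, 3))
    ranks = np.array([owner_rank(p, box, dims) for p in atoms])

    # Atoms within `cutoff` of a subdomain face need halo exchange.
    sub = box / np.array(dims)
    local = atoms % sub
    halo = ((local < cutoff) | (local > sub - cutoff)).any(axis=1)
    print(f"rank 0 owns {(ranks == 0).sum()} atoms; {halo.sum()} atoms are halo candidates")
    ```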

  4. Multiplexed, High Density Electrophysiology with Nanofabricated Neural Probes

    PubMed Central

    Du, Jiangang; Blanche, Timothy J.; Harrison, Reid R.; Lester, Henry A.; Masmanidis, Sotiris C.

    2011-01-01

    Extracellular electrode arrays can reveal the neuronal network correlates of behavior with single-cell, single-spike, and sub-millisecond resolution. However, implantable electrodes are inherently invasive, and efforts to scale up the number and density of recording sites must compromise on device size in order to connect the electrodes. Here, we report on silicon-based neural probes employing nanofabricated, high-density electrical leads. Furthermore, we address the challenge of reading out multichannel data with an application-specific integrated circuit (ASIC) performing signal amplification, band-pass filtering, and multiplexing functions. We demonstrate high spatial resolution extracellular measurements with a fully integrated, low noise 64-channel system weighing just 330 mg. The on-chip multiplexers make possible recordings with substantially fewer external wires than the number of input channels. By combining nanofabricated probes with ASICs we have implemented a system for performing large-scale, high-density electrophysiology in small, freely behaving animals that is both minimally invasive and highly scalable. PMID:22022568

  5. Reinforcements: The key to high performance composite materials

    NASA Technical Reports Server (NTRS)

    Grisaffe, Salvatore J.

    1990-01-01

    Better high temperature fibers are the key to high performance, light weight composite materials. However, current U.S. and Japanese fibers still have inadequate high temperature strength, creep resistance, oxidation resistance, modulus, stability, and thermal expansion match with some of the high temperature matrices being considered for future aerospace applications. In response to this clear deficiency, both countries have research and development activities underway. Once successful fibers are identified, their production will need to be taken from laboratory scale to pilot plant scale. In such efforts it can be anticipated that Japanese decisions will be based on longer term criteria than those applied in the U.S. Since the initial markets will be small, short term financial criteria may adversely limit the number and strength of U.S. aerospace materials suppliers well into the 21st century. This situation can only be compounded by Japanese interest in learning to make commercial products with existing materials, so that when the required advanced fibers eventually do arrive, their manufacturing skills will already be developed.

  6. Ultra scale-down device to predict dewatering levels of solids recovered in a continuous scroll decanter centrifuge.

    PubMed

    Lopes, A G; Keshavarz-Moore, E

    2013-01-01

    During centrifugation, the major challenge in the recovery of extracellular proteins is removing the maximum amount of liquid entrapped within the spaces between the settled solids - the dewatering level. The ability of the scroll decanter centrifuge (SDC) to continuously process large amounts of feed material with a high concentration of solids, without the need for resuspension of feeds, while achieving relatively high dewatering, could be of great benefit for future use in the biopharmaceutical industry. However, reliable prediction of dewatering in such a centrifuge requires tests using the same kind of equipment at pilot scale, which are time consuming and costly. To alleviate the need for pilot-scale trials, a novel USD device, requiring reduced amounts of feed (2 mL) and usable in the laboratory, was developed to predict the dewatering levels of a SDC. To verify the USD device, the dewatering levels achieved were plotted against equivalent compression (Gtcomp) and decanting (Gtdec) times, obtained from the scroll rates and feed flow rates operated at pilot scale, respectively. The USD device was able to successfully match the dewatering trends of the pilot scale as a function of both Gtcomp and Gtdec, particularly for high cell density feeds, hence accounting for all key variables that influence dewatering in a SDC. In addition, it accurately mimicked the maximum dewatering performance of the pilot-scale equipment. The USD device therefore has the potential to be a useful tool at early stages of process development for gathering performance data in the laboratory, thus minimizing lengthy and costly runs with a pilot-scale SDC. © 2013 American Institute of Chemical Engineers.
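
    The comparison against equivalent compression (Gtcomp) and decanting (Gtdec) times rests on matching centrifugal exposure, i.e. the product of relative centrifugal force and time. A minimal sketch of that matching calculation follows; the rotor radii, speeds, and times are illustrative assumptions, not values from the study.

    ```python
    import math

    def rcf(rpm: float, radius_m: float) -> float:
        """Relative centrifugal force, in multiples of g, at a given radius."""
        omega = 2 * math.pi * rpm / 60.0
        return omega**2 * radius_m / 9.81

    g_lab = rcf(rpm=12_000, radius_m=0.05)     # small USD device (assumed)
    g_pilot = rcf(rpm=4_000, radius_m=0.15)    # pilot-scale decanter (assumed)

    t_pilot_s = 120.0                          # pilot compression time (assumed)
    t_lab_s = g_pilot * t_pilot_s / g_lab      # lab time giving the same G*t
    print(f"G_lab = {g_lab:.0f} g, G_pilot = {g_pilot:.0f} g, "
          f"matched lab spin time = {t_lab_s:.0f} s")
    ```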

  7. Use of a shoulder abduction brace after arthroscopic rotator cuff repair: A study on gait performance and falls.

    PubMed

    Sonoda, Yuma; Nishioka, Takashi; Nakajima, Ryo; Imai, Shinji; Vigers, Piers; Kawasaki, Taku

    2018-04-01

    Fall prevention is essential in patients after arthroscopic rotator cuff repair because of the high risk of re-rupture. However, there are no reports related to falls that occur during the early postoperative period, while the affected limb is immobilized. This study assessed gait performance and falls in patients using a shoulder abduction brace after arthroscopic rotator cuff repair. Prospective cohort and postoperative repeated measures. This study included 29 patients (mean age, 67.1 ± 7.4 years) who underwent arthroscopic rotator cuff repair followed by rehabilitation. The timed up and go test, Geriatric Depression Scale, and Falls Efficacy Scale were measured, and the numbers of falls were compared between shoulder abduction brace users and patients who had undergone total hip or knee arthroplasty. In arthroscopic rotator cuff repair patients, there were significant improvements in the timed up and go test and Geriatric Depression Scale, but no significant differences in the Falls Efficacy Scale, between the second and fifth postoperative weeks (p < 0.05). Additionally, arthroscopic rotator cuff repair patients fell more often than patients with total hip arthroplasty or total knee arthroplasty during the same period. The findings suggest that rehabilitation in arthroscopic rotator cuff repair patients is beneficial, but decreased gait performance due to the immobilizing shoulder abduction brace can lead to falls. Clinical relevance: Although rehabilitation helps motor function and mental health after arthroscopic rotator cuff repair, shoulder abduction brace use is associated with impaired gait performance, high Falls Efficacy Scale scores, and risk of falls, so awareness of risk factors including medications and lower limb dysfunctions is especially important after arthroscopic rotator cuff repair.

  8. Large-scale self-assembled zirconium phosphate smectic layers via a simple spray-coating process

    NASA Astrophysics Data System (ADS)

    Wong, Minhao; Ishige, Ryohei; White, Kevin L.; Li, Peng; Kim, Daehak; Krishnamoorti, Ramanan; Gunther, Robert; Higuchi, Takeshi; Jinnai, Hiroshi; Takahara, Atsushi; Nishimura, Riichi; Sue, Hung-Jue

    2014-04-01

    The large-scale assembly of asymmetric colloidal particles is used in creating high-performance fibres. A similar concept is extended to the manufacturing of thin films of self-assembled two-dimensional crystal-type materials with enhanced and tunable properties. Here we present a spray-coating method to manufacture thin, flexible and transparent epoxy films containing zirconium phosphate nanoplatelets self-assembled into a lamellar arrangement aligned parallel to the substrate. The self-assembled mesophase of zirconium phosphate nanoplatelets is stabilized by epoxy pre-polymer and exhibits rheology favourable towards large-scale manufacturing. The thermally cured film forms a mechanically robust coating and shows excellent gas barrier properties at both low- and high-humidity levels as a result of the highly aligned and overlapping arrangement of nanoplatelets. This work shows that the large-scale ordering of high-aspect-ratio nanoplatelets is easier to achieve than previously thought and may have implications for the technological applications of similar materials.

  9. Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions

    DOE PAGES

    McLerran, Larry; Tribedy, Prithwish

    2015-11-02

    High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.
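
    The mechanism can be illustrated with a toy Monte Carlo: start from a baseline multiplicity distribution (standing in for color-charge fluctuations alone), then rescale event-by-event by a fluctuating Q_s² factor and watch the high-multiplicity tail fatten. Everything below is an illustrative assumption, not the IP-Glasma calculation; in particular the gamma-Poisson baseline and the Gaussian width of ln(Q_s²/⟨Q_s²⟩) are placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_events = 1_000_000

    # Baseline: gamma-Poisson (negative-binomial-like) multiplicity fluctuations.
    def sample(qs2_factor):
        mean = rng.gamma(shape=4.0, scale=5.0, size=n_events) * qs2_factor
        return rng.poisson(mean)

    baseline = sample(1.0)
    # Add intrinsic Q_s fluctuations: ln(Q_s^2 / <Q_s^2>) drawn as a Gaussian.
    with_qs = sample(np.exp(rng.normal(0.0, 0.5, size=n_events)))

    for cut in (60, 100):
        print(f"P(N > {cut}): {np.mean(baseline > cut):.2e} -> {np.mean(with_qs > cut):.2e}")
    ```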

  10. CONTINUED ASSESSMENT OF A HIGH-VELOCITY FABRIC FILTRATION SYSTEM USED TO CONTROL FLY ASH EMISSIONS

    EPA Science Inventory

    The report gives results of a full-scale investigation of the performance of a variety of filter media, to provide technical and economic information under high-velocity conditions (high gas/cloth ratio). The fly ash emission studies demonstrated that woven fiberglass fabrics and...

  11. [Reliability of the Japanese version of the Scale for the Assessment and Rating of Ataxia (SARA)].

    PubMed

    Sato, Kazunori; Yabe, Ichiro; Soma, Hiroyuki; Yasui, Kenichi; Ito, Mizuki; Shimohata, Takayoshi; Onodera, Osamu; Nakashima, Kenji; Sobue, Gen; Nishizawa, Masatoyo; Sasaki, Hidenao

    2009-05-01

    The International Cooperative Ataxia Rating Scale (ICARS) is widely used for assessing the severity of cerebellar ataxia. However, this scale comprises many items, which makes it insufficiently practical for the daily assessment of ataxic patients. A new rating scale, the Scale for the Assessment and Rating of Ataxia (SARA), was shown to provide highly reliable assessments; further, SARA scores correlated with the ICARS score and the Barthel index. After obtaining permission, the original SARA was translated into Japanese. We examined the reliability and internal consistency of the Japanese version of SARA for the assessment of cerebellar ataxia in 66 patients with spinocerebellar degeneration. Intraclass correlation coefficients (ICC) were greater than 0.8 except for the inter-rater "finger chase" and "fast alternating hand movement" tests. The Japanese version of SARA is highly reliable and very useful for the assessment of cerebellar ataxia on a daily basis.
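
    The intraclass correlation coefficients reported can be computed from two-way ANOVA mean squares. A minimal sketch of ICC(2,1) (two-way random effects, single rater) on synthetic ratings, purely to show the computation:

    ```python
    import numpy as np

    def icc2_1(x: np.ndarray) -> float:
        """ICC(2,1) from a (subjects x raters) matrix (Shrout & Fleiss)."""
        n, k = x.shape
        grand = x.mean()
        ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
        ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
        resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
        ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    rng = np.random.default_rng(2)
    truth = rng.uniform(0, 40, size=(66, 1))             # 66 patients, SARA range 0-40
    ratings = truth + rng.normal(0, 2.0, size=(66, 3))   # 3 raters with noise (synthetic)
    print(f"ICC(2,1) = {icc2_1(ratings):.2f}")
    ```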

  12. Nano-scaled Pt/Ag/Ni/Au contacts on p-type GaN for low contact resistance and high reflectivity.

    PubMed

    Kwon, Y W; Ju, I C; Kim, S K; Choi, Y S; Kim, M H; Yoo, S H; Kang, D H; Sung, H K; Shin, K; Ko, C G

    2011-07-01

    We synthesized a vertical-structured LED (VLED) using nano-scaled Pt between p-type GaN and an Ag-based reflector. The metallization scheme on p-type GaN for high reflectance and low contact resistance was nano-scaled Pt/Ag/Ni/Au. Nano-scaled Pt (5 Å) on Ag/Ni/Au exhibited a reasonably high reflectance of 86.2% at a wavelength of 460 nm due to the high transmittance of light through the nano-scaled Pt (5 Å) onto the Ag layer. Ohmic behavior of the contact metal, Pt/Ag/Ni/Au, to p-type GaN was achieved using surface treatments of the p-type GaN prior to the deposition of the contact metals, and the specific contact resistance decreased with decreasing Pt thickness, reaching 1.5 × 10⁻⁴ Ω cm² at 5 Å. The forward voltage of the Pt (5 Å)/Ag/Ni contact to p-type GaN was 4.19 V at a current injection of 350 mA. Output voltages for various Pt thicknesses were highest at the smallest Pt thickness due to its high transmittance of light onto the Ag, leading to high reflectance. Our results suggest that nano-scaled Pt/Ag/Ni could act as a promising contact metal to p-type GaN for improving the performance of VLEDs.

  13. Automatic Energy Schemes for High Performance Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundriyal, Vaibhav

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers affect significantly their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnect, such as Infiniband, may be exploited to maximize energy savings while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on the per-call basis. Next, it targets point-to-point communications to group them into phases and apply frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them apart from DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
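
    The energy argument can be made concrete with a toy model: compute phases stretch when the frequency is lowered, but communication phases are dominated by stalls and do not, so scaling the frequency down only during communication saves energy at little performance cost. A minimal sketch with an assumed cubic dynamic-power model follows; all constants are illustrative, and real platforms apply DVFS through OS interfaces not modeled here.

    ```python
    # Toy DVFS model: P(f) = P_static + c * f^3; communication time is
    # frequency-insensitive, so throttling it trades no performance for energy.
    P_STATIC_W = 40.0     # static power, W (assumed)
    C_DYN = 25.0          # dynamic coefficient, W/GHz^3 (assumed)
    F_MAX = 2.5           # nominal frequency, GHz (assumed)

    def power(f_ghz):
        return P_STATIC_W + C_DYN * f_ghz**3

    def energy(f_compute, f_comm, t_compute=10.0, t_comm=5.0):
        t_c = t_compute * F_MAX / f_compute   # compute stretches at lower f
        return power(f_compute) * t_c + power(f_comm) * t_comm

    e_base = energy(F_MAX, F_MAX)
    e_dvfs = energy(F_MAX, 1.2)               # throttle communication phases only
    print(f"baseline {e_base:.0f} J -> comm-phase DVFS {e_dvfs:.0f} J "
          f"({1 - e_dvfs / e_base:.0%} saved, compute time unchanged)")
    ```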

  14. A General Sparse Tensor Framework for Electronic Structure Theory

    DOE PAGES

    Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...

    2017-01-24

    Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. But the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ language library, and we demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.
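
    The core abstraction can be pictured as a dictionary of dense tiles keyed by block indices, with contractions skipping blocks that are absent (identically zero). A minimal Python sketch of a block-sparse matrix product, for illustration only; the library itself is a symbolic C++ framework supporting arbitrary multi-dimensional sparsity.

    ```python
    import numpy as np

    def block_sparse_matmul(A: dict, B: dict) -> dict:
        """C[i,k] += A[i,j] @ B[j,k]; blocks absent from A or B cost nothing."""
        C = {}
        for (i, j), a in A.items():
            for (j2, k), b in B.items():
                if j == j2:
                    C[(i, k)] = C.get((i, k), 0) + a @ b
        return C

    bs = 4   # tile edge length
    A = {(0, 0): np.eye(bs), (1, 2): np.full((bs, bs), 2.0)}
    B = {(0, 1): np.ones((bs, bs)), (2, 0): np.eye(bs)}
    C = block_sparse_matmul(A, B)
    print(sorted(C))   # only blocks (0, 1) and (1, 0) are ever formed
    ```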

  15. Chemical Reactivity Test (CRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaka, F.

    The Chemical Reactivity Test (CRT) is used to determine the thermal stability of High Explosives (HEs) and chemical compatibility between (HEs) and alien materials. The CRT is one of the small-scale safety tests performed on HE at the High Explosives Applications Facility (HEAF).

  16. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. These are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  17. Assessing the performance of multi-purpose channel management measures at increasing scales

    NASA Astrophysics Data System (ADS)

    Wilkinson, Mark; Addy, Steve

    2016-04-01

    In addition to hydroclimatic drivers, sediment deposition from high energy river systems can reduce channel conveyance capacity and lead to significant increases in flood risk. There is an increasing recognition that we need to work with the interplay of natural hydrological and morphological processes in order to attenuate flood flows and manage sediment (both coarse and fine). This typically includes both catchment (e.g. woodland planting, wetlands) and river (e.g. wood placement, floodplain reconnection) restoration approaches. The aim of this work was to assess at which scales channel management measures (notably wood placement and flood embankment removal) are most appropriate for flood and sediment management in high energy upland river systems. We present research findings from two densely instrumented research sites in Scotland which regularly experience flood events and have associated coarse sediment problems. We assessed the performance of a range of novel trial measures for three different scales: wooded flow restrictors and gully tree planting at the small scale (<1 km2), floodplain tree planting and engineered log jams at the intermediate scale (5-60 km2), and flood embankment lowering at the large scale (350 km2). Our results suggest that at the smallest scale, care is needed in the installation of flow restrictors. It was found for some restrictors that vertical erosion can occur if the tributary channel bed is disturbed. Preliminary model evidence suggested they have a very limited impact on channel discharge and flood peak delay owing to the small storage areas behind the structures. At intermediate scales, the ability to trap sediment by engineered log jams was limited. Of the 45 engineered log jams installed, around half created a small geomorphic response and only 5 captured a significant amount of coarse material (during one large flood event). As scale increases, the chance of damage or loss of wood placement is greatest. Monitoring highlights the importance of structure design (porosity and degree of channel blockage) and placement in zones of high sediment transport to optimise performance. At the large scale, well designed flood embankment lowering can improve connectivity to the floodplain during low to medium return period events. However, ancillary works to stabilise the bank failed thus emphasising the importance of letting natural processes readjust channel morphology and hydrological connections to the floodplain. Although these trial measures demonstrated limited effects, this may be in part owing to restrictions in the range of hydroclimatological conditions during the study period and further work is needed to assess the performance under more extreme conditions. This work will contribute to refining guidance for managing channel coarse sediment problems in the future which in turn could help mitigate flooding using natural approaches.

  18. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of traditional HPC cluster.
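
    At the heart of such assemblers is the de Bruijn graph: every k-mer of every read contributes an edge from its (k-1)-prefix to its (k-1)-suffix, and contigs correspond to unambiguous paths. A minimal serial sketch of the construction on toy reads; GiGA distributes exactly this kind of graph over Hadoop/Giraph.

    ```python
    from collections import defaultdict

    def de_bruijn(reads, k):
        """Map each (k-1)-mer to the set of (k-1)-mers that follow it."""
        graph = defaultdict(set)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].add(kmer[1:])
        return graph

    reads = ["ACGTACGA", "CGTACGAT"]   # toy reads
    for node, succs in sorted(de_bruijn(reads, k=4).items()):
        print(node, "->", sorted(succs))
    ```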

  19. Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution

    NASA Astrophysics Data System (ADS)

    Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.

    2016-12-01

    Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process. However, current geochemical modeling of fractures cannot capture this multi-scale nature of geochemical and mechanical impacts on fracture evolution, and is limited to either a continuum or pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often, a cubic law) of the fracture aperture. This approach has the limitation that it oversimplifies flow within the fracture, omitting pore-scale effects and assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in the domain sizes they can tractably treat (Steefel et al., 2013). Thus, there is a critical need to develop an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high performance simulation capability Chombo-Crunch, leveraged by high resolution characterization and experiments. The modeling framework is based on the adaptive capability in Chombo, which enables not only mesh refinement but also refinement of the model (pore scale or continuum Darcy scale) in a dynamic way, such that the appropriate model is used only when and where it is needed. Explicit flux matching provides coupling between the scales.
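
    The continuum-scale "cubic law" mentioned here treats the fracture as a parallel-plate channel: flow per unit width grows with the cube of the aperture (equivalently, permeability b²/12), which is why dissolution-driven widening has such a strong effect on conveyance. A minimal sketch:

    ```python
    # Parallel-plate ("cubic law") fracture flow: Q/w = (b^3 / 12 mu) * dP/dx.
    def flow_per_width(b_m, dpdx_pa_per_m, mu_pa_s=1.0e-3):
        return b_m**3 * dpdx_pa_per_m / (12.0 * mu_pa_s)

    # Doubling the aperture by dissolution raises conveyance eightfold.
    for b_um in (100.0, 200.0):
        q = flow_per_width(b_um * 1e-6, dpdx_pa_per_m=1_000.0)
        print(f"aperture {b_um:.0f} um -> {q:.2e} m^2/s per unit width")
    ```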

  20. Spatio-temporal modeling and optimization of a deformable-grating compressor for short high-energy laser pulses

    DOE PAGES

    Qiao, Jie; Papa, J.; Liu, X.

    2015-09-24

    Monolithic large-scale diffraction gratings are desired to improve the performance of high-energy laser systems and scale them to higher energy, but the surface deformation of these diffraction gratings induces spatio-temporal coupling that is detrimental to the focusability and compressibility of the output pulse. A new deformable-grating-based pulse compressor architecture with optimized actuator positions has been designed to correct the spatial and temporal aberrations induced by grating wavefront errors. An integrated optical model has been built to analyze the effect of grating wavefront errors on the spatio-temporal performance of a compressor based on four deformable gratings. Moreover, a 1.5-meter deformable grating has been optimized using an integrated finite-element-analysis and genetic-optimization model, leading to spatio-temporal performance similar to the baseline design with ideal gratings.

  1. Importance of balanced architectures in the design of high-performance imaging systems

    NASA Astrophysics Data System (ADS)

    Sgro, Joseph A.; Stanton, Paul C.

    1999-03-01

    Imaging systems employed in demanding military and industrial applications, such as automatic target recognition and computer vision, typically require real-time high-performance computing resources. While high-performance computing systems have traditionally relied on proprietary architectures and custom components, recent advances in high performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems. A common pitfall in the design of high-performance imaging systems, particularly systems employing scalable multiprocessor architectures, is the failure to balance computational and memory bandwidth. The performance of standard cluster designs, for example, in which several processors share a common memory bus, is typically constrained by memory bandwidth. The characteristic symptom of this problem is the failure of system performance to scale as more processors are added. The problem becomes exacerbated if I/O and memory functions share the same bus. The recent introduction of microprocessors with large internal caches and high performance external memory interfaces makes it practical to design high-performance imaging systems with balanced computational and memory bandwidth. Real-world examples of such designs will be presented, along with a discussion of adapting algorithm design to best utilize available memory bandwidth.
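
    The balance argument can be made concrete with a back-of-envelope check. A sketch with hypothetical hardware numbers (not figures from the paper):

    ```python
    # Does adding processors on a shared memory bus help? The achievable
    # throughput is capped by whichever bound is lower. Numbers are hypothetical.
    peak_flops_per_proc = 2.0e9      # FLOP/s per processor
    bus_bandwidth = 1.6e9            # bytes/s, shared by all processors
    intensity = 2.0                  # FLOP per byte for the imaging kernel

    for n_procs in (1, 2, 4, 8):
        compute_bound = n_procs * peak_flops_per_proc
        memory_bound = bus_bandwidth * intensity   # does not grow with n_procs
        achievable = min(compute_bound, memory_bound)
        print(n_procs, achievable)   # flat once the shared bus saturates
    ```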

  2. Keeping on Track: Performance Profiles of Low Performers in Academic Educational Tracks

    ERIC Educational Resources Information Center

    Reed, Helen C.; van Wesel, Floryt; Ouwehand, Carolijn; Jolles, Jelle

    2015-01-01

    In countries with high differentiation between academic and vocational education, an individual's future prospects are strongly determined by the educational track to which he or she is assigned. This large-scale, cross-sectional study focuses on low-performing students in academic tracks who face being moved to a vocational track. If more is…

  3. Performance of high intensity fed-batch mammalian cell cultures in disposable bioreactor systems.

    PubMed

    Smelko, John Paul; Wiltberger, Kelly Rae; Hickman, Eric Francis; Morris, Beverly Janey; Blackburn, Tobias James; Ryll, Thomas

    2011-01-01

    The adoption of disposable bioreactor technology as an alternative to traditional nondisposable technology is gaining momentum in the biotechnology industry. The ability of current disposable bioreactor systems to sustain high intensity fed-batch mammalian cell culture processes needs to be explored. In this study, an assessment was performed comparing single-use bioreactor (SUB) systems of 50-, 250-, and 1,000-L operating scales with traditional stainless steel (SS) and glass vessels using four distinct mammalian cell culture processes. This comparison focuses on expansion and production stage performance. The SUB performance was evaluated based on three main areas: operability, process scalability, and process performance. The process performance and operability aspects were assessed over time, and product quality performance was compared at the day of harvest. Expansion stage results showed that disposable bioreactors mirror traditional bioreactors in terms of cellular growth and metabolism. Set-up and disposal times were dramatically reduced using the SUB systems when compared with traditional systems. Production stage runs for both Chinese hamster ovary and NS0 cell lines in the SUB system were able to model SS bioreactor runs at 100-, 200-, 2,000-, and 15,000-L scales. A single 1,000-L SUB run applying a high intensity fed-batch process was able to generate 7.5 kg of antibody with comparable product quality. Copyright © 2011 American Institute of Chemical Engineers (AIChE).

  4. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing.

    PubMed

    Midekisa, Alemayehu; Holl, Felix; Savory, David J; Andrade-Pacheco, Ricardo; Gething, Peter W; Bennett, Adam; Sturrock, Hugh J W

    2017-01-01

    Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth's land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limits our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and the Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources.
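
    For readers unfamiliar with the platform, the kind of cloud-side compositing workflow described here can be sketched with the Earth Engine Python client. The collection ID, date range, and region below are illustrative assumptions, not the authors' exact recipe:

    ```python
    # Minimal Earth Engine sketch: filter a Landsat collection and build a
    # median composite. All heavy pixel-level work runs on Google's servers.
    import ee

    ee.Initialize()

    region = ee.Geometry.Rectangle([-18.0, -35.0, 52.0, 38.0])  # rough Africa bbox
    landsat = (ee.ImageCollection('LANDSAT/LT05/C02/T1_L2')     # illustrative ID
               .filterBounds(region)
               .filterDate('2000-01-01', '2002-12-31'))

    composite = landsat.median().clip(region)
    print(composite.bandNames().getInfo())
    ```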

  5. Mapping land cover change over continental Africa using Landsat and Google Earth Engine cloud computing

    PubMed Central

    Holl, Felix; Savory, David J.; Andrade-Pacheco, Ricardo; Gething, Peter W.; Bennett, Adam; Sturrock, Hugh J. W.

    2017-01-01

    Quantifying and monitoring the spatial and temporal dynamics of the global land cover is critical for better understanding many of the Earth’s land surface processes. However, the lack of regularly updated, continental-scale, and high spatial resolution (30 m) land cover data limits our ability to better understand the spatial extent and the temporal dynamics of land surface changes. Despite the free availability of high spatial resolution Landsat satellite data, continental-scale land cover mapping using high resolution Landsat satellite data was not feasible until now due to the need for high-performance computing to store, process, and analyze this large volume of high resolution satellite data. In this study, we present an approach to quantify continental land cover and impervious surface changes over a long period of time (15 years) using high resolution Landsat satellite observations and the Google Earth Engine cloud computing platform. The approach applied here to overcome the computational challenges of handling big earth observation data by using cloud computing can help scientists and practitioners who lack high-performance computational resources. PMID:28953943

  6. A multi-scale model for geared transmission aero-thermodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Sean M.

    A multi-scale, multi-physics computational tool for the simulation of high-performance gearbox aero-thermodynamics was developed and applied to equilibrium and pathological loss-of-lubrication performance simulation. The physical processes at play in these systems include multiphase compressible flow of the air and lubricant within the gearbox, meshing kinematics and tribology, as well as heat transfer by conduction, and free and forced convection. These physics are coupled across their representative space and time scales in the computational framework developed in this dissertation. These scales span eight orders of magnitude, from the thermal response of the full gearbox O(10^0 m; 10^2 s), through effects at the tooth passage time scale O(10^-2 m; 10^-4 s), down to tribological effects on the meshing gear teeth O(10^-6 m; 10^-6 s). Direct numerical simulation of these coupled physics and scales is intractable. Accordingly, a scale-segregated simulation strategy was developed by partitioning and treating the contributing physical mechanisms as sub-problems, each with associated space and time scales and appropriate coupling mechanisms. These are: (1) the long time scale thermal response of the system, (2) the multiphase (air, droplets, and film) aerodynamic flow and convective heat transfer within the gearbox, (3) the high-frequency, time-periodic thermal effects of gear tooth heating while in mesh and subsequent cooling through the rest of rotation, and (4) meshing effects including tribology and contact mechanics. The overarching goal of this dissertation was to develop software and analysis procedures for gearbox loss-of-lubrication performance. To accommodate these four physical effects and their coupling, each is treated in the CFD code as a sub-problem, and these physics modules are coupled algorithmically. Specifically, the high-frequency conduction analysis derives its local heat transfer coefficient and near-wall air temperature boundary conditions from a quasi-steady cyclic-symmetric simulation of the internal flow. This high-frequency conduction solution is coupled directly with a model for the meshing friction, developed by a collaborator, which was adapted for use in a finite-volume CFD code. The local surface heat flux on solid surfaces is calculated by time-averaging the heat flux in the high-frequency analysis. This serves as a fixed-flux boundary condition in the long time scale conduction module. The temperature distribution from this long time scale heat transfer calculation serves as a boundary condition for the internal convection simulation, and as the initial condition for the high-frequency heat transfer module. Using this multi-scale model, simulations were performed for equilibrium and loss-of-lubrication operation of the NASA Glenn Research Center test stand. Results were compared with experimental measurements. In addition to the multi-scale model itself, several other specific contributions were made. Eulerian models for droplets and wall-films were developed and implemented in the CFD code. A novel approach to retaining liquid film on the solid surfaces, and strategies for its mass exchange with droplets, were developed and verified. Models for interfacial transfer between droplets and wall-film were implemented, and include the effects of droplet deposition, splashing, bouncing, as well as film breakup. These models were validated against airfoil data.
To mitigate the observed slow convergence of CFD simulations of the enclosed aerodynamic flows within gearboxes, Fourier stability analysis was applied to the SIMPLE-C fractional-step algorithm. From this, recommendations to accelerate the convergence rate through enhanced pressure-velocity coupling were made, and these were shown to be effective. A fast-running finite-volume reduced-order model of the gearbox aero-thermodynamics was developed and coupled with the tribology model to investigate the sensitivity of loss-of-lubrication predictions to various model and physical parameters. This sensitivity study was instrumental in guiding efforts toward improving the accuracy of the multi-scale model without undue increase in computational cost. In addition, the reduced-order model is now used extensively by a collaborator in tribology model development and testing. Experimental measurements of high-speed gear windage in partially and fully-shrouded configurations were performed to supplement the paucity of available validation data. This measurement program provided windage loss data for a gear of design-relevant size and operating speed, as well as guidance for increasing the accuracy of future measurements.

  7. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component of the monitoring architecture that reduces the volume of event traffic flow in the system, and thereby the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
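
    The core filtering idea, forwarding only events that match registered subscriptions, can be sketched in a few lines. This toy version (all names hypothetical) is far simpler than the architecture described above:

    ```python
    # Subscription-based event filtering: consumers register predicates, and
    # only matching events are forwarded, cutting monitoring traffic.
    from collections import defaultdict

    class EventFilter:
        def __init__(self):
            self.subscriptions = defaultdict(list)  # type -> [(predicate, sink)]

        def subscribe(self, event_type, predicate, sink):
            self.subscriptions[event_type].append((predicate, sink))

        def publish(self, event):
            # Events nobody subscribed to are dropped at the source.
            for predicate, sink in self.subscriptions.get(event['type'], []):
                if predicate(event):
                    sink(event)

    f = EventFilter()
    f.subscribe('latency', lambda e: e['ms'] > 100, lambda e: print('ALERT', e))
    f.publish({'type': 'latency', 'ms': 250})   # forwarded
    f.publish({'type': 'latency', 'ms': 5})     # filtered out
    ```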

  8. Converting positive and negative symptom scores between PANSS and SAPS/SANS.

    PubMed

    van Erp, Theo G M; Preda, Adrian; Nguyen, Dana; Faziola, Lawrence; Turner, Jessica; Bustillo, Juan; Belger, Aysenil; Lim, Kelvin O; McEwen, Sarah; Voyvodic, James; Mathalon, Daniel H; Ford, Judith; Potkin, Steven G; Fbirn

    2014-01-01

    The Scale for the Assessment of Positive Symptoms (SAPS), the Scale for the Assessment of Negative Symptoms (SANS), and the Positive and Negative Syndrome Scale for Schizophrenia (PANSS) are the most widely used schizophrenia symptom rating scales, but despite their co-existence for 25 years, no easily usable between-scale conversion mechanism exists. The aim of this study was to provide equations for between-scale symptom rating conversions. Two-hundred-and-five schizophrenia patients [mean age±SD=39.5±11.6, 156 males] were assessed with the SANS, SAPS, and PANSS. Pearson's correlations between symptom scores from each of the scales were computed. Linear regression analyses, on data from 176 randomly selected patients, were performed to derive equations for converting ratings between the scales. Intraclass correlations, on data from the remaining 29 patients not part of the regression analyses, were performed to determine rating conversion accuracy. Between-scale positive and negative symptom ratings were highly correlated. Intraclass correlations between the original positive and negative symptom ratings and those obtained via conversion of alternative ratings using the conversion equations were moderate to high (ICCs=0.65 to 0.91). Regression-based equations may be useful for conversion between schizophrenia symptom severity as measured by the SANS/SAPS and PANSS, though additional validation is warranted. This study's conversion equations, implemented at http://converteasy.org, may aid in the comparison of medication efficacy studies, in meta- and mega-analyses examining symptoms as moderator variables, and in retrospective combination of symptom data in multi-center data sharing projects that need to pool symptom rating data when such data are obtained using different scales. Copyright © 2013 Elsevier B.V. All rights reserved.
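
    The regression-based conversion procedure is straightforward to reproduce in outline. A sketch with synthetic data standing in for the 176 training patients:

    ```python
    # Derive a between-scale conversion equation by ordinary least squares,
    # then apply it to a new rating. Data here are synthetic and illustrative.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    sans_total = rng.uniform(0, 100, size=176)                     # synthetic SANS
    panss_neg = 7 + 0.3 * sans_total + rng.normal(0, 4, size=176)  # correlated scale

    model = LinearRegression().fit(sans_total.reshape(-1, 1), panss_neg)
    print(f'PANSS_neg ~= {model.intercept_:.2f} + {model.coef_[0]:.2f} * SANS_total')

    # Convert a held-out rating with the fitted equation.
    print(model.predict(np.array([[42.0]])))
    ```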

  9. Tensor scale: An analytic approach with efficient computation and applications

    PubMed Central

    Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura

    2015-01-01

    Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach, and a precise analytic definition was missing. Moreover, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. An efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented, and the accuracy of the results is experimentally examined. A matrix representation of tensor scale is also derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented, and the performance of their results is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms. Also, the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148

  10. Reliability Considerations for Ultra- Low Power Space Applications

    NASA Technical Reports Server (NTRS)

    White, Mark; Johnston, Allan

    2012-01-01

    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region and ULP devices are sought after. Technology trends, ULP microelectronics, scaling and performance tradeoffs, reliability considerations, and spacecraft environments will be presented from a ULP perspective for space applications.

  11. Dimension scaling effects on the yield sensitivity of HEMT digital circuits

    NASA Technical Reports Server (NTRS)

    Sarker, Jogendra C.; Purviance, John E.

    1992-01-01

    In previous work, using a graphical tool (yield factor histograms), we studied the yield sensitivity of High Electron Mobility Transistors (HEMT) and of HEMT circuit performance under variation of process parameters. This work studies the scaling effects of process parameters on the yield sensitivity of HEMT digital circuits. Results from two HEMT circuits are presented.

  12. The predictive power of physical function assessed by questionnaire and physical performance measures for subsequent disability.

    PubMed

    Hoshi, Masayuki; Hozawa, Atsushi; Kuriyama, Shinichi; Nakaya, Naoki; Ohmori-Matsuda, Kaori; Sone, Toshimasa; Kakizaki, Masako; Niu, Kaijun; Fujita, Kazuki; Ueki, Shouzoh; Haga, Hiroshi; Nagatomi, Ryoichi; Tsuji, Ichiro

    2012-08-01

    To compare the predictive power of physical function assessed by questionnaire and by physical performance measures for subsequent disability in community-dwelling elderly persons. Prospective cohort study. Participants were 813 community-dwelling elderly Japanese aged 70 years and older, included in the Tsurugaya Project, who were not disabled at baseline in 2003. Physical function was assessed by the Motor Fitness Scale questionnaire. Physical performance measures consisted of maximum walking velocity, the timed up and go test (TUG), leg extension power, and the functional reach test. The area under the curve (AUC) of the receiver operating characteristic curve for disability was used to compare screening accuracy between the Motor Fitness Scale and the physical performance measures. Incident disability, defined as certification for long-term care insurance, was used as the endpoint. We observed 135 cases of incident disability during follow-up. The third or fourth quartile for each measure was associated with a significantly increased risk of disability in comparison with the highest quartile. The AUC was 0.70, 0.72, 0.70, 0.68, 0.69 and 0.74 for the Motor Fitness Scale, maximum walking velocity, TUG, leg extension power, the functional reach test, and the total performance score, respectively. The predictive power of physical function assessed by the Motor Fitness Scale was equivalent to that assessed by physical performance measures. Since the Motor Fitness Scale can evaluate physical function more safely and simply than physical performance tests, it would be a practical tool for screening persons at high risk of disability.
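
    The AUC comparison used here for screening accuracy is a standard computation. A sketch with synthetic data (the real endpoint was incident disability over follow-up):

    ```python
    # Compare screening accuracy of a predictor against a binary endpoint via
    # the area under the ROC curve. Data are synthetic and illustrative.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    disabled = rng.integers(0, 2, size=813)              # 0/1 incident disability
    risk_score = rng.normal(0, 1, 813) + 0.8 * disabled  # higher score = higher risk

    # roc_auc_score expects higher predictor values to indicate higher risk.
    print(roc_auc_score(disabled, risk_score))           # ~0.7, chance is 0.5
    ```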

  13. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever-larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  14. On mechanics and material length scales of failure in heterogeneous interfaces using a finite strain high performance solver

    NASA Astrophysics Data System (ADS)

    Mosby, Matthew; Matouš, Karel

    2015-12-01

    Three-dimensional simulations capable of resolving the large range of spatial scales, from the failure-zone thickness up to the size of the representative unit cell, in damage mechanics problems of particle-reinforced adhesives are presented. We show that resolving this wide range of scales in complex three-dimensional heterogeneous morphologies is essential in order to capture fracture characteristics, such as strength, fracture toughness and the shape of the softening profile. Moreover, we show that computations that resolve essential physical length scales capture, for example, the particle size-effect in fracture toughness. In the vein of image-based computational materials science, we construct statistically optimal unit cells containing hundreds to thousands of particles. We show that these statistically representative unit cells are capable of capturing the first- and second-order probability functions of a given data-source with better accuracy than traditional inclusion packing techniques. In order to accomplish these large computations, we use a parallel multiscale cohesive formulation and extend it to finite strains including damage mechanics. The high-performance parallel computational framework is executed on up to 1024 processing cores. A mesh convergence and a representative unit cell study are performed. Quantifying the complex damage patterns in simulations consisting of tens of millions of computational cells and millions of highly nonlinear equations requires data-mining the parallel simulations, and we propose two damage metrics to quantify the damage patterns. A detailed study of volume fraction and filler size on the macroscopic traction-separation response of heterogeneous adhesives is presented.

  15. Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application

    NASA Technical Reports Server (NTRS)

    Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom; hide

    2013-01-01

    Exascale computing systems are soon to emerge, which will pose great challenges, notably the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.

  16. A Meta-Analysis of Differences in IQ Profiles between Individuals with Asperger's Disorder and High-Functioning Autism

    ERIC Educational Resources Information Center

    Chiang, Hsu-Min; Tsai, Luke Y.; Cheung, Ying Kuen; Brown, Alice; Li, Huacheng

    2014-01-01

    A meta-analysis was performed to examine differences in IQ profiles between individuals with Asperger's disorder (AspD) and high-functioning autism (HFA). Fifty-two studies were included for this study. The results showed that (a) individuals with AspD had significantly higher full-scale IQ, verbal IQ (VIQ), and performance IQ (PIQ) than did…

  17. Skin and scales of teleost fish: Simple structure but high performance and multiple functions

    NASA Astrophysics Data System (ADS)

    Vernerey, Franck J.; Barthelat, Francois

    2014-08-01

    Natural and man-made structural materials perform similar functions, such as structural support or protection. Therefore they rely on the same types of properties: strength, robustness, and light weight. Nature can therefore provide a significant source of inspiration for new and alternative engineering designs. We report here some results regarding a very common, yet largely unknown, type of biological material: fish skin. Within a thin, flexible and lightweight layer, fish skins display a variety of strain-stiffening and stabilizing mechanisms which promote multiple functions such as protection, robustness and swimming efficiency. We particularly discuss four important features pertaining to scaled skins: (a) a strongly elastic tensile behavior that is independent of the presence of rigid scales, (b) a compressive response that prevents the buckling and wrinkling instabilities usually predominant for thin membranes, (c) a bending response that displays nonlinear stiffening mechanisms arising from geometric constraints between neighboring scales and (d) a robust structure that preserves the above characteristics upon the loss or damage of structural elements. These important properties make fish skin an attractive model for the development of very thin and flexible armors and protective layers, especially when combined with the high penetration resistance of individual scales. Scaled structures inspired by fish skin could find applications in ultra-light and flexible armor systems, flexible electronics or the design of smart and adaptive morphing structures for aerospace vehicles.

  18. Oxidation behavior and area specific resistance of La, Cu and B alloyed Fe-22Cr ferritic steels for solid oxide fuel cell interconnects

    NASA Astrophysics Data System (ADS)

    Swaminathan, Srinivasan; Ko, Yoon Seok; Lee, Young-Su; Kim, Dong-Ik

    2017-11-01

    Two Fe-22 wt% Cr ferritic stainless steels containing varying concentrations of La (0.14 or 0.52 wt%), Cu (0.17 or 1.74 wt%) and B (48 or 109 ppm) are investigated with respect to oxidation behavior and the high temperature area specific resistance (ASR) of the surface oxide scales. To determine the oxidation resistance of the developed steels, continuous isothermal oxidation is carried out at 800 °C in air for 2000 h, and the thermally grown oxide scale is characterized using dynamic SIMS, SEM/EDX, XRD and GI-XRD techniques. To assess their electrical performance, the ASR measurement by the four-point probe method is conducted at 800 °C in air for 400 h. In the higher La content steel, La-oxides at the scale/alloy interface promote oxygen transport, resulting in sub-surface oxidation of Mn, Cr, Ti and Al. Moreover, the inward growth of oxides contributes to an increase in Fe-Cr alloy protrusions within the scale, which reduces the ASR. In contrast, sub-surface oxidation is reduced in the high Cu-alloyed steel by Cu segregated at the scale/alloy interface. Thus, the addition of Cu is effective for oxidation resistance and also for better electrical performance. However, no obvious impact of B on the scale sequence and/or ASR is observed.

  19. Leisure-time physical activity and psychological well-being in university students.

    PubMed

    Molina-García, J; Castillo, I; Queralt, A

    2011-10-01

    An analysis of the psychological well-being (self-esteem and subjective vitality) of 639 Spanish university students was performed, while accounting for the amount of leisure-time physical activity. The Spanish versions of the Rosenberg Self-Esteem Scale and the Subjective Vitality Scale were employed. Participants were divided into four groups (Low, Moderate, High, and Very high) depending on estimated energy expenditure in leisure-time physical activity. Men and women with higher physical activity reported higher mean subjective vitality; however, differences in self-esteem were observed only in men, specifically between the Very high and the other physical activity groups.

  20. Influence of Alumina Reaction Tube Impurities on the Oxidation of Chemically-Vapor-Deposited Silicon Carbide

    NASA Technical Reports Server (NTRS)

    Opila, Elizabeth

    1995-01-01

    Pure coupons of chemically vapor deposited (CVD) SiC were oxidized for 100 h in dry flowing oxygen at 1300 °C. The oxidation kinetics were monitored using thermogravimetry (TGA). The experiments were first performed using high-purity alumina reaction tubes. The experiments were then repeated using fused quartz reaction tubes. Differences in oxidation kinetics, scale composition, and scale morphology were observed. These differences were attributed to impurities in the alumina tubes. Investigators interested in high-temperature oxidation of silica formers should be aware that high-purity alumina can have significant effects on experimental results.

  1. Panoscopic approach for high-performance Te-doped skutterudite

    DOE PAGES

    Liang, Tao; Su, Xianli; Yan, Yonggao; ...

    2017-02-24

    One-step plasma-activated sintering (OS-PAS) fabrication of single-phase high-performance CoSb3-based skutterudite thermoelectric material with a hierarchical structure on a time scale of a few minutes is first reported here. The formation mechanism of the CoSb3 phase and the effects of the current and pressure fields on the phase transformation and microstructure evolution are studied in the one-step PAS process. The application of the panoscopic approach to this system and its effect on the transport properties are investigated. The results show that the hierarchical structure forms during the formation of the skutterudite phase under the effects of both current and sintering pressure. The samples fabricated by the OS-PAS technique have well-defined hierarchical structures, which scatter phonons more intensely over a broader range of frequencies and significantly reduce the lattice thermal conductivity. High-performance bulk Te-doped skutterudite with a maximum ZT of 1.1 at 820 K for the composition CoSb2.875Te0.125 was obtained. Such high ZT values rival those obtained from single-filled skutterudites. As a result, this newly developed OS-PAS technique enhances the thermoelectric performance, dramatically shortens the synthesis period and provides a facile method for obtaining hierarchical thermoelectric materials on a large scale.
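
    The ZT value quoted above is the standard thermoelectric figure of merit, ZT = S^2·σ·T/κ. A quick sanity check with illustrative property values of the right order of magnitude for a doped skutterudite (not measurements from the paper):

    ```python
    # Thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa.
    def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
        return seebeck_v_per_k**2 * sigma_s_per_m * temp_k / kappa_w_per_mk

    # Hypothetical values: S = 220 uV/K, sigma = 1e5 S/m, kappa = 3.5 W/m-K.
    print(figure_of_merit(220e-6, 1.0e5, 3.5, temp_k=820))  # ~1.1
    ```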

  2. Low-voltage back-gated atmospheric pressure chemical vapor deposition based graphene-striped channel transistor with high-κ dielectric showing room-temperature mobility > 11,000 cm(2)/V·s.

    PubMed

    Smith, Casey; Qaisi, Ramy; Liu, Zhihong; Yu, Qingkai; Hussain, Muhammad Mustafa

    2013-07-23

    Utilization of graphene may help realize innovative low-power replacements for III-V materials based high electron mobility transistors while extending operational frequencies closer to the THz regime for superior wireless communications, imaging, and other novel applications. Device architectures explored to date suffer a fundamental performance roadblock due to the lack of compatible deposition techniques for the nanometer-scale dielectrics required to efficiently modulate graphene transconductance (gm) while maintaining a low gate capacitance-voltage product (CgsVgs). Here we show integration of a scaled (10 nm) high-κ gate dielectric, aluminum oxide (Al2O3), with an atmospheric pressure chemical vapor deposition (APCVD)-derived graphene channel composed of multiple 0.25 μm stripes to repeatedly realize room-temperature mobility of 11,000 cm(2)/V·s or higher. This high performance is attributed to the APCVD graphene growth quality, excellent interfacial properties of the gate dielectric, conductivity enhancement in the graphene stripes due to the low tox/Wgraphene ratio, and scaled high-κ dielectric gate modulation of carrier density allowing full actuation of the device with only ±1 V applied bias. The superior drive current and conductance at Vdd = 1 V compared to other top-gated devices requiring undesirable seed (such as aluminum and polyvinyl alcohol)-assisted dielectric deposition, bottom-gate devices requiring excessive gate voltage for actuation, or monolithic (nonstriped) channels suggest that this facile transistor structure provides critical insight toward future device design and process integration to maximize CVD-based graphene transistor performance.

  3. Evaluation of modern cotton harvest systems on irrigated cotton: harvester performance

    USDA-ARS?s Scientific Manuscript database

    Picker and stripper harvest systems were evaluated on production-scale irrigated cotton on the High Plains of Texas over three harvest seasons. Observations on harvester performance, including time-in-motion, harvest loss, seed cotton composition, and turnout, were conducted at seven locations with...

  4. Vapor and healing treatment for CH3NH3PbI3-xClx films toward large-area perovskite solar cells

    NASA Astrophysics Data System (ADS)

    Gouda, Laxman; Gottesman, Ronen; Tirosh, Shay; Haltzi, Eynav; Hu, Jiangang; Ginsburg, Adam; Keller, David A.; Bouhadana, Yaniv; Zaban, Arie

    2016-03-01

    Hybrid methyl-ammonium lead trihalide perovskites are promising low-cost materials for use in solar cells and other optoelectronic applications. With a certified photovoltaic conversion efficiency record of 20.1%, scale-up for commercial purposes is already underway. However, preparation of large-area perovskite films remains a challenge, and films of perovskites on large electrodes suffer from non-uniform performance. Thus, production and characterization of the lateral uniformity of large-area films is a crucial step towards scale-up of devices. In this paper, we present a reproducible method for improving the lateral uniformity and performance of large-area perovskite solar cells (32 cm2). The method is based on methyl-ammonium iodide (MAI) vapor treatment as a new step in the sequential deposition of perovskite films. Following the MAI vapor treatment, we used high throughput techniques to map the photovoltaic performance throughout the large-area device. The lateral uniformity and performance of all photovoltaic parameters (Voc, Jsc, Fill Factor, photo-conversion efficiency) increased, with an overall improvement in photo-conversion efficiency of ~100% following a vapor treatment at 140 °C. Based on XRD and photoluminescence measurements, we propose that the MAI treatment promotes a ``healing effect'' in the perovskite film which increases the lateral uniformity across the large-area solar cell. Thus, the straightforward MAI vapor treatment is highly beneficial for large-scale commercialization of perovskite solar cells, regardless of the specific deposition method. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr08658b

  5. Trajectories of Symptom Clusters, Performance Status, and Quality of Life During Concurrent Chemoradiotherapy in Patients With High-Grade Brain Cancers.

    PubMed

    Kim, Sang-Hee; Byun, Youngsoon

    Symptom clusters must be identified in patients with high-grade brain cancers for effective symptom management during cancer-related therapy. The aims of this study were to identify symptom clusters in patients with high-grade brain cancers and to determine the relationship of each cluster with the performance status and quality of life (QOL) during concurrent chemoradiotherapy (CCRT). Symptoms were assessed using the Memorial Symptom Assessment Scale, and the performance status was evaluated using the Karnofsky Performance Scale. Quality of life was assessed using the Functional Assessment of Cancer Therapy-General. This prospective longitudinal survey was conducted before CCRT and at 2 to 3 weeks and 4 to 6 weeks after the initiation of CCRT. A total of 51 patients with newly diagnosed primary malignant brain cancer were included. Six symptom clusters were identified, and 2 symptom clusters were present at each time point (ie, "negative emotion" and "neurocognitive" clusters before CCRT, "negative emotion and decreased vitality" and "gastrointestinal and decreased sensory" clusters at 2-3 weeks, and "body image and decreased vitality" and "gastrointestinal" clusters at 4-6 weeks). The symptom clusters at each time point demonstrated a significant relationship with the performance status or QOL. Differences were observed in symptom clusters in patients with high-grade brain cancers during CCRT. In addition, the symptom clusters were correlated with the performance status and QOL of patients, and these effects could change during CCRT. The results of this study will provide suggestions for interventions to treat or prevent symptom clusters in patients with high-grade brain cancer during CCRT.

  6. Calibrating EASY-Care independence scale to improve accuracy

    PubMed Central

    Jotheeswaran, A. T.; Dias, Amit; Philp, Ian; Patel, Vikram; Prince, Martin

    2016-01-01

    Background: there is currently limited support for the reliability and validity of the EASY-Care independence scale, with little work carried out in low- or middle-income countries. Therefore, we assessed the internal construct validity and hierarchical and classical scaling properties among frail dependent older people in the community. Objective: we assessed the internal construct validity and hierarchical and classical scaling properties among frail dependent older people in the community. Methods: three primary care physicians administered the EASY-Care comprehensive geriatric assessment for 150 frail and/or dependent older people in the primary care setting. A Mokken model was applied to investigate the hierarchical scaling properties of the EASY-Care independence scale, and the internal consistency (Cronbach's alpha) of the scale was also examined. Results: we found that the EASY-Care independence scale is highly internally consistent and is a strong hierarchical scale, hence providing strong evidence for unidimensionality. However, two items in the scale (unable to use telephone and manage finances) had much lower item Loevinger H coefficients than the others. Exclusion of these two items improved the overall internal consistency of the scale. Conclusions: the strong performance of the EASY-Care independence scale among community-dwelling frail older people is encouraging. This study confirms that the EASY-Care independence scale is highly internally consistent and a strong hierarchical scale. PMID:27496925
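
    Cronbach's alpha, the internal-consistency statistic used here, follows directly from its definition. A sketch with synthetic item scores (not the study's data):

    ```python
    # Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
    import numpy as np

    def cronbach_alpha(items):  # items: (n_respondents, n_items)
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(2)
    trait = rng.normal(0, 1, (150, 1))                # shared trait drives consistency
    scores = trait + rng.normal(0, 0.5, (150, 10))    # 150 respondents, 10 items
    print(cronbach_alpha(scores))                     # high alpha, ~0.97
    ```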

  7. Relativistic thermal electron scale instabilities in sheared flow plasma

    NASA Astrophysics Data System (ADS)

    Miller, Evan D.; Rogers, Barrett N.

    2016-04-01

    The linear dispersion relation obeyed by finite-temperature, non-magnetized, relativistic two-fluid plasmas is presented, in the special case of a discontinuous bulk velocity profile and parallel wave vectors. It is found that such flows become universally unstable at the collisionless electron skin-depth scale. Further analyses are performed in the limits of either free-streaming ions or ultra-hot plasmas. In these limits, the system is highly unstable in the parameter regimes associated with either the electron-scale Kelvin-Helmholtz instability (ESKHI) or the relativistic electron-scale sheared flow instability (RESI) recently highlighted by Gruzinov. Coupling between these modes provides further instability throughout the remaining parameter space, provided both shear flow and temperature are finite. An explicit parameter space bound on the highly unstable region is found.

  8. Automated AFM for small-scale and large-scale surface profiling in CMP applications

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2018-03-01

    As feature sizes shrink in the foundries, the need for inline high resolution surface profiling with versatile capabilities is increasing. One important area of this need is the chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) using a decoupled scanners design. The system is capable of small-scale profiling using the XY scanner and large-scale profiling using a sliding stage. The decoupled scanners design enables enhanced vision, which helps minimize the positioning error for locations of interest in the case of highly polished dies. Non-contact mode imaging is another feature of interest in this system, used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of measurements performed using the atomic force profiler are demonstrated.

  9. Landslide model performance in a high resolution small-scale landscape

    NASA Astrophysics Data System (ADS)

    De Sy, V.; Schoorl, J. M.; Keesstra, S. D.; Jones, K. E.; Claessens, L.

    2013-05-01

    The frequency and severity of shallow landslides in New Zealand threaten life and property, both on- and off-site. The physically-based shallow landslide model LAPSUS-LS is tested for its performance in simulating shallow landslide locations induced by a high intensity rain event in a small-scale landscape. Furthermore, the effect of high resolution digital elevation models on the performance was tested. The performance of the model was optimised by calibrating different parameter values. A satisfactory result was achieved with a high resolution (1 m) DEM. Landslides, however, were generally predicted lower on the slope than the mapped erosion scars. This discrepancy could be due to i) inaccuracies in the DEM or in other model input data such as soil strength properties; ii) processes relevant to this environmental context that are not included in the model; or iii) the limited validity of the infinite length assumption in the infinite slope stability model embedded in LAPSUS-LS. The trade-off between correct prediction of landslides and of stable cells becomes increasingly worse at coarser resolutions, and model performance decreases mainly due to altered slope characteristics. The optimal parameter combinations differ per resolution. In this environmental context the 1 m resolution topography resembles actual topography most closely, and landslide locations are better distinguished from stable areas than at coarser resolutions. Further gains in model performance could be achieved by adding landslide process complexities and parameter heterogeneity of the catchment.
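
    The infinite slope stability model referred to in point iii) is standard; a sketch of the factor-of-safety criterion in one common form, with illustrative parameter values (not calibrated LAPSUS-LS values):

    ```python
    # Infinite-slope factor of safety:
    # FS = [c' + (gamma*z - gamma_w*h) * cos^2(beta) * tan(phi')] /
    #      [gamma*z * sin(beta) * cos(beta)];  FS < 1 indicates failure.
    import math

    def factor_of_safety(cohesion_pa, phi_deg, slope_deg, soil_depth_m,
                         wet_frac, gamma_soil=18e3, gamma_water=9.81e3):
        b, phi = math.radians(slope_deg), math.radians(phi_deg)
        z, h = soil_depth_m, wet_frac * soil_depth_m
        resisting = cohesion_pa + ((gamma_soil * z - gamma_water * h)
                                   * math.cos(b)**2 * math.tan(phi))
        driving = gamma_soil * z * math.sin(b) * math.cos(b)
        return resisting / driving

    print(factor_of_safety(2e3, 32, 35, 1.5, wet_frac=0.9))  # ~0.6: unstable
    ```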

  10. Effect of high-frequency repetitive transcranial magnetic stimulation on major depressive disorder in patients with Parkinson's disease.

    PubMed

    Shin, Hae-Won; Youn, Young C; Chung, Sun J; Sohn, Young H

    2016-07-01

    Major depressive disorder (MDD) occurs in a small proportion of patients with Parkinson's disease (PD) and reduces their quality of life. We performed a randomized sham-controlled study to evaluate the effect of high-frequency (HF) repetitive transcranial magnetic stimulation (rTMS) of the left dorsolateral prefrontal cortex (DLPFC) on MDD in patients with PD. Ten patients were assigned to a real-rTMS group and eight patients to a sham-rTMS group. Evaluations were performed at baseline and at 2 and 6 weeks after rTMS treatment. All participants underwent examinations with depression rating scales, including the Hamilton Rating Scale (HRS), the Montgomery-Asberg Depression Rating Scale (MADRS), and the Beck Depression Inventory (BDI), and with the motor part of the Unified Parkinson Disease Rating Scale (UPDRS-III). The real-rTMS group had improved scores on the HRS and the MADRS after 10 sessions, and these beneficial effects persisted for 6 weeks after the initial session. The BDI score did not change immediately after the sessions. The sham-rTMS group had no significant changes in any of the depression rating scales. The UPDRS-III did not change in either group. HF-rTMS of the left DLPFC is an effective treatment for MDD in patients with PD.

  11. Implementing High-Performance Geometric Multigrid Solver with Naturally Grained Messages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; Zheng, Yili

    2015-10-26

    Structured-grid linear solvers often require manual packing and unpacking of communication data to achieve high performance. Orchestrating this process efficiently is challenging, labor-intensive, and potentially error-prone. In this paper, we explore an alternative approach that communicates the data with naturally grained message sizes without manual packing and unpacking. This approach is the distributed analogue of shared-memory programming, taking advantage of the global address space in PGAS languages to provide substantial programming ease. However, its performance may suffer from the large number of small messages. We investigate the runtime support required in the UPC++ library for this naturally grained version to close the performance gap between the two approaches and attain comparable performance at scale, using the High-Performance Geometric Multigrid (HPGMG-FV) benchmark as a driver.

  12. Impact of Data Placement on Resilience in Large-Scale Object Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carns, Philip; Harms, Kevin; Jenkins, John

    Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.

  13. A study on the required performance of a 2G HTS wire for HTS wind power generators

    NASA Astrophysics Data System (ADS)

    Sung, Hae-Jin; Park, Minwon; Go, Byeong-Soo; Yu, In-Keun

    2016-05-01

    YBCO and other REBCO coated conductor (2G) materials have been developed for their superior performance at high magnetic field and temperature. Power system applications based on high temperature superconducting (HTS) 2G wire technology are attracting attention, including large-scale wind power generators. In particular, to solve problems associated with the foundations and mechanical structure of offshore wind turbines, due to the large diameter and heavy weight of the generator, an HTS generator is suggested as one of the key technologies. Many researchers have tried to develop feasible large-scale HTS wind power generator technologies. In this paper, a study on the required performance of a 2G HTS wire for large-scale wind power generators is discussed. A 12 MW class large-scale wind turbine and an HTS generator are designed using 2G HTS wire. The total length of the 2G HTS wire for the 12 MW HTS generator is estimated, and the essential prerequisites of the 2G HTS wire based generator are described. The magnetic field distributions of a pole module are illustrated, and the mechanical stress and strain of the pole module are analysed. Finally, a reasonable price for 2G HTS wire for commercialization of the HTS generator is suggested, reflecting the results of electromagnetic and mechanical analyses of the generator.

  14. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the process of dendritic growth simulation, the computational efficiency and the problem scale have an extremely important influence on the simulation efficiency of a three-dimensional phase-field model. Thus, seeking a high performance calculation method to improve the computational efficiency and to expand the problem scale is of great significance to research on the microstructure of materials. A high performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model in a binary alloy under the condition of multi-physical process coupling. The acceleration effect of different GPU nodes on different calculation scales is explored. On the foundation of the multi-GPU calculation model that has been introduced, two optimization schemes, non-blocking communication optimization and overlap of MPI and GPU computing optimization, are proposed. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model can obviously improve the computational efficiency of the three-dimensional phase-field model, achieving a 13-fold speedup over a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI and GPU computing optimization performs better, with a 1.7-fold speedup over the basic multi-GPU model when 21 GPUs are used.
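
    The "overlap of MPI and GPU computing" optimization follows a familiar pattern: post non-blocking halo exchanges, update the interior while messages are in flight, then finish the boundary cells. A 1-D mpi4py sketch of the pattern (a CPU stand-in for the paper's MPI+CUDA implementation):

    ```python
    # Overlap communication with computation via non-blocking halo exchange.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    up, down = (rank - 1) % size, (rank + 1) % size   # periodic 1-D neighbors

    phi = np.random.rand(1024)                 # local slab of the phase field
    halo_lo, halo_hi = np.empty(1), np.empty(1)

    # Post all transfers first so communication proceeds in the background.
    reqs = [comm.Isend(phi[:1], dest=up),    comm.Isend(phi[-1:], dest=down),
            comm.Irecv(halo_lo, source=up),  comm.Irecv(halo_hi, source=down)]

    interior = 0.5 * (phi[2:] + phi[:-2])      # interior update overlaps comm
    MPI.Request.Waitall(reqs)                  # halos are now valid
    edge_lo = 0.5 * (halo_lo[0] + phi[1])      # finish halo-dependent cells
    edge_hi = 0.5 * (phi[-2] + halo_hi[0])
    ```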

  15. High performance bilateral telerobot control.

    PubMed

    Kline-Schoder, Robert; Finger, William; Hogan, Neville

    2002-01-01

    Telerobotic systems are used when the environment that requires manipulation is not easily accessible to humans, as in space, remote, hazardous, or microscopic applications, or to extend the capabilities of an operator by scaling motions and forces. The Creare control algorithm and software is an enabling technology that makes possible guaranteed stability and high performance for force-feedback telerobots. We have developed the necessary theory, structure, and software design required to implement high performance telerobot systems with time delay. This includes controllers for the master and slave manipulators, the manipulator servo levels, the communication link, and impedance shaping modules. We verified the performance using both bench-top hardware and a commercial microsurgery system.

  16. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased, so has the size of the computing systems required to simulate them. In addition, information exchange among these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware, is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
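
    For context, a minimal mpi4py sketch of one collective spike-exchange step, the simplest of the mechanism families the paper compares; the neuron count and firing probability are assumptions:

    ```python
    # Allgatherv-based spike exchange: every rank learns about every spike.
    # Point-to-point or hybrid schemes (as benchmarked in the paper) reduce
    # traffic by routing spikes only to the ranks that host their targets.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # IDs of local neurons that fired this timestep (1000 neurons/rank assumed).
    fired = np.flatnonzero(np.random.rand(1000) < 0.02).astype('i') + rank * 1000

    counts = np.array(comm.allgather(fired.size))    # spikes per rank
    spikes = np.empty(counts.sum(), dtype='i')
    comm.Allgatherv(fired, (spikes, counts))         # gather all spike IDs
    # Each rank now filters 'spikes' for IDs that target its local synapses.
    ```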

  17. Chip-scale integrated optical interconnects: a key enabler for future high-performance computing

    NASA Astrophysics Data System (ADS)

    Haney, Michael; Nair, Rohit; Gu, Tian

    2012-01-01

    High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm x 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide a seamless interconnect fabric spanning the intra-

  18. HACC: Extreme Scaling and Performance Across Diverse Architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin

    2013-11-01

    Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.
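
    For readers unfamiliar with the scaling terminology, efficiency figures like those quoted above follow from simple definitions; a small sketch with illustrative numbers (not HACC data):

    ```python
    # Parallel-efficiency arithmetic for strong and weak scaling.

    def strong_scaling_efficiency(t_base, p_base, t_p, p):
        """Measured speedup relative to the ideal p/p_base speedup (fixed problem size)."""
        return (t_base * p_base) / (t_p * p)

    def weak_scaling_efficiency(t_base, t_p):
        """Runtime should stay flat as problem size grows with core count."""
        return t_base / t_p

    print(strong_scaling_efficiency(t_base=100.0, p_base=1024, t_p=13.0, p=8192))
    print(weak_scaling_efficiency(t_base=100.0, t_p=111.0))  # ~0.90, cf. the 90% quoted
    ```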

  19. Challenge toward the prediction of typhoon behaviour and downpour

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.

    2013-08-01

    Mechanisms of interaction among phenomena at different scales play important roles in the forecasting of weather and climate. The Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results for the entire globe at the world's highest horizontal resolution of 1.9 km, for regional heavy rain at 1 km horizontal resolution, and for an urban-area simulation at 5 m horizontal/vertical resolution. To gain high performance by exploiting the system capabilities, we use performance evaluation metrics, introduced in previous studies, that incorporate the effects of the data caching mechanism between CPU and memory. With a code optimization guideline based on these metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be key to a reduced time-to-solution.
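
    As a rough illustration of cache/memory-aware performance metrics of this kind, a roofline-style estimate of attainable throughput; the peak and bandwidth figures below are placeholders, not Earth Simulator specifications:

    ```python
    # Attainable FLOP rate is capped by either the compute roof or the
    # memory roof, depending on a kernel's arithmetic intensity.

    def attainable_gflops(intensity_flops_per_byte, peak_gflops, bandwidth_gb_s):
        return min(peak_gflops, intensity_flops_per_byte * bandwidth_gb_s)

    kernel_flops, kernel_bytes = 2.0e9, 1.6e9       # assumed per-step counts
    intensity = kernel_flops / kernel_bytes         # FLOPs per byte moved
    print(attainable_gflops(intensity, peak_gflops=100.0, bandwidth_gb_s=64.0))
    ```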

  20. Atomic Scale Analysis of the Enhanced Electro- and Photo-Catalytic Activity in High-Index Faceted Porous NiO Nanowires

    NASA Astrophysics Data System (ADS)

    Shen, Meng; Han, Ali; Wang, Xijun; Ro, Yun Goo; Kargar, Alireza; Lin, Yue; Guo, Hua; Du, Pingwu; Jiang, Jun; Zhang, Jingyu; Dayeh, Shadi A.; Xiang, Bin

    2015-02-01

    Catalysts play a significant role in clean, renewable hydrogen fuel generation through the water-splitting reaction, because the surfaces of most semiconductors suitable for water splitting perform poorly for hydrogen gas evolution. Catalytic performance strongly depends on the atomic arrangement at the surface, which necessitates correlating the surface structure to the catalytic activity in well-controlled catalyst surfaces. Herein, we report the novel catalytic performance of simply synthesized porous NiO nanowires (NWs) as catalyst/co-catalyst for the hydrogen evolution reaction (HER). The correlation of catalytic activity and atomic/surface structure is investigated by detailed high-resolution transmission electron microscopy (HRTEM), which shows a strong dependence of NiO NW photo- and electrocatalytic HER performance on the density of exposed high-index-facet (HIF) atoms, corroborated by theoretical calculations. Significantly, the optimized porous NiO NWs offer long-term electrocatalytic stability of over one day and 45 times higher photocatalytic hydrogen production compared to commercial NiO nanoparticles. Our results open new perspectives for the development of structurally stable and chemically active semiconductor-based catalysts for cost-effective and efficient hydrogen fuel production at large scale.

  1. A comparative study of all-vanadium and iron-chromium redox flow batteries for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; An, L.; Zhou, X. L.; Wei, L.

    2015-12-01

    The promise of redox flow batteries (RFBs) utilizing soluble redox couples, such as all-vanadium ions as well as iron and chromium ions, is becoming increasingly recognized for large-scale energy storage of renewables such as wind and solar, owing to their unique advantages including scalability, intrinsic safety, and long cycle life. An ongoing question associated with these two RFBs is whether the vanadium redox flow battery (VRFB) or the iron-chromium redox flow battery (ICRFB) is more suitable and competitive for large-scale energy storage. To address this question, a comparative study has been conducted for the two types of battery based on their charge-discharge performance, cycle performance, and capital cost. It is found that: i) the two batteries have similar energy efficiencies at high current densities; ii) the ICRFB exhibits a higher capacity decay rate than the VRFB; and iii) the ICRFB has a much lower capital cost when operated at high power densities or at large capacities.

  2. Teaching Elliptical Excision Skills to Novice Medical Students: A Randomized Controlled Study Comparing Low- and High-Fidelity Bench Models

    PubMed Central

    Denadai, Rafael; Oshiiwa, Marie; Saad-Hossne, Rogério

    2014-01-01

    Background: The search for alternative and effective forms of training simulation is needed due to ethical and medico-legal aspects involved in training surgical skills on living patients, human cadavers and living animals. Aims: To evaluate if the bench model fidelity interferes in the acquisition of elliptical excision skills by novice medical students. Materials and Methods: Forty novice medical students were randomly assigned to 5 practice conditions with instructor-directed elliptical excision skills’ training (n = 8): didactic materials (control); organic bench model (low-fidelity); ethylene-vinyl acetate bench model (low-fidelity); chicken legs’ skin bench model (high-fidelity); or pig foot skin bench model (high-fidelity). Pre- and post-tests were applied. Global rating scale, effect size, and self-perceived confidence based on Likert scale were used to evaluate all elliptical excision performances. Results: The analysis showed that after training, the students practicing on bench models had better performance based on Global rating scale (all P < 0.0001) and felt more confident to perform elliptical excision skills (all P < 0.0001) when compared to the control. There was no significant difference (all P > 0.05) between the groups that trained on bench models. The magnitude of the effect (basic cutaneous surgery skills’ training) was considered large (>0.80) in all measurements. Conclusion: The acquisition of elliptical excision skills after instructor-directed training on low-fidelity bench models was similar to the training on high-fidelity bench models; and there was a more substantial increase in elliptical excision performances of students that trained on all simulators compared to the learning on didactic materials. PMID:24700937

  3. Solution-Processable High-Purity Semiconducting SWCNTs for Large-Area Fabrication of High-Performance Thin-Film Transistors.

    PubMed

    Gu, Jianting; Han, Jie; Liu, Dan; Yu, Xiaoqin; Kang, Lixing; Qiu, Song; Jin, Hehua; Li, Hongbo; Li, Qingwen; Zhang, Jin

    2016-09-01

    For the large-area fabrication of thin-film transistors (TFTs), a new conjugated polymer, poly[9-(1-octylonoyl)-9H-carbazole-2,7-diyl], is developed to harvest ultrahigh-purity semiconducting single-walled carbon nanotubes. Based on combined spectral and nanodevice characterization, the purity is estimated to be up to 99.9%. The high-density, uniform networks formed by a dip-coating process lend themselves to wafer-scale fabrication of high-performance TFTs, and the as-fabricated TFTs exhibit a high degree of uniformity.

  4. High-speed and high-fidelity system and method for collecting network traffic

    DOEpatents

    Weigle, Eric H [Los Alamos, NM

    2010-08-24

    A system is provided for the high-speed and high-fidelity collection of network traffic. The system can collect traffic at gigabit-per-second (Gbps) speeds, scale to terabit-per-second (Tbps) speeds, and support additional functions such as real-time network intrusion detection. The present system uses a dedicated operating system for traffic collection to maximize efficiency, scalability, and performance. A scalable infrastructure and apparatus for the present system is provided by splitting the work performed on one host onto multiple hosts. The present system simultaneously addresses the issues of scalability, performance, cost, and adaptability with respect to network monitoring, collection, and other network tasks. In addition to high-speed and high-fidelity network collection, the present system provides a flexible infrastructure to perform virtually any function at high speeds such as real-time network intrusion detection and wide-area network emulation for research purposes.
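
    To illustrate the "split one host's work onto multiple hosts" idea, a toy sketch that shards packets across collection hosts by flow; the field names and host count are invented for illustration, not taken from the patent:

    ```python
    # Shard by 5-tuple hash so every packet of a flow reaches the same collector.
    import zlib

    def collector_for(packet, n_hosts):
        key = (packet["src"], packet["dst"],
               packet["sport"], packet["dport"], packet["proto"])
        return zlib.crc32(repr(key).encode()) % n_hosts  # stable flow -> host map

    pkt = {"src": "10.0.0.1", "dst": "10.0.0.2",
           "sport": 1234, "dport": 80, "proto": 6}
    print(collector_for(pkt, n_hosts=4))
    ```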

  5. Evaluation of low impact development approach for mitigating flood inundation at a watershed scale in China.

    PubMed

    Hu, Maochuan; Sayama, Takahiro; Zhang, Xingqi; Tanaka, Kenji; Takara, Kaoru; Yang, Hong

    2017-05-15

    Low impact development (LID) has attracted growing attention as an important approach for urban flood mitigation. Most studies evaluating LID performance for mitigating floods focus on changes in peak flow and runoff volume. This paper assessed the performance of LID practices for mitigating flood inundation hazards as retrofitting technologies in an urbanized watershed in Nanjing, China. The findings indicate that LID practices are effective for flood inundation mitigation at the watershed scale, and especially for reducing inundated areas with a high flood hazard risk. Various scenarios of LID implementation levels can reduce total inundated areas by 2%-17% and areas with a high flood hazard level by 6%-80%. Permeable pavement shows better performance than rainwater harvesting in mitigating urban waterlogging. The most efficient scenario combines rainwater harvesting on rooftops with a cistern capacity of 78.5 mm and permeable pavement installed on 75% of non-busy roads and other impervious surfaces. Inundation modeling is an effective approach to obtaining the information necessary to guide decision-making for designing LID practices at watershed scales.

  6. Mass spectrometric-based stable isotopic 2-aminobenzoic acid glycan mapping for rapid glycan screening of biotherapeutics.

    PubMed

    Prien, Justin M; Prater, Bradley D; Qin, Qiang; Cockrill, Steven L

    2010-02-15

    Fast, sensitive, robust methods for "high-level" glycan screening are necessary during various stages of a biotherapeutic product's lifecycle, including clone selection, process changes, and quality control for lot release testing. Traditional glycan screening involves chromatographic or electrophoretic separation-based methods, and, although reproducible, these methods can be time-consuming. Even ultrahigh-performance chromatographic and microfluidic integrated LC/MS systems, which work on a tens-of-minutes time scale, become lengthy when hundreds of samples are to be analyzed. By comparison, a direct-infusion mass spectrometry (MS)-based glycan screening method acquires data on a millisecond time scale, exhibits exquisite sensitivity and reproducibility, and is amenable to automated peak annotation. In addition, characterization of glycan species via sequential mass spectrometry can be performed simultaneously. Here, we demonstrate a quantitative high-throughput MS-based mapping approach using stable isotope 2-aminobenzoic acid (2-AA) for rapid "high-level" glycan screening.

  7. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy-minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved, while small-scale features, such as object details and noise, are suppressed. To maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling it to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization, as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm provides more accurate segmentation results than the classic contour fitting process while remaining very robust to noise and initialization.
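
    A minimal sketch of the coarse-to-fine idea with SciPy: build a Gaussian scale-space and lower the annealing temperature as the scale decreases. The toy energy function and the circular initial contour are assumptions, not the authors' formulation:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    rng = np.random.default_rng(0)

    def energy(contour, image):
        """Toy energy: prefer bright pixels, plus a closed-loop smoothness penalty."""
        brightness = map_coordinates(image, contour.T, order=1).sum()
        smooth = np.sum(np.diff(contour, axis=0, append=contour[:1]) ** 2)
        return -brightness + 0.1 * smooth

    def anneal(contour, image, temperature, steps=200):
        e = energy(contour, image)
        for _ in range(steps):
            cand = contour + rng.normal(scale=1.0, size=contour.shape)
            ec = energy(cand, image)
            if ec < e or rng.random() < np.exp((e - ec) / max(temperature, 1e-9)):
                contour, e = cand, ec
        return contour

    def multiscale_fit(image, scales=(8.0, 4.0, 2.0, 1.0), t0=2.0):
        theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
        c = np.c_[image.shape[0] / 2 + 20 * np.sin(theta),
                  image.shape[1] / 2 + 20 * np.cos(theta)]      # initial circle
        for sigma in scales:                         # coarse -> fine
            level = gaussian_filter(image, sigma)    # linear scale-space slice
            c = anneal(c, level, t0 * sigma / max(scales))  # temperature -> ~0
        return c

    contour = multiscale_fit(np.random.rand(128, 128))
    ```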

  8. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    …models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures

  9. The architecture of the High Performance Storage System (HPSS)

    NASA Technical Reports Server (NTRS)

    Teaff, Danny; Watson, Dick; Coyne, Bob

    1994-01-01

    The rapid growth in the size of datasets has caused a serious imbalance in I/O and storage system performance and functionality relative to application requirements and the capabilities of other system components. The High Performance Storage System (HPSS) is a scalable, next-generation storage system that will meet the functionality and performance requirements of large-scale scientific and commercial computing environments. Our goal is to improve the performance and capacity of storage by two orders of magnitude or more over what is available in the general or mass marketplace today. We are also providing corresponding improvements in architecture and functionality. This paper describes the architecture and functionality of HPSS.

  10. Constructing experimental devices for half-ton synthesis of gadolinium-loaded liquid scintillator and its performance.

    PubMed

    Park, Young Seo; Jang, Yeong Min; Joo, Kyung Kwang

    2018-04-01

    This paper briefly describes the features of various experimental devices constructed for half-ton synthesis of gadolinium (Gd)-loaded liquid scintillator (GdLS), and presents the performance and detailed chemical and physical results for a 0.5% high-concentration GdLS. Various feasibility studies of the apparatus used for loading Gd into solvents have been carried out. The transmittance, Gd concentration, density, light yield, and moisture content were measured for quality control. We show that, with the help of adequate automated experimental devices and tools, it is possible to perform ton-scale synthesis of GdLS in a moderately sized laboratory without difficulty. The synthesized GdLS met the required chemical, optical, and physical properties and various safety requirements. These synthesis devices can be scaled up for massive next-generation neutrino experiments of several hundred tons.

  12. Reducing adolescent clients' anger in a residential substance abuse treatment facility.

    PubMed

    Adelman, Robert; McGee, Patricia; Power, Robert; Hanson, Cathy

    2005-06-01

    Sundown Ranch, a residential behavioral health care treatment facility for adolescents, tracked the progress and results of treatment by selecting performance measures from a psychosocial screening inventory. The temper scale was one of the two highest scales at admission and the highest scale at discharge. A clinical performance improvement (PI) project was conducted to assess improvements in clients' ability to manage anger after the incorporation of Rational Emotive Behavior Therapy (REBT) into treatment. Eighteen months of baseline data (July 1, 1999 - February 1, 2001) were collected, and 20 months of data (May 1, 2001 - December 31, 2002) were collected after the introduction of the PI activity. In all, data were collected for 541 consecutive admissions. A comparison of five successive quarterly reviews indicated average scores of 1.4 standard deviations (SDs) above the mean on the temper scale before the PI activity and 0.45 SD above the mean after. The performance threshold of reducing the average temper scale score to ≤1 SD was met for 17 of 20 months. The fact that the PI activity reduced the temper scale elevations by almost one full SD is highly suggestive of the efficacy of REBT with the treatment population. After the project was completed, REBT was promoted as an additional therapeutic modality within the treatment program.

  13. Aerodynamic Simulation of Ice Accretion on Airfoils

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Bragg, Michael B.; Busch, Greg T.; Montreuil, Emmanuel

    2011-01-01

    This report describes recent improvements in aerodynamic scaling and simulation of ice accretion on airfoils. Ice accretions were classified into four types on the basis of aerodynamic effects: roughness, horn, streamwise, and spanwise ridge. The NASA Icing Research Tunnel (IRT) was used to generate ice accretions within these four types using both subscale and full-scale models. Large-scale, pressurized wind-tunnel testing was performed using a 72-in.- (1.83-m-) chord, NACA 23012 airfoil model with high-fidelity, three-dimensional castings of the IRT ice accretions. Performance data were recorded over Reynolds numbers from 4.5 × 10^6 to 15.9 × 10^6 and Mach numbers from 0.10 to 0.28. Lower-fidelity ice-accretion simulation methods were developed and tested on an 18-in.- (0.46-m-) chord NACA 23012 airfoil model in a small-scale wind tunnel at a lower Reynolds number. The aerodynamic accuracy of the lower-fidelity, subscale ice simulations was validated against the full-scale results for a factor of 4 reduction in model scale and a factor of 8 reduction in Reynolds number. This research has defined the level of geometric fidelity required for artificial ice shapes to yield aerodynamic performance results to within a known level of uncertainty and has culminated in a proposed methodology for subscale iced-airfoil aerodynamic simulation.

  14. Performance Characterization of Global Address Space Applications: A Case Study with NWChem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Jeffrey R.; Krishnamoorthy, Sriram; Shende, Sameer

    The use of global address space languages and one-sided communication for complex applications is gaining attention in the parallel computing community. However, the lack of good evaluative methods to observe multiple levels of performance makes it difficult to isolate the cause of performance deficiencies and to understand the fundamental limitations of system and application design for future improvement. NWChem is a popular computational chemistry package which depends on the Global Arrays/ARMCI suite for partitioned global address space functionality to deliver high-end molecular modeling capabilities. A workload characterization methodology was developed to support NWChem performance engineering on large-scale parallel platforms. The research involved both the integration of performance instrumentation and measurement in the NWChem software, and the analysis of one-sided communication performance in the context of NWChem workloads. Scaling studies were conducted for NWChem on Blue Gene/P and on two large-scale clusters using different generations of Infiniband interconnects and x86 processors. The performance analysis and results show how subtle changes in runtime parameters related to the communication subsystem can have a significant impact on performance behavior. The tool has successfully identified several algorithmic bottlenecks which are already being tackled by computational chemists to improve NWChem performance.

  15. Boeing Smart Rotor Full-scale Wind Tunnel Test Data Report

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi; Hagerty, Brandon; Salazar, Denise

    2016-01-01

    A full-scale helicopter smart material actuated rotor technology (SMART) rotor test was conducted in the USAF National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames. The SMART rotor system is a five-bladed MD 902 bearingless rotor with active trailing-edge flaps. The flaps are actuated using piezoelectric actuators. Rotor performance, structural loads, and acoustic data were obtained over a wide range of rotor shaft angles of attack, thrust, and airspeeds. The primary test objective was to acquire unique validation data for the high-performance computing analyses developed under the Defense Advanced Research Projects Agency (DARPA) Helicopter Quieting Program (HQP). Other research objectives included quantifying the ability of the on-blade flaps to achieve vibration reduction, rotor smoothing, and performance improvements. This data set of rotor performance and structural loads can be used for analytical and experimental comparison studies with other full-scale rotor systems and for analytical validation of computer simulation models. The purpose of this final data report is to document a comprehensive, high-quality data set that includes only data points where the flap was actively controlled and each of the five flaps behaved in a similar manner.

  16. Improving Student Performance through Parent Involvement.

    ERIC Educational Resources Information Center

    Steventon, Candace E.

    A personalized parenting program was implemented to address poor academic performance and low self-esteem of high school students. Student records, the Coopersmith Self-Esteem Inventory, the Behavior Evaluation Scale, and teacher surveys were employed to identify and measure academic and/or self-perception growth. Parents participated in an 8-week…

  17. IceCube

    Science.gov Websites

    [Website navigation residue; recoverable presentation titles: "High pT muons in Cosmic-Ray Air Showers with IceCube"; "IceCube Performance with Artificial Light Sources: the road to Cascade Analyses + Energy scale calibration for EHE" (2006); Thorsten Stetzelberger, "IceCube DAQ Design & Performance" (Nov 2005, PPT).]

  18. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a medical image classification algorithm based on multi-scale non-negative sparse coding. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from the different layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. The resulting multi-scale non-negative sparse coding features are then combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform the classification. The experimental results demonstrate that the proposed algorithm effectively utilizes multi-scale and contextual spatial information of medical images, substantially reduces the semantic gap, and improves medical image classification performance.
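
    For illustration, a hedged sketch of one ingredient, non-negative sparse coding of scale layers via multiplicative updates; the dictionary learning, Fisher discriminative analysis, and SVM stages are omitted, and all dimensions are assumptions:

    ```python
    # Codes H >= 0 minimizing ||X - D H||^2 + lam * sum(H), for fixed non-negative D.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nn_sparse_codes(X, D, lam=0.1, iters=100):
        H = np.random.rand(D.shape[1], X.shape[1])
        for _ in range(iters):
            H *= (D.T @ X) / (D.T @ D @ H + lam + 1e-12)  # multiplicative update
        return H

    image = np.random.rand(64, 64)
    layers = [gaussian_filter(image, s) for s in (0.5, 2.0, 4.0)]  # scale layers
    D = np.random.rand(64, 32)                                     # toy dictionary
    # Pool codes per layer and concatenate: a "multi-scale feature histogram".
    hist = np.concatenate([nn_sparse_codes(L, D).sum(axis=1) for L in layers])
    ```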

  19. Reliability Considerations of ULP Scaled CMOS in Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    White, Mark; MacNeal, Kristen; Cooper, Mark

    2012-01-01

    NASA, the aerospace community, and other high-reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region. Decreasing the feature size of CMOS devices not only allows more components to be placed on a single chip, but also increases performance by allowing faster switching (or clock) speeds with reduced power compared to larger-scaled devices. The higher performance and lower operating and stand-by power of Ultra-Low Power (ULP) microelectronics are not only desirable, but also necessary to meet the low-power-consumption design goals of critical spacecraft systems. The integration of these components in such systems, however, must be balanced with the overall risk tolerance of the project.

  20. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  1. Afterbody External Aerodynamic and Performance Prediction at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Carlson, John R.

    1999-01-01

    This CFD study concludes that the differences between the flow in a flight-Reynolds-number test and in a sub-scale wind tunnel test are potentially substantial for this particular nozzle boattail geometry. The early study was performed using a linear k-epsilon turbulence model. The present study was performed using the Girimaji formulation of an algebraic Reynolds stress turbulence model.

  2. Effect of High-Fidelity Ice Accretion Simulations on the Performance of a Full-Scale Airfoil Model

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Bragg, Michael B.; Addy, Harold E., Jr.; Lee, Sam; Moens, Frederic; Guffond, Didier

    2010-01-01

    The simulation of ice accretion on a wing or other surface is often required for aerodynamic evaluation, particularly at small scale or low Reynolds number. While there are commonly accepted practices for ice simulation, there are no established and validated guidelines. The purpose of this article is to report the results of an experimental study establishing a high-fidelity, full-scale, iced-airfoil aerodynamic performance database. This research was conducted as part of a larger program with the goal of developing subscale aerodynamic simulation methods for iced airfoils. Airfoil performance testing was carried out at the ONERA F1 pressurized wind tunnel using a 72-in. (1828.8-mm) chord NACA 23012 airfoil over a Reynolds number range of 4.5 × 10^6 to 16.0 × 10^6 and a Mach number range of 0.10 to 0.28. The high-fidelity ice-casting simulations had a significant impact on the aerodynamic performance. A spanwise-ridge ice shape resulted in a maximum lift coefficient of 0.56, compared to the clean value of 1.85, at Re = 15.9 × 10^6 and M = 0.20. The two roughness shapes and the streamwise shape yielded maximum lift values in the range of 1.09 to 1.28, which was a relatively small variation compared to the differences in the ice geometry. The stalling characteristics of the two roughness and one streamwise ice simulations maintained the abrupt leading-edge stall type of the clean NACA 23012 airfoil, despite the significant decrease in maximum lift. Changes in Reynolds and Mach number over the large range tested had little effect on the iced-airfoil performance.

  3. The Effects of Specialization and Sex on Anterior Y-Balance Performance in High School Athletes.

    PubMed

    Miller, Madeline M; Trapp, Jessica L; Post, Eric G; Trigsted, Stephanie M; McGuine, Timothy A; Brooks, M Alison; Bell, David R

    Background: Sport specialization and movement asymmetry have been separately discussed as potential risk factors for lower extremity injury. Early specialization may lead to the development of movement asymmetries that can predispose an athlete to injury, but this has not been thoroughly examined. Hypothesis: Athletes rated as specialized would exhibit greater between-limb anterior reach asymmetry and decreased anterior reach distance on the Y-balance test (YBT) as compared with nonspecialized high school athletes, and these differences would not be dependent on sex. Study design: Cross-sectional study. Level of evidence: Level 3. Methods: Two hundred ninety-five athletes (117 male, 178 female; mean age, 15.6 ± 1.2 years) from 2 local high schools participating in basketball, soccer, volleyball, and tennis responded to a questionnaire regarding sport specialization status and performed trials of the YBT during preseason testing. Specialization was categorized according to 3 previously utilized specialization classification methods (single/multisport, 3-point scale, and 6-point scale), and interactions between specialization and sex with Y-balance performance were calculated using 2-way analyses of variance. Results: Single-sport male athletes displayed greater anterior reach asymmetry than other interaction groups. A consistent main effect was observed for sex, with men displaying greater anterior asymmetry and decreased anterior reach distance compared with women. However, the interaction effects of specialization and sex on anterior Y-balance performance varied based on the classification method used. Conclusion: Single-sport male athletes displayed greater anterior reach asymmetry on the YBT than multisport and female athletes. The specialization classification method is important because the 6- and 3-point scales may not accurately identify balance abnormalities. Male athletes performed worse than female athletes on both of the Y-balance tasks. Clinicians should be aware that single-sport male athletes may display deficits in dynamic balance, potentially increasing their risk of injury.
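
    For readers who want the analysis pattern, a hedged sketch of a 2-way ANOVA (specialization × sex) with statsmodels; the column names and synthetic data are assumptions, not the study's data:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "asymmetry": rng.normal(4.0, 2.0, 120),        # anterior reach asymmetry (cm)
        "specialized": rng.choice(["single", "multi"], 120),
        "sex": rng.choice(["M", "F"], 120),
    })
    model = ols("asymmetry ~ C(specialized) * C(sex)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))             # main effects + interaction
    ```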

  4. Rotor Performance at High Advance Ratio: Theory versus Test

    NASA Technical Reports Server (NTRS)

    Harris, Franklin D.

    2008-01-01

    Five analytical tools have been used to study rotor performance at high advance ratio. One is representative of autogyro rotor theory in 1934 and four are representative of helicopter rotor theory in 2008. The five theories are measured against three sets of well-documented, full-scale, isolated-rotor performance experiments. The major finding of this study is that the decades spent by many rotorcraft theoreticians improving the prediction of basic rotor aerodynamic performance have paid off. This payoff, illustrated by comparing the CAMRAD II comprehensive code and Wheatley & Bailey theory to H-34 test data, shows that rational rotor lift-to-drag ratios are now predictable. The 1934 theory predicted L/D ratios as high as 15. CAMRAD II predictions compared well with H-34 test data having L/D ratios more on the order of 7 to 9. However, a detailed examination of the selected codes against H-34 test data indicates that none of the codes can predict, to engineering accuracy, the control positions and shaft angle of attack required for a given lift above an advance ratio of 0.62. No full-scale rotor performance data are available for advance ratios above 1.0, and extrapolation of currently available data to advance ratios on the order of 2.0 is unreasonable despite the needs of future rotorcraft. Therefore, it is recommended that an overly strong full-scale rotor blade set be obtained and tested in a suitable wind tunnel to at least an advance ratio of 2.5. A tail rotor from a Sikorsky CH-53 or other large single-rotor helicopter should be adequate for this exploratory experiment.

  5. Channeling of multikilojoule high-intensity laser beams in an inhomogeneous plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivancic, S.; Haberberger, D.; Habara, H.

    Channeling experiments were performed that demonstrate the transport of high-intensity (>10¹⁸ W/cm²), multikilojoule laser light through a millimeter-sized, inhomogeneous (~300-μm density scale length) laser-produced plasma up to overcritical density, which is an important step forward for the fast-ignition concept. The background plasma density and the density depression inside the channel were characterized with a novel optical probe system. The channel progression velocity was measured and agrees well with theoretical predictions based on large-scale particle-in-cell simulations, confirming scaling laws for the required channeling laser energy and laser pulse duration, which are important parameters for future integrated fast-ignition channeling experiments.

  6. Susceptibility of the MMPI-2-RF neurological complaints and cognitive complaints scales to over-reporting in simulated head injury.

    PubMed

    Bolinger, Elizabeth; Reese, Caitlin; Suhr, Julie; Larrabee, Glenn J

    2014-02-01

    We examined the effect of simulated head injury on scores on the Neurological Complaints (NUC) and Cognitive Complaints (COG) scales of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF). Young adults with a history of mild head injury were randomly assigned to simulate head injury or give their best effort on a battery of neuropsychological tests, including the MMPI-2-RF. Simulators who also showed poor effort on performance validity tests (PVTs) were compared with controls who showed valid performance on PVTs. Results showed that both scales, but especially NUC, are elevated in individuals simulating head injury, with medium to large effect sizes. Although both scales were highly correlated with all MMPI-2-RF over-reporting validity scales, the relationship of Response Bias Scale to both NUC and COG was much stronger in the simulators than controls. Even accounting for over-reporting on the MMPI-2-RF, NUC was related to general somatic complaints regardless of group membership, whereas COG was related to both psychological distress and somatic complaints in the control group only. Neither scale was related to actual neuropsychological performance, regardless of group membership. Overall, results provide further evidence that self-reported cognitive symptoms can be due to many causes, not necessarily cognitive impairment, and can be exaggerated in a non-credible manner.

  7. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
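
    LDMS itself is plugin-based system software; purely to show the collect/transport/store pattern at a fixed sampling period, a toy Python sampler (Linux /proc assumed, not LDMS code):

    ```python
    # Toy metric sampler: collect a system metric, serialize it, append to a store.
    import json, time

    def sample():
        with open("/proc/loadavg") as f:
            load1 = float(f.read().split()[0])
        return {"ts": time.time(), "load1": load1}

    with open("metrics.jsonl", "a") as store:    # "storage" stage
        for _ in range(3):                       # fixed-period "collection" loop
            store.write(json.dumps(sample()) + "\n")
            time.sleep(1.0)
    ```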

  9. Scaling up access to oral rehydration solution for diarrhea: Learning from historical experience in low- and high-performing countries.

    PubMed

    Wilson, Shelby E; Morris, Saul S; Gilbert, Sarah Skye; Mosites, Emily; Hackleman, Rob; Weum, Kristoffer L M; Pintye, Jillian; Manhart, Lisa E; Hawes, Stephen E

    2013-06-01

    This paper aims to identify factors that systematically predict why some countries that have tried to scale up oral rehydration solution (ORS) have succeeded, and others have not. We examined ORS coverage over time, across countries, and through case studies. We conducted expert interviews and literature and data searches to better understand the history of ORS scale-up efforts and why they failed or succeeded in nine countries. We used qualitative, pairwise (or three-country) comparisons of geographically or otherwise similar countries that had different outcomes in terms of ORS scale-up. An algorithm was developed which scored country performance across key supply, demand and financing activities to quantitatively assess the scale-up efforts in each country. The vast majority of countries have neither particularly low nor encouragingly high ORS use rates. We observed three clearly identifiable contrasts between countries that achieved and sustained high ORS coverage and those that did not. Key partners across sectors have critical roles to play to effectively address supply- and demand-side barriers. Efforts must synchronize demand generation, private provider outreach and public sector work. Many donor funds are either suspended or redirected in the event of political instability, exacerbating the health challenges faced by countries in these contexts. We found little information on the cost of scale-up efforts. We identified a number of characteristics of successful ORS scale-up programs, including involvement of a broad range of key players, addressing supply and demand generation together, and working with both public and private sectors. Dedicated efforts are needed to launch and sustain success, including monitoring and evaluation plans to track program costs and impacts. These case studies were designed to inform programmatic decision-making; thus, rigorous academic methods to qualitatively and quantitatively evaluate country ORS scale-up programs might yield additional, critical insights and confirm our conclusions.
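
    As a sketch of the kind of scoring algorithm described; the weights, activity names, and values here are invented for illustration:

    ```python
    # Composite 0-100 country score across supply, demand and financing activities.
    weights = {"supply": 0.4, "demand": 0.4, "financing": 0.2}  # assumed weights

    def scale_up_score(activity_scores):
        return sum(weights[k] * activity_scores[k] for k in weights)

    print(scale_up_score({"supply": 80, "demand": 55, "financing": 70}))  # 68.0
    ```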

  10. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. These two features, flexibility and size, pose new problems in control system design. Since large-scale structures are not testable in ground-based facilities, decisions on component placement must be made prior to full-scale tests on the spacecraft. Control-law research is directed at the problem that the modelling knowledge available prior to operation is inadequate to achieve peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter-adaptive control is a promising method that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? And how well must the mathematical model used in on-board analytic redundancy be known, and what are reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed-component environment?

  11. Valence Scaling of Dynamic Facial Expressions Is Altered in High-Functioning Subjects with Autism Spectrum Disorders: An FMRI Study

    ERIC Educational Resources Information Center

    Rahko, Jukka S.; Paakki, Jyri-Johan; Starck, Tuomo H.; Nikkinen, Juha; Pauls, David L.; Katsyri, Jari V.; Jansson-Verkasalo, Eira M.; Carter, Alice S.; Hurtig, Tuula M.; Mattila, Marja-Leena; Jussila, Katja K.; Remes, Jukka J.; Kuusikko-Gauffin, Sanna A.; Sams, Mikko E.; Bolte, Sven; Ebeling, Hanna E.; Moilanen, Irma K.; Tervonen, Osmo; Kiviniemi, Vesa

    2012-01-01

    FMRI was performed using the dynamic facial expressions of fear and happiness, in order to detect differences in valence processing between 25 subjects with autism spectrum disorders (ASDs) and 27 typically developing controls. Valence scaling was abnormal in ASDs. Positive valence induces lower deactivation and abnormally strong activity in ASD…

  12. Executive Function in Children with High and Low Attentional Skills: Correspondences between Behavioural and Cognitive Profiles

    ERIC Educational Resources Information Center

    Scope, Alison; Empson, Janet; McHale, Sue

    2010-01-01

    Cognitive performance was compared between two groups of typically developing children, who had been observed and rated as differing significantly in their attentional skills at school. The participants were 24 8- and 9-year-old children scoring poorly relative to peers, on a classroom observation scale and teacher rating scale for attention,…

  13. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    PubMed Central

    Azad, Ariful; Ouzounis, Christos A; Kyrpides, Nikos C; Buluç, Aydin

    2018-01-01

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. Here, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ∼70 million nodes with ∼68 billion edges in ∼2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license. PMID:29315405
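
    For orientation, a minimal serial MCL sketch in NumPy showing the expansion/inflation loop that HipMCL parallelizes with distributed sparse matrices; the dense matrices and the tiny graph are for illustration only:

    ```python
    import numpy as np

    def mcl(adj, expansion=2, inflation=2.0, iters=50):
        M = adj + np.eye(len(adj))                     # add self-loops
        M /= M.sum(axis=0)                             # column-stochastic
        for _ in range(iters):
            M = np.linalg.matrix_power(M, expansion)   # expansion: spread flow
            M = M ** inflation                         # inflation: sharpen strong flow
            M /= M.sum(axis=0)
        # Attractor rows index the clusters (duplicates possible when a
        # cluster has several attractors).
        return [np.flatnonzero(row > 1e-6) for row in M if row.max() > 1e-6]

    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=float)
    print(mcl(A))   # two loosely joined triangles -> two clusters
    ```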

  14. HipMCL: a high-performance parallel implementation of the Markov clustering algorithm for large-scale networks

    DOE PAGES

    Azad, Ariful; Pavlopoulos, Georgios A.; Ouzounis, Christos A.; ...

    2018-01-05

    Biological networks capture structural or functional properties of relevant entities such as molecules, proteins or genes. Characteristic examples are gene expression networks or protein–protein interaction networks, which hold information about functional affinities or structural similarities. Such networks have been expanding in size due to increasing scale and abundance of biological data. While various clustering algorithms have been proposed to find highly connected regions, Markov Clustering (MCL) has been one of the most successful approaches to cluster sequence similarity or expression networks. Despite its popularity, MCL’s scalability to cluster large datasets still remains a bottleneck due to high running times and memory demands. In this paper, we present High-performance MCL (HipMCL), a parallel implementation of the original MCL algorithm that can run on distributed-memory computers. We show that HipMCL can efficiently utilize 2000 compute nodes and cluster a network of ~70 million nodes with ~68 billion edges in ~2.4 h. By exploiting distributed-memory environments, HipMCL clusters large-scale networks several orders of magnitude faster than MCL and enables clustering of even bigger networks. Finally, HipMCL is based on MPI and OpenMP and is freely available under a modified BSD license.

  16. NASA/GE Energy Efficient Engine low pressure turbine scaled test vehicle performance report

    NASA Technical Reports Server (NTRS)

    Bridgeman, M. J.; Cherry, D. G.; Pedersen, J.

    1983-01-01

    The low pressure turbine for the NASA/General Electric Energy Efficient Engine is a highly loaded five-stage design featuring high outer wall slope, controlled vortex aerodynamics, low stage flow coefficient, and reduced clearances. An assessment of the performance of the LPT has been made based on a series of scaled air-turbine tests divided into two phases: Block 1 and Block 2. The transition duct and the first two stages of the turbine were evaluated during the Block 1 phase from March through August 1979. The full five-stage scale model, representing the final integrated core/low spool (ICLS) design and incorporating redesigns of stages 1 and 2 based on Block 1 data analysis, was tested as Block 2 in June through September 1981. Results from the scaled air-turbine tests, reviewed herein, indicate that the five-stage turbine designed for the ICLS application will attain an efficiency level of 91.5 percent at the Mach 0.8/10.67-km (35,000-ft), max-climb design point. This is relative to program goals of 91.1 percent for the ICLS and 91.7 percent for the flight propulsion system (FPS).

  17. Dynamic Smagorinsky model on anisotropic grids

    NASA Technical Reports Server (NTRS)

    Scotti, A.; Meneveau, C.; Fatica, M.

    1996-01-01

    Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
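
    For background, a one-dimensional sketch of the dynamic procedure (Germano identity with a least-squares coefficient, after Lilly); the velocity field, filter widths, and finite differences are illustrative, and the paper's anisotropic 3-D setting is not reproduced:

    ```python
    import numpy as np

    n, L = 256, 2 * np.pi
    dx = L / n
    x = np.linspace(0, L, n, endpoint=False)
    u = np.sin(x) + 0.3 * np.sin(5 * x + 1.0)       # a resolved velocity field

    def test_filter(f):                              # top-hat filter, width 2*dx
        return (np.roll(f, 1) + 2 * f + np.roll(f, -1)) / 4.0

    def ddx(f):                                      # periodic central difference
        return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

    S = ddx(u)                                       # resolved "strain" in 1-D
    alpha = 2.0                                      # test/grid filter-width ratio

    Leo = test_filter(u * u) - test_filter(u) ** 2   # resolved (Leonard) stress
    M = 2 * dx**2 * (test_filter(np.abs(S) * S)
                     - alpha**2 * np.abs(test_filter(S)) * test_filter(S))

    cs2 = np.mean(Leo * M) / np.mean(M * M)          # least-squares coefficient
    print("dynamic Smagorinsky C_s^2 ~", cs2)
    ```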

  18. Combined climate and carbon-cycle effects of large-scale deforestation

    PubMed Central

    Bala, G.; Caldeira, K.; Wickett, M.; Phillips, T. J.; Lobell, D. B.; Delire, C.; Mirin, A.

    2007-01-01

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed by using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate. PMID:17420463

  19. Combined climate and carbon-cycle effects of large-scale deforestation.

    PubMed

    Bala, G; Caldeira, K; Wickett, M; Phillips, T J; Lobell, D B; Delire, C; Mirin, A

    2007-04-17

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.

  20. Combined Climate and Carbon-Cycle Effects of Large-Scale Deforestation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bala, G; Caldeira, K; Wickett, M

    2006-10-17

    The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover, also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These are the first such simulations performed using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, since the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. While these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer

    This is a set of slides from a guest lecture on visualization and data analysis for high-performance computing, given for a class at the University of Texas at El Paso. The topics covered are: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, and "big data", closing with an analysis example.
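
    The VTK pipeline named in the slide topics follows a source-mapper-actor pattern. As an illustration only (this example is not taken from the lecture), a minimal Python sketch of that pattern is:

    ```python
    # Minimal VTK pipeline sketch: source -> mapper -> actor -> renderer.
    # Illustrative only; requires the `vtk` Python package.
    import vtk

    sphere = vtk.vtkSphereSource()            # geometry source
    mapper = vtk.vtkPolyDataMapper()          # maps polygonal data to primitives
    mapper.SetInputConnection(sphere.GetOutputPort())

    actor = vtk.vtkActor()                    # places the mapped data in the scene
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    window.Render()
    interactor.Start()                        # opens an interactive render window
    ```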

  2. Guided Growth of Horizontal ZnSe Nanowires and their Integration into High-Performance Blue-UV Photodetectors.

    PubMed

    Oksenberg, Eitan; Popovitz-Biro, Ronit; Rechav, Katya; Joselevich, Ernesto

    2015-07-15

    Perfectly aligned horizontal ZnSe nanowires are obtained by guided growth and easily integrated into high-performance blue-UV photodetectors. Their crystal phase and crystallographic orientation are controlled by the epitaxial relations with six different sapphire planes. Guided growth paves the way for the large-scale integration of nanowires into optoelectronic devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. A dual-scale metal nanowire network transparent conductor for highly efficient and flexible organic light emitting diodes.

    PubMed

    Lee, Jinhwan; An, Kunsik; Won, Phillip; Ka, Yoonseok; Hwang, Hyejin; Moon, Hyunjin; Kwon, Yongwon; Hong, Sukjoon; Kim, Changsoon; Lee, Changhee; Ko, Seung Hwan

    2017-02-02

    Although solution-processed metal nanowire (NW) percolation networks are a strong candidate to replace commercial indium tin oxide, their performance in thin-film device applications is limited by the reduced effective electrical area arising from the dimple structure and percolative voids that single-size metal NW percolation networks inevitably possess. Here, we present a transparent electrode based on a dual-scale silver nanowire (AgNW) percolation network embedded in a flexible substrate to demonstrate a significant enhancement in the effective electrical area by filling the large percolative voids present in a long/thick AgNW network with short/thin AgNWs. As a proof of concept, the performance enhancement of a flexible phosphorescent OLED with the dual-scale AgNW percolation network is demonstrated relative to a previous mono-scale AgNW network. Moreover, we report that mechanical and oxidative robustness, both critical for flexible OLEDs, are greatly increased by embedding the dual-scale AgNW network in a resin layer.

  4. Can Alberta Infant Motor Scale and Milani Comparetti Motor Development Screening Test be rapid alternatives to Bayley Scales of Infant Development-II at high-risk infants

    PubMed Central

    Yıldırım, Zeynep Hoşbay; Aydınlı, Nur; Ekici, Barış; Tatlı, Burak; Çalişkan, Mine

    2012-01-01

    Purpose: The main objective of the present study is to assess the neuromotor development of high-risk infants using three tests, and to determine inter-test concordance and the feasibility of these tests. Materials and Methods: One-hundred and nine patients aged between 0 and 6 months and identified as "high-risk infant" according to Kliegman's criteria were enrolled in the study. Three different tests were used to assess the neuromotor development of the patients: Bayley Scales of Infant Development-II (BSID-II), Alberta Infant Motor Scale (AIMS), and Milani Comparetti Motor Development Screening Test (MCMDST). Results: Correlation analysis was performed between raw scores of the BSID-II motor scale and total scores of the AIMS. These two tests were highly correlated (r:0.92). Moderate concordance was found between BSID-II and AIMS (k:0.35). Concordance was slight both between BSID-II and MCMDST and between AIMS and MCMDST (k:0.11 and k:0.16, respectively). Conclusion: AIMS has a high correlation and consistency with BSID-II and can be used alongside routine neurological examination, as it is based on observation, has few items, and requires less time to complete. PMID:22919192

  5. Can Alberta Infant Motor Scale and Milani Comparetti Motor Development Screening Test be rapid alternatives to Bayley Scales of Infant Development-II at high-risk infants.

    PubMed

    Yıldırım, Zeynep Hoşbay; Aydınlı, Nur; Ekici, Barış; Tatlı, Burak; Calişkan, Mine

    2012-07-01

    The main objective of the present study is to assess the neuromotor development of high-risk infants using three tests, and to determine inter-test concordance and the feasibility of these tests. One-hundred and nine patients aged between 0 and 6 months and identified as "high-risk infant" according to Kliegman's criteria were enrolled in the study. Three different tests were used to assess the neuromotor development of the patients: Bayley Scales of Infant Development-II (BSID-II), Alberta Infant Motor Scale (AIMS), and Milani Comparetti Motor Development Screening Test (MCMDST). Correlation analysis was performed between raw scores of the BSID-II motor scale and total scores of the AIMS. These two tests were highly correlated (r:0.92). Moderate concordance was found between BSID-II and AIMS (k:0.35). Concordance was slight both between BSID-II and MCMDST and between AIMS and MCMDST (k:0.11 and k:0.16, respectively). AIMS has a high correlation and consistency with BSID-II and can be used alongside routine neurological examination, as it is based on observation, has few items, and requires less time to complete.
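
    The statistics reported here, a Pearson correlation between raw scores and kappa concordance between categorized outcomes, can be reproduced with standard tools. A minimal sketch, assuming two score vectors and two categorical classification vectors (all data values and variable names below are hypothetical placeholders):

    ```python
    # Sketch of the correlation/concordance analysis described above.
    # Requires scipy and scikit-learn; data arrays are hypothetical placeholders.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    bsid_raw = np.array([55, 62, 48, 70, 66])     # BSID-II motor raw scores (example)
    aims_total = np.array([28, 33, 22, 39, 35])   # AIMS total scores (example)

    r, p = pearsonr(bsid_raw, aims_total)         # inter-test correlation (r)
    print(f"Pearson r = {r:.2f} (p = {p:.3f})")

    # Concordance between categorized outcomes (e.g., delayed vs. normal)
    bsid_cat = ["delayed", "normal", "delayed", "normal", "normal"]
    aims_cat = ["delayed", "normal", "normal", "normal", "normal"]
    kappa = cohen_kappa_score(bsid_cat, aims_cat) # Cohen's kappa (k)
    print(f"kappa = {kappa:.2f}")
    ```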

  6. Domain-averaged snow depth over complex terrain from flat field measurements

    NASA Astrophysics Data System (ADS)

    Helbig, Nora; van Herwijnen, Alec

    2017-04-01

    Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depth at individual stations offers an opportunity for data assimilation to improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these ground measurement networks provide a wealth of information, various studies have questioned the representativeness of such flat field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easy-to-derive topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km, using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated the domain-averaged snow depth derived by applying each parameterization to nearby flat field measurements against the domain-averaged, highly resolved snow depth. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest that the parameterization could be used to assimilate flat field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
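
    The abstract does not give the fitted coefficients, so the following is only a schematic Python sketch of the two parameterization forms it describes (a linear lapse rate, and a power-law elevation trend scaled by the subgrid sky view factor); all coefficients are hypothetical placeholders that would have to be fitted against highly resolved snow depth maps:

    ```python
    # Schematic forms of the two domain-averaged snow depth parameterizations.
    # Coefficients a, b, c, p are hypothetical and must be fitted to data.
    def mean_snow_depth_linear(elevation_m, a=0.5, b=0.001):
        """Parameterization 1: linear lapse rate in mean elevation (depth in m)."""
        return a + b * elevation_m

    def mean_snow_depth_svf(elevation_m, sky_view_factor, c=0.01, p=0.8):
        """Parameterization 2: power-law elevation trend scaled by the
        subgrid sky view factor (0 < sky_view_factor <= 1)."""
        return c * elevation_m**p * sky_view_factor

    # Example: estimates for a grid cell at 2200 m with a subgrid SVF of 0.9
    print(mean_snow_depth_linear(2200.0))
    print(mean_snow_depth_svf(2200.0, 0.9))
    ```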

  7. Criticality of Low-Energy Protons in Single-Event Effects Testing of Highly-Scaled Technologies

    NASA Technical Reports Server (NTRS)

    Pellish, Jonathan A.; Marshall, Paul W.; Rodbell, Kenneth P.; Gordon, Michael S.; LaBel, Kenneth A.; Schwank, James R.; Dodds, Nathaniel A.; Castaneda, Carlos M.; Berg, Melanie D.; Kim, Hak S.; et al.

    2014-01-01

    We report low-energy proton and low-energy alpha particle single-event effects (SEE) data on 32 nm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) latches and static random access memory (SRAM) that demonstrate the criticality of using low-energy protons for SEE testing of highly-scaled technologies. Low-energy protons produced a significantly higher fraction of multi-bit upsets relative to single-bit upsets when compared to similar alpha particle data. This difference highlights the importance of performing hardness assurance testing with protons whose energy distribution includes components below 2 MeV. The importance of low-energy protons to system-level single-event performance depends on the technology under investigation as well as the target radiation environment.

  8. Statistical machine translation for biomedical text: are we there yet?

    PubMed

    Wu, Cuijun; Xia, Fei; Deleger, Louise; Solti, Imre

    2011-01-01

    In our paper we addressed the research question: "Has machine translation achieved sufficiently high quality to translate PubMed titles for patients?" We analyzed statistical machine translation output for six foreign language-English translation pairs (bidirectionally). We built a high-performing in-house system and evaluated its output for each translation pair on a large scale, with both automated BLEU scores and human judgment. In addition to the in-house system, we also evaluated Google Translate's performance specifically within the biomedical domain. We report high performance for the German-English, French-English, and Spanish-English bidirectional translation pairs for both Google Translate and our system.
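
    BLEU scoring of machine translation output, as used in this evaluation, can be computed with standard libraries. A minimal sketch (not the authors' evaluation pipeline; the tokenized sentences are illustrative placeholders):

    ```python
    # Corpus-level BLEU sketch using NLTK; sentences are illustrative only.
    from nltk.translate.bleu_score import corpus_bleu

    # One reference list per hypothesis (multiple references are allowed)
    references = [[["randomized", "trial", "of", "aspirin", "therapy"]]]
    hypotheses = [["randomized", "trial", "of", "aspirin", "treatment"]]

    score = corpus_bleu(references, hypotheses)
    print(f"BLEU = {score:.3f}")
    ```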

  9. Scale and geometry effects on heat-recirculating combustors

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Hua; Ronney, Paul D.

    2013-10-01

    A simple analysis of linear and spiral counterflow heat-recirculating combustors was conducted to identify the dimensionless parameters expected to quantify the performance of such devices. A three-dimensional (3D) numerical model of spiral counterflow 'Swiss roll' combustors was then used to confirm and extend the applicability of the identified parameters. It was found that without property adjustment to maintain constant values of these parameters, at low Reynolds number (Re) smaller-scale combustors actually showed better performance (in terms of having lower lean extinction limits at the same Re) due to lower heat loss and internal wall-to-wall radiation effects, whereas at high Re, larger-scale combustors showed better performance due to longer residence time relative to chemical reaction time. By adjustment of property values, it was confirmed that four dimensionless parameters were sufficient to characterise combustor performance at all scales: Re, a heat loss coefficient (α), a Damköhler number (Da) and a radiative transfer number (R). The effect of diffusive transport (i.e. Lewis number) was found to be significant only at low Re. Substantial differences were found between the performance of linear and spiral combustors; these were explained in terms of the effects of the area exposed to heat loss to ambient and the sometimes detrimental effect of increasing heat transfer to adjacent outlet turns of the spiral exchanger. These results provide insight into the optimal design of small-scale combustors and the choice of operating conditions.
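
    The paper-specific definitions of the heat loss coefficient (α) and the radiative transfer number (R) are not reproduced in the abstract; for orientation only, the first two governing groups have their standard forms:

    ```latex
    % Standard forms of two of the governing dimensionless groups (sketch only;
    % the paper's definitions of \alpha and R are not reproduced here).
    \mathrm{Re} = \frac{\rho U d}{\mu},
    \qquad
    \mathrm{Da} = \frac{\tau_{\mathrm{residence}}}{\tau_{\mathrm{chemical}}} .
    ```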

  10. Similarity spectra analysis of high-performance jet aircraft noise.

    PubMed

    Neilsen, Tracianne B; Gee, Kent L; Wall, Alan T; James, Michael M

    2013-04-01

    Noise measured in the vicinity of an F-22A Raptor has been compared to similarity spectra found previously to represent mixing noise from large-scale and fine-scale turbulent structures in laboratory-scale jet plumes. Comparisons have been made for three engine conditions using ground-based sideline microphones, which covered a large angular aperture. Even though the nozzle geometry is complex and the jet is nonideally expanded, the similarity spectra do agree with large portions of the measured spectra. Toward the sideline, the fine-scale similarity spectrum is used, while the large-scale similarity spectrum provides a good fit to the area of maximum radiation. Combinations of the two similarity spectra are shown to match the data in between those regions. Surprisingly, a combination of the two is also shown to match the data at the farthest aft angle. However, at high frequencies the degree of congruity between the similarity and the measured spectra changes with engine condition and angle. At the higher engine conditions, there is a systematically shallower measured high-frequency slope, with the largest discrepancy occurring in the regions of maximum radiation.

  11. The teamwork in assertive community treatment (TACT) scale: development and validation.

    PubMed

    Wholey, Douglas R; Zhu, Xi; Knoke, David; Shah, Pri; Zellmer-Bruhn, Mary; Witheridge, Thomas F

    2012-11-01

    Team design is meticulously specified for assertive community treatment (ACT) teams, yet performance can vary across ACT teams, even those with high fidelity. By developing and validating the Teamwork in Assertive Community Treatment (TACT) scale, investigators examined the role of team processes in ACT performance. The TACT scale measuring ACT teamwork was developed from a conceptual model grounded in organizational research and adapted for the ACT and mental health context. TACT subscales were constructed after exploratory and confirmatory factor analyses. The reliability, discriminant validity, predictive validity, temporal stability, internal consistency, and within-team agreement were established with surveys from approximately 300 members of 26 Minnesota ACT teams who completed the questionnaire three times, at six-month intervals. Nine TACT subscales emerged from the analyses: exploration, exploitation of new and existing knowledge, psychological safety, goal agreement, conflict, constructive controversy, information accessibility, encounter preparedness, and consumer-centered care. These nine subscales demonstrated fit and temporal stability (confirmatory factor analysis), high internal consistency (Cronbach's alpha), and within-team agreement and between-team differences (rwg and intraclass correlations). Correlational analyses of the subscales revealed that they measure related yet distinctive aspects of ACT team processes, and regression analyses demonstrated predictive validity (encounter preparedness is related to staff outcomes). The TACT scale demonstrated high reliability and validity and can be included in research and evaluation of teamwork in ACT and mental health teams.
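
    Internal consistency of a subscale, reported above as Cronbach's alpha, follows directly from the item and total-score variances. A minimal numpy sketch (the item-response matrix below is a hypothetical placeholder, not TACT data):

    ```python
    # Cronbach's alpha for one subscale: items in columns, respondents in rows.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        k = items.shape[1]                          # number of items
        item_vars = items.var(axis=0, ddof=1)       # per-item variances
        total_var = items.sum(axis=1).var(ddof=1)   # variance of summed score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-respondent x 4-item Likert responses
    responses = np.array([[4, 5, 4, 4],
                          [2, 3, 2, 3],
                          [5, 5, 4, 5],
                          [3, 3, 3, 2],
                          [4, 4, 5, 4]])
    print(f"alpha = {cronbach_alpha(responses):.2f}")
    ```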

  12. The revised Generalized Expectancy for Success Scale: a validity and reliability study.

    PubMed

    Hale, W D; Fiedler, L R; Cochran, C D

    1992-07-01

    The Generalized Expectancy for Success Scale (GESS; Fibel & Hale, 1978) was revised and assessed for reliability and validity. The revised version was administered to 199 college students along with other conceptually related measures, including the Rosenberg Self-Esteem Scale, the Life Orientation Test, and Rotter's Internal-External Locus of Control Scale. One subsample of students also completed the Eysenck Personality Inventory, while another subsample performed a criterion-related task that involved risk taking. Item analysis yielded 25 items with correlations of .45 or higher with the total score. Results indicated high internal consistency and test-retest reliability.

  13. Contribution to the epidemiology of postnatal depression in Germany--implications for the utilization of treatment.

    PubMed

    v Ballestrem, C-L; Strauss, M; Kächele, H

    2005-05-01

    Using a longitudinal screening model, 772 mothers were screened for postnatal depression after delivery in Stuttgart (Germany). This model comprised the Edinburgh Postnatal Depression Scale (EPDS) and the Hamilton Depression Scale (HAMD). The first screening, with the EPDS, took place 6-8 weeks after delivery. Mothers with high scores in the first screening had a second EPDS screening 9-12 weeks after delivery, at least three weeks after the first. Mothers with high scores in both screenings were investigated with the HAMD. Classification was performed according to DSM-IV. After observation until the third month after delivery, 3.6% (N = 28) of the 772 mothers were diagnosed with postnatal depression. Various methods of therapy were offered to those mothers; 18% (N = 5) accepted one or more of them. The remaining mothers with postnatal depression declined, mostly for attitudinal or practical reasons. 13.4% of the mothers showed high scores in the first screening but not in the second. For those mothers a longitudinal observation is currently being performed to distinguish between a depressive episode and a depression with oscillating symptoms.

  14. Is perfect good? - Dimensions of perfectionism in newly admitted medical students.

    PubMed

    Seeliger, Helen; Harendza, Sigrid

    2017-11-13

    Society expects physicians to perform perfectly, but high levels of perfectionism are associated with symptoms of distress in medical students. This study investigated whether medical students admitted to medical school by different selection criteria differ in the occurrence of perfectionism. Newly enrolled undergraduate medical students (n = 358) filled out the following instruments: Multidimensional Perfectionism Scale (MPS-H), Multidimensional Perfectionism Scale (MPS-F), Big Five Inventory (BFI-10), General Self-Efficacy Scale (GSE), Patient Health Questionnaire 9 (PHQ-9), and Generalized Anxiety Disorder 7 (GAD-7). Sociodemographic data such as age, gender, high school degree, and the route of admission to medical school were also included in the questionnaire. The 298 participating students had significantly lower scores in Socially-Prescribed Perfectionism than the general population, independently of their route of admission to medical school. Students who were selected for medical school by their high school degree showed the highest score for Adaptive Perfectionism. Maladaptive Perfectionism was the strongest predictor of the occurrence of symptoms of depression and anxiety, regardless of the route of admission. Students from all admission groups should be observed longitudinally to track performance and to assess whether perfectionism questionnaires might be an additional useful instrument for medical school admission processes.

  15. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach for achieving good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed by comparing three network models which operate at different levels of accuracy. The comparison and model validation are performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  16. Co-variation of tests commonly used in stroke rehabilitation.

    PubMed

    Langhammer, Birgitta; Stanghelle, Johan Kvalvik

    2006-12-01

    The aim of the present study was to analyse the co-variation of different tests commonly used in stroke rehabilitation, and specifically those used in a recent randomized, controlled study of two different physiotherapy models in stroke rehabilitation. Correlations of the performed tests and recordings from previous work were studied. The test results from the three-month, one-year and four-year follow-ups were analysed in the SPSS Version 11 statistical package with Pearson and Spearman correlations. There was an expected high correlation between the motor function tests, based on both partial and total scores. The correlations between Nottingham Health Profile Part 1 and the Motor Assessment Scale (MAS), Sødring Motor Evaluation Scale (SMES), Berg Balance Scale (BBS) and Barthel Activities of Daily Living (ADL) index were low for all items except physical condition. The correlations between registered living conditions, assistive devices, recurrent stroke, motor function (MAS, SMES), ADL (Barthel ADL index) and balance (BBS) were high. The same variables showed weak or poor correlation with the Nottingham Health Profile (NHP). The co-variation of motor function tests and functional tests was high, but the co-variation of motor, functional and self-reported life-quality tests was poor. The patients rated themselves on a higher functional level in the self-reported tests than was observed objectively in the performance-based tests. A possible reason is that the patients may have unknowingly modified their performance to adjust for physical decline and consequently overestimated their physical condition. This result underlines the importance of both performance-based and self-reported tests as complementary tools in a rehabilitation process.

  17. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate change and its effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested for two different observation densities and for different rainfall amounts. We used rainfall data recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression (GWR) methods were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D thin plate splines (TPS). The relative rank of the different techniques changes with station density and rainfall amount. Our results indicate that TPS performs well for low station density and large-scale events, and also when coupled with regression models; it performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
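
    Of the methods compared, inverse distance weighting is the simplest to state: each station's value is weighted by an inverse power of its distance to the target point. A minimal sketch of basic IDW (not the study's implementation; the power parameter and data are illustrative):

    ```python
    # Basic inverse distance weighting (IDW) interpolation sketch.
    import numpy as np

    def idw(xy_obs, values, xy_target, power=2.0, eps=1e-12):
        """Interpolate a value at xy_target from scattered observations."""
        d = np.linalg.norm(xy_obs - xy_target, axis=1)   # distances to stations
        if np.any(d < eps):                              # target is at a station
            return float(values[np.argmin(d)])
        w = 1.0 / d**power                               # inverse-distance weights
        return float(np.sum(w * values) / np.sum(w))

    # Illustrative daily rainfall (mm) at three stations, one grid node
    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    rain_mm = np.array([12.0, 8.0, 20.0])
    print(idw(stations, rain_mm, np.array([3.0, 4.0])))
    ```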

  18. Turbulence sources, character, and effects in the stable boundary layer: Insights from multi-scale direct numerical simulations and new, high-resolution measurements

    NASA Astrophysics Data System (ADS)

    Fritts, Dave; Wang, Ling; Balsley, Ben; Lawrence, Dale

    2013-04-01

    A number of sources contribute to intermittent small-scale turbulence in the stable boundary layer (SBL). These include Kelvin-Helmholtz instability (KHI), gravity wave (GW) breaking, and fluid intrusions, among others. Indeed, such sources arise naturally in response to even very simple "multi-scale" superpositions of larger-scale GWs and smaller-scale GWs, mean flows, or fine structure (FS) throughout the atmosphere and the oceans. We describe here results of two direct numerical simulations (DNS) of these GW-FS interactions performed at high resolution and high Reynolds number that allow exploration of these turbulence sources and the character and effects of the turbulence that arises in these flows. Results include episodic turbulence generation, a broad range of turbulence scales and intensities, PDFs of dissipation fields exhibiting quasi-log-normal and more complex behavior, local turbulent mixing, and "sheet and layer" structures in potential temperature that closely resemble high-resolution measurements. Importantly, such multi-scale dynamics differ from their larger-scale, quasi-monochromatic gravity wave or quasi-horizontally homogeneous shear flow instabilities in significant ways. The ability to quantify such multi-scale dynamics with new, very high-resolution measurements is also advancing rapidly. New in-situ sensors on small, unmanned aerial vehicles (UAVs), balloons, or tethered systems are enabling definition of SBL (and deeper) environments and turbulence structure and dissipation fields with high spatial and temporal resolution and precision. These new measurement and modeling capabilities promise significant advances in understanding small-scale instability and turbulence dynamics, in quantifying their roles in mixing, transport, and evolution of the SBL environment, and in contributing to improved parameterizations of these dynamics in mesoscale, numerical weather prediction, climate, and general circulation models. We expect such measurement and modeling capabilities to also aid in the design of new and more comprehensive future SBL measurement programs.

  19. Evaluating scale-up rules of a high-shear wet granulation process.

    PubMed

    Tao, Jing; Pandey, Preetanshu; Bindra, Dilbir S; Gao, Julia Z; Narang, Ajit S

    2015-07-01

    This work aimed to evaluate the commonly used scale-up rules for a high-shear wet granulation process using a microcrystalline cellulose-lactose-based, low drug loading formulation. Granule properties such as particle size, porosity, flow, and tabletability, as well as tablet dissolution, were compared across scales using scale-up rules based on different impeller speed calculations or extended wet massing time. The constant tip speed rule was observed to produce slightly less granulated material at the larger scales. Longer wet massing time can be used to compensate for the lower shear experienced by the granules at the larger scales. The constant Froude number and constant empirical stress rules yielded granules that were more comparable across scales in terms of compaction performance and tablet dissolution. Granule porosity was shown to correlate well with blend tabletability and tablet dissolution, indicating the importance of monitoring granule densification (porosity) during scale-up. It was shown that different routes can be chosen during scale-up to achieve comparable granule growth and densification by altering one of three parameters: water amount, impeller speed, and wet massing time. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
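
    The impeller speed rules compared above differ only in the exponent applied to the diameter ratio: constant tip speed holds πND fixed, while constant Froude number holds N²D/g fixed. A minimal sketch (these are the standard forms of the rules; the example diameters and speed are illustrative):

    ```python
    # Impeller speed scale-up rules for high-shear granulation (standard forms).
    def scaled_impeller_speed(n1_rpm, d1_m, d2_m, rule="tip_speed"):
        """Return the impeller speed at the second scale.
        tip_speed: N2 = N1 * (D1/D2)       (constant pi*N*D)
        froude:    N2 = N1 * (D1/D2)**0.5  (constant N**2 * D / g)
        """
        exponent = {"tip_speed": 1.0, "froude": 0.5}[rule]
        return n1_rpm * (d1_m / d2_m)**exponent

    # Example: 0.25 m impeller at 500 rpm scaled to a 0.50 m impeller
    print(scaled_impeller_speed(500, 0.25, 0.50, "tip_speed"))  # 250 rpm
    print(scaled_impeller_speed(500, 0.25, 0.50, "froude"))     # ~354 rpm
    ```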

  20. Academic performance among adolescents with behaviorally induced insufficient sleep syndrome.

    PubMed

    Lee, Yu Jin; Park, Juhyun; Kim, Soohyun; Cho, Seong-Jin; Kim, Seog Ju

    2015-01-15

    The present study investigated academic performance among adolescents with behaviorally induced insufficient sleep syndrome (BISS) and attempted to identify independent predictors of academic performance among BISS-related factors. A total of 51 students with BISS and 50 without BISS were recruited from high schools in South Korea based on self-reported weekday sleep durations, weekend oversleep, and the Epworth Sleepiness Scale (ESS). Participants reported their academic performance in the form of class quartile ranking. The Korean version of the Composite Scale (KtCS) for morningness/eveningness, the Beck Depression Inventory (BDI) for depression, and the Barratt Impulsiveness Scale-II (BIS-II) for impulsivity were administered. Adolescents with BISS reported poorer academic performance than adolescents without BISS (p = 0.02). Adolescents with BISS also exhibited greater levels of eveningness (p < 0.001), depressive symptoms (p < 0.001), and impulsiveness (p < 0.01). Longer weekend oversleep predicted poorer academic performance among adolescents with BISS even after controlling for ESS, KtCS, BDI, and BIS-II (β = 0.42, p < 0.01). These findings indicate that BISS among adolescents is associated with poor academic performance and that sleep debt, as represented by weekend oversleep, predicts poorer academic performance independently of depression, impulsiveness, weekday sleep duration, daytime sleepiness, and morningness/eveningness among adolescents with BISS. © 2015 American Academy of Sleep Medicine.

  1. Development of circulation control technology for powered-lift STOL aircraft

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.

    1987-01-01

    The flow entraining capabilities of the Circulation Control Wing high lift system were employed to provide an even stronger STOL potential when synergistically combined with upper-surface-mounted engines. The resulting configurations generate very high supercirculation lift in addition to a vertical component of the pneumatically deflected engine thrust. A series of small-scale wind tunnel tests and full-scale static thrust deflection tests are discussed which provide a sufficient performance data base. These test results show thrust deflections of greater than 90 deg produced pneumatically by nonmoving aerodynamic surfaces, and the ability to maintain constant high lift while varying the propulsive force from the high thrust recovery required for short takeoff to the high drag generation required for short, low-speed landings.

  2. Detection of microstructural defects in chalcopyrite Cu(In,Ga)Se2 solar cells by spectrally-filtered electroluminescence

    NASA Astrophysics Data System (ADS)

    Skvarenina, L.; Gajdos, A.; Macku, R.; Skarvada, P.

    2017-12-01

    The aim of this research is to detect and localize microstructural defects using electrically excited light emission from a forward/reverse-bias stressed pn-junction in thin-film Cu(In,Ga)Se2 solar cells with metal wrap through architecture. The different origins of local light emission from intrinsic/extrinsic imperfections in these chalcopyrite-based solar cells can be distinguished by spectrally-filtered electroluminescence mapping. After light emission mapping and localization of the defects at the macro scale, a micro-scale examination of the solar cell surface is performed with a scanning electron microscope, following up on the particular defects identified by electroluminescence. When these macroscopic/microscopic examinations are performed independently, finding the corresponding defects at the micro scale is rather difficult because the macro-scale localization yields only a diffuse light emission. Some of the defects accompanied by highly intense light emission very often lead to strong local overheating. Therefore, lock-in infrared thermography is also performed along with the electroluminescence mapping.

  3. A general theory of DC electromagnetic launchers

    NASA Astrophysics Data System (ADS)

    Engel, Thomas G.; Timpson, Erik J.

    2015-08-01

    The non-linear, transient operation of DC electromagnetic launchers (EMLs) complicates their theoretical understanding and prevents scaling studies and performance comparisons without the aid of detailed numerical models. This paper presents a general theory for DC electromagnetic launchers that has simplified these tasks by identifying critical EML parameters and relationships affecting the EML's voltage, current, and power scaling, as well as its performance and energy conversion efficiency. EML parameters and relationships discussed in this paper include the specific force, the operating mode, the launcher constant, the launcher characteristic velocity, the contact characteristic velocity, the energy conversion efficiency, and the kinetic power and voltage-current scaling relationship. The concepts of the ideal EML, same-scale comparisons, and EML impedance are discussed. This paper defines conditions needed for the EML to operate in the steady-state. A comparison of the general theory with experimental results of several different types of DC (i.e., non-induction) electromagnetic launchers ranging from medium velocity (100's m/s) to high velocity (1000's m/s) is performed. There is good agreement between the general theory and the experimental results.
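
    The paper's launcher constant and characteristic velocities are defined in the full text and are not reproduced here; for orientation only, the standard force and kinetic power relations for a DC rail-type launcher are:

    ```latex
    % Standard DC electromagnetic launcher relations (orientation only; the
    % paper-specific launcher constant and characteristic velocities differ).
    F = \tfrac{1}{2} L' I^2,
    \qquad
    P_{\mathrm{kin}} = F v = \tfrac{1}{2} L' I^2 v,
    ```
    where L' is the inductance gradient, I the drive current, and v the armature velocity.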

  4. Assessing the relationship between perceived emotional intelligence and academic performance of medical students

    NASA Astrophysics Data System (ADS)

    Rajasingam, Uma; Suat-Cheng, Peh; Aung, Thidar; Dipolog-Ubanan, Genevieve; Wei, Wee Kok

    2014-12-01

    This study examines the influence of emotional intelligence on the academic performance of medical students, to see whether emotional intelligence emerges as a significant influencer of academic achievement. The instrument used is the Trait Meta-Mood Scale (TMMS), a 30-item self-report questionnaire designed to measure an individual's perceived emotional intelligence (PEI). Participants are required to rate the extent to which they agree with each item on a 5-point Likert scale. The TMMS consists of three subscales: Attention to Feelings (which measures the extent to which individuals notice and think about their feelings), Clarity (which measures the extent to which an individual is able to discriminate among different moods) and Mood Repair (related to an individual's ability to repair/terminate negative moods or maintain pleasant ones). Of special interest is whether high scores in the Clarity and Repair subscales correlate positively with academic performance, and whether high scores on the Attention subscale, without correspondingly high scores in the Clarity and Mood Repair subscales, correlate negatively with academic performance. The sample population includes all medical students (Years 1-5) of the MD program at UCSI University, Malaysia. Preliminary analysis indicates no significant relationship between overall TMMS scores and academic performance; however, the Attention subscale is significantly correlated with academic performance. Therefore, even though PEI has to be ruled out as an influencer of academic performance for this particular sample, the fact that Attention has a significant relationship with academic performance may give some insight into the factors that possibly influence medical students' academic performance.

  5. Rasch analysis of the carers quality of life questionnaire for parkinsonism.

    PubMed

    Pillas, Marios; Selai, Caroline; Schrag, Anette

    2017-03-01

    To assess the psychometric properties of the Carers Quality of Life Questionnaire for Parkinsonism using a Rasch modeling approach and determine the optimal cut-off score. We performed a Rasch analysis of the survey answers of 430 carers of patients with atypical parkinsonism. All of the scale items demonstrated acceptable goodness of fit to the Rasch model. The scale was unidimensional and no notable differential item functioning was detected in the items regarding age and disease type. Rating categories were functioning adequately in all scale items. The scale had high reliability (.95) and construct validity and a high degree of precision, distinguishing between 5 distinct groups of carers with different levels of quality of life. A cut-off score of 62 was found to have the optimal screening accuracy based on Hospital Anxiety and Depression Scale subscores. The results suggest that the Carers Quality of Life Questionnaire for Parkinsonism is a useful scale to assess carers' quality of life and allows analyses requiring interval scaling of variables. © 2016 International Parkinson and Movement Disorder Society.
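
    For reference, the dichotomous Rasch model underlying this kind of analysis relates the probability of endorsing item i to the respondent's latent location θ and the item difficulty b (standard form; a polytomous variant applies to rating categories such as those used here):

    ```latex
    % Dichotomous Rasch model (standard form)
    P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}} .
    ```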

  6. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-08-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS - a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive.

  7. Hadoop-GIS: A High Performance Spatial Data Warehousing System over MapReduce

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Vo, Hoang; Lee, Rubao; Liu, Qiaoling; Zhang, Xiaodong; Saltz, Joel

    2013-01-01

    Support of high performance queries on large volumes of spatial data becomes increasingly important in many application domains, including geospatial problems in numerous fields, location based services, and emerging scientific applications that are increasingly data- and compute-intensive. The emergence of massive scale spatial data is due to the proliferation of cost effective and ubiquitous positioning technologies, development of high resolution imaging technologies, and contribution from a large number of community users. There are two major challenges for managing and querying massive spatial data to support spatial queries: the explosion of spatial data, and the high computational complexity of spatial queries. In this paper, we present Hadoop-GIS – a scalable and high performance spatial data warehousing system for running large scale spatial queries on Hadoop. Hadoop-GIS supports multiple types of spatial queries on MapReduce through spatial partitioning, the customizable spatial query engine RESQUE, implicit parallel spatial query execution on MapReduce, and effective methods for amending query results through handling boundary objects. Hadoop-GIS utilizes global partition indexing and customizable on-demand local spatial indexing to achieve efficient query processing. Hadoop-GIS is integrated into Hive to support declarative spatial queries with an integrated architecture. Our experiments have demonstrated the high efficiency of Hadoop-GIS on query response and its high scalability on commodity clusters. Our comparative experiments have shown that the performance of Hadoop-GIS is on par with parallel SDBMS and outperforms SDBMS for compute-intensive queries. Hadoop-GIS is available as a set of libraries for processing spatial queries, and as an integrated software package in Hive. PMID:24187650
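
    The spatial partitioning step that Hadoop-GIS relies on amounts to assigning each object's bounding box to the fixed grid tiles it overlaps, so tiles can be processed in parallel; objects crossing tile borders are the "boundary objects" whose query results must later be amended. A minimal, system-agnostic Python sketch (not the RESQUE engine itself; tile size and objects are illustrative):

    ```python
    # Grid-based spatial partitioning sketch: assign bounding boxes to tiles.
    # Boxes spanning several tiles are duplicated (the "boundary objects"
    # whose query results must later be deduplicated/amended).
    def tiles_for_bbox(xmin, ymin, xmax, ymax, tile_size=100.0):
        """Return the (col, row) ids of every tile the bounding box overlaps."""
        c0, c1 = int(xmin // tile_size), int(xmax // tile_size)
        r0, r1 = int(ymin // tile_size), int(ymax // tile_size)
        return [(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]

    def partition(objects, tile_size=100.0):
        """Map tiles to object ids; input is {obj_id: (xmin, ymin, xmax, ymax)}."""
        tiles = {}
        for obj_id, bbox in objects.items():
            for tile in tiles_for_bbox(*bbox, tile_size=tile_size):
                tiles.setdefault(tile, []).append(obj_id)
        return tiles

    print(partition({"a": (10, 10, 90, 90), "b": (95, 10, 140, 60)}))
    # "b" lands in two tiles, so it is a boundary object
    ```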

  8. High dimensional biological data retrieval optimization with NoSQL technology.

    PubMed

    Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike

    2014-01-01

    High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise as more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase compared to the model implemented on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data.

  9. High dimensional biological data retrieval optimization with NoSQL technology

    PubMed Central

    2014-01-01

    Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise as more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase compared to the model implemented on MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating tranSMART's implementation to a more scalable solution for Big Data. PMID:25435347
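
    A key-value layout of the kind described stores each expression value under a composite row key so that per-patient retrieval becomes a contiguous scan. A minimal sketch using the happybase HBase client (the table name, column family, and row-key scheme below are hypothetical, not tranSMART's actual schema):

    ```python
    # Sketch of a key-value layout for expression data on HBase via happybase.
    # Table/column-family names and the row-key scheme are hypothetical.
    import happybase

    connection = happybase.Connection("localhost")  # assumes an HBase Thrift server
    table = connection.table("expression")

    # Composite row key "patient|probe" keeps one patient's records contiguous
    table.put(b"patient42|probe_0007", {b"e:value": b"8.31"})
    table.put(b"patient42|probe_0008", {b"e:value": b"5.02"})

    # Retrieving all expression records for one patient is a single prefix scan
    for key, data in table.scan(row_prefix=b"patient42|"):
        print(key, data[b"e:value"])
    ```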

  10. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  11. Early-state damage detection, characterization, and evolution using high-resolution computed tomography

    NASA Astrophysics Data System (ADS)

    Grandin, Robert John

    Safely using materials in high performance applications requires adequately understanding the mechanisms which control the nucleation and evolution of damage. Most of a material's operational life is spent in a state of noncritical damage; for example, in metals only a small portion of the life falls within the classical Paris Law regime of crack growth. Developing proper structural health and prognosis models requires understanding the behavior of damage in these early stages of the material's life, and this early-stage damage occurs on length scales at which the material may be considered "granular" in the sense that the discrete regions which comprise the whole are large enough to require special consideration. Material performance depends upon the characteristics of the granules themselves as well as the interfaces between granules. As a result, properly studying early-stage damage in complex, granular materials requires a means to characterize changes in the granules and interfaces. The granular scale can range from tenths of microns in ceramics, to single microns in fiber-reinforced composites, to tens of millimeters in concrete. The difficulty of direct study is often avoided by exhaustive testing of macro-scale damage caused by gross material loads and abuse. Such testing, for example optical or electron microscopy, is destructive and, moreover, costly when used to study the evolution of damage within a material, and it often limits the study to a few snapshots. New developments in high-resolution computed tomography (HRCT) provide the necessary spatial resolution to directly image the granule length scale of many materials. Successful application of HRCT to fiber-reinforced composites, however, requires extending HRCT performance beyond current limits. This dissertation discusses improvements made in the field of CT reconstruction which enable resolutions to be pushed to the point of imaging fiber-scale damage structures, and the application of this new capability to the study of early-stage damage.

  12. Scalable 96-well Plate Based iPSC Culture and Production Using a Robotic Liquid Handling System.

    PubMed

    Conway, Michael K; Gerger, Michael J; Balay, Erin E; O'Connell, Rachel; Hanson, Seth; Daily, Neil J; Wakatsuki, Tetsuro

    2015-05-14

    Continued advancement in pluripotent stem cell culture is closing the gap between bench and bedside for using these cells in regenerative medicine, drug discovery and safety testing. In order to produce stem cell derived biopharmaceutics and cells for tissue engineering and transplantation, a cost-effective cell-manufacturing technology is essential. Maintenance of pluripotency and stable performance of cells in downstream applications (e.g., cell differentiation) over time is paramount to large scale cell production, yet can be difficult to achieve, especially if cells are cultured manually, where the operator can introduce significant variability and scale-up can be prohibitively expensive. To enable high-throughput, large-scale stem cell production and remove operator influence, novel stem cell culture protocols were developed using a bench-top multi-channel liquid handling robot that require minimal technician involvement or experience. With these protocols, human induced pluripotent stem cells (iPSCs) were cultured in feeder-free conditions directly from a frozen stock and maintained in 96-well plates. Depending on cell line and desired scale-up rate, the operator can easily determine when to passage based on a series of images showing the optimal colony densities for splitting. The necessary reagents are then prepared to perform a colony split to new plates without a centrifugation step. After 20 passages (~3 months), two iPSC lines maintained stable karyotypes, expressed stem cell markers, and differentiated into cardiomyocytes with high efficiency. The system can perform subsequent high-throughput screening of new differentiation protocols or genetic manipulations designed for 96-well plates. This technology will reduce the labor and technical burden of producing large numbers of identical stem cells for a myriad of applications.

  13. A low cost, high energy density and long cycle life potassium-sulfur battery for grid-scale energy storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Xiaochuan; Bowden, Mark E.; Sprenkle, Vincent L.

    2015-08-15

    Alkali metal-sulfur batteries are attractive for energy storage applications because of their high energy density. Among these batteries, lithium-sulfur batteries typically use a liquid electrolyte, which causes problems in both performance and safety. Sodium-sulfur batteries can use a solid electrolyte such as beta alumina, but this requires a high operating temperature. Here we report a novel potassium-sulfur battery with K+-conducting beta-alumina as the electrolyte. Our studies indicate that liquid potassium exhibits much better wettability on the surface of beta-alumina than liquid sodium at lower temperatures. Based on this observation, we develop a potassium-sulfur battery that can operate at as low as 150°C with excellent performance. In particular, the battery shows excellent cycle life with negligible capacity fade over 1000 cycles because of the dense ceramic membrane. This study demonstrates a new battery with high energy density, long cycle life, low cost and high safety, which is ideal for grid-scale energy storage.

  14. Wafer-size free-standing single-crystalline graphene device arrays

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jing, Gaoshan; Zhang, Bo; Sando, Shota; Cui, Tianhong

    2014-08-01

    We report an approach for the growth of wafer-scale, addressable single-crystalline graphene (SCG) arrays that uses pre-patterned seeds to control the nucleation. The growth mechanism and superb properties of SCG were studied. Large arrays of free-standing SCG devices were realized. Characterization of SCG nano switches shows excellent performance, with a lifetime (>22 000 cycles) two orders of magnitude longer than that of other graphene nano switches reported so far. This work not only shows the possibility of producing wafer-scale, high quality SCG device arrays but also explores the superb performance of SCG in nano devices.

  15. Scale model performance test investigation of exhaust system mixers for an Energy Efficient Engine /E3/ propulsion system

    NASA Technical Reports Server (NTRS)

    Kuchar, A. P.; Chamberlin, R.

    1980-01-01

    A scale model performance test was conducted as part of the NASA Energy Efficient Engine (E3) Program, to investigate the geometric variables that influence the aerodynamic design of exhaust system mixers for high-bypass, mixed-flow engines. Mixer configuration variables included lobe number, penetration and perimeter, as well as several cutback mixer geometries. Mixing effectiveness and mixer pressure loss were determined using measured thrust and nozzle exit total pressure and temperature surveys. Results provide a data base to aid the analysis and design development of the E3 mixed-flow exhaust system.

  16. Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models

    PubMed Central

    Prinold, Joe A.I.; Bull, Anthony M.J.

    2014-01-01

    Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method, to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621

  17. Fabrication of electron beam deposited tip for atomic-scale atomic force microscopy in liquid.

    PubMed

    Miyazawa, K; Izumi, H; Watanabe-Nakayama, T; Asakawa, H; Fukuma, T

    2015-03-13

    Recently, possibilities of improving operation speed and force sensitivity in atomic-scale atomic force microscopy (AFM) in liquid using a small cantilever with an electron beam deposited (EBD) tip have been intensively explored. However, the structure and properties of an EBD tip suitable for such an application have not been well understood, and hence its fabrication process has not been established. In this study, we perform atomic-scale AFM measurements with a small cantilever and identify two major problems: contamination from the cantilever and tip surfaces, and insufficient mechanical strength of an EBD tip with a high aspect ratio. To solve these problems, we propose a fabrication process in which we attach a 2 μm silica bead to the cantilever end and fabricate a 500-700 nm EBD tip on the bead. The bead height ensures a sufficient cantilever-sample distance and makes it possible to suppress long-range interactions between them even with a short EBD tip of high mechanical strength. After the tip fabrication, we coat the whole cantilever and tip surface with Si (30 nm) to prevent contamination. We perform atomic-scale AFM imaging and hydration force measurements at a mica-water interface using the fabricated tip and demonstrate its applicability to such atomic-scale applications. By repeating the proposed process, a small cantilever can be reused for atomic-scale measurements several times. Therefore, the proposed method solves the two major problems and enables the practical use of small cantilevers in atomic-scale studies of various solid-liquid interfacial phenomena.

  18. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization, and dynamic visualization techniques are important for revealing the variation processes and trends of geographical phenomena vividly and comprehensively. Rendering both 2D and 3D dynamic targets across different spatial data types requires a high-performance GIS dynamic-objects rendering engine. The main approach to improving a rendering engine that handles vast numbers of dynamic targets relies on key technologies of high-performance GIS, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic-objects rendering engine is designed and implemented based on hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology and high-performance algorithms with the advantage of 64-bit in-memory computing. It processes 2D and 3D dynamic-target data efficiently and runs smoothly with vast numbers of dynamic targets. A prototype of the engine was developed based on SuperMap GIS iObjects. Experiments designed around large-scale spatial data visualization showed that the engine achieves high performance: rendering two-dimensional and three-dimensional dynamic objects is 20 times faster on GPU than on CPU.

  19. Female Athletes and Performance-Enhancer Usage

    ERIC Educational Resources Information Center

    Fralinger, Barbara K.; Pinto-Zipp, Genevieve; Olson, Valerie; Simpkins, Susan

    2007-01-01

    The purpose of this study was to develop a knowledge base on factors associated with performance-enhancer usage among female athletes at the high school level in order to identify markers for a future prevention-education program. The study used a pretest-only, between-subjects Likert Scale survey to rank the importance of internal and external…

  20. Synergistic effects from graphene and carbon nanotubes endow ordered hierarchical structure foams with a combination of compressibility, super-elasticity and stability and potential application as pressure sensors

    NASA Astrophysics Data System (ADS)

    Kuang, Jun; Dai, Zhaohe; Liu, Luqi; Yang, Zhou; Jin, Ming; Zhang, Zhong

    2015-05-01

    Nanostructured carbon material based three-dimensional porous architectures have been increasingly developed for various applications, e.g. sensors, elastomer conductors, and energy storage devices. Maintaining architectures with good mechanical performance, including elasticity, load-bearing capacity, fatigue resistance and mechanical stability, is a prerequisite for realizing these functions. Though graphene and CNTs offer opportunities as nanoscale building blocks, it still remains a great challenge to achieve good mechanical performance in their microarchitectures because of the need to precisely control the structure at different scales. Herein, we fabricate a hierarchical honeycomb-like structured hybrid foam based on both graphene and CNTs. The resulting materials possess an excellent combination of high specific strength, elasticity and mechanical stability, which cannot be achieved in neat CNT and graphene foams. The improved mechanical properties are attributed to the synergistically induced, highly organized, multi-scale hierarchical architectures. Moreover, given their excellent electrical conductivity, we demonstrate that the hybrid foams could be used as pressure sensors in fields related to artificial skin. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr00841g

  1. 2013 R&D 100 Award: ‘Miniapps’ Bolster High Performance Computing

    ScienceCinema

    Belak, Jim; Richards, David

    2018-06-12

    Two Livermore computer scientists served on a Sandia National Laboratories-led team that developed Mantevo Suite 1.0, the first integrated suite of small software programs, also called "miniapps," to be made available to the high performance computing (HPC) community. These miniapps facilitate the development of new HPC systems and the applications that run on them. Miniapps (miniature applications) serve as stripped-down surrogates for complex, full-scale applications, which can require a great deal of time and effort to port to a new HPC system because they often consist of hundreds of thousands of lines of code. Each miniapp is a prototype that contains some or all of the essentials of the real application but with far fewer lines of code, making it more versatile for experimentation. This allows researchers to explore options and optimize system design more rapidly, greatly improving the chances that the full-scale application will perform successfully. These miniapps have become essential tools for exploring complex design spaces because they can reliably predict the performance of full applications.

  2. Towards Cloud-Resolving European-Scale Climate Simulations using a fully GPU-enabled Prototype of the COSMO Regional Model

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph

    2014-05-01

    The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km and therefore cannot fully resolve the relevant upscale and downscale energy cascades, so parametrization of the relevant sub-grid-scale processes is required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about the models' ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Refining the model resolution to the kilometer scale allows these processes to be represented much closer to first principles and thus should yield an improved representation of the water cycle, including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulations and numerical weather prediction, their high horizontal resolution, and consequently the small time steps needed, challenge current supercomputers when modeling large domains and long time scales. Recent innovations in hybrid supercomputers have led to mixed node designs with a conventional CPU and an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies but offer higher memory bandwidth, which is highly beneficial for low-compute-intensity codes such as stencil-based atmospheric models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C), a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with improved performance on traditional x86-based clusters as well as on hybrid architectures with GPUs. We present our redesign and porting approach as well as our experience and lessons learned. Furthermore, we discuss relevant performance benchmarks obtained on the new hybrid Cray XC30 system "Piz Daint" installed at the Swiss National Supercomputing Centre (CSCS), both in terms of time-to-solution and energy consumption. We will demonstrate a first set of short cloud-resolving climate simulations at the European scale using the GPU-enabled COSMO prototype and elaborate our future plans on how to exploit this new model capability.
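
    Why stencil-based models benefit from GPU memory bandwidth can be seen from a toy kernel. The sketch below (illustrative Python/NumPy, not COSMO code) implements one explicit diffusion step with a 5-point stencil; each grid point costs only a handful of floating-point operations against several memory accesses, which is the low compute intensity referred to above.

        import numpy as np

        def stencil_step(u, alpha=0.1):
            # One explicit diffusion step with a 5-point stencil. Roughly
            # 6 flops per point against 5 reads and 1 write, so throughput
            # is limited by memory bandwidth rather than arithmetic, the
            # regime in which GPUs outperform conventional CPUs.
            out = u.copy()
            out[1:-1, 1:-1] += alpha * (
                u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                - 4.0 * u[1:-1, 1:-1]
            )
            return out

        field = np.random.rand(512, 512)
        field = stencil_step(field)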

  3. Surface knowledge and risks to landing and roving - The scale problem

    NASA Technical Reports Server (NTRS)

    Bourke, Roger D.

    1991-01-01

    The role of surface information in the performance of surface exploration missions is discussed. Accurate surface models based on direct measurements or inference are considered to be an important component in mission risk management. These models can be obtained using high resolution orbital photography or a combination of laser profiling, thermal inertia measurements, and/or radar. It is concluded that strategies for Martian exploration should use high confidence models to achieve maximum performance and low risk.

  4. Accuracy of risk scales for predicting repeat self-harm and suicide: a multicentre, population-level cohort study using routine clinical data.

    PubMed

    Steeg, Sarah; Quinlivan, Leah; Nowland, Rebecca; Carroll, Robert; Casey, Deborah; Clements, Caroline; Cooper, Jayne; Davies, Linda; Knipe, Duleeka; Ness, Jennifer; O'Connor, Rory C; Hawton, Keith; Gunnell, David; Kapur, Nav

    2018-04-25

    Risk scales are used widely in the management of patients presenting to hospital following self-harm. However, there is evidence that their diagnostic accuracy in predicting repeat self-harm is limited. Their predictive accuracy in population settings, and in identifying those at highest risk of suicide, is not known. We compared the predictive accuracy of the Manchester Self-Harm Rule (MSHR), ReACT Self-Harm Rule (ReACT), SAD PERSONS Scale (SPS) and Modified SAD PERSONS Scale (MSPS) in an unselected sample of patients attending hospital following self-harm. Data on 4000 episodes of self-harm presenting to Emergency Departments (EDs) between 2010 and 2012 were obtained from four established monitoring systems in England. Episodes were assigned a risk category for each scale and followed up for 6 months. The episode-based repeat rate was 28% (1133/4000) and the incidence of suicide was 0.5% (18/3962). The MSHR and ReACT performed with high sensitivity (98% and 94%, respectively) and low specificity (15% and 23%). The SPS and the MSPS performed with relatively low sensitivity (24-29% and 9-12%, respectively) and high specificity (76-77% and 90%). The area under the curve was 71% for both MSHR and ReACT, 51% for SPS and 49% for MSPS. Differences in predictive accuracy by subgroup were small. The scales were less accurate at predicting suicide than repeat self-harm. Overall, the scales failed to accurately predict repeat self-harm and suicide. The findings support existing clinical guidance not to use risk classification scales alone to determine treatment or predict future risk.
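
    The sensitivity and specificity figures above are standard 2x2 confusion-matrix quantities. The sketch below (illustrative Python; the counts are hypothetical, chosen only to be consistent with the reported 28% repeat rate and the MSHR's roughly 98%/15% profile, and are not the study data) shows the arithmetic.

        def screening_accuracy(tp, fn, fp, tn):
            # tp/fn: repeat episodes flagged high/low risk by the scale;
            # fp/tn: non-repeat episodes flagged high/low risk.
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            ppv = tp / (tp + fp)  # chance that a high-risk flag is correct
            return sensitivity, specificity, ppv

        # Hypothetical counts: 1133 repeats, 2867 non-repeats, and a scale
        # that flags most attendances as high risk.
        sens, spec, ppv = screening_accuracy(tp=1110, fn=23, fp=2437, tn=430)
        print(f"sensitivity={sens:.0%} specificity={spec:.0%} ppv={ppv:.0%}")

    With a profile like this, nearly every attendance is flagged, so most high-risk flags are false positives; that trade-off is the arithmetic behind the guidance not to rely on such scales alone.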

  5. Multi-Scale Multi-Domain Model | Transportation Research | NREL

    Science.gov Websites

    Macroscopic design factors and highly dynamic environmental conditions significantly influence the design of affordable, long-lasting, high-performing, and safe large battery systems. NREL's Multi-Scale Multi-Domain (MSMD) model framework quantifies the impacts along the electrical/thermal pathway of such systems.

  6. SYNTHESIS OF HIGHLY FLUORINATED CHLOROFORMATES AND THEIR USE AS DERIVATIZING AGENTS FOR HYDROPHILIC COMPOUNDS AND DRINKING WATER DISINFECTION BY-PRODUCTS

    EPA Science Inventory

    A rapid, safe and efficient procedure was developed to synthesize perfluorinated chloroformates in the small scale generally required to perform analytical derivatizations. This new family of derivatizing agents allows straightforward derivatization of highly polar compounds, co...

  7. A transportable Paul-trap for levitation and accurate positioning of micron-scale particles in vacuum for laser-plasma experiments

    NASA Astrophysics Data System (ADS)

    Ostermayr, T. M.; Gebhard, J.; Haffa, D.; Kiefer, D.; Kreuzer, C.; Allinger, K.; Bömer, C.; Braenzel, J.; Schnürer, M.; Cermak, I.; Schreiber, J.; Hilz, P.

    2018-01-01

    We report on a Paul-trap system with large access angles that allows positioning of fully isolated micrometer-scale particles with micrometer precision as targets in high-intensity laser-plasma interactions. This paper summarizes theoretical and experimental concepts of the apparatus as well as supporting measurements that were performed for the trapping process of single particles.

  8. Seismic Source Scaling and Discrimination in Diverse Tectonic Environments

    DTIC Science & Technology

    2009-09-30

    3349-3352. Imanishi, K., W. L. Ellsworth, and S. G. Prejean (2004). Earthquake source parameters determined by the SAFOD Pilot Hole seismic array ... seismic discrimination by performing a thorough investigation of earthquake source scaling using diverse, high-quality datasets from varied tectonic...these corrections has a direct impact on our ability to identify clandestine explosions in the broad regional areas characterized by low seismicity

  9. Cognitive Model Exploration and Optimization: A New Challenge for Computational Science

    DTIC Science & Technology

    2010-03-01

    the generation and analysis of computational cognitive models to explain various aspects of cognition. Typically the behavior of these models...computational scale of a workstation, so we have turned to high performance computing (HPC) clusters and volunteer computing for large-scale...computational resources. The majority of applications on the Department of Defense HPC clusters focus on solving partial differential equations (Post

  10. Newly invented biobased materials from low-carbon, diverted waste fibers: research methods, testing, and full-scale application in a case study structure

    Treesearch

    Julee A Herdt; John Hunt; Kellen Schauermann

    2016-01-01

    This project demonstrates newly invented, biobased construction materials developed by applying low-carbon biomass waste sources through the authors' engineered fiber processes and technology. If manufactured and applied at large scale, the project inventions can divert large volumes of cellulose waste into high-performance, low-embodied-energy, environmental construction...

  11. A scale-invariant keypoint detector in log-polar space

    NASA Astrophysics Data System (ADS)

    Tao, Tao; Zhang, Yun

    2017-02-01

    The scale-invariant feature transform (SIFT) algorithm detects keypoints via difference-of-Gaussian (DoG) images. However, the DoG data lack high-frequency information, which can lead to a performance drop for the algorithm. To address this issue, this paper proposes a novel log-polar feature detector (LPFD) to detect scale-invariant blobs (keypoints) in log-polar space, which, in contrast, retains all the image information. The algorithm consists of three components, viz. keypoint detection, descriptor extraction and descriptor matching. The algorithm is evaluated on keypoint detection using the INRIA dataset, compared against the SIFT algorithm and one of its fast variants, the speeded-up robust features (SURF) algorithm, in terms of four performance measures, viz. correspondences, repeatability, correct matches and matching score.
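
    The band-pass character of the DoG, and hence the loss of high frequencies that motivates the log-polar detector, is easy to see in code. A minimal NumPy/SciPy sketch of a DoG scale stack (illustrative, not the paper's implementation):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_stack(image, sigma0=1.6, k=2 ** 0.5, levels=4):
            # Each DoG level subtracts two Gaussian blurs, leaving a
            # band-pass response: fine detail above the narrower blur's
            # cutoff is discarded, which is the information loss a
            # log-polar representation avoids.
            blurred = [gaussian_filter(image, sigma0 * k ** i)
                       for i in range(levels + 1)]
            return [blurred[i + 1] - blurred[i] for i in range(levels)]

        img = np.random.rand(128, 128)
        dogs = dog_stack(img)
        # Keypoint candidates are local extrema across space and scale.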

  12. Towards High-Performance Aqueous Sodium-Ion Batteries: Stabilizing the Solid/Liquid Interface for NASICON-Type Na2VTi(PO4)3 using Concentrated Electrolytes.

    PubMed

    Zhang, Huang; Jeong, Sangsik; Qin, Bingsheng; Vieira Carvalho, Diogo; Buchholz, Daniel; Passerini, Stefano

    2018-04-25

    Aqueous Na-ion batteries may offer a solution to the cost and safety issues of high-energy batteries. However, substantial challenges remain in the development of electrode materials and electrolytes enabling high performance and long cycle life. Herein, we report the characterization of a symmetric Na-ion battery with a NASICON-type Na2VTi(PO4)3 electrode material in conventional aqueous and "water-in-salt" electrolytes. Extremely stable cycling performance for 1000 cycles at a high rate (20 C) is found with the highly concentrated aqueous electrolytes owing to the formation of a resistive but protective interphase between the electrode and electrolyte. These results provide important insight for the development of aqueous Na-ion batteries with stable long-term cycling performance for large-scale energy storage.

  13. Fabrication of ordered NiO coated Si nanowire array films as electrodes for a high performance lithium ion battery.

    PubMed

    Qiu, M C; Yang, L W; Qi, X; Li, Jun; Zhong, J X

    2010-12-01

    Highly ordered NiO-coated Si nanowire array films are fabricated as electrodes for a high-performance lithium-ion battery by depositing Ni on electroless-etched Si nanowires and subsequently annealing. The structures and morphologies of the as-prepared films are characterized by X-ray diffraction, scanning electron microscopy, and transmission electron microscopy. When the potential window versus lithium is controlled, the NiO coating can be made electrochemically active to store and release Li+ ions, while the highly conductive crystalline Si cores function as nothing more than a stable mechanical support and an efficient electrical conduction pathway. The hybrid nanowire array films exhibit superior cyclic stability and reversible capacity compared to those of nanostructured NiO films. Owing to the ease of large-scale fabrication and superior electrochemical performance, these hybrid nanowire array films are promising anode materials for high-performance lithium-ion batteries.

  14. Nicholas Long | NREL

    Science.gov Websites

    Orcid ID: http://orcid.org/0000-0001-9244-6736. Nicholas joined NREL in 2003 and performs commercial-scale analyses using high-performance computing to evaluate commercial buildings.

  15. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    NASA Astrophysics Data System (ADS)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and by near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations to the fine scales at which cold-regions hydrological processes operate, in order to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high-resolution numerical weather prediction models, namely the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models, are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering the high mountains and foothills of the Canadian Rockies was selected to assess and compare high-resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR, with its very low computational cost, was run with different initial and boundary conditions and at finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well for precipitation and air temperature modelling in the Canadian Rockies, while all three models show fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high-resolution cold-regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  16. Examining unusual digit span performance in a population of postsecondary students assessed for academic difficulties.

    PubMed

    Harrison, Allyson G; Rosenblum, Yoni; Currie, Shannon

    2010-09-01

    Methods of identifying poor test-related motivation using the Wechsler Adult Intelligence Scale Digit Span subtest are based on identification of performance patterns that are implausible if the test taker is investing full effort. No studies to date, however, have examined the specificity of such measures, particularly when evaluating persons with either known or suspected learning or attention disorders. This study investigated the performance of academically challenged students on three measures embedded in the Wechsler Adult Intelligence Scale-III, namely, low Digit Span, high Vocabulary minus Digit Span (Voc-DS), and low Reliable Digit Span scores. Among subjects believed to be investing full effort in testing, both Digit Span and Reliable Digit Span showed high specificity, although both showed relatively lower sensitivity. In contrast, Voc-DS was especially weak in both sensitivity and specificity, with an apparent false-positive rate of 28%. Use of Voc-DS is therefore not appropriate for those with a history of learning or attention problems.

  17. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE PAGES

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian; ...

    2017-09-29

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  18. HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzman, Burt; Bauerdick, Lothar A. T.; Bockelman, Brian

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. Additionally, we will discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources.

  19. Sachem: a chemical cartridge for high-performance substructure search.

    PubMed

    Kratochvíl, Miroslav; Vondrášek, Jiří; Galgonek, Jakub

    2018-05-23

    Structure search is one of the valuable capabilities of small-molecule databases. Fingerprint-based screening methods are usually employed to enhance the search performance by reducing the number of calls to the verification procedure. In substructure search, fingerprints are designed to capture important structural aspects of the molecule to aid the decision about whether the molecule contains a given substructure. Currently available cartridges typically provide acceptable search performance for processing user queries, but do not scale satisfactorily with dataset size. We present Sachem, a new open-source chemical cartridge that implements two substructure search methods: The first is a performance-oriented reimplementation of substructure indexing based on the OrChem fingerprint, and the second is a novel method that employs newly designed fingerprints stored in inverted indices. We assessed the performance of both methods on small, medium, and large datasets containing 1, 10, and 94 million compounds, respectively. Comparison of Sachem with other freely available cartridges revealed improvements in overall performance, scaling potential and screen-out efficiency. The Sachem cartridge allows efficient substructure searches in databases of all sizes. The sublinear performance scaling of the second method and the ability to efficiently query large amounts of pre-extracted information may together open the door to new applications for substructure searches.
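
    The screening step rests on a simple bit-level invariant: a molecule can contain the query substructure only if every fingerprint bit set for the query is also set for the molecule. A minimal sketch (illustrative Python with toy fingerprints; this is neither Sachem's code nor the OrChem fingerprint):

        def may_contain(mol_fp: int, query_fp: int) -> bool:
            # Screening test: all query bits must be present in the molecule
            # fingerprint; passing the screen only qualifies the molecule for
            # the expensive graph-matching verification step.
            return mol_fp & query_fp == query_fp

        # Toy fingerprints, bit i meaning "structural feature i present".
        database = {"mol_a": 0b101110, "mol_b": 0b010011, "mol_c": 0b111111}
        query_fp = 0b001010

        candidates = [name for name, fp in database.items()
                      if may_contain(fp, query_fp)]
        print(candidates)  # ['mol_a', 'mol_c'] proceed to verification

    An inverted index turns the same test around: for each bit, store the list of molecules in which that bit is set, and intersect the lists for the query's bits, so only the surviving candidates reach verification.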

  20. Abnormal ranges of vital signs in children in Japanese prehospital settings.

    PubMed

    Nosaka, Nobuyuki; Muguruma, Takashi; Knaup, Emily; Tsukahara, Kohei; Enomoto, Yuki; Kaku, Noriyuki

    2015-10-01

    The revised Fire Service Law obliges each prefectural government in Japan to establish a prehospital acuity scale. The Foundation for Ambulance Service Development (FASD) created an acuity scale for use as a reference. Our preliminary survey revealed that 32 of 47 prefectures directly applied the FASD scale for children. This scale specifies abnormal ranges of heart rate and respiratory rate in young children. This study aimed to evaluate the validity of the abnormal ranges on the FASD scale to assess its overall performance for triage purposes in paediatric patients. We evaluated the validity of the ranges by comparing published centile charts for these vital signs with records of 1,296 ambulance patients. A large portion of the abnormal ranges on the scale substantially overlapped with the normal centile charts. Triage decisions using the FASD scale of vital signs properly classified 22% (n = 287) of children. Sensitivity for high urgency was high, at 91% (95% confidence interval, 82-96%), whereas specificity was low, at 18% (95% confidence interval, 16-20%). We found there is room for improvement in the abnormal ranges on the FASD scale.

  1. High-resolution Observations of Hα Spectra with a Subtractive Double Pass

    NASA Astrophysics Data System (ADS)

    Beck, C.; Rezaei, R.; Choudhary, D. P.; Gosain, S.; Tritschler, A.; Louis, R. E.

    2018-02-01

    High-resolution imaging spectroscopy in solar physics has relied on Fabry-Pérot interferometers (FPIs) in recent years. FPI systems, however, become technically challenging and expensive for telescopes larger than the 1 m class. A conventional slit spectrograph with a diffraction-limited performance over a large field of view (FOV) can be built at much lower cost and effort. It can be converted into an imaging spectro(polari)meter using the concept of a subtractive double pass (SDP). We demonstrate that an SDP system can reach a performance similar to that of FPI-based systems, with high spatial and moderate spectral resolution across a FOV of 100″ × 100″ and a spectral coverage of 1 nm. We use Hα spectra taken with an SDP system at the Dunn Solar Telescope and complementary full-disc data to infer the properties of small-scale superpenumbral filaments. We find that the majority of all filaments end in patches of opposite-polarity fields. The internal fine structure in the line-core intensity of Hα at spatial scales of about 0.5″ exceeds that in other parameters such as the line width, indicating small-scale opacity effects in a larger-scale structure with common properties. We conclude that SDP systems in combination with (multi-conjugate) adaptive optics are a valid alternative to FPI systems when high spatial resolution and a large FOV are required. They can also reach a cadence comparable to that of FPI systems, while providing a much larger spectral range and a simultaneous multi-line capability.

  2. Development of the color scale of perceived exertion: preliminary validation.

    PubMed

    Serafim, Thais H S; Tognato, Andrea C; Nakamura, Priscila M; Queiroga, Marcos R; Nakamura, Fábio Y; Pereira, Gleber; Kokubun, Eduardo

    2014-12-01

    This study developed a Color Scale of Perceived Exertion (RPE-color scale) and assessed its concurrent and construct validity in adult women. One hundred participants (18-77 years), who were habitual exercisers, associated colors with verbal anchors of the Borg RPE scale (RPE-Borg scale) for RPE-color scale development. For RPE-color scale validation, 12 Young (M = 21.7 yr., SD = 1.5) and 10 Older (M = 60.3 yr., SD = 3.5) adult women performed a maximal graded exercise test on a treadmill and reported perceived exertion in both RPE-color and RPE-Borg scales. In the Young group, the RPE-color scale was significantly associated with heart rate and oxygen consumption, having strong correlations with the RPE-Borg scale. In the Older group, the RPE-color scale was significantly associated with heart rate, having moderate to high correlations with the RPE-Borg scale. The RPE-color scale demonstrated concurrent and construct validity in the Young women, as well as construct validity in Older adults.

  3. Bridging the scales in atmospheric composition simulations using a nudging technique

    NASA Astrophysics Data System (ADS)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires describing processes across a very wide range of scales: from the local scale, where anthropogenic emissions are concentrated, to the global scale, where the impact of these sources is of interest. Describing all processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models covering different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large-scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse- and fine-scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse-resolution run (run C) is performed, in which the high-resolution fields, remapped onto the coarse grid, are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas are computed for O3 and PM. Although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general, mean values of run C lie between those of runs A and B. A propagation of the signal outside the nudging region is observed and is evaluated in terms of differences between the coarse-resolution runs (with and without nudging) and the fine-resolution simulation.
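
    The nudging in run C amounts to a Newtonian relaxation of the coarse field toward the remapped fine field inside a regional mask. A minimal sketch (illustrative Python; the grid size, relaxation time and mask are assumptions, not BOLCHEM's configuration):

        import numpy as np

        def nudge(coarse, fine_remapped, mask, tau, dt):
            # Newtonian relaxation: inside the masked region, pull the
            # coarse-grid concentration toward the remapped fine-grid field
            # with relaxation time tau; outside the mask nothing changes.
            tendency = (fine_remapped - coarse) / tau
            return coarse + dt * tendency * mask

        ny = nx = 60
        coarse = np.ones((ny, nx))         # coarse-run tracer field
        fine = 1.5 * np.ones((ny, nx))     # fine run, remapped to coarse grid
        mask = np.zeros((ny, nx))
        mask[20:35, 10:30] = 1.0           # nudging region (e.g. Po Valley)
        coarse = nudge(coarse, fine, mask, tau=3600.0, dt=600.0)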

  4. Si Nanocrystal-Embedded SiOx Nanofoils: Two-Dimensional Nanotechnology-Enabled High Performance Li Storage Materials.

    PubMed

    Yoo, Hyundong; Park, Eunjun; Bae, Juhye; Lee, Jaewoo; Chung, Dong Jae; Jo, Yong Nam; Park, Min-Sik; Kim, Jung Ho; Dou, Shi Xue; Kim, Young-Jun; Kim, Hansu

    2018-05-02

    Silicon (Si) based materials are highly desirable to replace the currently used graphite anode in lithium-ion batteries. Nevertheless, their use remains a big challenge due to poor battery performance and scale-up issues. In addition, two-dimensional (2D) architectures, which have so far remained unrealized for these materials, would give them more interesting and unexpected properties. Herein, we report a facile, cost-effective, and scalable approach to synthesize Si nanocrystal-embedded 2D SiOx nanofoils for next-generation lithium-ion batteries through a solution-evaporation-induced interfacial sol-gel reaction of hydrogen silsesquioxane (HSiO1.5, HSQ). The unique nature of the thus-prepared centimeter-scale 2D nanofoils with a large surface area enables ultrafast Li+ insertion and extraction, with a reversible capacity of more than 650 mAh g⁻¹, even at a high current density of 50 C (50 A g⁻¹). Moreover, the 2D nanostructured Si/SiOx nanofoils show excellent cycling performance up to 200 cycles and maintain their initial dimensional stability. This superior performance stems from the peculiar nanoarchitecture of the 2D Si/SiOx nanofoils, which provides short diffusion paths for lithium ions and abundant free space to effectively accommodate the huge volume changes of Si during cycling.

  5. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    PubMed Central

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll-compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry. PMID:24603964

  6. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

    Three-dimensional (3-D) nanostructures have demonstrated enticing potency to boost the performance of photovoltaic devices, primarily owing to improved photon-capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll-compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large-scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules for high-performance nanostructured solar cells but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin-film photovoltaic industry.

  7. Parallel Visualization of Large-Scale Aerodynamics Calculations: A Case Study on the Cray T3E

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Crockett, Thomas W.

    1999-01-01

    This paper reports the performance of a parallel volume rendering algorithm for visualizing a large-scale, unstructured-grid dataset produced by a three-dimensional aerodynamics simulation. This dataset, containing over 18 million tetrahedra, allows us to extend our performance results to a problem which is more than 30 times larger than the one we examined previously. This high resolution dataset also allows us to see fine, three-dimensional features in the flow field. All our tests were performed on the Silicon Graphics Inc. (SGI)/Cray T3E operated by NASA's Goddard Space Flight Center. Using 511 processors, a rendering rate of almost 9 million tetrahedra/second was achieved with a parallel overhead of 26%.
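
    The quoted figures support some quick scaling arithmetic. A short sketch (illustrative Python; "overhead" is read here as the fraction of aggregate time lost to parallelization, which is one common convention):

        def parallel_metrics(rate_measured, n_procs, overhead):
            # Efficiency = 1 - overhead; the ideal overhead-free aggregate
            # rate and the per-processor rate follow directly.
            efficiency = 1.0 - overhead
            ideal_rate = rate_measured / efficiency
            per_proc = rate_measured / n_procs
            return efficiency, ideal_rate, per_proc

        eff, ideal, per_proc = parallel_metrics(9e6, 511, 0.26)
        print(f"efficiency={eff:.0%}, ideal={ideal / 1e6:.1f} Mtet/s, "
              f"per processor={per_proc / 1e3:.1f} ktet/s")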

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfram, Phillip J.; Ringler, Todd D.; Maltrud, Mathew E.

    Isopycnal diffusivity due to stirring by mesoscale eddies in an idealized, wind-forced, eddying, midlatitude ocean basin is computed using Lagrangian, in Situ, Global, High-Performance Particle Tracking (LIGHT). Simulation is performed via LIGHT within the Model for Prediction across Scales Ocean (MPAS-O). Simulations are performed at 4-, 8-, 16-, and 32-km resolution, where the first Rossby radius of deformation (RRD) is approximately 30 km. Scalar and tensor diffusivities are estimated at each resolution based on 30 ensemble members using particle cluster statistics. Each ensemble member is composed of 303 665 particles distributed across five potential density surfaces. Diffusivity dependence upon model resolution, velocity spatial scale, and buoyancy surface is quantified and compared with mixing length theory. The spatial structure of diffusivity ranges over approximately two orders of magnitude, with values of O(10⁵) m² s⁻¹ in the region of western boundary current separation to O(10³) m² s⁻¹ in the eastern region of the basin. Dominant mixing occurs at scales twice the size of the first RRD. Model resolution at scales finer than the RRD is necessary to obtain sufficient model fidelity at scales between one and four RRD to accurately represent mixing. Mixing length scaling with eddy kinetic energy and the Lagrangian time scale yields mixing efficiencies that typically range between 0.4 and 0.8. In conclusion, a reduced mixing length in the eastern region of the domain relative to the west suggests there are different mixing regimes outside the baroclinic jet region.
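
    Cluster-statistics diffusivity estimates of this kind commonly reduce to the dispersion relation kappa = 0.5 * d(variance)/dt applied to a spreading particle cluster. A minimal sketch (illustrative Python on a synthetic random walk; this is not LIGHT/MPAS-O code):

        import numpy as np

        def cluster_diffusivity(positions, times):
            # positions: (n_times, n_particles) along-surface positions (m)
            # times: (n_times,) in seconds. kappa = 0.5 * slope of the
            # cluster variance against time.
            variance = positions.var(axis=1)
            return 0.5 * np.polyfit(times, variance, 1)[0]

        # Synthetic random walk with kappa = 1e3 m^2/s, the eastern-basin
        # order of magnitude quoted above.
        rng = np.random.default_rng(0)
        dt, nsteps, npart, kappa = 3600.0, 240, 1000, 1.0e3
        steps = rng.normal(0.0, np.sqrt(2 * kappa * dt), (nsteps, npart))
        positions = np.cumsum(steps, axis=0)
        times = dt * np.arange(1, nsteps + 1)
        print(cluster_diffusivity(positions, times))  # close to 1e3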

  9. Aerodynamic Simulation of Runback Ice Accretion

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Whalen, Edward A.; Busch, Greg T.; Bragg, Michael B.

    2010-01-01

    This report presents the results of recent investigations into the aerodynamics of simulated runback ice accretion on airfoils. Aerodynamic tests were performed on a full-scale model using a high-fidelity ice-casting simulation at near-flight Reynolds (Re) number. The ice-casting simulation was attached to the leading edge of a 72-in. (1828.8-mm) chord NACA 23012 airfoil model. Aerodynamic performance tests were conducted at the ONERA F1 pressurized wind tunnel over a Reynolds number range of 4.7×10⁶ to 16.0×10⁶ and a Mach (M) number range of 0.10 to 0.28. For Re = 16.0×10⁶ and M = 0.20, the simulated runback ice accretion on the airfoil decreased the maximum lift coefficient from 1.82 to 1.51 and decreased the stalling angle of attack from 18.1° to 15.0°. The pitching-moment slope was also increased, and the drag coefficient was increased by more than a factor of two. In general, the performance effects were insensitive to Reynolds number and Mach number changes over the range tested. Follow-on, subscale aerodynamic tests were conducted on a quarter-scale NACA 23012 model (18-in. (457.2-mm) chord) at Re = 1.8×10⁶ and M = 0.18, using low-fidelity, geometrically scaled simulations of the full-scale casting. It was found that simple, two-dimensional simulations of the upper- and lower-surface runback ridges provided the best representation of the full-scale, high-Reynolds-number iced-airfoil aerodynamics, whereas higher-fidelity simulations resulted in larger performance degradations. The experimental results were used to define a new subclassification of spanwise-ridge ice that distinguishes between short and tall ridges. This subclassification is based upon the flow field and resulting aerodynamic characteristics, regardless of the physical size of the ridge and the ice-accretion mechanism.

  10. High-resolution, detailed simulations of low foot and high foot implosion experiments on the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Clark, Daniel

    2015-11-01

    In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3-D) character of the flow, accurately modeling NIF implosions remains at the edge of current radiation hydrodynamics simulation capabilities. This talk describes the current state of progress of 3-D, high-resolution, capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. Most importantly, it is found that a single, standard simulation methodology appears adequate to model both implosion types and gives confidence that such a model can be used to guide future implosion designs toward ignition. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Performance of the Multi-Radar Multi-Sensor System over the Lower Colorado River, Texas

    NASA Astrophysics Data System (ADS)

    Bayabil, H. K.; Sharif, H. O.; Fares, A.; Awal, R.; Risch, E.

    2017-12-01

    Recently observed increases in the intensities and frequencies of climate extremes and their impacts (e.g., floods, dam failures, and overtopping of river banks) necessitate the development of effective disaster prevention and mitigation strategies. Hydrologic models can be useful tools in predicting such events at different spatial and temporal scales. However, the accuracy and prediction capability of such models are often constrained by the availability of the high-quality, representative hydro-meteorological data (e.g., precipitation) required to calibrate and validate them. Improved technologies and products, such as the Multi-Radar Multi-Sensor (MRMS) system, which allows gathering and transmission of vast amounts of meteorological data, have been developed to meet these data needs. While MRMS data are available at high spatial and temporal resolution (1 km and 15 min, respectively), their accuracy in estimating precipitation is yet to be fully investigated. Therefore, the main objective of this study is to evaluate the performance of the MRMS system in effectively capturing precipitation over the Lower Colorado River, Texas, using observations from a dense rain gauge network, and to evaluate the effects of spatial and temporal aggregation scales on its performance. Point-scale comparisons were made at 215 gauging locations using rain gauge and MRMS data from May 2015. The effects of temporal aggregation scales (30, 45, 60, 75, 90, 105, and 120 min) and spatial aggregation scales (4 to 50 km) on the performance of the MRMS system were also tested. Overall, the MRMS system (at 15 min temporal resolution) captured precipitation reasonably well, with an average R2 value of 0.65 and RMSE of 0.5 mm. In addition, spatial and temporal data aggregation resulted in increases in R2 values; however, reductions in RMSE were achieved only with increases in spatial aggregation.
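
    The two headline metrics are ordinary goodness-of-fit statistics between the gauge and radar series. A minimal sketch (illustrative Python on synthetic data; R2 is computed here in its 1 - SSres/SStot form, one common choice) that also shows the effect of temporal aggregation:

        import numpy as np

        def r2_rmse(gauge, radar):
            # Goodness of fit of radar estimates against gauge observations.
            rmse = np.sqrt(np.mean((radar - gauge) ** 2))
            ss_res = np.sum((gauge - radar) ** 2)
            ss_tot = np.sum((gauge - gauge.mean()) ** 2)
            return 1.0 - ss_res / ss_tot, rmse

        def aggregate(series, factor):
            # Sum consecutive blocks, e.g. factor=4 turns 15-min totals into
            # hourly totals; aggregation smooths out timing mismatches.
            n = len(series) // factor * factor
            return series[:n].reshape(-1, factor).sum(axis=1)

        rng = np.random.default_rng(1)
        gauge = rng.gamma(0.3, 2.0, 384)                  # 15-min totals, mm
        radar = gauge + rng.normal(0.0, 0.5, gauge.size)  # noisy estimates
        print(r2_rmse(gauge, radar))
        print(r2_rmse(aggregate(gauge, 4), aggregate(radar, 4)))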

  12. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.

  13. Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines

    PubMed Central

    Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram

    2014-01-01

    When the amount of labeled data is limited, semisupervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002
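
    The prototype idea can be conveyed in a few lines: represent every point by its affinity to a small set of representatives, then fit weights on the labeled points only, so cost grows with the number of prototypes rather than with the squared data size. The sketch below is a drastically simplified illustration of that pattern (Python/scikit-learn; it is not the paper's prototype vector machine and omits the graph regularizer):

        import numpy as np
        from sklearn.cluster import KMeans

        def prototype_ssl(X, labeled_idx, y_labeled, m=20, gamma=1.0, reg=1e-2):
            # k-means centers act as prototypes; every point is encoded by
            # normalized Gaussian affinities to them (an n x m matrix), and
            # ridge-regression weights are fit on the labeled rows only.
            km = KMeans(n_clusters=m, n_init=5, random_state=0).fit(X)
            protos = km.cluster_centers_
            d2 = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
            Z = np.exp(-gamma * d2)
            Z /= Z.sum(axis=1, keepdims=True)
            Zl = Z[labeled_idx]
            w = np.linalg.solve(Zl.T @ Zl + reg * np.eye(m), Zl.T @ y_labeled)
            return Z @ w  # scores for all points, labeled and unlabeled

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
        y = np.r_[np.zeros(200), np.ones(200)]
        labeled_idx = np.r_[0:5, 200:205]    # ten labels among 400 points
        scores = prototype_ssl(X, labeled_idx, y[labeled_idx])
        print(((scores > 0.5) == y).mean())  # close to 1.0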

  14. A High-Performance Sintered Iron Electrode for Rechargeable Alkaline Batteries to Enable Large-Scale Energy Storage

    DOE PAGES

    Yang, Chenguang; Manohar, Aswin K.; Narayanan, S. R.

    2017-01-07

    Iron-based alkaline rechargeable batteries such as iron-air and nickel-iron batteries are particularly attractive for large-scale energy storage because these batteries can be relatively inexpensive, environment-friendly, and also safe. Therefore, our study has focused on achieving the essential electrical performance and cycling properties needed for the widespread use of iron-based alkaline batteries in stationary and distributed energy storage applications. We have demonstrated for the first time an advanced sintered iron electrode capable of 3500 cycles of repeated charge and discharge at the 1-hour rate and 100% depth of discharge in each cycle, and an average Coulombic efficiency of over 97%. Such a robust and efficient rechargeable iron electrode is also capable of continuous discharge at rates as high as 3C with no noticeable loss in utilization. We have shown that the porosity, pore size and thickness of the sintered electrode can be selected rationally to optimize specific capacity, rate capability and robustness. As a result, these advances in the electrical performance and durability of the iron electrode enable iron-based alkaline batteries to be a viable technology solution for meeting the dire need for large-scale electrical energy storage.
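
    The cycling figures translate directly into throughput and charge-overhead numbers. A short sketch (illustrative Python; the 10 Ah capacity is hypothetical, while the cycle count and Coulombic efficiency are taken from the abstract):

        def cycling_summary(capacity_ah, cycles, ce):
            # Cumulative discharge throughput at 100% depth of discharge,
            # and the charge input needed per cycle when the mean Coulombic
            # efficiency is ce (the shortfall is lost to parasitic side
            # reactions at the electrode).
            throughput_ah = capacity_ah * cycles
            charge_in_per_cycle_ah = capacity_ah / ce
            return throughput_ah, charge_in_per_cycle_ah

        print(cycling_summary(capacity_ah=10.0, cycles=3500, ce=0.97))
        # -> (35000.0, ~10.31): 35 kAh of throughput, with about 3% extra
        #    charge required on every cycle.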

  15. Micro-mechanical properties of the tendon-to-bone attachment.

    PubMed

    Deymier, Alix C; An, Yiran; Boyle, John J; Schwartz, Andrea G; Birman, Victor; Genin, Guy M; Thomopoulos, Stavros; Barber, Asa H

    2017-07-01

    The tendon-to-bone attachment (enthesis) is a complex hierarchical tissue that connects stiff bone to compliant tendon. The attachment site at the micrometer scale exhibits gradients in mineral content and collagen orientation, which likely act to minimize stress concentrations. The physiological micromechanics of the attachment thus define resultant performance, but difficulties in sample preparation and mechanical testing at this scale have restricted understanding of structure-function relationships. Here, microscale beams from entheses of wild type mice and mice with mineral defects were prepared using cryo-focused ion beam milling and pulled to failure using a modified atomic force microscopy system. Micromechanical properties of tendon-to-bone structures, including elastic modulus, strength, resilience, and toughness, were obtained. Results demonstrated considerably higher mechanical performance at the micrometer length scale compared to the millimeter tissue length scale, describing enthesis material properties without the influence of higher-order structural effects such as defects. Micromechanical investigation revealed a decrease in strength in entheses with mineral defects. To further examine structure-function relationships, local deformation behavior along the tendon-to-bone attachment was determined using local image correlation. A high-compliance zone near the mineralized gradient of the attachment was clearly identified and highlighted the lack of correlation between mineral distribution and strain on the low-mineral end of the attachment. This compliant region is proposed to act as an energy-absorbing component, limiting catastrophic failure within the tendon-to-bone attachment through higher local deformation. The physiological mechanics of the enthesis at this scale had previously remained unknown due to the difficulty of preparing and testing micrometer-scale samples; this study is the first to measure the tensile mechanical properties of the enthesis at the micrometer scale. This understanding of tendon-to-bone micromechanics demonstrates the critical role of micrometer-scale features in the mechanics of the tissue.

  16. Translational bioinformatics in the cloud: an affordable alternative

    PubMed Central

    2010-01-01

    With the continued exponential expansion of publicly available genomic data and access to low-cost, high-throughput molecular technologies for profiling patient populations, computational technologies and informatics are becoming vital considerations in genomic medicine. Although cloud computing technology is being heralded as a key enabling technology for the future of genomic research, available case studies are limited to applications in the domain of high-throughput sequence data analysis. The goal of this study was to evaluate the computational and economic characteristics of cloud computing in performing a large-scale data integration and analysis representative of research problems in genomic medicine. We find that the cloud-based analysis compares favorably in both performance and cost in comparison to a local computational cluster, suggesting that cloud computing technologies might be a viable resource for facilitating large-scale translational research in genomic medicine. PMID:20691073

  17. Phthalimide Copolymer Solar Cells

    NASA Astrophysics Data System (ADS)

    Xin, Hao; Guo, Xugang; Ren, Guoqiang; Kim, Felix; Watson, Mark; Jenekhe, Samson

    2010-03-01

    Photovoltaic properties of bulk heterojunction solar cells based on phthalimide donor-acceptor copolymers have been investigated. Due to the strong π-π stacking of the polymers, the state-of-the-art thermal annealing approach resulted in micro-scale phase separation and thus negligible photocurrent. To achieve the ideal bicontinuous morphology, different strategies, including quick film drying and mixed-solvent film processing, were explored. In these films, nano-scale phase separation was achieved and a power conversion efficiency of 3.0% was obtained. Absorption and space-charge-limited-current mobility measurements reveal similar light harvesting and hole mobilities in all the films, indicating that morphology is the dominant factor determining the photovoltaic performance. Our results demonstrate that for highly crystalline and/or low-solubility polymers, finding a way to prevent polymer aggregation and large-scale phase separation is critical to realizing high performance solar cells.

  18. Nanowire active-matrix circuitry for low-voltage macroscale artificial skin.

    PubMed

    Takei, Kuniharu; Takahashi, Toshitake; Ho, Johnny C; Ko, Hyunhyub; Gillies, Andrew G; Leu, Paul W; Fearing, Ronald S; Javey, Ali

    2010-10-01

    Large-scale integration of high-performance electronic components on mechanically flexible substrates may enable new applications in electronics, sensing and energy. Over the past several years, tremendous progress in the printing and transfer of single-crystalline, inorganic micro- and nanostructures on plastic substrates has been achieved through various process schemes. For instance, contact printing of parallel arrays of semiconductor nanowires (NWs) has been explored as a versatile route to enable fabrication of high-performance, bendable transistors and sensors. However, truly macroscale integration of ordered NW circuitry has not yet been demonstrated, with the largest-scale active systems being of the order of 1 cm(2) (refs 11,15). This limitation is in part due to assembly- and processing-related obstacles, although larger-scale integration has been demonstrated for randomly oriented NWs (ref. 16). Driven by this challenge, here we demonstrate macroscale (7×7 cm(2)) integration of parallel NW arrays as the active-matrix backplane of a flexible pressure-sensor array (18×19 pixels). The integrated sensor array effectively functions as an artificial electronic skin, capable of monitoring applied pressure profiles with high spatial resolution. The active-matrix circuitry operates at a low operating voltage of less than 5 V and exhibits superb mechanical robustness and reliability, without performance degradation on bending to small radii of curvature (2.5 mm) for over 2,000 bending cycles. This work presents the largest integration of ordered NW-array active components, and demonstrates a model platform for future integration of nanomaterials for practical applications.

  19. Scale factor and noise performance tests of the Bendix Corporation Rate Gyro Assembly (RGA)

    NASA Astrophysics Data System (ADS)

    Kim, R.; Hoffman, J.

    1980-08-01

    Three Bendix Corporation gyroscopes in a Rate Gyro Assembly (RGA) were tested at the Central Inertial Guidance Test Facility (CIGTF), 6585th Test Group, Holloman Air Force Base, New Mexico, from 29 May through 19 June 1980, for the National Aeronautics and Space Administration (NASA), Marshall Space Flight Center (MSFC), Huntsville, Alabama. The purpose of the tests was to characterize the noise performance of each gyro in the RGA in the frequency range of 0.01 hertz to 20 hertz. Gyro noise performance was then compared with seismic activity and previous results from Bendix Corporation testing. Eight-point tests were performed to obtain scale factors which were used to scale the Power Spectral Density (PSD) data. The PSD test series consisted of 1, 2.5, 5, 40 and 180 minute tests under various operating conditions (wheels on and off, low and high rate modes, and horizontal and vertical output axis orientations). The data are presented as PSD plots in the frequency domain. These results show a negligible seismic contribution and are comparable with data obtained at the Bendix test facility.
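
    As an illustration of the processing pipeline described above, the sketch below converts raw gyro counts to angular rate with a scale factor and then estimates a Welch PSD; the sample rate, scale-factor value and synthetic signal are hypothetical stand-ins, not values from the report.

    ```python
    # Sketch: scale raw gyro counts by an eight-point-test scale factor and
    # estimate a PSD over the report's 0.01-20 Hz band of interest. All
    # numbers below are illustrative placeholders, not Bendix RGA values.
    import numpy as np
    from scipy.signal import welch

    fs = 50.0                      # sample rate in Hz (assumed)
    scale_factor = 2.5e-4          # deg/s per count (hypothetical)
    rng = np.random.default_rng(0)
    counts = rng.normal(size=int(fs * 1800))   # 30 minutes of synthetic data
    rate = counts * scale_factor               # counts -> angular rate (deg/s)

    # Long Welch segments resolve frequencies down near 0.01 Hz.
    f, Pxx = welch(rate, fs=fs, nperseg=8192)  # PSD in (deg/s)^2 / Hz
    ```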

  20. Personality traits as predictors of occupational performance and life satisfaction among mentally disordered offenders.

    PubMed

    Lindstedt, Helena; Söderlund, Anne; Stålenheim, Gunilla; Sjödén, Per-Olow

    2005-01-01

    The study investigated to what extent personality traits, e.g., socialization, proneness for anxiety, aggression and hostility, were associated with and predictive of self-reported and observed occupational performance and perceived life satisfaction among male mentally disordered offenders (MDOs). Also, subjects with psychopathy-related personality traits were compared with subjects without such traits regarding demographic data and the dependent variables. The MDOs were recruited via the Swedish National Board of Forensic Medicine. A total of 55 subjects were visited at their hospital ward for data collection with the Karolinska Scales of Personality (KSP), Capability to Perform Daily Occupation (CPDO), Allen Cognitive Level Screen (ACLS) and the Manchester Quality of Life Scale (MANSA). Seven KSP scales and two KSP factors correlated significantly with the dependent variables. Regression analyses revealed that the KSP Socialization scale and the KSP Anxiety-proneness and Psychopathy factors were the most important predictors. Subjects with psychopathy differed from the remaining groups in having had more conduct disorders before age 15, more often having been brought up in marginalized families, and having less often been subject to pupil welfare interventions. Life history was concluded to be an important influence on occupational performance and life satisfaction. Subjects with high anxiety proneness should be given attention in treatment planning.

  1. Accelerating large-scale simulation of seismic wave propagation by multi-GPUs and three-dimensional domain decomposition

    NASA Astrophysics Data System (ADS)

    Okamoto, Taro; Takenaka, Hiroshi; Nakamura, Takeshi; Aoki, Takayuki

    2010-12-01

    We adopted the GPU (graphics processing unit) to accelerate the large-scale finite-difference simulation of seismic wave propagation. The simulation can benefit from the high memory bandwidth of the GPU because it is a "memory intensive" problem. In a single-GPU case we achieved a performance of about 56 GFlops, which was about 45-fold faster than that achieved by a single core of the host central processing unit (CPU). We confirmed that the optimized use of fast shared memory and registers was essential for performance. In the multi-GPU case with three-dimensional domain decomposition, the non-contiguous memory alignment of the ghost zones was found to add considerable overhead to data transfer between the GPU and the host node. This problem was solved by using contiguous memory buffers for the ghost zones. We achieved a performance of about 2.2 TFlops by using 120 GPUs and 330 GB of total memory: nearly (or more than) 2200 cores of host CPUs would be required to achieve the same performance. The weak scaling was nearly proportional to the number of GPUs. We therefore conclude that GPU computing for large-scale simulation of seismic wave propagation is a promising approach, as a faster simulation is possible with reduced computational resources compared to CPUs.
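
    The ghost-zone fix described above is easy to illustrate: a face slice of a C-ordered 3D array is non-contiguous in memory, so it should be packed into a contiguous buffer before one bulk transfer. The NumPy sketch below shows only the contiguity issue; an actual implementation would pack into pinned CUDA/MPI buffers.

    ```python
    # Sketch of ghost-zone (halo) packing for 3D domain decomposition. A
    # face slice along the last axis strides through memory, so per-element
    # copies to the host are slow; packing into a contiguous buffer first
    # enables a single large, fast transfer (the fix described above).
    import numpy as np

    nx, ny, nz = 256, 256, 256
    field = np.zeros((nx, ny, nz), dtype=np.float32)

    face = field[:, :, 0]                     # x-y face at k = 0
    print(face.flags["C_CONTIGUOUS"])         # False: strided view

    send_buf = np.ascontiguousarray(face)     # pack before GPU<->host copy
    print(send_buf.flags["C_CONTIGUOUS"])     # True: one bulk transfer
    ```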

  2. Performance study of protective clothing against hot water splashes: from bench scale test to instrumented manikin test.

    PubMed

    Lu, Yehu; Song, Guowen; Wang, Faming

    2015-03-01

    Hot liquid hazards in work environments pose a considerable risk to industrial workers. In this study, the protection predicted from fabric tests was assessed with a modified hot liquid splash tester. In these tests, conditions with and without an air spacer were applied. The protective performance of a garment exposed to hot water spray was investigated with a spray manikin evaluation system. A three-dimensional body scanning technique was used to characterize the air gap size between the protective clothing and the manikin skin. The relationship between the bench-scale test and the manikin test was discussed, and a regression model was established to predict the overall percentage of skin burn while wearing protective clothing. The results demonstrated strong correlations between the bench-scale test and the manikin test. Based on these studies, the overall performance of protective clothing against hot water spray can be estimated from the results of the bench-scale hot water splash test and the air gap size entrapped in the clothing. The findings provide effective guidance for design and material selection in the development of high-performance protective clothing.
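
    A hypothetical sketch of the kind of regression model described: ordinary least squares with the bench-scale burn prediction and mean air-gap size as predictors of the overall manikin burn percentage. The data and coefficients below are illustrative only, not the paper's fitted model.

    ```python
    # Hypothetical two-predictor regression mirroring the study's idea:
    # overall burn (%) ~ bench-scale predicted burn (%) + mean air gap (mm).
    # Values are made up for illustration.
    import numpy as np

    X = np.array([[42.0, 8.5], [55.0, 6.0], [30.0, 12.0], [61.0, 5.5]])
    y = np.array([35.0, 51.0, 22.0, 58.0])     # manikin overall burn (%)

    A = np.hstack([np.ones((len(X), 1)), X])   # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    predict = lambda bench, gap: coef[0] + coef[1] * bench + coef[2] * gap
    ```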

  3. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with application to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented by a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes of the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach provides highly time-dependent spectral resolution capability.
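
    In generic notation (a sketch of the scheme described above; the symbols are not necessarily the paper's own), the TVAR model with basis-expanded coefficients reads:

    ```latex
    \begin{align}
      y(t) &= \sum_{i=1}^{p} a_i(t)\, y(t-i) + e(t), \\
      a_i(t) &= \sum_{j} c_{i,j}\, \psi_j(t),
    \end{align}
    % where \psi_j(t) are the multi-scale wavelet basis functions and the
    % sparse coefficients c_{i,j} are selected by the FOLS algorithm.
    ```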

  4. Three-Dimensional CdS/Au Butterfly Wing Scales with Hierarchical Rib Structures for Plasmon-Enhanced Photocatalytic Hydrogen Production.

    PubMed

    Fang, Jing; Gu, Jiajun; Liu, Qinglei; Zhang, Wang; Su, Huilan; Zhang, Di

    2018-06-13

    Localized surface plasmon resonance (LSPR) of plasmonic metals (e.g., Au) can help semiconductors improve their photocatalytic hydrogen (H2) production performance. However, artificial synthesis of hierarchical plasmonic structures down to the nanoscale is usually difficult. Here, we adopt butterfly wing scales from Morpho didius to fabricate three-dimensional (3D) CdS/Au butterfly wing scales for plasmonic photocatalysis. The as-prepared materials inherit the pristine hierarchical biostructures well. The 3D CdS/Au butterfly wing scales exhibit a high H2 production rate (221.8 μmol·h⁻¹ within 420-780 nm), a 241-fold increase over the CdS butterfly wing scales. This is attributed to the effective potentiation of LSPR introduced by the multilayer metallic rib structures and a good interface bonding state between the Au and CdS nanoparticles. Our study thus provides a relatively simple method to learn from nature and inspiration for preparing highly efficient plasmonic photocatalysts.

  5. Raft cultivation area extraction from high resolution remote sensing imagery by fusing multi-scale region-line primitive association features

    NASA Astrophysics Data System (ADS)

    Wang, Min; Cui, Qi; Wang, Jie; Ming, Dongping; Lv, Guonian

    2017-01-01

    In this paper, we first propose several novel concepts for object-based image analysis, including line-based shape regularity, line density, and scale-based best feature value (SBV), based on the region-line primitive association framework (RLPAF). We then propose a raft cultivation area (RCA) extraction method for high spatial resolution (HSR) remote sensing imagery based on multi-scale feature fusion and spatial rule induction. The proposed method includes the following steps: (1) Multi-scale region primitives (segments) are obtained by the image segmentation method HBC-SEG, and line primitives (straight lines) are obtained by a phase-based line detection method. (2) Association relationships between regions and lines are built based on RLPAF, and then multi-scale RLPAF features are extracted and SBVs are selected. (3) Several spatial rules are designed to extract RCAs within sea waters after land-water separation. Experiments show that the proposed method can successfully extract different-shaped RCAs from HSR images with good performance.

  6. Ultralight Fe@C Nanocapsules/Sponge Composite with Reversibly Tunable Microwave Absorption Performances.

    PubMed

    Li, Yixing; Mao, Zhe; Liu, Rongge; Zhao, Xiaoning; Zhang, Yanhui; Qin, Gaowu; Zhang, Xuefeng

    2017-08-11

    Microwave absorbers are usually designed to solve electromagnetic interferences at a specific frequency, while the requirements may be dynamic during service life. Therefore, a recoverable tuning for microwave absorption properties in response to an external stimulus would be highly desirable. We herein present a micro/nano-scale hybrid absorber, in which high-performance Fe@C nanocapsule absorbents are integrated with a porous melamine sponge skeleton, exhibiting multiple merits of light weight, strong absorption and high elasticity. By mechanically compressing and decompressing the absorber, microwave absorption performances can be effectively shifted between 18 GHz and 26.5 GHz. The present study thus provides a new strategy for the design of a 'dynamic' microwave absorber.

  7. Ultralight Fe@C Nanocapsules/Sponge Composite with Reversibly Tunable Microwave Absorption Performances

    NASA Astrophysics Data System (ADS)

    Li, Yixing; Mao, Zhe; Liu, Rongge; Zhao, Xiaoning; Zhang, Yanhui; Qin, Gaowu; Zhang, Xuefeng

    2017-08-01

    Microwave absorbers are usually designed to solve electromagnetic interferences at a specific frequency, while the requirements may be dynamic during service life. Therefore, a recoverable tuning for microwave absorption properties in response to an external stimulus would be highly desirable. We herein present a micro/nano-scale hybrid absorber, in which high-performance Fe@C nanocapsule absorbents are integrated with a porous melamine sponge skeleton, exhibiting multiple merits of light weight, strong absorption and high elasticity. By mechanically compressing and decompressing the absorber, microwave absorption performances can be effectively shifted between 18 GHz and 26.5 GHz. The present study thus provides a new strategy for the design of a ‘dynamic’ microwave absorber.

  8. Scaling of flow and transport behavior in heterogeneous groundwater systems

    NASA Astrophysics Data System (ADS)

    Scheibe, Timothy; Yabusaki, Steven

    1998-11-01

    Three-dimensional numerical simulations using a detailed synthetic hydraulic conductivity field developed from geological considerations provide insight into the scaling of subsurface flow and transport processes. Flow and advective transport in the highly resolved heterogeneous field were modeled using massively parallel computers, providing a realistic baseline for evaluation of the impacts of parameter scaling. Upscaling of hydraulic conductivity was performed at a variety of scales using a flexible power law averaging technique. A series of tests were performed to determine the effects of varying the scaling exponent on a number of metrics of flow and transport behavior. Flow and transport simulation on high-performance computers and three-dimensional scientific visualization combine to form a powerful tool for gaining insight into the behavior of complex heterogeneous systems. Many quantitative groundwater models utilize upscaled hydraulic conductivity parameters, either implicitly or explicitly. These parameters are designed to reproduce the bulk flow characteristics at the grid or field scale while not requiring detailed quantification of local-scale conductivity variations. An example from applied groundwater modeling is the common practice of calibrating grid-scale model hydraulic conductivity or transmissivity parameters so as to approximate observed hydraulic head and boundary flux values. Such parameterizations, perhaps with a bulk dispersivity imposed, are then sometimes used to predict transport of reactive or non-reactive solutes. However, this work demonstrates that those parameters that lead to the best upscaling for hydraulic conductivity and head do not necessarily correspond to the best upscaling for prediction of a variety of transport behaviors. This result reflects the fact that transport is strongly impacted by the existence and connectedness of extreme-valued hydraulic conductivities, in contrast to bulk flow which depends more strongly on mean values. It provides motivation for continued research into upscaling methods for transport that directly address advection in heterogeneous porous media.
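
    The flexible power-law averaging mentioned above has a compact form: the upscaled conductivity of a block is K_up(p) = (mean(K^p))^(1/p), where p = 1 gives the arithmetic mean, p = -1 the harmonic mean, and p -> 0 the geometric mean. A minimal sketch with illustrative values:

    ```python
    # Power-law (power-mean) averaging for upscaling hydraulic conductivity:
    # K_up(p) = (mean(K^p))^(1/p); the exponent p is the tunable scaling
    # parameter varied in studies like the one above.
    import numpy as np

    def power_law_average(K, p):
        K = np.asarray(K, dtype=float)
        if abs(p) < 1e-12:                   # p -> 0 limit: geometric mean
            return np.exp(np.mean(np.log(K)))
        return np.mean(K ** p) ** (1.0 / p)

    block = np.array([1e-5, 3e-4, 2e-6, 8e-5])   # local conductivities (m/s)
    K_up = power_law_average(block, p=0.5)       # one upscaled block value
    ```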

  9. Psychometric properties of the Italian version of the Cognitive Reserve Scale (I-CRS).

    PubMed

    Altieri, Manuela; Siciliano, Mattia; Pappacena, Simona; Roldán-Tapia, María Dolores; Trojano, Luigi; Santangelo, Gabriella

    2018-05-04

    The original definition of cognitive reserve (CR) refers to individual differences in cognitive performance after brain damage or pathology. Several proxies have been proposed to evaluate CR (education, occupational attainment, premorbid IQ, leisure activities). Recently, some scales have been developed to measure CR taking into account several cognitively stimulating activities. The aim of this study was to adapt the Cognitive Reserve Scale for the Italian population (I-CRS) and to explore its psychometric properties. The I-CRS was administered to 547 healthy participants, ranging from 18 to 89 years old, along with neuropsychological and behavioral scales evaluating cognitive functioning, depressive symptoms, and apathy. Cronbach's α, corrected item-total correlations, and the inter-item correlation matrix were calculated to evaluate the psychometric properties of the scale. Linear regression analysis was performed to build a correction grid for the I-CRS according to demographic variables. Correlational analyses were performed to explore the relationships between the I-CRS and the neuropsychological and behavioral scales. We found that age, sex, and education influenced the I-CRS score. Young adults and adults obtained higher I-CRS scores than elderly adults; women and participants with high educational attainment scored higher on the I-CRS than men and participants with low education. The I-CRS score correlated poorly with cognitive and depression scale scores, but moderately with apathy scale scores. The I-CRS showed good psychometric properties and appears to be a useful tool to assess CR at every adult life stage. Moreover, our findings suggest that apathy, rather than depressive symptoms, may interfere with the building of CR across the lifespan.
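
    For reference, the Cronbach's α reported above has a simple closed form, α = k/(k-1) · (1 - Σ item variances / variance of the total score); a minimal sketch:

    ```python
    # Cronbach's alpha, the internal-consistency statistic used for the
    # I-CRS: alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, k_items) matrix of item scores
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
    ```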

  10. Relationship Between Metabolic Syndrome and Clinical Features, and Its Personal-Social Performance in Patients with Schizophrenia.

    PubMed

    Saatcioglu, Omer; Kalkan, Murat; Fistikci, Nurhan; Erek, Sakire; Kilic, Kasim Candas

    2016-06-01

    The aim of this study was to evaluate metabolic syndrome (MS) criteria and to investigate the effects of MS on medical treatment, clinical course, and personal and social performance in patients with schizophrenia. One hundred sixteen patients with schizophrenia were included in the study. MS measurements were obtained for all patients. The Brief Psychiatric Rating Scale, Scale for the Assessment of Positive Symptoms, Scale for the Assessment of Negative Symptoms, Calgary Depression Scale for Schizophrenia, and Personal and Social Performance Scale (PSP) were applied. The frequency of MS according to IDF criteria was 42.2% among the patients. There was no significant difference between patients with and without MS in terms of age. The rate of MS was 62.5% in the group taking typical and atypical antipsychotics together and 35.7% in the group taking two or more atypical antipsychotics together. The duration of disorder was longer in patients with MS than in those without. Furthermore, there was no significant difference between schizophrenic patients with and without MS in terms of PSP scores. Our findings showed that duration of illness, high BMI, use of clozapine or concurrent use of typical and atypical antipsychotics, and depressive and negative symptoms of schizophrenia were significant risk factors for the development of MS.

  11. NSEG: A segmented mission analysis program for low and high speed aircraft. Volume 2: Program users manual

    NASA Technical Reports Server (NTRS)

    Hague, D. S.; Rozendaal, H. L.

    1977-01-01

    A rapid mission analysis code based on the use of approximate flight path equations of motion is described. Equation form varies with the segment type, for example, accelerations, climbs, cruises, descents, and decelerations. Realistic and detailed vehicle characteristics are specified in tabular form. In addition to its mission performance calculation capabilities, the code also contains extensive flight envelope performance mapping capabilities. Approximate takeoff and landing analyses can be performed. At high speeds, centrifugal lift effects are taken into account. Extensive turbojet and ramjet engine scaling procedures are incorporated in the code.

  12. Experimental investigation of the crashworthiness of scaled composite sailplane fuselages

    NASA Technical Reports Server (NTRS)

    Kampf, Karl-Peter; Crawley, Edward F.; Hansman, R. John, Jr.

    1989-01-01

    The crash dynamics and energy absorption of composite sailplane fuselage segments undergoing nose-down impact were investigated. More than 10 quarter-scale, structurally similar test articles, typical of high-performance sailplane designs, were tested. Fuselage segments were fabricated from combinations of fiberglass, graphite, Kevlar, and Spectra fabric materials. Quasistatic and dynamic tests were conducted. The quasistatic tests were found to replicate the strain history and failure modes observed in the dynamic tests. Failure modes of the quarter-scale models were qualitatively compared with full-scale crash evidence and quantitatively compared with current design criteria. By combining material and structural improvements, substantial increases in crashworthiness were demonstrated.

  13. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that neither approach to inexact calculation substantially affects the large-scale behaviour, provided the inexactness is restricted to the smaller scales. By contrast, results from the Lorenz '96 simulations are superior when the small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations, allowing higher resolution models to be run at the same computational cost.
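
    A toy version of this experiment can be sketched by integrating the single-scale Lorenz '96 model while computing tendencies in half precision to mimic inexact hardware. The study's actual setup (two-scale model, bit-flip fault emulation, spectral scale separation) is more elaborate than this sketch.

    ```python
    # Toy sketch: Lorenz '96, dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    # with tendencies evaluated in float16 to emulate low-precision hardware.
    import numpy as np

    F, N = 8.0, 40

    def l96_tendency(x, precision=np.float64):
        x = x.astype(precision)
        return ((np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x
                + precision(F)).astype(np.float64)

    def step_rk4(x, dt, precision):
        k1 = l96_tendency(x, precision)
        k2 = l96_tendency(x + 0.5 * dt * k1, precision)
        k3 = l96_tendency(x + 0.5 * dt * k2, precision)
        k4 = l96_tendency(x + dt * k3, precision)
        return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    x = F + 0.01 * np.random.default_rng(1).standard_normal(N)
    for _ in range(1000):
        x = step_rk4(x, 0.05, precision=np.float16)  # inexact tendencies
    ```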

  14. The implementation of sea ice model on a regional high-resolution scale

    NASA Astrophysics Data System (ADS)

    Prasad, Siva; Zakharov, Igor; Bobby, Pradeep; McGuire, Peter

    2015-09-01

    The availability of high-resolution atmospheric/ocean forecast models, satellite data and access to high-performance computing clusters have provided capability to build high-resolution models for regional ice condition simulation. The paper describes the implementation of the Los Alamos sea ice model (CICE) on a regional scale at high resolution. The advantage of the model is its ability to include oceanographic parameters (e.g., currents) to provide accurate results. The sea ice simulation was performed over Baffin Bay and the Labrador Sea to retrieve important parameters such as ice concentration, thickness, ridging, and drift. Two different forcing models, one with low resolution and another with a high resolution, were used for the estimation of sensitivity of model results. Sea ice behavior over 7 years was simulated to analyze ice formation, melting, and conditions in the region. Validation was based on comparing model results with remote sensing data. The simulated ice concentration correlated well with Advanced Microwave Scanning Radiometer for EOS (AMSR-E) and Ocean and Sea Ice Satellite Application Facility (OSI-SAF) data. Visual comparison of ice thickness trends estimated from the Soil Moisture and Ocean Salinity satellite (SMOS) agreed with the simulation for year 2010-2011.

  15. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next-generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are orders of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require rapid turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We present how we enabled fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.

  16. Application of exergetic sustainability index to a nano-scale irreversible Brayton cycle operating with ideal Bose and Fermi gasses

    NASA Astrophysics Data System (ADS)

    Açıkkalp, Emin; Caner, Necmettin

    2015-09-01

    In this study, a nano-scale irreversible Brayton cycle operating with quantum gases, including Bose and Fermi gases, is investigated. Developments in nanotechnology make the study of nano-scale machines, including thermal systems, unavoidable. A thermodynamic analysis of a nano-scale irreversible Brayton cycle operating with Bose and Fermi gases was performed, with particular attention to the exergetic sustainability index. In addition, the analysis covered classical evaluation parameters such as work output, exergy output, entropy generation, and energy and exergy efficiencies. Results are presented numerically, and some useful recommendations are made. Among the important results: for the Bose gas, entropy generation and the exergetic sustainability index are the most strongly affected by x, while for the Fermi gas, power output and exergy output are the most strongly affected. At high-temperature conditions, work output and entropy generation are high compared with other degeneracy conditions.

  17. Accelerated Dimension-Independent Adaptive Metropolis

    DOE PAGES

    Chen, Yuxin; Keyes, David E.; Law, Kody J.; ...

    2016-10-27

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.
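
    For orientation, the AM baseline that DIAM extends adapts its Gaussian proposal covariance to the chain's own history, scaled by 2.38²/d plus a small jitter (Haario et al. 2001). Below is a minimal sketch of that baseline, not of DIAM itself.

    ```python
    # Classic adaptive Metropolis (AM): the proposal covariance tracks the
    # empirical covariance of the chain, scaled by 2.38^2/d, with jitter to
    # keep it positive definite. This is the baseline DIAM builds on.
    import numpy as np

    def adaptive_metropolis(log_target, x0, n_steps, adapt_after=500):
        rng = np.random.default_rng(0)
        d = len(x0)
        samples = np.empty((n_steps, d))
        x, lp = np.array(x0, float), log_target(x0)
        cov = np.eye(d)
        for t in range(n_steps):
            if t > adapt_after:  # adapt proposal to the chain history
                cov = (np.cov(samples[:t].T) * (2.38 ** 2 / d)
                       + 1e-8 * np.eye(d))
            prop = rng.multivariate_normal(x, cov)
            lp_prop = log_target(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
                x, lp = prop, lp_prop
            samples[t] = x
        return samples
    ```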

  18. Predictors and correlations of emotional intelligence among medical students at King Abdulaziz University, Jeddah

    PubMed Central

    Ibrahim, Nahla Khamis; Algethmi, Wafaa Ali; Binshihon, Safia Mohammad; Almahyawi, Rawan Aesh; Alahmadi, Razan Faisal; Baabdullah, Maha Yousef

    2017-01-01

    Objectives: To determine the predictors of emotional intelligence (EI) and its relationship with academic performance, leadership capacity, self-efficacy and perceived stress among medical students at King Abdulaziz University, Jeddah, Saudi Arabia. Methods: A cross-sectional study was conducted among 540 students selected through a multi-stage stratified random sampling method during 2015/2016. A standardized, confidential data collection sheet was used. It included the Schutte Self-Report Emotional Intelligence (SSREI) scale, the Authentic Leadership questionnaire, the General Self-Efficacy Scale and the short version of the Perceived Stress Scale (PSS-4). Both descriptive and inferential statistics were performed, and a multiple linear regression model was constructed. Results: The predictors of high EI were gender (female), increasing age, and being a non-smoker. EI was positively associated with better academic performance, leadership capacity and self-efficacy, and negatively correlated with perceived stress. Conclusion: Female gender, age, and non-smoking were the predictors of high EI. Holistic training programs on EI, leadership and self-efficacy are recommended, along with more smoking control programs and stress management courses. PMID:29142542

  19. Accelerated Dimension-Independent Adaptive Metropolis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuxin; Keyes, David E.; Law, Kody J.

    This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.

  20. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer", which is planned to achieve 10 petaflops in 2012, and for other high performance computing environments including the Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. Across these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and are therefore designed to exploit the speed of 10 petaflops. The software will be freely available for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/.

  1. Transient Structures and Possible Limits of Data Recording in Phase-Change Materials.

    PubMed

    Hu, Jianbo; Vanacore, Giovanni M; Yang, Zhe; Miao, Xiangshui; Zewail, Ahmed H

    2015-07-28

    Phase-change materials (PCMs) represent the leading candidates for universal data storage devices, which exploit the large difference in the physical properties of their transitional lattice structures. On a nanoscale, it is fundamental to determine their performance, which is ultimately controlled by the speed limit of transformation among the different structures involved. Here, we report observation with atomic-scale resolution of transient structures of nanofilms of crystalline germanium telluride, a prototypical PCM, using ultrafast electron crystallography. A nonthermal transformation from the initial rhombohedral phase to the cubic structure was found to occur in 12 ps. On a much longer time scale, hundreds of picoseconds, equilibrium heating of the nanofilm is reached, driving the system toward amorphization, provided that high excitation energy is invoked. These results elucidate the elementary steps defining the structural pathway in the transformation of crystalline-to-amorphous phase transitions and describe the essential atomic motions involved when driven by an ultrafast excitation. The establishment of the time scales of the different transient structures, as reported here, permits determination of the possible limit of performance, which is crucial for high-speed recording applications of PCMs.

  2. The Effect of a State Department of Education Teacher Mentor Initiative on Science Achievement

    NASA Astrophysics Data System (ADS)

    Pruitt, Stephen L.; Wallace, Carolyn S.

    2012-06-01

    This study investigated the effectiveness of a southern state's department of education program to improve science achievement through embedded professional development of science teachers in the lowest performing schools. The Science Mentor Program provided content and inquiry-based coaching by teacher leaders to science teachers in their own classrooms. The study analyzed the mean scale scores for the science portion of the state's high school graduation test for the years 2004 through 2007 to determine whether schools receiving the intervention scored significantly higher than comparison schools receiving no intervention. The results showed that all schools achieved significant improvement of scale scores between 2004 and 2007, but there were no significant performance differences between intervention and comparison schools, nor were there any significant differences between various subgroups in intervention and comparison schools. However, one subgroup, economically disadvantaged (ED) students, from high-level intervention schools closed the achievement gap with ED students from no-intervention schools across the period of the study. The study provides important information to guide future research on and design of large-scale professional development programs to foster inquiry-based science.

  3. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
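
    The block-based L1 normalization such a circuit implements can be sketched in software: each block of neighboring cell histograms is divided by its L1 norm, which suppresses illumination and contrast variation. The shapes and block size below are illustrative, not the chip's actual configuration.

    ```python
    # Block-based L1 normalization for HOG-style cell histograms: each
    # (block x block) group of cells is flattened and divided by its L1
    # norm, making the descriptor robust to illumination/contrast changes.
    import numpy as np

    def l1_normalize_blocks(cells, block=2, eps=1e-5):
        # cells: (H, W, nbins) grid of per-cell orientation histograms
        H, W, nbins = cells.shape
        out = []
        for i in range(H - block + 1):
            for j in range(W - block + 1):
                v = cells[i:i + block, j:j + block].ravel()
                out.append(v / (np.abs(v).sum() + eps))  # L1-norm block
        return np.array(out)                             # descriptor rows
    ```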

  4. Assessment of aerodynamic performance of V/STOL and STOVL fighter aircraft

    NASA Technical Reports Server (NTRS)

    Nelms, W. P.

    1984-01-01

    The aerodynamic performance of V/STOL and STOVL fighter/attack aircraft was assessed. Aerodynamic and propulsion/airframe integration activities are described and small and large scale research programs are considered. Uncertainties affecting aerodynamic performance that are associated with special configuration features resulting from the V/STOL requirement are addressed. Example uncertainties relate to minimum drag, wave drag, high angle of attack characteristics, and power induced effects.

  5. With All Strings Attached: Composer William C. Banfield Notes the Clash of Artistry and Commerce while Weaving Together a World of Music

    ERIC Educational Resources Information Center

    Hamilton, Kendra

    2004-01-01

    William Banfield is a composer with nine symphonies to his credit, as well as countless smaller scale works--concerti, chamber works, operas, choral and jazz works--that have been performed all over the nation. He has also performed with highly acclaimed jazz performers such as Patrice Rushen, Earl Klugh, Najee, Nelson Rangell and many others. He…

  6. A Synergistic Combination of Advanced Separation and Chemical Scale Inhibitor Technologies for Efficient Use of Impaired Water As Cooling Water in Coal-based Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasbir Gill

    2010-08-30

    Nalco Company is partnering with Argonne National Laboratory (ANL) in this project to jointly develop advanced scale control technologies that will provide cost-effective solutions for coal-based power plants to operate recirculating cooling water systems at high cycles using impaired waters. The overall approach is to use combinations of novel membrane separations and scale inhibitor technologies that work synergistically, with membrane separations reducing the scaling potential of the cooling water and scale inhibitors extending the safe operating range of the cooling water system. The project started on March 31, 2006 and ended on August 30, 2010. It was a multiyear, multi-phase project with laboratory research and development as well as a small pilot-scale field demonstration. In Phase 1 (Technical Targets and Proof of Concept), the objectives were to establish quantitative technical targets and develop calcite and silica scale inhibitor chemistries for high stress conditions. Additional Phase 1 work included bench-scale testing to determine the feasibility of two membrane separation technologies (electrodialysis, ED, and electrodeionization, EDI) for scale minimization. In Phase 2 (Technology Development and Integration), the objectives were to develop additional novel scale inhibitor chemistries, develop the selected separation processes, and optimize the integration of the technology components at the laboratory scale. Phase 3 (Technology Validation) validated the integrated system's performance with a pilot-scale demonstration. During Phase 1, initial evaluations of impaired water characteristics focused on produced waters and reclaimed municipal wastewater effluents. Literature and new data were collected and evaluated. Characteristics of produced waters vary significantly from one site to another, whereas reclaimed municipal wastewater effluents have relatively uniform characteristics. The assessment confirmed that calcite and silica/silicate are two common potential cycle-limiting minerals when using impaired waters. For produced waters, barium sulfate and calcium sulfate are two additional potential cycle-limiting minerals. For reclaimed municipal wastewater effluents, calcium phosphate scaling can be an issue, especially in the co-presence of high silica. Computational assessment, using a vast amount of Nalco's field data from coal-fired power plants, showed that the limited use and reuse of impaired waters is due to deposit formation caused by the presence of iron, high hardness, high silica and high alkalinity in the water. Appropriate and cost-effective inhibitors were identified and developed: LL99B0 for calcite and gypsum inhibition and TX-15060 for silica inhibition. Nalco's existing dispersants HSP-1 and HSP-2 have excellent efficacy for dispersing Fe and Mn. ED and EDI were bench-scale tested by the CRADA partner Argonne National Laboratory for hardness, alkalinity and silica removal, first from synthetic make-up water and then from cycled cooling water. Both systems showed low power consumption and 98-99% salt removal; however, the EDI system required 25-30% less power for silica removal. In Phase 2, the EDI system's performance was optimized and the length of time between clean-in-place (CIP) operations was increased by varying the wafer composition and membrane configuration. The enhanced EDI system could remove 88% of the hardness and 99% of the alkalinity with a processing flux of 19.2 gal/hr/m² and a power consumption of 0.54 kWh/100 gal of water.
Bench tests to screen alternative silica/silicate scale inhibitor chemistries were also conducted. The silica/silicate control approaches using chemical inhibitors include inhibition of silicic acid polymerization and dispersion of silica/silicate crystals. Tests were conducted with an initial silica concentration of 290-300 mg/L as SiO₂ at pH 7 and room temperature. A proprietary new chemistry was found to be promising compared with a current commercial product commonly used for silica/silicate control, and additional pilot cooling tower testing confirmed the bench study. We also developed a molecule to inhibit calcium carbonate and calcium sulfate precipitation at high supersaturations. During Phase 3, a long-term test of the EDI system and scale inhibitors was conducted at Nalco's cooling tower water testing facility, producing 850 gallons of high purity water (90+% salt removal) at a rate of 220 L/day. The EDI system's performance was stable when the salt concentration in the concentrate compartment (i.e., the EDI waste stream) was controlled and a CIP was performed after every 48 hours of operation. The combination of EDI and scale inhibitors completely eliminated blowdown discharge from the pilot cooling tower; the only water consumption came from evaporation, CIP and the EDI concentrate. The silica inhibitor was evaluated in the field at a western coal-fired power plant.
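
    The "high cycles" goal above follows from the standard cooling-tower water balance: makeup = evaporation + blowdown + drift, so cycles of concentration COC = makeup / (blowdown + drift) rise as blowdown approaches zero, until scaling becomes limiting (the range the inhibitors and EDI treatment extend). A small illustrative calculation, with made-up flow values:

    ```python
    # Standard cooling-tower cycles-of-concentration balance (illustrative
    # flow values, arbitrary consistent units):
    #   makeup = evaporation + blowdown + drift
    #   COC    = makeup / (blowdown + drift)
    def cycles_of_concentration(evaporation, blowdown, drift=0.0):
        makeup = evaporation + blowdown + drift
        return makeup / (blowdown + drift)

    print(cycles_of_concentration(evaporation=100.0, blowdown=25.0))  # 5.0
    print(cycles_of_concentration(evaporation=100.0, blowdown=5.0))   # 21.0
    ```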

  7. Scaling Phenomenology in Meson Photoproduction from CLAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biplab Dey, Curtis A. Meyer

    2010-08-01

    In the high energy limit, perturbative QCD predicts that hard scattering amplitudes should follow simple scaling laws. For hard scattering at 90°, we show that experiments support this prediction even in the “medium energy” regime of 2.3 GeV ≤ √s ≤ 2.84 GeV, as long as there are no s-channel resonances present. Our data consist of high statistics measurements for five different exclusive meson photoproduction channels (pω, pη, pη′, K⁺Λ and K⁺Σ⁰) recently obtained from CLAS at Jefferson Lab. The same power-law scaling also leads to “saturated” Regge trajectories at high energies. That is, at large -t and -u, Regge trajectories must approach constant negative integers. We demonstrate the application of saturated Regge phenomenology by performing a partial wave analysis fit to the γp → pη′ differential cross sections.
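
    The fixed-angle scaling being tested here is the Brodsky-Farrar constituent-counting rule; in its textbook form (the channel count below is the standard one, not a number quoted from this analysis):

    ```latex
    % Constituent-counting rule: n is the total number of elementary fields
    % in the exclusive process; for \gamma p \to M p, n = 1 + 3 + 2 + 3 = 9.
    \[
      \left.\frac{d\sigma}{dt}\right|_{\theta_{cm}\,\text{fixed}}
        \sim s^{\,2-n}\, f(\theta_{cm}),
      \qquad
      n_{\gamma p \to M p} = 9
      \;\Rightarrow\;
      \frac{d\sigma}{dt} \sim s^{-7}.
    \]
    ```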

  8. In-Flight Validation of a Pilot Rating Scale for Evaluating Failure Transients in Electronic Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Kalinowski, Kevin F.; Tucker, George E.; Moralez, Ernesto, III

    2006-01-01

    Engineering development and qualification of a Research Flight Control System (RFCS) for the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A has motivated the development of a pilot rating scale for evaluating failure transients in fly-by-wire flight control systems. The RASCAL RFCS includes a highly-reliable, dual-channel Servo Control Unit (SCU) to command and monitor the performance of the fly-by-wire actuators and protect against the effects of erroneous commands from the flexible, but single-thread Flight Control Computer. During the design phase of the RFCS, two piloted simulations were conducted on the Ames Research Center Vertical Motion Simulator (VMS) to help define the required performance characteristics of the safety monitoring algorithms in the SCU. Simulated failures, including hard-over and slow-over commands, were injected into the command path, and the aircraft response and safety monitor performance were evaluated. A subjective Failure/Recovery Rating (F/RR) scale was developed as a means of quantifying the effects of the injected failures on the aircraft state and the degree of pilot effort required to safely recover the aircraft. A brief evaluation of the rating scale was also conducted on the Army/NASA CH-47B variable stability helicopter to confirm that the rating scale was likely to be equally applicable to in-flight evaluations. Following the initial research flight qualification of the RFCS in 2002, a flight test effort was begun to validate the performance of the safety monitors and to validate their design for the safe conduct of research flight testing. Simulated failures were injected into the SCU, and the F/RR scale was applied to assess the results. The results validate the performance of the monitors, and indicate that the Failure/Recovery Rating scale is a very useful tool for evaluating failure transients in fly-by-wire flight control systems.

  9. Mach 4 Test Results of a Dual-Flowpath, Turbine Based Combined Cycle Inlet

    NASA Technical Reports Server (NTRS)

    Albertson, Cindy w.; Emami, Saied; Trexler, Carl A.

    2006-01-01

    An experimental study was conducted to evaluate the performance of a turbine based combined cycle (TBCC) inlet concept, consisting of a low speed turbojet inlet and high speed dual-mode scramjet inlet. The main objectives of the study were (1) to identify any interactions between the low and the high speed inlets during the mode transition phase in which both inlets are operating simultaneously and (2) to determine the effect of the low speed inlet operation on the performance of the high speed inlet. Tests were conducted at a nominal freestream Mach number of 4 using an 8 percent scale model representing a single module of a TBCC inlet. A flat plate was installed upstream of the model to produce a turbulent boundary layer which simulated the full-scale vehicle forebody boundary layer. A flowmeter/back pressure device, with remote actuation, was attached aft of the high speed inlet isolator to simulate the back pressure resulting from dual-mode scramjet combustion. Results indicate that the inlets did not interact with each other sufficiently to affect inlet operability. Flow spillage resulting from a high speed inlet unstart did not propagate far enough upstream to affect the low speed inlet. Also, a low speed inlet unstart did not cause the high speed inlet to unstart. The low speed inlet improved the performance of the high speed inlet at certain conditions by diverting a portion of the boundary layer generated on the forebody plate.

  10. End-effects-regime in full scale and lab scale rocket nozzles

    NASA Astrophysics Data System (ADS)

    Rojo, Raymundo; Tinney, Charles; Baars, Woutijn; Ruf, Joseph

    2014-11-01

    Modern rockets utilize a thrust-optimized parabolic-contour nozzle design for its high performance and reliability. However, the evolving internal flow structures within these high-area-ratio rocket nozzles during start-up generate powerful vibro-acoustic loads that act on the launch vehicle. Modern rockets must be designed to accommodate these heavy loads or else risk catastrophic failure. This study quantifies a particular event referred to as the ``end-effects regime,'' the largest source of vibro-acoustic loading during start-up [Nave & Coffey, AIAA Paper 1973-1284]. Measurements from full-scale ignitions are compared with aerodynamically scaled representations in a fully anechoic chamber. Laboratory-scale data are then matched with both static and dynamic wall pressure measurements to capture the associated shock structures within the nozzle. The event generated during the ``end-effects regime'' was successfully reproduced in the lab-scale models and was characterized in terms of its mean, variance and skewness, as well as the spectral properties of the signal obtained by way of time-frequency analyses.

  11. Novel Surface Modification Method for Ultrasupercritical Coal-Fired Boilers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, T. Danny

    2013-05-22

    The US Department of Energy seeks an innovative coating technology for energy production to reduce emissions of SOx, NOx, and CO2. To address this need, Inframat Corporation (IMC) proposed an SPS thermal spray coating technique to produce ultrafine/nanocoatings that can be deposited onto the surfaces of high-temperature boiler tubes, enabling higher boiler operating temperatures and therefore significantly reduced emissions of toxic gaseous species. It should be noted that the original PI was Dr. Xinqing Ma, who left Inframat in December 2008 after 1.5 years on the project. The PI role was therefore transferred to Dr. Danny Xiao, who originally co-authored the proposal with Dr. Ma, to carry the project to completion. Phase II objectives: The proposed technology has the following attributes: (1) dispersion of a nanoparticle or alloyed particle in a solvent to form a uniform slurry feedstock; (2) feeding of the slurry feedstock into a thermal spray flame, followed by deposition onto substrates to form tenacious nanocoatings; (3) high coating performance, including high bonding strength and high-temperature service life in the range of 760 °C (1400 °F). On these premises, our Phase I project demonstrated feasibility with small-scale coatings on boiler substrates. The objective of this Phase II project was to scale up the demonstrated Phase I work to fabricate SPS coatings that satisfy DOE's emission reduction goals for energy production operations. Specifically: (1) solve engineering problems to scale up the SPS-HVOF delivery system to a prototype production sub-delivery system; (2) produce ultrafine/nanocoatings using the scaled-up prototype system; (3) demonstrate that components coated with the scaled-up device have superior properties. Proposed Phase II tasks: The original Phase II proposal comprised six technical tasks plus one reporting task: Task 1, scale up and optimize the SPS process; Task 2, coating design and fabrication with the desired microstructure; Task 3, evaluate microstructure and physical properties; Task 4, test long-term corrosion and erosion performance; Task 5, test mechanical properties and reliability; Task 6, coat a prototype boiler tube for evaluation; Task 7, reporting. All technical tasks (1 through 6) have been completed. Major Phase II achievements: Over this four-year period, Inframat worked intensively to complete the proposed tasks; the project is complete and its goals have been accomplished. Major achievements include: (1) development of a prototype scaled-up slurry feedstock delivery system for thermal spray coatings; (2) successful deposition of high-performance coatings using this scaled-up slurry delivery system; (3) development of commercial applications in energy efficiency and clean energy components using the newly fabricated slurry feedstock delivery system.

  12. Incorporation of Rubber Powder as Filler in a New Dry-Hybrid Technology: Rheological and 3D DEM Mastic Performances Evaluation

    PubMed Central

    Vignali, Valeria; Mazzotta, Francesco; Sangiorgi, Cesare; Simone, Andrea; Lantieri, Claudio; Dondi, Giulio

    2016-01-01

    In recent years, the use of crumb rubber as a modifier or additive in asphalt concretes has made it possible to obtain mixtures that combine high performance with the recovery and reuse of discarded tires. To date, the wet and dry processes are the most common technologies for reusing rubber powder. In this paper, a dry-hybrid technology for the production of Stone Mastic Asphalt mixtures is proposed. It allows the rubber powder to be used as a filler, replacing part of the limestone filler. The fillers are added and mixed with a high-workability bitumen modified with SBS (styrene-butadiene-styrene) polymer and paraffinic wax. The roles of rubber powder and limestone filler within the bituminous mastic were investigated through two different approaches. The first is a rheological approach, comprising a macro-scale laboratory analysis and a micro-scale DEM simulation. The second is a high-temperature performance approach, which includes Multiple Stress Creep Recovery tests. The results show that the rubber acts as a filler and improves the rheological characteristics of the polymer-modified bitumen. In particular, it increases stiffness and elasticity at high temperatures and reduces the complex modulus at low temperatures. PMID:28773965
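
    For readers unfamiliar with Multiple Stress Creep Recovery (MSCR) tests, the sketch below computes the two standard MSCR metrics, percent recovery and non-recoverable creep compliance (Jnr), for a single creep/recovery cycle. The strain values and the 3.2 kPa stress level are hypothetical; this mirrors common MSCR practice rather than the authors' exact analysis.

```python
# Illustrative computation of Multiple Stress Creep Recovery (MSCR) metrics
# for one creep/recovery cycle: percent recovery and non-recoverable creep
# compliance Jnr. Strain values and the 3.2 kPa stress level are assumptions.
def mscr_cycle(strain_initial, strain_peak, strain_end, stress_kpa):
    """One MSCR cycle: 1 s of creep at 'stress_kpa', then 9 s of recovery."""
    creep = strain_peak - strain_initial          # strain accumulated under load
    unrecovered = strain_end - strain_initial     # strain left after recovery
    recovery_pct = 100.0 * (creep - unrecovered) / creep
    jnr = unrecovered / stress_kpa                # non-recoverable compliance, 1/kPa
    return recovery_pct, jnr

r, jnr = mscr_cycle(strain_initial=0.0, strain_peak=1.8,
                    strain_end=0.6, stress_kpa=3.2)
print(f"recovery = {r:.1f} %, Jnr = {jnr:.3f} 1/kPa")
```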

  13. Progesterone lipid nanoparticles: Scaling up and in vivo human study.

    PubMed

    Esposito, Elisabetta; Sguizzato, Maddalena; Drechsler, Markus; Mariani, Paolo; Carducci, Federica; Nastruzzi, Claudio; Cortesi, Rita

    2017-10-01

    This investigation describes a scale-up study aimed at producing progesterone-containing nanoparticles at pilot scale. In particular, hot homogenization techniques based on ultrasound or high-pressure homogenization were employed to produce lipid nanoparticles constituted of tristearin, alone or in association with caprylic/capric triglyceride. The high-pressure homogenization method yielded nanoparticles without agglomerates and with smaller mean diameters than the ultrasound method. X-ray characterization suggested a lamellar structural organization for both types of nanoparticles. Progesterone encapsulation efficiency was almost 100% with the high-pressure homogenization method. A shelf-life study indicated a two-fold increase in progesterone stability when encapsulated in nanoparticles produced by the high-pressure homogenization method. Dialysis and Franz cell methods were performed to mimic subcutaneous and skin administration. Nanoparticles constituted of tristearin mixed with caprylic/capric triglyceride displayed a slower release of progesterone than nanoparticles of pure tristearin. Franz cell experiments showed higher progesterone skin uptake for the pure tristearin nanoparticles. A human in vivo study, based on tape stripping, was conducted to investigate the performance of the nanoparticles as progesterone skin delivery systems. Tape stripping results indicated a decrease of progesterone concentration in the stratum corneum within six hours, suggesting an interaction between the nanoparticle material and skin lipids. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Production of fullerenes with concentrated solar flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hale, M. J.; Fields, C.; Lewandowski, A.

    1994-01-01

    Research at the National Renewable Energy Laboratory (NREL) has demonstrated that fullerenes can be produced using highly concentrated sunlight from a solar furnace. Since they were first synthesized in 1989, fullerenes have been the subject of intense research. They show considerable commercial potential in advanced materials and have potential applications that include semiconductors, superconductors, high-performance metals, and medical technologies. The most common fullerene is C₆₀, which is a molecule with a geometry resembling a soccer ball. Graphite vaporization methods such as pulsed-laser vaporization, resistive heating, and carbon arc have been used to produce fullerenes. None of these, however, seems capable of producing fullerenes economically on a large scale. The use of concentrated sunlight may help avoid the scale-up limitations inherent in more established production processes. Recently, researchers at NREL made fullerenes in NREL's 10 kW High Flux Solar Furnace (HFSF) with a vacuum reaction chamber designed to deliver a solar flux of 1200 W/cm² to a graphite pellet. Analysis of the resulting carbon soot by mass spectrometry and high-pressure liquid chromatography confirmed the existence of fullerenes. These results are very encouraging and we are optimistic that concentrated solar flux can provide a means for large-scale, economical production of fullerenes. This paper presents our method, experimental apparatus, and results of fullerene production research performed with the HFSF.

  15. High-frequency self-aligned graphene transistors with transferred gate stacks.

    PubMed

    Cheng, Rui; Bai, Jingwei; Liao, Lei; Zhou, Hailong; Chen, Yu; Liu, Lixin; Lin, Yung-Chen; Jiang, Shan; Huang, Yu; Duan, Xiangfeng

    2012-07-17

    Graphene has attracted enormous attention for radio-frequency transistor applications because of its exceptionally high carrier mobility, high carrier saturation velocity, and large critical current density. Herein we report a new approach for the scalable fabrication of high-performance graphene transistors with transferred gate stacks. Specifically, arrays of gate stacks are first patterned on a sacrificial substrate and then transferred onto arbitrary substrates with graphene on top. A self-aligned process, enabled by the unique structure of the transferred gate stacks, is then used to precisely position the source and drain electrodes with minimized access resistance and parasitic capacitance. This process has therefore enabled scalable fabrication of self-aligned graphene transistors with unprecedented performance, including a record-high cutoff frequency of up to 427 GHz. Our study defines a unique pathway to large-scale fabrication of high-performance graphene transistors, and holds significant potential for future application of graphene-based devices in ultra-high-frequency circuits.
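
    As a rough plausibility check on numbers like the 427 GHz figure, the sketch below applies the standard quasi-static estimate f_T = g_m / (2π C_g) for a transistor's intrinsic cutoff frequency. The transconductance and gate capacitance values are placeholders, not device parameters from the paper.

```python
# Back-of-envelope check of an RF transistor's cutoff frequency using the
# standard quasi-static relation f_T = g_m / (2 * pi * C_g). The g_m and C_g
# values below are illustrative placeholders, not figures from the paper.
import math

def cutoff_frequency(gm_siemens, cg_farads):
    """Intrinsic cutoff frequency f_T in Hz."""
    return gm_siemens / (2 * math.pi * cg_farads)

gm = 2.5e-3        # transconductance, S (assumed)
cg = 1.0e-15       # total gate capacitance, F (assumed)
print(f"f_T ~ {cutoff_frequency(gm, cg) / 1e9:.0f} GHz")
```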

  16. Performance evaluation of four different methods for circulating water in commercial-scale, split-pond aquaculture systems

    USDA-ARS?s Scientific Manuscript database

    The split-pond consists of a fish-culture basin that is connected to a waste-treatment lagoon by two conveyance structures. Water is circulated between the two basins with high-volume pumps and many different pumping systems are being used on commercial farms. Pump performance was evaluated with fou...

  17. Parallel-vector solution of large-scale structural analysis problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Agarwal, Tarun K.

    1989-01-01

    A direct linear equation solution method based on the Choleski factorization procedure is presented which exploits both parallel and vector features of supercomputers. The new equation solver is described, and its performance is evaluated by solving structural analysis problems on three high-performance computers. The method has been implemented using Force, a generic parallel FORTRAN language.
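
    The sketch below shows a plain serial Cholesky (Choleski) factorization of a symmetric positive-definite matrix, the algorithm underlying the solver; the actual equation solver distributes and vectorizes these loops across processors, which this illustration does not attempt.

```python
# Serial reference implementation of the Cholesky (Choleski) factorization
# A = L L^T that the parallel-vector solver is built around. The production
# code exploits vectorization and parallel loops; this sketch only shows the
# underlying algorithm for a symmetric positive-definite matrix.
import numpy as np

def cholesky(a):
    a = np.array(a, dtype=float)
    n = a.shape[0]
    l = np.zeros_like(a)
    for j in range(n):
        # Diagonal entry: subtract squares of entries already computed in row j.
        l[j, j] = np.sqrt(a[j, j] - l[j, :j] @ l[j, :j])
        # Column j below the diagonal (these dot products vectorize well).
        for i in range(j + 1, n):
            l[i, j] = (a[i, j] - l[i, :j] @ l[j, :j]) / l[j, j]
    return l

a = np.array([[4.0, 2.0], [2.0, 3.0]])
l = cholesky(a)
print(np.allclose(l @ l.T, a))   # True: factorization reproduces A
```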

  18. Measuring the Sources of Self-Efficacy among Secondary School Music Students

    ERIC Educational Resources Information Center

    Zelenak, Michael S.

    2015-01-01

    The purpose of this study was to investigate the four sources of self-efficacy in music performance and examine responses from the Music Performance Self-Efficacy Scale (MPSES). Participants (N = 290) were middle and high school music students from 10 schools in two regions of the United States. Questions included the following: (1) How much…

  19. Modeling the Relations among Parental Involvement, School Engagement and Academic Performance of High School Students

    ERIC Educational Resources Information Center

    Al-Alwan, Ahmed F.

    2014-01-01

    The author proposed a model to explain how parental involvement and school engagement related to academic performance. Participants were 671 9th- and 10th-grade students who completed two scales, "parental involvement" and "school engagement," in their regular classrooms. Results of the path analysis suggested that the…

  20. Flow among Musicians: Measuring Peak Experiences of Student Performers

    ERIC Educational Resources Information Center

    Sinnamon, Sarah; Moran, Aidan; O'Connell, Michael

    2012-01-01

    "Flow" is a highly coveted yet elusive state of mind that is characterized by complete absorption in the task at hand as well as by enhanced skilled performance. Unfortunately, because most measures of this construct have been developed in physical activity and sport settings, little is known about the applicability of flow scales to the…

  1. Spatially uniform resistance switching of low current, high endurance titanium–niobium-oxide memristors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Suhas; Davila, Noraica; Wang, Ziwen

    2016-11-24

    Here we analyzed micrometer-scale titanium-niobium-oxide prototype memristors, which exhibited low write-power (< 3 μW) and energy (< 200 fJ per bit per μm²), low read-power (~nW), and high endurance (> millions of cycles). To understand their physico-chemical operating mechanisms, we performed in operando synchrotron X-ray transmission nanoscale spectromicroscopy using an ultra-sensitive time-multiplexed technique. We observed only spatially uniform material changes during cell operation, in sharp contrast to the frequently detected formation of a localized conduction channel in transition-metal-oxide memristors. We also associated the response of assigned spectral features distinctly to non-volatile storage (resistance change) and writing of information (application of voltage and Joule heating). Lastly, these results provide critical insights into high-performance memristors that will aid in device design, scaling and predictive circuit-modeling, all of which are essential for the widespread deployment of successful memristor applications.

  2. Scale Model Test and Transient Analysis of Steam Injector Driven Passive Core Injection System for Innovative-Simplified Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Ohmori, Shuichi; Narabayashi, Tadashi; Mori, Michitsugu

    A steam injector (SI) is a simple, compact, passive pump that also acts as a high-performance direct-contact compact heater. This gives the SI the capability to serve as a direct-contact feed-water heater that heats feed-water using steam extracted from the turbine. Our technology development aims to significantly simplify equipment and reduce physical quantities by applying high-efficiency SIs, which operate over a wide range of regimes beyond the performance and applicable range of existing SIs and enable unprecedented multistage and parallel operation, to the low-pressure feed-water heaters and emergency core cooling system of nuclear power plants. It also aims to achieve high inherent safety, preventing severe accidents by keeping the core covered with water (a severe-accident-free concept). This paper describes the results of the scale model test and the transient analysis of the SI-driven passive core injection system (PCIS).

  3. Polymeric molecular sieve membranes via in situ cross-linking of non-porous polymer membrane templates.

    PubMed

    Qiao, Zhen-An; Chai, Song-Hai; Nelson, Kimberly; Bi, Zhonghe; Chen, Jihua; Mahurin, Shannon M; Zhu, Xiang; Dai, Sheng

    2014-04-16

    High-performance polymeric membranes for gas separation are attractive for molecular-level separations in industrial-scale chemical, energy and environmental processes. Molecular sieving materials are widely regarded as the next-generation membranes to simultaneously achieve high permeability and selectivity. However, most polymeric molecular sieve membranes are based on a few solution-processable polymers such as polymers of intrinsic microporosity. Here we report an in situ cross-linking strategy for the preparation of polymeric molecular sieve membranes with hierarchical and tailorable porosity. These membranes demonstrate exceptional performance as molecular sieves with high gas permeabilities and selectivities for smaller gas molecules, such as carbon dioxide and oxygen, over larger molecules such as nitrogen. Hence, these membranes have potential for large-scale gas separations of commercial and environmental relevance. Moreover, this strategy could provide a possible alternative to 'classical' methods for the preparation of porous membranes and, in some cases, the only viable synthetic route towards certain membranes.

  4. Design Sketches For Optical Crossbar Switches Intended For Large-Scale Parallel Processing Applications

    NASA Astrophysics Data System (ADS)

    Hartmann, Alfred; Redfield, Steve

    1989-04-01

    This paper discusses the design of large-scale (1000×1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple-access channel communications protocols (unslotted and slotted ALOHA, and carrier sense multiple access, CSMA) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication at arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
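
    The ALOHA protocols compared above have a standard textbook throughput model, sketched below: S = G e^{-2G} for unslotted and S = G e^{-G} for slotted ALOHA, where G is the offered load in packets per packet time. This is the generic analysis, not the paper's optical-crossbar-specific performance results.

```python
# Classic throughput model for the multiple-access protocols the paper
# compares: unslotted ALOHA S = G * exp(-2G) and slotted ALOHA S = G * exp(-G),
# where G is the offered load in packets per packet time. This is the textbook
# analysis, not the paper's optical-crossbar-specific results.
import math

def aloha_unslotted(g):
    return g * math.exp(-2 * g)

def aloha_slotted(g):
    return g * math.exp(-g)

for g in (0.25, 0.5, 1.0):
    print(f"G={g:.2f}: unslotted S={aloha_unslotted(g):.3f}, "
          f"slotted S={aloha_slotted(g):.3f}")
# Peak throughputs: 1/(2e) ~ 0.184 at G=0.5 (unslotted), 1/e ~ 0.368 at G=1.
```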

  5. Light scattering optimization of chitin random network in ultrawhite beetle scales

    NASA Astrophysics Data System (ADS)

    Utel, Francesco; Cortese, Lorenzo; Pattelli, Lorenzo; Burresi, Matteo; Vignolini, Silvia; Wiersma, Diederik

    2017-09-01

    Among natural white photonic structures, one bio-system has become of great interest in the field of disordered optical media: the scale of the white beetle Cyphochilus. Despite its low thickness (7 μm on average) and low refractive index, this beetle exhibits extremely high brightness and unique whiteness. These properties arise from the interaction of light with a complex network of chitin nanofilaments embedded in the interior of the scales. As recently claimed, this could be a consequence of the peculiar morphology of the filament network, which optimizes the multiple scattering of light by means of a high filling fraction (0.61) and structural anisotropy. We therefore performed a numerical analysis of the structural properties of the chitin network in order to understand their role in the enhancement of the scale's scattering intensity. Modeling the filaments as interconnected rod-shaped scattering centers, we numerically generated the spatial coordinates of the network components. By controlling the quantities claimed to play a fundamental role in the brightness and whiteness of the system (filling fraction and average rod orientation, i.e., the anisotropy of the ensemble of scattering centers), we obtained a set of customized random networks. FDTD simulations of light transport were performed on these systems, showing high reflectance across all visible frequencies and proving that the implemented structure-generation algorithm is suitable for investigating the dependence of reflectance on anisotropy.
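
    A minimal sketch of this kind of structure generation, with rods placed in a slab and an anisotropy parameter biasing their orientations toward the scale plane, is given below. The slab dimensions, rod count, and the form of the anisotropy bias are illustrative assumptions, not the authors' algorithm.

```python
# Sketch of generating a random rod network with controlled anisotropy, in the
# spirit of the abstract: rods are placed uniformly in a slab and their
# orientations are biased toward the slab plane by an anisotropy parameter.
# All parameters (slab size, rod count, 'anisotropy') are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def rod_network(n_rods, box_xy=10.0, thickness=7.0, anisotropy=0.5):
    """Return rod midpoints and unit orientation vectors.

    anisotropy in [0, 1): 0 gives isotropic orientations; values near 1
    flatten rods into the x-y plane (out-of-plane component suppressed).
    """
    centers = rng.uniform([0, 0, 0], [box_xy, box_xy, thickness], (n_rods, 3))
    phi = rng.uniform(0, 2 * np.pi, n_rods)          # in-plane angle
    # Suppress the z-component to make the network planar-anisotropic.
    uz = rng.uniform(-1, 1, n_rods) * (1 - anisotropy)
    r = np.sqrt(1 - uz**2)
    directions = np.stack([r * np.cos(phi), r * np.sin(phi), uz], axis=1)
    return centers, directions

centers, dirs = rod_network(2000, anisotropy=0.6)
print("mean |u_z| (lower = more planar):", np.abs(dirs[:, 2]).mean())
```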

  6. High-pressure Xenon Gas Electroluminescent TPC Concept for Simultaneous Searches for Neutrino-less Double Beta Decay & WIMP Dark Matter

    NASA Astrophysics Data System (ADS)

    Nygren, David

    2013-04-01

    Xenon is an especially attractive candidate for both direct WIMP and 0νββ decay searches. Although the current trend has exploited the liquid phase, gas-phase xenon offers some remarkable performance advantages for energy resolution, topology visualization, and discrimination between electron and nuclear recoils. The NEXT-100 experiment, now beginning construction in the Canfranc Underground Laboratory, Spain, will operate at 12 bars with 100 kg of ¹³⁶Xe for the 0νββ decay search. I will describe recent results with small prototypes, indicating that NEXT-100 can provide about 0.5% FWHM energy resolution at the decay Q-value of 2457.83 keV, as well as rejection of γ-rays by topology. However, sensitivity goals for WIMP dark matter and 0νββ decay searches indicate the need for ton-scale active masses; NEXT-100 provides the springboard to reach this scale with xenon gas. I describe a scenario for performing both searches in a single high-pressure ton-scale xenon gas detector, without significant compromise to either. In addition, even in a single, ton-scale, high-pressure xenon gas TPC, an intrinsic sensitivity to the nuclear recoil direction may exist, plausibly offering an advance of more than two orders of magnitude relative to current low-pressure TPC concepts. I argue that, in an era of deepening fiscal austerity, such a dual-purpose detector may be possible, at acceptable cost, within the time frame of interest, and deserves our collective attention.

  7. Renormalization Group scale-setting in astrophysical systems

    NASA Astrophysics Data System (ADS)

    Domazet, Silvije; Štefančić, Hrvoje

    2011-09-01

    A more general scale-setting procedure for General Relativity with Renormalization Group corrections is proposed. Theoretical aspects of the scale-setting procedure and the interpretation of the Renormalization Group running scale are discussed. The procedure is elaborated for several highly symmetric systems with matter in the form of an ideal fluid and for two models of running of the Newton coupling and the cosmological term. For a static spherically symmetric system with the matter obeying the polytropic equation of state the running scale-setting is performed analytically. The obtained result for the running scale matches the Ansatz introduced in a recent paper by Rodrigues, Letelier and Shapiro which provides an excellent explanation of rotation curves for a number of galaxies. A systematic explanation of the galaxy rotation curves using the scale-setting procedure introduced in this Letter is identified as an important future goal.

  8. Nanoscale friction properties of graphene and graphene oxide

    DOE PAGES

    Berman, Diana; Erdemir, Ali; Zinovev, Alexander V.; ...

    2015-04-03

    Achieving superlow friction and wear at the micro/nano-scales through the use of solid and liquid lubricants may allow superior performance and long-lasting operation in a range of micromechanical systems, including micro-electro-mechanical systems (MEMS). Previous studies have indicated that conventional solid lubricants such as highly ordered pyrolytic graphite (HOPG) can only afford low friction in humid environments at micro/macro scales, but HOPG is not suitable for practical micro-scale applications. Here, we explored the nano-scale frictional properties of multi-layered graphene films as a potential solid lubricant for such applications. Atomic force microscopy (AFM) measurements revealed that for high-purity multilayered graphene (7–9 layers), the friction force is significantly lower than what can be achieved with HOPG, regardless of the counterpart AFM tip material. We demonstrated that the quality and purity of multilayered graphene play an important role in reducing lateral forces, while oxidation of graphene results in dramatically increased friction values. Furthermore, for the first time, we demonstrated the possibility of achieving ultralow friction for CVD-grown single-layer graphene on silicon dioxide. This confirms that the deposition process ensures stronger adhesion to the substrate and hence enables superior tribological performance compared with the previously reported mechanical exfoliation processes.

  9. Highly Efficient Parallel Multigrid Solver For Large-Scale Simulation of Grain Growth Using the Structural Phase Field Crystal Model

    NASA Astrophysics Data System (ADS)

    Guan, Zhen; Pekurovsky, Dmitry; Luce, Jason; Thornton, Katsuyo; Lowengrub, John

    The structural phase field crystal (XPFC) model can be used to model grain growth in polycrystalline materials at diffusive time-scales while maintaining atomic-scale resolution. However, the governing equation of the XPFC model is an integro-partial differential equation (IPDE), which poses challenges for implementation on high-performance computing (HPC) platforms. In collaboration with the XSEDE Extended Collaborative Support Service, we developed a distributed-memory HPC solver for the XPFC model that combines parallel multigrid with P3DFFT. Performance benchmarking on the Stampede supercomputer indicates near-linear strong and weak scaling, up to 1024 cores, for both the multigrid module and the data transfer between the multigrid and FFT modules. Scalability of the FFT module begins to decline at 128 cores, but it is sufficient for the type of problem we will be examining. We have demonstrated simulations using 1024 cores, and we expect to reach 4096 cores and beyond. Ongoing work involves optimization of MPI/OpenMP-based codes for the Intel KNL many-core architecture. This optimizes the code for upcoming pre-exascale systems, in particular many-core systems such as Stampede 2.0 and Cori 2 at NERSC, without sacrificing efficiency on other general HPC systems.
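
    To illustrate the multigrid component, the sketch below implements a textbook V-cycle for a 1D Poisson problem with weighted-Jacobi smoothing and full-weighting restriction. It is a schematic stand-in for the parallel 3D multigrid/P3DFFT solver described above, with illustrative grid sizes and iteration counts.

```python
# Textbook multigrid V-cycle for -u'' = f on [0, 1] with zero Dirichlet
# boundaries. A schematic stand-in for the parallel 3D multigrid described
# in the abstract; grid size and cycle counts are illustrative.
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    # Weighted-Jacobi relaxation.
    for _ in range(sweeps):
        u[1:-1] += omega * ((u[:-2] + u[2:] + h**2 * f[1:-1]) / 2 - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    return r

def restrict(r):
    # Full-weighting restriction onto the coarse grid.
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def v_cycle(u, f, h):
    if u.size <= 3:
        return smooth(u, f, h, sweeps=20)    # trivial coarsest-grid "solve"
    u = smooth(u, f, h)                      # pre-smoothing
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    e = np.zeros_like(u)                     # prolongation by interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)               # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)                        # exact solution: sin(pi x) / pi^2
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x) / np.pi**2).max())
```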

  10. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind model (Wind Information Field Fast Analysis model, WIFFA) within a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel data and the NJU-FZ (Nanjing University-Fang Zhuang neighborhood) wind tunnel data. The results show that the wind model reproduces the vortices triggered by urban buildings well, and that the flow patterns in urban street canyons and building clusters are also well represented. Owing to the complex shapes of the buildings and their distributions, discrepancies from the measurements are usually caused by the simplification of building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared with traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min on a personal computer.

  11. Energy transfer, pressure tensor, and heating of kinetic plasma

    NASA Astrophysics Data System (ADS)

    Yang, Yan; Matthaeus, William H.; Parashar, Tulasi N.; Haggerty, Colby C.; Roytershteyn, Vadim; Daughton, William; Wan, Minping; Shi, Yipeng; Chen, Shiyi

    2017-07-01

    The kinetic plasma turbulence cascade spans multiple scales, ranging from macroscopic fluid flow to sub-electron scales. The mechanisms that dissipate large-scale energy, terminate the inertial-range cascade, and convert kinetic energy into heat are hotly debated. Here, we revisit these puzzles using fully kinetic simulation. By performing scale-dependent spatial filtering on the Vlasov equation, we extract information at prescribed scales and introduce several energy transfer functions. This approach allows the highly inhomogeneous energy cascade to be quantified as it proceeds down to kinetic scales. The pressure work, −(P · ∇) · u, can open a channel of energy conversion between fluid flow and random motions, which contains a collision-free generalization of the viscous dissipation in a collisional fluid. Both the energy transfer and the pressure work are strongly correlated with velocity gradients.
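
    A minimal sketch of the filtered pressure-work diagnostic is shown below: fields are low-pass filtered and the term −(P · ∇) · u = −Σ_ij P_ij ∂u_i/∂x_j is evaluated on a periodic 2D grid. The synthetic fields, filter width, and grid are illustrative assumptions, not simulation data.

```python
# Sketch of the pressure-work diagnostic -(P . grad) . u = -P_ij du_i/dx_j
# evaluated on a periodic 2D grid after Gaussian low-pass filtering, mirroring
# the scale-dependent filtering approach described. Fields here are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
dx = L / n

# Synthetic velocity and pressure-tensor fields (placeholders for kinetic data).
u = np.stack([np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)])   # u_i(x, y)
P = np.empty((2, 2, n, n))
P[0, 0], P[1, 1] = 1 + 0.1 * np.cos(X), 1 + 0.1 * np.cos(Y)
P[0, 1] = P[1, 0] = 0.05 * np.sin(X + Y)

def filt(field, sigma=4.0):
    return gaussian_filter(field, sigma=sigma, mode="wrap")     # periodic filter

ub = np.stack([filt(c) for c in u])                  # filtered velocity
Pb = np.stack([[filt(P[i, j]) for j in range(2)] for i in range(2)])

# -(P . grad) . u = -sum_ij P_ij * du_i/dx_j  (pressure-strain interaction)
pw = -sum(Pb[i, j] * np.gradient(ub[i], dx, axis=j)
          for i in range(2) for j in range(2))
print("domain-averaged pressure work:", pw.mean())
```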

  12. Hydrogen Production from Nuclear Energy via High Temperature Electrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James E. O'Brien; Carl M. Stoots; J. Stephen Herring

    2006-04-01

    This paper presents the technical case for high-temperature nuclear hydrogen production. A general thermodynamic analysis of hydrogen production based on high-temperature thermal water splitting processes is presented. Specific details of hydrogen production based on high-temperature electrolysis are also provided, including results of recent experiments performed at the Idaho National Laboratory. Based on these results, high-temperature electrolysis appears to be a promising technology for efficient large-scale hydrogen production.
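
    The thermodynamic argument for high-temperature electrolysis can be made concrete with the sketch below, which evaluates the reversible cell voltage E = ΔG/(nF) using roughly constant gas-phase values for the reaction enthalpy and entropy of steam splitting. The constants are approximations for illustration, not results from the experiments.

```python
# Why high temperature helps electrolysis: the reversible cell voltage
# E = dG / (n F) falls as the T*dS term grows. Rough gas-phase constants for
# H2O(g) -> H2 + 1/2 O2 (dH ~ 247 kJ/mol, dS ~ 55 J/mol/K, treated as
# temperature-independent) are approximations for illustration only.
F = 96485.0        # Faraday constant, C/mol
N = 2              # electrons transferred per H2O molecule
DH = 247e3         # J/mol, approximate reaction enthalpy (steam)
DS = 55.0          # J/(mol K), approximate reaction entropy

def reversible_voltage(t_kelvin):
    dg = DH - t_kelvin * DS          # Gibbs energy of the reaction
    return dg / (N * F)

for t_c in (100, 500, 850, 1000):
    print(f"{t_c:4d} C -> E_rev ~ {reversible_voltage(t_c + 273.15):.2f} V")
```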

  13. Prevention and management of silica scaling in membrane distillation using pH adjustment

    DOE PAGES

    Bush, John A.; Vanneste, Johan; Gustafson, Emily M.; ...

    2018-02-27

    Membrane scaling by silica is a major challenge in desalination, particularly for inland desalination of brackish groundwater or geothermal resources, which often contain high concentrations of silica and dissolved solids. Adjustment of feed pH may reduce silica scaling risk, which is important for inland facilities that operate at high water recoveries to reduce brine disposal costs. However, water recovery of reverse osmosis is also limited due to increased osmotic pressure with feed water concentration. Membrane distillation (MD) is a thermally driven membrane desalination technique that is not limited by increased osmotic pressure of the feed. In this investigation, pH adjustment was tested as a strategy to reduce silica scaling risk in the MD process. With feed water pH less than 5 or higher than 10, scaling impacts were negligible at silica concentrations up to 600 mg/L. Scaling rates were highest at neutral pH between 6 and 8. Cleaning strategies were also explored to remove silica scale from membranes. Cleaning using NaOH solutions at pH higher than 11 to induce dissolution of silica scale was effective at temporarily restoring performance; however, some silica remained on membrane surfaces and scaling upon re-exposure to supersaturated silica concentrations occurred faster than with new membranes.

  14. Prevention and management of silica scaling in membrane distillation using pH adjustment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, John A.; Vanneste, Johan; Gustafson, Emily M.

    Membrane scaling by silica is a major challenge in desalination, particularly for inland desalination of brackish groundwater or geothermal resources, which often contain high concentrations of silica and dissolved solids. Adjustment of feed pH may reduce silica scaling risk, which is important for inland facilities that operate at high water recoveries to reduce brine disposal costs. However, water recovery of reverse osmosis is also limited due to increased osmotic pressure with feed water concentration. Membrane distillation (MD) is a thermally driven membrane desalination technique that is not limited by increased osmotic pressure of the feed. In this investigation, pH adjustment was tested as a strategy to reduce silica scaling risk in the MD process. With feed water pH less than 5 or higher than 10, scaling impacts were negligible at silica concentrations up to 600 mg/L. Scaling rates were highest at neutral pH between 6 and 8. Cleaning strategies were also explored to remove silica scale from membranes. Cleaning using NaOH solutions at pH higher than 11 to induce dissolution of silica scale was effective at temporarily restoring performance; however, some silica remained on membrane surfaces and scaling upon re-exposure to supersaturated silica concentrations occurred faster than with new membranes.

  15. Atomic and close-to-atomic scale manufacturing—A trend in manufacturing development

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou

    2016-12-01

    Manufacturing is the foundation of a nation's economy. It is the primary industry promoting economic and social development. To accelerate and upgrade China's manufacturing sector from "precision manufacturing" to "high-performance and high-quality manufacturing", a new breakthrough is needed to achieve a "leap-frog development". Unlike conventional manufacturing, the fundamental theory of "Manufacturing 3.0" is beyond the scope of conventional theory; rather, it is based on new principles and theories at the atomic and/or close-to-atomic scale. Obtaining a dominant role at the international level is a strategic move for China's progress.

  16. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and the complexity of biomedical data collected from various sources. These planet-scale data bring serious challenges to storage and computing technologies. Cloud computing is an attractive alternative because it jointly addresses the storage of, and high-performance computing on, large-scale data. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should help biomedical research make the vast amount of diverse data meaningful and usable. PMID:24288665

  17. Efficacy of High-volume Evacuator in Aerosol Reduction: Truth or Myth? A Clinical and Microbiological Study.

    PubMed

    Desarda, Hitesh; Gurav, Abhijit; Dharmadhikari, Chandrakant; Shete, Abhijeet; Gaikwad, Subodh

    2014-01-01

    Background and aims. Basic periodontal treatment aims at eliminating supra- and sub-gingival plaque and establishing conditions which allow effective self-performed plaque control. This aim is primarily achieved with sonic and ultrasonic scalers. However, the generation of bacterial aerosols during these procedures is of great concern to patients, the dentist and the dental assistant. The aim of this study was to compare the reduction in aerosols with and without a high-volume evacuator through a microbiological study. Materials and methods. A fumigated, closed operatory was selected for this clinical study. The maxillary incisors and canines were selected as the area for scaling. Piezoelectric ultrasonic scaling was performed in the absence and in the presence of a high-volume evacuator at 12 and 20 inches from the patient's oral cavity. In both groups scaling was carried out for 10 minutes. Nutrient agar plates were exposed for a total of 20 minutes and then incubated at 37°C for 24 hours. The next day the plates were examined for colony-forming units by a single microbiologist. Results. The results showed no statistically significant differences in colony-forming units (CFU) with and without the use of a high-volume evacuator at either 12 or 20 inches from the patient's oral cavity. Conclusion. It was concluded that a high-volume evacuator, when used as a separate unit without any modification, is not effective in reducing aerosol counts and environmental contamination.

  18. Rapid, high-resolution measurement of leaf area and leaf orientation using terrestrial LiDAR scanning data

    USDA-ARS?s Scientific Manuscript database

    The rapid evolution of high performance computing technology has allowed for the development of extremely detailed models of the urban and natural environment. Although models can now represent sub-meter-scale variability in environmental geometry, model users are often unable to specify the geometr...

  19. Analyzing seasonal patterns of wildfire exposure factors in Sardinia, Italy

    Treesearch

    Michele Salis; Alan A. Ager; Fermin J. Alcasena; Bachisio Arca; Mark A. Finney; Grazia Pellizzaro; Donatella Spano

    2015-01-01

    In this paper, we applied landscape-scale wildfire simulation modeling to explore the spatiotemporal patterns of wildfire likelihood and intensity on the island of Sardinia (Italy). We also performed a wildfire exposure analysis for selected highly valued resources on the island to identify areas characterized by high risk. We observed substantial variation in burn...

  20. Adhesive performance of washed cottonseed meal at high solid contents and low temperatures

    USDA-ARS?s Scientific Manuscript database

    Water-washed cottonseed meal (WCSM) has been shown to be a promising biobased wood adhesive. Recently, we prepared WCSM at pilot scale to promote its industrial application. In this work, we tested the adhesive strength and viscosity of the adhesive preparation with high solid contents (up to 30%...
