Science.gov

Sample records for scale performance studies

  1. Small-Scale Performance Testing for Studying New Explosives

    SciTech Connect

    Gagliardi, F J; Chambers, R D; Tran, T D

    2005-04-29

    The development of new high-explosive (HE) formulations involves characterizing their safety and performance. Small-scale experiments requiring only a small amount of explosives are of interest because they can facilitate development while minimizing hazards and reducing cost. A detonation-spreading, dent test, called the Floret test, was designed to obtain performance data for new explosives. It utilizes the detonation of about a 1.0 g sample of HE, initiated by an accelerated aluminum flyer. Upon impact, the HE sample detonates and a copper witness plate absorbs the ensuing shock wave. The dent of the plate is then measured and correlated to the energetic output of the HE. Additionally, the dent measurement can be used to compare the performance of different explosives. The Floret test is beneficial because it quickly returns important performance information, while requiring only a small explosive sample. This work will explain the Floret test and discuss some exemplary results.

  2. Experimental study of full-scale iced-airfoil aerodynamic performance using sub-scale simulations

    NASA Astrophysics Data System (ADS)

    Busch, Greg T.

    Determining the aerodynamic effects of ice accretion on aircraft surfaces is an important step in aircraft design and certification. The goal of this work was to develop a complete sub-scale wind tunnel simulation methodology, based on knowledge of the detailed iced-airfoil flowfield, that allows the accurate measurement of aerodynamic penalties associated with the accretion of ice on an airfoil, and to validate this methodology using full-scale iced-airfoil performance data obtained at near-flight Reynolds numbers. In earlier work, several classifications of ice shape were developed based on key aerodynamic features in the iced-airfoil flowfield: ice roughness, streamwise ice, horn ice, and tall and short spanwise-ridge ice. Castings of each of these classifications were acquired on a full-scale NACA 23012 airfoil model, and the aerodynamic performance of each was measured at a Reynolds number of 12.0 × 10⁶ and a Mach number of 0.20. In the current study, sub-scale simple-geometry and 2-D smooth simulations of each of these castings were constructed based on knowledge of iced-airfoil flowfields. The effects of each simulation on the aerodynamic performance of an 18-inch chord NACA 23012 airfoil model were measured in the University of Illinois 3 × 4 ft wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 and compared with those measured for the corresponding full-scale casting at high Reynolds number. Geometrically-scaled simulations of the horn-ice and tall spanwise-ridge ice castings modeled Cl,max to within 2% and Cd,min to within 15%. Good qualitative agreement in the Cp distributions suggests that important geometric features such as horn and ridge height, surface location, and angle with respect to the airfoil chordline were appropriately modeled. Geometrically-scaled simulations of the ice roughness, streamwise ice, and short-ridge ice tended to have conservative Cl,max and Cd. The aerodynamic performance of simulations of these types of
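
    To make the quoted conditions concrete, here is a minimal sketch of the chord-based Reynolds number and Mach number used above, assuming standard sea-level air properties rather than the actual tunnel conditions; the computed value lands near the quoted 1.8 × 10⁶ for the 18-inch chord model.

    ```python
    # Hypothetical illustration of the sub-scale test condition quoted above,
    # assuming standard sea-level air properties (not the actual tunnel state).

    def reynolds_number(rho, V, c, mu):
        """Chord-based Reynolds number Re = rho*V*c/mu."""
        return rho * V * c / mu

    RHO = 1.225      # kg/m^3, sea-level air density (assumed)
    MU = 1.789e-5    # Pa*s, dynamic viscosity of air (assumed)
    A = 340.3        # m/s, speed of sound (assumed)

    chord = 18 * 0.0254      # 18-inch model chord in metres
    V = 0.18 * A             # flow speed corresponding to Mach 0.18

    print(f"Re = {reynolds_number(RHO, V, chord, MU):.2e}")   # ~1.9e6, near the quoted 1.8e6
    print(f"M  = {V / A:.2f}")                                # 0.18
    ```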

  3. Evaluating Large-Scale Studies to Accurately Appraise Children's Performance

    ERIC Educational Resources Information Center

    Ernest, James M.

    2012-01-01

    Educational policy is often developed using a top-down approach. Recently, there has been a concerted shift in policy for educators to develop programs and research proposals that evolve from "scientific" studies and focus less on their intuition, aided by professional wisdom. This article analyzes several national and international educational…

  4. A numerical study of scale effects on performance of a tractor type podded propeller

    NASA Astrophysics Data System (ADS)

    Choi, Jung-Kyu; Park, Hyoung-Gil; Kim, Hyoung-Tae

    2014-06-01

    In this study, the scale effect on the performance of a tractor-type podded propeller is investigated. Turbulent flow computations are carried out for Reynolds numbers increasing progressively from model scale to full scale using CFD analysis. The flow calculation at model-scale Reynolds numbers agrees well with experimental results from a large cavitation tunnel. Existing numerical analysis indicates that the performance of the podded propeller blades is influenced mainly by the advance coefficient and relatively little by the Reynolds number. However, the drag of the pod housing with the propeller in operation differs from that of the pod housing without the propeller, owing to the acceleration and swirl of the propeller slipstream, which is altered by the propeller loading, as well as to the pressure recovery and friction, which vary with Reynolds number. This suggests that the pod housing drag with the propeller in operation is the key factor in the scale effect on performance between model- and full-scale podded propellers. The so-called "drag ratio", the ratio of pod housing drag to total thrust of the podded propeller, increases as the advance coefficient increases because of the accelerated flow in the slipstream of the podded propeller. However, the rate of increase of the drag ratio falls continuously as the Reynolds number increases progressively from model to full scale. The contributions of the hydrodynamic forces acting on the pod housing components, with the propeller operating at various loading conditions, to the thrust and torque of the total propeller unit are presented for a range of Reynolds numbers from model to full scale.
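
    The quantities driving the discussion above are standard propeller non-dimensionals. The sketch below shows how the advance coefficient, thrust coefficient, and the "drag ratio" are formed; all numerical values are hypothetical placeholders, not results from the study.

    ```python
    # Illustrative (made-up) numbers for the non-dimensional quantities discussed above:
    # advance coefficient J = Va/(n*D), thrust coefficient KT = T/(rho*n^2*D^4),
    # and the "drag ratio" = pod-housing drag / total propeller thrust.

    rho = 1025.0     # kg/m^3, sea water (assumed)
    D = 0.25         # m, model propeller diameter (hypothetical)
    n = 20.0         # rev/s, rotation rate (hypothetical)
    Va = 3.0         # m/s, advance speed (hypothetical)
    T_prop = 400.0   # N, propeller thrust (hypothetical)
    D_pod = 40.0     # N, pod-housing drag with the propeller operating (hypothetical)

    J = Va / (n * D)                     # advance coefficient
    KT = T_prop / (rho * n**2 * D**4)    # thrust coefficient
    drag_ratio = D_pod / T_prop          # the scale-effect parameter highlighted above

    print(f"J = {J:.2f}, KT = {KT:.2f}, drag ratio = {drag_ratio:.2f}")
    ```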

  5. A Simulation Study on the Performance of Four Multidimensional IRT Scale Linking Methods

    ERIC Educational Resources Information Center

    Wei, Youhua

    2008-01-01

    Scale linking is the process of developing the connection between scales of two or more sets of parameter estimates obtained from separate test calibrations. It is the prerequisite for many applications of IRT, such as test equating and differential item functioning analysis. Unidimensional scale linking methods have been studied and applied…

  6. Updating the Cognitive Performance Scale.

    PubMed

    Morris, John N; Howard, Elizabeth P; Steel, Knight; Perlman, Christopher; Fries, Brant E; Garms-Homolová, Vjenka; Henrard, Jean-Claude; Hirdes, John P; Ljunggren, Gunnar; Gray, Len; Szczerbińska, Katarzyna

    2016-01-01

    This study presents the first update of the Cognitive Performance Scale (CPS) in 20 years. Its goals are 3-fold: extend the category options; characterize how the new scale variant tracks with the Mini-Mental State Examination; and present a series of associative findings. A secondary analysis of data from 3733 older adults from 8 countries was completed. Scale dimensions were examined using the original and new items with a forward-entry stepwise regression. The revised scale was validated by examining its distribution against a self-reported dementia diagnosis, functional problems, living status, and distress measures. Cognitive Performance Scale 2 (CPS2) extends the measurement metric from the original CPS range of 0 to 6 to a range of 0 to 8. Relating CPS2 to other measures of function, living status, and distress showed that changes in these external measures correspond with increased challenges in cognitive performance. CPS2 enables repeated assessments that are sensitive to changes, particularly at early levels of cognitive decline. PMID:26251111

  7. Scaling formula of ICF ignition targets and study of targets optimized in stability performance

    NASA Astrophysics Data System (ADS)

    Li, Xin; Dai, Zhensheng; Zheng, Wudi

    2014-10-01

    Laser-plasma instabilities (LPI) and the Rayleigh-Taylor instability (RTI) are the two main factors affecting the success of ignition. The gas fill near the Au wall along the inner laser cone is the main region where stimulated Raman scattering (SRS) instabilities are driven. In this region, pressure balance and energy balance between the inside and the outside of the inner laser cone path are obtained, and a plasma scaling model for ICF ignition hohlraums has been developed. According to linear theory, RTI can be described by the in-flight aspect ratio (IFAR). Combining this with other scaling formulas for the capsule, an index, the Stability Performance Index (SPI), has been proposed, which describes the balance between LPI and RTI. The design of ignition targets is guided by this index to obtain more margin for both LPI and RTI.

  8. Study of performance scaling of 22-nm epitaxial delta-doped channel MOS transistor

    NASA Astrophysics Data System (ADS)

    Sengupta, Sarmista; Pandit, Soumya

    2015-06-01

    An epitaxial delta-doped channel (EδDC) profile is a promising approach for extending the scalability of bulk metal oxide semiconductor (MOS) technology for low-power system-on-chip applications. A comparative study between an EδDC bulk MOS transistor with gate length Lg = 22 nm and a conventional uniformly doped channel (UDC) bulk MOS transistor, with respect to various digital and analogue performance metrics, is presented. The study has been performed using the Silvaco technology computer-aided design device simulator, calibrated against experimental results. The study reveals that at smaller gate lengths the EδDC transistor outperforms the UDC transistor with respect to the various performance metrics studied. The reduced contribution of the lateral electric field in the channel plays the key role in this regard. Further, the carrier mobility in the EδDC transistor is higher than in the UDC transistor. For moderate gate and drain bias, the impact ionisation rate of the carriers in the EδDC MOS transistor is lower than in the UDC transistor. In addition, at 22 nm, the performance of an EδDC transistor is competitive with that of an ultra-thin-body silicon-on-insulator transistor.

  9. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGESBeta

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not satisfy maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). The PETSc and TAO libraries are used, respectively, for the parallel environment and the optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of the current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
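
    For readers unfamiliar with the terminology, "strong scaling" is usually reported as speedup and parallel efficiency at a fixed problem size. The sketch below shows those definitions on placeholder timings; it is not data from the paper.

    ```python
    # Minimal sketch of the strong-scaling metrics typically reported in HPC studies.
    # The core counts and wall times below are placeholders, not results from the paper.

    def strong_scaling(cores, wall_times):
        """Return (speedup, parallel efficiency) relative to the smallest core count."""
        base_cores, base_time = cores[0], wall_times[0]
        speedup = [base_time / t for t in wall_times]
        efficiency = [s * base_cores / c for s, c in zip(speedup, cores)]
        return speedup, efficiency

    cores = [16, 32, 64, 128]                    # hypothetical core counts
    wall_times = [1000.0, 520.0, 270.0, 150.0]   # hypothetical solve times (s)

    for c, s, e in zip(cores, *strong_scaling(cores, wall_times)):
        print(f"{c:4d} cores: speedup {s:5.2f}, efficiency {e:4.2f}")
    ```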

  10. Performance study of protective clothing against hot water splashes: from bench scale test to instrumented manikin test.

    PubMed

    Lu, Yehu; Song, Guowen; Wang, Faming

    2015-03-01

    Hot liquid hazards present in work environments pose a considerable risk to industrial workers. In this study, the predicted protective performance of fabrics was assessed with a modified hot liquid splash tester, under conditions with and without an air spacer. The protective performance of a garment exposed to hot water spray was investigated with a spray manikin evaluation system, and a three-dimensional body scanning technique was used to characterize the air gap size between the protective clothing and the manikin skin. The relationship between the bench-scale test and the manikin test was examined, and a regression model was established to predict the overall percentage of skin burn while wearing protective clothing. The results demonstrated strong correlations between the bench-scale test and the manikin test. Based on these studies, the overall performance of protective clothing against hot water spray can be estimated from the results of the bench-scale hot water splash test and the air gap size entrapped in the clothing. The findings provide effective guidance for design and material selection in the development of high-performance protective clothing. PMID:25349371
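
    The abstract does not give the regression's form, so the sketch below assumes an ordinary least-squares fit of burn percentage on two predictors (a bench-scale protection metric and the mean air gap size); all data are synthetic placeholders, not the study's measurements.

    ```python
    # Hedged sketch of the kind of regression described above. Synthetic data only.
    import numpy as np

    # columns: bench-scale protection metric, mean air gap size (mm)  (both hypothetical)
    X = np.array([[12.0, 22.0],
                  [15.0, 18.0],
                  [18.0, 30.0],
                  [22.0, 26.0],
                  [25.0, 38.0]])
    y = np.array([48.0, 44.0, 31.0, 26.0, 17.0])   # % body burn from manikin tests (synthetic)

    # ordinary least squares with an intercept term
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    intercept, b_bench, b_gap = coef

    print(f"burn% ≈ {intercept:.1f} + {b_bench:.2f}*bench + {b_gap:.2f}*gap")
    print("fitted:", np.round(A @ coef, 1), "observed:", y)
    ```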

  11. Study on the sensing performance of OFBG under large-scale negative strain

    NASA Astrophysics Data System (ADS)

    Wang, Chuan; Hu, Qingli; Ou, Jinping

    2010-03-01

    As a new and sensitive sensing element, the optical fiber Bragg grating (OFBG) has been widely used in aerospace and civil engineering. Its sensing mechanism and properties have been studied extensively, but its sensing properties under large negative strain have rarely been investigated. In this paper, by exploiting the large shrinkage of polypropylene (PP) during curing, strain changes of about -13000 με were obtained by embedding a bare OFBG inside a PP bar, allowing the sensing properties of the OFBG to be studied at this strain level. The results show that the OFBG retains its sensing properties well: the linearity, repeatability, and shape of the reflected spectrum around the centre wavelength all remain reasonable. The strain sensitivity coefficient of the PP-OFBG is about 0.85 pm/με, which is very close to the value calculated when strain transmission between the PP and the OFBG is taken into account. These results are helpful for the further use of OFBG in other applications.
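
    Given the reported sensitivity of about 0.85 pm/με, a Bragg wavelength shift maps to strain by simple division. The sketch below shows that conversion; the wavelength shift used is illustrative (chosen so the result is roughly the -13000 με reached in the experiment), not a measured value from the paper.

    ```python
    # Convert a Bragg wavelength shift to strain using the reported ~0.85 pm/microstrain.

    K_EPS = 0.85e-12   # m per microstrain, strain sensitivity reported above

    def strain_from_shift(delta_lambda_m, k=K_EPS):
        """Return strain in microstrain from a Bragg wavelength shift in metres."""
        return delta_lambda_m / k

    shift = -11.05e-9  # m, hypothetical compressive wavelength shift (about -11 nm)
    print(f"strain ≈ {strain_from_shift(shift):.0f} microstrain")   # ≈ -13000
    ```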

  12. Study of Micro and Nano Scale Features in the Fabrication, Performance, and Degradation of Advanced Engineering Materials

    NASA Astrophysics Data System (ADS)

    Lombardo, Jeffrey John

    Increasingly, modern engineering materials are designed on a micron or nano scale to fulfill a given set of requirements or to enhance the material's performance. In this dissertation, several such materials are studied: catalyst particles for carbon nanotube (CNT) growth, by atomic force microscopy (AFM) and X-ray photoelectron spectroscopy (XPS); multi-walled carbon nanotubes (MWNTs), by reactor-scale modeling; hermetic carbon coatings, by focused ion beam/scanning electron microscopy (FIB/SEM) and Fourier transform infrared spectroscopy (FTIR), the latter performed by Andrei Stolov at OFS Specialty Photonics Division (Avon, CT); and Ni/yttria-stabilized zirconia (YSZ) solid oxide fuel cell (SOFC) anodes, using X-ray nanotomography (XNT) and X-ray fluorescence (XRF), the latter performed by Barry Lai at the APS (Argonne National Laboratory, IL). For each material, a subset of the material properties is examined to determine how the selected property affects the fabrication, performance, or degradation of the material. The analysis shows that, although the materials are different, the study of micron- and nano-scale features has many common traits: X-rays and electrons are frequently used to examine nanoscale structures, numerical study can be exploited to expedite measurements and extract additional information from experiments, and such work requires knowledge across many scientific fields. As a product of this research, detailed information about all of the materials studied has been contributed to the scientific literature, including size-dependence information about the oxidation states of nanometer-sized iron particles, optimal CVD reactor growth conditions for different CNT catalyst particle sizes and numbers of walls, a technique for rapid measurement of hermetic carbon film thickness, and detailed microstructural and sulfur-poisoning maps for Ni/YSZ SOFC anodes.

  13. Sensitivity of Utility-Scale Solar Deployment Projections in the SunShot Vision Study to Market and Performance Assumptions

    SciTech Connect

    Eurek, K.; Denholm, P.; Margolis, R.; Mowers, M.

    2013-04-01

    The SunShot Vision Study explored the potential growth of solar markets if solar prices decreased by about 75% from 2010 to 2020. The ReEDS model was used to simulate utility PV and CSP deployment for this present study, based on several market and performance assumptions - electricity demand, natural gas prices, coal retirements, cost and performance of non-solar renewable technologies, PV resource variability, distributed PV deployment, and solar market supply growth - in addition to the SunShot solar price projections. This study finds that utility-scale solar deployment is highly sensitive to solar prices. Other factors can have significant impacts, particularly electricity demand and natural gas prices.

  14. Performance Evaluation of Wearable Sensor Systems: A Case Study in Moderate-Scale Deployment in Hospital Environment.

    PubMed

    Sun, Wen; Ge, Yu; Zhang, Zhiqiang; Wong, Wai-Choong

    2015-01-01

    A wearable sensor system enables continuous and remote health monitoring and is widely considered as the next generation of healthcare technology. The performance, the packet error rate (PER) in particular, of a wearable sensor system may deteriorate due to a number of factors, particularly the interference from the other wearable sensor systems in the vicinity. We systematically evaluate the performance of the wearable sensor system in terms of PER in the presence of such interference in this paper. The factors that affect the performance of the wearable sensor system, such as density, traffic load, and transmission power in a realistic moderate-scale deployment case in hospital are all considered. Simulation results show that with 20% duty cycle, only 68.5% of data transmission can achieve the targeted reliability requirement (PER is less than 0.05) even in the off-peak period in hospital. We then suggest some interference mitigation schemes based on the performance evaluation results in the case study. PMID:26426015

  16. Sensitivity study of a large-scale air pollution model by using high-performance computations and Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Ostromsky, Tz.; Dimov, I.; Georgieva, R.; Marinov, P.; Zlatev, Z.

    2013-10-01

    In this paper we present some new results of our work on sensitivity analysis of a large-scale air pollution model, more specifically the Danish Eulerian Model (DEM). The main purpose of this study is to analyse the sensitivity of ozone concentrations with respect to the rates of some chemical reactions. The current sensitivity study considers the rates of six important chemical reactions and is carried out for the areas of several European cities with different geographical locations, climate, industrialization, and population density. Widely used variance-based techniques for sensitivity analysis, namely Sobol estimates and their modifications, are applied in this study. A vast number of numerical experiments with a version of the Danish Eulerian Model specially adapted for this purpose (SA-DEM) were carried out to compute global Sobol sensitivity measures. SA-DEM was implemented and run on two powerful cluster supercomputers: IBM Blue Gene/P, the most powerful parallel supercomputer in Bulgaria, and IBM MareNostrum III, the most powerful parallel supercomputer in Spain. The refined (480 × 480) mesh version of the model was used in the experiments on MareNostrum III, which is a challenging computational problem even on such a powerful machine. Some optimizations of the code with respect to parallel efficiency and memory use were performed. Tables with performance results from a number of numerical experiments on IBM Blue Gene/P and IBM MareNostrum III are presented and analysed.
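
    As background on the variance-based measures mentioned above, the sketch below implements a generic Monte Carlo (pick-freeze) estimator of first-order Sobol indices on a toy model; the toy function, sample size, and uniform inputs stand in for the DEM's reaction-rate inputs and are not taken from the paper.

    ```python
    # Generic Monte Carlo estimator of first-order Sobol indices (Saltelli-style),
    # demonstrated on a toy model rather than the Danish Eulerian Model.
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):
        """Toy model standing in for 'ozone concentration vs. reaction rates'."""
        return 2.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

    def first_order_sobol(f, dim, n=100_000):
        A = rng.uniform(size=(n, dim))
        B = rng.uniform(size=(n, dim))
        fA, fB = f(A), f(B)
        total_var = np.var(np.concatenate([fA, fB]))
        indices = []
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]               # replace only the i-th input column
            Vi = np.mean(fB * (f(ABi) - fA))  # estimator of Var(E[f | x_i])
            indices.append(Vi / total_var)
        return indices

    print([round(s, 3) for s in first_order_sobol(model, dim=3)])
    ```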

  17. NAEP Validity Studies: Improving the Information Value of Performance Items in Large Scale Assessments. Working Paper No. 2003-08

    ERIC Educational Resources Information Center

    Pearson, P. David; Garavaglia, Diane R.

    2003-01-01

    The purpose of this essay is to explore both what is known and what needs to be learned about the information value of performance items "when they are used in large scale assessments." Within the context of the National Assessment of Educational Progress (NAEP), there is substantial motivation for answering these questions. Over the…

  18. Self-Beliefs Mediate Math Performance between Primary and Lower Secondary School: A Large-Scale Longitudinal Cohort Study

    ERIC Educational Resources Information Center

    Reed, Helen C.; Kirschner, Paul A.; Jolles, Jelle

    2015-01-01

    It is often argued that enhancement of self-beliefs should be one of the key goals of education. However, very little is known about the relation between self-beliefs and performance when students move from primary to secondary school in highly differentiated educational systems with early tracking. This large-scale longitudinal cohort study…

  19. Design and performance of a full-scale spray calciner for nonradioactive high-level-waste-vitrification studies

    SciTech Connect

    Miller, F.A.

    1981-06-01

    In the spray calcination process, liquid waste is spray-dried in a heated-wall spray dryer (termed a spray calciner), and the resulting solid may then be combined with a glass-forming frit. This mixture is then melted in a continuous ceramic melter or in an in-can melter. Several sizes of spray calciners have been tested at PNL: laboratory scale, pilot scale, and full scale. Summarized here is the experience gained during the operation of PNL's full-scale spray calciner, which has solidified approx. 38,000 L of simulated acid wastes and approx. 352,000 L of simulated neutralized wastes in 1830 h of processing time. Operating principles, operating experience, design aspects, and system descriptions of a full-scale spray calciner are discussed. Individual test run summaries are given in Appendix A. Appendices B and C are studies made by Bechtel Inc. under contract to PNL; these studies concern, respectively, feed systems for the spray calciner process and a spray calciner vibration analysis. Appendix D is a detailed structural analysis of the spray calciner made at PNL. These appendices are included to provide a complete description of the spray calciner and to collect all major studies made concerning PNL's full-scale spray calciner.

  20. A study on the effects of RGB-D database scale and quality on depth analogy performance

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Kim, Youngjung; Sohn, Kwanghoon

    2016-06-01

    In the past few years, depth estimation from a single image has received increased attention due to its wide applicability in image and video understanding. Many approaches have been developed for estimating depth from a single image based on various depth cues such as shading and motion. However, these approaches fail to estimate plausible depth maps when the input color image comes from a category not represented in the training images. To alleviate this problem, data-driven approaches have become popular, leveraging the discriminative power of a large-scale RGB-D database. These approaches assume that an appearance-depth correlation exists in natural scenes. However, this assumption becomes ambiguous when local image regions have similar appearance but different geometric placement within the scene. Recently, depth analogy (DA) has been developed, which uses the correlation between the color image and the depth gradient. DA addresses the depth-ambiguity problem effectively and shows reliable performance. However, no experiments have been conducted to investigate the relationship between database scale and the quality of the estimated depth map. In this paper, we extensively examine the effects of database scale and quality on the performance of the DA method. To compare quality, we collect a large-scale RGB-D database using Microsoft Kinect v1 and Kinect v2 in indoor environments and a ZED stereo camera in outdoor environments. Since the depth maps obtained by Kinect v2 are of higher quality than those of Kinect v1, the depth maps in the Kinect v2 database are more reliable. This indicates that a high-quality, large-scale RGB-D database supports high-quality depth estimation. The experimental results show that a high-quality, large-scale training database leads to higher-quality estimated depth maps in both indoor and outdoor scenes.

  1. Characterization of Filtration Scale-Up Performance

    SciTech Connect

    Daniel, Richard C.; Billing, Justin M.; Luna, Maria L.; Cantrell, Kirk J.; Peterson, Reid A.; Bonebrake, Michael L.; Shimskey, Rick W.; Jagoda, Lynette K.

    2009-03-09

    The scale-up performance of the sintered stainless steel crossflow filter elements planned for use at the Pretreatment Engineering Platform (PEP) and at the Waste Treatment and Immobilization Plant (WTP) was characterized in partial fulfillment (see Table S.1) of the requirements of Test Plan TP-RPP-WTP-509. This test report details the results of experimental activities related only to filter scale-up characterization. These tests were performed under the Simulant Testing Program supporting Phase 1 of the demonstration of the pretreatment leaching processes at PEP. Pacific Northwest National Laboratory (PNNL) conducted the tests discussed herein for Bechtel National, Inc. (BNI) to address the data needs of Test Specification 24590-WTP-TSP-RT-07-004. The scale-up characterization tests employ high-level waste (HLW) simulants developed under Test Plan TP-RPP-WTP-469. The experimental activities outlined in TP-RPP-WTP-509 examined specific processes from two broad areas of simulant behavior: 1) the leaching performance of the boehmite simulant as a function of suspending-phase chemistry, and 2) the filtration performance of the blended simulant with respect to filter scale-up and fouling. With regard to leaching behavior, the effect of anions on the kinetics of boehmite leaching was examined in two experiments: 1) one examined the effect of the aluminate anion on the rate of boehmite dissolution, and 2) the other determined the effect of secondary anions typical of Hanford tank wastes on the rate of boehmite dissolution. Both experiments provide insight into how compositional variations in the suspending phase affect the effectiveness of the leaching processes. In addition, the aluminate anion studies provide information on the consequences of gibbsite in the waste, owing to the expected fast dissolution of gibbsite relative to boehmite. This test report concerns only the results of the filtration performance with respect to scale-up. Test results for boehmite

  2. Connecting Performance to Social Structure and Pedagogy as a Pathway to Scaling Learning Analytics in MOOCs: An Exploratory Study

    ERIC Educational Resources Information Center

    Goggins, S. P.; Galyen, K. D.; Petakovic, E.; Laffey, J. M.

    2016-01-01

    This exploratory study focuses on the design and evaluation of teaching analytics that relate social learning structure with performance measures in a massive open online course (MOOC) prototype environment. Using reflexive analysis of online learning trace data and qualitative performance measures we present an exploratory empirical study that:…

  3. A Study on Developing "An Attitude Scale for Project and Performance Tasks for Turkish Language Teaching Course"

    ERIC Educational Resources Information Center

    Demir, Tazegul

    2013-01-01

    The main purpose of this study is to demonstrate students' attitudes towards project and performance tasks in Turkish lessons and to develop a reliable and valid measurement tool. A total of 461 junior high school students participated in the study. In this study, items were first prepared and specialists were consulted (content…

  4. Coding task performance in early adolescence: a large-scale controlled study into boy-girl differences

    PubMed Central

    Dekker, Sanne; Krabbendam, Lydia; Aben, Aukje; de Groot, Renate; Jolles, Jelle

    2013-01-01

    This study examined differences between boys and girls in the efficiency of information processing in early adolescence. Three hundred and six healthy adolescents (50.3% boys) in grades 7 and 9 (aged 13 and 15, respectively) performed a coding task based on over-learned symbols. An age effect was revealed, as subjects in grade 9 performed better than subjects in grade 7. Main effects of sex were found, in favor of girls: the best-performing 25% of students comprised twice as many girls as boys, and the opposite pattern was found for the worst-performing 25%. In addition, a main effect was found for educational track, in favor of the highest track. No interaction effects were found. School grades did not explain additional variance in performance on the coding task (LDST), which indicates that cognitive performance is relatively independent of school performance. Student characteristics such as age, sex, and education level were more important for the efficiency of information processing than school performance. The findings imply that after age 13, the efficiency of information processing is still developing and that girls outperform boys in this respect. The findings provide new information on the mechanisms underlying boy-girl differences in scholastic performance. PMID:23986733

  5. EEHG Performance and Scaling Laws

    SciTech Connect

    Penn, Gregory

    2013-10-09

    This note calculates the idealized performance of echo-enabled harmonic generation (EEHG), explores the parameter settings, and examines constraints set by incoherent synchrotron radiation (ISR) and intrabeam scattering (IBS). Another important effect, time-of-flight variation related to transverse emittance, is included here but without detailed explanation because it has been described previously. ISR and IBS matter because they produce random energy shifts that lead to temporal shifts after the various beam manipulations required by the EEHG scheme. These effects place competing constraints on the beamline. For chicane magnets that are too compact for a given R56, the magnetic fields will be strong enough that ISR blurs out the complex phase-space structure of the echo scheme to the point where the bunching is strongly suppressed. The effect of IBS is more pervasive and requires an overall compact beamline. It is particularly challenging for the second pulse in a two-color attosecond beamline, due to the long delay between the first energy modulation and the modulator for the second pulse.

  6. Quantifying and scaling airplane performance in turbulence

    NASA Astrophysics Data System (ADS)

    Richardson, Johnhenri R.

    This dissertation studies the effects of turbulent wind on airplane airspeed and normal load factor, determining how these effects scale with airplane size and developing envelopes to account for them. The results have applications in the design and control of aircraft, especially small-scale aircraft, for robustness with respect to turbulence. Using linearized airplane dynamics and the Dryden gust model, this dissertation presents analytical and numerical scaling laws for airplane performance in gusts, safety margins that guarantee, with specified probability, that steady flight can be maintained when stochastic wind gusts act upon an airplane, and envelopes to visualize these safety margins. Presented here for the first time are scaling laws for the phugoid natural frequency, phugoid damping ratio, airspeed variance in turbulence, and flight path angle variance in turbulence. The results show that small aircraft are more susceptible to high-frequency gusts, that the phugoid damping ratio does not depend directly on airplane size, that the airspeed and flight path angle variances can be parameterized by the ratio of the phugoid natural frequency to a characteristic turbulence frequency, and that the coefficient of variation of the airspeed decreases with increasing airplane size. Accompanying numerical examples validate the results using eleven different airplane models, focusing on NASA's hypothetical Boeing 757 analog, the Generic Transport Model, and its operational 5.5% scale model, the NASA T2. Also presented here for the first time are stationary flight, where the flight state is a stationary random process, and the stationary flight envelope, an adjusted steady flight envelope used to visualize safety margins for stationary flight. The dissertation shows that driving the linearized airplane equations of motion with stationary, stochastic gusts results in stationary flight. It also shows how feedback control can enlarge the stationary flight envelope by alleviating
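
    The phugoid scaling mentioned above can be illustrated with the classical Lanchester approximations (natural frequency falling with airspeed, damping ratio set by the lift-to-drag ratio rather than by size). These are textbook formulas, not necessarily the dissertation's exact results, and the two example airplanes below are hypothetical.

    ```python
    # Classical Lanchester-type phugoid approximations, for illustration only.
    import math

    G = 9.81  # m/s^2

    def phugoid_natural_frequency(V):
        """Approximate phugoid natural frequency (rad/s) at trim airspeed V (m/s)."""
        return math.sqrt(2.0) * G / V

    def phugoid_damping_ratio(lift_to_drag):
        """Approximate phugoid damping ratio; independent of airplane size."""
        return 1.0 / (math.sqrt(2.0) * lift_to_drag)

    # Hypothetical comparison: a small UAV vs. a transport-class airplane.
    for name, V, LD in [("small UAV", 15.0, 10.0), ("transport", 230.0, 17.0)]:
        print(f"{name}: omega_ph ≈ {phugoid_natural_frequency(V):.3f} rad/s, "
              f"zeta_ph ≈ {phugoid_damping_ratio(LD):.3f}")
    ```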

  7. Combining performance and outcome indicators can be used in a standardized way: a pilot study of two multidisciplinary, full-scale major aircraft exercises

    PubMed Central

    2012-01-01

    Background: Disaster medicine is a fairly young scientific discipline and there is a need for the development of new methods for evaluation and research, including for full-scale disaster exercises. A standardized concept for how to evaluate these exercises could make it easier to identify pitfalls caused by system errors in the organization. The aim of this study was to demonstrate the feasibility of using a combination of performance and outcome indicators so that results can be compared across standardized full-scale exercises. Methods: Two multidisciplinary, full-scale exercises were studied, in 2008 and 2010, with the same scenario setup. Sets of performance indicators combined with indicators for unfavorable patient outcome were recorded in predesigned templates. Evaluators, all trained in a standardized way at a national disaster medicine centre, scored the results at predetermined locations: at the scene, at the hospital, and at the regional command and control. Results: All data on the participants' performance indicators during the exercises were obtained, as were all data on the indicators for patient outcome, so the two exercises could be compared with respect to both performance (process) and outcome indicators. The performance-indicator data showed higher scores for prehospital command in the second exercise (15 points versus 3 points). The outcome indicators, patient survival and patient complications, showed a higher number of preventable deaths and a lower number of preventable complications in the 2010 exercise; in the 2008 exercise, the number of preventable deaths was lower and the number of preventable complications was higher. Conclusions: Standardized multidisciplinary, full-scale exercises in different settings can be conducted and evaluated with performance indicators combined with outcome indicators, enabling results from exercises to be compared. If exercises are

  8. Small-Scale High-Performance Optics

    SciTech Connect

    WILSON, CHRISTOPHER W.; LEGER, CHRIS L.; SPLETZER, BARRY L.

    2002-06-01

    Historically, high resolution, high slew rate optics have been heavy, bulky, and expensive. Recent advances in MEMS (Micro Electro Mechanical Systems) technology and micro-machining may change this. Specifically, the advent of steerable sub-millimeter sized mirror arrays could provide the breakthrough technology for producing very small-scale high-performance optical systems. For example, an array of steerable MEMS mirrors could be the building blocks for a Fresnel mirror of controllable focal length and direction of view. When coupled with a convex parabolic mirror the steerable array could realize a micro-scale pan, tilt and zoom system that provides full CCD sensor resolution over the desired field of view with no moving parts (other than MEMS elements). This LDRD provided the first steps towards the goal of a new class of small-scale high-performance optics based on MEMS technology. A large-scale, proof of concept system was built to demonstrate the effectiveness of an optical configuration applicable to producing a small-scale (< 1cm) pan and tilt imaging system. This configuration consists of a color CCD imager with a narrow field of view lens, a steerable flat mirror, and a convex parabolic mirror. The steerable flat mirror directs the camera's narrow field of view to small areas of the convex mirror providing much higher pixel density in the region of interest than is possible with a full 360 deg. imaging system. Improved image correction (dewarping) software based on texture mapping images to geometric solids was developed. This approach takes advantage of modern graphics hardware and provides a great deal of flexibility for correcting images from various mirror shapes. An analytical evaluation of blur spot size and axi-symmetric reflector optimization were performed to address depth of focus issues that occurred in the proof of concept system. The resulting equations will provide the tools for developing future system designs.

  9. Spreadsheet Based Scaling Calculations and Membrane Performance

    SciTech Connect

    Wolfe, T D; Bourcier, W L; Speth, T F

    2000-12-28

    Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO₄·2H₂O), BaSO₄, SrSO₄, SiO₂, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and use the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product "Q". The effective ion product is then compared to temperature adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI) for each solid of
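
    As a quick illustration of the saturation-index comparison described above, the sketch below uses the usual definition SI = log10(Q/Ksp) adopted by PHREEQE/WATEQ4F-family codes; the ion activities and the gypsum Ksp are illustrative values, not TFSP output.

    ```python
    # Saturation index from an effective ion product and a solubility product.
    import math

    def saturation_index(ion_activity_product, ksp):
        """SI > 0: supersaturated (scaling risk); SI < 0: undersaturated."""
        return math.log10(ion_activity_product / ksp)

    # Example: gypsum, CaSO4·2H2O, with an assumed Ksp of about 10**-4.58 at 25 °C.
    a_Ca = 2.0e-3    # activity of Ca2+ (hypothetical)
    a_SO4 = 1.5e-2   # activity of SO4 2- (hypothetical)
    a_H2O = 1.0      # water activity, ~1 in dilute solution
    Q = a_Ca * a_SO4 * a_H2O ** 2
    Ksp_gypsum = 10 ** -4.58

    print(f"SI(gypsum) = {saturation_index(Q, Ksp_gypsum):.2f}")
    ```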

  10. Performance of Children on the Community Balance and Mobility Scale

    ERIC Educational Resources Information Center

    Wright, Marilyn J.; Bos, Cecily

    2012-01-01

    This study describes the performance of children 8-11 years of age on the Community Balance and Mobility Scale (CB&M) and associations between performance and age, body mass index (BMI), and sex. A convenience sample of 84 was recruited. The CB&M was administered using instructions we developed for children. Mean CB&M total scores (95% confidence…

  11. Effects of different pretreatments on the performance of ceramic ultrafiltration membrane during the treatment of oil sands tailings pond recycle water: a pilot-scale study.

    PubMed

    Loganathan, Kavithaa; Chelme-Ayala, Pamela; El-Din, Mohamed Gamal

    2015-03-15

    Membrane filtration is an effective treatment method for oil sands tailings pond recycle water (RCW); however, membrane fouling and rapid decrease in permeate flux caused by colloids, organic matter, and bitumen residues present in the RCW hinder its successful application. This pilot-scale study investigated the impact of different pretreatment steps on the performance of a ceramic ultrafiltration (CUF) membrane used for the treatment of RCW. Two treatment trains were examined: treatment train 1 consisted of coagulant followed by a CUF system, while treatment train 2 included softening (Multiflo™ system) and coagulant addition, followed by a CUF system. The results indicated that minimum pretreatment (train 1) was required for almost complete solids removal. The addition of a softening step (train 2) provided an additional barrier to membrane fouling by reducing hardness-causing ions to negligible levels. More than 99% removal of turbidity and less than 20% removal of total organic carbon were achieved regardless of the treatment train used. Permeate fluxes normalized at 20 °C of 127-130 L/m²·h and 111-118 L/m²·h, with permeate recoveries of 90-93% and 90-94%, were observed for the treatment trains 1 and 2, respectively. It was also found that materials deposited onto the membrane surface had an impact on trans-membrane pressure and influenced the required frequencies of chemically enhanced backwashes (CEBs) and clean-in-place (CIP) procedures. The CIP performed was successful in removing fouling and scaling materials such that the CUF performance was restored to baseline levels. The results also demonstrated that due to their low turbidity and silt density index values, permeates produced in this pilot study were suitable for further treatment by high pressure membrane processes. PMID:25596922

  12. 30 CFR 57.3201 - Location for performing scaling.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Location for performing scaling. 57.3201... Control Scaling and Support-Surface and Underground § 57.3201 Location for performing scaling. Scaling shall be performed from a location which will not expose persons to injury from falling material,...

  13. 30 CFR 57.3201 - Location for performing scaling.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Location for performing scaling. 57.3201... Control Scaling and Support-Surface and Underground § 57.3201 Location for performing scaling. Scaling shall be performed from a location which will not expose persons to injury from falling material,...

  14. 30 CFR 56.3201 - Location for performing scaling.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Location for performing scaling. 56.3201 Section 56.3201 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Scaling and Support § 56.3201 Location for performing scaling. Scaling shall be performed from a...

  15. 30 CFR 56.3201 - Location for performing scaling.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Location for performing scaling. 56.3201 Section 56.3201 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Scaling and Support § 56.3201 Location for performing scaling. Scaling shall be performed from a...

  16. 30 CFR 56.3201 - Location for performing scaling.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Location for performing scaling. 56.3201 Section 56.3201 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR METAL AND... Scaling and Support § 56.3201 Location for performing scaling. Scaling shall be performed from a...

  17. 30 CFR 57.3201 - Location for performing scaling.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Location for performing scaling. 57.3201... Control Scaling and Support-Surface and Underground § 57.3201 Location for performing scaling. Scaling shall be performed from a location which will not expose persons to injury from falling material,...

  18. Methodology to determine the technical performance and value proposition for grid-scale energy storage systems : a study for the DOE energy storage systems program.

    SciTech Connect

    Byrne, Raymond Harry; Loose, Verne William; Donnelly, Matthew K.; Trudnowski, Daniel J.

    2012-12-01

    As the amount of renewable generation increases, the inherent variability of wind and photovoltaic systems must be addressed in order to ensure the continued safe and reliable operation of the nation's electricity grid. Grid-scale energy storage systems are uniquely suited to address the variability of renewable generation and to provide other valuable grid services. The goal of this report is to quantify the technical performance required to provide different grid benefits and to specify the proper techniques for estimating the value of grid-scale energy storage systems.

  19. Behavioral Observation Scales for Performance Appraisal Purposes

    ERIC Educational Resources Information Center

    Latham, Gary P.; Wexley, Kenneth N.

    1977-01-01

    This research attempts to determine whether Behavioral Observation Scales (BOS) could be improved by developing them through quantitative methods. The underlying assumption was that developing composite scales with greater internal consistency might improve their generalizability as evidenced by the cross-validation coefficients of scales based on…

  20. Detrending moving average algorithm: Frequency response and scaling performances.

    PubMed

    Carbone, Anna; Kiyono, Ken

    2016-06-01

    The Detrending Moving Average (DMA) algorithm has been widely used in its several variants for characterizing long-range correlations of random signals and sets (one-dimensional sequences or high-dimensional arrays) over either time or space. In this paper, mainly based on analytical arguments, the scaling performances of the centered DMA, including higher-order ones, are investigated by means of a continuous time approximation and a frequency response approach. Our results are also confirmed by numerical tests. The study is carried out for higher-order DMA operating with moving average polynomials of different degree. In particular, detrending power degree, frequency response, asymptotic scaling, upper limit of the detectable scaling exponent, and finite scale range behavior will be discussed. PMID:27415389
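
    For readers unfamiliar with the method, a minimal sketch of the zeroth-order (centered) DMA procedure follows: integrate the series, subtract a centered moving average, and read the scaling exponent from the log-log slope of the fluctuation function. The window sizes, white-noise test signal, and simple edge handling are choices made for this illustration, not details from the paper.

    ```python
    # Minimal centered DMA sketch; expects alpha ~ 0.5 for white noise.
    import numpy as np

    def dma_fluctuation(x, window_sizes):
        y = np.cumsum(x - np.mean(x))                            # integrated profile
        F = []
        for n in window_sizes:
            trend = np.convolve(y, np.ones(n) / n, mode="same")  # centered moving average
            half = n // 2
            resid = (y - trend)[half:len(y) - half]              # drop edge effects
            F.append(np.sqrt(np.mean(resid ** 2)))
        return np.asarray(F)

    rng = np.random.default_rng(1)
    x = rng.standard_normal(2 ** 14)
    windows = np.array([8, 16, 32, 64, 128, 256])
    F = dma_fluctuation(x, windows)
    alpha = np.polyfit(np.log(windows), np.log(F), 1)[0]
    print(f"estimated scaling exponent alpha ≈ {alpha:.2f}")
    ```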

  2. Technology for Large-Scale Translation of Clinical Practice Guidelines: A Pilot Study of the Performance of a Hybrid Human and Computer-Assisted Approach

    PubMed Central

    2015-01-01

    Background The construction of EBMPracticeNet, a national electronic point-of-care information platform in Belgium, began in 2011 to optimize quality of care by promoting evidence-based decision making. The project involved, among other tasks, the translation of 940 EBM Guidelines of Duodecim Medical Publications from English into Dutch and French. Considering the scale of the translation process, it was decided to make use of computer-aided translation performed by certificated translators with limited expertise in medical translation. Our consortium used a hybrid approach, involving a human translator supported by a translation memory (using SDL Trados Studio), terminology recognition (using SDL MultiTerm terminology databases) from medical terminology databases, and support from online machine translation. This resulted in a validated translation memory, which is now in use for the translation of new and updated guidelines. Objective The objective of this experiment was to evaluate the performance of the hybrid human and computer-assisted approach in comparison with translation unsupported by translation memory and terminology recognition. A comparison was also made with the translation efficiency of an expert medical translator. Methods We conducted a pilot study in which two sets of 30 new and 30 updated guidelines were randomized to one of three groups. Comparable guidelines were translated (1) by certificated junior translators without medical specialization using the hybrid method, (2) by an experienced medical translator without this support, and (3) by the same junior translators without the support of the validated translation memory. A medical proofreader, who was blinded to the translation procedure, evaluated the translated guidelines for acceptability and adequacy. Translation speed was measured by recording translation and post-editing time. The human translation edit rate was calculated as a metric to evaluate the quality of the translation. A
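
    The edit-rate metric mentioned above divides the number of edits needed to turn the raw translation into its post-edited version by the length of the post-edited text. The sketch below computes a word-level variant of that idea; it is an illustration in the spirit of the metric, not the study's actual tooling, and the two sentences are made up.

    ```python
    # Word-level edit rate: Levenshtein edits / post-edited length (illustrative only).

    def edit_distance(a, b):
        """Levenshtein distance between two token sequences."""
        prev = list(range(len(b) + 1))
        for i, ta in enumerate(a, 1):
            curr = [i]
            for j, tb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ta != tb)))   # substitution
            prev = curr
        return prev[-1]

    def edit_rate(raw_translation, post_edited):
        mt, pe = raw_translation.split(), post_edited.split()
        return edit_distance(mt, pe) / len(pe)

    raw = "the patient should takes the drug twice daily"
    edited = "the patient should take the drug twice a day"
    print(f"edit rate = {edit_rate(raw, edited):.2f}")
    ```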

  3. Large-Scale Organizational Performance Improvement.

    ERIC Educational Resources Information Center

    Pilotto, Rudy; Young, Jonathan O'Donnell

    1999-01-01

    Describes the steps involved in a performance improvement program in the context of a large multinational corporation. Highlights include a training program for managers that explained performance improvement; performance matrices; divisionwide implementation, including strategic planning; organizationwide training of all personnel; and the…

  4. Centrifugal fans: Similarity, scaling laws, and fan performance

    NASA Astrophysics Data System (ADS)

    Sardar, Asad Mohammad

    Centrifugal fans are rotodynamic machines used for moving air continuously against moderate pressures through ventilation and air conditioning systems. Five major topics are presented in this thesis: (1) analysis of the fan scaling laws and the consequences of dynamic similarity for modelling; (2) detailed flow visualization studies (in water) covering the flow path from the fan blade exit to the evaporator core of an actual HVAC fan scroll-diffuser module; (3) mean velocity and turbulence intensity measurements (flow field studies) at the inlet and outlet of a large-scale blower; (4) fan installation effects on overall fan performance and an evaluation of fan testing methods; and (5) two-point coherence and spectral measurements conducted on an actual HVAC fan module for flow-structure identification of possible aeroacoustic noise sources. A major objective of the study was to identify flow structures within the HVAC module that are responsible for noise, in particular "rumble noise" generation. Possible mechanisms for the generation of flow-induced noise in the automotive HVAC fan module are also investigated. It is demonstrated that different modes of HVAC operation represent very different internal flow characteristics, with implications for both fan HVAC airflow performance and noise characteristics. It is demonstrated from principles of complete dynamic similarity that the fan scaling laws require Reynolds number matching as a necessary condition for developing scale-model fans or fan test facilities. The physical basis for the fan scaling laws derived was established both from pure dimensional analysis and from the fundamental equations of fluid motion. Fan performance was measured in air on a three-times scale model (the large-scale blower) of an actual forward-curved automotive HVAC blower. Different fan testing methods (based on AMCA fan test codes) were compared on the basis of static pressure measurements. Also, the flow through an actual HVAC
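
    The scaling laws referred to above are commonly expressed as the fan affinity laws (flow ∝ ND³, pressure rise ∝ ρN²D², power ∝ ρN³D⁵); the thesis's point is that Reynolds-number matching is an additional requirement on top of them. The sketch below applies the affinity laws with illustrative numbers; the function name and all values are this example's own, not the thesis's data.

    ```python
    # Classical fan affinity laws with illustrative numbers.

    def scale_fan(Q1, dP1, W1, N1, N2, D1, D2, rho1=1.2, rho2=1.2):
        """Scale flow rate, pressure rise, and shaft power from state 1 to state 2."""
        n, d, r = N2 / N1, D2 / D1, rho2 / rho1
        return Q1 * n * d**3, dP1 * r * n**2 * d**2, W1 * r * n**3 * d**5

    # Hypothetical three-times scale model run at one third of the full-scale speed:
    Q2, dP2, W2 = scale_fan(Q1=0.05, dP1=250.0, W1=20.0,
                            N1=3000.0, N2=1000.0, D1=0.15, D2=0.45)
    print(f"Q = {Q2:.2f} m^3/s, dP = {dP2:.0f} Pa, power = {W2:.0f} W")
    ```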

  5. HIRIS performance study

    NASA Technical Reports Server (NTRS)

    Kerekes, John P.; Landgrebe, David A.

    1989-01-01

    The remote sensing system simulation is used to study a proposed sensor concept. An overview of the instrument and its parameters is presented, along with the model of the instrument as implemented in the simulation. Signal-to-noise levels of the instrument under a variety of system configurations are presented and discussed. Classification performance under these varying configurations is also shown, along with relationships between signal-to-noise ratios, feature selection, and classification performance.

  6. Scaling of Performance in Liquid Propellant Rocket Engine Combustors

    NASA Technical Reports Server (NTRS)

    Hulka, James

    2008-01-01

    The objectives are: a) Re-introduce to you the concept of scaling; b) Describe the scaling research conducted in the 1950s and early 1960s, and present some of their conclusions; c) Narrow the focus to scaling for performance of combustion devices for liquid propellant rocket engines; and d) Present some results of subscale to full-scale performance from historical programs. Scaling is "The ability to develop new combustion devices with predictable performance on the basis of test experience with old devices." Scaling can be used to develop combustion devices of any thrust size from any thrust size. Scaling is applied mostly to increase thrust. Objective is to use scaling as a development tool. - Move injector design from an "art" to a "science"

  7. Environmental and Economic Performance of Commercial-scale Solar Photovoltaic Systems: A Field Study of Complex Energy Systems at the Desert Research Institute (DRI)

    NASA Astrophysics Data System (ADS)

    Liu, X.

    2014-12-01

    Solar photovoltaic (PV) systems are being aggressively deployed at residential, commercial, and utility scales to complement power generation from conventional sources. This is motivated both by the desire to reduce carbon footprints and by policy-driven financial incentives. Although several life cycle analyses (LCA) have investigated environmental impacts and energy payback times of solar PV systems, most results are based on hypothetical systems rather than actual, deployed systems that can provide measured performance data. Over the past five years, Desert Research Institute (DRI) in Nevada has installed eight solar PV systems of scales from 3 to 1000 kW, the sum of which supply approximately 40% of the total power use at DRI's Reno and Las Vegas campuses. The goal of this work is to explore greenhouse gas (GHG) impacts and examine the economic performance of DRI's PV systems by developing and applying a comprehensive LCA and techno-economic (TEA) model. This model is built using data appropriate for each type of panel used in the DRI systems. Power output is modeled using the National Renewable Energy Laboratory (NREL) model PVWatts. The performance of PVWatts is verified by the actual measurements from DRI's PV systems. Several environmental and economic metrics are quantified for the DRI systems, including life cycle GHG emissions and energy return. GHG results are compared with Nevada grid-based electricity. Initial results indicate that DRI's solar-derived electricity offers clear GHG benefits compared to conventional grid electricity. DRI's eight systems have GHG intensity values of 29-56 gCO2e/kWh, as compared to the GHG intensity of 212 gCO2e/kWh of national average grid power. The major source of impacts (82-92% of the total) is the upstream life cycle burden of manufacturing PV panels, which are made of either mono-crystalline or multi-crystalline silicon. Given the same type of PV panel, GHG intensity decreases as the scale of the system increases
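
    The GHG intensity figures quoted above are, in essence, total embodied life-cycle emissions divided by lifetime electricity generation. The sketch below shows that arithmetic with hypothetical inputs and a simple linear degradation assumption; it is not a reproduction of the DRI systems' numbers.

    ```python
    # Life-cycle GHG intensity in gCO2e/kWh from hypothetical inputs.

    def ghg_intensity(embodied_kgco2e, first_year_kwh, lifetime_years, degradation=0.005):
        """Lifetime-average gCO2e per kWh, assuming simple linear output degradation."""
        lifetime_kwh = sum(first_year_kwh * (1 - degradation * year)
                           for year in range(lifetime_years))
        return embodied_kgco2e * 1000.0 / lifetime_kwh

    intensity = ghg_intensity(embodied_kgco2e=180_000.0,   # panels + BOS, ~100 kW array (assumed)
                              first_year_kwh=190_000.0,    # first-year yield, sunny site (assumed)
                              lifetime_years=25)
    print(f"≈ {intensity:.0f} gCO2e/kWh")
    ```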

  8. Performance Assessment of a Large Scale Pulsejet- Driven Ejector System

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.

    2006-01-01

    Unsteady thrust augmentation was measured on a large scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100 lb(sub f) pulsejet was tested with a series of straight, cylindrical ejectors of varying length and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the dimensions of the ejectors which maximize thrust augmentation, and to compare the dimensions and augmentation levels so obtained with those of other, similarly maximized, but smaller scale systems on which much of the recent unsteady ejector thrust augmentation research has been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than, the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet. This ratio closely matches those of the small scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet. This was also nearly the same as the small scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for general scaling of pulsed thrust ejector systems.
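
    The quantities quoted above reduce to simple ratios; the short Python sketch below restates them with the numbers taken from the abstract. The helper and the implied system-thrust values (augmentation times the 100 lbf driver thrust) are illustrative, not the authors' data-reduction code.

      # Thrust augmentation ratio: driver/ejector system thrust over bare-driver thrust.
      def thrust_augmentation(system_thrust_lbf, driver_thrust_lbf):
          return system_thrust_lbf / driver_thrust_lbf

      driver_thrust = 100.0                               # lbf, nominal pulsejet thrust
      print(thrust_augmentation(171.0, driver_thrust))    # ~1.71, straight cylindrical ejector
      print(thrust_augmentation(181.0, driver_thrust))    # ~1.81, tapered ejector

      d_pulsejet = 6.5                                    # in., pulsejet diameter
      print("best ejector diameter ~", 2.46 * d_pulsejet, "in.")
      print("best straight-ejector length ~", 10 * d_pulsejet, "in.")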

  9. Effective Rating Scale Development for Speaking Tests: Performance Decision Trees

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred; Kemp, Jenny

    2011-01-01

    Rating scale design and development for testing speaking is generally conducted using one of two approaches: the measurement-driven approach or the performance data-driven approach. The measurement-driven approach prioritizes the ordering of descriptors onto a single scale. Meaning is derived from the scaling methodology and the agreement of…

  10. SPREADSHEET BASED SCALING CALCULATIONS AND MEMBRANE PERFORMANCE

    EPA Science Inventory

    Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total...

  11. Correlation of full-scale helicopter rotor performance in air with model-scale Freon data

    NASA Technical Reports Server (NTRS)

    Yeager, W. T., Jr.; Mantay, W. R.

    1976-01-01

    An investigation was conducted in a transonic dynamics tunnel to measure the performance of a 1/5 scale model helicopter rotor in a Freon atmosphere. Comparisons were made between these data and full scale data obtained in air. Both the model and full scale tests were conducted at advance ratios between 0.30 and 0.40 and advancing tip Mach numbers between 0.79 and 0.95. Results show that correlation of model scale rotor performance data obtained in Freon with full scale rotor performance data in air is good with regard to data trends. Mach number effects were found to be essentially the same for the model rotor performance data obtained in Freon and the full scale rotor performance data obtained in air. It was determined that Reynolds number effects may be of the same magnitude as, or smaller than, rotor solidity effects or blade elastic modeling effects in rotor aerodynamic performance testing.
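
    The appeal of a heavy test gas is that its low speed of sound lets a small model reach full-scale tip Mach numbers at modest tip speeds, while its higher density partially recovers Reynolds number. The Python sketch below illustrates this with rough, assumed gas properties and a hypothetical rotor; none of the values are tunnel measurements from the study.

      # Rough illustration of heavy-gas scaling with placeholder properties (NOT tunnel data).
      def advancing_tip_mach(tip_speed, v_tunnel, speed_of_sound):
          return (tip_speed + v_tunnel) / speed_of_sound

      def chord_reynolds(rho, tip_speed, chord, mu):
          return rho * tip_speed * chord / mu

      # Assumed, approximate gas properties for illustration only.
      air   = dict(a=340.0, rho=1.2, mu=1.8e-5)   # m/s, kg/m^3, Pa*s
      freon = dict(a=150.0, rho=5.0, mu=1.2e-5)   # heavy test gas, roughly R-12-like

      full_tip_speed, full_chord = 220.0, 0.5      # m/s, m (hypothetical full-scale rotor)
      model_chord = full_chord / 5.0
      mu_adv = 0.35                                # advance ratio within the tested range

      target_mach = advancing_tip_mach(full_tip_speed, mu_adv * full_tip_speed, air["a"])
      model_tip_speed = target_mach * freon["a"] / (1.0 + mu_adv)   # matches advancing-tip Mach

      print(f"target advancing-tip Mach: {target_mach:.2f}")
      print(f"model tip speed in heavy gas: {model_tip_speed:.0f} m/s")
      print(f"Re full : {chord_reynolds(air['rho'], full_tip_speed, full_chord, air['mu']):.2e}")
      print(f"Re model: {chord_reynolds(freon['rho'], model_tip_speed, model_chord, freon['mu']):.2e}")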

  12. The performance of the K6 scale in a large school sample: A follow-up study evaluating measurement invariance on the Idaho Youth Prevention Survey.

    PubMed

    Peiper, Nicholas; Lee, Alexander; Lindsay, Stephanie; Drashner, Nathan; Wing, Janeena

    2016-06-01

    Since 2013, Idaho has been building capacity and infrastructure through the Strategic Prevention Framework State Incentive Grant to prevent substance abuse and related problems, namely psychiatric morbidity. As this federal initiative requires states to engage in data-driven strategic planning at the state and community levels, clinically validated instruments are particularly valuable in the context of school surveys that have limited space and require timely administration. Thus, the K6 scale was included on the 2014 Idaho Youth Prevention Survey as a measure of nonspecific psychological distress. To verify the unidimensional structure of the K6, principal axis and confirmatory factor analyses were performed in a school-based sample of Idaho students (n = 12,150). A series of multigroup confirmatory factor analyses were then performed to evaluate measurement invariance across gender, age, and race. Overall, the prevalence of serious psychological distress in the past 30 days was 17.2% in Idaho. Factor analyses confirmed the 1-factor solution of the K6. Four levels of measurement invariance were demonstrated across gender, age, and race. Together, these results further illustrate the construct validity of the K6 for use in adolescent populations. Other states are encouraged to include the K6 on their school surveys to facilitate policy planning and resource allocation as well as generate cross-state comparisons. PMID:26214014

  13. Scientific Application Performance on Candidate Petascale Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Iancu, Costin; Lijewski, Michael; Kamil, Shoaib; Shalf, John; Shan, Hongzang; Strohmaier, Erich; Ethier, Stephane; Goodale, Tom

    2007-01-01

    After a decade where HEC (high-end computing) capability was dominated by the rapid pace of improvements to CPU clock frequency, the performance of next-generation supercomputers is increasingly differentiated by varying interconnect designs and levels of integration. Understanding the tradeoffs of these system designs, in the context of high-end numerical simulations, is a key step towards making effective petascale computing a reality. This work represents one of the most comprehensive performance evaluation studies to date on modern HEC systems, including the IBM Power5, AMD Opteron, IBM BG/L, and Cray X1E. A novel aspect of our study is the emphasis on full applications, with real input data at the scale desired by computational scientists in their unique domain. We examine six candidate ultra-scale applications, representing a broad range of algorithms and computational structures. Our work includes the highest concurrency experiments to date on five of our six applications, including 32K-processor scalability for two of our codes, and we describe several successful optimization strategies on BG/L as well as improved X1E vectorization. Overall results indicate that our evaluated codes have the potential to effectively utilize petascale resources; however, several applications will require reengineering to incorporate the additional levels of parallelism necessary to achieve the vast concurrency of upcoming ultra-scale systems.

  14. Full-scale hingeless rotor performance and loads

    NASA Technical Reports Server (NTRS)

    Peterson, Randall L.

    1995-01-01

    A full-scale BO-105 hingeless rotor system was tested in the NASA Ames 40- by 80-Foot Wind Tunnel on the rotor test apparatus. Rotor performance, rotor loads, and aeroelastic stability as functions of both collective and cyclic pitch, tunnel velocity, and shaft angle were investigated. This test was performed in support of the Rotor Data Correlation Task under the U.S. Army/German Memorandum of Understanding on Cooperative Research in the Field of Helicopter Aeromechanics. The primary objective of this test program was to create a data base for full-scale hingeless rotor performance and structural blade loads. A secondary objective was to investigate the ability to match flight test conditions in the wind tunnel. This data base can be used for the experimental and analytical studies of hingeless rotor systems over large variations in rotor thrust and tunnel velocity. Rotor performance and structural loads for tunnel velocities from hover to 170 knots and thrust coefficients (C(sub T)/sigma) from 0.0 to 0.12 are presented in this report. Thrust sweeps at tunnel velocities of 10, 20, and 30 knots are also included in this data set.

  15. The performance of moss, grass, and 1- and 2-year old spruce needles as bioindicators of contamination: a comparative study at the scale of the Czech Republic.

    PubMed

    Suchara, Ivan; Sucharova, Julie; Hola, Marie; Reimann, Clemens; Boyd, Rognvald; Filzmoser, Peter; Englmaier, Peter

    2011-05-01

    Moss (Pleurozium schreberi), grass (Avenella flexuosa), and 1- and 2-year old spruce (Picea abies) needles were collected over the territory of the Czech Republic at an average sample density of 1 site per 290 km². The samples were analysed for 39 elements (Ag, Al, As, Ba, Be, Bi, Ca, Cd, Ce, Co, Cr, Cs, Cu, Fe, Ga, Hg, K, La, Li, Mg, Mn, Mo, Na, Nd, Ni, Pb, Pr, Rb, S, Sb, Se, Sn, Sr, Th, Tl, U, V, Y and Zn) using ICP-MS and ICP-AES techniques (the major nutrients Ca, K, Mg and Na were not analysed in moss). Moss showed by far the highest element concentrations for most elements. Exceptions were Ba (spruce), Mn (spruce), Mo (grass), Ni (spruce), Rb (grass) and S (grass). Regional distribution maps and spatial trend analysis were used to study the suitability of the four materials as bioindicators of anthropogenic contamination. The highly industrialised areas in the north-west and the far east of the country and several more local contamination sources were indicated in the distribution maps of one or several sample materials. At the scale of the whole country moss was the best indicator of known contamination sources. However, on a more local scale, it appeared that spruce needles were especially well suited for detection of urban contamination. PMID:21421258

  16. Exploring the Performance of Multifactor Dimensionality Reduction in Large Scale SNP Studies and in the Presence of Genetic Heterogeneity among Epistatic Disease Models

    PubMed Central

    Edwards, Todd L.; Lewis, Kenneth; Velez, Digna R.; Dudek, Scott; Ritchie, Marylyn D.

    2009-01-01

    Background/Aims: In genetic studies of complex disease a consideration for the investigator is detection of joint effects. The Multifactor Dimensionality Reduction (MDR) algorithm searches for these effects with an exhaustive approach. Previously unknown aspects of MDR performance were the power to detect interactive effects given large numbers of non-model loci or varying degrees of heterogeneity among multiple epistatic disease models. Methods: To address the performance with many non-model loci, datasets of 500 cases and 500 controls with 100 to 10,000 SNPs were simulated for two-locus models, and one hundred 500-case/500-control datasets with 100 and 500 SNPs were simulated for three-locus models. Multiple levels of locus heterogeneity were simulated in several sample sizes. Results: These results show MDR is robust to locus heterogeneity when the definition of power is not as conservative as in previous simulation studies where all model loci were required to be found by the method. The results also indicate that MDR performance is related more strongly to broad-sense heritability than sample size and is not greatly affected by non-model loci. Conclusions: A study in which a population with high heritability estimates is sampled predisposes the MDR study to success more than a larger ascertainment in a population with smaller estimates. PMID:19077437
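
    For readers unfamiliar with the algorithm, the Python sketch below shows the core of one MDR evaluation for a single two-locus model: pool the two-SNP genotype combinations, label each cell high- or low-risk by its case:control ratio, and score the resulting classifier across all SNP pairs. It is a minimal illustration assuming a balanced case-control design, not the implementation used in the study, which adds cross-validation and consistency counts.

      import itertools
      from collections import Counter

      def mdr_two_locus_accuracy(genotypes, status, snp_a, snp_b, threshold=1.0):
          """One MDR evaluation: a two-locus genotype cell is 'high risk' if its
          case:control count ratio exceeds `threshold` (the overall ratio in a
          balanced design); returns the balanced classification accuracy.

          genotypes : list of dicts mapping SNP name -> genotype code (0, 1, 2)
          status    : list of 1 (case) / 0 (control), same length
          """
          cases, controls = Counter(), Counter()
          for g, s in zip(genotypes, status):
              target = cases if s else controls
              target[(g[snp_a], g[snp_b])] += 1

          high_risk = {cell for cell in set(cases) | set(controls)
                       if cases[cell] > threshold * controls[cell]}

          tp = sum(cases[c] for c in high_risk)
          tn = sum(n for c, n in controls.items() if c not in high_risk)
          return 0.5 * (tp / sum(cases.values()) + tn / sum(controls.values()))

      def mdr_exhaustive(genotypes, status, snps):
          """Exhaustively score every SNP pair and return the best two-locus model."""
          return max(((a, b, mdr_two_locus_accuracy(genotypes, status, a, b))
                      for a, b in itertools.combinations(snps, 2)),
                     key=lambda t: t[2])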

  17. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  18. Scaling of Performance in Liquid Propellant Rocket Engine Combustors

    NASA Technical Reports Server (NTRS)

    Hulka, James R.

    2007-01-01

    This paper discusses scaling of combustion and combustion performance in liquid propellant rocket engine combustion devices. In development of new combustors, comparisons are often made between predicted performance in a new combustor and measured performance in another combustor with different geometric and thermodynamic characteristics. Without careful interpretation of some key features, the comparison can be misinterpreted and erroneous information used in the design of the new device. This paper provides a review of this performance comparison, including a brief review of the initial liquid rocket scaling research conducted during the 1950s and 1960s, a review of the typical performance losses encountered and how they scale, a description of the typical scaling procedures used in development programs today, and finally a review of several historical development programs to see what insight they can bring to the questions at hand.

  19. Scaling of Performance in Liquid Propellant Rocket Engine Combustion Devices

    NASA Technical Reports Server (NTRS)

    Hulka, James R.

    2008-01-01

    This paper discusses scaling of combustion and combustion performance in liquid propellant rocket engine combustion devices. In development of new combustors, comparisons are often made between predicted performance in a new combustor and measured performance in another combustor with different geometric and thermodynamic characteristics. Without careful interpretation of some key features, the comparison can be misinterpreted and erroneous information used in the design of the new device. This paper provides a review of this performance comparison, including a brief review of the initial liquid rocket scaling research conducted during the 1950s and 1960s, a review of the typical performance losses encountered and how they scale, a description of the typical scaling procedures used in development programs today, and finally a review of several historical development programs to see what insight they can bring to the questions at hand.

  20. Use of bench-scale digesters to evaluate full-scale digester performance

    SciTech Connect

    Murk, J.S.; Frieling, J.L.; Tortorici, L.D.; Dietrich, C.C.

    1980-11-01

    The use of properly designed laboratory-scale digestion facilities afforded an economical method to investigate the cause(s) of and remedies for a severe operational problem at the Encina wastewater treatment plant located in North San Diego County, California. These studies resulted in verification that the chronic foaming problems were related to inadequate mixing and heating and led to the implementation of design and operational modifications to optimize digester performance. As a result of this program, subsequent design changes, and the dedicated efforts of the plant operators, a severe operational problem has been eliminated.

  1. Administration Modifications on the WISC-R Performance Scale with Different Categories of Deaf Children.

    ERIC Educational Resources Information Center

    Sullivan, Patricia M.

    1982-01-01

    Two studies investigated the effects of administration modifications on subtest scaled scores of the Wechsler Intelligence Scale for Children-Revised (WISC-R) Performance Scale among different groups of 57 severely/profoundly hearing-impaired children. Total communication was found to result in higher scores on all subtests in the genetic and…

  2. Development and Validation of a Rating Scale for Wind Jazz Improvisation Performance

    ERIC Educational Resources Information Center

    Smith, Derek T.

    2009-01-01

    The purpose of this study was to construct and validate a rating scale for collegiate wind jazz improvisation performance. The 14-item Wind Jazz Improvisation Evaluation Scale (WJIES) was constructed and refined through a facet-rational approach to scale development. Five wind jazz students and one professional jazz educator were asked to record…

  3. High-temperature EBPR process: the performance, analysis of PAOs and GAOs and the fine-scale population study of Candidatus "Accumulibacter phosphatis".

    PubMed

    Ong, Ying Hui; Chua, Adeline Seak May; Fukushima, Toshikazu; Ngoh, Gek Cheng; Shoji, Tadashi; Michinaka, Atsuko

    2014-11-01

    The applicability of the enhanced biological phosphorus removal (EBPR) process for the removal of phosphorus in warm climates is uncertain due to frequent reports of EBPR deterioration at temperatures higher than 25 °C. Nevertheless, a recent report on a stable and efficient EBPR process at 28 °C has inspired the present study to examine the performance of EBPR at 24 °C-32 °C, as well as the PAOs and GAOs involved, in greater detail. Two sequencing batch reactors (SBRs) were operated for EBPR in parallel at different temperatures, i.e., SBR-1 at 28 °C and SBR-2 first at 24 °C and subsequently at 32 °C. Both SBRs exhibited high phosphorus removal efficiencies at all three temperatures and produced effluents with phosphorus concentrations less than 1.0 mg/L during the steady state of reactor operation. Real-time quantitative polymerase chain reaction (qPCR) revealed that Accumulibacter-PAOs comprised 64% of the total bacterial population at 24 °C, 43% at 28 °C and 19% at 32 °C. Based on fluorescent in situ hybridisation (FISH), the abundance of Competibacter-GAOs at both 24 °C and 28 °C was rather low (<10%), while it accounted for 40% of the total bacterial population at 32 °C. However, the smaller Accumulibacter population and larger population of Competibacter at 32 °C did not deteriorate the phosphorus removal performance. A polyphosphate kinase 1 (ppk1)-based qPCR analysis on all studied EBPR processes detected only Accumulibacter clade IIF. The Accumulibacter population shown by 16S rRNA and ppk1 was not significantly different. This finding confirmed the existence of single clade IIF in the processes and the specificity of the clade IIF primer sets designed in this study. Habitat filtering related to temperature could have contributed to the presence of a unique clade. The clade IIF was hypothesised to be able to perform the EBPR activity at high temperatures. The clade's robustness most likely helps it to fit the high-temperature EBPR

  4. Multi-Scale Multi-Dimensional Ion Battery Performance Model

    Energy Science and Technology Software Center (ESTSC)

    2007-05-07

    The Multi-Scale Multi-Dimensional (MSMD) Lithium Ion Battery Model allows for computer prediction and engineering optimization of thermal, electrical, and electrochemical performance of lithium ion cells with realistic geometries. The model introduces separate simulation domains for different scale physics, achieving much higher computational efficiency compared to the single domain approach. It solves a one dimensional electrochemistry model in a micro sub-grid system, and captures the impacts of macro-scale battery design factors on cell performance and material usage by solving cell-level electron and heat transports in a macro grid system.

  5. Multi-Scale Simulation and Optimization of Lithium Battery Performance

    NASA Astrophysics Data System (ADS)

    Golmon, Stephanie L.

    The performance and degradation of lithium batteries strongly depends on electrochemical, mechanical, and thermal phenomena. While a large volume of work has focused on thermal management, mechanical phenomena relevant to battery design are not fully understood. Mechanical degradation of electrode particles has been experimentally linked to capacity fade and failure of batteries; an understanding of the interplay between mechanics and electrochemistry in the battery is necessary in order to improve the overall performance of the battery. A multi-scale model to simulate the coupled electrochemical and mechanical behavior of Li batteries has been developed, which models the porous electrode and separator regions of the battery. The porous electrode includes a liquid electrolyte and solid active materials. A multi-scale finite element approach is used to analyze the electrochemical and mechanical performance. The multi-scale model includes a macro- and micro-scale with analytical volume-averaging methods to relate the scales. The macro-scale model describes Li-ion transport through the electrolyte, electric potentials, and displacements throughout the battery. The micro-scale considers the surface kinetics and electrochemical and mechanical response of a single particle of active material evaluated locally within the cathode region. Both scales are non-linear and dependent on the other. The electrochemical and mechanical response of the battery are highly dependent on the porosity in the electrode, the active material particle size, and discharge rate. Balancing these parameters can improve the overall performance of the battery. A formal design optimization approach with multi-scale adjoint sensitivity analysis is developed to find optimal designs to improve the performance of the battery model. Optimal electrode designs are presented which maximize the capacity of the battery while mitigating stress levels during discharge over a range of discharge rates.
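
    The particle-level "surface kinetics" referred to above is commonly represented with a Butler-Volmer rate expression; the short Python sketch below gives that standard form as a hedged illustration of what such micro-scale models typically evaluate, with placeholder parameter values rather than the dissertation's actual inputs.

      import math

      F, R = 96485.332, 8.314          # Faraday constant (C/mol), gas constant (J/(mol*K))

      def butler_volmer_current(eta, i0, alpha_a=0.5, alpha_c=0.5, T=298.15):
          """Interfacial current density (A/m^2) as a function of overpotential eta (V)."""
          return i0 * (math.exp(alpha_a * F * eta / (R * T))
                       - math.exp(-alpha_c * F * eta / (R * T)))

      # Example: assumed exchange current density of 2 A/m^2 at 20 mV anodic overpotential.
      print(f"{butler_volmer_current(0.020, i0=2.0):.3f} A/m^2")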

  6. A comparison of the performance of rating scales used in the diagnosis of postnatal depression.

    PubMed

    Thompson, W M; Harris, B; Lazarus, J; Richards, C

    1998-09-01

    The results of a study looking into the association between thyroid status and depression in the postpartum period were reanalysed to explore the psychometric properties of the rating scales employed. The performance of the Edinburgh Postnatal Depression Scale was found to be superior to that of the Hospital Anxiety and Depression Scale in identifying RDC-defined depression, and on a par with the observer-rated Hamilton Rating Scale for Depression, which it also matched for sensitivity to change in mood state over time. The anxiety subscale of the Hospital Anxiety and Depression Scale performed well, reflecting the fact that anxiety represents a prominent symptom in postnatal depression. PMID:9761410

  7. Flow structure, performance and scaling of acoustic jets

    NASA Astrophysics Data System (ADS)

    Muller, Michael Oliver

    Acoustic jets are studied, with an emphasis on their flow structure, performance, and scaling. The ultimate goal is the development of a micromachined acoustic jet for propulsion of a micromachined airborne platform, as well as integrated cooling and pumping applications. Scaling suggests an increase in performance with decreasing size, motivating the use of micro-technology. Experimental studies are conducted at three different orders of magnitude in size, each closely following analytic expectations. The jet creates a periodic vortical structure, the details of which are a function of amplitude. At small actuation amplitude, but still well above the linear acoustic regime, the flow structure consists of individual vortex rings, propagating away from the nozzle, formed during the outstroke of the acoustic cavity. At large amplitude, a trail of vorticity forms between the periodic vortex rings. Approximately corresponding to these two flow regions are two performance regimes. At low amplitude, the jet thrust increases with the fourth power of the amplitude; and at large amplitude, the thrust equals the momentum flux ejected during the output stroke, and increases as the square of the amplitude. Resonance of the cavity, at Reynolds numbers greater than approximately 10, enhances the jet performance beyond the incompressible behavior. Gains of an order of magnitude in the jet velocity occur at Reynolds numbers of approximately 100, and the data suggest further gains with increasing Reynolds number. The smallest geometries tested are micromachined acoustic jets, manufactured using MEMS technology. The throat dimensions are 50 by 200 µm, and the overall device size is approximately 1 mm², with eight throats per device. Several jets are manufactured in an array, to suit any given application. The performance is very dependent on frequency, with a sharp peak at the system resonance, occurring at approximately 70 kHz (inaudible). The mean jet velocity of these devices
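
    The large-amplitude regime described above can be illustrated with a quasi-one-dimensional estimate: if the thrust equals the cycle-averaged momentum flux ejected during the outstroke, then for a sinusoidal throat velocity the thrust scales with the square of the peak velocity. The Python sketch below assumes that sinusoidal form and a hypothetical micro-jet throat; it is an illustration of the scaling, not the thesis model.

      # Large-amplitude estimate: time-averaged momentum flux ejected during the
      # outstroke of u(t) = u_peak*sin(w*t), averaged over the full cycle,
      # gives thrust ~ rho * A * u_peak^2 / 4 (assumed sinusoidal profile).
      def high_amplitude_thrust(rho, area, u_peak):
          return 0.25 * rho * area * u_peak ** 2

      rho_air = 1.2                      # kg/m^3
      area = 50e-6 * 200e-6              # m^2, throat of roughly the size quoted above
      for u in (10.0, 20.0):             # m/s, placeholder peak velocities
          print(f"u_peak = {u:4.1f} m/s -> thrust ~ {high_amplitude_thrust(rho_air, area, u):.2e} N")
      # Doubling the peak velocity quadruples the estimated thrust (amplitude-squared regime).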

  8. Hybrid Wing Body Configuration Scaling Study

    NASA Technical Reports Server (NTRS)

    Nickol, Craig L.

    2012-01-01

    The Hybrid Wing Body (HWB) configuration is a subsonic transport aircraft concept with the potential to simultaneously reduce fuel burn, noise and emissions compared to conventional concepts. Initial studies focused on very large applications with capacities for up to 800 passengers. More recent studies have focused on the large, twin-aisle class with passenger capacities in the 300-450 range. Efficiently scaling this concept down to the single aisle or smaller size is challenging due to geometric constraints, potentially reducing the desirability of this concept for applications in the 100-200 passenger capacity range or less. In order to quantify this scaling challenge, five advanced conventional (tube-and-wing layout) concepts were developed, along with equivalent (payload/range/technology) HWB concepts, and their fuel burn performance compared. The comparison showed that the HWB concepts have fuel burn advantages over advanced tube-and-wing concepts in the larger payload/range classes (roughly 767-sized and larger). Although noise performance was not quantified in this study, the HWB concept has distinct noise advantages over the conventional tube-and-wing configuration due to the inherent noise shielding features of the HWB. NASA's Environmentally Responsible Aviation (ERA) project will continue to investigate advanced configurations, such as the HWB, due to their potential to simultaneously reduce fuel burn, noise and emissions.

  9. Scales affect performance of Monarch butterfly forewings in autorotational flight

    NASA Astrophysics Data System (ADS)

    Demko, Anya; Lang, Amy

    2012-11-01

    Butterfly wings are characterized by rows of scales (approximately 100 microns in length) that create a shingle-like pattern of cavities over the entire surface. It is hypothesized that these cavities influence the airflow around the wing and increase aerodynamic performance. A forewing of the Monarch butterfly (Danaus plexippus) naturally undergoes autorotational flight in the laminar regime. Autorotational flight is an accurate representation of insect flight because the rotation induces a velocity gradient similar to that found over a flapping wing. Drop test flights of 22 forewings before and after scale removal were recorded with a high-speed camera and flight behavior was quantified. It was found that removing the scales increased the descent speed and decreased the descent factor, a measure of aerodynamic efficacy, suggesting that scales increased the performance of the forewings. Funded by NSF REU Grant 1062611.

  10. Improving the Performance of the Extreme-scale Simulator

    SciTech Connect

    Engelmann, Christian; Naughton III, Thomas J

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures and the performance impact of different architecture choices is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, such as by reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) benchmark, as well as from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.
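
    One plausible reading of the percentages quoted above is overhead relative to the un-simulated (native) runtime; the trivial Python helper below makes that definition explicit with made-up runtimes, purely as an illustration of how such a figure is computed.

      # Assumed definition: overhead relative to the native (un-simulated) runtime.
      def simulation_overhead_pct(simulated_runtime_s, native_runtime_s):
          return 100.0 * (simulated_runtime_s - native_runtime_s) / native_runtime_s

      # Made-up runtimes: 10 s natively vs. 33.8 s inside the simulator corresponds
      # to the 238% post-optimization CG overhead cited in the abstract.
      print(f"{simulation_overhead_pct(33.8, 10.0):.0f}%")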

  11. LAMMPS strong scaling performance optimization on Blue Gene/Q

    SciTech Connect

    Coffman, Paul; Jiang, Wei; Romero, Nichols A.

    2014-11-12

    LAMMPS "Large-scale Atomic/Molecular Massively Parallel Simulator" is an open-source molecular dynamics package from Sandia National Laboratories. Significant performance improvements in strong-scaling and time-to-solution for this application on IBM's Blue Gene/Q have been achieved through computational optimizations of the OpenMP versions of the short-range Lennard-Jones term of the CHARMM force field and the long-range Coulombic interaction implemented with the PPPM (particle-particle-particle mesh) algorithm, enhanced by runtime parameter settings controlling thread utilization. Additionally, MPI communication performance improvements were made to the PPPM calculation by re-engineering the parallel 3D FFT to use MPICH collectives instead of point-to-point. Performance testing was done using an 8.4-million atom simulation scaling up to 16 racks on the Mira system at Argonne Leadership Computing Facility (ALCF). Speedups resulting from this effort were in some cases over 2x.

  12. Scaling studies of solar pumped lasers

    NASA Technical Reports Server (NTRS)

    Christiansen, W. H.; Chang, J.

    1985-01-01

    A progress report of scaling studies of solar pumped lasers is presented. Conversion of blackbody radiation into laser light has been demonstrated in this study. Parametric studies of the variation of laser mixture composition and laser gas temperature were carried out for CO2 and N2O gases. Theoretical analysis and modeling of the system have been performed. Reasonable agreement between predictions in the parameter variation and the experimental results has been obtained. Almost 200 mW of laser output at 10.6 microns was achieved by placing a small sapphire laser tube inside an oven at 1500 K; the tube was filled with a CO2 laser gas mixture and cooled by longitudinal nitrogen gas flow.

  13. Scaling studies of solar pumped lasers

    NASA Astrophysics Data System (ADS)

    Christiansen, W. H.; Chang, J.

    1985-08-01

    A progress report of scaling studies of solar pumped lasers is presented. Conversion of blackbody radiation into laser light has been demonstrated in this study. Parametric studies of the variation of laser mixture composition and laser gas temperature were carried out for CO2 and N2O gases. Theoretical analysis and modeling of the system have been performed. Reasonable agreement between predictions in the parameter variation and the experimental results has been obtained. Almost 200 mW of laser output at 10.6 microns was achieved by placing a small sapphire laser tube inside an oven at 1500 K; the tube was filled with a CO2 laser gas mixture and cooled by longitudinal nitrogen gas flow.

  14. Scaled control moment gyroscope dynamics effects on performance

    NASA Astrophysics Data System (ADS)

    Leve, Frederick A.

    2015-05-01

    The majority of the literature that discusses the dynamics of control moment gyroscopes (CMG) contains formulations that are not derived from first principles and make simplifying assumptions early in the derivation, possibly neglecting important contributions. For small satellites, additional dynamics that are no longer negligible are shown to cause an increase in torque error and loss of torque amplification. The goal of the analysis presented here is to provide the reader with a complete and general analytical derivation of the equations for the dynamics of a spacecraft with n-CMG and to discuss the performance degradation imposed on CMG actuators when scaling them for small satellites. The paper first derives the equations of motion from first principles for a very general case of a spacecraft with n-CMG. Each contribution to the dynamics is described along with its effect on CMG performance, and its significance for scaled CMG performance is addressed. It is shown analytically, and verified numerically, that CMG do not scale properly with performance, and care must be taken in their design to trade performance, size, mass, and power when reducing their scale.

  15. Full-scale tilt-rotor hover performance

    NASA Technical Reports Server (NTRS)

    Felker, F. F.; Maisel, M. D.; Betzina, M. D.

    1986-01-01

    The hover performance of three full-scale rotors was measured at the Ames Outdoor Aerodynamic Research Facility. The rotors, all designed for tilt-rotor aircraft, were the original metal blades for the XV-15 Tilt Rotor Research Aircraft, a set of composite, advanced technology blades for the XV-15, and a 0.658-scale model of the proposed V-22A Osprey (JVX) rotor. The composite advanced technology blades for the XV-15 were tested with several alternate blade root and blade tip configurations. This paper presents the performance of these three rotors, shows the effects of tip Mach number and root and tip configuration changes on rotor performance, and presents data on rotor wake velocity distributions and tip vortex geometry. Measured rotor performance is compared with theoretical predictions, and the discrepancies are discussed.

  16. Critical Multicultural Education Competencies Scale: A Scale Development Study

    ERIC Educational Resources Information Center

    Acar-Ciftci, Yasemin

    2016-01-01

    The purpose of this study is to develop a scale in order to identify the critical multicultural education competencies of teachers. For this reason, first of all, drawing on the knowledge in the literature, a new conceptual framework was created with a deductive method based on critical theory, critical race theory and critical multicultural…

  17. Reliable High Performance Peta- and Exa-Scale Computing

    SciTech Connect

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) and in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty

  18. Effects of Isometric Scaling on Vertical Jumping Performance

    PubMed Central

    Bobbert, Maarten F.

    2013-01-01

    Jump height, defined as vertical displacement in the airborne phase, depends on vertical takeoff velocity. For centuries, researchers have speculated on how jump height is affected by body size and many have adhered to what has come to be known as Borelli’s law, which states that jump height does not depend on body size per se. The underlying assumption is that the amount of work produced per kg body mass during the push-off is independent of size. However, if a big body is isometrically downscaled to a small body, the latter requires higher joint angular velocities to achieve a given takeoff velocity and work production will be more impaired by the force-velocity relationship of muscle. In the present study, the effects of pure isometric scaling on vertical jumping performance were investigated using a biologically realistic model of the human musculoskeletal system. The input of the model, muscle stimulation over time, was optimized using jump height as criterion. It was found that when the human model was miniaturized to the size of a mouse lemur, with a mass of about one-thousandth that of a human, jump height dropped from 40 cm to only 6 cm, mainly because of the force-velocity relationship. In reality, mouse lemurs achieve jump heights of about 33 cm. By implication, the unfavourable effects of the small body size of mouse lemurs on jumping performance must be counteracted by favourable effects of morphological and physiological adaptations. The same holds true for other small jumping animals. The simulations for the first time expose and explain the sheer magnitude of the isolated effects of isometric downscaling on jumping performance, to be counteracted by morphological and physiological adaptations. PMID:23936494
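
    The mechanics behind these numbers are simple to state: jump height follows from takeoff velocity as h = v²/(2g), and reaching a given takeoff velocity with limbs scaled down by a factor s requires joint angular velocities roughly 1/s times higher, which the muscle force-velocity relation penalizes. The Python sketch below works through that arithmetic; the limb lengths are illustrative assumptions, while the 40 cm and 6 cm heights come from the abstract.

      import math

      G = 9.81  # m/s^2

      def jump_height(takeoff_velocity):
          """Airborne rise of the centre of mass: h = v^2 / (2*g)."""
          return takeoff_velocity ** 2 / (2 * G)

      def takeoff_velocity(height_m):
          return math.sqrt(2 * G * height_m)

      v_human = takeoff_velocity(0.40)          # ~2.8 m/s for a 40 cm jump
      leg_human, leg_small = 0.9, 0.09          # m, assumed effective limb lengths (placeholder)
      print(f"takeoff velocity for 40 cm: {v_human:.2f} m/s")
      print(f"approx. joint angular velocity, human-sized limb: {v_human/leg_human:.1f} rad/s")
      print(f"same takeoff velocity, limb downscaled 10x: {v_human/leg_small:.1f} rad/s")
      print(f"a 6 cm jump needs only {takeoff_velocity(0.06):.2f} m/s")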

  19. Development and performance of a large-scale, transonic turbine blade cascade facility for aerodynamic studies of merging coolant-mainstream flows

    NASA Astrophysics Data System (ADS)

    Al-Sayeh, Amjad Isaaf

    1998-11-01

    A new, large scale, linear cascade facility of turbine blades has been developed for the experimental exploration of the aerodynamic aspects of film cooling technology. Primary interest is in the mixing of the ejected coolant with the mainstream, at both subsonic and supersonic mainstream Mach numbers at the cascade exit. In order to achieve a spatial resolution adequate for the exploration of details on the scale of the coolant ejection holes, the cascade dimensions were maximized, within the limitations of the air supply system. The cascade contains four blades (three passages) with 14.05 cm axial chord, 17.56 cm span and a design total turning angle of 130.6 degrees. Exit Mach numbers range from 0.6 to 1.5 and Reynolds numbers from 0.5 to 1.5 million. The air supply system capacity allows run times up to five minutes at maximum flow rates. A coolant supply system has been built to deliver mixtures of SF6 and air to simulate coolant/mainstream density ratios up to 2. The cascade contains several novel features. A full-perimeter bleed slot upstream of the blades is used to remove the approach boundary layer from all four walls, to improve the degree of two-dimensionality. The exit flow is bounded by two adjustable tailboards that are hinged at the trailing edges and actuated to set the exit flow direction according to the imposed pressure ratio. The boards are perforated and subjected to mass removal near the blades, to minimize the undesirable reflection of shocks and expansion waves. A probe actuator is incorporated that allows continuous positioning of probes in the exhaust stream, in both the streamwise and pitchwise directions. Diagnostic methods include extensive surface pressure taps on the approach and exhaust ducts and on the blade surfaces. The large size permitted as many as 19 taps on the trailing edge itself. Shadowgraph and schlieren are available. A three-prong wake probe has been constructed to simultaneously measure total and static pressures

  20. An Application of the Facet-Factorial Approach to Scale Construction in Development of a Rating Scale for High School Marching Band Performance

    ERIC Educational Resources Information Center

    Greene, Travis

    2012-01-01

    The purpose of this study was to develop and validate an instrument through facet-factorial analysis to assess high school marching band performance. Forty-one items were chosen to define subscales for the Marching Band Performance Rating Scale - Music and 31 items for the Marching Band Performance Rating Scale - Visual. To examine the stability…

  1. Performance of a full-scale biofilter with peat and ash as a medium for treating industrial landfill leachate: a 3-year study of pollutant removal efficiency.

    PubMed

    Kängsepp, Pille; Mathiasson, Lennart

    2009-03-01

    Shredder residues of end-of-life vehicles and white goods are a complex waste stream, which nowadays most often is disposed of at industrial landfills. This paper describes the most important findings concerning the complex composition of the landfill leachate and its on-site, year-round treatment under cold-climate conditions. A 3-year investigation has confirmed that concentrations of different types of pollutants, most of them at low initial concentrations, can be simultaneously reduced in vertical-flow biofilters consisting of a mixture of peat and carbon-containing ash. For metals such as Mn, Cu, Sn, Cd, Pb, Fe and Ni the average removal was 73, 72, 66, 60, 55, 55 and 37%, respectively. An average reduction of NH(4)-N (45%), N(tot) (25%), total organic carbon (30%), dissolved organic carbon (28%) and suspended solids (38%) was also obtained. A good reduction was achieved for phenols (between 75 and 95%), polychlorinated biphenyls (between 22 and 99%), and gas chromatography-mass spectrometry amenable pollutants, considered at initial concentrations above 50 µg/L (between 80 and 100%). The performance of the biofilter system was good in spite of large variations of inlet concentration during the considered period. PMID:19244414

  2. The Influence of Extrinsic Motivation on Student Performance on Large-Scale Assessments

    ERIC Educational Resources Information Center

    McGee, Carl Dean

    2013-01-01

    The purposes of this mixed method study were to examine the relationship between student motivation and performance on large-scale, low- and high-stakes examinations and identify the types of incentive programs used by principals to promote test performance among high school students. The study took place in California's Southern San Joaquin…

  3. Virus removal retention challenge tests performed at lab scale and pilot scale during operation of membrane units.

    PubMed

    Humbert, H; Machinal, C; Labaye, Ivan; Schrotter, J C

    2011-01-01

    The determination of the virus retention capabilities of UF units during operation is essential for the operators of drinking water treatment facilities in order to guarantee an efficient and stable removal of viruses through time. In previous studies, an effective method (MS2-phage challenge tests) was developed by the Water Research Center of Veolia Environnement for the measurement of the virus retention rates (Log Removal Value, LRV) of commercially available hollow fiber membranes at lab scale. In the present work, the protocol for monitoring membrane performance was transferred from lab scale to pilot scale. Membrane performances were evaluated during the pilot trial and compared to the results obtained at lab scale with fibers taken from the pilot plant modules. The PFU culture method was compared to the RT-PCR method for the calculation of LRV in both cases. Preliminary tests at lab scale showed that both methods can be used interchangeably. For tests conducted on virgin membrane, a good consistency was observed between lab and pilot scale results with the two analytical methods used. This work intends to show that a reliable determination of membrane performance based on the RT-PCR analytical method can be achieved during the operation of the UF units. PMID:21252428
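
    The retention metric used above has a simple definition; the Python sketch below states it, with made-up titers purely for illustration.

      import math

      def log_removal_value(feed_titer, permeate_titer):
          """LRV = log10(feed concentration / permeate concentration), applied here to
          MS2-phage titers (e.g., PFU/mL or genome copies/mL)."""
          return math.log10(feed_titer / permeate_titer)

      # Made-up example: a feed of 1e7 PFU/mL and a permeate of 1e3 PFU/mL gives
      # LRV = 4, i.e. 99.99% removal.
      print(log_removal_value(1e7, 1e3))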

  4. Happiness Scale Interval Study. Methodological Considerations

    ERIC Educational Resources Information Center

    Kalmijn, W. M.; Arends, L. R.; Veenhoven, R.

    2011-01-01

    The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as "very happy" and "pretty happy". The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are…

  5. A simulation infrastructure for examining the performance of resilience strategies at scale.

    SciTech Connect

    Ferreira, Kurt Brian; Levy, Scott N.; Bridges, Patrick G.

    2013-04-01

    Fault-tolerance is a major challenge for many current and future extreme-scale systems, with many studies showing it to be the key limiter to application scalability. While there are a number of studies investigating the performance of various resilience mechanisms, these are typically limited to scales orders of magnitude smaller than expected for next-generation systems and simple benchmark problems. In this paper we show how, with very minor changes, a previously published and validated simulation framework for investigating application performance of OS noise can be used to simulate the overheads of various resilience mechanisms at scale. Using this framework, we compare the failure-free performance of this simulator against an analytic model to validate its performance and demonstrate its ability to simulate the performance of two popular rollback recovery methods on traces from real

  6. LABORATORY SCALE STEAM INJECTION TREATABILITY STUDIES

    EPA Science Inventory

    Laboratory scale steam injection treatability studies were first developed at The University of California-Berkeley. A comparable testing facility has been developed at USEPA's Robert S. Kerr Environmental Research Center. Experience has already shown that many volatile organic...

  7. Landscape Scale Hydrologic Performance Measures for the South Florida Everglades

    NASA Astrophysics Data System (ADS)

    Johnson, R. A.; Kotun, K.; Engel, V.

    2008-05-01

    Large scale drainage and land reclamation activities began in the south Florida Everglades around 1905. By 1920 four large canals were constructed across the Everglades to drain Lake Okeechobee to the Atlantic Ocean. In 1930, following two major hurricanes, construction began on a levee system around Lake Okeechobee, and two additional coastal outlets were created to the St. Lucie and Caloosahatchee Rivers. These activities significantly lowered water levels in the lake and reduced natural surface water flows to the downstream Everglades. Throughout the 1930s and early 1940s, a network of uncontrolled canals were excavated along the Atlantic Coastal Ridge that penetrated the permeable Biscayne Aquifer, further draining the Everglades and local groundwater to the ocean. Early hydrologic studies documented the detrimental affects of this over-drainage on urban and agricultural water supply, including the abandonment of wellfields because of saltwater intrusion. In the interior marshes the loss of soil moisture in the Everglades organic soils also caused widespread soil subsidence and increased fire frequency. Following a third major hurricane in 1947, which resulted in loss of life and widespread economic losses, the U.S. Congress authorized the Army Corps of Engineers to begin construction of the Central and Southern Florida Project. The C&SF Project was designed to correct the flooding and water supply problems in south Florida, as well as providing adequate water supply to protect fish and wildlife resources of the Everglades. By 1953 most of the major drainage canals had control structures added to prevent excessive drainage, and an East Coast Protective Levee was constructed from Lake Okeechobee to Everglades National Park, to reduce flooding along the Atlantic Coastal Ridge and retain water in the Everglades. By the late 1950's most of the northern Everglades was diked and drained to form the Everglades Agricultural Area, and by 1963 the central Everglades were

  8. Corrosion performance of alumina scales in coal gasification environments

    SciTech Connect

    Natesan, K.

    1997-02-01

    Corrosion of metallic structural materials in complex gas environments of coal gasification is a potential problem. The corrosion process is dictated by concentrations of two key constituents: sulfur as H2S and Cl as HCl. This paper examines the corrosion performance of alumina scales that are thermally grown on Fe-base alloys during exposure to O/S mixed-gas environments. The results are compared with the performance of chromia-forming alloys in similar environments. The paper also discusses the available information on corrosion performance of alloys whose surfaces were enriched with Al by the pack-diffusion process, by the electrospark deposition process, or by weld overlay techniques.

  9. Performance Engineering: Understanding and Improving thePerformance of Large-Scale Codes

    SciTech Connect

    Bailey, David H.; Lucas, Robert; Hovland, Paul; Norris, Boyana; Yelick, Kathy; Gunter, Dan; de Supinski, Bronis; Quinlan, Dan; Worley, Pat; Vetter, Jeff; Roth, Phil; Mellor-Crummey, John; Snavely, Allan; Hollingsworth, Jeff; Reed, Dan; Fowler, Rob; Zhang, Ying; Hall, Mary; Chame, Jacque; Dongarra, Jack; Moore, Shirley

    2007-10-01

    Achieving good performance on high-end computing systems is growing ever more challenging due to enormous scale, increasing architectural complexity, and increasing application complexity. To address these challenges in DOE's SciDAC-2 program, the Performance Engineering Research Institute (PERI) has embarked on an ambitious research plan encompassing performance modeling and prediction, automatic performance optimization and performance engineering of high profile applications. The principal new component is a research activity in automatic tuning software, which is spurred by the strong user preference for automatic tools.

  10. The Development, Test, and Evaluation of Three Pilot Performance Reference Scales.

    ERIC Educational Resources Information Center

    Horner, Walter R.; And Others

    A set of pilot performance reference scales was developed based upon airborne Audio-Video Recording (AVR) of student performance in T-37 undergraduate Pilot Training. After selection of the training maneuvers to be studied, video tape recordings of the maneuvers were selected from video tape recordings already available from a previous research…

  11. A comprehensive field and laboratory study of scale control and scale squeezes in Sumatra, Indonesia

    SciTech Connect

    Oddo, J.E.; Reizer, J.M.; Sitz, C.D.; Setia, D.E.A.; Hinrichsen, C.J.; Sujana, W.

    1999-11-01

    Scale squeezes were performed on thirteen wells in the Duri Field, Sumatra. At the time the squeezes were completed, seven were designed to be 'Acid Squeezes' and six were designed to be 'Neutral Squeezes.' In the course of preparing for the scale squeezes, produced waters were collected and analyzed. In addition, scale inhibitor evaluations and inhibitor compatibility studies were completed. Simulated squeezes were done in the laboratory to predict field performance. The methodologies and results of the background work are reported. In addition, the relative effectiveness of the two sets of squeezes is discussed. The inhibitor flowback concentrations after the squeezes can, in all cases, be explained using speciation chemistry and the amorphous and crystalline phase solubilities of the inhibitor used. The wells squeezed with a more acidic inhibitor have more predictable and uniform inhibitor return concentration curves than the wells squeezed with a more neutral scale inhibitor.

  12. HACC: Extreme Scaling and Performance Across Diverse Architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Morozov, Vitali; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Heitmann, Katrin

    2013-11-01

    Supercomputing is evolving towards hybrid and accelerator-based architectures with millions of cores. The HACC (Hardware/Hybrid Accelerated Cosmology Code) framework exploits this diverse landscape at the largest scales of problem size, obtaining high scalability and sustained performance. Developed to satisfy the science requirements of cosmological surveys, HACC melds particle and grid methods using a novel algorithmic structure that flexibly maps across architectures, including CPU/GPU, multi/many-core, and Blue Gene systems. We demonstrate the success of HACC on two very different machines, the CPU/GPU system Titan and the BG/Q systems Sequoia and Mira, attaining unprecedented levels of scalable performance. We demonstrate strong and weak scaling on Titan, obtaining up to 99.2% parallel efficiency, evolving 1.1 trillion particles. On Sequoia, we reach 13.94 PFlops (69.2% of peak) and 90% parallel efficiency on 1,572,864 cores, with 3.6 trillion particles, the largest cosmological benchmark yet performed. HACC design concepts are applicable to several other supercomputer applications.
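
    Two of the figures quoted above follow from standard definitions, restated in the Python sketch below; the node counts and timings in the strong-scaling example are hypothetical, while the 13.94 PFlops and 69.2%-of-peak values come from the abstract.

      def parallel_efficiency(t_base, p_base, t_scaled, p_scaled):
          """Strong-scaling parallel efficiency: measured speedup over ideal speedup."""
          return (t_base / t_scaled) / (p_scaled / p_base)

      # Hypothetical timings: quadrupling the node count while cutting runtime from
      # 400 s to ~100.8 s corresponds to the ~99.2% efficiency quoted for Titan.
      print(f"{parallel_efficiency(400.0, 1024, 100.8, 4096):.3f}")

      # 13.94 PFlops sustained at 69.2% of peak implies a machine peak of ~20.1 PFlops.
      print(f"implied peak ~ {13.94 / 0.692:.1f} PFlops")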

  13. Parametric scaling study of a magnetically insulated thermionic vacuum switch

    SciTech Connect

    Vanderberg, B.H.; Eninger, J.E.

    1996-02-01

    A parametric scaling study is performed on MINOS (Magnetically INsulated Opening Switch), a novel fast (~100 ns) high-power opening switch concept based on a magnetically insulated thermionic vacuum diode. Principal scaling parameters are the switch dimensions, voltage, current, applied magnetic field, and switching time. The scaling range of interest covers voltages up to 100 kV and currents of several kA. Fundamental scaling properties are derived from models of space-charge flow and magnetic cutoff. The scaling is completed with empirical results from the experimental MX-1 switch operated in an inductive storage pulsed power generator. Results are presented in diagrams showing voltage, current, power, and efficiency relationships and their limitations. The scaling is illustrated by the design of a megawatt average power opening switch for pulsed power applications. Trade-offs in the engineering of this type of switch are discussed.
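
    The two ingredients named above, space-charge flow and magnetic cutoff, are usually captured at first order by the planar Child-Langmuir law and the Hull cutoff field. The Python sketch below evaluates those textbook expressions for a hypothetical gap and voltage; it is illustrative only and is neither the paper's scaling model nor the MX-1 operating point.

      import math
      from scipy.constants import epsilon_0, elementary_charge as e, electron_mass as m_e

      def child_langmuir_current_density(voltage_v, gap_m):
          """Space-charge-limited current density (A/m^2) for a planar vacuum diode:
          J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
          return (4.0 * epsilon_0 / 9.0) * math.sqrt(2.0 * e / m_e) * voltage_v ** 1.5 / gap_m ** 2

      def hull_cutoff_field(voltage_v, gap_m):
          """Planar Hull cutoff: applied field (T) above which emitted electrons can no
          longer cross the gap, B_c = sqrt(2*m*V/e) / d."""
          return math.sqrt(2.0 * m_e * voltage_v / e) / gap_m

      # Hypothetical operating point: 100 kV across a 1 cm gap (placeholder values).
      V, d = 100e3, 0.01
      print(f"J_CL ~ {child_langmuir_current_density(V, d):.2e} A/m^2")
      print(f"B_cutoff ~ {hull_cutoff_field(V, d) * 1e3:.1f} mT")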

  14. Development and Validation of a Clarinet Performance Adjudication Scale

    ERIC Educational Resources Information Center

    Abeles, Harold F.

    1973-01-01

    A basic assumption of this study is that there are generally agreed upon performance standards as evidenced by the use of adjudicators for evaluations at contests and festivals. An evaluation instrument was developed to enable raters to measure effectively those aspects of performance that have common standards of proficiency. (Author/RK)

  15. Capturing field-scale variability in crop performance across a regional-scale climosequence

    NASA Astrophysics Data System (ADS)

    Brooks, E. S.; Poggio, M.; Anderson, T. R.; Gasch, C.; Yourek, M. A.; Ward, N. K.; Magney, T. S.; Brown, D. J.; Huggins, D. R.

    2014-12-01

    With the increasing availability of variable rate technology for applying fertilizers and other agrichemicals in dryland agricultural production systems there is a growing need to better capture and understand the processes driving field scale variability in crop yield and soil water. This need for a better understanding of field scale variability has led to the recent designation of the R. J. Cook Agronomy Farm (CAF) (Pullman, WA, USA) as a United States Department of Agriculture Long-Term Agro-Ecosystem Research (LTAR) site. Field scale variability at the CAF is closely monitored using extensive environmental sensor networks and intensive hand sampling. While investigating land-soil-water dynamics at the CAF is essential for improving precision agriculture, transferring this knowledge across the regional-scale climosequence is challenging. In this study we describe the hydropedologic functioning of the CAF in relation to five extensively instrumented field sites located within 50 km in the same climatic region. The formation of restrictive argillic soil horizons in the wetter, cooler eastern edge of the region results in the development of extensive perched water tables, surface saturation, and surface runoff, whereas excess water is not an issue in the warmer, drier, western edge of the region. Similarly, crop and tillage management varies across the region as well. We discuss the implications of these regional differences on field scale management decisions and demonstrate how we are using proximal soil sensing and remote sensing imagery to better understand and capture field scale variability at a particular field site.

  16. BENCH SCALE SALTSTONE PROCESS DEVELOPMENT MIXING STUDY

    SciTech Connect

    Cozzi, A.; Hansen, E.

    2011-08-03

    The Savannah River National Laboratory (SRNL) was requested to develop a bench scale test facility, using a mixer, transfer pump, and transfer line to determine the impact of conveying the grout through the transfer lines to the vault on grout properties. Bench scale testing focused on the effect the transfer line has on the rheological properties of the grout as it was processed through the transfer line. Rheological and other physical properties of grout samples were obtained prior to and after pumping through a transfer line. The Bench Scale Mixing Rig (BSMR) consisted of two mixing tanks, grout feed tank, transfer pump and transfer hose. The mixing tanks were used to batch the grout which was then transferred into the grout feed tank. The contents of the feed tank were then pumped through the transfer line (hose) using a progressive cavity pump. The grout flow rate and pump discharge pressure were monitored. Four sampling stations were located along the length of the transfer line, at 5, 105, and 205 feet past the transfer pump and at 305 feet, the discharge of the hose. Scaling from the full scale piping at Saltstone to bench scale testing at SRNL was performed by maintaining the same shear rate and total shear at the wall of the transfer line. Scaling down resulted in a shorter transfer line, a lower average velocity, the same transfer time and similar pressure drops. The condition of flow in the bench scale transfer line is laminar. The flow in the full scale pipe is in the transition region, but is more laminar than turbulent. The resulting plug in laminar flow in the bench scale results in a region of no mixing. Hence mixing, or shearing, at the bench scale should be less than that observed in the full scale, where this plug is nonexistent due to the turbulent flow. The bench scale tests should be considered to be conservative due to the highly laminar condition of flow that exists. Two BSMR runs were performed. In both cases, wall
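
    As an illustration of the scaling rule quoted above (equal wall shear rate and equal total shear between full scale and bench scale), the following minimal sketch uses Newtonian laminar pipe-flow approximations; the full-scale diameter, velocity, and length values are assumed for illustration and are not the actual Saltstone facility numbers.

```python
# Minimal sketch: match wall shear rate (8*V/D) and total shear (shear rate x residence
# time) between a full-scale line and a bench-scale hose. Input values are assumptions.
def scale_transfer_line(d_full, v_full, l_full, d_bench):
    gamma_w = 8.0 * v_full / d_full      # Newtonian wall shear rate, 1/s
    t_res = l_full / v_full              # residence (transfer) time, s
    v_bench = gamma_w * d_bench / 8.0    # same wall shear rate in the smaller hose
    l_bench = v_bench * t_res            # same total shear => same residence time
    return gamma_w, v_bench, l_bench, t_res

gamma, v_b, l_b, t = scale_transfer_line(d_full=0.0762, v_full=1.8, l_full=300.0, d_bench=0.019)
print(f"wall shear rate ~{gamma:.0f} 1/s, bench velocity ~{v_b:.2f} m/s, "
      f"bench length ~{l_b:.0f} m, transfer time ~{t:.0f} s")
```

    Consistent with the abstract, holding both quantities fixed yields a shorter bench-scale line, a lower average velocity, and the same transfer time.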

  17. Hover performance tests of full scale variable geometry rotors

    NASA Technical Reports Server (NTRS)

    Rorke, J. B.

    1976-01-01

    Full scale whirl tests were conducted to determine the effects of interblade spatial relationships and pitch variations on the hover performance and acoustic signature of a 6-blade main rotor system. The variable geometry rotor (VGR) variations from the conventional baseline were accomplished by: (1) shifting the axial position of alternate blades by one chord-length to form two tip path planes; and (2) varying the relative azimuthal spacing from the upper rotor to the lagging hover rotor in four increments from 25.2 degrees to 62.1 degrees. For each of these four configurations, the differential collective pitch between upper and lower rotors was set at + or - 1 deg, 0 deg and -1 deg. Hover performance data for all configurations were acquired at blade tip Mach numbers of 0.523 and 0.45. Acoustic data were recorded at all test conditions, but analyzed only at 0 deg differential pitch at the higher rotor speed. The VGR configurations tested demonstrated improvements in thrust at constant power as high as 6 percent. Reductions of 3 PNdb in perceived noise level and of 4 db in blade passage frequency noise level were achieved at the higher thrust levels. Consistent correlation exists between performance and acoustic improvements. For any given azimuth spacing, performance was consistently better for the differential pitch condition of + or - 1 degree, i.e. with the upper rotor pitch one degree higher than the lower rotor.

  18. Investigation of Scaling Effects on Fish Pectoral Fin Performance

    NASA Astrophysics Data System (ADS)

    Bozkurttas, Meliha; Dong, Haibo; Mittal, Rajat; Madden, Peter; Lauder, George

    2006-11-01

    Reynolds and Strouhal numbers are two key parameters that can potentially affect the performance of rigid and deformable flapping foils. Flow past a deformable pectoral fin of a fish in steady forward motion (speed of 1 BL/s) is simulated using a Cartesian grid immersed boundary solver. Investigation of the scaling of the performance with these two parameters allows us to gain better insight into the fundamental mechanisms of the thrust production as well as address the practical question of how the performance of a fin is expected to change with changes in size, speed and frequency. It is found that the essential fluid dynamic mechanisms are unchanged with Reynolds number. We observe that although the vortex structures get more complicated with increasing Re, the key features (like the strong tip vortex, leading and trailing edge vortices) are similar in all the cases. On the other hand, the hydrodynamic performance of the fin is found to be quite sensitive to the Strouhal number. A set of numerical simulations of fin gaits synthesized from the POD modes are also carried out. This approach allows us to connect specific features in the fin gait with the observed vortex dynamics and hydrodynamic force production.
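
    As a reminder of how the two parameters named above are formed, here is a minimal sketch; the chord, swimming speed, fin-beat frequency, and amplitude values are illustrative assumptions, not the fin kinematics used in the study.

```python
# Minimal sketch: Reynolds and Strouhal numbers for a flapping fin (values are assumed).
def reynolds_number(speed_mps, chord_m, nu=1.0e-6):      # nu: kinematic viscosity of water
    return speed_mps * chord_m / nu

def strouhal_number(freq_hz, peak_to_peak_amp_m, speed_mps):
    return freq_hz * peak_to_peak_amp_m / speed_mps

U, c = 0.20, 0.04   # 1 BL/s forward speed for an assumed 20 cm fish, 4 cm fin chord
f, A = 1.5, 0.03    # assumed fin-beat frequency (Hz) and peak-to-peak tip amplitude (m)
print(f"Re = {reynolds_number(U, c):.0f}, St = {strouhal_number(f, A, U):.2f}")
```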

  19. V/STOL tilt rotor aircraft study. Volume 10: Performance and stability test of a 1/4.622 Froude scaled Boeing Vertol Model 222 tilt rotor aircraft (Phase 1)

    NASA Technical Reports Server (NTRS)

    Mchugh, F. J.; Eason, W.; Alexander, H. R.; Mutter, H.

    1973-01-01

    Wind tunnel test data obtained from a 1/4.622 Froude scale Boeing Model 222 with a full span, two prop, tilt rotor, powered model in the Boeing V/STOL wind tunnel are reported. Data were taken in transition and cruise flight conditions and include performance, stability and control and blade loads information. The effects of the rotors, tail surfaces and airframe on the performance and stability are isolated as are the effects of the airframe on the rotors.
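
    For orientation, the standard Froude-similarity factors for a geometrically scaled model (same fluid density and gravity) are sketched below; these generic similarity laws are not quoted from the report, and lam is the model-to-full-scale length ratio.

```python
# Minimal sketch: generic Froude-scaling factors for a 1/4.622 scale model.
def froude_scale_factors(lam):
    return {
        "length":    lam,
        "velocity":  lam ** 0.5,
        "time":      lam ** 0.5,
        "frequency": lam ** -0.5,
        "force":     lam ** 3,
        "power":     lam ** 3.5,
    }

for quantity, factor in froude_scale_factors(1 / 4.622).items():
    print(f"{quantity:>9s}: model/full-scale ratio = {factor:.4f}")
```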

  20. SUPERFUND TREATABILITY CLEARINGHOUSE: BENGART AND MEMEL (BENCH-SCALE), GULFPORT (BENCH AND PILOT-SCALE), MONTANA POLE (BENCH-SCALE), AND WESTERN PROCESSING (BENCH-SCALE) TREATABILITY STUDIES

    EPA Science Inventory

    This document presents summary data on the results of various treatability studies (bench and pilot scale), conducted at three different sites where soils were contaminated with dioxins or PCBs. The synopsis is meant to show rough performance levels under a variety of differen...

  1. Scaling Semantic Graph Databases in Size and Performance

    SciTech Connect

    Morari, Alessandro; Castellana, Vito G.; Villa, Oreste; Tumeo, Antonino; Weaver, Jesse R.; Haglin, David J.; Choudhury, Sutanay; Feo, John T.

    2014-08-06

    In this paper we present SGEM, a full software system for accelerating large-scale semantic graph databases on commodity clusters. Unlike current approaches, SGEM addresses semantic graph databases by only employing graph methods at all the levels of the stack. On one hand, this allows exploiting the space efficiency of graph data structures and the inherent parallelism of graph algorithms. These features adapt well to the increasing system memory and core counts of modern commodity clusters. On the other hand, however, these systems are optimized for regular computation and batched data transfers, while graph methods usually are irregular and generate fine-grained data accesses with poor spatial and temporal locality. Our framework comprises a SPARQL to data parallel C compiler, a library of parallel graph methods and a custom, multithreaded runtime system. We introduce our stack, motivate its advantages with respect to other solutions and show how we solved the challenges posed by irregular behaviors. We present the result of our software stack on the Berlin SPARQL benchmarks with datasets up to 10 billion triples (a triple corresponds to a graph edge), demonstrating scaling in dataset size and in performance as more nodes are added to the cluster.
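
    The parenthetical note that a triple corresponds to a graph edge is the key data-model idea. The toy matcher below is only a minimal sketch of matching one SPARQL-style triple pattern against an in-memory edge list, not part of the SGEM stack, and the example triples are invented.

```python
# Minimal sketch: RDF triples as labeled graph edges, matched against one triple pattern.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "acme"),
]

def match(pattern, data):
    """Return variable bindings for a single (s, p, o) pattern; '?x' terms are variables."""
    results = []
    for edge in data:
        binding, ok = {}, True
        for term, value in zip(pattern, edge):
            if term.startswith("?"):
                if binding.get(term, value) != value:
                    ok = False
                    break
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

print(match(("?who", "knows", "?whom"), triples))  # two bindings
```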

  2. A Study on Emotional Literacy Scale Development

    ERIC Educational Resources Information Center

    Akbag, Müge; Küçüktepe, Seval Eminoglu; Özmercan, Esra Eminoglu

    2016-01-01

    Emotional literacy is described as being aware of our own feelings in order to improve our personal power and life quality as well as the life quality of people around us. In this study, the aim is to develop a Likert scale which measures people's emotional literacy in order to be used in both descriptive and experimental research. Related literature…

  3. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  4. A Normative Study of the Wechsler Memory Scale

    ERIC Educational Resources Information Center

    Kear-Colwell, J. J.; Heller, Mary

    1978-01-01

    Aims of this study were to determine whether the factor structure produced in earlier research by Kear-Colwell (1973, 1977) on the Wechsler Memory Scale could be replicated in a non-patient population (most research uses patient populations) and also to examine the effects of age, sex, and social class on the performance of normal adults on this…

  5. Research on the synthesis and scale inhibition performance of a new terpolymer scale inhibitor.

    PubMed

    Bao, Yufei; Li, Meng; Zhang, Yanqing

    2016-01-01

    A new terpolymer named β-CD-MA-SSS was produced using free-radical polymerization of β-cyclodextrin (β-CD), maleic anhydride (MA) and sodium styrene sulfonate (SSS) as monomers, with potassium persulfate (KPS) as initiator. Its performance as a scale inhibitor to prevent deposition of calcium carbonate (CaCO3) has been investigated. Experimental results demonstrated that β-CD-MA-SSS showed excellent scale inhibition and exhibited a high conversion rate under the following conditions: an initiator dosage of 6%, molar ratio of reaction monomers SSS:MA = 0.8:1, MA:β-CD = 6:1, reaction temperature of 80 °C, reaction time of 6 h, and dropping time of 40 min when MA was dosed as a substrate, and SSS and KPS were dosed as dropping reactants simultaneously. Fourier transform infrared spectroscopy of the inhibitor showed that polymerization of the reaction monomers had taken place under the above conditions. Scanning electron microscopy indicated that the β-CD-MA-SSS had a strong chelating ability for calcium (Ca(2+)) and a good dispersion ability for calcium carbonate (CaCO3). PMID:27054733

  6. Speed Scaling for Energy and Performance with Instantaneous Parallelism

    NASA Astrophysics Data System (ADS)

    Sun, Hongyang; He, Yuxiong; Hsu, Wen-Jing

    We consider energy-performance tradeoff for scheduling parallel jobs on multiprocessors using dynamic speed scaling. The objective is to minimize the sum of energy consumption and a performance metric such as makespan or total flow time. We focus on designing algorithms that are aware of the jobs' instantaneous parallelism but not their characteristics in the future. For total flow time plus energy, it is known that any algorithm that does not rely on instantaneous parallelism is Ω(ln^(1/α) P)-competitive, where P is the total number of processors. In this paper, we demonstrate the benefits of knowing instantaneous parallelism by presenting an O(1)-competitive algorithm. In the case of makespan plus energy, which is considered in the literature for the first time, we present an O(ln^(1-1/α) P)-competitive algorithm for batched jobs consisting of fully-parallel and sequential phases. We show that this algorithm is asymptotically optimal by providing a matching lower bound.
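
    The cost model behind such results is compact enough to state directly: running at speed s consumes power s^α, so a job of work w executed at constant speed s contributes flow time w/s and energy w*s^(α-1). The sketch below only illustrates this single-job trade-off (with α = 3 as an assumed power exponent); it is not the paper's scheduling algorithm.

```python
# Minimal sketch of the speed-scaling cost model: flow time + energy for one job.
def cost(work, speed, alpha=3.0):
    flow_time = work / speed
    energy = work * speed ** (alpha - 1)  # power = speed**alpha over time work/speed
    return flow_time + energy

alpha, work = 3.0, 1.0
speeds = [0.1 * k for k in range(1, 31)]
best = min(speeds, key=lambda s: cost(work, s, alpha))
closed_form = (1.0 / (alpha - 1.0)) ** (1.0 / alpha)
print(f"numeric minimizer ~{best:.1f}, closed form (1/(alpha-1))^(1/alpha) = {closed_form:.3f}")
```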

  7. Performance/price estimates for cortex-scale hardware: a design space exploration.

    PubMed

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering. PMID:21232918

  8. The Development of Hyper-MNP: Hyper-Media Navigational Performance Scale

    ERIC Educational Resources Information Center

    Firat, Mehmet; Yurdakul, Isil Kabakci

    2016-01-01

    The present study aimed at developing a scale to evaluate navigational performance as a whole, which is one of the factors influencing learning in hyper media. In line with this purpose, depending on the related literature, an item pool of 15 factors was prepared, and these variables were decreased to 5 based on the views of 38 field experts. In…

  9. Performance scaling of magnetic nozzles for electric propulsion

    NASA Astrophysics Data System (ADS)

    Little, Justin M.

    The use of magnetic nozzles (MNs) in electric propulsion (EP) systems is investigated analytically and experimentally. MNs have the potential to efficiently accelerate propellant without the restrictions of electrodes, however, their measured performance has been poor compared to existing EP technology. A theoretical model was developed to understand the requirements for efficient operation. Analytical scaling laws were derived for the mass utilization efficiency, channel efficiency, and MN thermal and divergence efficiencies, in terms of dimensionless parameters that describe the relevant collisional processes in the channel and the radial plasma structure at the MN throat. In comparison to previous MN thrusters, performance levels comparable to state of the art EP systems are only possible if three conditions are met: (1) the thruster operates in a high confinement mode, (2) the plume divergence is significantly reduced, and (3) electron temperatures are increased by an order of magnitude. The final requirement implies these thrusters should be operated with heavy propellants such as xenon to limit the specific impulse to reasonable values. An experiment was designed to investigate the fundamental dynamics of plasma flow through a MN. The experiment consists of a helicon plasma source and two electromagnetic coils. The plasma parameters are determined at a variety of locations using electric probes mounted on a positioning system. The existence of a critical magnetic field strength for high confinement and the predicted scaling of the mass utilization efficiency were verified. Electron cooling in the magnetically expanding plasma was observed to follow a polytropic law with an exponent that agrees with theory. With decreasing magnetic field, a transition from a collimated plume to an under-collimated plume was found, where an under-collimated plume is defined such that the plume divergence is greater than the magnetic field divergence. This transition was

  10. Rating Scale Impact on EFL Essay Marking: A Mixed-Method Study

    ERIC Educational Resources Information Center

    Barkaoui, Khaled

    2007-01-01

    Educators often have to choose among different types of rating scales to assess second-language (L2) writing performance. There is little research, however, on how different rating scales affect rater performance. This study employed a mixed-method approach to investigate the effects of two different rating scales on EFL essay scores, rating…

  11. Canadian Occupational Performance Measure performance scale: validity and responsiveness in chronic pain.

    PubMed

    Nieuwenhuizen, Mieke G; de Groot, Sonja; Janssen, Thomas W J; van der Maas, Lia C C; Beckerman, Heleen

    2014-01-01

    The construct validity and construct responsiveness of the performance scale of the Canadian Occupational Performance Measure (COPM) were measured in 87 newly admitted patients with chronic pain attending an outpatient rehabilitation clinic. At admission and after 12 wk, patients completed a COPM interview, the Pain Disability Index (PDI), and the RAND 36-Item Health Survey (RAND-36). We determined the construct validity of the COPM by correlations between the COPM performance scale (COPM-P), the PDI, and the RAND-36 at admission. Construct responsiveness was assessed by calculating the correlations between the change scores (n = 57). The COPM-P did not significantly correlate with the PDI (r = -0.260) or with any subscale of the RAND-36 (r = -0.007 to 0.248). Only a moderate correlation was found between change scores of the COPM-P and PDI (r = -0.380), and weak to moderate correlations were found between change scores of the COPM-P and the RAND-36 (r = -0.031 to 0.388), with the higher correlations for the physical functioning, social functioning, and role limitations (physical) subscales. In patients with chronic pain attending our rehabilitation program, the COPM-P measures something different from the RAND-36 or PDI. Therefore, construct validity of the COPM-P was not confirmed by our data. We were also unable to find support for the ability of the COPM-P to detect changes in occupational performance. PMID:25357091

  12. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DOE PAGESBeta

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian; He, Mingzhu; Law, Beverly E.; Black, T. Andrew; Margolis, Hank A.; Cescatti, Alessandro; Gu, Lianhong; Montagni, Leonardo; et al

    2015-02-25

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.

  13. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    SciTech Connect

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian; He, Mingzhu; Law, Beverly E.; Black, T. Andrew; Margolis, Hank A.; Cescatti, Alessandro; Gu, Lianhong; Montagni, Leonardo; Noormets, Asko; Griffis, Timothy J.; Pilegaard, Kim; Varlagin, Andrej; Valentini, Riccardo; Blanken, Peter D.; Wang, Shaoquiang; Wang, Huimin; Han, Shijie; Yan, Junhau; Li, Yingnian; Zhou, Bingbing; Liu, Yibo

    2015-02-25

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
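
    The half-hourly, daily, and 8-day comparisons above start from a simple bookkeeping step, aggregating tower-derived GPP to coarser temporal scales. The sketch below shows that step on synthetic values; it assumes a pandas time series and is not the TL-LUE, TL-LUEn, or MOD17 model itself.

```python
# Minimal sketch: aggregate half-hourly GPP to daily and MODIS-style 8-day means.
import numpy as np
import pandas as pd

idx = pd.date_range("2010-01-01", periods=48 * 32, freq="30min")  # 32 days, half-hourly
gpp = pd.Series(np.clip(np.sin(np.linspace(0, 64 * np.pi, idx.size)), 0, None), index=idx)

daily = gpp.resample("1D").mean()
eight_day = gpp.resample("8D").mean()
print(daily.head(3))
print(eight_day)
```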

  14. Evaluating Mediated Perception of Narrative Health Messages: The Perception of Narrative Performance Scale

    PubMed Central

    Lee, Jeong Kyu; Hecht, Michael L.; Miller-Day, Michelle; Elek, Elvira

    2011-01-01

    Narrative media health messages have proven effective in preventing adolescents’ substance use, but as yet few measures exist to assess perceptions of them. Without such a measure it is difficult to evaluate the role these messages play in health promotion or to differentiate them from other message forms. In response to this need, a study was conducted to evaluate the Perception of Narrative Performance Scale that assesses perceptions of narrative health messages. A sample of 1185 fifth graders in public schools in Phoenix, Arizona, completed a questionnaire rating two videos presenting narrative substance use prevention messages. Confirmatory factor analyses were computed to identify the factor structure of the scale. Consistent with prior studies, results suggest a three-factor structure for the Perception of Narrative Performance Scale: interest, realism, and identification (with characters). In addition, a path analysis was performed to test the predictive power of the scale. The analysis shows that the scale proves useful in predicting intent to use substances. Finally, practical implications and limitations are discussed. PMID:21822459

  15. Estimation of Crop Gross Primary Production (GPP). 2; Do Scaled (MODIS) Vegetation Indices Improve Performance?

    NASA Technical Reports Server (NTRS)

    Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Zhang, Xiaoyang; Suyker, Andrew; Verma, Shashi; Shuai, Yanmin; Middleton, Elizabeth M.

    2015-01-01

    Satellite remote sensing estimates of Gross Primary Production (GPP) have routinely been made using spectral Vegetation Indices (VIs) over the past two decades. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the green band Wide Dynamic Range Vegetation Index (WDRVIgreen), and the green band Chlorophyll Index (CIgreen) have been employed to estimate GPP under the assumption that GPP is proportional to the product of VI and photosynthetically active radiation (PAR) (where VI is one of four VIs: NDVI, EVI, WDRVIgreen, or CIgreen). However, the empirical regressions between VI*PAR and GPP measured locally at flux towers do not pass through the origin (i.e., the zero X-Y value for regressions). Therefore they are somewhat difficult to interpret and apply. This study investigates (1) what are the scaling factors and offsets (i.e., regression slopes and intercepts) between the fraction of PAR absorbed by chlorophyll of a canopy (fAPARchl) and the VIs, and (2) whether the scaled VIs developed in (1) can eliminate the deficiency and improve the accuracy of GPP estimates. Three AmeriFlux maize and soybean fields were selected for this study, two of which are irrigated and one is rainfed. The four VIs and fAPARchl of the fields were computed with the MODerate resolution Imaging Spectroradiometer (MODIS) satellite images. The GPP estimation performance for the scaled VIs was compared to results obtained with the original VIs and evaluated with standard statistics: the coefficient of determination (R2), the root mean square error (RMSE), and the coefficient of variation (CV). Overall, the scaled EVI obtained the best performance. The performance of the scaled NDVI, EVI and WDRVIgreen was improved across sites, crop types and soil/background wetness conditions. The scaled CIgreen did not improve results, compared to the original CIgreen. The scaled green band indices (WDRVIgreen, CIgreen) did not exhibit superior performance to either the
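
    The "scaling" step described above is essentially a linear calibration of a VI against fAPARchl followed by a GPP estimate proportional to the scaled VI times PAR, evaluated with R2, RMSE, and CV. The sketch below reproduces that workflow on synthetic arrays; all numbers are placeholders, not the AmeriFlux or MODIS data used in the study.

```python
# Minimal sketch: scale a VI to fAPARchl, estimate GPP ~ scaled_VI * PAR, and score it.
import numpy as np

rng = np.random.default_rng(0)
vi = rng.uniform(0.2, 0.9, 200)                                   # e.g. EVI (synthetic)
fapar_chl = 1.2 * vi - 0.15 + rng.normal(0, 0.02, vi.size)        # synthetic fAPARchl
par = rng.uniform(5, 12, vi.size)                                 # synthetic PAR
gpp_obs = 2.5 * fapar_chl * par + rng.normal(0, 1.0, vi.size)     # synthetic tower GPP

slope, intercept = np.polyfit(vi, fapar_chl, 1)                   # scaling factor and offset
scaled_vi = slope * vi + intercept
lue = np.sum(gpp_obs * scaled_vi * par) / np.sum((scaled_vi * par) ** 2)  # zero-intercept fit
gpp_est = lue * scaled_vi * par

rmse = np.sqrt(np.mean((gpp_est - gpp_obs) ** 2))
r2 = np.corrcoef(gpp_est, gpp_obs)[0, 1] ** 2
cv = rmse / np.mean(gpp_obs)
print(f"slope={slope:.2f}, offset={intercept:.2f}, R2={r2:.2f}, RMSE={rmse:.2f}, CV={cv:.2f}")
```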

  16. A Laboratory Study of Heterogeneity and Scaling in Geologic Media

    NASA Astrophysics Data System (ADS)

    Brown, S.; Boitnott, G.; Bussod, G.; Hagan, P.

    2004-05-01

    In rocks and soils, the bulk geophysical and transport properties of the matrix and of fracture systems are determined by the juxtaposition of geometric features at many length scales. For sedimentary materials the length scales are: the pore scale (irregularities in grain surface roughness and cementation), the scale of grain packing faults (and the resulting correlated porosity structures), the scale dominated by sorting or winnowing due to depositional processes, and the scale of geomorphology at the time of deposition. We are studying the heterogeneity and anisotropy in geometry, permeability, and geophysical response from the pore (microscopic), laboratory (mesoscopic), and backyard field (macroscopic) scales. In turn these data are being described and synthesized for development of mathematical models. Eventually, we will perform parameter studies to explore these models in the context of transport in the vadose and saturated zones. We have developed a multi-probe physical properties scanner which allows for the mapping of geophysical properties on a slabbed sample or core. This device allows for detailed study of heterogeneity at those length scales most difficult to quantify using standard field and laboratory practices. The measurement head consists of a variety of probes designed to make local measurements of various properties, including: gas permeability, acoustic velocities (compressional and shear), complex electrical impedance (4 electrode, wide frequency coverage), and ultrasonic reflection (ultrasonic impedance and permeability). We can thus routinely generate detailed geophysical maps of a particular sample. We are testing and modifying these probes as necessary for use on soil samples. As a baseline study we have been characterizing the heterogeneity of a bench-size Berea sandstone block. Berea Sandstone has long been regarded as a laboratory standard in rock properties studies, owing to its uniformity and "typical" physical properties. We find

  17. Sensitivity of School-Performance Ratings to Scaling Decisions

    ERIC Educational Resources Information Center

    Ng, Hui Leng; Koretz, Daniel

    2015-01-01

    Policymakers usually leave decisions about scaling the scores used for accountability to their appointed technical advisory committees and the testing contractors. However, scaling decisions can have an appreciable impact on school ratings. Using middle-school data from New York State, we examined the consistency of school ratings based on two…

  18. Do Plot Scale Studies Yield Useful Data When Assessing Field Scale Practices?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Plot scale data has been used to develop models used to assess field and watershed scale nutrient losses. The objective of this study was to determine if phosphorus (P) loss results from plot scale rainfall simulation studies are “directionally correct” when compared to field scale P losses. Two fie...

  19. Evaluation of full-scale biofilter media performance

    SciTech Connect

    Cardenas-Gonzalez, B.; Ergas, S.J.; Switzenbaum, M.S.; Phillibert, N.

    1999-09-30

    The objective of this study was to characterize the key physical, chemical and biological properties of compost media from a full-scale biofiltration system used to control VOC emissions. Results of media characterization were used to assess the need for operational changes and media replacement. Biofilter media properties evaluated included: moisture content, pH, total organic carbon (TOC) and nitrogen content in water extracts and solid matrix, oxygen uptake rates, and microbial plate counts including total heterotrophs, oligotrophs, actinomycetes and fungi. Samples were taken from various locations and depths in the biofilter after three and five years of system operation. Media moisture content was highly variable, with samples from deeper in the bed drier than surface samples. Low moisture contents were associated with low pH values and low oxygen uptake rates. Total organic carbon contents in water extracts were higher than those of typical biosolids compost in samples near the inlet to the biofilter, possibly due to extracellular polysaccharides. After five years of use, total nitrogen and organic carbon contents in the solid matrix did not significantly differ from initial levels or those in typical biosolids compost.

  20. Happiness Scale Interval Study. Methodological Considerations.

    PubMed

    Kalmijn, W M; Arends, L R; Veenhoven, R

    2011-07-01

    The Happiness Scale Interval Study deals with survey questions on happiness, using verbal response options, such as 'very happy' and 'pretty happy'. The aim is to estimate what degrees of happiness are denoted by such terms in different questions and languages. These degrees are expressed in numerical values on a continuous [0,10] scale, which are then used to compute 'transformed' means and standard deviations. Transforming scores on different questions to the same scale allows broadening the World Database of Happiness considerably. The central purpose of the Happiness Scale Interval Study is to identify the happiness values at which respondents change their judgment from e.g. 'very happy' to 'pretty happy' or the reverse. This paper deals with the methodological/statistical aspects of this approach. The central question is always how to convert the frequencies at which a sample gives the different possible responses to the same question into information on the happiness distribution in the relevant population. The primary (cl)aim of this approach is to achieve this in a (more) valid way. To this end, a model is introduced that allows for dealing with happiness as a latent continuous random variable, in spite of the fact that it is measured as a discrete one. The [0,10] scale is partitioned into as many contiguous parts as there are possible ratings in the primary scale. Any subject with a (self-perceived) happiness in the same subinterval is assumed to select the same response. For the probability density function of this happiness random variable, two options are discussed. The first one postulates a uniform distribution within each of the different subintervals of the [0,10] scale. On the basis of these results, the mean value and variance of the complete distribution can be estimated. The method is described, including the precision of the estimates obtained in this way. The second option assumes the happiness distribution to be described
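
    The first option described above (uniform density within each response subinterval) leads directly to a transformed mean and standard deviation via the law of total variance. The sketch below shows that computation; the cut points and response frequencies are invented for illustration and are not estimates from the Happiness Scale Interval Study.

```python
# Minimal sketch: transformed mean and SD for a verbal happiness scale mapped onto [0,10],
# assuming respondents are uniformly distributed within their category's subinterval.
def transformed_mean_sd(boundaries, counts):
    """boundaries: category cut points on [0,10]; counts: response frequency per category."""
    total = sum(counts)
    intervals = list(zip(boundaries[:-1], boundaries[1:]))
    mean = sum((n / total) * (lo + hi) / 2.0 for (lo, hi), n in zip(intervals, counts))
    var = sum((n / total) * ((hi - lo) ** 2 / 12.0 + ((lo + hi) / 2.0 - mean) ** 2)
              for (lo, hi), n in zip(intervals, counts))   # law of total variance
    return mean, var ** 0.5

# assumed cut points for a 4-option item ('not happy' ... 'very happy') and sample counts
print(transformed_mean_sd(boundaries=[0.0, 3.5, 5.5, 7.5, 10.0], counts=[40, 160, 500, 300]))
```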

  1. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for those structures that may bind to a drug target. Virtual screening is typically performed by docking code, which often runs sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting maximizing the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself, but also for structure preparation. Performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
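
    The chunking argument above can be made concrete with a toy wall-time model: each chunk pays a fixed scheduling and data-staging overhead, and the campaign finishes when the largest chunk does. The per-ligand docking time and overhead below are assumptions for illustration, not MoSGrid measurements.

```python
# Minimal sketch: wall time and speedup for chunked, independent docking runs.
import math

def wall_time(n_ligands, n_chunks, t_per_ligand=60.0, overhead_per_chunk=120.0):
    ligands_per_chunk = math.ceil(n_ligands / n_chunks)
    return overhead_per_chunk + ligands_per_chunk * t_per_ligand  # slowest chunk dominates

n = 10_000
serial = n * 60.0
for chunks in (10, 100, 500, 2000):
    t = wall_time(n, chunks)
    print(f"{chunks:>4d} chunks: wall time {t / 3600:5.1f} h, speedup {serial / t:7.1f}x")
```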

  2. Working memory performance inversely predicts spontaneous delta and theta-band scaling relations.

    PubMed

    Euler, Matthew J; Wiltshire, Travis J; Niermeyer, Madison A; Butner, Jonathan E

    2016-04-15

    Electrophysiological studies have strongly implicated theta-band activity in human working memory processes. Concurrently, work on spontaneous, non-task-related oscillations has revealed the presence of long-range temporal correlations (LRTCs) within sub-bands of the ongoing EEG, and has begun to demonstrate their functional significance. However, few studies have yet assessed the relation of LRTCs (also called scaling relations) to individual differences in cognitive abilities. The present study addressed the intersection of these two literatures by investigating the relation of narrow-band EEG scaling relations to individual differences in working memory ability, with a particular focus on the theta band. Fifty-four healthy adults completed standardized assessments of working memory and separate recordings of their spontaneous, non-task-related EEG. Scaling relations were quantified in each of the five classical EEG frequency bands via the estimation of the Hurst exponent obtained from detrended fluctuation analysis. A multilevel modeling framework was used to characterize the relation of working memory performance to scaling relations as a function of general scalp location in Cartesian space. Overall, results indicated an inverse relationship between both delta and theta scaling relations and working memory ability, which was most prominent at posterior sensors, and was independent of either spatial or individual variability in band-specific power. These findings add to the growing literature demonstrating the relevance of neural LRTCs for understanding brain functioning, and support a construct- and state-dependent view of their functional implications. PMID:26872594
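
    For readers unfamiliar with how a scaling exponent is extracted, the sketch below is a minimal detrended fluctuation analysis (DFA) run on synthetic white noise (expected exponent near 0.5); in the study the input would instead be band-limited EEG amplitude, and this sketch is not the authors' analysis pipeline.

```python
# Minimal sketch: first-order DFA estimate of a scaling (Hurst-like) exponent.
import numpy as np

def dfa_exponent(signal, window_sizes):
    profile = np.cumsum(signal - np.mean(signal))          # integrated, mean-removed series
    fluctuations = []
    for win in window_sizes:
        n_windows = len(profile) // win
        rms = []
        for i in range(n_windows):
            seg = profile[i * win:(i + 1) * win]
            x = np.arange(win)
            trend = np.polyval(np.polyfit(x, seg, 1), x)   # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope                                           # ~0.5 white noise, >0.5 for LRTCs

rng = np.random.default_rng(1)
noise = rng.normal(size=20_000)
print(f"DFA exponent for white noise ~ {dfa_exponent(noise, [64, 128, 256, 512, 1024]):.2f}")
```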

  3. Performance studies of electrochromic displays

    NASA Astrophysics Data System (ADS)

    Ionescu, Ciprian; Dobre, Robert Alexandru

    2015-02-01

    The idea of having flexible, very thin, light, low-power and even low-cost display devices implemented using new materials and technologies is very exciting. Nowadays such devices are more than just concepts: they exist, and they are part of an emerging field, FOLAE (Flexible Organic and Large Area Electronics). Among the advantages of electrochromic devices are their low power consumption (they are non-emissive, i.e. passive) and their ink-on-paper appearance with a good viewing angle. Further studies are still necessary before adequate performance is reached and the functional behavior can be predicted. This paper presents the results of research conducted to develop an electrical characterization platform for organic electronic display devices, especially electrochromic displays, to permit a thorough study. The hardware part of the platform permits the measurement of different electrical and optical parameters. The charging/discharging behavior of a display element is of particular interest for designing optimal driving circuitry; the corresponding waveforms are presented. The contrast of the display is also measured for different operating conditions, such as driving voltage levels and durations. The effect of temperature on the electrical and optical parameters (contrast) of the display is also presented.

  4. Performance of Young People with Down Syndrome on the Leiter-R and British Picture Vocabulary Scales

    ERIC Educational Resources Information Center

    Glenn, S.; Cunningham, C.

    2005-01-01

    The British picture vocabulary scales (BPVS-II) and the Leiter international performance scales (Leiter-R), both restandardised in 1997, are often used in experimental studies to match individuals with intellectual impairment. Both provide a brief measure of mental age, and cover a wide ability range using a simple format. The BPVS-II assesses…

  5. Emotional Presence in Online Learning Scale: A Scale Development Study

    ERIC Educational Resources Information Center

    Sarsar, Firat; Kisla, Tarik

    2016-01-01

    Although emotions are not a new topic in learning environments, the emerging technologies have changed not only the type of learning environments but also the perspectives of emotions in learning environments. This study designed to develop a survey to assist online instructors to understand students' emotional statement in online learning…

  6. Contemporary Daughter/Son Adult Social Role Performance Rating Scale and Interview Protocol: Development, Content Validation, and Exploratory Investigation

    ERIC Educational Resources Information Center

    Cozad, Dana Everett

    2009-01-01

    The purpose of this study was to develop and content validate a Performance Rating Scale and Interview Protocol, enabling study of the social role performance of adult daughters and sons as they fulfill the societal norms and expectations of adult children. This exploratory investigation was one of 13 contemporary adult social roles completed by…

  7. Impact of technology scaling on analog and RF performance of SOI-TFET

    NASA Astrophysics Data System (ADS)

    Kumari, P.; Dash, S.; Mishra, G. P.

    2015-12-01

    This paper presents an extensive analytical and simulation study of the analog and RF performance of a single-gate semiconductor-on-insulator tunnel field-effect transistor. A 2D drain current model is developed using the initial and final tunneling lengths of the band-to-band tunneling process. The investigation is further extended to a quantitative and comprehensive analysis of analog parameters such as surface potential, electric field, tunneling path, and the transfer characteristics of the device. The impact of scaling the gate oxide thickness and silicon body thickness on the electrostatic and RF performance of the device is discussed. The analytical model results are validated against TCAD Sentaurus device simulation results.

  8. Scaling performance of Ga2O3/GaN nanowire field effect transistor

    NASA Astrophysics Data System (ADS)

    Li, Chi-Kang; Yeh, Po-Chun; Yu, Jeng-Wei; Peng, Lung-Han; Wu, Yuh-Renn

    2013-10-01

    A three-dimensional finite element solver is applied to investigate the performance of Ga2O3/GaN nanowire transistors. Experimental nanowire results of 50 nm gate length are provided to compare with the simulation, and they show good agreement. The performance of a shorter gate length (<50 nm) is studied and scaling issues of the short-channel effect are analyzed. With a better surrounding gate design and a recessed gate approach, the optimal conditions for a 20 nm gate length are explored in this paper.

  9. Assessing the performance of multi-purpose channel management measures at increasing scales

    NASA Astrophysics Data System (ADS)

    Wilkinson, Mark; Addy, Steve

    2016-04-01

    highlights the importance of structure design (porosity and degree of channel blockage) and placement in zones of high sediment transport to optimise performance. At the large scale, well designed flood embankment lowering can improve connectivity to the floodplain during low to medium return period events. However, ancillary works to stabilise the bank failed thus emphasising the importance of letting natural processes readjust channel morphology and hydrological connections to the floodplain. Although these trial measures demonstrated limited effects, this may be in part owing to restrictions in the range of hydroclimatological conditions during the study period and further work is needed to assess the performance under more extreme conditions. This work will contribute to refining guidance for managing channel coarse sediment problems in the future which in turn could help mitigate flooding using natural approaches.

  10. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    SciTech Connect

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin; Pascucci, Valerio; Gamblin, Todd; Brunst, Holger

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  11. Continuous bench-scale tests to assess METHOXYCOAL process performance

    SciTech Connect

    Knight, R.A.

    1991-01-01

    Laboratory-scale research conducted at Southern Illinois University at Carbondale (SIUC) has shown that coal pyrolysis in the presence of CH4 and small quantities of O2 (the METHOXYCOAL process) can produce high yields of liquids and valuable chemicals compared to conventional pyrolysis. The addition of MgO, coal ash, and clays has been shown to further enhance coal conversion. The goal of this two-year project is to build upon that laboratory research by conducting continuous bench-scale tests at IGT. Tests are being conducted with IBC-101 coal under CH4/O2 blends with and without added coal ash, MgO, and/or clays, at temperatures and pressures up to 1000°F and 200 psig. These tests will provide data to select preferred operating conditions for chemicals production from high-sulfur Illinois coals.

  12. STATISTICAL EVALUATION OF SMALL SCALE MIXING DEMONSTRATION SAMPLING AND BATCH TRANSFER PERFORMANCE - 12093

    SciTech Connect

    GREER DA; THIEN MG

    2012-01-12

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), has previously presented the results of mixing performance in two different sizes of small scale DSTs to support scale up estimates of full scale DST mixing performance. Currently, sufficient sampling of DSTs is one of the largest programmatic risks that could prevent timely delivery of high level waste to the WTP. WRPS has performed small scale mixing and sampling demonstrations to study the ability to sufficiently sample the tanks. The statistical evaluation of the demonstration results, which leads to the conclusion that the two scales of small DST behave similarly and that full scale performance is predictable, will be presented. This work is essential to reduce the risk of requiring a new dedicated feed sampling facility and will guide future optimization work to ensure the waste feed delivery mission will be accomplished successfully. This paper will focus on the analytical data collected from mixing, sampling, and batch transfer testing from the small scale mixing demonstration tanks and how those data are being interpreted to begin to understand the relationship between samples taken prior to transfer and samples from the subsequent batches transferred. An overview of the types of data collected and examples of typical raw data will be provided. The paper will then discuss the processing and manipulation of the data which is necessary to begin evaluating sampling and batch transfer performance. This discussion will also include the evaluation of the analytical measurement capability with regard to the simulant material used in the demonstration tests. The

  13. Performance of Extended Local Clustering Organization (LCO) for Large Scale Job-Shop Scheduling Problem (JSP)

    NASA Astrophysics Data System (ADS)

    Konno, Yohko; Suzuki, Keiji

    This paper describes an approach to developing a general-purpose solution algorithm for large-scale problems using “Local Clustering Organization (LCO)” as a new solution method for the job-shop scheduling problem (JSP). Building on the effective large-scale scheduling performance of the usual LCO, we examine a way of solving the JSP that maintains stability while inducing better solutions. In this study, to improve solution performance for the JSP, the optimization process used by LCO is examined, and the scheduling solution structure is extended to a new structure based on machine division. A solving method that introduces effective local clustering for this solution structure is proposed as an extended LCO. The extended LCO uses an algorithm that improves the scheduling evaluation efficiently by clustering a parallel search extending over plural machines. Results obtained by applying the extended LCO to problems of various scales show that it minimizes makespan while maintaining stable performance.
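
    For context on the objective being optimized, the sketch below is a generic makespan evaluation for a job-shop schedule decoded from a job-repetition sequence; it is only an illustration of the JSP cost function, not the LCO or extended LCO algorithm, and the three-job instance is invented.

```python
# Minimal sketch: makespan of a JSP schedule decoded from a job-repetition sequence.
def makespan(jobs, job_sequence):
    job_ready = [0.0] * len(jobs)   # finish time of each job's last scheduled operation
    mach_ready = {}                 # time at which each machine becomes free
    next_op = [0] * len(jobs)       # index of each job's next unscheduled operation
    for j in job_sequence:          # each job id appears once per operation
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0.0))
        job_ready[j] = mach_ready[machine] = start + duration
        next_op[j] += 1
    return max(job_ready)

jobs = [[("M1", 3), ("M2", 2)],     # job 0: (machine, processing time) pairs, in order
        [("M2", 2), ("M1", 4)],     # job 1
        [("M1", 2), ("M2", 3)]]     # job 2
print(makespan(jobs, [0, 1, 2, 1, 0, 2]))   # -> 9.0 for this dispatch order
```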

  14. The Role of Performance-Based Assessments in Large-Scale Accountability Systems: Lessons Learned from the Inside. Technical Guidelines for Performance Assessment.

    ERIC Educational Resources Information Center

    Pearson, P. David; Calfee, Robert; Webb, Patricia L. Walker; Fleischer, Steve

    In 1996, a subcommittee of the State Collaborative on Assessment and Student Standards commissioned a study of the use of performance-based assessments in large-scale accountability systems. The idea was to look into current state assessment work on performance-based assessments to see what has been learned, but not widely reported, by those who…

  15. A Comfortability Level Scale for Performance of Cardiopulmonary Resuscitation.

    ERIC Educational Resources Information Center

    Otten, Robert Drew

    1984-01-01

    This article discusses the development of an instrument to appraise the comfortability level of college students in performing cardiopulmonary resuscitation. Methodology and findings of data collection are given. (Author/DF)

  16. Evaluating Feed Delivery Performance in Scaled Double-Shell Tanks

    SciTech Connect

    Lee, Kearn P.; Thien, Michael G.

    2013-11-07

    The Hanford Tank Operations Contractor (TOC) and the Hanford Waste Treatment and Immobilization Plant (WTP) contractor are both engaged in demonstrating mixing, sampling, and transfer system capability using simulated Hanford High-Level Waste (HLW) formulations. This work represents one of the remaining technical issues with the high-level waste treatment mission at Hanford. The TOCs' ability to adequately mix and sample high-level waste feed to meet the WTP WAC Data Quality Objectives must be demonstrated. The tank mixing and feed delivery must support both TOC and WTP operations. The tank mixing method must be able to remove settled solids from the tank and provide consistent feed to the WTP to facilitate waste treatment operations. Two geometrically scaled tanks were used with a broad spectrum of tank waste simulants to demonstrate that mixing using two rotating mixer jet pumps yields consistent slurry compositions as the tank is emptied in a series of sequential batch transfers. Testing showed that the concentration of slow settling solids in each transfer batch was consistent over a wide range of tank operating conditions. Although testing demonstrated that the concentration of fast settling solids decreased by up to 25% as the tank was emptied, batch-to-batch consistency improved as mixer jet nozzle velocity in the scaled tanks increased.

  17. Scaling study for SP-100 reactor technology

    NASA Astrophysics Data System (ADS)

    Marshall, A. C.; McKissock, B.

    Several ways of extending SP-100 reactor technology to higher power levels were explored. One approach was to use the reference SP-100 pin design and increase the fuel pin length and the number of fuel pins as needed to provide higher capability. The impact on scaling of a modified and advanced SP-100 reactor technology was also explored. Finally, the effect of using alternative power conversion subsystems with SP-100 reactor technology was investigated. One of the principal concerns for any space-based system is mass; consequently, this study focused on estimating reactor, shield, and total system mass. The RSMASS code (Marshall 1986) was used to estimate reactor and shield mass. Simple algorithms developed at NASA-Lewis were used to estimate the balance of system mass. Power ranges from 100 kWe to 10 MWe were explored assuming both one year and seven years of operation. Thermoelectric, Stirling, Rankine, and Brayton power conversion systems were investigated. The impact on safety, reliability, and other system attributes, caused by extending the technology to higher power levels, was also investigated.

  18. Two-phase performance of scale models of a primary coolant pump. Final report

    SciTech Connect

    Kamath, P.S.; Swift, W.L.

    1982-09-01

    Scale models of PWR primary coolant pumps were tested in steady and transient two-phase flows in order to generate a data base to aid in the development and assessment of pump performance models for use in computer codes for the analysis of postulated Loss-of-Coolant Accidents (LOCA). This report summarizes and unifies the single and two-phase air/water and steam/water performance data on the relatively high specific speed pumps (4200 rpm·(US gpm)^(1/2)/ft^(3/4)) tested in these programs. These data are compared with those acquired from tests on the lower specific speed Semiscale pump (926 rpm·(US gpm)^(1/2)/ft^(3/4)) to better understand the mechanism of performance degradation with increasing void fraction. The study revealed that scaling down the size of the pump while maintaining the same design specific speed produces very similar performance characteristics both in single and two-phase flows. Effects due to size and operating speed were not discernible within the range of test conditions and within experimental uncertainties. System pressure appears to affect the rate of degradation as a function of void fraction. The report includes a survey of the existing two-phase pump performance correlations. A correlation synthesized from the B and W, C-E and Creare two-phase data is also presented.
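
    The specific speed quoted above is the US-customary pump similarity parameter Ns = N·sqrt(Q)/H^(3/4), with N in rpm, Q in US gpm, and H in ft. The sketch below simply evaluates that definition; the speed, flow, and head values are illustrative assumptions, and only the quoted Ns values (4200 and 926) come from the report.

```python
# Minimal sketch: US-customary pump specific speed, Ns = N * sqrt(Q) / H**0.75.
def specific_speed(rpm, flow_gpm, head_ft):
    return rpm * flow_gpm ** 0.5 / head_ft ** 0.75

# an assumed operating point: 3500 rpm, 300 US gpm, 30 ft of head
print(f"Ns = {specific_speed(3500, 300, 30):.0f} rpm*(US gpm)^(1/2)/ft^(3/4)")
```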

  19. Aeroacoustic and Performance Simulations of a Test Scale Open Rotor

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.

    2013-01-01

    This paper explores a comparison between experimental data and numerical simulations of the historical baseline F31/A31 open rotor geometry. The experimental data were obtained at the NASA Glenn Research Center's Aeroacoustic facility and include performance and noise information for a variety of flow speeds (matching take-off and cruise). The numerical simulations provide both performance and aeroacoustic results using NUMECA's Fine-Turbo analysis code. A non-linear harmonic method is used to capture the rotor/rotor interaction.

  20. On-Line Performance Assessment Using Rating Scales.

    ERIC Educational Resources Information Center

    Stahl, John; Shumway, Rebecca; Bergstrom, Betty; Fisher, Anne

    1997-01-01

    The development of an online performance assessment instrument, the Assessment of Motor and Process Skills, is reported. Issues addressed include development, implementation, and validation of the scoring rubric in an extended Rasch model, rater training, and implementation of the assessment in a computerized program. (SLD)

  1. Developing and Testing the Guitar Songleading Performance Scale (GSPS)

    ERIC Educational Resources Information Center

    Silverman, Michael J.

    2011-01-01

    Guitar songleading is a critical component in music education and music therapy training curricula. However, at present, there is no standardized instrument to evaluate guitar songleading performance that is both valid and reliable. The purpose of this article is to describe the construction, development, and testing of a guitar songleading…

  2. PILOT SCALE EXPERIMENTS TO IMPROVE PERFORMANCE OF ELECTROSTATIC PRECIPITATORS

    EPA Science Inventory

    The paper describes pilot plant experience with techniques with a potential for improving the performance of electrostatic precipitators (ESPs) by using a novel rapping reentrainment collector and flexible steel cable (in place of solid large-diameter discharge electrodes) for bo...

  3. Referred Students' Performance on the Reynolds Intellectual Assessment Scales and the Wechsler Intelligence Scale for Children--Fourth Edition

    ERIC Educational Resources Information Center

    Edwards, Oliver W.; Paulin, Rachel V.

    2007-01-01

    This study investigates the convergent relations of the Reynolds Intellectual Assessment Scales (RIAS) and the Wechsler Intelligence Scale for Children--Fourth Edition (WISC-IV). Data from counterbalanced administrations of each instrument to 48 elementary school students referred for psychoeducational testing were examined. Analysis of the 96…

  4. Actuarial Assessment of Wechsler Verbal-Performance Scale Differences as Signs of Lateralized Cerebral Impairment.

    ERIC Educational Resources Information Center

    Leli, Dano A.; Filskov, Susan B.

    1981-01-01

    The long-standing clinical lore which holds that a discrepancy between Wechsler-Bellevue Verbal-Performance Scale weighted scores is a more sensitive sign of lateralized brain damage than a discrepancy between Verbal-Performance Scale IQ is investigated. The results do not support the clinical lore. (Author/AL)

  5. The Effects of Scaling Tennis Equipment on the Forehand Groundstroke Performance of Children

    PubMed Central

    Larson, Emma J.; Guggenheimer, Joshua D.

    2013-01-01

    The modifications that have taken place within youth sports have made games, such as basketball, soccer, or tennis, easier for children to play. The purpose of this study was to determine the effects low compression (LC) tennis balls and scaled tennis courts had on the forehand groundstroke performance of children. The forehand groundstroke performances of eight subjects (8.10 ± 0.74 yrs) were measured using LC tennis balls on a scaled tennis court and standard compression (SC) balls on a standard court. Forehand groundstroke performance was assessed by the ForeGround test, which measures the Velocity Precision Success Index (VPS) and Velocity Precision Index (VP). Participants attempted three different forehand rally patterns on two successive days, using LC balls on the 18.3 m court one day and SC balls on the 23.8 m court the other. When using LC balls, participants recorded higher overall VPS performance scores (p < 0.001) for each non-error stroke as well as higher VP scores (p = 0.01). The results of this study confirmed that the use of modified balls and modified court size may increase the control, velocity and overall success rate of the tennis forehand groundstroke of children. Key Points: This study observed the effects that modified tennis balls and courts had on the forehand groundstroke performance of children. Modified ball compression and modified court size can increase control, velocity and overall success of tennis performance. Children will have more success learning the game of tennis using modified equipment than using standard equipment. PMID:24149812

  6. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  7. Some observations on hyperuniform disordered photonic bandgap materials, from microwave scale study to infrared scale study

    NASA Astrophysics Data System (ADS)

    Tsitrin, Sam; Nahal, Geev; Florescu, Marian; Man, Weining; San Francisco State University Team; University of Surrey Team

    2015-03-01

    A novel class of disordered photonic materials, hyperuniform disordered solids (HUDS), has attracted increasing attention. Recently they have been experimentally proven to provide a complete photonic band gap (PBG) when made with alumina or Si, as well as a single-polarization PBG when made with plastic with a refractive index of 1.6. These PBGs were shown to be real energy gaps with zero density of photonic states, rather than mobility gaps of low transmission due to scattering. Using cm-scale samples and microwave experiments, we reveal the nature of the photonic modes existing in these disordered materials by analyzing phase delay and mapping the field distribution profile inside them. We also show how to extend the proof-of-concept microwave studies of these materials to proof-of-scale studies for real applications, by designing and fabricating these disordered photonic materials at submicron scale with functional devices for 1.55 micron wavelength. The intrinsic isotropy of the disordered structure is an inherent advantage associated with the absence of limitations of orientational order, which is shown to provide valuable freedom in defect architecture design that is impossible in periodic structures. NSF Award DMR-1308084, the University of Surrey's FRSF and Santander awards.

  8. System characteristics and performance evaluation of a trailer-scale downdraft gasifier with different feedstock.

    PubMed

    Balu, Elango; Chung, J N

    2012-03-01

    The main objective of this study is to investigate the thermal profiles of a trailer-scale gasifier in different zones during the course of gasification and also to elaborate on the design, characteristics and performance of the gasification system using different biomass feedstocks. The purpose is to emphasize the effectiveness of distributed power generation systems and demonstrate the feasibility of such gasification systems in real-world scenarios, where ligno-cellulosic biomass resources are widely available and distributed across the board. Experimental data on the thermal profiles with respect to five different zones in the gasifier and a comprehensive thermal-chemical equilibrium model to predict the syngas composition are presented in detail. Four different feedstocks were evaluated: pine wood, horse manure, red oak, and cardboard. The effects of C, H, O content variations in the feedstock on the thermal profiles, and the efficiency and viability of the trailer-scale gasifier are also discussed. PMID:22265984

  9. Diagnostics and performance of a 1/4-scale MPD thruster

    NASA Technical Reports Server (NTRS)

    York, T. M.; Zakrzwski, C.; Soulas, G.

    1990-01-01

    The primary purpose of this study is to evaluate the performance and scaling characteristics of a 1/4-scale magnetoplasmadynamic (MPD) thruster operating with and without applied magnetic nozzle fields. The experiment was carried out with separate pulse forming networks for the thruster and the applied-field solenoidal coil. A strong correlation of impact pressure signal with thruster current was noted. Also striking was the larger impact signal when the magnetic nozzle field was applied. Measurements of electron density (Ne) and electron temperature (Te) from Langmuir probes have been made. Compatible interpretation of pressure with Ne and Te allows local velocity to be mapped, thus enhancing understanding of the acceleration process.

  10. Development and performance evaluation of frustum cone shaped churn for small scale production of butter.

    PubMed

    Kalla, Adarsh M; Sahu, C; Agrawal, A K; Bisen, P; Chavhan, B B; Sinha, Geetesh

    2016-05-01

    The present research was intended to develop a small-scale butter churn and evaluate its performance by altering churning temperature and churn speed during butter making. In the present study, the cream was churned at different temperatures (8, 10 and 12 °C) and churn speeds (35, 60 and 85 rpm). The optimum parameters of churning time (40 min), moisture content (16 %) and overrun (19.42 %) were obtained when cream was churned at a churning temperature of 10 °C and a churn speed of 60 rpm. Using appropriate conditions of churning temperature and churn speed, high quality butter can be produced at cottage scale. PMID:27407187

  11. SEASAT SAR performance evaluation study

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The performance of the SEASAT synthetic aperture radar (SAR) sensor was evaluated using data processed by the MDA digital processor. Two particular aspects are considered: the location accuracy of image data, and the calibration of the measured backscatter amplitude of a set of corner reflectors. The image location accuracy was assessed by selecting identifiable targets in several scenes, converting their image locations to UTM coordinates, and comparing the results to map sheets. The error standard deviation is measured to be approximately 30 meters. The amplitude was calibrated by measuring the responses of the Goldstone corner reflector array and comparing the results to theoretical values. A linear regression of the measured against theoretical values results in a slope of 0.954 with a correlation coefficient of 0.970.
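
    The calibration step described above amounts to a least-squares fit of measured against theoretical corner-reflector responses. A minimal sketch follows, assuming NumPy; the arrays are synthetic placeholders, not SEASAT measurements.

      import numpy as np

      # Synthetic placeholder values (dB); not SEASAT data.
      theoretical = np.array([20.0, 23.0, 26.0, 29.0, 32.0])
      measured = np.array([19.1, 22.4, 24.6, 27.9, 30.2])

      # Least-squares slope/intercept and correlation coefficient,
      # analogous to the 0.954 slope and 0.970 correlation reported above.
      slope, intercept = np.polyfit(theoretical, measured, 1)
      r = np.corrcoef(theoretical, measured)[0, 1]
      print(f"slope={slope:.3f}, r={r:.3f}")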

  12. Performance evaluation of bimodal thermite composites: nano- vs micron-scale particles

    SciTech Connect

    Moore, K. M.; Pantoya, M.; Son, S. F.

    2004-01-01

    In recent years many studies of metastable interstitial composites (MIC) have shown vast combustion improvements over traditional thermite materials. The main difference between these two materials is the size of the fuel particles in the mixture. Decreasing the fuel size from the micron to nanometer range significantly increases the combustion wave speed and ignition sensitivity. Little is known, however, about the critical level of nano-sized fuel particles needed to enhance the performance of the traditional thermite. Ignition sensitivity experiments were performed using Al/MoO3 pellets at a theoretical maximum density of 50% (2 g/cm3). The Al fuel particles were prepared as bi-modal size distributions with micron (i.e., 4 and 20 µm diameter) and nano-scale Al particles. The micron-scale Al was replaced in 10% increments by 80 nm Al particles until the fuel was 100% 80 nm Al. These bi-modal distributions allow the unique characteristics of nano-scale materials to be better understood. The pellets were ignited using a 50-W CO2 laser. High speed imaging diagnostics were used to measure ignition delay times, and micro-thermocouples were used to measure ignition temperatures. Combustion wave speeds were also examined.

  13. Psychomotor vigilance performance predicted by Epworth Sleepiness Scale scores in an operational setting with the United States Navy.

    PubMed

    Shattuck, Nita Lewis; Matsangas, Panagiotis

    2015-04-01

    It is critical in operational environments to identify individuals who are at higher risk of psychomotor performance impairments. This study assesses the utility of the Epworth Sleepiness Scale for predicting degraded psychomotor vigilance performance in an operational environment. Active duty crewmembers of a US Navy destroyer (N = 69, age 21-54 years) completed the Epworth Sleepiness Scale at the beginning of the data collection period. Participants wore actigraphs and completed sleep diaries for 11 days. Psychomotor vigilance tests were administered throughout the data collection period using a 3-min version of the psychomotor vigilance test on the actigraphs. Crewmembers with elevated scores on the Epworth Sleepiness Scale (i.e. Epworth Sleepiness Scale >10) had 60% slower reaction times on average, and experienced at least 60% more lapses and false starts compared with individuals with normal Epworth Sleepiness Scale scores (i.e. Epworth Sleepiness Scale ≤ 10). Epworth Sleepiness Scale scores were correlated with daily time in bed (P < 0.01), sleep (P < 0.05), mean reaction time (P < 0.001), response speed (1/reaction time) (P < 0.05), the slowest 10% of response speeds (P < 0.001), lapses (P < 0.01), and the sum of lapses and false starts (P < 0.001). In this chronically sleep-deprived population, elevated Epworth Sleepiness Scale scores identified that subset of the population who experienced degraded psychomotor vigilance performance. We theorize that Epworth Sleepiness Scale scores are an indication of personal sleep debt that varies depending on one's individual sleep requirement. In the absence of direct performance metrics, we also advocate that the Epworth Sleepiness Scale can be used to determine the prevalence of excessive sleepiness (and thereby assess the risk of performance decrements). PMID:25273376
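
    A minimal sketch of the grouping logic implied above: split subjects by the ESS > 10 cutoff and compare mean psychomotor vigilance reaction times between groups. The record structure and numbers are hypothetical, for illustration only.

      from statistics import mean

      # Hypothetical per-subject records (ESS score, mean PVT reaction time in ms).
      crew = [
          {"ess": 7, "mean_rt_ms": 255},
          {"ess": 12, "mean_rt_ms": 410},
          {"ess": 9, "mean_rt_ms": 270},
          {"ess": 14, "mean_rt_ms": 430},
      ]

      elevated = [c["mean_rt_ms"] for c in crew if c["ess"] > 10]   # elevated ESS
      normal = [c["mean_rt_ms"] for c in crew if c["ess"] <= 10]    # normal ESS
      slowdown = mean(elevated) / mean(normal) - 1
      print(f"elevated-ESS group is {slowdown:.0%} slower on average")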

  14. Full-scale studies of alum recovery

    SciTech Connect

    1988-01-01

    Full-scale testing was conducted at the Williams Water Treatment Plant to evaluate alum recovery. Two tests were conducted, one in August and one in September. The objective was to determine the dewaterability of the solids remaining after alum recovery on sand drying beds and to evaluate the effectiveness of the recovered alum as a coagulant in the water plant and for phosphorus removal at the wastewater plant.

  15. Experimental Studies of the Effects of Anode Composition and Process Parameters on Anode Slime Adhesion and Cathode Copper Purity by Performing Copper Electrorefining in a Pilot-Scale Cell

    NASA Astrophysics Data System (ADS)

    Zeng, Weizhi; Wang, Shijie; Free, Michael L.

    2016-06-01

    Copper electrorefining tests were conducted in a pilot-scale cell in a commercial tankhouse environment to study the effects of anode composition, current density, cathode blank width, and flow rate on anode slime behavior and cathode copper purity. Three different types of anodes (high, mid, and low impurity levels) were used in the tests and were analyzed under SEM/EDS. The harvested copper cathodes were weighed and analyzed for impurity concentrations using DC Arc. The adhered slimes and released slimes were collected, weighed, and analyzed for composition using ICP. It was shown that the lead-to-arsenic ratio in the anodes affects the sintering and coalescence of slime particles. High current density conditions can improve anode slime adhesion and cathode purity by intensifying slime particle coalescence and dissolving part of the particles. Wide cathode blanks can raise the anodic current densities significantly and result in massive release of large slime particle aggregates, which are not likely to contaminate the cathode copper. Low flow rate can cause anode passivation and increase local temperatures in front of the anode, which leads to very intense sintering and coalescence of slime particles. The results and analyses of the tests suggest potential solutions for the industrial copper electrorefining process.

  16. Reflective Thinking Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Basol, Gulsah; Evin Gencel, Ilke

    2013-01-01

    The purpose of this study was to adapt the Reflective Thinking Scale to Turkish and investigate its validity and reliability over a sample of Turkish university students. The Reflective Thinking Scale (RTS) is a 5-point Likert scale (ranging from 1, corresponding to Agree Completely, through 3, Neutral, to 5, Not Agree Completely), designed to measure…

  17. Hydrologic and pollutant removal performance of stormwater biofiltration systems at the field scale

    NASA Astrophysics Data System (ADS)

    Hatt, Belinda E.; Fletcher, Tim D.; Deletic, Ana

    2009-02-01

    Biofiltration systems are a recommended and increasingly popular technology for stormwater management; however, there is a general lack of performance data for these systems, particularly at the field scale. The objective of this study was to investigate the hydrologic and pollutant removal performance of three field-scale biofiltration systems in two different climates. Biofilters were shown to effectively attenuate peak runoff flow rates by at least 80%. Performance assessment of a lined biofilter demonstrated that retention of inflow volumes by the filter media, for subsequent loss via evapotranspiration, reduced runoff volumes by 33% on average. Retention of water was found to be most influenced by inflow volumes, although only small to medium storms could be assessed. Vegetation was shown to be important for maintaining hydraulic capacity, because root growth and senescence countered compaction and clogging. Suspended solids and heavy metals were effectively removed, irrespective of the design configuration, with load reductions generally in excess of 90%. In contrast, nutrient retention was variable, and ranged from consistent leaching to effective and reliable removal, depending on the design. To ensure effective removal of phosphorus, a filter medium with a low phosphorus content should be selected. Nitrogen is more difficult to remove because it is highly soluble and strongly influenced by the variable wetting and drying regime that is inherent in biofilter operation. The results of this research suggest that reconfiguration of biofilter design to manage the deleterious effects of drying on biological activity is necessary to ensure long term nitrogen removal.

  18. The reliability of the Personal and Social Performance scale - informing its training and use.

    PubMed

    White, Sarah; Dominise, Christianne; Naik, Dhruv; Killaspy, Helen

    2016-09-30

    Social functioning is an important outcome in studies of people with schizophrenia. Most measures of social function include a person's ability to manage everyday activities as well as their ability to engage in leisure and occupational activities. The Personal and Social Performance (PSP) scale assesses functioning across four dimensions (socially useful activities, personal and social relationships, self-care, disturbing and aggressive behaviours) rather than one global score and thus has been reported to be easier to use. In a pan-European study of people with severe mental illness, a team of 26 researchers received training in rating the scale, after which the inter-rater reliability (IRR) was assessed and found to be not sufficiently high. A brief survey of the researchers elicited information with which to explore the low IRR and their experience of using the PSP. Clinicians were found to have higher IRR, in particular psychologists. Patients' employment status was found to be the most important predictor of PSP. Researchers used multiple sources of information when rating the scale. Sufficient training is required to ensure IRR, particularly for non-clinical researchers, if the PSP is to be established as a reliable research tool. PMID:27428085

  19. Factor analysis of two versions of the Oral Impacts on Daily Performance scale.

    PubMed

    Pilotto, Luciane M; Scalco, Giovana P C; Abegg, Claides; Celeste, Roger K

    2016-06-01

    The aim of this study was to explore the factorial structure and agreement of two scoring versions of the Oral Impacts on Daily Performance (OIDP) scale, and to compare the fit of the originally proposed factorial structure, as opposed to an alternative model. Exploratory factor analyses (EFA) were conducted to explore the dimensional structure of the OIDP on a convenience sample of 200 adults (S1). Confirmatory factor analyses (CFA) were performed on a random sample of 720 adults (S2). The Cronbach's alpha coefficients for the total and frequency versions of the OIDP scale were, respectively, 0.81 and 0.70 for S1, and 0.82 and 0.79 for S2, with a quadratic Kappa κ = 0.83 (95% CI: 0.75-0.89) in S1 and κ = 0.92 (95% CI: 0.89-0.94) in S2. Exploratory factor analyses showed one factor for the total version and three factors (non-interpretable) for the frequency version. Confirmatory factor analyses showed that the frequency version for the one-factor model (Model 1) had the best fit [Root Mean Square Error of Approximation (RMSEA) = 0.04; Comparative Fit Index (CFI) = 0.98; Tucker-Lewis index (TLI) = 0.97, χ² P-value < 0.01]. The one-factor model was not significantly different from the original three-factor model. These findings suggest that the scale captures only one overall quality of life dimension, and that the frequency version was the most parsimonious model of the OIDP scale. PMID:26935779
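
    For readers unfamiliar with the internal-consistency statistic reported above, here is a minimal sketch of Cronbach's alpha computed from an item-by-respondent score matrix; the tiny matrix is a made-up placeholder, not OIDP data.

      import numpy as np

      def cronbach_alpha(items):
          """items: 2-D array-like, rows = respondents, columns = scale items."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      # Placeholder responses from four respondents on three items.
      scores = [[2, 3, 3], [1, 2, 2], [4, 4, 5], [3, 3, 4]]
      print(round(cronbach_alpha(scores), 2))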

  20. Scale-dependent performances of CMIP5 earth system models in simulating terrestrial vegetation carbon

    NASA Astrophysics Data System (ADS)

    Jiang, L.; Luo, Y.; Yan, Y.; Hararuk, O.

    2013-12-01

    Mitigation of global changes will depend on reliable projections of the future situation. As the major tools to predict future climate, Earth System Models (ESMs) used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) for the IPCC Fifth Assessment Report have incorporated carbon cycle components, which account for the important fluxes of carbon between the ocean, atmosphere, and terrestrial biosphere carbon reservoirs, and therefore are expected to provide more detailed and more certain projections. However, ESMs are never perfect; evaluating them can help us to identify uncertainties in prediction and set priorities for model development. In this study, we benchmarked carbon in live vegetation in the terrestrial ecosystems simulated by 19 ESMs from CMIP5 against an observationally estimated data set of the global vegetation carbon pool, 'Olson's Major World Ecosystem Complexes Ranked by Carbon in Live Vegetation: An Updated Database Using the GLC2000 Land Cover Product' by Gibbs (2006). Our aim is to evaluate the ability of ESMs to reproduce the global vegetation carbon pool at different scales and to identify possible causes of the biases. We found that the performance of the CMIP5 ESMs is strongly scale-dependent. CESM1-BGC, CESM1-CAM5, CESM1-FASTCHEM and CESM1-WACCM, and NorESM1-M and NorESM1-ME (which share the same model structure) have global sums very similar to the observational data but usually perform poorly at the grid-cell and biome scales. In contrast, MIROC-ESM and MIROC-ESM-CHEM simulate the grid-cell and biome scales best but show larger differences in global sums than the others. Our results will help improve CMIP5 ESMs for more reliable prediction.

  1. Habitat–performance relationships: finding the right metric at a given spatial scale

    PubMed Central

    Gaillard, Jean-Michel; Hebblewhite, Mark; Loison, Anne; Fuller, Mark; Powell, Roger; Basille, Mathieu; Van Moorter, Bram

    2010-01-01

    The field of habitat ecology has been muddled by imprecise terminology regarding what constitutes habitat, and how importance is measured through use, selection, avoidance and other bio-statistical terminology. Added to the confusion is the idea that habitat is scale-specific. Despite these conceptual difficulties, ecologists have made advances in understanding ‘how habitats are important to animals’, and data from animal-borne global positioning system (GPS) units have the potential to help this clarification. Here, we propose a new conceptual framework to connect habitats with measures of animal performance itself—towards assessing habitat–performance relationship (HPR). Long-term studies will be needed to estimate consequences of habitat selection for animal performance. GPS data from wildlife can provide new approaches for studying useful correlates of performance that we review. Recent examples include merging traditional resource selection studies with information about resources used at different critical life-history events (e.g. nesting, calving, migration), uncovering habitats that facilitate movement or foraging and, ultimately, comparing resources used through different life-history strategies with those resulting in death. By integrating data from GPS receivers with other animal-borne technologies and combining those data with additional life-history information, we believe understanding the drivers of HPRs will inform animal ecology and improve conservation. PMID:20566502

  2. Sex Differences in Performance over 7 Years on the Wechsler Intelligence Scale for Children Revised among Adults with Intellectual Disability

    ERIC Educational Resources Information Center

    Kittler, P.; Krinsky-McHale, S. J.; Devenny, D. A.

    2004-01-01

    The aim of this study was to explore changes related to sex differences on the Wechsler Intelligence Scale for Children Revised (WISC-R) subtest performance over a 7-year interval in middle-aged adults with intellectual disability (ID). Cognitive sex differences have been extensively studied in the general population, but there are few reports…

  3. ALTERNATIVE BIOLOGICAL TREATMENT PROCESSES FOR REMEDIATION OF CREOSOTE-CONTAMINATED MATERIALS: BENCH-SCALE TREATABILITY STUDIES

    EPA Science Inventory

    Bench-scale biotreatability studies were performed to determine the most effective of two bioremediation application strategies to ameliorate creosote and pentachlorophenol (PCP) contaminated soils present at the American Creosote Works Superfund site, Pensacola, Florida: olid-ph...

  4. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    PubMed

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for a scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and the wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy-efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME. PMID:24514859

  5. Systematic Land-Surface-Model Performance Evaluation on different time scales

    NASA Astrophysics Data System (ADS)

    Mahecha, M. D.; Jung, M.; Reichstein, M.; Beer, C.; Braakhekke, M.; Carvalhais, N.; Lange, H.; Lasslop, G.; Le Maire, G.; Seneviratne, S. I.; Vetter, M.

    2008-12-01

    Keeping track of the space-time evolution of CO2 and H2O fluxes between the terrestrial biosphere and atmosphere is essential to our understanding of current climate. Monitoring fluxes at site level is one option to characterize the temporal development of ecosystem-atmosphere interactions. Nevertheless, many aspects of ecosystem-atmosphere fluxes become meaningful only when interpreted in time over larger geographical regions. Empirical and process-based models play a key role in spatial and temporal upscaling exercises. In this context, comparative model performance evaluations at site level are indispensable. We present a model evaluation scheme which investigates the model-data agreement separately on different time scales. Observed and modeled time series were decomposed by essentially non-parametric techniques into subsignals (time scales) of characteristic fluctuations. By evaluating the extracted subsignals of observed and modeled C fluxes (gross and net ecosystem exchange, GEE and NEE, and terrestrial ecosystem respiration, TER) separately, we obtain scale-dependent performances for the different evaluation measures. Our diagnostic model comparison allows uncovering time scales of model-data agreement and fundamental mismatch. We focus on the systematic evaluation of three land-surface models: Biome-BGC, ORCHIDEE, and LPJ. For the first time all models were driven by consistent site meteorology and compared to respective eddy-covariance flux observations. The results show that correct net C fluxes may result from systematic (simultaneous) biases in TER and GEE on specific time scales of variation. We localize significant model-data mismatches of the annual-seasonal cycles in time and illustrate the recurrence characteristics of such problems. For example, LPJ underestimates GEE during winter months and overestimates it in early summer at specific sites. In contrast, ORCHIDEE overestimates the flux from July to September at these sites. Finally
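
    The scale-separated evaluation described above can be illustrated with a much simpler stand-in: split observed and modelled flux series into a slow component (moving average) and a fast residual, then score each band separately. This sketch uses a plain moving-average filter rather than the authors' decomposition, and the series are synthetic.

      import numpy as np

      def split_scales(x, window=30):
          """Return (slow, fast) components via a simple moving average."""
          slow = np.convolve(x, np.ones(window) / window, mode="same")
          return slow, x - slow

      rng = np.random.default_rng(0)
      t = np.arange(365)
      obs = np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(t.size)
      mod = 0.9 * np.sin(2 * np.pi * t / 365) + 0.3 * rng.standard_normal(t.size)

      obs_slow, obs_fast = split_scales(obs)
      mod_slow, mod_fast = split_scales(mod)
      print("seasonal r =", round(np.corrcoef(obs_slow, mod_slow)[0, 1], 2))
      print("short-term r =", round(np.corrcoef(obs_fast, mod_fast)[0, 1], 2))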

  6. Measuring Psychosocial Aspects of Well-Being in Older Community Residents: Performance of Four Short Scales.

    ERIC Educational Resources Information Center

    Steiner, Andrea; And Others

    1996-01-01

    Uses Cronbach's alpha and correlational methods, including factor analysis, to evaluate the performance of four short scales measuring psychosocial aspects of well-being (depression, quality of life, sense of coherence, social support) in two samples of community-dwelling persons ages 75 and over. All scales exhibited good range, high internal…

  7. Performance Measurements of the Injection Laser System Configured for Picosecond Scale Advanced Radiographic Capability

    SciTech Connect

    Haefner, L C; Heebner, J E; Dawson, J W; Fochs, S N; Shverdin, M Y; Crane, J K; Kanz, K V; Halpin, J M; Phan, H H; Sigurdsson, R J; Brewer, S W; Britten, J A; Brunton, G K; Clark, W J; Messerly, M J; Nissen, J D; Shaw, B H; Hackel, R P; Hermann, M R; Tietbohl, G L; Siders, C W; Barty, C J

    2009-10-23

    We have characterized the Advanced Radiographic Capability injection laser system and demonstrated that it meets performance requirements for upcoming National Ignition Facility fusion experiments. Pulse compression was achieved with a scaled down replica of the meter-scale grating ARC compressor and sub-ps pulse duration was demonstrated at the Joule-level.

  8. Performance of Ultra-Scale Applications on Leading Vector andScalar HPC Platforms

    SciTech Connect

    Oliker, Leonid; Canning, Andrew; Carter, Jonathan; Shalf, John; Simon, Horst; Ethier, Stephane; Parks, David; Kitawaki, Shigemune; Tsuda, Yoshinori; Sato, Tetsuya

    2005-01-01

    The last decade has witnessed a rapid proliferation of superscalar cache-based microprocessors to build high-end capability and capacity computers, primarily because of their generality, scalability, and cost effectiveness. However, the constant degradation of superscalar sustained performance has become a well-known problem in the scientific computing community. This trend has been widely attributed to the use of superscalar-based commodity components whose architectural designs offer a balance between memory performance, network capability, and execution rate that is poorly matched to the requirements of large-scale numerical computations. The recent development of massively parallel vector systems offers the potential to increase the performance gap for many important classes of algorithms. In this study we examine four diverse scientific applications with the potential to run at ultrascale, from the areas of plasma physics, material science, astrophysics, and magnetic fusion. We compare the performance of the vector-based Earth Simulator (ES) and Cray X1 with leading superscalar-based platforms: the IBM Power3/4 and the SGI Altix. Results demonstrate that the ES vector systems achieve excellent performance on our application suite - the highest of any architecture tested to date.

  9. Scale-up studies on high shear wet granulation process from mini-scale to commercial scale.

    PubMed

    Aikawa, Shouhei; Fujita, Naomi; Myojo, Hidetoshi; Hayashi, Takashi; Tanino, Tadatsugu

    2008-10-01

    A newly developed mini-scale high shear granulator was used for a scale-up study of the wet granulation process from 0.2 L to 200 L scales. Under various operating conditions and granulation bowl sizes, a powder mixture composed of anhydrous caffeine, D-mannitol, dibasic calcium phosphate, pregelatinized starch and corn starch was granulated by adding water. The granules were tabletted, and the disintegration time and hardness of the tablets were evaluated to seek correlations between granulation conditions and tablet properties. As the granulation proceeded, disintegration time was prolonged and hardness decreased. When granulation processes were operated under the condition that the agitator tip speed was the same, similar relationships between granulation time and tablet properties, such as disintegration time and hardness, were observed between the 0.2 L and 11 L scales. Likewise, between the 11 L and 200 L scales a similar relationship was observed when operating under the condition that the force applied to the granulation mass was the same. From the above results, the mini-scale high shear granulator should be a useful tool to predict operating conditions for large-scale granulation from its mini-scale operating conditions, under which similar tablet properties should be obtained. PMID:18827384
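
    A minimal sketch of the "same agitator tip speed" scale-up rule used between the 0.2 L and 11 L bowls above: tip speed equals pi times impeller diameter times rotational speed, so the large-scale speed follows from the diameter ratio. The diameters and speed below are hypothetical.

      import math

      def matched_speed(rpm_small, dia_small_m, dia_large_m):
          """Return the large-scale impeller speed (rpm) that keeps tip speed constant."""
          tip_speed = math.pi * dia_small_m * rpm_small / 60.0   # m/s
          return tip_speed * 60.0 / (math.pi * dia_large_m)      # rpm

      print(round(matched_speed(rpm_small=1000, dia_small_m=0.08, dia_large_m=0.30)))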

  10. A Confirmatory Study of Rating Scale Category Effectiveness for the Coaching Efficacy Scale

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Feltz, Deborah L.; Wolfe, Edward W.

    2008-01-01

    This study extended validity evidence for measures of coaching efficacy derived from the Coaching Efficacy Scale (CES) by testing the rating scale categorizations suggested in previous research. Previous research provided evidence for the effectiveness of a four-category (4-CAT) structure for high school and collegiate sports coaches; it also…

  11. Anaerobic co-digestion of kitchen waste and fruit/vegetable waste: lab-scale and pilot-scale studies.

    PubMed

    Wang, Long; Shen, Fei; Yuan, Hairong; Zou, Dexun; Liu, Yanping; Zhu, Baoning; Li, Xiujin

    2014-12-01

    The anaerobic digestion performances of kitchen waste (KW) and fruit/vegetable waste (FVW) were investigated with a view to establishing an engineering digestion system. The study was conducted from lab-scale to pilot-scale, including batch, single-phase and two-phase experiments. The lab-scale experimental results showed that a FVW to KW ratio of 5:8 gave higher methane productivity (0.725 L CH4/g VS), and was therefore recommended. Two-phase digestion appeared to have higher treatment capacity and better buffering ability at high organic loading rates (OLR) (up to 5.0 g VS/(L d)), compared with the lower OLR of 3.5 g VS/(L d) for the single-phase system. For two-phase digestion, the pilot-scale system showed performances similar to those of the lab-scale one, except that a slightly lower maximum OLR of 4.5 g VS/(L d) was allowed. The pilot-scale system proved to be profitable, with a net profit of $10.173/ton when a higher OLR (⩾ 3.0 g VS/(L d)) was used. PMID:25192798
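
    A minimal sketch of the two operating figures quoted above, organic loading rate (g VS per litre of working volume per day) and specific methane yield (L CH4 per g VS fed); the input quantities are illustrative assumptions, not data from the study.

      def organic_loading_rate(vs_fed_g_per_day, reactor_volume_l):
          """OLR in g VS/(L d)."""
          return vs_fed_g_per_day / reactor_volume_l

      def methane_yield(ch4_volume_l, vs_fed_g):
          """Specific methane yield in L CH4 per g VS."""
          return ch4_volume_l / vs_fed_g

      print(organic_loading_rate(vs_fed_g_per_day=4500, reactor_volume_l=1000))  # 4.5 g VS/(L d)
      print(round(methane_yield(ch4_volume_l=725, vs_fed_g=1000), 3))            # 0.725 L CH4/g VS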

  12. Performance Support Case Studies from IBM.

    ERIC Educational Resources Information Center

    Duke-Moran, Celia; Swope, Ginger; Morariu, Janis; deKam, Peter

    1999-01-01

    Presents two case studies that show how IBM addressed performance support solutions and electronic learning. The first developed a performance support and expert coaching solution; the second applied performance support to reducing implementation time and total cost of ownership of enterprise resource planning systems. (Author/LRW)

  13. Metabolic versatility in full-scale wastewater treatment plants performing enhanced biological phosphorus removal.

    PubMed

    Lanham, Ana B; Oehmen, Adrian; Saunders, Aaron M; Carvalho, Gilda; Nielsen, Per H; Reis, Maria A M

    2013-12-01

    This study analysed the enhanced biological phosphorus removal (EBPR) microbial community and metabolic performance of five full-scale EBPR systems by using fluorescence in situ hybridisation combined with off-line batch tests fed with acetate under anaerobic-aerobic conditions. The phosphorus accumulating organisms (PAOs) in all systems were stable and showed little variability between each plant, while glycogen accumulating organisms (GAOs) were present in two of the plants. The metabolic activity of each sludge showed the frequent involvement of the anaerobic tricarboxylic acid cycle (TCA) in PAO metabolism for the anaerobic generation of reducing equivalents, in addition to the more frequently reported glycolysis pathway. Metabolic variability in the use of the two pathways was also observed, between different systems and in the same system over time. The metabolic dynamics was linked to the availability of glycogen, where a higher utilisation of the glycolysis pathway was observed in the two systems employing side-stream hydrolysis, and the TCA cycle was more active in the A2O systems. Full-scale plants that showed higher glycolysis activity also exhibited superior P removal performance, suggesting that promotion of the glycolysis pathway over the TCA cycle could be beneficial towards the optimisation of EBPR systems. PMID:24210547

  14. Century Scale Evaporation Trend: An Observational Study

    NASA Technical Reports Server (NTRS)

    Bounoui, Lahouari

    2012-01-01

    Several climate models of different complexity indicate that under increased CO2 forcing, runoff would increase faster than precipitation over land. However, observations over large U.S. watersheds indicate otherwise. This inconsistency between models and observations suggests that there may be important feedbacks between climate and the land surface unaccounted for in the present generation of models. We have analyzed century-scale observed annual runoff and precipitation time series over several United States Geological Survey hydrological units covering large forested regions of the Eastern United States not affected by irrigation. Both time series exhibit a positive long-term trend; however, in contrast to model results, these historic data records show that the rate of precipitation increase is roughly double the rate of runoff increase. We considered several hydrological processes to close the water budget and found that none of these processes acting alone could account for the total water excess generated by the observed difference between precipitation and runoff. We conclude that evaporation has increased over the period of observation and show that the increasing trend in precipitation minus runoff is correlated with the observed increase in vegetation density based on the longest available global satellite record. The increase in vegetation density has important implications for climate; it slows but does not alleviate the projected warming associated with greenhouse gas emissions.
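
    The water-budget argument above can be sketched as follows: estimate linear trends in annual precipitation (P) and runoff (R) and attribute the trend in P minus R to evapotranspiration change. The series here are synthetic, generated only to show the calculation.

      import numpy as np

      years = np.arange(1900, 2000)
      rng = np.random.default_rng(1)
      # Synthetic annual series (mm/yr) with a precipitation trend about twice the runoff trend.
      precip = 1000 + 1.0 * (years - 1900) + 20 * rng.standard_normal(years.size)
      runoff = 400 + 0.5 * (years - 1900) + 15 * rng.standard_normal(years.size)

      p_trend = np.polyfit(years, precip, 1)[0]
      r_trend = np.polyfit(years, runoff, 1)[0]
      print(f"P trend {p_trend:.2f}, R trend {r_trend:.2f}, "
            f"implied ET trend {p_trend - r_trend:.2f} mm/yr per yr")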

  15. Feed process studies: Research-Scale Melter

    SciTech Connect

    Whittington, K.F.; Seiler, D.K.; Luey, J.; Vienna, J.D.; Sliger, W.A.

    1996-09-01

    In support of a two-phase approach to privatizing the processing of hazardous and radioactive waste at Hanford, research-scale melter (RSM) experiments were conducted to determine the feed processing characteristics of two potential privatization Phase 1 high-level waste glass formulations and to determine whether increased Ag, Te, and noble metal amounts would have adverse effects. Effects of feed compositions and process conditions were examined for processing rate, cold cap behavior, off-gas, and glass properties. The two glass formulations used were NOM-2, with an adjusted waste loading (all components except silica and soda) of 25 wt%, and NOM-3 (the maximum-waste-loaded glass), with an adjusted waste loading of 30 wt%. The 25 wt% figure is the minimum required in the privatization Request for Proposal. The RSM operated for 19 days (5 runs); 1010 kg of feed was processed, producing 362 kg of glass. Parts of runs 2 and 3 were run at 10 to 30 degrees above the nominal temperature of 1150 C, with the most significant processing rate increase in run 3. Processing observations led to the choice of NOM-3 for noble metal testing in runs 4 and 5. During noble metal testing, processing rates fell 50% from the baseline. Destructive analysis showed that a layer of noble metals and noble metal oxides had settled on the floor of the melter, leading to current "channeling" which allowed the top section to cool, reducing production rates.

  16. Performance analysis of landslide early warning systems at regional scale: the EDuMaP method

    NASA Astrophysics Data System (ADS)

    Piciullo, Luca; Calvello, Michele

    2016-04-01

    Landslide early warning systems (LEWSs) reduce landslide risk by disseminating timely and meaningful warnings when the level of risk is judged intolerably high. Two categories of LEWSs can be defined on the basis of their scale of analysis: "local" systems and "regional" systems. LEWSs at regional scale (ReLEWSs) are used to assess the probability of occurrence of landslides over appropriately defined homogeneous warning zones of relevant extension, typically through the prediction and monitoring of meteorological variables, in order to give generalized warnings to the public. Despite many studies on ReLEWSs, no standard requirements exist for assessing their performance. Empirical evaluations are often carried out by simply analysing the time frames during which significant high-consequence landslides occurred in the test area. Alternatively, the performance evaluation is based on 2x2 contingency tables computed for the joint frequency distribution of landslides and alerts, both considered as dichotomous variables. In all these cases, model performance is assessed neglecting some important aspects which are peculiar to ReLEWSs, among them: the possible occurrence of multiple landslides in the warning zone; the duration of the warnings in relation to the time of occurrence of the landslides; the level of the warning issued in relation to the landslide spatial density in the warning zone; and the relative importance system managers attribute to different types of errors. An original approach, called the EDuMaP method, is proposed to assess the performance of landslide early warning models operating at regional scale. The method is composed of three main phases: Events analysis, Duration Matrix, Performance analysis. The events analysis phase focuses on the definition of landslide events (LEs) and warning events (WEs), which are derived from available landslide and warning databases according to their spatial and temporal characteristics by means of ten input parameters. The
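
    As a simplified stand-in for the duration matrix described above, the sketch below accumulates the time spent in each combination of warning level and landslide class from hourly records; the records and class thresholds are hypothetical, not part of the EDuMaP specification.

      from collections import defaultdict

      # Hypothetical hourly records: (warning level issued, landslides observed that hour).
      hours = [(0, 0), (0, 0), (1, 0), (2, 1), (2, 2), (1, 0), (0, 0), (3, 4)]

      duration = defaultdict(int)
      for level, n_landslides in hours:
          # Illustrative landslide classes: 0 = none, 1 = one or two, 2 = more than two.
          landslide_class = 0 if n_landslides == 0 else (1 if n_landslides <= 2 else 2)
          duration[(level, landslide_class)] += 1  # one hour per record

      for (level, cls), hrs in sorted(duration.items()):
          print(f"warning level {level}, landslide class {cls}: {hrs} h")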

  17. Extreme Postnatal Scaling in Bat Feeding Performance: A View of Ecomorphology from Ontogenetic and Macroevolutionary Perspectives.

    PubMed

    Santana, Sharlene E; Miller, Kimberly E

    2016-09-01

    Ecomorphology studies focus on understanding how anatomical and behavioral diversity result in differences in performance, ecology, and fitness. In mammals, the determinate growth of the skeleton entails that bite performance should change throughout ontogeny until the feeding apparatus attains its adult size and morphology. Then, interspecific differences in adult phenotypes are expected to drive food resource partitioning and patterns of lineage diversification. However, formal tests of these predictions are lacking for the majority of mammal groups, and thus our understanding of mammalian ecomorphology remains incomplete. By focusing on a fundamental measure of feeding performance, bite force, and capitalizing on the extraordinary morphological and dietary diversity of bats, we discuss how the intersection of ontogenetic and macroevolutionary changes in feeding performance may impact ecological diversity in these mammals. We integrate data on cranial morphology and bite force gathered through longitudinal studies of captive animals and comparative studies of free-ranging individuals. We demonstrate that ontogenetic trajectories and evolutionary changes in bite force are highly dependent on changes in body and head size, and that bats exhibit dramatic, allometric increases in bite force during ontogeny. Interspecific variation in bite force is highly dependent on differences in cranial morphology and function, highlighting selection for ecological specialization. While more research is needed to determine how ontogenetic changes in size and bite force specifically impact food resource use and fitness in bats, interspecific diversity in cranial morphology and bite performance seems to closely match functional differences in diet. Altogether, these results suggest direct ecomorphological relationships at ontogenetic and macroevolutionary scales in bats. PMID:27371380
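
    The allometric analysis implied above is typically a log-log regression of bite force against body mass, with a slope above the isometric expectation of roughly 0.67 (force scaling with mass to the two-thirds power) indicating positive allometry. The points below are synthetic placeholders, not bat measurements.

      import numpy as np

      mass_g = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # hypothetical body masses
      bite_force_n = np.array([1.2, 3.0, 7.5, 19.0, 47.0])    # hypothetical bite forces

      slope, intercept = np.polyfit(np.log10(mass_g), np.log10(bite_force_n), 1)
      print(f"allometric exponent ~ {slope:.2f} (isometry would be ~0.67)")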

  18. A Factor Analytic Study of the Internet Usage Scale

    ERIC Educational Resources Information Center

    Monetti, David M.; Whatley, Mark A.; Hinkle, Kerry T.; Cunningham, Kerry T.; Breneiser, Jennifer E.; Kisling, Rhea

    2011-01-01

    This study developed an Internet Usage Scale (IUS) for use with adolescent populations. The IUS is a 26-item scale that measures participants' beliefs about how their Internet usage impacts their behavior. The sample for this study consisted of 947 middle school students. An exploratory factor analysis with varimax rotation was conducted on the…

  19. Developing the Educational Belief Scale: The Validity and Reliability Study

    ERIC Educational Resources Information Center

    Yilmaz, Kursad; Altinkurt, Yahya; Cokluk, Omay

    2011-01-01

    The aim of this study is to develop a valid and reliable scale that can be used in determining the educational beliefs of teachers and prospective teachers. After steps such as obtaining expert views on the scale and evaluating its intelligibility, the measure was administered to a sample consisting of 154 teachers and 305 prospective teachers with a total number…

  20. Multisite Studies and Scaling up in Educational Research

    ERIC Educational Resources Information Center

    Harwell, Michael

    2012-01-01

    A scale-up study in education typically expands the sample of students, schools, districts, and/or practices or materials used in smaller studies in ways that build in heterogeneity. Yet surprisingly little is known about the factors that promote successful scaling up efforts in education, in large part due to the absence of empirically supported…

  1. Scale-up and advanced performance analysis of boiler combustion chambers

    SciTech Connect

    Richter, W.

    1985-12-31

    This paper discusses methods for evaluating the thermal performance of large boiler furnaces. Merits and limitations of pilot-scale testing and mathematical modeling are pointed out. Available computer models for furnace performance prediction are reviewed according to their classification into finite-difference methods and zone methods. Current state-of-the-art models for industrial application are predominantly zone methods based on advanced Monte Carlo-type techniques for the calculation of radiation heat transfer. A representative of this model type is described in more detail, together with examples of its practical application. It is also shown how pilot-scale results can be scaled up with the help of the model to predict the full-scale performance of particular boiler furnaces.

  2. The ontogenetic scaling of hydrodynamics and swimming performance in jellyfish (Aurelia aurita).

    PubMed

    McHenry, Matthew J; Jed, Jason

    2003-11-01

    It is not well understood how ontogenetic changes in the motion and morphology of aquatic animals influence the performance of swimming. The goals of the present study were to understand how changes in size, shape and behavior affect the hydrodynamics of jet propulsion in the jellyfish Aurelia aurita and to explore how such changes affect the ontogenetic scaling of swimming speed and cost of transport. We measured the kinematics of jellyfish swimming from video recordings and simulated the hydrodynamics of swimming with two computational models that calculated thrust generation by paddle and jet mechanisms. Our results suggest that thrust is generated primarily by jetting and that there is negligible thrust generation by paddling. We examined how fluid forces scaled with body mass using the jet model. Despite an ontogenetic increase in the range of motion by the bell diameter and a decrease in the height-to-diameter ratio, we found that thrust and acceleration reaction scaled with body mass as predicted by kinematic similarity. However, jellyfish decreased their pulse frequency with growth, and speed consequently scaled at a lower exponential rate than predicted by kinematic similarity. Model simulations suggest that the allometric growth in Aurelia results in swimming that is slower, but more energetically economical, than isometric growth with a prolate bell shape. The decrease in pulse frequency over ontogeny allows large Aurelia medusae to avoid a high cost of transport but generates slower swimming than if they maintained a high pulse frequency. Our findings suggest that ontogenetic change in the height-to-diameter ratio and pulse frequency of Aurelia results in swimming that is relatively moderate in speed but is energetically economical. PMID:14555752

  3. Durability study of a vehicle-scale hydrogen storage system.

    SciTech Connect

    Johnson, Terry Alan; Dedrick, Daniel E.; Behrens, Richard, Jr.

    2010-11-01

    Sandia National Laboratories has developed a vehicle-scale demonstration hydrogen storage system as part of a Work for Others project funded by General Motors. This Demonstration System was developed based on the properties and characteristics of sodium alanates which are complex metal hydrides. The technology resulting from this program was developed to enable heat and mass management during refueling and hydrogen delivery to an automotive system. During this program the Demonstration System was subjected to repeated hydriding and dehydriding cycles to enable comparison of the vehicle-scale system performance to small-scale sample data. This paper describes the experimental results of life-cycle studies of the Demonstration System. Two of the four hydrogen storage modules of the Demonstration System were used for this study. A well-controlled and repeatable sorption cycle was defined for the repeated cycling, which began after the system had already been cycled forty-one times. After the first nine repeated cycles, a significant hydrogen storage capacity loss was observed. It was suspected that the sodium alanates had been affected either morphologically or by contamination. The mechanisms leading to this initial degradation were investigated and results indicated that water and/or air contamination of the hydrogen supply may have led to oxidation of the hydride and possibly kinetic deactivation. Subsequent cycles showed continued capacity loss indicating that the mechanism of degradation was gradual and transport or kinetically limited. A materials analysis was then conducted using established methods including treatment with carbon dioxide to react with sodium oxides that may have formed. The module tubes were sectioned to examine chemical composition and morphology as a function of axial position. The results will be discussed.

  4. Evidences of Validity of a Scale for Mapping Professional as Defining Competences and Performance by Brazilian Tutors

    ERIC Educational Resources Information Center

    Coelho, Francisco Antonio, Jr.; Ferreira, Rodrigo Rezende; Paschoal, Tatiane; Faiad, Cristiane; Meneses, Paulo Murce

    2015-01-01

    The purpose of this study was twofold: to assess evidences of construct validity of the Brazilian Scale of Tutors Competences in the field of Open and Distance Learning and to examine if variables such as professional experience, perception of the student's learning performance and prior experience influence the development of technical and…

  5. The Consequences of Perfectionism Scale: Factorial Structure and Relationships with Perfectionism, Performance Perfectionism, Affect, and Depressive Symptoms

    ERIC Educational Resources Information Center

    Stoeber, Joachim; Hoyle, Azina; Last, Freyja

    2013-01-01

    This study investigated the Consequences of Perfectionism Scale (COPS) and its relationships with perfectionism, performance perfectionism, affect, and depressive symptoms in 202 university students using confirmatory factor analysis, correlations, and regression analyses. Results suggest that the COPS is a reliable and valid measure of positive…

  6. Performance of an Abbreviated Version of the Lubben Social Network Scale among Three European Community-Dwelling Older Adult Populations

    ERIC Educational Resources Information Center

    Lubben, James; Blozik, Eva; Gillmann, Gerhard; Iliffe, Steve; von Renteln-Kruse, Wolfgang; Beck, John C.; Stuck, Andreas E.

    2006-01-01

    Purpose: There is a need for valid and reliable short scales that can be used to assess social networks and social supports and to screen for social isolation in older persons. Design and Methods: The present study is a cross-national and cross-cultural evaluation of the performance of an abbreviated version of the Lubben Social Network Scale…

  7. OrigenArp Primer: How to Perform Isotopic Depletion and Decay Calculations with SCALE/ORIGEN

    SciTech Connect

    Bowman, Stephen M; Gauld, Ian C

    2010-08-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for nuclear analyses. ORIGEN-ARP is a SCALE isotopic depletion and decay analysis sequence used to perform point-depletion calculations with the well-known ORIGEN-S code using problem-dependent cross sections. Problem-dependent cross-section libraries are generated using the ARP (Automatic Rapid Processing) module using an interpolation algorithm that operates on pre-generated libraries created for a range of fuel properties and operating conditions. Methods are provided in SCALE to generate these libraries using one-, two-, and three-dimensional transport codes. The interpolation of cross sections for uranium-based fuels may be performed for the variables burnup, enrichment, and water density. An option is also available to interpolate cross sections for mixed-oxide (MOX) fuels using the variables burnup, plutonium content, plutonium isotopic vector, and water moderator density. This primer is designed to help a new user understand and use ORIGEN-ARP with the OrigenArp Windows graphical user interface in SCALE. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with nuclear depletion codes in general or with SCALE/ORIGEN-ARP in particular. The primer is based on SCALE 6 but should be applicable to earlier or later versions of SCALE. Information is included to help new users, along with several sample problems that walk the user through the different input forms and menus and illustrate the basic features. References to related documentation are provided. The primer provides a starting point for the nuclear analyst who uses SCALE/ORIGEN-ARP. Complete descriptions are provided in the SCALE documentation. Although the primer is self-contained, it is intended as a companion volume to the SCALE documentation. The SCALE Manual is
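
    The interpolation idea behind ARP described above can be illustrated with a small sketch: look up a pre-tabulated one-group cross section over an enrichment and moderator-density grid and interpolate between the tabulated points. This shows the concept only; the grid, values, and the simple nested linear interpolation are placeholders, not ARP's actual data or algorithm.

      import numpy as np

      enrichments = np.array([2.0, 3.0, 4.0, 5.0])   # wt% U-235, hypothetical grid
      densities = np.array([0.4, 0.7, 1.0])          # moderator density in g/cm3, hypothetical
      xs_table = np.array([                          # one-group cross sections (barns), placeholders
          [10.1, 10.6, 11.0],
          [11.2, 11.8, 12.3],
          [12.0, 12.7, 13.3],
          [12.6, 13.4, 14.1],
      ])

      def interp_xs(enr, dens):
          # Interpolate along density for each enrichment row, then along enrichment.
          per_row = [np.interp(dens, densities, row) for row in xs_table]
          return float(np.interp(enr, enrichments, per_row))

      print(round(interp_xs(enr=3.5, dens=0.85), 3))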

  8. Genome-Scale Studies of Aging: Challenges and Opportunities

    PubMed Central

    McCormick, Mark A; Kennedy, Brian K

    2012-01-01

    Whole-genome studies involving a phenotype of interest are increasingly prevalent, in part due to a dramatic increase in speed at which many high throughput technologies can be performed coupled to simultaneous decreases in cost. This type of genome-scale methodology has been applied to the phenotype of lifespan, as well as to whole-transcriptome changes during the aging process or in mutants affecting aging. The value of high throughput discovery-based science in this field is clearly evident, but will it yield a true systems-level understanding of the aging process? Here we review some of this work to date, focusing on recent findings and the unanswered puzzles to which they point. In this context, we also discuss recent technological advances and some of the likely future directions that they portend. PMID:23633910

  9. Including Performance Assessments in Accountability Systems: A Review of Scale-Up Efforts

    ERIC Educational Resources Information Center

    Tung, Rosann

    2010-01-01

    The purpose of this literature and field review is to understand previous efforts at scaling up performance assessments for use across districts and states. Performance assessments benefit students and teachers by providing more opportunities for students to demonstrate their knowledge and complex skills, by providing teachers with better…

  10. Evaluating Performance Measurement Systems in Nonprofit Agencies: The Program Accountability Quality Scale (PAQS).

    ERIC Educational Resources Information Center

    Poole, Dennis L.; Nelson, Joan; Carnahan, Sharon; Chepenik, Nancy G.; Tubiak, Christine

    2000-01-01

    Developed and field tested the Program Accountability Quality Scale (PAQS) on 191 program performance measurement systems developed by nonprofit agencies in central Florida. Preliminary findings indicate that the PAQS provides a structure for obtaining expert opinions, based on a theory-driven model, about the quality of proposed measurement…

  11. Factor- and Item-Level Analyses of the 38-Item Activities Scale for Kids-Performance

    ERIC Educational Resources Information Center

    Bagley, Anita M.; Gorton, George E.; Bjornson, Kristie; Bevans, Katherine; Stout, Jean L.; Narayanan, Unni; Tucker, Carole A.

    2011-01-01

    Aim: Children and adolescents highly value their ability to participate in relevant daily life and recreational activities. The Activities Scale for Kids-performance (ASKp) instrument measures the frequency of performance of 30 common childhood activities, and has been shown to be valid and reliable. A revised and expanded 38-item ASKp (ASKp38)…

  12. Performance of a pilot-scale, steam-blown, pressurized fluidized bed biomass gasifier

    NASA Astrophysics Data System (ADS)

    Sweeney, Daniel Joseph

    With the discovery of vast fossil resources, and the subsequent development of the fossil fuel and petrochemical industry, the role of biomass-based products has declined. However, concerns about the finite and decreasing amount of fossil and mineral resources, in addition to the health and climate impacts of fossil resource use, have elevated interest in innovative methods for converting renewable biomass resources into products that fit our modern lifestyle. Thermal conversion through gasification is an appealing method for utilizing biomass because it can operate on a wide variety of feedstocks at a wide range of scales, the product gas has a variety of uses (e.g., transportation fuel production, electricity production, chemicals synthesis), and, in many cases, it results in significantly lower greenhouse gas emissions. In spite of the advantages of gasification, several technical hurdles have hindered its commercial development. A number of studies have focused on laboratory-scale and atmospheric biomass gasification. However, few studies have reported on pilot-scale, woody biomass gasification under pressurized conditions. The purpose of this research was to assess the performance of a pilot-scale, steam-blown, pressurized fluidized bed biomass gasifier. The 200 kWth fluidized bed gasifier is capable of operation using solid feedstocks at feed rates up to 65 lb/hr, bed temperatures up to 1600°F, and pressures up to 8 atm. Gasifier performance was assessed under various temperature, pressure, and feedstock (untreated woody biomass, dark and medium torrefied biomass) conditions by measuring product gas yield and composition, residue (e.g., tar and char) production, and mass and energy conversion efficiencies. Elevated temperature and pressure, and feedstock pretreatment were shown to have a significant influence on gasifier operability, tar production, carbon conversion, and process efficiency. High-pressure and temperature gasification of dark torrefied biomass
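
    Performance figures such as carbon conversion and energy efficiency are typically derived from the measured gas yield and composition. The following sketch shows one standard way to estimate product-gas lower heating value and cold-gas efficiency; the heating values are approximate literature numbers, and the composition, yield, and feed LHV are invented, not results from this dissertation.

    ```python
    # Hedged sketch: product-gas lower heating value (LHV) and cold-gas efficiency
    # from dry-gas composition and yield. All numbers are illustrative assumptions.
    LHV_SPECIES = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}   # approx. MJ/Nm^3

    def gas_lhv(vol_fractions):
        """LHV of the product gas (MJ/Nm^3) from dry-gas volume fractions."""
        return sum(LHV_SPECIES.get(sp, 0.0) * x for sp, x in vol_fractions.items())

    def cold_gas_efficiency(vol_fractions, gas_yield_nm3_per_kg, feed_lhv_mj_per_kg):
        """Fraction of the feedstock's chemical energy recovered in the cold gas."""
        return gas_lhv(vol_fractions) * gas_yield_nm3_per_kg / feed_lhv_mj_per_kg

    comp = {"H2": 0.30, "CO": 0.25, "CH4": 0.08, "CO2": 0.25, "N2": 0.12}  # assumed
    print(f"gas LHV = {gas_lhv(comp):.1f} MJ/Nm^3")
    print(f"cold-gas efficiency = {cold_gas_efficiency(comp, 1.4, 18.5):.2f}")
    ```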

  13. Hover and forward flight acoustics and performance of a small-scale helicopter rotor system

    NASA Technical Reports Server (NTRS)

    Kitaplioglu, C.; Shinoda, P.

    1985-01-01

    A 2.1-m diam., 1/6-scale model helicopter main rotor was tested in hover in the test section of the NASA Ames 40- by 80-Foot Wind Tunnel. Subsequently, it was tested in forward flight in the Ames 7- by 10-Foot Wind Tunnel. The primary objective of the tests was to obtain performance and noise data on a small-scale rotor at various thrust coefficients, tip Mach numbers, and, in the latter case, various advance ratios, for comparisons with similar existing data on full-scale helicopter rotors. This comparison yielded a preliminary evaluation of the scaling of helicopter rotor performance and acoustic radiation in hover and in forward flight. Correlation between model-scale and full-scale performance and acoustics was quite good in hover. In forward flight, however, there were significant differences in both performance and acoustic characteristics. A secondary objective was to contribute to a data base that will permit the estimation of facility effects on acoustic testing.

  14. Parameter study of a vehicle-scale hydrogen storage system.

    SciTech Connect

    Johnson, Terry Alan; Kanouff, Michael P.

    2010-04-01

    Sandia National Laboratories has developed a vehicle-scale prototype hydrogen storage system as part of a Work For Others project funded by General Motors. This Demonstration System was developed using the complex metal hydride sodium alanate. For the current work, we have continued our evaluation of the GM Demonstration System to provide learning to DOE's hydrogen storage programs, specifically the new Hydrogen Storage Engineering Center of Excellence. Baseline refueling data during testing for GM was taken over a narrow range of optimized parameter values. Further testing was conducted over a broader range. Parameters considered included hydrogen pressure and coolant flow rate. This data confirmed the choice of design pressure of the Demonstration System, but indicated that the system was over-designed for cooling. Baseline hydrogen delivery data was insufficient to map out delivery rate as a function of temperature and capacity for the full-scale system. A more rigorous matrix of tests was performed to better define delivery capabilities. These studies were compared with 1-D and 2-D coupled multi-physics modeling results. The relative merits of these models are discussed along with opportunities for improved efficiency or reduced mass and volume.

  15. Prenatal predictors of performance on the Brazelton Neonatal Behavioral Assessment Scale.

    PubMed

    Oyemade, U J; Cole, O J; Johnson, A A; Knight, E M; Westney, O E; Laryea, H; Hill, G; Cannon, E; Fomufod, A; Westney, L S

    1994-06-01

    The present study presents a prospective analysis of the interrelationships among prenatal medical, nutritional (dietary and biochemical), and behavioral determinants of Brazelton performance. Previous researchers (Scanlon 1984, Lester and Brazelton 1984) have raised questions regarding the relative roles of medical factors, nutrition, ponderal index, and other behavioral factors in neonatal performance on the BNBAS. The subjects were 467 predominantly Black nulliparous women in Washington, D.C., enrolled in the study by the 20th week of gestation, and their neonates. Results of univariate tests of significant (P < 0.01) association between independent variables and Brazelton clusters from scores measured on day 2 are presented. The 26 behavioral items were summarized into 6 clusters, as done in similar studies, by linearizing measures made on a curvilinear scale and taking the mean. The 6 behavioral clusters are habituation, motor, orientation, range of states, regulation of states, and autonomic. Results of 16 reflex tests are used to define a seventh reflex cluster. Independent variables included demographic, lifestyle, nutritional, medical, ponderal index, and psychosocial measures. Several psychosocial variables, including stress, anxiety, and partner interaction, were associated with the behavioral clusters. Nutritional variables were associated with BNBAS habituation, motor, orientation, reflex score, and autonomic responses. An analysis of covariance was performed to determine the joint effect of the above variables on the variation in Brazelton performance on the seven cluster scores. Five of the seven models (orientation, motor, range of states, autonomic, and reflex scores) were significant predictors of the outcome variables. PMID:8201439

  16. A Review. A Criterion-related Study of Three Sets of Rating Scales Used for Measuring and Evaluating the Instrumental Achievement of First and Second Year Clarinet Students.

    ERIC Educational Resources Information Center

    Radocy, Rudolf E.

    1987-01-01

    Reviews a study that examined the validity of different rating scales for measuring instrumental achievement with the clarinet. Rejects the contention that the new rating scales are superior to the Clarinet Performance Rating Scale. (BSR)

  17. Small Scale Mixing Demonstration Batch Transfer and Sampling Performance of Simulated HLW - 12307

    SciTech Connect

    Jensen, Jesse; Townson, Paul; Vanatta, Matt

    2012-07-01

    The ability to effectively mix, sample, certify, and deliver consistent batches of High Level Waste (HLW) feed from the Hanford Double Shell Tanks (DST) to the Waste Treatment Plant (WTP) has been recognized as a significant mission risk with potential to impact mission length and the quantity of HLW glass produced. At the end of 2009 DOE's Tank Operations Contractor, Washington River Protection Solutions (WRPS), awarded a contract to EnergySolutions to design, fabricate and operate a demonstration platform called the Small Scale Mixing Demonstration (SSMD) to establish pre-transfer sampling capacity, and batch transfer performance data at two different scales. This data will be used to examine the baseline capacity for a tank mixed via rotational jet mixers to transfer consistent or bounding batches, and provide scale up information to predict full scale operational performance. This information will then in turn be used to define the baseline capacity of such a system to transfer and sample batches sent to WTP. The Small Scale Mixing Demonstration (SSMD) platform consists of 43'' and 120'' diameter clear acrylic test vessels, each equipped with two scaled jet mixer pump assemblies, and all supporting vessels, controls, services, and simulant make up facilities. All tank internals have been modeled including the air lift circulators (ALCs), the steam heating coil, and the radius between the wall and floor. The test vessels are set up to simulate the transfer of HLW out of a mixed tank, and collect a pre-transfer sample in a manner similar to the proposed baseline configuration. The collected material is submitted to an NQA-1 laboratory for chemical analysis. Previous work has been done to assess tank mixing performance at both scales. This work involved a combination of unique instruments to understand the three dimensional distribution of solids using a combination of Coriolis meter measurements, in situ chord length distribution measurements, and electro

  18. Wyoming Social Studies Content and Performance Standards.

    ERIC Educational Resources Information Center

    Wyoming State Dept. of Education, Cheyenne.

    The Wyoming Social Studies Content and Performance Standards were developed in the recognition that social studies is the integrated study of the social sciences and humanities to promote civic competence. The mission of social studies is to help young people develop the ability to make informed and reasoned decisions as citizens of a culturally…

  19. Influence of particle size on performance of a pilot-scale fixed-bed gasification system.

    PubMed

    Yin, Renzhan; Liu, Ronghou; Wu, Jinkai; Wu, Xiaowu; Sun, Chen; Wu, Ceng

    2012-09-01

    The effect of particle size on the gasification performance of a pilot-scale (25 kg/h) downdraft fixed bed gasification system was investigated using prunings from peach trees at five different size fractions (below 1, 1-2, 2-4, 4-6 and 6-8 cm). The gas and hydrocarbon compositions were analyzed by gas chromatography (GC) and gas chromatography/mass spectrometry (GC-MS), respectively. With increasing particle size, gas yield increased while tar and dust content decreased. The lower heating value of the gas decreased slightly with particle size. At a smaller particle size, more hydrocarbons were detected in the producer gas. Hydrogen and carbon dioxide contents increased with the decrease in particle size, reaching 16.09% and 14.36% at particle size below 1 cm, respectively. Prunings with a particle size of 1-2 cm were favorable for gasification in the downdraft gasifier used in this study. PMID:22728176

  20. Experimental performance of a piston expander in a small- scale organic Rankine cycle

    NASA Astrophysics Data System (ADS)

    Oudkerk, J. F.; Dickes, R.; Dumont, O.; Lemort, V.

    2015-08-01

    Volumetric expanders are suitable for a growing number of applications in the field of micro- and small-scale power systems, such as waste heat recovery or solar energy. This paper presents an experimental study carried out on a swash-plate piston expander. The expander was integrated into an ORC test bench using R245fa. The performance is evaluated in terms of isentropic efficiency and filling factor. The maximum efficiency and power reached are 53% and 2 kW, respectively. In-cylinder pressure measurements allow the mechanical efficiency to be computed and the P-V diagram to be drawn. A semi-empirical simulation model is then proposed, calibrated, and used to analyse the different sources of losses.
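
    For reference, the two metrics named here are usually defined as shaft power over isentropic expansion power (isentropic efficiency) and measured mass flow over the theoretically swept mass flow (filling factor). The sketch below evaluates those definitions with invented R245fa-like numbers; it is not the authors' model or data.

    ```python
    # Hedged sketch of the two metrics, evaluated with invented R245fa-like numbers.
    def isentropic_efficiency(w_shaft_kw, mdot_kg_s, h_in_kj_kg, h_out_is_kj_kg):
        """Shaft power divided by the isentropic expansion power."""
        return w_shaft_kw / (mdot_kg_s * (h_in_kj_kg - h_out_is_kj_kg))

    def filling_factor(mdot_kg_s, swept_volume_m3, speed_rev_s, rho_supply_kg_m3):
        """Measured mass flow divided by the theoretical swept mass flow."""
        return mdot_kg_s / (swept_volume_m3 * speed_rev_s * rho_supply_kg_m3)

    print(f"eta_s = {isentropic_efficiency(2.0, 0.11, 465.0, 430.0):.2f}")
    print(f"FF    = {filling_factor(0.11, 200e-6, 25.0, 45.0):.2f}")
    ```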

  1. Multi-scale investigation of tensile creep of ultra-high performance concrete for bridge applications

    NASA Astrophysics Data System (ADS)

    Garas Yanni, Victor Youssef

    Ultra-high performance concrete (UHPC) is a relatively new generation of concretes optimized at the nano and micro-scales to provide superior mechanical and durability properties compared to conventional and high performance concretes. Improvements in UHPC are achieved through: limiting the water-to-cementitious materials ratio (i.e., w/cm ≤ 0.20), optimizing particle packing, eliminating coarse aggregate, using specialized materials, and implementing high temperature and high pressure curing regimes. In addition, randomly dispersed short fibers are typically added to enhance the material's tensile and flexural strength, ductility, and toughness. There is a specific interest in using UHPC for precast prestressed bridge girders because it has the potential to reduce maintenance costs associated with steel and conventional concrete girders, replace functionally obsolete or structurally deficient steel girders without increasing the weight or the depth of the girder, and increase bridge durability to between 75 and 100 years. UHPC girder construction differs from that of conventional reinforced concrete in that UHPC may not need transverse reinforcement due to the high tensile and shear strengths of the material. Before bridge designers specify such girders without using shear reinforcement, the long-term tensile performance of the material must be characterized. This multi-scale study provided new data and understanding of the long-term tensile performance of UHPC by assessing the effect of thermal treatment, fiber content, and stress level on the tensile creep in a large-scale study, and by characterizing the fiber-cementitious matrix interface at different curing regimes through nanoindentation and scanning electron microscopy (SEM) in a nano/micro-scale study. Tensile creep of UHPC was more sensitive to the investigated parameters than tensile strength. Thermal treatment decreased tensile creep by about 60% after 1 year. Results suggested the possibility of

  2. Evaluation of the pressure ulcers risk scales with critically ill patients: a prospective cohort study 1

    PubMed Central

    Borghardt, Andressa Tomazini; do Prado, Thiago Nascimento; de Araújo, Thiago Moura; Rogenski, Noemi Marisa Brunet; Bringuente, Maria Edla de Oliveira

    2015-01-01

    AIMS: to evaluate the accuracy of the Braden and Waterlow risk assessment scales in critically ill inpatients. METHOD: this prospective cohort study, with 55 patients in intensive care units, was performed through evaluation of sociodemographic and clinical variables, through the application of the scales (Braden and Waterlow) upon admission and every 48 hours; and through the evaluation and classification of the ulcers into categories. RESULTS: the pressure ulcer incidence was 30.9%, with the Braden and Waterlow scales presenting high sensitivity (41% and 71%) and low specificity (21% and 47%) respectively in the three evaluations. The cut off scores found in the first, second and third evaluations were 12, 12 and 11 in the Braden scale, and 16, 15 and 14 in the Waterlow scale. CONCLUSION: the Braden scale was shown to be a good screening instrument, and the Waterlow scale proved to have better predictive power. PMID:25806628
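
    Sensitivity and specificity at a given cut-off score follow directly from the 2x2 classification of flagged versus unflagged patients against observed ulcer development. A minimal sketch with invented Braden-style scores (lower score = higher risk) is shown below; it is not the study's data.

    ```python
    # Hedged sketch: sensitivity and specificity at a risk-scale cutoff. Scores and
    # outcomes are invented; on the Braden scale a LOWER score means HIGHER risk,
    # so a patient is flagged "at risk" when score <= cutoff.
    def sens_spec(scores, developed_ulcer, cutoff):
        tp = sum(s <= cutoff and u for s, u in zip(scores, developed_ulcer))
        fn = sum(s > cutoff and u for s, u in zip(scores, developed_ulcer))
        tn = sum(s > cutoff and not u for s, u in zip(scores, developed_ulcer))
        fp = sum(s <= cutoff and not u for s, u in zip(scores, developed_ulcer))
        return tp / (tp + fn), tn / (tn + fp)

    scores = [10, 12, 14, 16, 11, 18, 13, 15]                 # assumed Braden scores
    ulcer = [True, True, False, False, True, False, False, True]
    sens, spec = sens_spec(scores, ulcer, cutoff=12)
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
    ```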

  3. Effects of scaling on the performance of magnetoplasmadynamic thrusters. Engineer's thesis

    SciTech Connect

    Schmidt, W.M.

    1989-06-01

    A combined theoretical and empirical numerical model was developed which predicts the performance of continuous electrode coaxial magnetoplasmadynamic thrusters as a function of thruster dimensions, mass flow rate, and input current. This model was used to predict the effects of scaling on these thrusters. The model predicts that for scaling factors down to one-half, relations can be found relating the performance of one thruster to another. The model was used to examine these relationships for four different thruster configurations over a broad range of operating currents. The thrusters examined consisted of two geometries and their half scale counterparts. A conclusion from the analysis is that scaling down the size of the thruster by 50% can reduce the total power input by 30% to 40% at comparable efficiencies. However, this is at the cost of increasing the specific impulse by a factor of two which may render the thruster inappropriate for the intended missions.

  4. The Theoretical Orientation Profile Scale-Revised: A Validation Study.

    ERIC Educational Resources Information Center

    Worthington, Roger L.; Dillon, Frank R.

    2003-01-01

    This study supported evidence of reliability and validity of the Theoretical Orientation Profile Scale-Revised (TOPS-R) scores. The TOPS-R was designed to measure theoretical orientation among counselors and trainees. Factor analysis yielded a 6-factor solution accounting for 87.5% of the total variance in the scale. The 6 factors corresponded to…

  5. The Culturally Responsive Teacher Preparedness Scale: An Exploratory Study

    ERIC Educational Resources Information Center

    Hsiao, Yun-Ju

    2015-01-01

    The purpose of this study was to investigate the competencies of culturally responsive teaching and construct a Culturally Responsive Teacher Preparedness Scale (CRTPS) for the use of teacher preparation programs and preservice teachers. Competencies listed in the scale were identified through literature reviews and input from experts. The…

  6. Homework Purpose Scale for High School Students: A Validation Study

    ERIC Educational Resources Information Center

    Xu, Jianzhong

    2010-01-01

    The purpose of this study is to test the validity of scores on the Homework Purpose Scale using 681 rural and 306 urban high school students. First, confirmatory factor analysis was conducted on the rural sample. The results reveal that the Homework Purpose Scale comprises three separate yet related factors, including Learning-Oriented Reasons,…

  7. Surfactant studies for bench-scale operation

    NASA Technical Reports Server (NTRS)

    Hickey, Gregory S.; Sharma, Pramod K.

    1992-01-01

    A phase 2 study was initiated to investigate surfactant-assisted coal liquefaction, with the objective of quantifying the enhancement in liquid yields and product quality. This publication covers the first quarter of work. The major accomplishments were: the refurbishment of the high-pressure, high-temperature reactor autoclave, the completion of four coal liquefaction runs with Pittsburgh #8 coal, two each with and without sodium lignosulfonate surfactant, and the development of an analysis scheme for the product liquid filtrate and filter cake. Initial results at low reactor temperatures show that the addition of the surfactant produces an improvement in conversion yields and an increase in lighter boiling point fractions for the filtrate.

  8. Surfactant studies for bench-scale operation

    NASA Technical Reports Server (NTRS)

    Hickey, Gregory S.; Sharma, Pramod K.

    1993-01-01

    A phase 2 study has been initiated to investigate surfactant-assisted coal liquefaction, with the objective of quantifying the enhancement in liquid yields and product quality. This report covers the second quarter of work. The major accomplishments were: completion of coal liquefaction autoclave reactor runs with Illinois number 6 coal at processing temperatures of 300, 325, and 350 C, and pressures of 1800 psig; analysis of the filter cake and the filtrate obtained from the treated slurry in each run; and correlation of the coal conversions and the liquid yield quality to the surfactant concentration. An increase in coal conversions and upgrading of the liquid product quality due to surfactant addition was observed for all runs.

  9. Why does offspring size affect performance? Integrating metabolic scaling with life-history theory.

    PubMed

    Pettersen, Amanda K; White, Craig R; Marshall, Dustin J

    2015-11-22

    Within species, larger offspring typically outperform smaller offspring. While the relationship between offspring size and performance is ubiquitous, the cause of this relationship remains elusive. By linking metabolic and life-history theory, we provide a general explanation for why larger offspring perform better than smaller offspring. Using high-throughput respirometry arrays, we link metabolic rate to offspring size in two species of marine bryozoan. We found that metabolism scales allometrically with offspring size in both species: while larger offspring use absolutely more energy than smaller offspring, larger offspring use proportionally less of their maternally derived energy throughout the dependent, non-feeding phase. The increased metabolic efficiency of larger offspring while dependent on maternal investment may explain offspring size effects-larger offspring reach nutritional independence (feed for themselves) with a higher proportion of energy relative to structure than smaller offspring. These findings offer a potentially universal explanation for why larger offspring tend to perform better than smaller offspring but studies on other taxa are needed. PMID:26559952
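
    The key point is that an allometric exponent below one makes absolute metabolic rate rise with offspring size while the mass-specific rate falls. A tiny sketch with invented coefficients illustrates this:

    ```python
    # Hedged sketch: allometric scaling R = a * m**b with b < 1 (coefficients invented).
    a, b = 0.5, 0.75

    def metabolic_rate(mass):
        return a * mass ** b            # absolute rate rises with offspring size...

    def mass_specific_rate(mass):
        return a * mass ** (b - 1.0)    # ...but the per-unit-mass rate falls

    for m in (1.0, 2.0, 4.0):
        print(f"mass {m:.0f}: rate {metabolic_rate(m):.2f}, "
              f"per-mass rate {mass_specific_rate(m):.2f}")
    ```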

  10. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and could be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
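
    In general terms, subscription-based event filtering means forwarding an event only to consumers whose registered predicates match it, which is what reduces monitoring traffic. The sketch below is a generic illustration of that idea, not the architecture or interfaces described in this report.

    ```python
    # Hedged sketch of predicate-based event filtering: each subscriber registers a
    # predicate, and only matching events are forwarded to it. Generic illustration.
    class EventFilter:
        def __init__(self):
            self._subscriptions = []   # list of (predicate, handler) pairs

        def subscribe(self, predicate, handler):
            self._subscriptions.append((predicate, handler))

        def publish(self, event):
            # Forward the event only to subscribers whose predicate matches,
            # reducing the monitoring traffic that reaches management tools.
            for predicate, handler in self._subscriptions:
                if predicate(event):
                    handler(event)

    monitor = EventFilter()
    # A debugging tool only wants error events.
    monitor.subscribe(lambda e: e.get("severity") == "error",
                      lambda e: print("debugger notified:", e))

    monitor.publish({"node": 17, "severity": "info", "msg": "heartbeat"})   # filtered out
    monitor.publish({"node": 42, "severity": "error", "msg": "timeout"})    # delivered
    ```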

  11. Measuring and tuning energy efficiency on large scale high performance computing platforms.

    SciTech Connect

    Laros, James H., III

    2011-08-01

    Recognition of the importance of power in the field of High Performance Computing, whether it be as an obstacle, expense or design consideration, has never been greater and more pervasive. While research has been conducted on many related aspects, there is a stark absence of work focused on large scale High Performance Computing. Part of the reason is the lack of measurement capability currently available on small or large platforms. Typically, research is conducted using coarse methods of measurement such as inserting a power meter between the power source and the platform, or fine grained measurements using custom instrumented boards (with obvious limitations in scale). To collect the measurements necessary to analyze real scientific computing applications at large scale, an in-situ measurement capability must exist on a large scale capability class platform. In response to this challenge, we exploit the unique power measurement capabilities of the Cray XT architecture to gain an understanding of power use and the effects of tuning. We apply these capabilities at the operating system level by deterministically halting cores when idle. At the application level, we gain an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale (thousands of nodes), while simultaneously collecting current and voltage measurements on the hosting nodes. We examine the effects of both CPU and network bandwidth tuning and demonstrate energy savings opportunities of up to 39% with little or no impact on run-time performance. Capturing scale effects in our experimental results was key. Our results provide strong evidence that next generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components, such as the network, to achieve energy efficient performance.

  12. Software for large scale tracking studies

    SciTech Connect

    Niederer, J.

    1984-05-01

    Over the past few years, Brookhaven accelerator physicists have been adapting particle tracking programs in planning local storage rings, and lately for SSC reference designs. In addition, the Laboratory is actively considering upgrades to its AGS capabilities aimed at higher proton intensity, polarized proton beams, and heavy ion acceleration. Further activity concerns heavy ion transfer, a proposed booster, and most recently design studies for a heavy ion collider to join to this complex. Circumstances have thus encouraged a search for common features among design and modeling programs and their data, and the corresponding controls efforts among present and tentative machines. Using a version of PATRICIA with nonlinear forces as a vehicle, we have experimented with formal ways to describe accelerator lattice problems to computers as well as to speed up the calculations for large storage ring models. Code treated by straightforward reorganization has served for SSC explorations. The representation work has led to a relational data base centered program, LILA, which has desirable properties for dealing with the many thousands of rapidly changing variables in tracking and other model programs. 13 references.

  13. Interpreting 12th-Graders' NAEP-Scaled Mathematics Performance Using High School Predictors and Postsecondary Outcomes from the National Education Longitudinal Study of 1988 (NELS:88). Statistical Analysis Report. NCES 2007-328

    ERIC Educational Resources Information Center

    Scott, Leslie A.; Ingels, Steven J.

    2007-01-01

    The search for an understandable reporting format has led the National Assessment Governing Board to explore the possibility of measuring and interpreting student performance on the 12th-grade National Assessment of Educational Progress (NAEP), the Nation's Report Card, in terms of readiness for college, the workplace, and the military. This…

  14. Determining the Influence of Groundwater Composition on the Performance of Arsenic Adsorption Columns Using Rapid Small-Scale Column Tests

    NASA Astrophysics Data System (ADS)

    Aragon, A. R.; Siegel, M.

    2004-12-01

    The USEPA has established a more stringent drinking water standard for arsenic, reducing the maximum contaminant level (MCL) from 50 μg/L to 10 μg/L. This will affect many small communities in the US that lack the appropriate treatment infrastructure and funding to reduce arsenic to such levels. For such communities, adsorption systems are the preferred technology based on ease of operation and relatively lower costs. The performance of adsorption media for the removal of arsenic from drinking water is dependent on site-specific water quality. At certain concentrations, co-occurring solutes will compete effectively with arsenic for sorption sites, potentially reducing the sorption capacity of the media. Due to the site-specific nature of water quality and variations in media properties, pilot scale studies are typically carried out to ensure that a proposed treatment technique is cost effective before installation of a full-scale system. Sandia National Laboratories is currently developing an approach to utilize rapid small-scale columns in lieu of pilot columns to test innovative technologies that could significantly reduce the cost of treatment in small communities. Rapid small-scale column tests (RSSCTs) were developed to predict full-scale treatment of organic contaminants by adsorption onto granular activated carbon (GAC). This process greatly reduced the time and costs required to verify performance of GAC adsorption columns. In this study, the RSSCT methodology is used to predict the removal of inorganic arsenic using mixed metal oxyhydroxide adsorption media. The media are engineered and synthesized from materials that control arsenic behavior in natural and disturbed systems. We describe the underlying theory and application of RSSCTs for the performance evaluation of novel media in several groundwater compositions. Results of small-scale laboratory columns are being used to predict the performance of pilot-scale systems and ultimately to design full-scale
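
    RSSCT design rests on scaling relations that keep the small column kinetically similar to the full-scale column; a commonly cited form (the constant- vs. proportional-diffusivity designs from the GAC literature) scales the empty-bed contact time by the ratio of media particle diameters. The sketch below applies that general relation with invented numbers and is not the specific design used in this study.

    ```python
    # Hedged sketch: the empty-bed-contact-time (EBCT) scaling relation commonly used
    # to design RSSCTs from a full-scale column. All numbers are assumptions.
    def rssct_ebct(ebct_full_min, d_small_mm, d_full_mm, diffusivity_exponent=0.0):
        """EBCT of the small column.

        diffusivity_exponent = 0 gives the 'constant diffusivity' design (EBCT scales
        with the square of the particle-diameter ratio); 1 gives the 'proportional
        diffusivity' design (linear scaling).
        """
        ratio = d_small_mm / d_full_mm
        return ebct_full_min * ratio ** (2.0 - diffusivity_exponent)

    # e.g. a 5 min full-scale EBCT with 1.5 mm media, scaled to 0.2 mm crushed media:
    print(f"constant-diffusivity design:     {rssct_ebct(5.0, 0.2, 1.5, 0.0):.2f} min")
    print(f"proportional-diffusivity design: {rssct_ebct(5.0, 0.2, 1.5, 1.0):.2f} min")
    ```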

  15. Performance studies of the parallel VIM code

    SciTech Connect

    Shi, B.; Blomquist, R.N.

    1996-05-01

    In this paper, the authors evaluate the performance of the parallel version of the VIM Monte Carlo code on the IBM SPx at the High Performance Computing Research Facility at ANL. Three test problems with contrasting computational characteristics were used to assess effects on performance. A statistical method for estimating the inefficiencies due to load imbalance and communication is also introduced. VIM is a large scale continuous energy Monte Carlo radiation transport program and was parallelized using history partitioning, the master/worker approach, and the p4 message passing library. Dynamic load balancing is accomplished when the master processor assigns chunks of histories to workers that have completed a previously assigned task, accommodating variations in the lengths of histories, processor speeds, and worker loads. At the end of each batch (generation), the fission sites and tallies are sent from each worker to the master process, contributing to the parallel inefficiency. All communications are between master and workers, and are serial. The SPx is a scalable 128-node parallel supercomputer with high-performance Omega switches of 63 μs latency and 35 MBytes/sec bandwidth. For uniform and reproducible performance, they used only the 120 identical regular processors (IBM RS/6000) and excluded the remaining eight planet nodes, which may be loaded by others' jobs.

  16. Performance Testing of Web Map Services in Three Dimensions - X, Y, Scale

    NASA Astrophysics Data System (ADS)

    Cibulka, Dušan

    2013-03-01

    The paper deals with the performance testing of web mapping services. It describes map service tests that make it possible to determine the performance characteristics of a map service depending on the location and scale of the map. The implementation of the test is tailored to the Web Map Service specifications provided by the Open Geospatial Consortium. The practical experiment consists of testing a map composition built from OpenStreetMap data for the area of southwestern Slovakia. These tests permit checking the performance of services at different positions, verifying the configuration of services, the composition of a map, and the visualization of geodata. The paper also highlights the fact that it is not sufficient to interpret a map service's performance only with conventional indicators. A map service's performance should be linked to information about the map's scale and location.
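
    A test of this kind boils down to issuing WMS GetMap requests over a grid of bounding boxes and scales and recording the response times. The sketch below shows the idea; the endpoint URL and layer name are placeholders, not the service or test harness used in the paper.

    ```python
    # Hedged sketch: timing WMS GetMap requests at different locations and scales.
    # The endpoint and layer name are placeholders, not the service used in the paper.
    import time
    import requests

    WMS_URL = "https://example.org/geoserver/wms"        # hypothetical endpoint

    def time_getmap(bbox, width=256, height=256, layer="osm:roads"):
        """Return the response time (s) of a single WMS 1.1.1 GetMap request."""
        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": layer, "STYLES": "", "SRS": "EPSG:4326",
            "BBOX": ",".join(map(str, bbox)),
            "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
        }
        start = time.perf_counter()
        requests.get(WMS_URL, params=params, timeout=30).raise_for_status()
        return time.perf_counter() - start

    # Same pixel size but different extents = different map scales at one location.
    for extent in (0.05, 0.5, 5.0):
        bbox = (17.0, 48.0, 17.0 + extent, 48.0 + extent)   # roughly SW Slovakia
        print(extent, f"{time_getmap(bbox):.3f} s")
    ```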

  17. Standardization Study of Internet Addiction Improvement Motivation Scale

    PubMed Central

    Park, Jae Woo; Park, Kee Hwan; Lee, In Jae; Kwon, Min

    2012-01-01

    Objective The purpose of this study was to develop a scale to measure motivation to improve Internet addiction. Motivation is known to be important to treat Internet addiction successfully. The reliability of the scale was assessed, and its concurrent validity was evaluated. Methods Ninety-two adolescents participated in this study. The basic demographic characteristics were recorded and the Korean version of the Stages of Readiness for Change and Eagerness for Treatment Scale for Internet Addiction (K-SOCRATES-I) was administered. Subsequently, the Internet Addiction Improvement Motivation Scale was developed using 10 questions based on the theory of motivation enhancement therapy and its precursor version designed for smoking cessation. Results The motivation scale was composed of three subscales through factor analysis; each subscale had an adequate degree of reliability. In addition, the motivation scale had a high degree of validity based on its significant correlation with the K-SOCRATES-I. A cut-off score, which can be used to screen out individuals with low motivation, was suggested. Conclusion The Internet Addiction Improvement Motivation Scale, composed of 10 questions developed in this study, was deemed a highly reliable and valid scale to measure a respondent's motivation to be treated for Internet addiction. PMID:23251202

  18. Influence of time scale on performance of a psychrometric energy balance method to estimate precipitation phase

    NASA Astrophysics Data System (ADS)

    Harder, P.; Pomeroy, J. W.

    2012-12-01

    Precipitation phase determination is fundamental to estimating catchment hydrological response to precipitation in cold regions and is especially variable over time and space in mountains. Hydrological methods to estimate phase are predominantly calibrated, depend on air temperature and use daily time steps. Air temperature is not physically related to phase and precipitation events are very dynamic, adding significant uncertainty to the use of daily air temperature indices to estimate phase. Data for this study comes from high quality, high temporal resolution precipitation phase and meteorological observations at multiple elevations in a small Canadian Rockies catchment, the Marmot Creek Research Basin, from 2005 to 2012. The psychrometric energy balance of a falling hydrometeor, requiring air temperature and humidity observations, was employed to examine precipitation phase with respect to meteorological conditions via calculation of a hydrometeor temperature. The hydrometeor temperature-precipitation phase relationship was used to quantify temporal scaling in phase observations and to develop a method to estimate precipitation phase. Temporal scaling results show that the transition range of the distribution of hydrometeor temperatures associated with mixed rainfall and snowfall decreases with decreasing time interval. The amount of precipitation also has an influence as larger events lead to smaller transition ranges across all time scales. The uncertainty of the relationship between the hydrometeor temperature and phase was quantified and degrades significantly with an increase in time interval. The errors associated with the 15 minute and hourly intervals are small. Comparisons with other methods indicate that the psychrometric energy balance method performs much better than air temperature methods and that this improvement increases with decreasing time interval. These findings suggest that the physically based psychrometric method, employed on sub
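
    The hydrometeor temperature in a psychrometric approach is essentially a wet-bulb-type temperature obtained by balancing latent and sensible exchange at the falling particle's surface. The sketch below iterates a standard psychrometric relation with a Magnus-type saturation vapour pressure formula and an approximate psychrometric constant; it is a simplified stand-in for, not a reproduction of, the authors' formulation.

    ```python
    # Hedged sketch: a wet-bulb-type "hydrometeor temperature" from air temperature
    # and relative humidity, with phase assigned by a simple threshold. This is a
    # simplified stand-in for the psychrometric energy balance, not the paper's code.
    import math

    def e_sat(t_c):
        """Saturation vapour pressure (kPa), Magnus-type formula over water."""
        return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

    def hydrometeor_temperature(t_air_c, rh_frac, gamma=0.066):
        """Solve e_sat(Ti) - e_air = -gamma * (Ti - T_air) by bisection.

        gamma (kPa/K) is an approximate psychrometric 'constant' near sea level.
        """
        e_air = rh_frac * e_sat(t_air_c)
        f = lambda ti: e_sat(ti) - e_air + gamma * (ti - t_air_c)
        lo, hi = t_air_c - 40.0, t_air_c          # f(lo) < 0 <= f(hi)
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        return 0.5 * (lo + hi)

    t_air, rh = 1.5, 0.60                         # assumed observation
    ti = hydrometeor_temperature(t_air, rh)
    print(f"Ti = {ti:.2f} C ->", "snow" if ti < 0.0 else "rain")
    ```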

  19. Laboratory Performance Evaluation of Residential Scale Gas Engine Driven Heat Pump

    SciTech Connect

    Abu-Heiba, Ahmad; Mehdizadeh Momen, Ayyoub; Mahderekal, Dr. Isaac

    2016-01-01

    Building space cooling is, and until 2040 is expected to continue to be, the single largest use of electricity in the residential sector in the United States (EIA Energy Outlook 2015). Increases in electric-grid peak demand lead to higher electricity prices, system inefficiencies, power quality problems, and even failures. Thermally-activated systems, such as the gas engine-driven heat pump (GHP), can reduce peak demand. This study describes the performance of a residential scale GHP. It was developed as part of a cooperative research and development agreement (CRADA), authorized by the Department of Energy (DOE), between Oak Ridge National Laboratory (ORNL) and Southwest Gas. Results showed the GHP produced 16.5 kW (4.7 RT) of cooling capacity at the 35°C (95°F) rating condition with a gas coefficient of performance (COP) of 0.99. In heating, the GHP produced 20.2 kW (5.75 RT) with a gas COP of 1.33. The study also discusses other benefits and challenges facing GHP technology, such as cost, reliability, and noise.
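
    Gas COP is simply useful thermal output divided by the fuel energy input rate, so the quoted capacities and COPs imply the approximate gas input in each mode. A minimal sketch of that arithmetic (the fuel inputs are back-calculated, not measured values from the report):

    ```python
    # Hedged sketch: gas COP = useful thermal output / fuel energy input rate, so the
    # quoted capacities and COPs imply the approximate gas input (back-calculated,
    # not measured values from the report).
    def implied_fuel_input_kw(thermal_output_kw, gas_cop):
        return thermal_output_kw / gas_cop

    print(f"cooling: {implied_fuel_input_kw(16.5, 0.99):.1f} kW gas input")
    print(f"heating: {implied_fuel_input_kw(20.2, 1.33):.1f} kW gas input")
    ```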

  20. Bioabsorbable fish scale for the internal fixation of fracture: a preliminary study.

    PubMed

    Chou, Cheng-Hung; Chen, Yong-Guei; Lin, Chien-Chen; Lin, Shang-Ming; Yang, Kai-Chiang; Chang, Shih-Hsin

    2014-09-01

    Fish scales, which consist of type I collagen and hydroxyapatite (HA), were used to fabricate a bioabsorbable bone pin in this study. Fresh fish scales were decellularized and characterized to provide higher biocompatibility. The mechanical properties of fish scales were tested, and the microstructure of an acellular fish scale was examined. The growth curve of a myoblastic cell line (C2C12), which was cultured on the acellular fish scales, implied biocompatibility in vitro, and the morphology of the cells cultured on the scales was observed using scanning electron microscopy (SEM). A bone pin made of decellularized fish scales was used for the internal fixation of femur fractures in New Zealand rabbits. Periodic X-ray evaluations were obtained, and histologic examinations were performed postoperatively. The present results show good cell growth on decellularized fish scales, implying great biocompatibility in vitro. Using SEM, the cell morphology revealed great adhesion on a native, layered collagen structure. The Young's modulus was 332 ± 50.4 MPa and the tensile strength was 34.4 ± 6.9 MPa for the decellularized fish scales. Animal studies revealed that a fish-scale-derived bone pin improved the healing of bone fractures and degraded with time. After an 8-week implantation, the bone pin integrated with the adjacent tissue, and new extracellular matrix was synthesized around the implant. Our results proved that fish-scale-derived bone pins are a promising implant material for bone healing and clinical applications. PMID:25211643

  1. Analysis on the detection performance of BOTDR in small-scale precision engineering

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Luan, Lijun

    2013-12-01

    In this thesis, the authors discuss the detection performance of BOTDR, based on Brillouin scattered light, for small-scale precision engineering on the basis of experiments. The authors made measurements using a traditional strain distribution gauge and the optical fiber scattering light shift equipment AQ8603 and obtained two sets of results. After comparing and analyzing the data, they concluded that BOTDR technology is not suitable for small-scale precision engineering. The wiring methods and their effects on detection performance are also discussed in this thesis.

  2. The development of a facility for full-scale testing of airfoil performance in simulated rain

    NASA Technical Reports Server (NTRS)

    Taylor, John T.; Moore, Cadd T., III; Campbell, Bryan A.; Melson, W. Edward, Jr.

    1988-01-01

    NASA Langley's Aircraft Landing Dynamics Facility has been adapted in order to test the performance of airfoils in a simulated rain environment, at rainfall rates of 2, 10, 30, and 40 inches/hour, and thereby derive the scaling laws associated with simulated rain in wind tunnel testing. A full-scale prototype of the rain-generation system has been constructed and tested for suitable rain intensity, uniformity, effects of crosswinds on uniformity, and drop size range. The results of a wind tunnel test aimed at ascertaining the minimum length of the simulated rain field required to yield an airfoil performance change due to the rain environment are presented.

  3. Control carrier recombination of multi-scale textured black silicon surface for high performance solar cells

    NASA Astrophysics Data System (ADS)

    Hong, M.; Yuan, G. D.; Peng, Y.; Chen, H. Y.; Zhang, Y.; Liu, Z. Q.; Wang, J. X.; Cai, B.; Zhu, Y. M.; Chen, Y.; Liu, J. H.; Li, J. M.

    2014-06-01

    We report enhanced performance of a multi-scale textured black silicon solar cell with a power conversion efficiency of 15.5%, achieved by using anisotropic tetramethylammonium hydroxide etching to control recombination. The multi-scale texture can effectively reduce the surface reflectance over a wide wavelength range, and both surface and Auger recombination can be effectively suppressed by etching the samples after the n++ emitter is formed. Our result shows that the reformed solar cell has a higher conversion efficiency than that of a conventional pyramid-textured cell (15.3%). This work presents an effective method for improving the performance of nanostructured silicon solar cells.

  4. Scale-up considerations relevant to experimental studies of nuclear waste-package behavior

    SciTech Connect

    Coles, D.G.; Peters, R.D.

    1986-04-01

    Results from a study that investigated whether testing large-scale nuclear waste-package assemblages was technically warranted are reported. It was recognized that the majority of the investigations for predicting waste-package performance to date have relied primarily on laboratory-scale experimentation. However, methods for the successful extrapolation of the results from such experiments, both geometrically and over time, to actual repository conditions have not been well defined. Because a well-developed scaling technology exists in the chemical-engineering discipline, it was presupposed that much of this technology could be applicable to the prediction of waste-package performance. A review of existing literature documented numerous examples where a consideration of scaling technology was important. It was concluded that much of the existing scale-up technology is applicable to the prediction of waste-package performance for both size and time extrapolations and that conducting scale-up studies may be technically merited. However, the applicability for investigating the complex chemical interactions needs further development. It was recognized that the complexity of the system, and the long time periods involved, renders a completely theoretical approach to performance prediction almost hopeless. However, a theoretical and experimental study was defined for investigating heat and fluid flow. It was concluded that conducting scale-up modeling and experimentation for waste-package performance predictions is possible using existing technology. A sequential series of scaling studies, both theoretical and experimental, will be required to formulate size and time extrapolations of waste-package performance.

  5. High Thermoelectric Performance Lead Selenide Materials through All-scale Hierarchical Structuring

    NASA Astrophysics Data System (ADS)

    Lee, Yeseul

    Industries have paid increasing attention to power generation using waste heat through thermoelectrics, which convert heat to electric energy. This method can be used in renewable applications because of its environmentally friendly process. Large-scale production of bulk materials with a high thermoelectric figure of merit (ZT) is the key to practical applications. PbTe-based materials have been the most studied, but they face a challenge regarding the scarcity of Te. PbSe is a more abundant analog of PbTe that has been less frequently studied. This work presents the synthesis and characterization of bulk thermoelectric materials based on both n- and p-type PbSe with atomic-, nano-, and meso-scale architectures. When PbSe is doped with Ga and In, these dopants efficiently generate electron carriers sufficient for high ZT. Thus, a higher ZT can be achieved for n-type PbSe than for optimized n-type PbTe at high temperatures. The study of the thermoelectric properties of p-type PbSe with Li, Na, and K indicates that the doping efficiency of Na in PbSe is the highest. The additional spark plasma sintering (SPS) process allows samples to have increased carrier density and produces mesoscale grains that reduce lattice thermal conductivity, increasing ZT. Additional studies for reducing lattice thermal conductivity through nanostructuring were conducted. Adding (Ca/Sr/Ba)Se and EuSe to Na-doped SPS PbSe generates nanoprecipitates. This study shows that the hierarchical architecture on the atomic scale (Na and Ca/Sr/Ba/Eu solid solution), nanoscale (MSe/EuSe nanoprecipitates), and mesoscale (grains) effectively increases ZT. MSe samples show no appreciable change in charge transport, while EuSe samples show decreased charge carriers. However, adding more Na optimizes the properties. Continued investigation of the n-type dopants Sb and Bi shows that Sb not only plays the role of a dopant but is also unexpectedly effective in generating nanostructuring. The Sb-rich precipitates
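
    The figure of merit referred to throughout is zT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity, κ the total thermal conductivity, and T the absolute temperature. The sketch below evaluates this definition with generic illustrative property values, not measured PbSe data from this work.

    ```python
    # Hedged sketch: zT = S^2 * sigma * T / kappa with generic illustrative values,
    # not measured PbSe data from this work.
    def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_m_k, temp_k):
        return seebeck_v_per_k ** 2 * sigma_s_per_m * temp_k / kappa_w_per_m_k

    zt = figure_of_merit(seebeck_v_per_k=250e-6,   # 250 microvolts per kelvin
                         sigma_s_per_m=3.0e4,      # 300 S/cm
                         kappa_w_per_m_k=1.0,
                         temp_k=800.0)
    print(f"zT ~ {zt:.2f}")
    ```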

  6. Brazilian meningococcal C conjugate vaccine: Scaling up studies.

    PubMed

    Bastos, Renata Chagas; de Souza, Iaralice Medeiros; da Silva, Milton Neto; Silva, Flavia de Paiva; Figueira, Elza Scott; Leal, Maria de Lurdes; Jessouroun, Ellen; da Silva, José Godinho; Medronho, Ricardo de Andrade; da Silveira, Ivna Alana Freitas Brasileiro

    2015-08-20

    Several outbreaks caused by Neisseria meningitidis group C have occurred in different regions of Brazil. A conjugate vaccine for Neisseria meningitidis was produced by chemical linkage between periodate-oxidized meningococcal C polysaccharide and hydrazide-activated monomeric tetanus toxoid via a modified reductive amination conjugation method. Vaccine safety and immunogenicity tested in Phase I and II trials showed satisfactory results. Before starting Phase III trials, vaccine production was scaled up to obtain industrial lots under Good Manufacturing Practices (GMP). Comparative analysis between data obtained from industrial and pilot scales of the meningococcal C conjugate bulk showed similar execution times in the scaled-up production process, without significant losses or alterations in the quality attributes of purified compounds. In conclusion, the scale-up was considered satisfactory, and production of the Brazilian meningococcal conjugate vaccine for Phase III trials is feasible. PMID:25865466

  7. A variability study on the ASTM thin slicing and scaling test method for evaluating the long-term performance of an extruded polystyrene foam blown with HCFC-142b

    SciTech Connect

    Fabian, B.A.; Graves, R.S.; Yarbrough, D.W.; Hofton, M.R.

    1997-11-01

    The ASTM accelerated aging test method for unfaced foamboard insulation that is based on slicing and scaling has been used on an extruded polystyrene product blown with HCFC-142b. The test method, including specimen preparation, was carried out at three laboratories. The participating laboratories used different means to prepare thin test specimens. Thin slices of foamboard with and without surface skins were tested in order to assess the effect of skins on the aging process. Measured values for the apparent thermal conductivity, k_a, were used to calculate the time-average k_a for lifetimes of 10, 20, and 40 years. The time-average k_a from the three laboratories differed by less than 2.5% for the 1.5-inch-thick product and less than 2% for the 2.0-inch-thick product. The k_a for slices with skin on one surface was less than the k_a of slices of the core foam.

  8. Feasibility Study for a Hopi Utility-Scale Wind Project

    SciTech Connect

    Kendrick Lomayestewa

    2011-05-31

    The goal of this project was to investigate the feasibility for the generation of energy from wind and to parallel this work with the development of a tribal utility organization capable of undertaking potential joint ventures in utility businesses and projects on the Hopi reservation. Wind resource assessments were conducted at two study sites on Hopi fee simple lands located south of the city of Winslow. Reports from the study were recently completed and have not been compared to any existing historical wind data nor have they been processed under any wind assessment models to determine the output performance and the project economics of turbines at the wind study sites. Ongoing analysis of the wind data and project modeling will determine the feasibility of a tribal utility-scale wind energy generation.

  9. Human Rights Attitude Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Ercan, Recep; Yaman, Tugba; Demir, Selcuk Besir

    2015-01-01

    The objective of this study is to develop a valid and reliable attitude scale with sound psychometric features that can measure secondary school students' attitudes towards human rights. The study group of the research comprises 710 6th, 7th and 8th grade students who study at 4 secondary schools in the centre of Sivas. The study group…

  10. Ute Mountain Ute Tribe Community-Scale Solar Feasibility Study

    SciTech Connect

    Rapp, Jim; Knight, Tawnie

    2014-01-30

    Parametrix Inc. conducted a feasibility study for the Ute Mountain Ute Tribe to determine whether or not a community-scale solar farm would be feasible for the community. An important part of the study was to identify the best location for the solar farm. In the end, a 3 MW community-scale solar farm located on two hayfield sites was found to be the best fit.

  11. Simulating river discharges on a global scale - Identifying determinants of model performance

    NASA Astrophysics Data System (ADS)

    Eisner, S.; Flörke, M.; Kynast, E.

    2012-04-01

    Global hydrological models and land surface models are used to understand and simulate the global terrestrial water cycle. In particular, they are applied to assess global-scale impacts of global and climate change on water resources. While in recent years the growing availability of remote sensing products (e.g., evapotranspiration and soil moisture estimates) has provided valuable information to validate simulated states and fluxes, validation of simulated river discharges against observed time series is still widely used. Most studies, however, focus on long-term mean monthly or annual discharges, discharge time series of the most downstream gauging stations of large-scale river basins (e.g., Amazon, Brahmaputra), or correlation-based metrics. As global modeling approaches are constrained by simplified physical process representations and the implicit assumption that more or less the same model structure is globally valid, it is important to understand where and why these models perform well or poorly in simulating 20th century river runoff and discharge fields. We present an extensive yet deliberately generic evaluation of the ability of the WaterGAP (Water - Global Assessment and Prognosis) Hydrology Model to simulate 20th century discharges. The model is designed as a conceptual water balance model, in the current version, WaterGAP3, operating on a 5 arc minute global grid. River runoff generated on the individual grid cells is routed along a global drainage direction map taking into account retention in natural surface water bodies, i.e. lakes and wetlands, as well as anthropogenic impacts, i.e. flow regulation and water abstraction for agriculture, industry and domestic purposes. Simulated discharges are evaluated against 1600 observed discharge records provided by the Global Runoff Data Centre (GRDC). Globally, the selected gauging stations differ substantially in their corresponding catchment areas, between 3,000 and 3.6 million km², as well as
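
    Evaluations of this kind usually complement correlation-based metrics with an efficiency measure such as the Nash-Sutcliffe efficiency, which also penalizes bias and errors in variance. The sketch below computes both for an invented simulated/observed discharge pair; it is a generic illustration, not the study's evaluation code.

    ```python
    # Hedged sketch: Pearson correlation and Nash-Sutcliffe efficiency (NSE) for an
    # invented simulated/observed discharge pair; not the study's evaluation code.
    import numpy as np

    def nse(sim, obs):
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pearson_r(sim, obs):
        return float(np.corrcoef(sim, obs)[0, 1])

    obs = [120.0, 95.0, 340.0, 410.0, 150.0, 88.0]   # assumed discharges, m^3/s
    sim = [110.0, 120.0, 300.0, 460.0, 170.0, 70.0]
    print(f"r = {pearson_r(sim, obs):.2f}, NSE = {nse(sim, obs):.2f}")
    ```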

  12. Relationships between study skills and academic performance

    NASA Astrophysics Data System (ADS)

    Md Rahim, Nasrudin; Meon, Hasni

    2013-04-01

    Study skills play an important role in influencing the academic performance of university students. These skills, which can be modified, can be used as an indicator of how a student would perform academically in his or her course of study. The purpose of the study is to determine the study skills profile among Universiti Selangor's (Unisel) students and to find the relationships of these skills with students' academic performance. A sample of seventy-eight (78) foundation studies and diploma students of Unisel were selected to participate in this study. Using the Study Skills Inventory instrument, eight skills were measured. They are note taking; test taking; textbook study; concentration and memory; time management; analytical thinking and problem solving; nutrition; and vocabulary. Meanwhile, students' academic performance was measured through their current Grade Point Average (GPA). The result showed that vocabulary skill scored the highest mean with 3.01/4.00, followed by test taking (2.88), analytical thinking and problem solving (2.80), note taking (2.79), textbook study (2.58), concentration and memory (2.54), time management (2.25) and nutrition (2.21). Correlation analysis showed that test taking (r=0.286, p=0.011), note taking (r=0.224, p=0.048), and analytical thinking and problem solving (r=0.362, p=0.001) skills were positively correlated with GPA achievement.

  13. Performance of a pilot-scale compost biofilter treating gasoline vapor

    SciTech Connect

    Wright, W.F.; Schroeder, E.D.; Chang, D.P.Y.; Romstad, K.

    1997-06-01

    A pilot-scale compost biofiltration system was operated at a gasoline soil vapor extraction site in Hayward, California for one year. The media was composed of equal volumes of compost and perlite, a bulking agent. Supplements added included nitrogen (as KNO3), a gasoline-degrading microbial inoculum, buffer (crushed oyster shell), and water. The biofiltration system was composed of four identical units with outside dimensions of 1.2 x 1.2 x 1.2 m (4 x 4 x 4 ft) operated in an up-flow mode. The units were configured in parallel during the first eight months and then reconfigured to two parallel systems of two units in series. Air flux values ranged from 0.29 to 1.0 m³/m² per min. Inlet total petroleum hydrocarbon (TPH(gas)) concentrations ranged from 310 to 2,700 mg/m³. The average empty bed contact time was 2.2 min. Following start-up, performance of the individual biofilters varied considerably for a seven-month period. The principal factor affecting performance appeared to be bed moisture content. Overall TPH(gas) removals reached 90% for short periods in one unit, and BTEX removals were typically above 90%. Drying resulted in channeling and loss of bed activity. Management of bed moisture content improved over the study period, and recovery of system performance was achieved without replacement of bed media. Overall TPH(gas) removals exceeded 90% during the final 50 days of the study.
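
    The operating numbers quoted here follow from simple definitions: empty bed contact time is bed volume divided by air flow, and removal efficiency is one minus the outlet-to-inlet concentration ratio. The sketch below evaluates them with assumed values chosen to fall inside the quoted ranges; it is not data from the study.

    ```python
    # Hedged sketch: empty bed contact time and removal efficiency from their
    # definitions, with assumed values chosen to fall inside the quoted ranges.
    def empty_bed_contact_time_min(bed_volume_m3, air_flow_m3_per_min):
        return bed_volume_m3 / air_flow_m3_per_min

    def removal_efficiency(c_in_mg_m3, c_out_mg_m3):
        return 1.0 - c_out_mg_m3 / c_in_mg_m3

    bed_volume = 1.2 * 1.2 * 1.2          # m^3, one cube-shaped unit (approximate)
    flux = 0.55                           # m^3/m^2 per min (assumed, within range)
    air_flow = flux * 1.2 * 1.2           # m^3/min over the 1.2 m x 1.2 m cross-section
    print(f"EBCT ~ {empty_bed_contact_time_min(bed_volume, air_flow):.1f} min")
    print(f"TPH removal ~ {removal_efficiency(1500.0, 150.0):.0%}")
    ```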

  14. A new method for testing the scale-factor performance of fiber optical gyroscope

    NASA Astrophysics Data System (ADS)

    Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin

    2015-10-01

    Fiber optical gyro (FOG) is a kind of solid-state optical gyroscope with good environmental adaptability, which has been widely used in national defense, aviation, aerospace and other civilian areas. In some applications, the FOG will experience environmental conditions such as vacuum, radiation, vibration and so on, and the scale-factor performance is of concern as an important accuracy indicator. However, the scale-factor performance of a FOG under these environmental conditions is difficult to test using conventional methods, as a turntable cannot operate under such conditions. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the effect produced by a turntable in uniform rotation, a new method for testing the scale-factor performance of a FOG without a turntable is proposed in this paper. In this method, the test system consists of an external operational amplifier circuit and a FOG in which the modulation signal and the Y waveguide are disconnected. The external operational amplifier circuit is used to superimpose the externally generated sawtooth voltage signal and the modulation signal of the FOG, and to apply the superimposed signal to the Y waveguide of the FOG. The test system can produce different equivalent angular velocities by changing the period of the sawtooth signal during the scale-factor performance test. In this paper, the system model of a FOG superimposed with an externally generated sawtooth is analyzed, and it is concluded that the effect of the equivalent input angular velocity produced by the sawtooth voltage signal is consistent with the effect of the input angular velocity produced by the turntable. The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and the correction method for the equivalent angular velocity is also presented by
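
    The equivalent rotation rate produced by a phase ramp can be estimated from the standard Sagnac relations; the sketch below assumes a sawtooth spanning 2π rad of optical phase (i.e. a peak-to-peak drive of twice the modulator half-wave voltage) and uses purely hypothetical coil parameters, since none are given in the abstract.

      import math

      C = 299792458.0   # vacuum speed of light [m/s]

      def equivalent_rate_deg_s(L, D, n_g, wavelength, T, phi_pp=2.0 * math.pi):
          """Equivalent rotation rate of a sawtooth (serrodyne) phase ramp applied to a FOG.
          L: fiber length [m], D: coil diameter [m], n_g: group index,
          wavelength: source wavelength [m], T: sawtooth period [s],
          phi_pp: peak-to-peak optical phase of the ramp [rad]."""
          tau = n_g * L / C                                    # loop transit time
          dphi = phi_pp * tau / T                              # non-reciprocal phase from the ramp
          sagnac = 2.0 * math.pi * L * D / (wavelength * C)    # Sagnac scale factor [rad per rad/s]
          return math.degrees(dphi / sagnac)

      # hypothetical gyro: 1 km coil, 80 mm diameter, 1550 nm source, 1 ms sawtooth period
      print(equivalent_rate_deg_s(L=1000.0, D=0.08, n_g=1.46, wavelength=1550e-9, T=1e-3))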

  15. A large-scale validation study of the Medication Adherence Rating Scale (MARS).

    PubMed

    Fialko, Laura; Garety, Philippa A; Kuipers, Elizabeth; Dunn, Graham; Bebbington, Paul E; Fowler, David; Freeman, Daniel

    2008-03-01

    Adherence to medication is an important predictor of illness course and outcome in psychosis. The Medication Adherence Rating Scale (MARS) is a ten-item self-report measure of medication adherence in psychosis [Thompson, K., Kulkarni, J., Sergejew, A.A., 2000. Reliability and validity of a new Medication Adherence Rating Scale (MARS) for the psychoses. Schizophrenia Research. 42. 241-247]. Although initial results suggested that the scale has good reliability and validity, the development sample was small. The current study aimed to establish the psychometric properties of the MARS in a sample over four times larger. The scale was administered to 277 individuals with psychosis, along with measures of insight and psychopathology. Medication adherence was independently rated by each individual's keyworker. Results showed the internal consistency of the MARS to be lower than in the original sample, though adequate. MARS total score correlated weakly with keyworker-rated adherence, hence concurrent validity of the scale appeared only moderate to weak. The three factor structure of the MARS was replicated. Examination of the factor scores suggested that the factor 1 total score, which corresponds to the Medication Adherence Questionnaire [Morisky,D.E., Green,L.W. and Levine,D.M., 1986. Concurrent and predictive validity of a self-reported measure of medication adherence. Medical Care. 24, 67-74] may be a preferable measure of medication adherence behaviour to the total scale score. PMID:18083007

  16. Effect of nano-scale characteristics of graphene on electrochemical performance of activated carbon supercapacitor electrodes

    NASA Astrophysics Data System (ADS)

    Jasni, M. R. M.; Deraman, M.; Suleman, M.; Hamdan, E.; Sazali, N. E. S.; Nor, N. S. M.; Shamsudin, S. A.

    2016-02-01

    Graphene, with its characteristic nano-scale properties, has been widely used as an additive in activated carbon electrodes in order to enhance the performance of the electrodes for their use in high performance supercapacitors. Activated carbon monolith (ACM) electrodes have been prepared by carbonization and activation of green monoliths (GMs) of pre-carbonized fibers of oil palm empty fruit bunches or self-adhesive carbon grains (SACGs), and of SACGs added with 6 wt% of KOH-treated multi-layer graphene. The ACM electrodes have been assembled in symmetrical supercapacitor cells that employed aqueous KOH electrolyte (6 M). The cells have been tested with cyclic voltammetry, electrochemical impedance spectroscopy and galvanostatic charge-discharge methods to investigate the effect of graphene addition on the specific capacitance (Csp), specific energy (E), specific power (P), equivalent series resistance (ESR) and response time (τo) of the supercapacitor cells. The results show that the addition of graphene to the GMs changes the values of Csp, Emax, Pmax, ESR and τo from (61-96) F/g, 2 Wh/kg, 104 W/kg, 2.6 Ω and 38 s to the respective values of (110-124) F/g, 3 Wh/kg, 156 W/kg, 3.4 Ω and 63 s. This study demonstrates that the addition of graphene to the GMs has a significant effect on the electrochemical behavior of the electrodes.
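
    The figures of merit listed above are normally extracted from the galvanostatic charge-discharge curve and the ESR; a minimal sketch of the standard two-electrode formulas is given below, with hypothetical input values that are not taken from the measurements reported in the abstract.

      def supercap_metrics(current_a, discharge_time_s, voltage_window_v, mass_kg, esr_ohm):
          """Symmetric-cell figures of merit per total active mass of both electrodes."""
          c_cell = current_a * discharge_time_s / voltage_window_v      # cell capacitance [F]
          csp_f_per_g = 4.0 * c_cell / (mass_kg * 1000.0)               # single-electrode specific capacitance
          e_wh_per_kg = 0.5 * c_cell * voltage_window_v ** 2 / mass_kg / 3600.0
          p_max_w_per_kg = voltage_window_v ** 2 / (4.0 * esr_ohm * mass_kg)
          return csp_f_per_g, e_wh_per_kg, p_max_w_per_kg

      # hypothetical discharge: 0.12 A over a 1 V window for 125 s, 0.5 g of electrodes, 3.4 ohm ESR
      print(supercap_metrics(0.12, 125.0, 1.0, 0.5e-3, 3.4))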

  17. High Performance Hydrogen/Bromine Redox Flow Battery for Grid-Scale Energy Storage

    SciTech Connect

    Cho, KT; Ridgway, P; Weber, AZ; Haussener, S; Battaglia, V; Srinivasan, V

    2012-01-01

    The electrochemical behavior of a promising hydrogen/bromine redox flow battery is investigated for grid-scale energy-storage application with some of the best redox-flow-battery performance results to date, including a peak power of 1.4 W/cm(2) and a 91% voltaic efficiency at 0.4 W/cm(2) constant-power operation. The kinetics of bromine on various materials is discussed, with both rotating-disk-electrode and cell studies demonstrating that a carbon porous electrode for the bromine reaction can deliver platinum-comparable performance as long as sufficient surface area is realized. The effect of flow-cell designs and operating temperature is examined, and ohmic and mass-transfer losses are decreased by utilizing a flow-through electrode design and increasing cell temperature. Charge/discharge and discharge-rate tests also reveal that this system has highly reversible behavior and good rate capability. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.018211jes] All rights reserved.
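
    Two of the headline numbers above follow from simple definitions: voltaic efficiency is the ratio of discharge to charge voltage at matched current (or power), and area-specific power is current density times cell voltage. The operating point below is hypothetical, chosen only to show the arithmetic.

      def voltaic_efficiency(v_discharge, v_charge):
          """Ratio of mean discharge to mean charge voltage at the same current or power."""
          return v_discharge / v_charge

      def power_density_w_cm2(current_density_a_cm2, cell_voltage_v):
          return current_density_a_cm2 * cell_voltage_v

      print(voltaic_efficiency(0.91, 1.00))       # 0.91, i.e. 91%
      print(power_density_w_cm2(2.0, 0.70))       # 1.4 W/cm2 at a hypothetical 2 A/cm2 and 0.7 V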

  18. The Video Suggestibility Scale for Children: how generalizable is children's performance to other measures of suggestibility?

    PubMed

    McFarlane, Felicity; Powell, Martine B

    2002-01-01

    This study explored the generalizability of the Video Suggestibility Scale for Children (VSSC), which was developed by Scullin and colleagues (Scullin & Ceci, 2001; Scullin & Hembrooke, 1998) as a tool for discriminating among children (aged three to five years) who have different levels of suggestibility. The VSSC consists of two subscales; Yield (a measure of children's willingness to acquiesce to misleading questions) and Shift (a measure of children's tendency to change their responses after feedback from the interviewer). Children's (N = 77) performance on each of the subscales was compared with their performance using several other measures of suggestibility. These measures included children's willingness to assent to a false event as well as the number of false interviewer suggestions and false new details that the children provided when responding to cued-recall questions about an independent true-biased and an independent false (non-experienced) event. An independent samples t-test revealed that those children who assented to the false event generated higher scores on the Yield measure. Hierarchical regression analyses revealed that Yield was a significant predictor of the number of false details reported about the false activity, but not the true-biased activity. There was no significant relationship between the Shift subscale and any of the dependent variables. The potential contribution of the VSSC for forensic researchers and practitioners is discussed. PMID:12465135

  19. Scales

    MedlinePlus

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Eczema , ringworm , and psoriasis ...

  20. Full scale performance of the aerobic granular sludge process for sewage treatment.

    PubMed

    Pronk, M; de Kreuk, M K; de Bruin, B; Kamminga, P; Kleerebezem, R; van Loosdrecht, M C M

    2015-11-01

    Recently, aerobic granular sludge technology has been scaled-up and implemented for industrial and municipal wastewater treatment under the trade name Nereda(®). With full-scale references for industrial treatment application since 2006 and domestic sewage since 2009 only limited operating data have been presented in scientific literature so far. In this study performance, granulation and design considerations of an aerobic granular sludge plant on domestic wastewater at the WWTP Garmerwolde, the Netherlands were analysed. After a start-up period of approximately 5 months, a robust and stable granule bed (>8 g L(-1)) was formed and could be maintained thereafter, with a sludge volume index after 5 min settling of 45 mL g(-1). The granular sludge consisted for more than 80% of granules larger than 0.2 mm and more than 60% larger than 1 mm. Effluent requirements (7 mg N L(-1) and 1 mg P L(-1)) were easily met during summer and winter. Maximum volumetric conversion rates for nitrogen and phosphorus were respectively 0.17 and 0.24 kg (m(3) d)(-1). The energy usage was 13.9 kWh (PE150·year)(-1) which is 58-63 % lower than the average conventional activated sludge treatment plant in the Netherlands. Finally, this study demonstrated that aerobic granular sludge technology can effectively be implemented for the treatment of domestic wastewater. PMID:26233660

  1. A pilot scale electrical infrared dry-peeling system for tomatoes: design and performance evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A pilot scale infrared dry-peeling system for tomatoes was designed and constructed. The system consisted of three major sections including the IR heating, vacuum, and pinch roller sections. The peeling performance of the system was examined under different operational conditions using tomatoes with...

  2. Concurrent Validity of the Universal Nonverbal Intelligence Test and the Leiter International Performance Scale-Revised

    ERIC Educational Resources Information Center

    Hooper, V. Scott; Bell, Sherry Mee

    2006-01-01

    One hundred elementary- and middle-school students were administered the Universal Nonverbal Intelligence Test (UNIT; B.A. Bracken & R.S. McCallum, 1998) and the Leiter International Performance Scale-Revised (Leiter-R; G.H. Roid & L.J. Miller, 1997). Correlations between UNIT and Leiter-R scores were statistically significant ( p less than…

  3. The Examination of Reliability According to Classical Test and Generalizability on a Job Performance Scale

    ERIC Educational Resources Information Center

    Yelboga, Atilla; Tavsancil, Ezel

    2010-01-01

    In this research, the classical test theory and generalizability theory analyses were carried out with the data obtained by a job performance scale for the years 2005 and 2006. The reliability coefficients obtained (estimated) from the classical test theory and generalizability theory analyses were compared. In classical test theory, test retest…

  4. Basing Performance Assessment on Behaviorally Anchored Rating Scales in Collegiate Organizations.

    ERIC Educational Resources Information Center

    Lyons, Paul R.

    The use of behaviorally anchored rating scales (BARS) as the basis of an assessment system that was designed to improve academic department chairpersons in a college of arts and sciences is described. Twenty-eight faculty members, two from each department, were asked to identify evaluative dimensions for assessing chairperson performance and to…

  5. Performance of Black and White Children on the Bracken Basic Concept Scale.

    ERIC Educational Resources Information Center

    Bracken, Bruce A.; And Others

    1987-01-01

    Administered Bracken Basic Concept Scale (BBCS) to 114 matched pairs of Black and White children. Scores of White children were nearly identical with national average while Black children scored approximately one-half standard deviation below their White counterparts. Blacks and Whites showed similar performance patterns on BBCS subtests,…

  6. Performance Technologies for Peta-Scale Systems: A White Paper Prepared by the Performance Evaluation Research Center

    SciTech Connect

    Bailey, D H; de Supinski, B R; Dongarra, J; Dunigan, T; Gao, G; Hoisie, A; Hovland, J; Hollangsworth, J; Jeffferson, D R; Kamath, C; Malony, A; Norris, B; Quinlan, D; McKee, S A; Mendes, C; Moore, S; Reed, D; Snavely, A; Strohmaier, E; Vetter, J S; Worley, P

    2003-05-20

    Future-looking high end computing initiatives will deploy powerful, large-scale computing platforms that leverage novel component technologies for superior node performance in advanced system architectures with tens or even hundreds of thousands of nodes. Recent advances in performance tools and modeling methodologies suggest that it is feasible to acquire such systems intelligently and achieve excellent performance, while also significantly reducing the user time required to attain high performance. These developments are relevant to several aspects of future HEC technology outlined in the recent HECRTF white paper request, in particular items 5.4, 5.5, 5.6, and 5.8. We envision the following specific capabilities: (1) Performance modeling tools, available to researchers and vendors, will extrapolate performance from prototype systems to full-scale systems, and even accurately predict performance behavior before systems are manufactured, thus enabling both improved designs and more intelligent selection of systems in procurements. (2) System simulation facilities, implemented on highly parallel platforms and available to researchers and vendors, will for instance realistically model the performance of a specific interprocessor network design running a specific scientific application code. As with item 1, these facilities can lead both to improved designs and procurement decisions that yield significantly greater sustained performance for targeted scientific applications. (3) A program monitoring and analysis infrastructure, scalable to 100,000 processors and beyond, will provide performance information at every level of the system's memory hierarchy and network. This infrastructure will build upon knowledge discovery and data mining techniques to be significantly more scalable and easier to use than the current infrastructure, and a standard version will be incorporated in most high-end systems. (4) Self-tuning software facilities, now available only for a few

  7. A small-scale study of magneto-rheological track vibration isolation system

    NASA Astrophysics Data System (ADS)

    Li, Rui; Mu, Wenjun; Zhang, Luyang; Wang, Xiaojie

    2016-04-01

    A magneto-rheological bearing (MRB) is proposed to improve the vibration isolation performance of a floating slab track system. However, it is difficult to carry out tests of the full-scale track vibration isolation system in the laboratory. In this paper, a small-scale test bench for the MRBs is therefore established on the basis of a scale analysis of the floating slab track system, carried out from the point of view of the dimensionless dynamic characteristics of the physical quantities. A small-scale MRB with squeeze mode using magneto-rheological grease is designed and its performance is tested. The major parameters of the small-scale test bench are obtained according to similarity theory. The force transmissibility ratio and the relative acceleration transmissibility ratio are selected as evaluation indices of system similarity. The dynamics of the two similar systems are calculated in MATLAB. Simulation results show that the dynamics of the prototype and scale models have good similarity. Further, a test bench is built according to the small-scale model parameter analysis. The experiment shows that the bench test results are consistent with those of the theoretical model in evaluating the vibration force and acceleration. Therefore, the small-scale study of the magneto-rheological track vibration isolation system based on similarity theory reveals the isolation performance of a real slab track prototype system.
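
    The force transmissibility ratio used as an evaluation index above has, for the textbook single-degree-of-freedom isolator, a closed form in the frequency ratio r and damping ratio ζ; the sketch below evaluates that form for a few hypothetical points (the paper's multi-degree-of-freedom slab model is not reproduced here).

      import math

      def force_transmissibility(r, zeta):
          """|F_transmitted / F_excitation| for a single-DOF isolator;
          r = excitation frequency / natural frequency, zeta = damping ratio."""
          num = 1.0 + (2.0 * zeta * r) ** 2
          den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
          return math.sqrt(num / den)

      # isolation (T < 1) only occurs above r = sqrt(2); added damping helps near resonance
      for r in (0.5, 1.0, math.sqrt(2.0), 3.0):
          print(r, force_transmissibility(r, 0.1), force_transmissibility(r, 0.4))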

  8. A Reliability Generalization Study of the Geriatric Depression Scale.

    ERIC Educational Resources Information Center

    Kieffer, Kevin M.; Reese, Robert J.

    2002-01-01

    Conducted a reliability generalization study of the Geriatric Depression Scale (T. Brink and others, 1982). Results from this investigation of 338 studies shows that the average score reliability across studies was 0.8482 and identifies the most important predictors of score reliability. (SLD)

  9. Cognition, study habits, test anxiety, and academic performance.

    PubMed

    Kleijn, W C; van der Ploeg, H M; Topman, R M

    1994-12-01

    The Study Management and Academic Results Test (SMART) was developed to measure study- and examination-related cognitions, time management, and study strategies. This questionnaire was used in three prospective studies, together with measures for optimism and test anxiety. In the first two studies, done among 253 first-year students enrolled in four different faculties, the highest significant correlations with academic performance were found for the SMART scales. In a replication study among first-year medical students (n = 156) at a different university, the same pattern of results was observed. A stepwise multiple regression analysis, with academic performance as a dependent variable, showed significant correlations only for the SMART Test Competence and Time Management (Multiple R = .61). Results give specific indications about the profile of successful students. PMID:7892384

  10. Module-scale analysis of pressure retarded osmosis: performance limitations and implications for full-scale operation.

    PubMed

    Straub, Anthony P; Lin, Shihong; Elimelech, Menachem

    2014-10-21

    We investigate the performance of pressure retarded osmosis (PRO) at the module scale, accounting for the detrimental effects of reverse salt flux, internal concentration polarization, and external concentration polarization. Our analysis offers insights on optimization of three critical operation and design parameters--applied hydraulic pressure, initial feed flow rate fraction, and membrane area--to maximize the specific energy and power density extractable in the system. For co- and counter-current flow modules, we determine that appropriate selection of the membrane area is critical to obtain a high specific energy. Furthermore, we find that the optimal operating conditions in a realistic module can be reasonably approximated using established optima for an ideal system (i.e., an applied hydraulic pressure equal to approximately half the osmotic pressure difference and an initial feed flow rate fraction that provides equal amounts of feed and draw solutions). For a system in counter-current operation with a river water (0.015 M NaCl) and seawater (0.6 M NaCl) solution pairing, the maximum specific energy obtainable using performance properties of commercially available membranes was determined to be 0.147 kWh per m(3) of total mixed solution, which is 57% of the Gibbs free energy of mixing. Operating to obtain a high specific energy, however, results in very low power densities (less than 2 W/m(2)), indicating that the trade-off between power density and specific energy is an inherent challenge to full-scale PRO systems. Finally, we quantify additional losses and energetic costs in the PRO system, which further reduce the net specific energy and indicate serious challenges in extracting net energy in PRO with river water and seawater solution pairings. PMID:25222561
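
    The ideal-case optimum cited above (a hydraulic pressure of roughly half the osmotic pressure difference) can be illustrated with the van 't Hoff approximation for the two NaCl solutions; the water permeability below is a hypothetical, commercial-membrane-like value, and the resulting power density ignores concentration polarization and reverse salt flux, so it is an upper bound rather than the paper's result.

      R = 8.314          # J mol-1 K-1
      T = 298.15         # K

      def osmotic_pressure_bar(molarity_nacl):
          """Ideal van 't Hoff osmotic pressure of an NaCl solution (dissociation factor i = 2)."""
          return 2.0 * molarity_nacl * 1000.0 * R * T / 1.0e5   # mol/L -> mol/m3, Pa -> bar

      d_pi = osmotic_pressure_bar(0.6) - osmotic_pressure_bar(0.015)   # seawater vs river water [bar]
      d_p_opt = d_pi / 2.0                                             # ideal optimum hydraulic pressure [bar]
      A = 2.8e-12                                                      # water permeability [m s-1 Pa-1], ~1 L m-2 h-1 bar-1
      j_w = A * (d_pi - d_p_opt) * 1.0e5                               # ideal water flux [m/s]
      print(d_pi, d_p_opt, j_w * d_p_opt * 1.0e5)                      # ~29 bar, ~14.5 bar, ~6 W/m2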

  11. A study of large scale gust generation in a small scale atmospheric wind tunnel with applications to Micro Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Roadman, Jason Markos

    Modern technology operating in the atmospheric boundary layer can always benefit from more accurate wind tunnel testing. While scaled atmospheric boundary layer tunnels have been well developed, tunnels replicating portions of the atmospheric boundary layer turbulence at full scale are a comparatively new concept. Testing at full-scale Reynolds numbers with full-scale turbulence in an "atmospheric wind tunnel" is sought. Many programs could utilize such a tool, including Micro Aerial Vehicle (MAV) development, the wind energy industry, fuel efficient vehicle design, and the study of bird and insect flight, to name just a few. The small scale of MAVs provides the somewhat unique capability of full scale Reynolds number testing in a wind tunnel. However, that same small scale creates interactions under real world flight conditions, atmospheric gusts for example, that lead to a need for testing under more complex flows than the standard uniform flow found in most wind tunnels. It is for these reasons that MAVs are used as the initial testing application for the atmospheric gust tunnel. An analytical model for both discrete gusts and a continuous spectrum of gusts is examined. Then, methods for generating gusts in agreement with that model are investigated. Previously used methods are reviewed and a gust generation apparatus is designed. Expected turbulence and gust characteristics of this apparatus are compared with atmospheric data. The construction of an active "gust generator" for a new atmospheric tunnel is reviewed and the turbulence it generates is measured utilizing single and cross hot wires. Results from this grid are compared to atmospheric turbulence and it is shown that various gust strengths can be produced corresponding to weather ranging from calm to quite gusty. An initial test is performed in the atmospheric wind tunnel whereby the effects of various turbulence conditions on transition and separation on the upper surface of a MAV wing are investigated

  12. Predictions of long-term performance of granular iron permeable reactive barriers: field-scale evaluation.

    PubMed

    Jeen, Sung-Wook; Gillham, Robert W; Przepiora, Andrzej

    2011-04-01

    Long-term performance is a key consideration for the granular iron permeable reactive barrier (PRB) technology because the economic benefit relies on sustainable operation for substantial periods of time. However, predictions on the long-term performance have been limited mainly because of the lack of reliable modeling tools. This study evaluated the predictive capability of a recently-developed reactive transport model at two field-scale PRBs, both having relatively high concentrations of dissolved carbonate in the native groundwater. The first site, with 8 years of available monitoring data, was a funnel-and-gate installation, with a low groundwater velocity through the gate (about 0.12 m d(-1)). The loss in iron reactivity caused by secondary mineral precipitation was small, maintaining relatively high removal rates for chlorinated organics. The simulated concentrations for most constituents in the groundwater were within the range of the monitoring data. The second site, with monitoring data available for 5 years, was a continuous wall PRB, designed for a groundwater velocity of 0.9 m d(-1). A comparison of measured and simulated aqueous concentrations suggested that the average groundwater velocity through the PRB could be lower than the design value by a factor of two or more. The distribution and amounts of carbonate minerals measured in core samples supported the decreased groundwater velocity used in the simulation. The generally good agreement between the simulated and measured aqueous and solid-phase data suggest that the model could be an effective tool for predicting long-term performance of granular iron PRBs, particularly in groundwater with high concentrations of carbonate. PMID:21237528

  13. Field-aligned currents' scale analysis performed with the Swarm constellation

    NASA Astrophysics Data System (ADS)

    Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter

    2015-01-01

    We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistence period of small-scale FACs of order 10 s, while large-scale FACs can be regarded as stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on the dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.

  14. Fluid flow measurements of Test Series A and B for the Small Scale Seal Performance Tests

    SciTech Connect

    Peterson, E.W.; Lagus, P.L.; Lie, K.

    1987-12-01

    The degree of waste isolation achieved by a repository seal system is dependent upon the fluid flow characteristics, or permeability, of the seals. In order to obtain meaningful, site-specific data on the performance of various possible seal system components, a series of in situ experiments called the Small Scale Seal Performance Tests (SSSPT) are being conducted at the Waste Isolation Pilot Plant (WIPP). This report contains the results of gas flow, tracer penetration, and brine flow tests conducted on concrete seals in vertical (Test Series A) and horizontal (Test Series B) configurations. The test objectives were to evaluate the seal performance and to determine if there existed scaling effects which could influence future SSSPT designs. 3 refs., 77 figs.

  15. Catalyst performance study using Taguchi methods

    SciTech Connect

    Sims, G.S.; Johri, S.

    1988-01-01

    A study was conducted to determine the effects of various factors on the performance characteristics of aged monolithic catalytic converters. The factors that were evaluated were catalyst volume, converter configuration (number of elements), catalyst supplier washcoat technology, rhodium loading, platinum loading, and palladium loading. This study was also designed to evaluate the interactions among the various factors. To improve the efficiency of the study, a 2-level fractional experiment was designed using the Taguchi method. That made it possible to study the effects of the seven main factors and six interactions by evaluating only 16 different samples. The study helped sort out the factors that had significant effects and helped quantify their effect on catalyst performance. This paper details the methodology used to design the experiment and analyze the results.
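
    Studying seven two-level factors in only 16 runs corresponds to a 2^(7-3) fractional factorial; a minimal sketch of one valid choice of generators (the paper's actual column assignment is not given) is shown below.

      from itertools import product

      # 2^(7-3) fractional factorial: base factors A, B, C, D form a full 2^4 design,
      # and the remaining columns are generated as E = ABC, F = ABD, G = ACD
      # (a resolution IV choice; other generator sets are equally valid).
      runs = []
      for a, b, c, d in product((-1, 1), repeat=4):
          runs.append((a, b, c, d, a * b * c, a * b * d, a * c * d))

      print(len(runs))       # 16 runs instead of the 2^7 = 128 of a full factorial
      for row in runs:
          print(row)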

  16. Adaptation of Distributed Leadership Scale into Turkish: The Validity and Reliability Study

    ERIC Educational Resources Information Center

    Ersozlu, Alpay; Ulusoy, Tarik

    2016-01-01

    The purpose of this study was to adapt "Distributed Leadership Scale" originally developed by Davis into Turkish Language. A total of 386 participants including teachers employed in high schools in Tokat participated in the study. Explanatory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were performed to test the…

  17. Preliminary study of a scale measuring depression and somatic symptoms.

    PubMed

    Hung, Ching-I; Weng, Li-Jen; Su, Yi-Jen; Liu, Chia-Yih

    2006-10-01

    This description concerns the development of a scale measuring depression and somatic symptoms and the selection of its items for a Taiwanese sample. 102 Taiwanese outpatients (28 men, 74 women) with major depressive disorder completed a 44-item preliminary scale. All had experienced a major depressive episode but had not been treated with antidepressants within the prior two weeks. The Hamilton Depression Rating Scale was administered to evaluate the validity of the Depression and Somatic Symptoms Scale (DSSS). Items, 12 for the Depression subscale and 10 for the Somatic subscale, were selected for the DSSS according to their frequency and their association with rated severity of depression and clinical practices. The mean Hamilton Depression score was 23.9 (SD = 5.2) versus 38.4 (SD = 11.3) for the total DSSS; means were 23.5 +/- 6.0 for the Depression subscale and 14.9 +/- 6.8 for the Somatic subscale. Cronbach alpha was .88 for the total DSSS, .78 for the Depression subscale, and .86 for the Somatic subscale. The Pearson correlation coefficient between the two subscales was .59 (p < .01). The new scale had adequate internal consistency reliability and convergent validity. Further study is required to assess its structure and item characteristics, and to judge its applicability, limitations, and sensitivity to cultural differences in clinical settings. PMID:17153806
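
    The internal-consistency figures quoted above are Cronbach alpha values; the standard formula, applied here to a tiny set of hypothetical item responses rather than the study's data, is sketched below.

      def cronbach_alpha(item_scores):
          """item_scores: one list of respondent scores per item (all the same length)."""
          def var(xs):
              m = sum(xs) / len(xs)
              return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
          k = len(item_scores)
          n = len(item_scores[0])
          totals = [sum(item[i] for item in item_scores) for i in range(n)]
          return k / (k - 1) * (1.0 - sum(var(item) for item in item_scores) / var(totals))

      # three hypothetical 4-point items answered by five respondents
      print(cronbach_alpha([[1, 2, 3, 4, 4], [2, 2, 3, 4, 3], [1, 3, 3, 4, 4]]))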

  18. The performance of the Health of the Nation Outcome Scales as measures of clinical severity.

    PubMed

    Müller, Mario; Vandeleur, Caroline; Weniger, Godehard; Prinz, Susanne; Vetter, Stefan; Egger, Stephan T

    2016-05-30

    The aim of this study was to examine the performance of the Health of the Nation Outcome Scales (HoNOS) against other measures of functioning and mental health in a full three-year cohort of admissions to a psychiatric hospital. A sample of N=1719 patients (35.3% females, aged 17-78 years) was assessed using observer-rated measures and self-reports of psychopathology at admission. Self-reports were available from 51.7% of the sample (34.4% females, aged 17-76 years). Functioning and psychopathology were compared across five ICD-10 diagnostic groups: substance use disorders, schizophrenia and psychotic disorders, affective disorders, anxiety/somatoform disorders and personality disorders. Associations between the measures were examined, stratifying by diagnostic subgroup. The HoNOS were strongly linked to other measures primarily in psychotic disorders (except for the behavioral subscale), while those with substance use disorders showed rather poor links. Those with anxiety/somatoform disorders showed null or only small associations. This study raises questions about the overall validity of the HoNOS. It seems to entail different levels of validity when applied to different diagnostic groups. In clinical practice the HoNOS should not be used as a stand-alone instrument to assess outcome but rather as part of a more comprehensive battery including diagnosis-specific measures. PMID:27137958

  19. Academic Performance Antecedent Scale: validation with native and recent immigrant children.

    PubMed

    Wang, Ru-Jer; Kuo, Kung-Bin; Cheng, Chien-Ming; Hsieh, Pei-Jung; Wang, Han-Yu; Chang, Ya-Wen; Shen, Chia-Yi

    2013-06-01

    This study aims to assess the measurement invariance of the three subscales of the newly developed Academic Performance Antecedent Scale (APAS)--School Factors, Mother's Parenting Style, and Individual Factors--across native and new immigrant children in Taiwan. The study sample comprised 527 Grade 4 students (M age = 10.4 yr., SD = 0.6), 263 boys and 264 girls. The three groups were urban and rural children of Taiwanese natives (n = 343, 65.1%), and 184 children with non-Taiwanese mothers (34.9%). The four-factor structure of the School Factors Subscale, the three-factor structure of the Mother's Parenting Style Subscale, and the five-factor structure of the Individual Factors Subscale all showed at least acceptable fit for the groups. In addition, metric invariance was confirmed for the School Factors and Individual Factors Subscales. Metric invariance was partially obtained for the Mother's Parenting Style Subscale. The findings provide validity evidences for cross-cultural generalizability of the APAS. PMID:24245069

  20. Startup pattern and performance enhancement of pilot-scale biofilm process for raw water pretreatment.

    PubMed

    Yang, Guang-Feng; Feng, Li-Juan; Yang, Qi; Zhu, Liang; Xu, Jian; Xu, Xiang-Yang

    2014-11-01

    The quality of raw water is getting worse in developing countries because of the inadequate treatment of municipal sewage, industrial wastewater and agricultural runoff. Aiming at biofilm enrichment and pollutant removal, two pilot-scale biofilm reactors were built with different biological carriers. Results showed that, compared with the blank carrier, biofilm was more easily enriched on the biofilm-precoated carrier and less nitrite accumulation occurred. The removal efficiencies of NH4(+)-N, DOC and UV254 increased under the aeration condition, and an optimum DO level for adequate nitrification was 1.0-2.6 mg L(-1) within the suitable temperature range of 21-22°C. Study of the trihalomethane prediction model indicated that the presence of algae increased the risk of disinfection by-product formation, which could be effectively controlled via manual algae removal and light shading. In this study, the performance of the biofilm pretreatment process could be enhanced under optimized conditions of DO level and biofilm carrier. PMID:25233473

  1. The impact of component performance on the overall cycle performance of small-scale low temperature organic Rankine cycles

    NASA Astrophysics Data System (ADS)

    White, M.; Sayma, A. I.

    2015-08-01

    Low temperature organic Rankine cycles offer a promising technology for the generation of power from low temperature heat sources. Small-scale systems (∼10 kW) are of significant interest, however there is a current lack of commercially viable expanders. For a potential expander to be economically viable for small-scale applications it is reasonable to assume that the same expander must have the ability to be implemented within a number of different ORC applications. It is therefore important to design and optimise the cycle considering the component performance, most notably the expander, both at different thermodynamic conditions, and using alternative organic fluids. This paper demonstrates a novel modelling methodology that combines a previously generated turbine performance map with cycle analysis to establish at what heat source conditions optimal system performance can be achieved using an existing turbine design. The results obtained show that the same turbine can be effectively utilised within a number of different ORC applications by changing the working fluid. By selecting suitable working fluids, this turbine can be used to convert pressurised hot water at temperatures between 360 K and 400 K, and mass flow rates between 0.45 kg/s and 2.7 kg/s, into useful power with outputs between 1.5 kW and 27 kW. This is a significant result since it allows the same turbine to be implemented into a variety of applications, improving the economy of scale. This work has also confirmed the suitability of the candidate turbine for a range of low temperature ORC applications.
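
    As a plausibility check on the quoted power range, the heat available from cooling the pressurised water stream times an assumed cycle efficiency gives a rough bound; both the source cool-down and the 7% efficiency below are assumptions, not values from the paper, and the turbine map itself is not reproduced.

      def orc_power_estimate_kw(m_dot_kg_s, source_cooldown_k, cycle_efficiency):
          """Rough ORC shaft-power bound from the sensible heat of a hot-water source."""
          cp_water = 4.18                                        # kJ kg-1 K-1
          q_kw = m_dot_kg_s * cp_water * source_cooldown_k       # heat extracted from the source [kW]
          return cycle_efficiency * q_kw

      print(orc_power_estimate_kw(0.45, 30.0, 0.07))   # low end of the flow-rate range, ~4 kW
      print(orc_power_estimate_kw(2.7, 30.0, 0.07))    # high end, ~24 kW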

  2. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  3. Improving the Rapid Refresh and High Resolution Rapid Refresh physics to better perform across a wide range of spatial scales

    NASA Astrophysics Data System (ADS)

    Olson, Joseph; Grell, Georg

    2014-05-01

    Model development at NOAA/GSD spans a wide range of spatial scales: global scale (Flow-following finite-volume Icosahedral Model, FIM; 10-250 km grid spacing), continental scale (RAP; 13 km grid spacing), CONUS scale (HRRR; 3 km grid spacing), and regional modeling (experimental nesting at 1 km grid spacing over complex terrain). As the model resolution changes, the proportion of resolved vs unresolved physical processes changes; therefore, physical parameterizations need to adapt to different model resolutions to more accurately handle the unresolved processes. The Limited Area Model (LAM) component of the Grey Zone Experiment was designed to assess the change in behavior of numerical weather prediction models between 16 and 1 km by simulating a cold-air outbreak over the North Atlantic and North Sea. The RAP and HRRR model physics were tested in this case study in order to examine the change in behavior of the model physics at 16, 8, 4, 2, and 1 km grid spacings with and without the use of a convective parameterization. The primary purpose of these tests is to better understand the change in behavior of the boundary layer and convective schemes across the grey zone, such that further targeted modifications can then help improve general performance at various scales. The RAP currently employs a modified form of the Mellor-Yamada-Nakanishi-Niino (MYNN) PBL scheme, which is an improved TKE-based scheme tuned to match large-eddy simulations. Modifications have been performed to better match observations at 13 km (RAP) grid spacing but more multi-scale testing is required before modifications are introduced to make it scale-aware. A scale-aware convective parameterization, the Grell-Freitas scheme (both deep- and shallow-cumulus), has been developed to better handle the transition in behavior of the sub-grid scale convective processes through the grey zone. This study examines the change in behavior of both schemes across the grey zone. Their transitional behavior

  4. Comparison of Internal-Blast Explosive Performance in Small- and Large-Scale Tests

    NASA Astrophysics Data System (ADS)

    Granholm, Richard

    2013-06-01

    Small-scale internal blast measurements were correlated with large-scale test data. Highly confined small explosive samples <0.5 g were subjected to the output from a PETN detonator while enclosed in a 3-liter chamber. Large-scale tests up to 22.7 kg were generally unconfined and shot in a 180-m3 chamber. When sample mass was expressed as total sample energy/chamber volume, theoretical peak quasi-static blast pressures for both small and large-scale tests fell on the same curve. Blast explosives may comprise high levels of fuels and reactive materials to enhance or control the release of energy, and may be insensitive and slow-reacting, with performance that may not scale well to small size tests. High confinement of a small sample can compensate for low sensitivity, but at the expense of heat loss to the metal confinement. This heat loss can be measured to improve the correlation between large and small-scale measurements, unless the released energy becomes too low to sustain complete reaction of the sample, either with itself or with air in the chamber.
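
    Expressing sample mass as energy per chamber volume collapses the data because, in the ideal-gas limit, the quasi-static pressure rise in a closed chamber is ΔP ≈ (γ − 1)E/V, which depends only on that ratio and not on the absolute scale. The sketch below assumes a hypothetical energy release of 6 kJ/g and a nominal γ; neither value is taken from the tests described above.

      def quasi_static_pressure_kpa(energy_kj, chamber_volume_m3, gamma=1.35):
          """Ideal-gas quasi-static pressure rise when energy E is released in volume V."""
          return (gamma - 1.0) * energy_kj / chamber_volume_m3    # kJ/m3 = kPa

      # a hypothetical 0.38 g sample in the 3-litre chamber and 22.7 kg in the 180-m3 chamber
      # have nearly the same energy/volume (~0.76 MJ/m3), hence nearly the same pressure rise
      print(quasi_static_pressure_kpa(0.38 * 6.0, 0.003))     # ~266 kPa
      print(quasi_static_pressure_kpa(22.7e3 * 6.0, 180.0))   # ~265 kPa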

  5. Planar wire array performance scaling at multi-MA levels on the Saturn generator.

    SciTech Connect

    Chuvatin, Alexander S.; Jones, Michael; Vesey, Roger Alan; Waisman, Eduardo M.; Esaulov, Andrey A.; Ampleford, David J.; Kantsyrev, Victor Leonidovich; Cuneo, Michael Edward; Rudakov, L. I.; Coverdale, Christine Anne; Jones, Brent Manley; Safronova, Alla S.

    2007-10-01

    A series of twelve shots were performed on the Saturn generator in order to conduct an initial evaluation of the planar wire array z-pinch concept at multi-MA current levels. Planar wire arrays, in which all wires lie in a single plane, could offer advantages over standard cylindrical wire arrays for driving hohlraums for inertial confinement fusion studies as the surface area of the electrodes in the load region (which serve as hohlraum walls) may be substantially reduced. In these experiments, mass and array width scans were performed using tungsten wires. A maximum total radiated x-ray power of 10 {+-} 2 TW was observed with 20 mm wide arrays imploding in {approx}100 ns at a load current of {approx}3 MA, limited by the high inductance. Decreased power in the 4-6 TW range was observed at the smallest width studied (8 mm). 10 kJ of Al K-shell x-rays were obtained in one Al planar array fielded. This report will discuss the zero-dimensional calculations used to design the loads, the results of the experiments, and potential future research to determine if planar wire arrays will continue to scale favorably at current levels typical of the Z machine. Implosion dynamics will be discussed, including x-ray self-emission imaging used to infer the velocity of the implosion front and the potential role of trailing mass. Resistive heating has been previously cited as the cause for enhanced yields observed in excess of jxB-coupled energy. The analysis presented in this report suggests that jxB-coupled energy may explain as much as the energy in the first x-ray pulse but not the total yield, which is similar to our present understanding of cylindrical wire array behavior.

  6. Performance of subgrid-scale models in coarse large eddy simulations of a laminar separation bubble

    NASA Astrophysics Data System (ADS)

    Cadieux, Francois; Domaradzki, Julian A.

    2015-04-01

    The flow over many blades and airfoils at moderate angles of attack and Reynolds numbers ranging from 10^4 to 10^5 undergoes separation due to the adverse pressure gradient generated by surface curvature. In many cases, the separated shear layer then transitions to turbulence and reattaches, closing off a recirculation region—the laminar separation bubble. An equivalent problem is formulated by imposing suitable boundary conditions for flow over a flat plate to avoid numerical and mesh generation issues. Recent work demonstrated that accurate large eddy simulation (LES) of such a flow is possible using only O(1%) of the direct numerical simulation (DNS) resolution but the performance of different subgrid-scale models could not be properly assessed because of the effects of unquantified numerical dissipation. LES of a laminar separation bubble flow over a flat plate is performed using a pseudo-spectral Navier-Stokes solver at resolutions corresponding to 3% and 1% of the chosen DNS benchmark by Spalart and Strelets (2000). The negligible numerical dissipation of the pseudo-spectral code allows an unambiguous assessment of the performance of subgrid-scale models. Three explicit subgrid-scale models—dynamic Smagorinsky, σ, and truncated Navier-Stokes (TNS)—are compared to a no-model simulation (under-resolved DNS) and evaluated against benchmark DNS data focusing on two quantities of critical importance to airfoil and blade designers: time-averaged pressure (Cp) and skin friction (Cf) predictions used in lift and drag calculations. Results obtained with explicit subgrid-scale models confirm that accurate LES of laminar separation bubble flows is attainable with as low as 1% of DNS resolution, and the poor performance of the no-model simulation underscores the necessity of subgrid-scale modeling in coarse LES with low numerical dissipation.
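
    For reference, the simplest member of the model family compared above is the static Smagorinsky closure, ν_t = (C_sΔ)²|S|; the dynamic variant used in the study computes the coefficient locally instead of fixing it. The sketch below evaluates the static form on hypothetical 2-D velocity gradients, not on the benchmark flow itself.

      import numpy as np

      def smagorinsky_eddy_viscosity(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
          """Static Smagorinsky subgrid viscosity nu_t = (Cs*Delta)^2 * |S| for a 2-D field."""
          s11 = dudx
          s22 = dvdy
          s12 = 0.5 * (dudy + dvdx)
          s_mag = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))   # |S| = sqrt(2 Sij Sij)
          return (cs * delta) ** 2 * s_mag

      # hypothetical velocity-gradient samples on a grid with spacing delta = 0.01
      grads = np.random.default_rng(0).normal(size=(4, 8, 8))
      print(smagorinsky_eddy_viscosity(*grads, delta=0.01).mean())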

  7. A Validation Study of the Adolescent Dissociative Experiences Scale

    ERIC Educational Resources Information Center

    Keck Seeley, Susan. M.; Perosa, Sandra, L.; Perosa, Linda, M.

    2004-01-01

    Objective: The purpose of this study was to further the validation process of the Adolescent Dissociative Experiences Scale (A-DES). In this study, a 6-item Likert response format with descriptors was used when responding to the A-DES rather than the 11-item response format used in the original A-DES. Method: The internal reliability and construct…

  8. A Small Scale Experimental Study: Using Animations to Learn Vocabulary

    ERIC Educational Resources Information Center

    Kayaoglu, M. Naci; Dag Akbas, Raside; Ozturk, Zeynep

    2011-01-01

    This study attempts to investigate whether a difference exists between learning vocabulary via animation and via traditional paper-based method. This small scale study was conducted at Karadeniz Technical University in academic year 2009-2010. Two pre-intermediate classes were randomly selected as the experimental group (n = 17), and control group…

  9. A Study of Developing Attitude Scale for Piano Teaching

    ERIC Educational Resources Information Center

    Yazici, Tarkan

    2016-01-01

    In this study, the development phases of a measurement instrument, which can be used for measuring the piano teaching attitudes of piano teachers, are investigated. Validity and reliability studies of the scale, which was developed in Turkey's circumstances, were carried out with 196 piano teachers giving piano lectures in different districts of…

  10. A numerical study of scaling issues for trench power MOSFETs

    NASA Astrophysics Data System (ADS)

    Roig, J.; Cortés, I.; Jiménez, D.; Flores, D.; Iñiguez, B.; Hidalgo, S.; Rebollo, J.

    2005-06-01

    The effect of scaling down on the electrical performance of trench power MOSFET structures is investigated in this work by means of numerical simulation tools. Layout dimensions of trench power MOSFETs have been continuously reduced in order to decrease the specific on-resistance while maintaining equal vertical dimensions. The most recent scaling efforts place the trench width and the distance between two consecutive trenches in the submicron range. The resulting short distance between gates is expected to induce significant modifications in the device electrical performance, since full depletion of the body region becomes feasible. Hence, the influence of the fully depleted body on the on-state resistance, threshold voltage, breakdown voltage, parasitic bipolar transistor and internal capacitances is a feature of particular interest. Furthermore, device reliability aspects, such as hot-carrier and self-heating effects, are evaluated by numerical simulation in trench power MOSFETs for the first time.

  11. Use of Boundary Layer Transition Detection to Validate Full-Scale Flight Performance Predictions

    NASA Technical Reports Server (NTRS)

    Hamner, Marvine; Owens, L. R., Jr.; Wahls, R. A.; Yeh, David

    1999-01-01

    Full-scale flight performance predictions can be made using CFD or a combination of CFD and analytical skin-friction predictions. However, no matter what method is used to obtain full-scale flight performance predictions, knowledge of the boundary layer state is critical. The implementation of CFD codes solving the Navier-Stokes equations to obtain these predictions is still a time consuming, expensive process. In addition, to ultimately obtain accurate performance predictions the transition location must be fixed in the CFD model. An example, using the M2.4-7A geometry, of the change in Navier-Stokes solution with changes in transition and in turbulence model will be shown. Oil flow visualization using the M2.4-7A 4.0% scale model in the 14'x22' wind tunnel shows that fixing transition at 10% x/c in the CFD model best captures the flow physics of the wing flow field. A less costly method of obtaining full-scale performance predictions is the use of non-linear Euler codes or linear CFD codes, such as panel methods, combined with analytical skin-friction predictions. Again, knowledge of the boundary layer state is critical to the accurate determination of full-scale flight performance. Boundary layer transition detection has been performed at 0.3 and 0.9 Mach numbers over an extensive Reynolds number range using the 2.2% scale Reference H model in the NTF. A temperature sensitive paint system was used to determine the boundary layer state for these conditions. Data were obtained for three configurations: the baseline, undeflected flaps configuration; the transonic cruise configuration; and the high-lift configuration. It was determined that at low Reynolds number conditions, in the 8 to 10 million Reynolds number range, the baseline configuration has extensive regions of laminar flow, in fact significantly more than analytical skin-friction methods predict. This configuration is fully turbulent at about 30 million Reynolds number for both 0.3 and 0.9 Mach numbers

  12. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    SciTech Connect

    Wang, Feiyi; Oral, H Sarp; Vazhkudai, Sudharshan S

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make it a great challenge to realize user-level, end-to-end performance gains. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher resolution simulation runs.
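
    The abstract does not spell out the placement algorithm, but the general idea of topology-aware, balanced placement can be illustrated generically: among the storage targets closest to a client in the interconnect topology, pick the one with the least outstanding load. The sketch below is an illustration of that concept only, not the algorithm deployed on Titan; all target and group names are hypothetical.

      from collections import defaultdict

      def place_stripe(client_group, targets, load, groups):
          """Pick a storage target for one file stripe: prefer targets in the client's
          topology group, then break ties by current load (generic illustration only)."""
          local = [t for t in targets if groups[t] == client_group]
          candidates = local if local else targets
          chosen = min(candidates, key=lambda t: load[t])
          load[chosen] += 1
          return chosen

      targets = ["ost0", "ost1", "ost2", "ost3"]
      groups = {"ost0": 0, "ost1": 0, "ost2": 1, "ost3": 1}
      load = defaultdict(int)
      for _ in range(8):                          # eight stripes from a client in group 1
          print(place_stripe(1, targets, load, groups))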

  13. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits, and different suit configurations (eg, varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness for duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  14. Building Enclosure Hygrothermal Performance Study, Phase 1

    SciTech Connect

    Karagiozis, A.N.

    2002-08-08

    The moisture performance of three different classes of wall systems has been investigated in the context of the preliminary hygrothermal analysis of walls in Seattle. The results reported in this phase specifically address the moisture performance of walls designed with loads that have some unintentional water penetration. The results have been developed in a manner to present the relative performance of the walls in the same climate with similar water penetration effects. The analysis was performed with the best available input data. Several limitations should be recognized within the context of this study. Results showed that selection of wooden sheathing boards on interior vapor-tight assemblies does not significantly influence the performance of stucco-clad walls. A larger effect was observed when the interior vapor control is made vapor open. When continuous cavity ventilation is employed, the effect of the selection of the type of sheathing board on the hygrothermal performance of the wall was found to be negligible. When comparing oriented strand board sheathing performance against the performance of exterior grade gypsum, the differences are very significant in terms of the amount of moisture content present in the walls. Moisture content alone does not indicate their respective durability as durability is directly related to the combination of relative humidity and temperature, mechanical, chemical, and biological properties of the substrates. This study did not investigate the durability performance of either sheathing. In terms of interior vapor control, inhabitant behavior must be considered during the wall hygrothermal design stage. If interior relative humidity is maintained below 60%, then a latex primer and paint may perform better than the use of PVA or even a polyethylene sheet. When the interior environment is maintained at a higher relative humidity, then stricter vapor control is needed. Multilayered building paper was experimentally shown to

  15. Allometric scaling relationships of jumping performance in the striped marsh frog Limnodynastes peronii.

    PubMed

    Wilson, R S; Franklin, C E; James, R S

    2000-06-01

    We constructed a force platform to investigate the scaling relationships of the detailed dynamics of jumping performance in striped marsh frogs (Limnodynastes peronii). Data were used to test between two alternative models that describe the scaling of anuran jumping performance; Hill's model, which predicts mass-independence of jump distance, and Marsh's model, which predicts that jump distance increases as M^0.2, where M is body mass. From the force platform, scaling relationships were calculated for maximum jumping force (Fmax), acceleration, take-off velocity (Umax), mass-specific jumping power (Pmax), total jumping distance (DJ) and total contact time for 75 L. peronii weighing between 2.9 and 38.4 g. Fmax was positively correlated with body mass and was described by the equation Fmax = 0.16M^0.61, while Pmax decreased significantly with body mass and was described by the equation Pmax = 347M^-0.46. Both DJ and Umax were mass-independent over the post-metamorph size range, and thus more closely resembled Hill's model for the scaling of locomotion. We also examined the scaling relationships of jumping performance in metamorph L. peronii by recording the maximum jump distance of 39 animals weighing between 0.19 and 0.58 g. In contrast to the post-metamorphic L. peronii, DJ and Umax were highly dependent on body mass in metamorphs and were described by the equations DJ = 38M^0.53 and Umax = 1.82M^0.23, respectively. Neither model for the scaling of anuran jumping performance resembled data from metamorph L. peronii. Although the hindlimbs of post-metamorphic L. peronii scaled geometrically (body mass exponent approximately 0.33), the hindlimbs of metamorphs showed greater proportional increases with body mass (mass exponents of 0.41-0.42). PMID:10821750
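
    Scaling exponents of the form above are conventionally obtained by linear regression in log-log space; the sketch below fits synthetic data generated around the reported Fmax relationship (not the study's measurements) and recovers the exponent.

      import numpy as np

      def allometric_fit(mass_g, trait):
          """Fit trait = a * M^b by least squares in log-log space; returns (a, b)."""
          b, log_a = np.polyfit(np.log10(mass_g), np.log10(trait), 1)
          return 10.0 ** log_a, b

      # synthetic frogs spanning the post-metamorph size range, with a little noise
      mass = np.array([3.0, 6.0, 12.0, 24.0, 38.0])
      noise = 1.0 + 0.02 * np.random.default_rng(1).normal(size=mass.size)
      f_max = 0.16 * mass ** 0.61 * noise
      print(allometric_fit(mass, f_max))    # approximately (0.16, 0.61)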

  16. Magnetosheath Turbulence at MHD Scales: A Statistical Study

    NASA Astrophysics Data System (ADS)

    Huang, Shiyong; Sahraoui, Fouad; Hadid, Lina; Yuan, Zhigang

    2015-04-01

    Turbulence is ubiquitous in space plasmas, such as the terrestrial magnetotail and magnetosheath, the solar wind, or the interstellar medium. In the solar wind, it is well established that at MHD scales the magnetic energy spectra generally follow the so-called Kolmogorov spectrum f^(-5/3). In the magnetosheath, Alexandrova et al. [2006] observed a Kolmogorov-like inertial range in the frequency range f < fci. In this study, we used three years of data from the Cluster mission to statistically investigate the existence of the Kolmogorov inertial range in the whole magnetosheath, including the flanks and the subsolar region. Statistical results show that most spectra are shallower than the Kolmogorov one and have a scaling ~ f^(-1), recalling the energy-containing scales of solar wind turbulence. These spectra were found to be populated by uncorrelated fluctuations. The Kolmogorov scaling is observed only away from the bow shock and in the flank regions. These results suggest that random-like fluctuations are generated behind the shock, and that they reach a fully developed turbulence state only after some time corresponding to their propagation (or advection) away from the shock. At kinetic scales, no dependence of the turbulence scaling on the location in the magnetosheath was found.
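
    A minimal Python sketch (synthetic signal, SciPy assumed available) of the kind of spectral-slope estimate such studies rely on: compute a Welch power spectral density and fit its log-log slope over an MHD-range band, to be compared against -5/3 (Kolmogorov) or -1 (energy-containing range).

      import numpy as np
      from scipy import signal

      rng = np.random.default_rng(0)
      fs = 22.0                                    # sampling frequency in Hz (illustrative)
      b = np.cumsum(rng.standard_normal(2 ** 15))  # random-walk proxy for a turbulent time series

      f, psd = signal.welch(b, fs=fs, nperseg=4096)
      band = (f > 0.01) & (f < 1.0)                # fit band chosen below a nominal ion cyclotron frequency
      slope, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
      print(f"estimated spectral slope: {slope:.2f}")  # ~ -2 for this random walk; -5/3 for Kolmogorov turbulence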

  17. Flutter performance of bend-twist coupled large-scale wind turbine blades

    NASA Astrophysics Data System (ADS)

    Hayat, Khazar; de Lecea, Alvaro Gorostidi Martinez; Moriones, Carlos Donazar; Ha, Sung Kyu

    2016-05-01

    The bend-twist coupling (BTC) is proven to be effective in mitigating the fatigue loads of large-scale wind turbine blades, but at the same time it may introduce a risk of flutter instability. The BTC is defined as a feature of twisting of the blade induced by the primary bending deformation. In classical flutter, the coupling arises from the aerodynamic loads changing with the angle of attack. In this study, the effects of the structural BTC on flutter are investigated by considering layup unbalances (ply angle, material and thickness of the composite laminates) in the NREL 5-MW wind turbine rotor blade of glass fiber/epoxy [02/+45/-45]S laminates. It is numerically shown that the flutter speed may decrease by about 5 percent with an unbalanced ply angle only (one side angle, from 45° to 25°). It is then demonstrated that the flutter performance of the wind turbine blade can be increased by using lighter and stiffer carbon fibers, which ensure a higher structural BTC at the same time.

  18. Thermal Performance Evaluation of Attic Radiant Barrier Systems Using the Large Scale Climate Simulator (LSCS)

    SciTech Connect

    Shrestha, Som S; Miller, William A; Desjarlais, Andre Omer

    2013-01-01

    Application of radiant barriers and low-emittance surface coatings in residential building attics can significantly reduce conditioning loads from heat flow through attic floors. The roofing industry has been developing and using various radiant barrier systems and low-emittance surface coatings to increase energy efficiency in buildings; however, minimal data are available that quantify the effectiveness of these technologies. This study evaluates the performance of various attic radiant barrier systems under simulated summer daytime conditions and nighttime or low-solar-gain daytime winter conditions using the large scale climate simulator (LSCS). The four attic configurations evaluated are 1) no radiant barrier (control), 2) perforated low-e foil laminated to the oriented strand board (OSB) deck, 3) low-e foil stapled on rafters, and 4) liquid-applied low-emittance coating on the roof deck and rafters. All test attics used nominal R-US 13 h·ft²·°F/Btu (R-SI 2.29 m²·K/W) fiberglass batt insulation on the attic floor. Results indicate that the three systems with radiant barriers had heat flows through the attic floor during the summer daytime condition that were 33%, 50%, and 19% lower than the control, respectively.

  19. Effect of seasonal changes in quantities of biowaste on full scale anaerobic digester performance

    SciTech Connect

    Illmer, P.; Gstraunthaler, G.

    2009-01-15

    A 750,000 l digester located in Roppen/Austria was studied over a 2-year period. The concentrations and amounts of CH₄, H₂, CO₂ and H₂S and several other process parameters like temperature, retention time, dry weight and input of substrate were registered continuously. On a weekly scale the pH and the concentrations of NH₄⁺-N and volatile fatty acids (acetic, butyric, iso-butyric, propionic, valeric and iso-valeric acid) were measured. The data show a similar pattern of seasonal gas production over 2 years of monitoring. The consumption of VFA and not the hydrogenotrophic CH₄ production appeared to be the limiting factor for the investigated digestion process. Whereas the changes in pH and the concentrations of most VFA did not correspond with changes in biogas production, the ratio of acetic to propionic acid and the concentration of H₂ appeared to be useful indicators for reactor performance. However, the most influential factors for the anaerobic digestion process were the amount and the quality of input material, which distinctly changed throughout the year.

  20. Real-scale miscible grout injection experiment and performance of advection-dispersion-filtration model

    NASA Astrophysics Data System (ADS)

    Bouchelaghem, F.; Vulliet, L.; Leroy, D.; Laloui, L.; Descoeudres, F.

    2001-10-01

    A model was developed to describe miscible grout propagation in a saturated deformable porous medium, based on Bear's statistical model with spatial volume averaging. In a previous paper, the model was first successfully compared against one-dimensional laboratory experiments. In the present paper, the numerical model is used to simulate a practical grouting operation in a cylindrical injection model. The cylindrical injection model lends itself to studying the main flow and propagation characteristics of a dispersed suspension-type grout, under axisymmetric conditions close to real-scale conditions. Comparison between numerical solutions and experimental results is essential to confirm the validity and accuracy of the proposed model from a phenomenological standpoint. The numerical model's performance shows that the underlying mathematical model constitutes a realistic predictive model reproducing the most prominent features during injection of a suspension-type grout into a deformable porous medium. The basic mechanism by which injected miscible grout permeates a soil mass is discussed in detail. Such a tool leads to quality control criteria for grouting on a theoretical basis, which complements existing criteria acquired through engineering practice.

  1. Desensitizing Agent Reduces Dentin Hypersensitivity During Ultrasonic Scaling: A Pilot Study

    PubMed Central

    Suda, Tomonari; Akiyama, Toshiharu; Takano, Takuya; Gokyu, Misa; Sudo, Takeaki; Khemwong, Thatawee; Izumi, Yuichi

    2015-01-01

    Background Dentin hypersensitivity can interfere with optimal periodontal care by dentists and patients. The pain associated with dentin hypersensitivity during ultrasonic scaling is intolerable for the patient and interferes with the procedure, particularly during supportive periodontal therapy (SPT) for patients with gingival recession. Aim This study aimed to evaluate the desensitizing effect of an oxalic acid agent on pain caused by dentin hypersensitivity during ultrasonic scaling. Materials and Methods This study involved 12 patients who were enrolled in an SPT program and complained of dentin hypersensitivity during ultrasonic scaling. We evaluated the oxalic acid agent by comparing the degree of pain during ultrasonic scaling with and without the application of the desensitizing agent. The effect on dentin hypersensitivity was determined by a questionnaire and visual analog scale (VAS) pain scores after ultrasonic scaling. The statistical analysis was performed using the paired Student t-test and Spearman rank correlation coefficient. Results The desensitizing agent reduced the mean VAS pain score from 69.33 ± 16.02 at baseline to 26.08 ± 27.99 after application. The questionnaire revealed that >80% of patients were satisfied and requested the application of the desensitizing agent for future ultrasonic scaling sessions. Conclusion This study shows that the application of the oxalic acid agent considerably reduces pain associated with dentin hypersensitivity experienced during ultrasonic scaling. This pain control treatment may improve patient participation and treatment efficiency. PMID:26501012
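
    A minimal Python sketch of the paired comparison described above, using hypothetical VAS scores (not the study's records) and SciPy's paired t-test.

      from scipy import stats

      # Hypothetical VAS pain scores (0-100) for the same 12 patients
      vas_without_agent = [72, 85, 60, 55, 90, 68, 74, 63, 80, 58, 77, 50]
      vas_with_agent    = [30, 45, 10, 20, 55, 25, 35, 15, 40,  5, 38, 12]

      t_stat, p_value = stats.ttest_rel(vas_without_agent, vas_with_agent)
      print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")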

  2. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    SciTech Connect

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  3. A scale-down mimic for mapping the process performance of centrifugation, depth and sterile filtration.

    PubMed

    Joseph, Adrian; Kenty, Brian; Mollet, Michael; Hwang, Kenneth; Rose, Steven; Goldrick, Stephen; Bender, Jean; Farid, Suzanne S; Titchener-Hooker, Nigel

    2016-09-01

    In the production of biopharmaceuticals disk-stack centrifugation is widely used as a harvest step for the removal of cells and cellular debris. Depth filters followed by sterile filters are often then employed to remove residual solids remaining in the centrate. Process development of centrifugation is usually conducted at pilot-scale so as to mimic the commercial scale equipment but this method requires large quantities of cell culture and significant levels of effort for successful characterization. A scale-down approach based upon the use of a shear device and a bench-top centrifuge has been extended in this work towards a preparative methodology that successfully predicts the performance of the continuous centrifuge and polishing filters. The use of this methodology allows the effects of cell culture conditions and large-scale centrifugal process parameters on subsequent filtration performance to be assessed at an early stage of process development where material availability is limited. Biotechnol. Bioeng. 2016;113: 1934-1941. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:26927621

  4. Propulsion engineering study for small-scale Mars missions

    SciTech Connect

    Whitehead, J.

    1995-09-12

    Rocket propulsion options for small-scale Mars missions are presented and compared, particularly for the terminal landing maneuver and for sample return. Mars landing has a low propulsive Δv requirement on a ~1-minute time scale, but at a high acceleration. High thrust/weight liquid rocket technologies, or advanced pulse-capable solids, developed during the past decade for missile defense, are therefore more appropriate for small Mars landers than are conventional space propulsion technologies. The advanced liquid systems are characterized by compact lightweight thrusters having high chamber pressures and short lifetimes. Blowdown or regulated pressure-fed operation can satisfy the Mars landing requirement, but hardware mass can be reduced by using pumps. Aggressive terminal landing propulsion designs can enable post-landing hop maneuvers for some surface mobility. The Mars sample return mission requires a small high-performance launcher having either solid motors or miniature pump-fed engines. Terminal propulsion for 100 kg Mars landers is within the realm of flight-proven thruster designs, but custom tankage is desirable. Landers on a 10 kg scale are also feasible, using technology that has been demonstrated but not previously flown in space. The number of sources and the selection of components are extremely limited on this smallest scale, so some customized hardware is required. A key characteristic of kilogram-scale propulsion is that gas jets are much lighter than liquid thrusters for reaction control. The mass and volume of tanks for inert gas can be eliminated by systems which generate gas as needed from a liquid or a solid, but these have virtually no space flight history. Mars return propulsion is a major engineering challenge; earth launch is the only previously-solved propulsion problem requiring similar or greater performance.
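
    A minimal sketch of the sizing relation behind such propulsion trade-offs: the ideal rocket equation gives the propellant fraction required for a terminal-landing burn of a given delta-v. The numbers are illustrative assumptions, not values from the study.

      import math

      def propellant_fraction(delta_v, isp, g0=9.80665):
          """Fraction of initial mass burned as propellant for an ideal burn."""
          return 1.0 - math.exp(-delta_v / (isp * g0))

      dv = 700.0    # m/s, assumed terminal-landing delta-v
      isp = 290.0   # s, assumed specific impulse for a storable bipropellant thruster
      print(f"propellant fraction ~ {propellant_fraction(dv, isp):.1%}")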

  5. Has the Performance of Regional-Scale Photochemical Modelling Systems Changed over the Past Decade?

    EPA Science Inventory

    This study analyzed summertime ozone concentrations that have been simulated by various regional-scale photochemical modelling systems over the Eastern U.S. as part of more than ten independent studies. Results indicate that there has been a reduction of root mean square errors ...

  6. Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance

    SciTech Connect

    Gu, Wenbin

    2015-02-05

    This report documents the work performed by General Motors (GM) under the Cooperative agreement No. DE-EE0000470, “Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance,” in collaboration with the Penn State University (PSU), University of Tennessee Knoxville (UTK), Rochester Institute of Technology (RIT), and University of Rochester (UR) via subcontracts. The overall objectives of the project are to investigate and synthesize fundamental understanding of transport phenomena at both the macro- and micro-scales for the development of a down-the-channel model that accounts for all transport domains in a broad operating space. GM as a prime contractor focused on cell level experiments and modeling, and the Universities as subcontractors worked toward fundamental understanding of each component and associated interface.

  7. Skin and scales of teleost fish: Simple structure but high performance and multiple functions

    NASA Astrophysics Data System (ADS)

    Vernerey, Franck J.; Barthelat, Francois

    2014-08-01

    Natural and man-made structural materials perform similar functions such as structural support or protection. Therefore they rely on the same types of properties: strength, robustness, lightweight. Nature can therefore provide a significant source of inspiration for new and alternative engineering designs. We report here some results regarding a very common, yet largely unknown, type of biological material: fish skin. Within a thin, flexible and lightweight layer, fish skins display a variety of strain stiffening and stabilizing mechanisms which promote multiple functions such as protection, robustness and swimming efficiency. We particularly discuss four important features pertaining to scaled skins: (a) a strongly elastic tensile behavior that is independent from the presence of rigid scales, (b) a compressive response that prevents buckling and wrinkling instabilities, which are usually predominant for thin membranes, (c) a bending response that displays nonlinear stiffening mechanisms arising from geometric constraints between neighboring scales and (d) a robust structure that preserves the above characteristics upon the loss or damage of structural elements. These important properties make fish skin an attractive model for the development of very thin and flexible armors and protective layers, especially when combined with the high penetration resistance of individual scales. Scaled structures inspired by fish skin could find applications in ultra-light and flexible armor systems, flexible electronics or the design of smart and adaptive morphing structures for aerospace vehicles.

  8. Field-aligned Currents' Scale Analysis Performed by the Swarm Constellation

    NASA Astrophysics Data System (ADS)

    Luhr, H.; Park, J.; Gjerloev, J. W.; Rauberg, J.; Michaelis, I.; Le, G.; Merayo, J. M. G.; Brauer, P.

    2014-12-01

    We present a statistical study of the temporal and spatial scale characteristics of different field-aligned current (FAC) types. Very suitable for this purpose is the closely spaced Swarm satellite formation, which existed shortly after launch during the commissioning phase. As dataset we use the standard Level 2 product, Single Satellite FAC, which comes at a data rate of 1 Hz, corresponding to an along-track distance of 7.5 km. FACs are known to cover a wide range of scales from 1 km to several hundred kilometres; the smaller the scale, the larger the amplitude. We divide the FACs into two classes: those of intermediate scale, some tens of kilometres, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs, which are assumed to be stationary current structures on the timescale of a satellite crossing. For distinguishing between the two we first examine how the temporal variability changes with scale. For that we consider subsequent measurements at the same point, the orbital cross-over near the geographic poles, and interpret the temporal current changes. Here we focus on observations in the southern hemisphere at locations where the geographic pole lies within the auroral region. In a next step the latitudinal and longitudinal scales of the larger-scale FAC structures are investigated. FACs related to Alfvén waves cannot be studied in this way because we have no simultaneous measurements at the same latitude and longitude. The results from this analysis are different for the dayside and the nightside. Implications for the FAC characteristics resulting from these observations are interpreted in the end.

  9. Performing three-dimensional neutral particle transport calculations on tera scale computers

    SciTech Connect

    Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R

    1999-01-12

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines multiple parallel programming paradigms, including MPI message passing. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP 'ASCI Blue-Pacific' computer located at Lawrence Livermore National Laboratory (LLNL).

  10. Scale effects on propeller cavitating hydrodynamic and hydroacoustic performances with non-uniform inflow

    NASA Astrophysics Data System (ADS)

    Yang, Qiongfang; Wang, Yongsheng; Zhang, Zhihong

    2013-03-01

    Considering the current lack of theoretical models and ingredients necessary to explain the scaling of propeller cavitation inception and cavitating hydroacoustics results from model tests to full scale, and the insufficient reflection of nuclei effects on cavitation in numerical methods, the cavitating hydrodynamics and cavitation low-frequency noise spectra of three geometrically similar 7-bladed highly skewed propellers with non-uniform inflow are addressed. In this process, a numerical bridge from the multiphase viscous simulation of propeller cavitation hydrodynamics to its hydroacoustics is built, and the scale effects on performance and the applicability of existing scaling laws are analyzed. The effects of non-condensable gas (NCG) on cavitation inception are included explicitly in the improved Sauer cavitation model, and the cavity volume acceleration related to its characteristic length is used to produce the noise spectrum. Results show that, at the same cavitation number, the cavity extension on the propeller blades increases with diameter, associated with an earlier shift of the beginning point of thrust decline induced by cavitation, while the three decline slopes of the thrust breakdown curves are found to be nearly the same. The power of the scaling law based on the local Reynolds number around the 0.9R section is determined as 0.11. For the smallest propeller, the predominant tonal noise is located at the blade passing frequency (BPF), whereas it is at 2BPF for the middle propeller and at both 2BPF and 3BPF for the largest, which shows the cavitating line spectrum is fully related to the interaction between non-uniform inflow and the fluctuating cavity volume. The predicted spectrum level exceedance from the middle to the large propeller is 6.65 dB at BPF and 5.94 dB at 2BPF. Since this differs by less than 2 dB from the increment obtained by the empirical scaling law, it is inferred that the scale effects on them are acceptable with a sufficient model scale, and so do the

  11. Statistical analysis of microgravity experiment performance using the degrees of success scale

    NASA Technical Reports Server (NTRS)

    Upshaw, Bernadette; Liou, Ying-Hsin Andrew; Morilak, Daniel P.

    1994-01-01

    This paper describes an approach to identify factors that significantly influence microgravity experiment performance. Investigators developed the 'degrees of success' scale to provide a numerical representation of success. A degree of success was assigned to 293 microgravity experiments. Experiment information including the degree of success rankings and factors for analysis was compiled into a database. Through an analysis of variance, nine significant factors in microgravity experiment performance were identified. The frequencies of these factors are presented along with the average degree of success at each level. A preliminary discussion of the relationship between the significant factors and the degree of success is presented.
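
    A minimal Python sketch of the kind of one-way analysis of variance used to test whether a candidate factor affects the degree-of-success rating. The group labels and ratings below are hypothetical, not data from the 293 experiments.

      from scipy import stats

      # Hypothetical degree-of-success ratings grouped by one candidate factor level
      crewed_ops     = [4, 5, 3, 4, 5, 4]
      automated_ops  = [3, 2, 4, 3, 3, 2]
      ground_control = [5, 4, 4, 5, 3, 4]

      f_stat, p_value = stats.f_oneway(crewed_ops, automated_ops, ground_control)
      print(f"F = {f_stat:.2f}, p = {p_value:.3f}")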

  12. Vortex shedding and aerodynamic performance of an airfoil with multi-scale trailing edge modifications

    NASA Astrophysics Data System (ADS)

    Nedic, Jovan; Vassilicos, J. Christos

    2014-11-01

    An experimental investigation was conducted into the aerodynamic performance and nature of the vortex shedding generated by truncated and non-flat serrated trailing edges of a NACA 0012 wing section. The truncated trailing edge generates a significant amount of vortex shedding, whilst increasing both the maximum lift and drag coefficients, resulting in an overall reduction in the maximum lift-to-drag ratio (L/D) compared to a plain NACA 0012 wing section. By decreasing the chevron angle (ϕ) of the non-flat trailing edge serrations (i.e. by making them sharper), the energy of the vortex shedding significantly decreases and L/D increases compared to a plain NACA 0012 wing section. Fractal/multi-scale patterns were also investigated with a view to further improving performance. It was found that the energy of the vortex shedding increases with increasing fractal iteration if the chevron is broad (ϕ ~65°), but decreases for sharper chevrons (ϕ = 45°). It is believed that if ϕ is too large, the multi-scale trailing edges are too far away from each other to interact and break down the vortex shedding mechanism. Fractal/multi-scale trailing edges are also able to improve aerodynamic performance compared to the NACA 0012 wing section.

  13. Performance of Thorup's Shortest Path Algorithm for Large-Scale Network Simulation

    NASA Astrophysics Data System (ADS)

    Sakumoto, Yusuke; Ohsaki, Hiroyuki; Imase, Makoto

    In this paper, we investigate the performance of Thorup's algorithm by comparing it to Dijkstra's algorithm for large-scale network simulations. One of the challenges toward the realization of large-scale network simulations is the efficient execution of finding shortest paths in a graph with N vertices and M edges. The time complexity for solving a single-source shortest path (SSSP) problem with Dijkstra's algorithm with a binary heap (DIJKSTRA-BH) is O((M+N)log N). A sophisticated algorithm called Thorup's algorithm has been proposed. The original version of Thorup's algorithm (THORUP-FR) has a time complexity of O(M+N). A simplified version of Thorup's algorithm (THORUP-KL) has a time complexity of O(Mα(N)+N), where α(N) is the functional inverse of the Ackermann function. In this paper, we compare the performance (i.e., execution time and memory consumption) of THORUP-KL and DIJKSTRA-BH, since it is known that THORUP-FR is at least ten times slower than Dijkstra's algorithm with a Fibonacci heap. We find that (1) THORUP-KL is almost always faster than DIJKSTRA-BH for large-scale network simulations, and (2) the performances of THORUP-KL and DIJKSTRA-BH deviate from their time complexities due to the presence of the memory cache in the microprocessor.
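
    A minimal Python sketch of the DIJKSTRA-BH baseline referred to above: single-source shortest paths with a binary heap, the O((M+N)log N) method against which THORUP-KL is compared. Thorup's algorithm itself is considerably more involved and is not reproduced here; the graph is a hypothetical example.

      import heapq

      def dijkstra_bh(adj, source):
          """adj: {u: [(v, w), ...]} with non-negative weights; returns shortest distances from source."""
          dist = {source: 0}
          heap = [(0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue                       # stale heap entry
              for v, w in adj.get(u, []):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist

      graph = {0: [(1, 2), (2, 5)], 1: [(2, 1), (3, 4)], 2: [(3, 1)], 3: []}
      print(dijkstra_bh(graph, 0))   # {0: 0, 1: 2, 2: 3, 3: 4}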

  14. Scales for rating motor impairment in Parkinson's disease: studies of reliability and convergent validity.

    PubMed Central

    Henderson, L; Kennard, C; Crawford, T J; Day, S; Everitt, B S; Goodrich, S; Jones, F; Park, D M

    1991-01-01

    Study 1 examined the reliability of the ratings assigned to the performance of five sign-and-symptom items drawn from tests of motor impairment in Parkinson's disease. Patients with Parkinson's disease of varying severity performed gait, rising from chair, and hand function items. Video recordings of these performances were rated by a large sample of experienced and inexperienced neurologists and by psychology undergraduates, using a four point scale. Inter-rater reliability was moderately high, being higher for gait than hand function items. Clinical experience proved to have no systematic effect on ratings or their reliability. The idiosyncrasy of particular performances was a major source of unreliable ratings. Study 2 examined the intercorrelation of several standard rating scales, comprised of sign-and-symptom items as well as activities of daily living. The correlation between scales was high, ranging from 0.70 to 0.83, despite considerable differences in item composition. Inter-item correlations showed that the internal cohesion of the tests was high, especially for the self-care scale. Regression analysis showed that the relationship between the scales could be efficiently captured by a small selection of test items, allowing the construction of a much briefer test. PMID:2010754

  15. Comparison of the Leiter International Performance Scale-Revised and the Stanford-Binet Intelligence Scales, 5th Edition, in Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Grondhuis, Sabrina Nicole; Mulick, James A.

    2013-01-01

    A review of hospital records was conducted for children evaluated for autism spectrum disorders who completed both the Leiter International Performance Scale-Revised (Leiter-R) and Stanford-Binet Intelligence Scales, 5th Edition (SB5). Participants were between 3 and 12 years of age. Diagnoses were autistic disorder (n = 26, 55%) and pervasive…

  16. Impact of continuing scaling on the device performance of 3D cylindrical junction-less charge trapping memory

    NASA Astrophysics Data System (ADS)

    Xinkai, Li; Zongliang, Huo; Lei, Jin; Dandan, Jiang; Peizhen, Hong; Qiang, Xu; Zhaoyun, Tang; Chunlong, Li; Tianchun, Ye

    2015-09-01

    This work presents a comprehensive analysis of 3D cylindrical junction-less charge trapping memory device performance with respect to continuous scaling of the structure dimensions. Key aspects of device performance, such as program/erase speed, vertical charge loss, and lateral charge migration under high temperature, are intensively studied using the Sentaurus 3D device simulator. Although scaling of the channel radius is beneficial for operation speed improvement, it leads to a retention challenge due to vertical leakage, especially enhanced charge loss through TPO. Scaling of the gate length not only decreases the program/erase speed but also leads to worse lateral charge migration. Scaling of the spacer length is critical for the interference of adjacent cells and should be carefully optimized according to specific cell operation conditions. The gate stack shape is also found to be an important factor affecting the lateral charge migration. Our results provide guidance for high density and high reliability 3D CTM integration. Project supported by the National Natural Science Foundation of China (Nos. 61474137, 61176073, 61306107).

  17. Evaluating the long-term performance of low-cost adsorbents using small-scale adsorption column experiments.

    PubMed

    Callery, O; Healy, M G; Rognard, F; Barthelemy, L; Brennan, R B

    2016-09-15

    This study investigated a novel method of predicting the long-term phosphorus removal performance of large-scale adsorption filters, using data derived from short-term, small-scale column experiments. The filter media investigated were low-cost adsorbents such as aluminum sulfate drinking water treatment residual, ferric sulfate drinking water treatment residual, and fine and coarse crushed concretes. Small-bore adsorption columns were loaded with synthetic wastewater, and treated column effluent volume was plotted against the mass of phosphorus adsorbed per unit mass of filter media. It was observed that the curve described by the data strongly resembled that of a standard adsorption isotherm created from batch adsorption data. Consequently, it was hypothesized that an equation following the form of the Freundlich isotherm would describe the relationship between filter loading and media saturation. Moreover, the relationship between filter loading and effluent concentration could also be derived from this equation. The proposed model was demonstrated to predict the performance of large-scale adsorption filters over a period of up to three months with a very high degree of accuracy. Furthermore, the coefficients necessary to produce said model could be determined from just 24 h of small-scale experimental data. PMID:27295617
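
    A minimal Python sketch of the proposed Freundlich-form fit: relate cumulative treated volume V to the mass of phosphorus adsorbed per unit mass of media q through q = K * V**n, then extrapolate. The column data below are hypothetical placeholders, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def freundlich_form(V, K, n):
          return K * V ** n

      V = np.array([2, 5, 10, 20, 40, 80, 160], dtype=float)    # cumulative treated volume, L (illustrative)
      q = np.array([0.08, 0.13, 0.18, 0.25, 0.34, 0.47, 0.64])  # mg P adsorbed per g media (illustrative)

      (K, n), _ = curve_fit(freundlich_form, V, q, p0=(0.05, 0.5))
      print(f"q ~ {K:.3f} * V^{n:.2f}")   # the fitted curve can then be extrapolated to longer operating periods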

  18. Space Power Free-Piston Stirling Engine Scaling Study

    NASA Technical Reports Server (NTRS)

    Jones, D.

    1989-01-01

    This report documents a design feasibility study of a single-cylinder, free-piston Stirling engine/linear alternator (FPSE/LA) power module generating 150 kW-electric (kW sub e), and the determination of the module's maximum feasible power level. The power module configuration was specified to be a single cylinder (single piston, single displacer) FPSE/LA, with tuning capacitors if required. The design requirements were as follows: (1) Maximum electrical power output; (2) Power module thermal efficiency equal to or greater than 20 percent at a specific mass of 5 to 8 kg/kW(sub e); (3) Heater wall temperature/cooler wall temperature = 1050 K/525 K; (4) Sodium heat-pipe heat transport system, pumped loop NaK (sodium-potassium eutectic mixture) rejection system; (5) Maximum power module vibration amplitude = 0.0038 cm; and (6) Design life = 7 years (60,000 hr). The results show that a single-cylinder FPSE/LA is capable of meeting program goals and has attractive scaling attributes over the power range from 25 to 150 kW(sub e). Scaling beyond the 150 kW(sub e) power level, the power module efficiency falls and the power module specific mass reaches 10 kg/kW(sub e) at a power output of 500 kW(sub e). A discussion of scaling rules for the engine, alternator, and heat transport systems is presented, along with a detailed description of the conceptual design of a 150 kW(sub e) power module that meets the requirements. Included is a discussion of the design of a dynamic balance system. A parametric study of power module performance conducted over the power output range of 25 to 150 kW(sub e) for temperature ratios of 1.7, 2.0, 2.5, and 3.0 is presented and discussed. The results show that as the temperature ratio decreases, the efficiency falls and specific mass increases. At a temperature ratio of 1.7, the 150 kW(sub e) power module cannot satisfy both efficiency and specific mass goals. As the power level increases from 25 to 150 kW(sub e) at a fixed temperature ratio, power

  19. A Validation Study of the Existential Anxiety Scale.

    ERIC Educational Resources Information Center

    Hullett, Michael A.

    Logotherapy is a meaning-centered psychotherapy which focuses on both the meaning of human existence and the personal search for meaning. If the will to search for meaning is frustrated, "existential frustration" may result. This study validates the Existential Anxiety Scale (EAS) developed by Good and Good (1974). Basic principles of logotherapy…

  20. Rap-Music Attitude and Perception Scale: A Validation Study

    ERIC Educational Resources Information Center

    Tyson, Edgar H.

    2006-01-01

    Objective: This study tests the validity of the Rap-music Attitude and Perception (RAP) Scale, a 1-page, 24-item measure of a person's thoughts and feelings surrounding the effects and content of rap music. The RAP was designed as a rapid assessment instrument for youth programs and practitioners using rap music and hip hop culture in their work…

  1. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  2. Book Reading Motivation Scale: Reliability and Validity Study

    ERIC Educational Resources Information Center

    Katranci, Mehmet

    2015-01-01

    Book reading enhances the intellectual world of people. It is very important to know the factors that motivate children to read books as it will help to instill book reading habit in them. As such, the present study aims to develop a "Book Reading Motivation Scale" to determine elementary and secondary school students' reading…

  3. Initial Scale Development: Sample Size for Pilot Studies

    ERIC Educational Resources Information Center

    Johanson, George A.; Brooks, Gordon P.

    2010-01-01

    Pilot studies are often recommended by scholars and consultants to address a variety of issues, including preliminary scale or instrument development. Specific concerns such as item difficulty, item discrimination, internal consistency, response rates, and parameter estimation in general are all relevant. Unfortunately, there is little discussion…

  4. Scaling a Single Attribute: A Methodological Study of Conservation

    ERIC Educational Resources Information Center

    Hofmann, Richard J.; Trepanier, Mary

    1975-01-01

    This study was designed to assess the acquisition of conservation of number on equal addition tasks through scalogram analysis to determine if this analysis defines a scale or continuum. Ten block tasks administered to 85 kindergarten children validated Piaget's theory that cognitive development is sequential and continuous. (Author/ED)

  5. SMALL-SCALE AND LOW-TECHNOLOGY RESOURCE RECOVERY STUDY

    EPA Science Inventory

    A study was conducted to assess the applicability of various approaches to resource recovery to selected waste generators. The resource recovery systems and technologies were limited to those operating in the small-scale range, defined as less than 100 tons per day input, or thos...

  6. Homework Purpose Scale for Middle School Students: A Validation Study

    ERIC Educational Resources Information Center

    Xu, Jianzhong

    2011-01-01

    The purpose of the present study is to test the validity of scores on the Homework Purpose Scale (HPS) for middle school students. The participants were 1,181 eighth graders in the southeastern United States, including (a) 699 students in urban school districts and (b) 482 students in rural school districts. First, confirmatory factor analysis was…

  7. VIDEO BLOGGING AND ENGLISH PRESENTATION PERFORMANCE: A PILOT STUDY.

    PubMed

    Alan Hung, Shao-Ting; Danny Huang, Heng-Tsung

    2015-10-01

    This study investigated the utility of video blogs in improving EFL students' performance in giving oral presentations and, further, examined the students' perceptions toward video blogging. Thirty-six English-major juniors participated in a semester-long video blog project for which they uploaded their 3-min. virtual presentation clips over 18 weeks. Their virtual presentation clips were rated by three raters using a scale for speaking performance that contained 14 presentation skills. Data sources included presentation clips, reflections, and interviews. The results indicated that the students' overall presentation performance improved significantly. In particular, among the 14 presentation skills, projection, intonation, posture, introduction, conclusion, and purpose saw the most substantial improvement. Finally, the qualitative data revealed that learners perceived that the video blog project facilitated learning but increased anxiety. PMID:26444840

  8. Performance Validation and Scaling of a Capillary Membrane Solid-Liquid Separation System

    SciTech Connect

    Rogers, S; Cook, J; Juratovac, J; Goodwillie, J; Burke, T

    2011-10-25

    Algaeventure Systems (AVS) has previously demonstrated an innovative technology for dewatering algae slurries that dramatically reduces energy consumption by utilizing surface physics and capillary action. Under a $6M ARPA-E award, transforming the original Harvesting, Dewatering and Drying (HDD) prototype machine into a commercially viable technology has required significant attention to material performance, integration of sensors and control systems, and especially addressing scaling issues that would allow processing extreme volumes of algal cultivation media/slurry. Decoupling the harvesting, dewatering and drying processes, and addressing the rate-limiting steps for each of the individual steps, has allowed for the development of individual technologies that may be tailored to the specific needs of various cultivation systems. The primary performance metric used by AVS to assess the economic viability of its Solid-Liquid Separation (SLS) dewatering technology is algae mass production rate as a function of power consumption (cost), cake solids/moisture content, and solids capture efficiency. An associated secondary performance metric is algae mass loading rate, which is dependent on hydraulic loading rate, area-specific hydraulic processing capacity (gpm/in²), filter:capillary belt contact area, and influent algae concentration. The system is capable of dewatering 4 g/L (0.4%) algae streams to solids concentrations up to 30% with capture efficiencies of 80+%, however mass production is highly dependent on average cell size (which determines filter mesh size and percent open area). This paper will present data detailing the scaling efforts to date. Characterization and performance data for novel membranes, as well as optimization of off-the-shelf filter materials, will be examined. Third party validation from Ohio University on performance and operating cost, as well as design modification suggestions, will be discussed. Extrapolation of current productivities

  9. D0 central tracking chamber performance studies

    SciTech Connect

    Pizzuto, D.

    1991-12-01

    The performance of the completed D0 central tracking chamber was studied using cosmic rays at the State University of New York at Stony Brook. Also studied was a prototype tracking chamber identical in design to the completed D0 tracking chamber. The prototype chamber was exposed to a collimated beam of 150 GeV pions at the Fermilab NWA test facility. Results indicate an RΦ tracking resolution compatible with the limitations imposed by physical considerations, excellent two-track resolution, and a high track reconstruction efficiency along with good rejection power against γ → e⁺e⁻ events.

  10. Performance evaluation of a full-scale coke oven wastewater treatment plant in an integrated steel plant.

    PubMed

    Kumar, M Suresh; Vaidya, A N; Shivaraman, N; Bal, A S

    2003-01-01

    Wastewater generated during coke-oven gas cleaning operations in an integrated steel plant contains phenol, cyanide, thiocyanate, and also oil and grease. Although the activated sludge process is widely practiced for biological treatment of coke-oven wastewater, it was observed during the performance evaluation of a full-scale coke-oven wastewater treatment plant that oil contamination and poor sludge settleability had resulted in poor maintenance of the activated sludge process. Keeping these aspects in view, treatability studies were conducted and an alternative treatment process is proposed. With these corrective measures the coke-oven wastewater treatment plant will give the desired performance. In this paper we present the results of the performance evaluation, data on the treatability studies, and the alternative treatment process scheme. PMID:14723281

  11. Performance study of a data flow architecture

    NASA Technical Reports Server (NTRS)

    Adams, George

    1985-01-01

    Teams of scientists studied data flow concepts, static data flow machine architecture, and the VAL language. Each team mapped its application onto the machine and coded it in VAL. The principal findings of the study were: (1) Five of the seven applications used the full power of the target machine. The galactic simulation and multigrid fluid flow teams found that a significantly smaller version of the machine (16 processing elements) would suffice. (2) A number of machine design parameters including processing element (PE) function unit numbers, array memory size and bandwidth, and routing network capability were found to be crucial for optimal machine performance. (3) The study participants readily acquired VAL programming skills. (4) Participants learned that application-based performance evaluation is a sound method of evaluating new computer architectures, even those that are not fully specified. During the course of the study, participants developed models for using computers to solve numerical problems and for evaluating new architectures. These models form the bases for future evaluation studies.

  12. SOLVENT EXTRACTION AND SOIL WASHING TREATMENT OF CONTAMINATED SOILS FROM WOOD PRESERVING SITES: BENCH SCALE STUDIES

    EPA Science Inventory

    Bench-scale solvent extraction and soil washing studies were performed on soil samples obtained from three abandoned wood preserving sites that included in the NPL. The soil samples from these sites were contaminated with high levels of polyaromatic hydrocarbons (PAHs), pentachlo...

  13. Multi-scale Computer Simulations to Study the Reaction Zone of Solid Explosives

    SciTech Connect

    Reaugh, J E

    2006-06-23

    We have performed computer simulations at several different characteristic length scales to study the coupled mechanical, thermal, and chemical behavior of explosives under shock and other pressure loadings. Our objective is to describe the underlying physics and chemistry of the hot-spot theory for solid explosives, with enough detail to make quantitative predictions of the expected result from a given pressure loading.

  14. FULL-SCALE STUDIES OF THE TRICKLING FILTER/SOLIDS CONTACT PROCESS

    EPA Science Inventory

    The trickling filter/solids contact (TF/SC) process was first successfully demonstrated in 1979 as an outgrowth of the trickling filter process. In 1984, the U.S. Environmental Protection Agency (EPA) sponsored full-scale studies of the TF/SC process to document the performance o...

  15. Soil physical properties of agricultural systems in a large-scale study

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A large-scale field study was performed to determine the effects of agricultural management systems on soil physical properties, including their spatial and temporal variations. Replicates were established in 1998 at the Center for Environmental Farming Systems, Goldsboro, North Carolina; replicates...

  16. Comparison of the WISC-R and the Leiter International Performance Scale with Average and Above-Average Students.

    ERIC Educational Resources Information Center

    Mask, Nan; Bowen, Charles E.

    1984-01-01

    Compared the Wechsler Intelligence Scale for Children (Revised) (WISC-R) and the Leiter International Performance Scale with 40 average and above average students. Results indicated a curvilinear relationship between the WISC-R and the Leiter, which correlates higher at the mean and deviates as the Full Scale varies from the mean. (JAC)

  17. A scale model study of parallel urban street canyons

    NASA Astrophysics Data System (ADS)

    Hornikx, Maarten; Forssen, Jens; Kropp, Wolfgang

    2005-04-01

    Access to quiet areas in cities is of increasing importance. Recently, the equivalent sources method for a two-dimensional situation of parallel urban street canyons has been developed. One canyon represents a busy road, whereas the other is one without traffic; the quiet side. With the model, the transfer function between the two canyons can be calculated, as well as the influence of diffusion, absorption, and atmospheric turbulence on the transfer function. A scale model study of two parallel canyons has now been executed. A scale of 1:40 was chosen and the maximum length sequence technique was applied using the MLSSA system. Results of the scale model study have been compared to calculations with the equivalent sources method. The difference between a two-dimensional and a three-dimensional quiet side, between a coherent and an incoherent line source, and the influence of absorption and diffusion have been investigated. The scale model study also gives insight into the evolution of the sound field in the time domain. [Work supported by the Swedish Foundation for Strategic Environmental Research (MISTRA).]

  18. Scaling of suction-induced flows in bluegill: morphological and kinematic predictors for the ontogeny of feeding performance.

    PubMed

    Holzman, Roi; Collar, David C; Day, Steven W; Bishop, Kristin L; Wainwright, Peter C

    2008-08-01

    During ontogeny, animals undergo changes in size and shape that result in shifts in performance, behavior and resource use. These ontogenetic changes provide an opportunity to test hypotheses about how the growth of structures affects biological functions. In the present study, we ask how ontogenetic changes in skull biomechanics affect the ability of bluegill sunfish, a high-performance suction feeder, to produce flow speeds and accelerations during suction feeding. The flow of water in front of the mouth was measured directly for fish ranging from young-of-year to large adults, using digital particle imaging velocimetry (DPIV). As bluegill size increased, the magnitude of peak flow speed they produced increased, and the effective suction distance increased because of increasing mouth size. However, throughout the size range, the timing of peak fluid speed remained unchanged, and flow was constrained to approximately one gape distance from the mouth. The observed scaling relationships between standard length and peak flow speed conformed to expectations derived from two biomechanical models, one based on morphological potential to produce suction pressure (the Suction Index model) and the other derived from a combination of morphological and kinematic variables (the Expanding Cone model). The success of these models in qualitatively predicting the observed allometry of induced flow speed reveals that the scaling of cranial morphology underlies the scaling of suction performance in bluegill. PMID:18689419

  19. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    NASA Astrophysics Data System (ADS)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  20. Students' Confidence in Their Performance Judgements: A Comparison of Different Response Scales

    ERIC Educational Resources Information Center

    Händel, Marion; Fritzsche, Eva Susanne

    2015-01-01

    We report results of two studies on metacognitive accuracy with undergraduate education students. Participating students were asked to judge their personal performance in a multiple-choice exam as well as to state their confidence in their performance judgement (second-order judgement [SOJ]). In each study, we compared four conditions that…

  1. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  2. Experimental Studies in Helicopter Vertical Climb Performance

    NASA Technical Reports Server (NTRS)

    McKillip, Robert M., Jr.

    1996-01-01

    Data and analysis from an experimental program to measure vertical climb performance on an eight-foot model rotor are presented. The rotor testing was performed using a unique moving-model facility capable of accurately simulating the flow conditions during axial flight, and was conducted from July 9, 1992 to July 16, 1992 at the Dynamic Model Track, or 'Long Track,' just prior to its demolition in August of 1992. Data collected during this brief test program included force and moment time histories from a sting-mounted strain gauge balance, support carriage velocity, and rotor rpm pulses. In addition, limited video footage (of marginal use) was recorded from smoke flow studies for both simulated vertical climb and descent trajectories. Analytical comparisons with these data include a series of progressively more detailed calculations ranging from simple momentum theory, a prescribed wake method, and a free-wake prediction.

  3. Reservoirs performances under climate variability: a case study

    NASA Astrophysics Data System (ADS)

    Longobardi, A.; Mautone, M.; de Luca, C.

    2014-09-01

    A case study, the Piano della Rocca dam (southern Italy), is discussed here in order to quantify system performance under climate variability. Different climate scenarios have been stochastically generated according to the tendencies in precipitation and air temperature observed during recent decades for the studied area. Climate variables have then been filtered through an ARMA model to generate, at the monthly scale, time series of reservoir inflow volumes. Controlled release has been computed assuming the reservoir is operated following the standard linear operating policy (SLOP), and reservoir performance has been assessed through the calculation of reliability, resilience and vulnerability indices (Hashimoto et al., 1982), comparing current and future scenarios of climate variability. The proposed approach can be suggested as a valuable tool to mitigate the effects of moderate to severe and persistent drought periods, through the allocation of new water resources or the planning of appropriate operational rules.
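
    A minimal Python sketch of the Hashimoto et al. (1982) indices computed from a monthly deficit record. The deficit series is hypothetical, and vulnerability is computed here as the mean deficit during failure months, a simplified form of the original expected-maximum-severity definition.

      import numpy as np

      def performance_indices(deficit):
          """deficit[t] >= 0 is the unmet demand at month t; 0 means a satisfactory state."""
          deficit = np.asarray(deficit, dtype=float)
          fail = deficit > 0
          reliability = 1.0 - fail.mean()                 # fraction of satisfactory months
          recoveries = np.sum(fail[:-1] & ~fail[1:])      # failure -> success transitions
          resilience = recoveries / fail.sum() if fail.any() else 1.0
          vulnerability = deficit[fail].mean() if fail.any() else 0.0
          return reliability, resilience, vulnerability

      monthly_deficit = [0, 0, 3.2, 5.1, 0, 0, 0, 1.4, 0, 0, 0, 0]   # illustrative deficit volumes
      print(performance_indices(monthly_deficit))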

  4. Hydrologic and Pollutant Removal Performance of a Full-Scale, Fully Functional Permeable Pavement Parking Lot

    EPA Science Inventory

    In accordance with the need for full-scale, replicated studies of permeable pavement systems used in their intended application (parking lot, roadway, etc.) across a range of climatic events, daily usage conditions, and maintenance regimes to evaluate these systems, the EPA’s Urb...

  5. Hydraulic Performance and Mass Transfer Efficiency of Engineering Scale Centrifugal Contactors

    SciTech Connect

    David Meikrantz; Troy Garn; Nick Mann; Jack Law; Terry Todd

    2007-09-01

    Annular centrifugal contactors (ACCs) are being evaluated for process-scale solvent extraction operations in support of Advanced Fuel Cycle Initiative (AFCI) separations goals. Process-scale annular centrifugal contactors have the potential for high stage efficiency if properly employed and optimized for the application. Hydraulic performance issues related to flow instability and classical flooding are likely unimportant, especially for units with high throughputs. However, annular mixing increases rapidly with increasing rotor diameter while maintaining a fixed g force at the rotor wall. In addition, for engineering/process-scale contactors, elevated rotor speeds and/or throughput rates, can lead to organic phase foaming at the rotor discharge collector area. Foam buildup in the upper rotor head area can aspirate additional vapor from the contactor housing resulting in a complete loss of separation equilibrium. Variable speed drives are thus desirable to optimize and balance the operating parameters to help ensure acceptable performance. Proper venting of larger contactors is required to balance pressures across individual stages and prevent vapor lock due to foam aspiration.

  6. Analytical formulas for the performance scaling of quantum processors with a large number of defective gates

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2015-10-01

    Removing a single logical gate from a classical information processor renders this processor useless. This is not so for a quantum information processor. A large number of quantum gates may be removed without significantly affecting the processor's performance. In this paper, focusing on the quantum Fourier transform (QFT) and quantum adder, we show even more: Even if most of its gates are eliminated and the remaining gates are selected from a randomly generated set, the QFT, one of the most useful quantum processors, and the quantum adder, one of the most basic building blocks of a universal quantum computer, still operate with satisfactory success probability, comparable to that of a quantum computer constructed with perfect gates. We support these conclusions by first laying out a general analytical framework and then deriving analytical scaling relations, which are in excellent agreement with our numerical simulations. The demonstrated robustness of the QFT and quantum adder, to the point where randomly generated quantum gates take the place of the exact gates, is an important boon for the construction of quantum computers, since it shows that stringent gate error tolerances do not have to be met to obtain satisfactory performance of the corresponding quantum processors. Our analytical techniques are powerful enough to generate asymptotic scaling laws for any gate defect model of quantum information processors and we illustrate this point by explicitly computing asymptotic analytical scaling formulas for several other defect models as well.
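
    As a rough illustration of the robustness claim, and not the authors' analytical framework, the following statevector sketch builds the textbook QFT circuit, removes a random fraction of its controlled-phase gates, and reports the squared overlap with the output of the complete circuit as a crude stand-in for success probability. The qubit count and input state are arbitrary choices; the final bit-reversal swaps are omitted because they cancel in the overlap.

      import numpy as np

      def hadamard(state, q, n):
          # Apply H to qubit q (qubit 0 = most significant bit) of an n-qubit state.
          out = state.copy()
          idx = np.arange(state.size)
          bit = (idx >> (n - 1 - q)) & 1
          lo = idx[bit == 0]                    # basis states with qubit q = |0>
          hi = lo | (1 << (n - 1 - q))          # their partners with qubit q = |1>
          a, b = state[lo], state[hi]
          out[lo] = (a + b) / np.sqrt(2)
          out[hi] = (a - b) / np.sqrt(2)
          return out

      def cphase(state, control, target, angle, n):
          # Multiply by exp(i*angle) the amplitudes where both qubits are |1>.
          idx = np.arange(state.size)
          bc = (idx >> (n - 1 - control)) & 1
          bt = (idx >> (n - 1 - target)) & 1
          out = state.copy()
          out[(bc == 1) & (bt == 1)] *= np.exp(1j * angle)
          return out

      def qft(state, n, keep_mask=None):
          # Textbook QFT circuit; keep_mask flags which controlled-phase gates are
          # applied. Bit-reversal swaps are omitted (they cancel in the overlap below).
          k = 0
          for i in range(n):
              state = hadamard(state, i, n)
              for j in range(i + 1, n):
                  if keep_mask is None or keep_mask[k]:
                      state = cphase(state, j, i, 2 * np.pi / 2 ** (j - i + 1), n)
                  k += 1
          return state

      n = 8
      n_cp = n * (n - 1) // 2                   # number of controlled-phase gates
      rng = np.random.default_rng(0)
      psi = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
      psi /= np.linalg.norm(psi)

      exact = qft(psi, n)
      for frac_removed in (0.0, 0.25, 0.5, 0.75):
          mask = rng.random(n_cp) > frac_removed
          approx = qft(psi, n, mask)
          print(frac_removed, abs(np.vdot(exact, approx)) ** 2)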

  7. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.

    PubMed

    Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K

    2014-07-01

    Recently CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer scale CMOS APS being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer due to manufacturing processes. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer scale stitched CMOS APS. For the first time a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer scale sensors. A complete model of the signal generation in the pixel array has been provided and proved capable of accounting for noise and gain variations across the pixel array. This novel analysis leads to readout noise and conversion gain being evaluated at pixel level, stitching block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance has been further investigated in a typical x-ray application, i.e. mammography, showing a CNR uniformity that is among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently used (i.e. FPIs), theoretical evaluation of the detection quantum efficiency (DQE) at zero-frequency has been performed, resulting in a higher DQE for this
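
    The per-pixel evaluation of conversion gain and readout noise mentioned above is commonly done with a mean-variance (photon transfer) analysis. The sketch below is a generic, simulated illustration of that idea, not the calibration pipeline used in the paper: for every pixel it fits the temporal variance of flat-field frames against their mean; the slope gives 1/gain and the intercept gives the read-noise variance in DN.

      import numpy as np

      rng = np.random.default_rng(42)
      H, W = 64, 64                                      # small region of interest
      gain_true = rng.normal(2.0, 0.05, size=(H, W))     # e-/DN, varies per pixel
      read_dn = rng.normal(1.5, 0.10, size=(H, W))       # read noise in DN

      def flat_stack(mean_electrons, n_frames=50):
          # Flat-field stack: Poisson shot noise (electrons) plus Gaussian read
          # noise, converted to DN with the per-pixel gain.
          e = rng.poisson(mean_electrons, size=(n_frames, H, W)).astype(float)
          return e / gain_true + rng.normal(0.0, read_dn, size=(n_frames, H, W))

      levels = [200, 500, 1000, 2000, 4000]              # mean signal in electrons
      means = np.empty((len(levels), H * W))
      varis = np.empty((len(levels), H * W))
      for k, lvl in enumerate(levels):
          stack = flat_stack(lvl)
          means[k] = stack.mean(axis=0).ravel()
          varis[k] = stack.var(axis=0, ddof=1).ravel()

      # Photon transfer relation in DN: var = mean / K + read_noise_DN^2, so a
      # per-pixel straight-line fit of variance against mean gives slope 1/K.
      slope = ((means * varis).mean(0) - means.mean(0) * varis.mean(0)) / means.var(0)
      intercept = varis.mean(0) - slope * means.mean(0)
      gain_est = 1.0 / slope                              # e-/DN per pixel
      read_est = np.sqrt(np.clip(intercept, 0.0, None))   # DN per pixel

      print("spread of per-pixel gain estimates (CV): %.1f%%"
            % (100 * gain_est.std() / gain_est.mean()))
      print("spread of per-pixel read-noise estimates (CV): %.1f%%"
            % (100 * read_est.std() / read_est.mean()))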

  9. Overview of large scale experiments performed within the LBB project in the Czech Republic

    SciTech Connect

    Kadecka, P.; Lauerova, D.

    1997-04-01

    In recent years, NRI Rez has been performing LBB analyses of safety-significant primary circuit piping of NPPs in the Czech and Slovak Republics. The analyses covered NPPs with WWER 440 Type 230 and 213 and WWER 1000 Type 320 reactors. Within the relevant LBB projects, undertaken with the aim of proving that the LBB requirements are fulfilled, a series of large-scale experiments was performed. The goal of these experiments was to verify the properties of the selected components and to prove the quality and/or conservatism of the assessments used in the LBB analyses. In this poster, a brief overview of the experiments performed in the Czech Republic under the guidance of NRI Rez is presented.

  10. Performance characteristics of a one-third-scale, vectorable ventral nozzle for SSTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Esker, Barbara S.; Mcardle, Jack G.

    1990-01-01

    Several proposed configurations for supersonic short takeoff, vertical landing aircraft will require one or more ventral nozzles for lift and pitch control. The swivel nozzle is one possible ventral nozzle configuration. A swivel nozzle (approximately one-third scale) was built and tested on a generic model tailpipe. This nozzle was capable of vectoring the flow up to + or - 23 deg from the vertical position. Steady-state performance data were obtained at pressure ratios to 4.5, and pitot-pressure surveys of the nozzle exit plane were made. Two configurations were tested: the swivel nozzle with a square contour of the leading edge of the ventral duct inlet, and the same nozzle with a round leading edge contour. The swivel nozzle showed good performance overall, and the round-leading edge configuration showed an improvement in performance over the square-leading edge configuration.

  11. Performance evaluation of image-based location recognition approaches based on large-scale UAV imagery

    NASA Astrophysics Data System (ADS)

    Hesse, Nikolas; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    Recognizing the location where an image was taken, solely based on visual content, is an important problem in computer vision, robotics and remote sensing. This paper evaluates the performance of standard approaches for location recognition when applied to large-scale aerial imagery in both electro-optical (EO) and infrared (IR) domains. We present guidelines towards optimizing the performance and explore how well a standard location recognition system is suited to handle IR data. We show on three datasets that the performance of the system strongly increases if SIFT descriptors computed on Hessian-Affine regions are used instead of SURF features. Applications are widespread and include vision-based navigation, precise object geo-referencing or mapping.
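
    A minimal sketch of the descriptor-based location recognition discussed above is given below using OpenCV. Note that it relies on OpenCV's standard DoG-based SIFT detector rather than Hessian-Affine regions (which are not part of core OpenCV), and the image file names are placeholders for a geo-referenced reference database and a query image.

      import cv2

      sift = cv2.SIFT_create(nfeatures=2000)
      matcher = cv2.BFMatcher(cv2.NORM_L2)

      def describe(path):
          # SIFT descriptors of one image (grayscale); path must point to a real file.
          img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          _, desc = sift.detectAndCompute(img, None)
          return desc

      def match_score(query_desc, ref_desc, ratio=0.8):
          # Lowe ratio test; the number of surviving matches serves as a crude
          # location-recognition score.
          good = 0
          for pair in matcher.knnMatch(query_desc, ref_desc, k=2):
              if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                  good += 1
          return good

      # placeholder file names for the reference database and the query image
      database = {name: describe(name) for name in
                  ("site_A.jpg", "site_B.jpg", "site_C.jpg")}
      query = describe("query.jpg")

      ranking = sorted(database, key=lambda n: match_score(query, database[n]),
                       reverse=True)
      print("best match:", ranking[0])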

  12. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. PMID:25731989
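
    The paper's point, that robust estimators resist the gross artifacts present in large cohorts, can be illustrated generically (this is not the authors' neuroimaging pipeline) with scikit-learn: contaminate a small fraction of observations and compare ordinary least squares with a Huber fit.

      import numpy as np
      from sklearn.linear_model import HuberRegressor, LinearRegression

      rng = np.random.default_rng(0)
      n = 400
      X = rng.normal(size=(n, 3))                 # e.g. behavioural / genetic covariates
      beta = np.array([0.5, 0.0, -0.3])
      y = X @ beta + rng.normal(scale=1.0, size=n)

      # contaminate 5% of the "subjects" with gross artifacts
      bad = rng.choice(n, size=n // 20, replace=False)
      y[bad] += rng.normal(scale=15.0, size=bad.size)

      print("true :", beta)
      print("OLS  :", np.round(LinearRegression().fit(X, y).coef_, 3))
      print("Huber:", np.round(HuberRegressor().fit(X, y).coef_, 3))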

  13. Computational Studies of Magnetic Nozzle Performance

    NASA Technical Reports Server (NTRS)

    Ebersohn, Frans H.; Longmier, Benjamin W.; Sheehan, John P.; Shebalin, John B.; Raja, Laxminarayan

    2013-01-01

    An extensive literature review of magnetic nozzle research has been performed, examining previous work, as well as a review of fundamental principles. This has allowed us to catalog all basic physical mechanisms which we believe underlie the thrust generation process. Energy conversion mechanisms include the approximate conservation of the magnetic moment adiabatic invariant, generalized Hall and thermoelectric acceleration, swirl acceleration, thermal energy transformation into directed kinetic energy, and Joule heating. Momentum transfer results from the interaction of the applied magnetic field with currents induced in the plasma plume, while plasma detachment mechanisms include resistive diffusion, recombination and charge exchange collisions, magnetic reconnection, loss of adiabaticity, inertial forces, current closure, and self-field detachment. We have performed a preliminary study of Hall effects on magnetic nozzle jets with weak guiding magnetic fields and weak expansions (p_jet approximately equal to p_background). The conclusion from this study is that the Hall effect creates an azimuthal rotation of the plasma jet and, more generally, creates helical structures in the induced current, velocity field, and magnetic fields. We have studied plasma jet expansion to near vacuum without a guiding magnetic field, and are presently including a guiding magnetic field using a resistive MHD solver. This research is progressing toward the implementation of a full generalized Ohm's law solver. In our paper, we will summarize the basic principles, as well as the literature survey, and briefly review our previous results. Our most recent results at the time of submittal will also be included. Efforts are currently underway to construct an experiment at the University of Michigan Plasmadynamics and Electric Propulsion Laboratory (PEPL) to study magnetic nozzle physics for an RF-thruster. Our computational study will work directly with this experiment to validate the numerical

  14. Effect of Home Exercise Program Performance in Patients with Osteoarthritis of the Knee or the Spine on the Visual Analog Scale after Discharge from Physical Therapy

    ERIC Educational Resources Information Center

    Chen, Hamilton; Onishi, Kentaro

    2012-01-01

    The aim of our study was to assess the effect of the frequency of home exercise program (HEP) performance on pain [10-point visual analog scale (VAS)] in patients with osteoarthritis of the spine or knee after more than 6 months discharge from physical therapy (PT). We performed a retrospective chart review of 48 adult patients with a clinical…

  15. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution
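
    The two estimators being compared, a running integral over the radial distribution function and the particle-number-fluctuation (finite size scaling) route, can be written compactly as below, using G(R) = 4*pi * int_0^R (g(r) - 1) r^2 dr and G_ij = V( <dNi dNj>/(<Ni><Nj>) - delta_ij/<Ni> ). The inputs here are synthetic stand-ins for MD output, so the numbers only demonstrate the formulas.

      import numpy as np

      def kb_from_rdf(r, g):
          # Running Kirkwood-Buff integral G(R) = 4*pi * int_0^R (g(r) - 1) r^2 dr
          # (cumulative trapezoid rule).
          f = 4.0 * np.pi * (g - 1.0) * r ** 2
          return np.concatenate(([0.0],
                                 np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(r))))

      def kb_from_fluctuations(counts_i, counts_j, volume, same_species):
          # Finite-volume estimate from particle-number fluctuations in open
          # sub-volumes: G_ij = V * ( <dNi dNj> / (<Ni><Nj>) - delta_ij / <Ni> ).
          ni, nj = counts_i.mean(), counts_j.mean()
          cov = np.mean((counts_i - ni) * (counts_j - nj))
          return volume * (cov / (ni * nj) - (1.0 / ni if same_species else 0.0))

      # synthetic stand-ins for MD output
      r = np.linspace(0.01, 3.0, 300)                    # nm
      g = 1.0 + np.exp(-(r - 0.45) ** 2 / 0.01) - 0.4 * np.exp(-(r - 0.8) ** 2 / 0.02)
      print("G from running RDF integral:", kb_from_rdf(r, g)[-1], "nm^3")

      rng = np.random.default_rng(3)
      counts = rng.poisson(30.0, size=5000)              # ideal-gas-like sub-volume counts
      print("G_ii from fluctuations     :", kb_from_fluctuations(counts, counts, 1.0, True))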

  16. Performance Study of Swimming Pool Heaters

    SciTech Connect

    McDonald, R.J.

    2009-01-01

    The objective of this report is to perform a controlled laboratory study on the efficiency and emissions of swimming pool heaters based on a limited field investigation into the range of expected variations in operational parameters. Swimming pool heater sales trends have indicated a significant decline in the number of conventional natural gas-fired swimming pool heaters (NGPH). On Long Island the decline has been quite sharp, on the order of 50%, in new installations since 2001. The major portion of the decline has been offset by a significant increase in the sales of electric-powered heat pump pool heaters (HPPH), which have been gaining market favor. National Grid contracted with Brookhaven National Laboratory (BNL) to measure performance factors in order to compare the relative energy, environmental and economic consequences of using one technology versus the other. A field study was deemed inappropriate because of the wide range of differences in actual load variations (pool size), geographic orientations, ground plantings and shading variations, number of hours of use, seasonal use variations, occupancy patterns, hour-of-the-day use patterns, temperature selection, etc. A decision was therefore made to perform a controlled laboratory study based on a limited field investigation into the range of expected operational variations in parameters; critical to this are the frequency of use, temperature selection, and sizing of the heater to the associated pool heating loads. This would be accomplished by installing a limited number of relatively simple, compact field data acquisition units on selected pool installations. These data included gas usage when available; alternatively, heater power or gas consumption rates were inferred from the manufacturer's specifications when direct metering was not available in the field. Figure 1 illustrates a typical pool heater installation layout.

  17. A high-performance dual-scale porous electrode for vanadium redox flow batteries

    NASA Astrophysics Data System (ADS)

    Zhou, X. L.; Zeng, Y. K.; Zhu, X. B.; Wei, L.; Zhao, T. S.

    2016-09-01

    In this work, we present a simple and cost-effective method to form a dual-scale porous electrode by KOH activation of the fibers of carbon papers. The large pores (∼10 μm), formed between carbon fibers, serve as the macroscopic pathways for high electrolyte flow rates, while the small pores (∼5 nm), formed on the carbon fiber surfaces, act as active sites for rapid electrochemical reactions. It is shown that the Brunauer-Emmett-Teller specific surface area of the carbon paper is increased by a factor of 16 while maintaining the same hydraulic permeability as that of the original carbon paper electrode. We then apply the dual-scale electrode to a vanadium redox flow battery (VRFB) and demonstrate an energy efficiency ranging from 82% to 88% at current densities of 200-400 mA cm^-2, which is record breaking as the highest VRFB performance reported in the open literature.

  18. Rounbletz: An Excel-based software to perform cost-benefit analysis at local scale

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Jaboyedoff, Michel; Lévy, Sébastien

    2014-05-01

    Public financial support for natural hazard protection measures is generally granted, in Switzerland, according to the results of cost-benefit analyses. The analysis is generally made with a very controlled procedure, where many parameters are predefined according to a few input parameters. Vulnerability is, for example, defined according to the building type and the hazard type and intensity (divided into 4 classes). Therefore, this procedure, although having the advantage of being reproducible, suffers from a limited ability to describe local specificities. This work describes an Excel-based application which allows cost-benefit analyses to be calculated at the local scale, based on the usual 3 scenarios and the predefined intensities used in Switzerland. Although the risk equations are not new, this study focuses on finding the right balance between a rigid but reproducible approach and a flexible but overly user-dependent one. Many parameter values are therefore predefined, but they are displayed and can be modified by the user if needed. If these predefined parameters are modified, the program highlights them in the output in order to be transparent for the person who will take a decision based on these results. The software is multi-hazard, but is not yet designed to account for possible hazard interactions. A preliminary attempt to include uncertainty in the calculation is also presented. The uncertainty analysis consists of using triangular distributions for the input parameters and performing a Monte Carlo simulation to obtain a distribution of possible values. The triangular distribution is chosen because of its simplicity, which is a desirable characteristic since the specialist assessing the risk is most of the time more comfortable with the natural phenomenon than with probabilities. Thus, since this type of analysis always suffers from high uncertainty, this simple procedure allows taking the uncertainty into account in the decision process.
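
    The uncertainty step described above (triangular input distributions propagated by Monte Carlo) is easy to sketch outside of Excel; the Python fragment below is only an illustration of the idea, with invented costs, damages and probabilities, not values or formulas taken from Rounbletz.

      import numpy as np

      rng = np.random.default_rng(7)
      N = 100_000

      # Triangular(min, mode, max) inputs -- illustrative values only
      damage_per_event = rng.triangular(40_000, 80_000, 150_000, N)    # CHF
      annual_probability = rng.triangular(0.005, 0.01, 0.03, N)        # events/yr
      measure_cost = rng.triangular(300_000, 400_000, 600_000, N)      # CHF
      lifetime_years, discount_rate = 50, 0.02

      # Present value of expected avoided damages over the measure lifetime
      annuity = (1 - (1 + discount_rate) ** -lifetime_years) / discount_rate
      benefit = damage_per_event * annual_probability * annuity
      bc_ratio = benefit / measure_cost

      print("median B/C ratio   :", round(float(np.median(bc_ratio)), 2))
      print("P(B/C > 1)         :", round(float(np.mean(bc_ratio > 1.0)), 2))
      print("5th-95th percentile:", np.round(np.percentile(bc_ratio, [5, 95]), 2))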

  19. Management of Virtual Large-scale High-performance Computing Systems

    SciTech Connect

    Vallee, Geoffroy R; Naughton, III, Thomas J; Scott, Stephen L

    2011-01-01

    Linux is widely used on high-performance computing (HPC) systems, from commodity clusters to Cray supercomputers (which run the Cray Linux Environment). These platforms primarily differ in their system configuration: some only use SSH to access compute nodes, whereas others employ full resource management systems (e.g., Torque and ALPS on Cray XT systems). Furthermore, the latest improvements in system-level virtualization techniques, such as hardware support, virtual machine migration for system resilience purposes, and reduction of virtualization overheads, enable the use of virtual machines on HPC platforms. Currently, tools for the management of virtual machines in the context of HPC systems are still quite basic, and often tightly coupled to the target platform. In this document, we present a new system tool for the management of virtual machines in the context of large-scale HPC systems, including a run-time system and support for all major virtualization solutions. The proposed solution is based on two key aspects. First, Virtual System Environments (VSE), introduced in a previous study, provide a flexible method to define the software environment that will be used within virtual machines. Secondly, we propose a new system run-time for the management and deployment of VSEs on HPC systems, which supports a wide range of system configurations. For instance, this generic run-time can interact with resource managers such as Torque for the management of virtual machines. Finally, the proposed solution provides appropriate abstractions to enable use with a variety of virtualization solutions on different Linux HPC platforms, including Xen, KVM and the HPC-oriented Palacios.

  20. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE PAGESBeta

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; Hargrove, Paul; Jin, Haoqiang; Fuerlinger, Karl; Koniges, Alice; Wright, Nicholas J.

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an InfiniBand cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  1. The role of reactant unmixedness, strain rate, and length scale on premixed combustor performance

    SciTech Connect

    Samuelsen, S.; LaRue, J.; Vilayanur, S.; Guillaume, D.

    1995-12-31

    Lean premixed combustion provides a means to reduce pollutant formation and increase combustion efficiency. However, fuel-air mixing is rarely uniform in space and time. This nonuniformity in concentration will lead to relative increases in pollutant formation and decreases in combustion efficiency. The nonuniformity of the concentration at the exit of the premixer has been defined by Lyons (1981) as the "unmixedness." Although turbulence properties such as length scales and strain rate are known to affect unmixedness, the exact relationship is unknown. Evaluating this relationship and the effect of unmixedness in premixed combustion on pollutant formation and combustion efficiency are an important part of the overall goal of the US Department of Energy's Advanced Turbine System (ATS) program and are among the goals of the program described herein. The information obtained from ATS is intended to help develop and commercialize gas turbines. The contributions to the program which the University of California (Irvine) Combustion Lab (UCICL) will provide are: (1) establish the relationship of inlet unmixedness, length scales, and mean strain rate to performance, (2) determine the optimal levels of inlet unmixedness, length scales, and mean strain rates to maximize combustor performance, and (3) identify efficient premixing methods for achieving the necessary inlet conditions. The program during this reporting period is focused on developing a means to measure and qualify different degrees of temporal and spatial unmixedness. Laser diagnostic methods for planar unmixedness measurements are being developed and preliminary results are presented herein. These results will be used to (1) aid in the design of experimental premixers, and (2) determine the unmixedness which will be correlated with the emissions of the combustor. This measure of unmixedness, coupled with length scale, strain rate and intensity information, is required to attain the UCI goals.
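
    For readers unfamiliar with the quantity, a common variance-based unmixedness (intensity of segregation) can be computed from a planar concentration field as below; this generic definition is used only for illustration and is not necessarily identical to the Lyons (1981) definition adopted by the program. The fields are synthetic stand-ins for PLIF-type images.

      import numpy as np

      def unmixedness(c):
          # Variance of the mixture-fraction field normalised by the maximum
          # possible variance for the same mean (fully segregated limit).
          c = np.asarray(c, dtype=float)
          m = c.mean()
          return c.var() / (m * (1.0 - m))

      rng = np.random.default_rng(11)
      # synthetic planar "images" of fuel mole fraction in [0, 1]
      well_mixed = np.clip(rng.normal(0.3, 0.02, size=(128, 128)), 0.0, 1.0)
      streaky = np.clip(rng.normal(0.3, 0.20, size=(128, 128)), 0.0, 1.0)

      print("well mixed:", unmixedness(well_mixed))   # close to 0
      print("streaky   :", unmixedness(streaky))      # roughly two orders of magnitude larger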

  2. Performance and scaling of a novel locomotor structure: adhesive capacity of climbing gobiid fishes.

    PubMed

    Maie, Takashi; Schoenfuss, Heiko L; Blob, Richard W

    2012-11-15

    Many species of gobiid fishes adhere to surfaces using a sucker formed from fusion of the pelvic fins. Juveniles of many amphidromous species use this pelvic sucker to scale waterfalls during migrations to upstream habitats after an oceanic larval phase. However, adults may still use suckers to re-scale waterfalls if displaced. If attachment force is proportional to sucker area and if growth of the sucker is isometric, then increases in the forces that climbing fish must resist might outpace adhesive capacity, causing climbing performance to decline through ontogeny. To test for such trends, we measured pressure differentials and adhesive suction forces generated by the pelvic sucker across wide size ranges in six goby species, including climbing and non-climbing taxa. Suction was achieved via two distinct growth strategies: (1) small suckers with isometric (or negatively allometric) scaling among climbing gobies and (2) large suckers with positively allometric growth in non-climbing gobies. Species using the first strategy show a high baseline of adhesive capacity that may aid climbing performance throughout ontogeny, with pressure differentials and suction forces much greater than expected if adhesion were a passive function of sucker area. In contrast, large suckers possessed by non-climbing species may help compensate for reduced pressure differentials, thereby producing suction sufficient to support body weight. Climbing Sicyopterus species also use oral suckers while climbing waterfalls, and these exhibited scaling patterns similar to those for pelvic suckers. However, oral suction force was considerably lower than that for pelvic suckers, reducing the ability of these fish to attach to substrates by the oral sucker alone. PMID:23100486
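
    The isometric-versus-allometric distinction drawn above is typically tested with a log-log regression; for an area plotted against a body length, isometry predicts a slope near 2. The sketch below illustrates the test on invented measurements, not the study's data.

      import numpy as np

      def scaling_exponent(length, area):
          # Slope of log(area) versus log(length); isometry for an area implies ~2.
          slope, _ = np.polyfit(np.log(length), np.log(area), 1)
          return slope

      # invented measurements (mm and mm^2), not data from the study
      rng = np.random.default_rng(5)
      length = rng.uniform(20.0, 120.0, 60)                          # standard length
      area_iso = 0.02 * length ** 2.0 * rng.lognormal(0.0, 0.1, 60)  # isometric growth
      area_pos = 0.02 * length ** 2.6 * rng.lognormal(0.0, 0.1, 60)  # positive allometry

      print("isometric sucker : slope ~", round(scaling_exponent(length, area_iso), 2))
      print("allometric sucker: slope ~", round(scaling_exponent(length, area_pos), 2))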

  3. Scale

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2009-01-01

    The common approach to scaling, according to Christopher Dede, a professor of learning technologies at the Harvard Graduate School of Education, is to jump in and say, "Let's go out and find more money, recruit more participants, hire more people. Let's just keep doing the same thing, bigger and bigger." That, he observes, "tends to fail, and fail…

  4. Simulation Studies of the CALET Performance

    NASA Astrophysics Data System (ADS)

    Akaike, Yosui; Torii, S.; Kasahara, K.; Yoshida, K.; Tamura, T.; CALET Collaboration

    2010-02-01

    The Calorimetric Electron Telescope (CALET) mission aims to address unsolved problems in high-energy phenomena of the Universe by carrying out precise measurements of high-energy cosmic rays on board the Japanese Experiment Module Exposed Facility of the International Space Station (ISS). CALET is designed to perform direct measurements of electrons from 1 GeV to 10 TeV, gamma-rays from 10 GeV to a few TeV, and nuclei from 10 GeV to 1000 TeV. The detector consists of a Silicon Pixel Array (SIA), an Imaging Calorimeter (IMC) and a Total Absorption Calorimeter (TASC) to detect the various kinds of particles over a very wide energy range. The SIA has a superior charge resolution of 0.1 e for protons and 0.35 e for iron. The IMC, composed of scintillating fibers 1 mm square in cross-section and tungsten plates, provides a precise shower profile. The TASC, composed of PWO scintillators, determines the total energy of the incident particle and separates electrons and gamma-rays from background hadrons. The total absorber is 31 radiation lengths for electromagnetic particles and 1.4 interaction mean free paths for protons. To optimize and evaluate the CALET performance, we have been carrying out Monte Carlo simulation studies with the EPICS code. We obtained the following performance for high-energy electrons above 100 GeV, a main target of the CALET mission: the geometrical factor is 1200 cm^2 sr, the energy resolution is better than a few percent, and the proton rejection power is 2.0x10^5. In this poster, we present the performance thus obtained for observing each kind of particle. In addition, we will present the trigger system optimized for each kind of particle and the expected trigger rate in ISS orbit.

  5. Performance and slipstream characteristics of small-scale propellers at low Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Deters, Robert W.

    The low Reynolds number effects of small-scale propellers were investigated. At the Reynolds numbers of interest (below 100,000), a decrease in lift and an increase in drag are common, making it difficult to predict propeller performance characteristics. A propeller testing apparatus was built to test small-scale propellers in static conditions and in an advancing flow. Twenty-seven off-the-shelf propellers, with diameters ranging from 2.25 in to 9 in, were tested in order to determine the general effects of low Reynolds numbers on small propellers. From these tests, increasing the Reynolds number of a propeller increases its efficiency by either increasing the thrust produced or decreasing the power required. By doubling the Reynolds number of a propeller, it is not uncommon to increase the efficiency by more than 10%. Using off-the-shelf propellers limits the geometry available, and finding propellers of the same geometry but of different scale is very difficult. To solve this problem, four propellers were designed and built using a 3D printer. Two of the propellers were simple rectangular twisted blades of different chords. Another propeller was modeled after a full-scale propeller. The fourth propeller was created using inverse design to minimize power loss. Each propeller was built in a 5-in and a 9-in diameter version in order to test a larger range of Reynolds numbers. A separate propeller blade and hub system was created to allow each propeller to be tested with different pitch angles and in 2-, 3-, and 4-blade versions. From the performance results of the 3D printed propellers, it was shown that propellers of different scale, but tested at the same Reynolds number, exhibited about the same performance. Finally, the slipstreams of different propellers were measured using a 7-hole probe. Propeller slipstreams can have a large effect on the aerodynamics of lifting surfaces downstream of the propeller. Small UAVs and MAVs flying in close proximity
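
    Data of this kind are normally reduced to the standard non-dimensional propeller coefficients, and the Reynolds number is usually quoted at the 75% blade station. The snippet below shows those conventional definitions with illustrative numbers for a 5-in propeller; it is not the author's data-reduction code.

      import numpy as np

      def propeller_coefficients(thrust_N, power_W, rpm, diameter_m, v_inf, rho=1.225):
          # Standard definitions: CT = T/(rho n^2 D^4), CP = P/(rho n^3 D^5),
          # advance ratio J = V/(n D), efficiency eta = J * CT / CP.
          n = rpm / 60.0                              # rev/s
          ct = thrust_N / (rho * n ** 2 * diameter_m ** 4)
          cp = power_W / (rho * n ** 3 * diameter_m ** 5)
          J = v_inf / (n * diameter_m)
          return ct, cp, J, J * ct / cp

      def chord_reynolds(rpm, diameter_m, chord_m, v_inf=0.0, nu=1.46e-5):
          # Reynolds number at the 75% blade station based on the helical speed.
          omega = rpm * 2.0 * np.pi / 60.0
          v_tan = omega * 0.75 * diameter_m / 2.0
          return np.hypot(v_tan, v_inf) * chord_m / nu

      # illustrative numbers for a 5-in (0.127 m) propeller
      print(propeller_coefficients(thrust_N=0.8, power_W=12.0, rpm=9000,
                                   diameter_m=0.127, v_inf=8.0))
      print("Re at 0.75R ~", int(chord_reynolds(9000, 0.127, chord_m=0.012, v_inf=8.0)))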

  6. A priori study of subgrid-scale flux of a passive scalar in isotropic homogeneous turbulence

    SciTech Connect

    Chumakov, Sergei

    2008-01-01

    We perform a direct numerical simulation (DNS) of forced homogeneous isotropic turbulence with a passive scalar that is forced by mean gradient. The DNS data are used to study the properties of subgrid-scale flux of a passive scalar in the framework of large eddy simulation (LES), such as alignment trends between the flux, resolved, and subgrid-scale flow structures. It is shown that the direction of the flux is strongly coupled with the subgrid-scale stress axes rather than the resolved flow quantities such as strain, vorticity, or scalar gradient. We derive an approximate transport equation for the subgrid-scale flux of a scalar and look at the relative importance of the terms in the transport equation. A particular form of LES tensor-viscosity model for the scalar flux is investigated, which includes the subgrid-scale stress. Effect of different models for the subgrid-scale stress on the model for the subgrid-scale flux is studied.
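
    The a priori procedure described, filtering DNS fields and evaluating the exact subgrid-scale scalar flux, follows the definition tau = bar(u*c) - bar(u)*bar(c). The sketch below applies a top-hat filter to synthetic periodic fields (random stand-ins, not turbulence data) purely to show the mechanics of that computation.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def box_filter(field, width):
          # Top-hat (box) filter of the given width in grid points, periodic domain.
          return uniform_filter(field, size=width, mode="wrap")

      def sgs_scalar_flux(u, c, width):
          # Exact SGS flux of a passive scalar for one velocity component:
          # tau = bar(u * c) - bar(u) * bar(c).
          return box_filter(u * c, width) - box_filter(u, width) * box_filter(c, width)

      # synthetic periodic fields on a 64^3 grid (random stand-ins, not turbulence)
      rng = np.random.default_rng(2)
      u = rng.normal(size=(64, 64, 64))
      c = rng.normal(size=(64, 64, 64))
      print("rms SGS flux:", sgs_scalar_flux(u, c, width=8).std())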

  7. A large-scale study of the world wide web: network correlation functions with scale-invariant boundaries

    NASA Astrophysics Data System (ADS)

    Ludueña, Guillermo A.; Meixner, Harald; Kaczor, Gregor; Gros, Claudius

    2013-08-01

    We performed a large-scale crawl of the world wide web, covering 6.9 million domains and 57 million subdomains, including all high-traffic sites of the internet. We present a study of the correlations found between quantities measuring the structural relevance of each node in the network (the in- and out-degree, the local clustering coefficient, the first-neighbor in-degree and the Alexa rank). We find that some of these properties show strong correlation effects and that the dependencies arising from these correlations follow power laws not only for the averages, but also for the boundaries of the respective density distributions. In addition, these scale-free limits do not follow the same exponents as the corresponding averages. In our study we retain the directionality of the hyperlinks and develop a statistical estimate for the clustering coefficient of directed graphs. We include in our study the correlations between the in-degree and the Alexa traffic rank, a popular index for the traffic volume, finding non-trivial power-law correlations. We find that sites with more/less than about 10^3 links from different domains have remarkably different statistical properties, for all correlation functions studied, pointing towards an underlying hierarchical structure of the world wide web.
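
    A sketch of the kind of analysis described, fitting power laws to both the conditional averages and the upper boundaries of one node property against another, is given below with synthetic data; it only demonstrates a simple binned estimator, not the crawl or the exact estimators used in the paper.

      import numpy as np

      def binned_powerlaw_slopes(x, y, n_bins=20, quantile=0.95):
          # Log-log slopes of the conditional mean and of an upper quantile
          # ("boundary") of y given x, using logarithmically spaced bins of x.
          edges = np.logspace(np.log10(x.min()), np.log10(x.max()), n_bins + 1)
          centers, means, bounds = [], [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              sel = (x >= lo) & (x < hi)
              if sel.sum() < 20:
                  continue
              centers.append(np.sqrt(lo * hi))
              means.append(y[sel].mean())
              bounds.append(np.quantile(y[sel], quantile))
          logc = np.log10(centers)
          slope_mean = np.polyfit(logc, np.log10(means), 1)[0]
          slope_bound = np.polyfit(logc, np.log10(bounds), 1)[0]
          return slope_mean, slope_bound

      # synthetic in-degree-like variable and a correlated positive node property
      rng = np.random.default_rng(8)
      k_in = (rng.pareto(1.5, 200_000) + 1.0) * 3.0
      y = k_in ** 0.6 * rng.lognormal(0.0, 0.7, k_in.size)
      print(binned_powerlaw_slopes(k_in, y))   # exponents of the mean and the boundary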

  8. Effects of compressibility on the performance of two full-scale helicopter rotors

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul J

    1952-01-01

    Report presents the results of an investigation conducted on the Langley helicopter test tower to determine experimentally the effects of compressibility on the performance and blade pitching moments of two full-scale helicopter rotors. Two sets of rotor blades were tested which differed only in that the blades of one set incorporated -8 degrees of linear twist, whereas the blades of the other set were untwisted. The tests covered a range of tip speeds from 350 to 770 feet per second and a range of pitch angles from 0 degrees to the limit imposed by extreme vibration.

  9. Large-scale Advanced Propfan (LAP) performance, acoustic and weight estimation, January, 1984

    NASA Technical Reports Server (NTRS)

    Parzych, D.; Shenkman, A.; Cohen, S.

    1985-01-01

    In comparison to turbo-prop applications, the Prop-Fan is designed to operate in a significantly higher range of aircraft flight speeds. Two concerns arise regarding operation at very high speeds: aerodynamic performance and noise generation. This data package covers both topics over a broad range of operating conditions for the eight (8) bladed SR-7L Prop-Fan. Operating conditions covered are: Flight Mach Number 0 - 0.85; blade tip speed 600-800 ft/sec; and cruise power loading 20-40 SHP/D^2. Prop-Fan weight and weight scaling estimates are also included.

  10. Design and Performance of Insect-Scale Flapping-Wing Vehicles

    NASA Astrophysics Data System (ADS)

    Whitney, John Peter

    Micro-air vehicles (MAVs)---small versions of full-scale aircraft---are the product of a continued path of miniaturization which extends across many fields of engineering. Increasingly, MAVs approach the scale of small birds, and most recently, their sizes have dipped into the realm of hummingbirds and flying insects. However, these non-traditional biologically-inspired designs are without well-established design methods, and manufacturing complex devices at these tiny scales is not feasible using conventional manufacturing methods. This thesis presents a comprehensive investigation of new MAV design and manufacturing methods, as applicable to insect-scale hovering flight. New design methods combine an energy-based accounting of propulsion and aerodynamics with a one degree-of-freedom dynamic flapping model. Important results include analytical expressions for maximum flight endurance and range, and predictions for maximum feasible wing size and body mass. To meet manufacturing constraints, the use of passive wing dynamics to simplify vehicle design and control was investigated; supporting tests included the first synchronized measurements of real-time forces and three-dimensional kinematics generated by insect-scale flapping wings. These experimental methods were then expanded to study optimal wing shapes and high-efficiency flapping kinematics. To support the development of high-fidelity test devices and fully-functional flight hardware, a new class of manufacturing methods was developed, combining elements of rigid-flex printed circuit board fabrication with "pop-up book" folding mechanisms. In addition to their current and future support of insect-scale MAV development, these new manufacturing techniques are likely to prove an essential element to future advances in micro-optomechanics, micro-surgery, and many other fields.

  11. Impact of Verbal Scale Labels on the Elevation and Spread of Performance Ratings

    ERIC Educational Resources Information Center

    Kuhlemeier, Hans; Hemker, Bas; van den Bergh, Huub

    2013-01-01

    In recent years many countries have introduced authentic performance-based assessments in their national exam systems. Teachers' ratings of their own candidates' performances may suffer from errors of leniency and range restriction. The goal of this study was to examine the impact of manipulating the descriptiveness, balancedness, and polarity of…

  12. Minnesota Linking Study: A Study of the Alignment of the NWEA RIT Scale with the Minnesota Comprehensive Assessments (MCA) Testing Program

    ERIC Educational Resources Information Center

    Northwest Evaluation Association, 2014

    2014-01-01

    Recently, Northwest Evaluation Association (NWEA) completed a study to connect the scale of the Minnesota Comprehensive Assessments (MCA) Testing Program used for Minnesota's mathematics and reading assessments with NWEA's RIT (Rasch Unit) scale. Information from the state assessments was used in a study to establish performance-level scores on…

  13. Subrepository scale hydrothermal analysis in support of total system performance assessment at Yucca Mountain

    SciTech Connect

    Mishra, S.

    1994-12-31

    A coupled thermo-hydrologic model is developed to investigate the impact of emplacing high-level nuclear wastes on heat and fluid flow at the subrepository scale, and to develop abstracted results for input to the current total system performance assessment (TSPA) of Yucca Mountain. Numerical computations are carried out in 2-D axisymmetric geometry, using a range of thermal loads, to generate spatial/temporal evolutions in temperature and saturation fields within individual emplacement panels. These results are analyzed to understand the general nature of liquid movement in the repository due to waste heat, and also to define various temperature dependent mechanistic and phenomenological coefficients for predicting waste package and geosphere performance.

  14. NASA/GE Energy Efficient Engine low pressure turbine scaled test vehicle performance report

    NASA Technical Reports Server (NTRS)

    Bridgeman, M. J.; Cherry, D. G.; Pedersen, J.

    1983-01-01

    The low pressure turbine for the NASA/General Electric Energy Efficient Engine is a highly loaded five-stage design featuring high outer wall slope, controlled vortex aerodynamics, low stage flow coefficient, and reduced clearances. An assessment of the performance of the LPT has been made based on a series of scaled air-turbine tests divided into two phases: Block 1 and Block 2. The transition duct and the first two stages of the turbine were evaluated during the Block 1 phase from March through August 1979. The full five-stage scale model, representing the final integrated core/low spool (ICLS) design and incorporating redesigns of stages 1 and 2 based on Block 1 data analysis, was tested as Block 2 in June through September 1981. Results from the scaled air-turbine tests, reviewed herein, indicate that the five-stage turbine designed for the ICLS application will attain an efficiency level of 91.5 percent at the Mach 0.8/10.67-km (35,000-ft), max-climb design point. This is relative to program goals of 91.1 percent for the ICLS and 91.7 percent for the flight propulsion system (FPS).

  15. Embedded Sensors and Controls to Improve Component Performance and Reliability -- Bench-scale Testbed Design Report

    SciTech Connect

    Melin, Alexander M.; Kisner, Roger A.; Drira, Anis; Reed, Frederick K.

    2015-09-01

    Embedded instrumentation and control systems that can operate in extreme environments are challenging to develop due to restrictions on sensors and materials. As a part of the Advanced Sensors and Instrumentation topic of the Department of Energy's Nuclear Energy Enabling Technology cross-cutting technology development programs, this report details the design of a bench-scale embedded instrumentation and control testbed. The design goal of the bench-scale testbed is to build a re-configurable system that can rapidly deploy and test advanced control algorithms in a hardware-in-the-loop setup. The bench-scale testbed will be designed as a fluid pump analog that uses active magnetic bearings to support the shaft. The testbed represents an application that would improve the efficiency and performance of high-temperature (700 C) pumps for liquid salt reactors, which operate in an extreme environment and present many engineering challenges that can be overcome with embedded instrumentation and control. This report will give details of the mechanical design, electromagnetic design, geometry optimization, power electronics design, and initial control system design.

  16. Investigation of performance, noise and detectability characteristics of small-scale remotely piloted vehicle /RPV/ propellers

    NASA Astrophysics Data System (ADS)

    Janakiram, D. S.; Scruggs, B. W.

    1981-10-01

    Several small-scale propeller configurations, applicable to a conceptual remotely piloted vehicle, were tested under static and simulated forward flight conditions in a wind tunnel to determine their performance, acoustic, and detectability characteristics. The propellers tested had tractor, pusher, and ducted configurations, designed to develop 4 thrust horsepower at a cruise speed of 75 knots at 4000 ft altitude and 95 F. The acoustic data were used to determine the slant range and altitude of no detection for each propeller configuration. The acoustic and detectability characteristics of the small-scale propellers were found to be significantly different from those of large-scale propellers; this is explained by the low disk loading or the low operating Reynolds numbers of the propellers. An increase in forward velocity caused a significant drop in SPLs at higher harmonics of the blade passage frequency. Tip speed had a strong effect on noise and detectability in forward flight: most of the propellers were detected at either the first or second harmonic of their blade passage frequency. Three-bladed propellers were generally less detectable than two- or four-bladed propellers for most of the forward velocities. Finally, the ducted and pusher propeller configurations were more detectable and less efficient than their free and tractor counterparts.

  17. Study of High Performance Coronagraphic Techniques

    NASA Technical Reports Server (NTRS)

    Crane, Phil (Technical Monitor); Tolls, Volker

    2004-01-01

    The goal of the Study of High Performance Coronagraphic Techniques project (called CoronaTech) is: 1) to verify the Labeyrie multi-step speckle reduction method and 2) to develop new techniques to manufacture soft-edge occulter masks, preferably with a Gaussian absorption profile. In a coronagraph, the light from a bright host star which is centered on the optical axis in the image plane is blocked by an occulter centered on the optical axis, while the light from a planet passes the occulter (the planet has a certain minimal distance from the optical axis). Unfortunately, stray light originating in the telescope and subsequent optical elements is not completely blocked, causing a so-called speckle pattern in the image plane of the coronagraph and limiting the sensitivity of the system. The sensitivity can be increased significantly by reducing the amount of speckle light. The Labeyrie multi-step speckle reduction method implements one (or more) phase correction steps to suppress the unwanted speckle light. In each step, the stray light is rephased and then blocked with an additional occulter which affects the planet light (or other companion) only slightly. Since the suppression is still not complete, a series of steps is required in order to achieve significant suppression. The second part of the project is the development of soft-edge occulters. Simulations have shown that soft-edge occulters perform better in coronagraphs than hard-edge occulters. In order to utilize the performance gain of soft-edge occulters, fabrication methods have to be developed to manufacture these occulters according to the specifications set forth by the sensitivity requirements of the coronagraph.
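
    The soft-edge masks mentioned above are characterized by their radial transmission profile. The fragment below merely generates an amplitude-transmission map with a Gaussian absorption profile, alongside a hard-edge mask of similar size for comparison; it says nothing about the Labeyrie speckle-reduction optics or the fabrication process itself.

      import numpy as np

      def gaussian_occulter(n_pix, sigma_pix):
          # Amplitude transmission of a soft-edge occulter with a Gaussian
          # absorption profile: opaque on axis, smoothly approaching unity outside.
          y, x = np.mgrid[:n_pix, :n_pix] - (n_pix - 1) / 2.0
          return 1.0 - np.exp(-0.5 * (x ** 2 + y ** 2) / sigma_pix ** 2)

      def hard_occulter(n_pix, radius_pix):
          # Hard-edge (binary) occulter of the same array size, for comparison.
          y, x = np.mgrid[:n_pix, :n_pix] - (n_pix - 1) / 2.0
          return (np.hypot(x, y) > radius_pix).astype(float)

      soft = gaussian_occulter(512, sigma_pix=40.0)
      hard = hard_occulter(512, radius_pix=40.0)
      print("on-axis transmission (soft, hard):", soft[256, 256], hard[256, 256])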

  18. Aerodynamic Performance of Scale-Model Turbofan Outlet Guide Vanes Designed for Low Noise

    NASA Technical Reports Server (NTRS)

    Hughes, Christopher E.

    2001-01-01

    The design of effective new technologies to reduce aircraft propulsion noise is dependent on an understanding of the noise sources and noise generation mechanisms in the modern turbofan engine. In order to more fully understand the physics of noise in a turbofan engine, a comprehensive aeroacoustic wind tunnel test program called the 'Source Diagnostic Test' was conducted. The test was a cooperative effort between NASA and General Electric Aircraft Engines, as part of the NASA Advanced Subsonic Technology Noise Reduction Program. A 1/5-scale model simulator representing the bypass stage of a current-technology high-bypass-ratio turbofan engine was used in the test. The test article consisted of the bypass fan and outlet guide vanes in a flight-type nacelle. The fan used was a medium-pressure-ratio design with 22 individual, wide-chord blades. Three outlet guide vane design configurations were investigated, representing a 54-vane radial Baseline configuration, a 26-vane radial, wide-chord Low Count configuration and a 26-vane, wide-chord Low Noise configuration with 30 deg of aft sweep. The test was conducted in the NASA Glenn Research Center 9- by 15-Foot Low Speed Wind Tunnel at velocities simulating the takeoff and approach phases of the aircraft flight envelope. The Source Diagnostic Test had several acoustic and aerodynamic technical objectives: (1) establish the performance of a scale-model fan selected to represent the current-technology turbofan product; (2) assess the performance of the fan stage with each of the three distinct outlet guide vane designs; (3) determine the effect of the outlet guide vane configuration on the fan baseline performance; and (4) conduct detailed flowfield diagnostic surveys, both acoustic and aerodynamic, to characterize and understand the noise generation mechanisms in a turbofan engine. This paper addresses the fan and stage aerodynamic performance results from the Source Diagnostic Test.

  19. Environmental performance evaluation of large-scale municipal solid waste incinerators using data envelopment analysis.

    PubMed

    Chen, Ho-Wen; Chang, Ni-Bin; Chen, Jeng-Chung; Tsai, Shu-Ju

    2010-07-01

    Constrained by insufficient land resources, many countries such as Japan and Germany consider incinerators the major technology for a waste management scheme capable of dealing with the increasing demand for municipal and industrial solid waste treatment in urban regions. The evaluation of these municipal incinerators in terms of secondary pollution potential, cost-effectiveness, and operational efficiency has become a new focus in the highly interdisciplinary area of production economics, systems analysis, and waste management. This paper aims to demonstrate the application of data envelopment analysis (DEA), a production economics tool, to evaluate performance-based efficiencies of 19 large-scale municipal incinerators in Taiwan with different operational conditions. A 4-year operational data set from 2002 to 2005 was collected in support of DEA modeling using Monte Carlo simulation to outline the possibility distributions of operational efficiency of these incinerators. Uncertainty analysis using the Monte Carlo simulation provides a balance between simplifications of our analysis and the soundness of capturing the essential random features that complicate solid waste management systems. To cope with future challenges, efforts in the DEA modeling, systems analysis, and prediction of the performance of large-scale municipal solid waste incinerators under normal operation and special conditions were directed toward generating a compromised assessment procedure. Our research findings will eventually lead to the identification of the optimal management strategies for promoting the quality of solid waste incineration, not only in Taiwan, but also elsewhere in the world. PMID:20181468
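
    For readers unfamiliar with DEA, the sketch below solves the input-oriented CCR envelopment model for each decision-making unit with scipy's linear-programming routine; the inputs and outputs are made up, and the paper's Monte Carlo treatment of uncertainty is not reproduced here.

      import numpy as np
      from scipy.optimize import linprog

      def dea_ccr_input(X, Y):
          # Input-oriented CCR efficiency for each DMU.
          # X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1].
          n, m = X.shape
          s = Y.shape[1]
          scores = np.empty(n)
          for o in range(n):
              # decision variables z = [theta, lambda_1 .. lambda_n]; minimise theta
              c = np.concatenate(([1.0], np.zeros(n)))
              # inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
              A_in = np.hstack((-X[o].reshape(-1, 1), X.T))
              # outputs: -sum_j lambda_j y_rj <= -y_ro
              A_out = np.hstack((np.zeros((s, 1)), -Y.T))
              res = linprog(c,
                            A_ub=np.vstack((A_in, A_out)),
                            b_ub=np.concatenate((np.zeros(m), -Y[o])),
                            bounds=[(0, None)] * (n + 1),
                            method="highs")
              scores[o] = res.x[0]
          return scores

      # made-up inputs (operating cost, labour) and outputs (waste treated, energy sold)
      X = np.array([[100.0, 20], [120, 25], [90, 30], [150, 28], [110, 22]])
      Y = np.array([[300.0, 50], [340, 52], [250, 60], [360, 55], [330, 58]])
      print(np.round(dea_ccr_input(X, Y), 3))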

  20. The Numerical Performance of Wavelets for PDEs: The Multi-Scale Finite Element

    SciTech Connect

    Christon, M.A.; Roach, D.W.

    1998-12-23

    The research summarized in this paper is part of a multiyear effort focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality in the usual L^2 sense is less desirable than orthogonality in the energy norm. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change-of-basis. This work considers the numerical and computational performance of the hierarchical Schauder basis in a Galerkin context. A unique row-column lumping procedure is developed with multi-scale solution strategies for 1-D and 2-D elliptic partial differential equations.

  3. Performance upgrades to the MCNP6 burnup capability for large scale depletion calculations

    SciTech Connect

    Fensin, M. L.; Galloway, J. D.; James, M. R.

    2015-04-11

    The first MCNP-based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. With the merger of MCNPX and MCNP5, MCNP6 combined the capabilities of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. The new MCNP6 depletion capability was first showcased at the International Congress for Advancements in Nuclear Power Plants (ICAPP) meeting in 2012. At that conference the new capabilities addressed included the combined distributive and shared memory parallel architecture for the burnup capability, improved memory management, physics enhancements, and new predictability as compared to the H.B. Robinson benchmark. At Los Alamos National Laboratory, a special-purpose cluster named “tebow” was constructed to maximize available RAM per CPU, as well as leveraging swap space on solid-state hard drives, to allow larger scale depletion calculations (allowing for significantly more burnable regions than previously examined). As the MCNP6 burnup capability was scaled to larger numbers of burnable regions, a noticeable slowdown was realized. This paper details two specific computational performance strategies for improving calculation speed: (1) retrieving cross sections during transport; and (2) tallying mechanisms specific to burnup in MCNP. To combat this slowdown, new performance upgrades were developed and integrated into MCNP6 1.2.

  4. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
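
    As a rough illustration of the spatiotemporal-index idea described above, the sketch below builds a small SQLite table that maps time windows and bounding boxes to data block locations and runs a range query; the table layout and HDFS paths are invented for the example and are not the NCCS schema.

      # Hedged sketch: a relational spatiotemporal index that maps a query
      # (time window + bounding box) to data block locations for fast retrieval.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE chunk_index (
                          path TEXT, t_start TEXT, t_end TEXT,
                          lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL)""")
      # Hypothetical NetCDF chunks stored in HDFS (paths are placeholders).
      rows = [
          ("hdfs:///merra/1980/01/chunk_000.nc", "1980-01-01", "1980-01-31", -90, 0, -180, 0),
          ("hdfs:///merra/1980/01/chunk_001.nc", "1980-01-01", "1980-01-31", 0, 90, 0, 180),
      ]
      conn.executemany("INSERT INTO chunk_index VALUES (?,?,?,?,?,?,?)", rows)

      # Map an analysis query (t0, t1, lat0, lat1, lon0, lon1) to the chunks that must be read.
      t0, t1, lat0, lat1, lon0, lon1 = "1980-01-10", "1980-01-20", 30.0, 60.0, 10.0, 40.0
      hits = conn.execute("""SELECT path FROM chunk_index
                             WHERE t_start <= ? AND t_end >= ?
                               AND lat_max >= ? AND lat_min <= ?
                               AND lon_max >= ? AND lon_min <= ?""",
                          (t1, t0, lat0, lat1, lon0, lon1)).fetchall()
      print([p for (p,) in hits])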

  5. Performance Comparison at Mach Numbers 1.8 and 2.0 of Full Scale and Quarter Scale Translating-Spike Inlets

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Dryer, M.; Hearth, D. P.

    1957-01-01

    The performance of a full-scale translating-spike inlet was obtained at Mach numbers of 1.8 and 2.0 and at angles of attack from 0 deg to 6 deg. Comparisons were made between the full-scale production inlet configuration and a geometrically similar quarter-scale model. The inlet pressure-recovery, cowl pressure-distribution, and compressor-face distortion characteristics of the full-scale inlet agreed fairly well with the quarter-scale results. In addition, the results indicated that bleeding around the periphery ahead of the compressor-face station improved pressure recovery and compressor-face distortion, especially at angle of attack.

  6. Scales

    ScienceCinema

    Murray Gibson

    2010-01-08

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
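
    A small illustration of the integer-ratio point above: the just-intonation ratios 3/2 (G/C) and 5/4 (E/C) compared with the corresponding equal-tempered intervals of a modern keyboard (the equal-temperament comparison is an addition for illustration, not part of the talk).

      # Compare simple integer frequency ratios with equal-tempered intervals.
      just = {"G/C": 3 / 2, "E/C": 5 / 4}
      semitones = {"G/C": 7, "E/C": 4}              # semitones above C
      for name, ratio in just.items():
          tempered = 2 ** (semitones[name] / 12)    # equal-temperament frequency ratio
          print(f"{name}: just={ratio:.4f}  tempered={tempered:.4f}  "
                f"difference={100 * (tempered / ratio - 1):+.2f}%")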

  7. Scales

    SciTech Connect

    Murray Gibson

    2007-04-27

    Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).

  8. Experimental, theoretical, and numerical studies of small scale combustion

    NASA Astrophysics Data System (ADS)

    Xu, Bo

    Demand has recently increased for microdevices such as microsatellites, micro aerial vehicles, micro reactors, and micro power generators. The biggest challenge in meeting that demand is obtaining stable and complete combustion at relatively small scales. To gain a fundamental understanding of small scale combustion, in this thesis the thermal and kinetic coupling between the gas phase and the structure at meso and micro scales was studied theoretically, experimentally, and numerically; new stabilization and instability phenomena were identified; and new theories for the dynamic mechanisms of small scale combustion were developed. The reduction of thermal inertia at small scale significantly reduces the response time of the wall and leads to a strong flame-wall coupling and extension of burning limits. Mesoscale flame propagation and extinction in small quartz tubes were studied theoretically, experimentally and numerically. It was found that wall-flame interaction in mesoscale combustion led to two different flame regimes, a heat-loss-dominant fast flame regime and a wall-flame-coupling slow flame regime. The nonlinear transition between the two flame regimes was strongly dependent on the channel width and flow velocity. It is concluded that the existence of multiple flame regimes is an inherent phenomenon in mesoscale combustion. In addition, all practical combustors have variable channel width in the direction of flame propagation. Quasi-steady and unsteady propagation of methane- and propane-air premixed flames in a mesoscale divergent channel was investigated experimentally and theoretically. The emphasis was the impact of variable cross-section area and the flame-wall coupling on the flame transition between different regimes and the onset of flame instability. For the first time, spinning flames were experimentally observed for both lean and rich methane- and propane-air mixtures in a broad range of equivalence ratios. An effective Lewis number

  9. Profiling and Improving I/O Performance of a Large-Scale Climate Scientific Application

    NASA Technical Reports Server (NTRS)

    Liu, Zhuo; Wang, Bin; Wang, Teng; Tian, Yuan; Xu, Cong; Wang, Yandong; Yu, Weikuan; Cruz, Carlos A.; Zhou, Shujia; Clune, Tom; Klasky, Scott

    2013-01-01

    Exascale computing systems are soon to emerge, posing great challenges because of the huge gap between computing and I/O performance. Many large-scale scientific applications play an important role in our daily life. The huge amounts of data generated by such applications require highly parallel and efficient I/O management policies. In this paper, we adopt a mission-critical scientific application, GEOS-5, as a case to profile and analyze the communication and I/O issues that are preventing applications from fully utilizing the underlying parallel storage systems. Through detailed architectural and experimental characterization, we observe that current legacy I/O schemes incur significant network communication overheads and are unable to fully parallelize the data access, thus degrading applications' I/O performance and scalability. To address these inefficiencies, we redesign its I/O framework along with a set of parallel I/O techniques to achieve high scalability and performance. Evaluation results on the NASA Discover cluster show that our optimization of GEOS-5 with ADIOS has led to significant performance improvements compared to the original GEOS-5 implementation.
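
    The redesigned GEOS-5 I/O path is built on ADIOS; as a generic, hedged illustration of the fully parallel data access the paper argues for, the sketch below performs a collective MPI-IO write with mpi4py (the file name and array sizes are arbitrary, and the call pattern is not the actual GEOS-5/ADIOS code).

      # Hedged sketch: collective parallel write with MPI-IO (mpi4py), in the
      # spirit of replacing serialized legacy I/O with fully parallel access.
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, nprocs = comm.Get_rank(), comm.Get_size()

      local_n = 1_000_000                       # elements owned by this rank (arbitrary)
      local = np.full(local_n, rank, dtype=np.float64)

      fh = MPI.File.Open(comm, "field.bin",
                         MPI.MODE_CREATE | MPI.MODE_WRONLY)
      offset = rank * local_n * local.itemsize  # byte offset of this rank's slab
      fh.Write_at_all(offset, local)            # collective write: all ranks participate
      fh.Close()

      if rank == 0:
          print(f"wrote {nprocs * local_n * 8 / 1e6:.1f} MB collectively")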

  10. Investigating Operating System Noise in Extreme-Scale High-Performance Computing Systems using Simulation

    SciTech Connect

    Engelmann, Christian

    2013-01-01

    Hardware/software co-design for future-generation high-performance computing (HPC) systems aims at closing the gap between the peak capabilities of the hardware and the performance realized by applications (application-architecture performance gap). Performance profiling of architectures and applications is a crucial part of this iterative process. The work in this paper focuses on operating system (OS) noise as an additional factor to be considered for co-design. It represents the first step in including OS noise in HPC hardware/software co-design by adding a noise injection feature to an existing simulation-based co-design toolkit. It reuses an existing abstraction for OS noise with frequency (periodic recurrence) and period (duration of each occurrence) to enhance the processor model of the Extreme-scale Simulator (xSim) with synchronized and random OS noise simulation. The results demonstrate this capability by evaluating the impact of OS noise on MPI_Bcast() and MPI_Reduce() in a simulated future-generation HPC system with 2,097,152 compute nodes.
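
    A rough sketch of the noise abstraction described above (recurrence period plus event duration) and of why synchronized versus random noise matters for a collective that must wait for the slowest node; the node count, compute time, and noise parameters below are illustrative and are not xSim settings.

      # Monte Carlo sketch: impact of periodic OS noise on a collective operation.
      # Each node computes for `work` seconds; noise events of length `duration`
      # recur every `period` seconds. The collective finishes with the slowest node.
      import random

      def node_time(work, period, duration, phase):
          # Number of noise events that interrupt this node's compute window.
          hits = int((work + phase) // period) - int(phase // period)
          return work + hits * duration

      def collective_time(nodes, work, period, duration, synchronized):
          phases = [0.0] * nodes if synchronized else [random.uniform(0, period) for _ in range(nodes)]
          return max(node_time(work, period, duration, p) for p in phases)

      random.seed(0)
      nodes = 65_536                               # illustrative; far fewer than the simulated 2,097,152
      work, period, duration = 0.0105, 0.001, 0.000025
      sync = collective_time(nodes, work, period, duration, synchronized=True)
      rand = collective_time(nodes, work, period, duration, synchronized=False)
      print(f"synchronized noise: {sync * 1e3:.3f} ms, random noise: {rand * 1e3:.3f} ms")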

  11. Helicopter blade dynamic loads measured during performance testing of two scaled rotors

    NASA Technical Reports Server (NTRS)

    Berry, John D.

    1987-01-01

    A test to determine the performance differences between the 27-percent-scale models of two rotors for the U.S. Army AH-64 helicopter was conducted in the Langley 14- by 22-Foot Subsonic Tunnel. One rotor, referred to as the baseline rotor, simulated the geometry and dynamic characteristics of the production baseline rotor, and the other rotor, referred to as the advanced rotor, was designed to have improved hover performance. During the performance test, the dynamic pitch-link forces and blade bending and torsion moments were also measured. Dynamic data from the forward flight investigation are reduced and presented. The advanced blade set was designed to have dynamic characteristics similar to those of the baseline rotor so that test conditions would not be limited by potential rotor instability and blade resonances, and so that the measured performance increments could be considered to be due purely to aerodynamic causes. Data show consistent trends with advance ratio for both blade sets with generally higher oscillatory loads occurring for the advanced blade set when compared with the baseline blade set.

  12. Studying Emotional Expression in Music Performance.

    ERIC Educational Resources Information Center

    Gabrielsson, Alf

    1999-01-01

    Explores the importance of emotional expression in music performance. Performers played music to express different emotions and then listening tests were conducted in order to determine whether the intended expressions were perceived. Presents and discusses the results. (CMK)

  13. Ice Accretions and Full-Scale Iced Aerodynamic Performance Data for a Two-Dimensional NACA 23012 Airfoil

    NASA Technical Reports Server (NTRS)

    Addy, Harold E., Jr.; Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Guffond, Didier; Montreuil, Emmanuel; Moens, Frederic

    2016-01-01

    This report documents the data collected during the large wind tunnel campaigns conducted as part of the SUNSET project (StUdies oN Scaling EffecTs due to ice), also known as the Ice-Accretion Aerodynamics Simulation study: a joint effort by NASA, the Office National d'Etudes et Recherches Aérospatiales (ONERA), and the University of Illinois. These data form a benchmark database of full-scale ice accretions and corresponding ice-contaminated aerodynamic performance data for a two-dimensional (2D) NACA 23012 airfoil. The wider research effort also included an analysis of ice-contaminated aerodynamics that categorized ice accretions by aerodynamic effects and an investigation of subscale, low-Reynolds-number ice-contaminated aerodynamics for the NACA 23012 airfoil. The low-Reynolds-number investigation included an analysis of the geometric fidelity needed to reliably assess aerodynamic effects of airfoil icing using artificial ice shapes. Included herein are records of the ice accreted during campaigns in NASA Glenn Research Center's Icing Research Tunnel (IRT). Two different 2D NACA 23012 airfoil models were used during these campaigns: an 18-in. (45.7-cm) chord (subscale) model and a 72-in. (182.9-cm) chord (full-scale) model. The aircraft icing conditions used during these campaigns were selected from the Federal Aviation Administration's (FAA's) Code of Federal Regulations (CFR) Part 25 Appendix C icing envelopes. The records include the test conditions, photographs of the ice accreted, tracings of the ice, and ice depth measurements. Model coordinates and pressure tap locations are also presented. Also included herein are the data recorded during a wind tunnel campaign conducted in the F1 Subsonic Pressurized Wind Tunnel of ONERA. The F1 tunnel is a pressurized, high-Reynolds-number facility that could accommodate the full-scale (72-in. (182.9-cm) chord) 2D NACA 23012 model. Molds were made of the ice accreted during selected test runs of the full-scale model

  14. Performance evaluation of an integrated small-scale SOFC-biomass gasification power generation system

    NASA Astrophysics Data System (ADS)

    Wongchanapai, Suranat; Iwai, Hiroshi; Saito, Motohiro; Yoshida, Hideo

    2012-10-01

    The combination of biomass gasification and high-temperature solid oxide fuel cells (SOFCs) offers great potential as a future sustainable power generation system. In order to provide insights into an integrated small-scale SOFC-biomass gasification power generation system, system simulation was performed under diverse operating conditions. A detailed anode-supported planar SOFC model under co-flow operation and a thermodynamic equilibrium model for biomass gasification were developed and verified against reliable experimental and simulation data. The other peripheral components include three gas-to-gas heat exchangers (HXs), a heat recovery steam generator (HRSG), a burner, and fuel and air compressors. To determine safe operating conditions with high system efficiency, energy and exergy analyses were performed to investigate, through detailed sensitivity analysis, the influence of four key parameters - steam-to-biomass ratio (STBR), SOFC inlet stream temperatures, fuel utilization factor (Uf) and anode off-gas recycle ratio (AGR) - on system performance. Because the SOFC stack accounts for the most expensive part of the initial investment cost, the number of cells required for the SOFC stack is economically optimized as well. The detailed sensitivity analysis shows that increasing STBR positively affects the SOFC while gasifier performance drops. The most preferable operating STBR is 1.5, at which system efficiencies are highest and the number of cells is smallest. Increasing the SOFC inlet temperature negatively impacts system and gasifier performance while SOFC efficiency is slightly increased; the number of cells required for the SOFC is also reduced with increasing SOFC inlet temperature. The system performance is optimized for a Uf of 0.75, at which SOFC and system efficiencies are the highest with the smallest number of cells. The results also show an optimal anode off-gas recycle ratio of 0.6. Regarding the increase of anode off-gas recycle ratio

  15. Performance assessment and optimization of an irreversible nano-scale Stirling engine cycle operating with Maxwell-Boltzmann gas

    NASA Astrophysics Data System (ADS)

    Ahmadi, Mohammad H.; Ahmadi, Mohammad-Ali; Pourfayaz, Fathollah

    2015-09-01

    Developing new technologies like nanotechnology improves the performance of the energy industries. Consequently, emerging new groups of thermal cycles at the nano scale can revolutionize the future of energy systems. This paper presents a thermodynamic study of a nano-scale irreversible Stirling engine cycle with the aim of optimizing the performance of the Stirling engine cycle. In the Stirling engine cycle the working fluid is an ideal Maxwell-Boltzmann gas. Moreover, two different strategies are proposed for the multi-objective optimization problem, and the outcomes of each strategy are evaluated separately. The first strategy maximizes the ecological coefficient of performance (ECOP), the dimensionless ecological function (ecf) and the dimensionless thermo-economic objective function (F). The second strategy maximizes the thermal efficiency (η), the dimensionless ecological function (ecf) and the dimensionless thermo-economic objective function (F). All the strategies in the present work are executed via multi-objective evolutionary algorithms based on the NSGA-II method. To reach a final answer in each strategy, three well-known decision makers are applied. Lastly, deviations of the outcomes obtained with each strategy and each decision maker are evaluated separately.
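
    The full NSGA-II machinery is beyond a short example, but its core step, extracting the non-dominated (Pareto) set from a population of candidate designs, can be sketched directly; the objective triples below are invented stand-ins for (ECOP, ecf, F), not results from the paper.

      # Hedged sketch: non-dominated (Pareto) filtering, the core idea behind
      # the NSGA-II selection used in the multi-objective optimization above.
      def dominates(a, b):
          """True if design `a` is at least as good as `b` in every objective
          (all maximized here) and strictly better in at least one."""
          return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

      def pareto_front(designs):
          return [d for d in designs
                  if not any(dominates(other, d) for other in designs if other is not d)]

      # Hypothetical candidate designs scored as (ECOP, ecf, F), all to be maximized.
      candidates = [(0.42, 0.31, 0.55), (0.40, 0.35, 0.52),
                    (0.38, 0.30, 0.50), (0.45, 0.28, 0.57)]
      for design in pareto_front(candidates):
          print(design)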

  16. Some results of the testing of a full-scale Ogee tip helicopter rotor; acoustics, loads, and performance

    NASA Technical Reports Server (NTRS)

    Mantay, W. R.; Shidler, P. A.; Campbell, R. L.

    1977-01-01

    Full-scale tests were utilized to investigate the effect of the Ogee tip on helicopter rotor acoustics, performance, and loads. Two facilities were used for this study: the Langley whirl tower and a UH-1H helicopter. The test matrix for hover on the whirl tower involved thrust values from 0 to 44,480 N (10,000 lbs) at several tip Mach numbers for both standard and Ogee rotors. The full-scale testing on the UH-1H encompassed the major portion of the flight envelope for that aircraft. Both near-field acoustic measurements as well as far-field flyover data were obtained for both the Ogee and standard rotors. Data analysis of the whirl-tower test shows that the Ogee tip does significantly diffuse the tip vortex while providing some improvement in hover performance. Flight testing of both rotors indicates that the strong impulsive noise signature of the standard rotor can be reduced with the Ogee rotor. Forward flight performance was significantly improved with the Ogee configuration for a large number of flight conditions. Further, rotor control loads and vibrations were reduced through use of this advanced tip rotor.

  17. Defining Anaerobic Digestion Stability-Full Scale Study

    NASA Astrophysics Data System (ADS)

    Demitry, M. E., Sr.

    2014-12-01

    A full-scale anaerobic digester receiving a mixture of primary and secondary sludge was monitored for one hundred days. Chemical oxygen demand (COD) and volatile solids (VS) mass balances were conducted to evaluate the stability of the digester and its capability of producing methane gas. The COD mass balance could account for nearly 90% of the methane gas produced, while the VS mass balance showed that 91% of the organic matter removed resulted in biogas formation. Other parameters monitored included pH, alkalinity, VFA, and propionic acid. The values of these parameters showed that steady state had occurred. Finally, at mesophilic temperature and at steady-state performance, the anaerobic digester stability was defined as a constant ratio of methane produced per substrate of ΔVS (average ratio = 0.404 L/g). This ratio can be used as a universal metric to determine anaerobic digester stability in an easy and inexpensive way.
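
    A small sketch of the stability metric defined above: track the ratio of methane produced to VS destroyed and flag the digester as stable when the ratio stays near a constant value; the 0.404 L/g reference comes from the abstract, while the tolerance band and daily totals are illustrative assumptions.

      # Hedged sketch: digester stability as a near-constant CH4 / delta-VS ratio.
      REFERENCE_RATIO = 0.404   # L CH4 per g VS destroyed (reported average)
      TOLERANCE = 0.10          # +/-10% band; illustrative choice, not from the study

      def is_stable(methane_l, vs_destroyed_g):
          ratios = [m / v for m, v in zip(methane_l, vs_destroyed_g)]
          stable = all(abs(r - REFERENCE_RATIO) / REFERENCE_RATIO <= TOLERANCE for r in ratios)
          return stable, ratios

      # Hypothetical daily totals (litres of CH4, grams of VS destroyed).
      methane = [4100, 3950, 4200, 4050]
      vs_dest = [10100, 9900, 10400, 10050]
      stable, ratios = is_stable(methane, vs_dest)
      print([round(r, 3) for r in ratios], "stable" if stable else "unstable")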

  18. Dynamic interaction numerical models in the time domain based on the high performance scaled boundary finite element method

    NASA Astrophysics Data System (ADS)

    Li, Jianbo; Liu, Jun; Lin, Gao

    2013-12-01

    Consideration of structure-foundation-soil dynamic interaction is a basic requirement in the evaluation of the seismic safety of nuclear power facilities. An efficient and accurate dynamic interaction numerical model in the time domain has become an important topic of current research. In this study, the scaled boundary finite element method (SBFEM) is improved for use as an effective numerical approach with good application prospects. This method has several advantages, including dimensionality reduction and accuracy of the radial analytical solution, and, unlike other boundary element methods, it does not require a fundamental solution. This study focuses on establishing a high-performance scaled boundary finite element interaction analysis model in the time domain based on the acceleration unit-impulse response matrix, in which several new solution techniques, such as a dimensionless method to solve the interaction force, are applied to improve the numerical stability with actual soil parameters and reduce the amount of calculation. Finally, the feasibility of the time domain methods is illustrated by the response of the nuclear power structure, and the accuracy of the algorithms is verified dynamically by comparison with the refinement of a large-scale viscoelastic soil model.

  19. Towards large-scale multi-socket, multicore parallel simulations: Performance of an MPI-only semiconductor device simulator

    NASA Astrophysics Data System (ADS)

    Lin, Paul T.; Shadid, John N.

    2010-09-01

    This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a set of multi-socket, multicore architectures with nonuniform memory access (NUMA) compute nodes. These multicore architectures include two linux clusters with multicore processors: a quad-socket, quad-core AMD Opteron platform and a dual-socket, quad-core Intel Xeon Nehalem platform; and a dual-socket, six-core AMD Opteron workstation. These platforms have complex memory hierarchies that include local core-based cache, local socket-based memory, access to memory on the same mainboard from another socket, and then memory across network links to different nodes. The specific semiconductor device simulator used in this study employs a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling results presented include a large-scale problem of 100+ million unknowns on 4096 cores and a comparison with the Cray XT3/4 Red Storm capability platform. Although the MPI-only device simulator employed for this work can take advantage of all the cores of quad-core and six-core CPUs, the efficiency of the linear system solve is decreasing with increased core count and eventually a different programming paradigm will be needed.
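
    As a brief aside on the scaling metrics discussed above, the sketch below computes strong-scaling speedup and parallel efficiency from wall-clock times; the timings are hypothetical and simply mimic the reported trend of decreasing solver efficiency with core count.

      # Hedged sketch: strong-scaling speedup and parallel efficiency from wall times.
      baseline_cores = 256
      timings = {256: 412.0, 512: 215.0, 1024: 118.0, 2048: 69.0, 4096: 44.0}  # seconds, hypothetical

      t_ref = timings[baseline_cores]
      for cores, t in sorted(timings.items()):
          speedup = t_ref / t
          efficiency = speedup * baseline_cores / cores   # relative to the baseline run
          print(f"{cores:5d} cores: speedup x{speedup:5.2f}, efficiency {100 * efficiency:5.1f}%")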

  20. Study of High-Performance Coronagraphic Techniques

    NASA Astrophysics Data System (ADS)

    Tolls, Volker; Aziz, M. J.; Gonsalves, R. A.; Korzennik, S. G.; Labeyrie, A.; Lyon, R. G.; Melnick, G. J.; Somerstein, S.; Vasudevan, G.; Woodruff, R. A.

    2007-05-01

    We will provide a progress report on our study of high-performance coronagraphic techniques. At SAO we have set up a testbed to test coronagraphic masks and to demonstrate Labeyrie's multi-step speckle reduction technique. This technique expands the general concept of a coronagraph by incorporating a speckle corrector (phase or amplitude) and a second occulter for speckle light suppression. The testbed consists of a coronagraph with high-precision optics (2-inch spherical mirrors with lambda/1000 surface quality), lasers simulating the host star and the planet, and a single Labeyrie correction stage with a MEMS deformable mirror (DM) for the phase correction. The correction function is derived from images taken in- and slightly out-of-focus using phase diversity. The testbed is operational and awaiting coronagraphic masks. The testbed control software for operating the CCD camera, the translation stage that moves the camera in- and out-of-focus, the wavefront recovery (phase diversity) module, and DM control is under development. We are also developing coronagraphic masks in collaboration with Harvard University and Lockheed Martin Corp. (LMCO). The development at Harvard utilizes a focused ion beam system to mill masks out of absorber material, and the LMCO approach uses patterns of dots to achieve the desired mask performance. We will present results of both investigations, including test results from the first generation of LMCO masks obtained with our high-precision mask scanner. This work was supported by NASA through grant NNG04GC57G, through SAO IR&D funding, and by Harvard University through the Research Experience for Undergraduate Program of Harvard's Materials Science and Engineering Center. Central facilities were provided by Harvard's Center for Nanoscale Systems.

  1. Cobalt oxide hollow microspheres with micro- and nano-scale composite structure: Fabrication and electrochemical performance

    NASA Astrophysics Data System (ADS)

    Tao, Feifei; Gao, Cuiling; Wen, Zhenhai; Wang, Qiang; Li, Jinghong; Xu, Zheng

    2009-05-01

    Co3O4 hollow microspheres with a micro- and nano-scale composite structure, self-assembled from nanosheets, were successfully fabricated by a template-free wet-chemical approach. This method is simple, facile and effective. The Co3O4 hollow microspheres, with good purity and homogeneous size, were well characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Fourier transform IR (FTIR), thermogravimetric analysis (TGA) and inductively coupled plasma atomic emission spectrometry (ICP). The formation mechanism was studied in detail. The micro- and nano-scale composite structure constructed from the porous nanosheets helps improve the electrochemical properties of the Co3O4 hollow microspheres. The high discharge capacity of 1048 mAh g-1 indicates their potential application as electrode materials for Li-ion batteries.

  2. Holistic Watershed-Scale Approach for Studying Agricultural Chemicals

    NASA Astrophysics Data System (ADS)

    Capel, P. D.; Domagalski, J. L.

    2006-05-01

    The USGS National Water-Quality Assessment (NAWQA) Program studied the water quality of 51 areas across the United States during its first decade (1991-2001). Analyses of results from that phase of the NAWQA Program indicated that detailed studies of the processes affecting water quality could aid in the interpretation of these data, help to determine the direction and scope of future monitoring studies, and add to the understanding of the sources, transport and fate of non-point source chemicals, such as those from agriculture. Now in the second decade of investigations, the NAWQA Program has initiated new process-based detailed studies to increase our understanding at the scale of a small watershed (about 3-15 square kilometers), nested within the larger basins studied during the first decade. The holistic, mass-budget approach adopted for small agricultural watersheds includes process studies and measures water and chemicals in the atmosphere, surface water, tile drains, overland flow, and within various sub-surface environments including the vadose, saturated, and hyporheic zones. The primary chemicals of interest were nutrients (nitrogen and phosphorus), the triazine and acetanilide herbicides, and the organophosphorus insecticides. Extensive field observations were made, and numerical models were developed to simulate important environmental compartments and interfaces associated with the transport and fate of agricultural chemicals. It is well recognized that these field measurements and simulations cannot achieve a full mass budget at this scale, but the approach provides a useful means for comparing various processes in different environmental settings. The results gained using this approach will add to the general knowledge of environmental transport and fate processes, and have transfer value to unstudied areas and different scales of investigation. The five initial study areas, started in 2002, included watersheds in California, Indiana

  3. Socio-Economic Status and Language Acquisition: Children's Performance on the New Reynell Developmental Language Scales

    ERIC Educational Resources Information Center

    Letts, Carolyn; Edwards, Susan; Sinka, Indra; Schaefer, Blanca; Gibbons, Wendy

    2013-01-01

    Background: Several studies in recent years have indicated a link between socio-economic status (SES) of families and children's language development, including studies that have measured children's language through formal standardized test procedures. High numbers of children with low performance have been found in lower socio-economic groups in…

  4. Numerical study of the small scale structures in Boussinesq convection

    NASA Technical Reports Server (NTRS)

    Weinan, E.; Shu, Chi-Wang

    1992-01-01

    Two-dimensional Boussinesq convection is studied numerically using two different methods: a filtered pseudospectral method and a high order accurate Essentially Nonoscillatory (ENO) scheme. The issue of whether finite time singularity occurs for initially smooth flows is investigated. The numerical results suggest that the collapse of the bubble cap is unlikely to occur in resolved calculations. The strain rate corresponding to the intensification of the density gradient across the front saturates at the bubble cap. We also found that the cascade of energy to small scales is dominated by the formation of thin and sharp fronts across which density jumps.

  5. The scaling of performance and losses in miniature internal combustion engines

    NASA Astrophysics Data System (ADS)

    Menon, Shyam Kumar

    Miniature glow-ignition internal combustion (IC) piston engines are an off-the-shelf technology that could dramatically increase the endurance of miniature electric power supplies and the range and endurance of small unmanned air vehicles, provided their overall thermodynamic efficiencies can be increased to 15% or better. This thesis presents the first comprehensive analysis of small (<500 g) piston engine performance. A unique dynamometer system is developed that is capable of making reliable measurements of engine performance and losses in these small engines. Methodologies are also developed for measuring volumetric, heat transfer, exhaust, mechanical, and combustion losses. These instruments and techniques are used to investigate the performance of seven single-cylinder, two-stroke, glow-fueled engines ranging in size from 15 to 450 g (0.16 to 7.5 cm3 displacement). Scaling rules for power output, overall efficiency, and normalized power are developed from the data. These will be useful to developers of micro-air vehicles and miniature power systems. The data show that the minimum length scale of a thermodynamically viable piston engine based on present technology is approximately 3 mm. Incomplete combustion is the most important challenge as it accounts for 60-70% of total energy losses. Combustion losses are followed in order of importance by heat transfer, sensible enthalpy, and friction. A net heat release analysis based on in-cylinder pressure measurements suggests that a two-stage combustion process occurs at low engine speeds and equivalence ratios close to 1. Different theories based on burning mode and reaction kinetics are proposed to explain the observed results. High speed imaging of the combustion chamber suggests that a turbulent premixed flame with its origin in the vicinity of the glow plug is the primary driver of combustion. Placing miniature IC engines on a turbulent combustion regime diagram shows that they operate in the 'flamelet in eddy

  6. 100 Area groundwater biodenitrification bench-scale treatability study procedures

    SciTech Connect

    Peyton, B.M.; Martin, K.R.

    1993-05-01

    This document describes the methodologies and procedures for conducting the bench-scale biodenitrification treatability tests at Pacific Northwest Laboratory (PNL). Biodenitrification is the biological conversion of nitrate and nitrite to gaseous nitrogen. The tests will use statistically designed batch studies to determine if biodenitrification can reduce residual nitrate concentrations to 45 mg/L, the current maximum contaminant level (MCL). These tests will be carried out in anaerobic flasks with a carbon source added to demonstrate nitrate removal. At the pilot scale, an incremental amount of additional carbon will be required to remove the small amount of oxygen present in the incoming groundwater. These tests will be conducted under the guidance of Westinghouse Hanford Company (WHC) and the 100-HR-3 Groundwater Treatability Test Plan (DOE/RL-92-73) and the Treatability Study Program Plan (DOE/RL-92-48) using groundwater from 100-HR-3. In addition to the procedures, requirements for safety, quality assurance, reporting, and schedule are given. Appendices include analytical procedures, a Quality Assurance Project Plan, a Health and Safety Plan, and applicable Material Safety Data Sheets. The procedures contained herein are designed specifically for the 100-HR-3 Groundwater Treatability Test Plan, and while the author believes that the methods described herein are scientifically valid, the procedures should not be construed or mistaken to be generally applicable to any other treatability study.

  7. Pattern Scaling for Developing Change Scenarios in Water Supply Studies

    NASA Astrophysics Data System (ADS)

    Anandhi, A.; Pierson, D.; Frie, A.

    2014-12-01

    Change factor methodology (CFM), or delta change factor methodology, is a type of pattern scaling. Although a variety of methods are available to develop scenarios, CFMs are widely used for their ease and speed of application and their capability to directly scale local data according to changes suggested by the global climate model (GCM) scenarios. Change factors (CFs) can be calculated and used in a number of ways to estimate future climate scenarios, but no clear guidelines are available in the literature to decide which methodologies are most suitable for different applications. This study compares and contrasts several categories of CFM (additive versus multiplicative and single versus multiple) for a number of climate variables. The study employs several theoretical examples as well as an applied study from the New York City water supply. Results show that in cases where the frequency distribution of the GCM baseline climate is close to the frequency distribution of the observed climate, or when the frequency distribution of the GCM future climate is close to the frequency distribution of the GCM baseline climate, additive and multiplicative single CFMs provide comparable results. Two options to guide the choice of CFM are suggested: the first is a detailed methodological analysis for choosing the most appropriate CFM, and the second is a default method for circumstances in which a detailed methodological analysis is too cumbersome.
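
    A minimal sketch of the single additive and multiplicative change factors contrasted above: the CF is computed from GCM baseline and future periods and then applied to an observed local series; the numbers are placeholders, not New York City water supply data.

      # Hedged sketch: single additive vs. multiplicative change factors (CFs).
      import numpy as np

      gcm_baseline = np.array([2.1, 3.4, 1.8, 2.9])   # e.g. seasonal precipitation, GCM baseline period
      gcm_future   = np.array([2.6, 3.1, 2.4, 3.3])   # same GCM cells, future period
      observed     = np.array([2.0, 3.6, 1.5, 2.7])   # local station record

      cf_add  = gcm_future.mean() - gcm_baseline.mean()   # additive CF
      cf_mult = gcm_future.mean() / gcm_baseline.mean()   # multiplicative CF

      scenario_add  = observed + cf_add     # future local scenario, additive scaling
      scenario_mult = observed * cf_mult    # future local scenario, multiplicative scaling
      print(f"additive CF = {cf_add:+.2f}, multiplicative CF = {cf_mult:.3f}")
      print("additive scenario:      ", np.round(scenario_add, 2))
      print("multiplicative scenario:", np.round(scenario_mult, 2))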

  8. Performance on selected visual and auditory subtests of the Wechsler Memory Scale-Fourth Edition during laboratory-induced pain.

    PubMed

    Etherton, Joseph L; Tapscott, Brian E

    2015-01-01

    Although chronic pain patients commonly report problems with concentration and memory, recent research indicates that induced pain alone causes little or no impairment on several Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests, suggesting that cognitive complaints in chronic pain may be attributable to factors other than pain. The current studies examined potential effects of induced pain on Wechsler Memory Scale-Fourth Edition (WMS-IV) visual working memory index (VWM) subtests (Experiment 1, n = 32) and on the immediate portions of WMS-IV auditory memory (IAM) subtests (Experiment 2, n = 55). In both studies, participants were administered one of two subtests (Symbol Span or Spatial Addition for Experiment 1; Logical Memory or Verbal Paired Associates for Experiment 2) normally and were then administered the alternate subtest while experiencing either cold pressor pain induction or a nonpainful control condition. Results indicate that induced pain in nonclinical volunteers did not impair performance on either VWM or IAM performance, suggesting that pain alone does not account for complaints or deficits in these domains in chronic pain patients. Nonpainful variables such as sleep deprivation or emotional disturbance may be responsible for reported cognitive complaints in chronic pain patients. PMID:25655774

  9. Large-scale actuating performance analysis of a composite curved piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Chung, Soon Wan; Hwang, In Seong; Kim, Seung Jo

    2006-02-01

    In this paper, the electromechanical displacements of curved piezoelectric actuators composed of PZT ceramic and laminated composite materials are calculated on the basis of high performance computing technology and the optimal configuration of the composite curved actuator is examined. To accurately predict the local pre-stress in the device due to the mismatch in the coefficients of thermal expansion, carbon/epoxy and glass/epoxy as well as PZT ceramic are numerically modelled by using hexahedral solid elements. Because the modeling of these thin layers increases the number of degrees of freedom, large-scale structural analyses are performed using the PEGASUS supercomputer, which is installed in our laboratory. In the first stage, the curved shape of the actuator and the internal stress in each layer are obtained by cured curvature analysis. Subsequently, the displacement due to the piezoelectric force (which results from the applied voltage) is also calculated. The performance of the composite curved actuator is investigated by comparing the displacements obtained by variation of the thickness and the elastic modulus of laminated composite layers. In order to consider the finite deformation in the first stage of the analysis and include the pre-stress due to the curing process in the second stage, nonlinear finite element analyses are carried out.

  10. Large-scale hybrid motor performance and designs for use in launch vehicle applications

    NASA Astrophysics Data System (ADS)

    Flittie, K. J.; Estey, P.

    1993-11-01

    The American Rocket Company has developed two large-scale liquid oxygen/polybutadiene hybrid rocket motors at 334,000 N (75,000 lbf) and 1,112,000 N (250,000 lbf) thrust. These hybrid rocket motors or derivatives of these motors can be used as strap-on boosters to replace or upgrade the existing strap-on boosters for the fleet of U.S. launch vehicles and for the planned next generation launch vehicle. Hybrid rocket boosters offer a new solution for boost propulsion since hybrids solve many of the safety and environmental concerns facing solid rocket motor manufacture and operation, yet deliver performance comparable to liquid rocket engines with much less hardware and operational complexity. This paper presents motor performance data from AMROC's 334,000 N and 1,112,000 N thrust hybrid rocket motors. A description of these hybrid motors, their performance specifications, and the key enabling technologies that have been developed at AMROC is presented. The design and development approach for an 850K thrust hybrid motor is described.

  11. Assessment of Integer Precise Point Positioning performances at different temporal scales

    NASA Astrophysics Data System (ADS)

    Fund, F.; Perosanz, F.; Mercier, F.; Loyer, S.

    2012-04-01

    Recent improvements in Precise Point Positioning (PPP), including ambiguity resolution (Integer PPP; IPPP), make this technique a potential alternative to the classical differential approach. Single-epoch positioning is also a powerful strategy for screening GPS observation data. If all local earth deformations are correctly taken into account, residuals of position time series might be used to assess the processing quality in terms of receiver performance and local environment, constellation orbit and clock error projection, and pertinence of processing options. The aim of this presentation is to quantify current performances of PPP and IPPP at various temporal and spatial scales. We present what users should expect with respect to the classical double-difference approach and what the current noise characteristics of residual PPP time series are. We use several geodetic GPS receivers located at different latitudes and subject to different multipath situations and meteorological conditions. First, every situation is evaluated in terms of PPP performance with respect to the double-difference approach. Results are presented as a function of batch durations from hours to several days. Then, we show that GPS IPPP time series still suffer from various spurious signals (random, periodic, jumps...). Sometimes, errors clearly have a sidereal orbital period, and a frequency analysis is provided. Also, artificial "midnight jumps" can be introduced when processing 24-hour batch solutions.

  12. Characterizing the performance of ecosystem models across time scales: A spectral analysis of the North American Carbon Program site-level synthesis

    SciTech Connect

    Dietze, Michael; Vargas, Rodrigo; Richardson, Andrew D.; Stoy, Paul C.; Barr, Alan; Anderson, Ryan; Arain, M. A.; Baker, Ian; Black, T. Andrew; Chen, Jing Ming; Ciais, Philippe; Flanagan, Lawrence; Gough, Christopher; Grant, R. F.; Hollinger, D.; Izaurralde, Roberto C.; Kucharik, Chris; Lafleur, Peter; Liu, Shuguang; Lokupitiya, Erandathie; Luo, Yiqi; Munger, J. W.; Peng, Changhui; Poulter, Benjamin; Price, David T.; Ricciuto, Daniel M.; Riley, William; Sahoo, Alok Kumar; Schaefer, Kevin; Suyker, Andrew E.; Tian, Hanqin; Tonitto, Christine; Verbeeck, Hans; Verma, Shashi B.; Wang, Weifeng; Weng, Ensheng

    2011-12-20

    Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Most assessments of model performance occur at individual temporal scales, but ecosystems respond to drivers at multiple time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the identification of the dominant time scales contributing to model performance in the frequency domain. In this study we used wavelet analyses to synthesize the performance of twenty-one ecosystem models at nine eddy-covariance towers as part of the North American Carbon Program's site-level inter-comparison. This study expands upon previous single-site and single-model analyses to determine what patterns of model failure are consistent across a diverse range of models and sites.
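
    A rough sketch of the spectral idea above using a discrete wavelet transform: decompose observed and modeled flux series and compare the error energy scale by scale. PyWavelets is assumed to be available, and the synthetic series below stand in for tower and model data; the study itself used wavelet analyses of eddy-covariance fluxes, not this toy signal.

      # Hedged sketch: compare model-data mismatch by time scale with wavelets.
      import numpy as np
      import pywt

      rng = np.random.default_rng(1)
      t = np.arange(4096)                                   # e.g. half-hourly time steps
      observed = np.sin(2 * np.pi * t / 48) + 0.3 * np.sin(2 * np.pi * t / (48 * 365)) \
                 + 0.2 * rng.standard_normal(t.size)        # synthetic "tower" flux
      modeled  = 0.8 * np.sin(2 * np.pi * t / 48 + 0.2) + 0.3 * np.sin(2 * np.pi * t / (48 * 365))

      obs_coeffs = pywt.wavedec(observed, "db4", level=6)
      mod_coeffs = pywt.wavedec(modeled, "db4", level=6)

      # Error energy per decomposition level (approximation first, finest detail last).
      for i, (co, cm) in enumerate(zip(obs_coeffs, mod_coeffs)):
          err = np.sum((co - cm) ** 2)
          label = "approx A6" if i == 0 else f"detail D{7 - i}"
          print(f"{label:>9}: error energy = {err:8.1f}")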

  13. EFRT M-12 Issue Resolution: Comparison of Filter Performance at PEP and CUF Scale

    SciTech Connect

    Daniel, Richard C.; Billing, Justin M.; Bontha, Jagannadha R.; Brown, Christopher F.; Eslinger, Paul W.; Hanson, Brady D.; Huckaby, James L.; Karri, Naveen K.; Kimura, Marcia L.; Kurath, Dean E.; Minette, Michael J.

    2009-08-13

    Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Hanford Tank Waste Treatment and Immobilization Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed and constructed and is to be operated as part of a plan to respond to issue M12, Undemonstrated Leaching Processes. The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP replicates the WTP leaching processes using prototypic equipment and control strategies. The PEP also includes non-prototypic ancillary equipment to support the core processing. Two operating scenarios are currently being evaluated for the ultrafiltration process (UFP) and leaching operations. The first scenario has caustic leaching performed in the UFP-2 ultrafiltration feed vessels (i.e., vessel UFP-VSL-T02A in the PEP and vessels UFP-VSL-00002A and B in the WTP PTF). The second scenario has caustic leaching conducted in the UFP-1 ultrafiltration feed-preparation vessels (i.e., vessels UFP-VSL-T01A and B in the PEP; vessels UFP-VSL-00001A and B in the WTP PTF). In both scenarios, 19-M sodium hydroxide solution (NaOH, caustic) is added to the waste slurry in the vessels to leach solid aluminum compounds (e.g., gibbsite, boehmite). Caustic addition is followed by a heating step that uses direct injection of steam to accelerate the leach process. Following the caustic leach, the vessel contents are cooled using vessel cooling jackets and/or external heat exchangers. The main difference between the two scenarios is that for leaching in UFP1, the 19-M NaOH is added to un-concentrated waste slurry (3 to 8 wt% solids), while for leaching in UFP2, the slurry is concentrated to nominally 20 wt

  14. Experimental Study on Revetec Engine Cam Performance

    NASA Astrophysics Data System (ADS)

    Mohyeldin Gasim, Maisara; Giok Chui, Lee; Anwar, Khirul Azhar bin

    2012-09-01

    In the Revetec engine, a three-lobed cam replaces the crankshaft to convert the reciprocating motion of the engine piston into a rotating motion in the drive line. Since the cam controls the piston movement, the cam profile has a great effect on engine performance. In this paper an experimental study was conducted on a three-lobed cam with a cycloidal motion profile but with different ratios between the base circle radius of the cam and the radius of the roller follower. DEWESoft was used to find the displacement and the vibration of the piston, and to compare the actual results from the test with the theoretical results from the cam profile equation. The results showed that there are periods of lost contact between the follower and the cam, as well as increased vibration, when the ratio between the base circle radius of the cam and the radius of the roller follower is less than a certain value. The suggested ratio between the cam and follower radii is more than 2:1.
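
    The cam profile equation itself is not reproduced in the abstract; as a sketch, the standard cycloidal motion law on which such profiles are based (rise h over cam angle beta) is shown below, with example numbers chosen only for illustration.

      # Hedged sketch: cycloidal motion law for follower displacement on a cam.
      # s(theta) = h * (theta/beta - sin(2*pi*theta/beta) / (2*pi)) for 0 <= theta <= beta
      import math

      def cycloidal_displacement(theta, h, beta):
          x = theta / beta
          return h * (x - math.sin(2 * math.pi * x) / (2 * math.pi))

      def cycloidal_velocity(theta, h, beta, omega):
          x = theta / beta
          return h * omega / beta * (1 - math.cos(2 * math.pi * x))

      h = 20.0e-3                # follower rise, m (illustrative)
      beta = math.radians(120)   # rise duration per lobe, rad (illustrative)
      omega = 100.0              # cam angular speed, rad/s (illustrative)
      for deg in range(0, 121, 30):
          th = math.radians(deg)
          print(f"theta={deg:3d} deg  s={cycloidal_displacement(th, h, beta) * 1e3:6.3f} mm  "
                f"v={cycloidal_velocity(th, h, beta, omega):6.3f} m/s")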

  15. Intermediate-scale Fire Performance of Composite Panels under Varying Loads

    SciTech Connect

    Brown, Alexander; Jernigan, Dann A.; Dodd, Amanda B.

    2015-04-01

    New aircraft are being designed with increasing quantities of composite materials used in their construction. Different from the more traditional metals, composites have a higher propensity to burn. This presents a challenge to transportation safety analyses, as the aircraft structure now represents an additional fuel source involved in the fire scenario. Most of the historical fire testing of composite materials is aimed at studying kinetics, flammability or yield strength under fire conditions. Most of this testing is small-scale. Heterogeneous reactions are often length-scale dependent, and this is thought to be particularly true for composites, which exhibit significant microscopic dynamics that can affect macro-scale behavior. We have designed a series of tests to evaluate composite materials under various structural loading conditions with a consistent thermal condition. We have measured mass loss, heat flux, and temperature throughout the experiments. Several types of panels have been tested, including simple composite panels and sandwich panels. The main objective of the testing was to understand the importance of the structural loading on a composite to its behavior in response to fire-like conditions. During flaming combustion at early times, there are some features of the panel decomposition that are unique to the type of loading imposed on the panels. At load levels tested, fiber reaction rates at later times appear to be independent of the initial structural loading.

  16. Performance and scaling effects in a multilayer microfluidic extracorporeal lung oxygenation device

    PubMed Central

    Kniazeva, Tatiana; Epshteyn, Alla A.; Hsiao, James C.; Kim, Ernest S.; Kolachalama, Vijaya B.; Charest, Joseph L.

    2012-01-01

    Microfluidic fabrication technologies are emerging as viable platforms for extracorporeal lung assist devices and oxygenators for cardiac surgical support and critical care medicine, based in part on their ability to more closely mimic the architecture of the human vasculature than existing technologies. In comparison with current hollow fiber oxygenator technologies, microfluidic systems have more physiologically-representative blood flow paths, smaller cross section blood conduits and thinner gas transfer membranes. These features can enable smaller device sizes and a reduced blood volume in the oxygenator, enhanced gas transfer efficiencies, and may also reduce the tendency for clotting in the system. Several critical issues need to be addressed in order to advance this technology from its current state and implement it in an organ-scale device for clinical use. Here we report on the design, fabrication and characterization of multilayer microfluidic oxygenators, investigating scaling effects associated with fluid mechanical resistance, oxygen transfer efficiencies, and other parameters in multilayer devices. Important parameters such as the fluidic resistance of interconnects are shown to become more predominant as devices are scaled towards many layers, while other effects such as membrane distensibility become less significant. The present study also probes the relationship between blood channel depth and membrane thickness on oxygen transfer, as well as the rate of oxygen transfer on the number of layers in the device. These results contribute to our understanding of the complexity involved in designing three-dimensional microfluidic oxygenators for clinical applications. PMID:22418858
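
    A rough sketch of the scaling effect discussed above, approximating each shallow blood layer as a slit channel (R is roughly 12*mu*L/(w*h^3) for w much larger than h), with N layers in parallel and a shared interconnect in series; the dimensions are illustrative and are not the device geometry.

      # Hedged sketch: how interconnect resistance grows in importance as
      # layers are stacked in parallel (slit-channel approximation, w >> h).
      MU = 3.5e-3          # blood viscosity, Pa.s (approximate)

      def channel_resistance(length, width, height):
          return 12 * MU * length / (width * height ** 3)   # hydraulic resistance, Pa.s/m^3

      layer = channel_resistance(length=0.03, width=0.02, height=50e-6)          # one blood layer
      interconnect = channel_resistance(length=0.01, width=2e-3, height=500e-6)  # shared manifold

      for n_layers in (1, 2, 4, 8, 16, 32):
          parallel_layers = layer / n_layers              # identical layers in parallel
          total = parallel_layers + interconnect          # interconnect in series with the stack
          frac = interconnect / total
          print(f"{n_layers:2d} layers: interconnect = {100 * frac:5.1f}% of total resistance")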

  17. Metal contact effect on the performance and scaling behavior of carbon nanotube thin film transistors.

    PubMed

    Xia, Jiye; Dong, Guodong; Tian, Boyuan; Yan, Qiuping; Zhang, Han; Liang, Xuelei; Peng, Lianmao

    2016-05-21

    Metal-tube contact is known to play an important role in carbon nanotube field-effect transistors (CNT-FETs) which are fabricated on individual CNTs. Less attention has been paid to the contact effect in network type carbon nanotube thin film transistors (CNT-TFTs). In this study, we demonstrate that contact plays an even more important role in CNT-TFTs than in CNT-FETs. Although the Schottky barrier height at the metal-tube contact can be tuned by the work function of the metal, similar to the case in CNT-FETs, the contact resistance (Rc) forms a much higher proportion of the total resistance in CNT-TFTs. Interestingly, the contact resistivity was found to increase with channel length, which is a consequence of the percolating nature of the transport in CNT films, and this behavior does not exist in CNT-FETs and normal 2D Ohmic conductors. Electrical transport in CNT-TFTs has been predicted to scale with channel length by stick percolation theory. However, the scaling behavior is also impacted, or even covered up by the effect of Rc. Once the contact effect is excluded, the covered scaling behavior can be revealed correctly. A possible way of reducing Rc in CNT-TFTs was proposed. We believe the findings in this paper will strengthen our understanding of CNT-TFTs, and even accelerate the commercialization of CNT-TFT technology. PMID:27121370

  18. Impact of the Pedestal on Global Performance and Confinement Scalings in I-mode

    NASA Astrophysics Data System (ADS)

    Walk, John; Hughes, Jerry; Hubbard, Amanda; Whyte, Dennis; White, Anne; Alcator C-Mod Team

    2015-11-01

    The I-mode is a novel high-confinement regime pioneered on Alcator C-Mod, notable for its strong temperature pedestal without the accompanying density pedestal found in conventional H-modes. This separation in transport channels gives the desired improved energy confinement while maintaining low particle confinement, avoiding excessive impurity accumulation. Moreover, I-mode operation is naturally free of deleterious Edge-Localized Modes (ELMs). Recent experiments on Alcator C-Mod have characterized the pedestal structure in I-mode. The impact of the pedestal response (particularly to fueling and heating power) and core profile stiffness on global performance and confinement have demonstrated confinement metrics competitive with H-mode operation on Alcator C-Mod, and consistent with concepts for I-mode access & operation on ITER. Following the practice of the ITER89 and ITER98 scaling laws for L- and H-mode energy confinement, an initial, illustrative attempt at an I-mode confinement scaling has also been developed. The initial characterization from C-Mod data is consistent with the observed pedestal properties in I-mode, particularly the weak degradation of energy confinement with heating power, and comparatively strong positive response to fueling and increased magnetic field. Supported by U.S. Department of Energy award DE-FC02-99ER54512, using Alcator C-Mod, a DOE Office of Science User Facility.

  19. Impact of thermoelectric phenomena on phase-change memory performance metrics and scaling.

    PubMed

    Lee, Jaeho; Asheghi, Mehdi; Goodson, Kenneth E

    2012-05-25

    The coupled transport of heat and electrical current, or thermoelectric phenomena, can strongly influence the temperature distribution and figures of merit for phase-change memory (PCM). This paper simulates PCM devices with careful attention to thermoelectric transport and the resulting impact on programming current during the reset operation. The electrothermal simulations consider Thomson heating within the phase-change material and Peltier heating at the electrode interface. Using representative values for the Thomson and Seebeck coefficients extracted from our past measurements of these properties, we predict a cell temperature increase of 44% and a decrease in the programming current of 16%. Scaling arguments indicate that the impact of thermoelectric phenomena becomes greater with smaller dimensions due to enhanced thermal confinement. This work estimates the scaling of this reduction in programming current as electrode contact areas are reduced down to 10 nm × 10 nm. Precise understanding of thermoelectric phenomena and their impact on device performance is a critical part of PCM design strategies. PMID:22543873

  20. Performance evaluation of a full-scale innovative swine waste-to-energy system.

    PubMed

    Xu, Jiele; Adair, Charles W; Deshusses, Marc A

    2016-09-01

    Intensive monitoring was carried out to evaluate the performance of a full-scale innovative swine waste-to-energy system at a commercial swine farm with 8640 heads of swine. Detailed mass balances over each unit of the system showed that the system, which includes a 7600-m3 anaerobic digester, a 65-kW microturbine, and a 4200-m3 aeration basin, was able to remove up to 92% of the chemical oxygen demand (COD), 99% of the biological oxygen demand (BOD), 77% of the total nitrogen (TN), and 82% of the total phosphorus (TP) discharged into the system as fresh pig waste. The overall biogas yield based on the COD input was 64% of the maximum theoretical, a value that indicates that even greater environmental benefits could be obtained with process optimization. Overall, the characterization of the materials fluxes in the system provides a greater understanding of the fate of organics and nutrients in large scale animal waste management systems. PMID:27268434

  1. Combustion performance and scale effect from N2O/HTPB hybrid rocket motor simulations

    NASA Astrophysics Data System (ADS)

    Shan, Fanli; Hou, Lingyun; Piao, Ying

    2013-04-01

    An HRM code for simulating N2O/HTPB hybrid rocket motor operation and analyzing scale effects has been developed. This code can be used to calculate motor thrust and the distributions of physical properties inside the combustion chamber and nozzle during the operational phase by solving the unsteady Navier-Stokes equations with a corrected compressible difference scheme and a two-step, five-species combustion model. A dynamic fuel-surface regression technique and a two-step calculation method, together with gas-solid coupling, are applied to calculate fuel regression and to determine the combustion chamber wall profile as the fuel regresses. Both the calculated motor thrust from start-up to shut-down and the combustion chamber wall profile after motor operation are in good agreement with experimental data. The fuel regression rate equation and the relation between fuel regression rate and axial distance have been derived. Analysis of the results suggests improvements in combustion performance for the current hybrid rocket motor design and explains scale effects in the variation of fuel regression rate with combustion chamber diameter.
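    The fuel regression rate equation mentioned above is conventionally written as a power law in the oxidizer mass flux; the generic form is shown below, where the coefficients a and n are motor- and propellant-specific fit constants, not values reported in this work.

```latex
% Classical hybrid-rocket regression-rate correlation: \dot{r} is the fuel
% surface regression rate, G_{ox} the oxidizer mass flux through the port,
% and a, n empirical constants fitted to firing data.
\[
  \dot{r} \;=\; a\, G_{ox}^{\,n}
\]
```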

  2. A High-Performance Rechargeable Iron Electrode for Large-Scale Battery-Based Energy Storage

    SciTech Connect

    Manohar, AK; Malkhandi, S; Yang, B; Yang, C; Prakash, GKS; Narayanan, SR

    2012-01-01

    Inexpensive, robust and efficient large-scale electrical energy storage systems are vital to the utilization of electricity generated from solar and wind resources. In this regard, the low cost, robustness, and eco-friendliness of aqueous iron-based rechargeable batteries are particularly attractive and compelling. However, wasteful evolution of hydrogen during charging and the inability to discharge at high rates have limited the deployment of iron-based aqueous batteries. We report here new chemical formulations of the rechargeable iron battery electrode to achieve a ten-fold reduction in the hydrogen evolution rate, an unprecedented charging efficiency of 96%, a high specific capacity of 0.3 Ah/g, and a twenty-fold increase in discharge rate capability. We show that modifying high-purity carbonyl iron by in situ electro-deposition of bismuth leads to substantial inhibition of the kinetics of the hydrogen evolution reaction. The in situ formation of conductive iron sulfides mitigates the passivation by iron hydroxide thereby allowing high discharge rates and high specific capacity to be simultaneously achieved. These major performance improvements are crucial to advancing the prospect of a sustainable large-scale energy storage solution based on aqueous iron-based rechargeable batteries. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.034208jes] All rights reserved.

  3. TRICKLING FILTER/SOLIDS CONTACT PROCESS: FULL-SCALE STUDIES

    EPA Science Inventory

    Use of the trickling filter/solids contact process has increased significantly since its successful demonstration at the Corvallis, Oregon Plant in 1979. The purpose of the study was to document the design features and performance of existing trickling filter/solids contact facil...

  4. EFRT M-12 Issue Resolution: Comparison of Filter Performance at PEP and CUF Scale

    SciTech Connect

    Daniel, Richard C.; Billing, Justin M.; Bontha, Jagannadha R.; Brown, Christopher F.; Eslinger, Paul W.; Hanson, Brady D.; Huckaby, James L.; Karri, Naveen K.; Kimura, Marcia L.; Kurath, Dean E.; Minette, Michael J.

    2010-01-22

    Pacific Northwest National Laboratory (PNNL) has been tasked by Bechtel National Inc. (BNI) on the River Protection Project-Hanford Tank Waste Treatment and Immobilization Plant (RPP-WTP) project to perform research and development activities to resolve technical issues identified for the Pretreatment Facility (PTF). The Pretreatment Engineering Platform (PEP) was designed, constructed, and operated as part of a plan to respond to issue M12, “Undemonstrated Leaching Processes” of the External Flowsheet Review Team (EFRT) issue response plan.(a) The PEP is a 1/4.5-scale test platform designed to simulate the WTP pretreatment caustic leaching, oxidative leaching, ultrafiltration solids concentration, and slurry washing processes. The PEP replicates the WTP leaching processes using prototypic equipment and control strategies. The PEP also includes non-prototypic ancillary equipment to support the core processing.

  5. Interelectrode resistance and performance of small and large scale MHD generators

    SciTech Connect

    Doss, E.D.; Picologlou, B.F.

    1983-09-01

    The effect of reduced interelectrode resistance on the power output of MHD generators is investigated. The analytical model used in the investigation allows for the solution of the electric field and current density distributions in the cross plane of the generator. The power output, expressed as a fraction of the power output of a perfectly insulated generator, is found to be a function of the wall temperature, the ratio of boundary layer thickness to channel transverse dimension, and the product of interelectrode resistance and channel cross-sectional area. The interelectrode resistance is assumed to be inversely proportional to the channel transverse dimension, and the variation of power output ratio with channel size is calculated. It is found that the deterioration of performance of MHD generators resulting from reduced interelectrode resistance diminishes with generator size and is negligible for large-scale generators, provided that the interelectrode resistance remains larger than roughly one-tenth of an ohm.

  6. Cooperation, Technology, and Performance: A Case Study.

    ERIC Educational Resources Information Center

    Cavanagh, Thomas; Dickenson, Sabrina; Brandt, Suzanne

    1999-01-01

    Describes the CTP (Cooperation, Technology, and Performance) model and explains how it is used by the Department of Veterans Affairs-Veteran's Benefit Administration (VBA) for training. Discusses task analysis; computer-based training; cooperative-based learning environments; technology-based learning; performance-assessment methods; courseware…

  7. A pore scale study on turbulent combustion in porous media

    NASA Astrophysics Data System (ADS)

    Jouybari, N. F.; Maerefat, M.; Nimvari, M. E.

    2016-02-01

    This paper presents pore-scale simulation of turbulent combustion of an air/methane mixture in porous media to investigate the effects of multidimensionality and turbulence on the flame within the pores. In order to investigate combustion in the pores of the porous medium, a simple but frequently used geometry consisting of a staggered arrangement of square cylinders is considered. Results for turbulent kinetic energy, turbulent viscosity ratio, temperature, flame speed, convective heat transfer and thermal conductivity are presented and compared for laminar and turbulent simulations. It is shown that the turbulent kinetic energy increases from the burner inlet because of turbulence created by the solid matrix, with a sudden jump or reduction at the flame front due to the increase in temperature and velocity. The pore-scale simulation also revealed that laminarization of the flow occurs downstream of the flame front in the combustion zone, so turbulence effects are important mainly in the preheat zone. Turbulence enhances the diffusion processes in the preheat zone, but not enough to affect the maximum flame speed, temperature distribution or convective heat transfer in the porous burner. The dimensionless parameters associated with the Borghi-Peters diagram of turbulent combustion have been analyzed for combustion in porous media, and it is found that combustion in the burner considered here falls within the well-stirred reactor regime, very close to the laminar flame region.
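    The regime classification on the Borghi-Peters diagram rests on a few dimensionless groups; a minimal sketch of their evaluation is shown below. The formulas are the standard textbook definitions, and the input values are placeholders rather than conditions from the paper.

```python
# Dimensionless groups for the Borghi-Peters regime diagram of premixed
# turbulent combustion (standard textbook definitions; inputs are placeholders).
u_rms   = 2.0     # turbulent velocity fluctuation, m/s
l_t     = 1.0e-3  # integral length scale, m (here of the order of the pore size)
s_l     = 0.4     # laminar flame speed, m/s
delta_l = 0.5e-3  # laminar flame thickness, m

Re_t = (u_rms * l_t) / (s_l * delta_l)                 # turbulent Reynolds number
Da   = (l_t / u_rms) / (delta_l / s_l)                 # Damkoehler number
Ka   = (u_rms / s_l) ** 1.5 * (delta_l / l_t) ** 0.5   # Karlovitz number

print(f"Re_t = {Re_t:.1f}, Da = {Da:.2f}, Ka = {Ka:.2f}")
# Loosely, Da < 1 corresponds to the well-stirred reactor range of the diagram,
# which is the regime the pore-scale study places this burner in.
```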

  8. Aerodynamic performance of two-dimensional, chordwise flexible flapping wings at fruit fly scale in hover flight.

    PubMed

    Sridhar, Madhu; Kang, Chang-kwon

    2015-06-01

    Fruit flies have flexible wings that deform during flight. To explore the fluid-structure interaction of flexible flapping wings at fruit fly scale, we use a well-validated Navier-Stokes equation solver, fully coupled with a structural dynamics solver. The effects of chordwise flexibility on a two-dimensional hovering wing are studied. The resulting wing rotation is purely passive, due to the dynamic balance between aerodynamic loading, elastic restoring force, and inertial force of the wing. Hover flight is considered at a Reynolds number of Re = 100, equivalent to that of fruit flies. The thickness and density of the wing also correspond to a fruit fly wing. The wing stiffness and motion amplitude are varied to assess their influence on the resulting aerodynamic performance and structural response. The highest lift coefficient of 3.3 was obtained with the lowest-amplitude, highest-frequency motion (reduced frequency of 3.0) and the lowest-stiffness wing (frequency ratio of 0.7) within the range of the current study, although the corresponding power requirement was also the highest. Optimal efficiency was achieved for a lower reduced frequency of 0.3 and a frequency ratio of 0.35. Compared to the water tunnel scale with water as the surrounding fluid instead of air, the resulting vortex dynamics and aerodynamic performance remained similar for the optimal efficiency motion, while the structural response varied significantly. Despite these differences, the time-averaged lift scaled with the dimensionless shape deformation parameter γ. Moreover, the wing kinematics that resulted in the optimal efficiency motion were closely aligned with the fruit fly measurements, suggesting that fruit fly flight aims to conserve energy rather than to generate large forces. PMID:25946079
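    For readers unfamiliar with the two dimensionless parameters varied in the study, the sketch below evaluates a commonly used hover definition of reduced frequency and the flapping-to-natural frequency ratio. The definitions and numbers are illustrative assumptions, not values taken from the paper.

```python
import math

# Illustrative evaluation of the two parameters varied in the study
# (common flapping-wing conventions; all numbers are placeholders).
f     = 200.0    # flapping frequency, Hz
c     = 3.0e-3   # chord length, m
h_a   = 2.5e-3   # plunge amplitude, m
f_nat = 600.0    # first natural frequency of the chordwise-flexible wing, Hz

U_ref = 2.0 * math.pi * f * h_a   # reference velocity: peak plunge speed in hover
k = math.pi * f * c / U_ref       # reduced frequency (reduces to c / (2*h_a) in hover)
freq_ratio = f / f_nat            # flapping frequency / natural frequency

print(f"reduced frequency k = {k:.2f}, frequency ratio = {freq_ratio:.2f}")
```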

  9. On mechanics and material length scales of failure in heterogeneous interfaces using a finite strain high performance solver

    NASA Astrophysics Data System (ADS)

    Mosby, Matthew; Matouš, Karel

    2015-12-01

    Three-dimensional simulations capable of resolving the large range of spatial scales, from the failure-zone thickness up to the size of the representative unit cell, in damage mechanics problems of particle reinforced adhesives are presented. We show that resolving this wide range of scales in complex three-dimensional heterogeneous morphologies is essential in order to apprehend fracture characteristics, such as strength, fracture toughness and shape of the softening profile. Moreover, we show that computations that resolve essential physical length scales capture the particle size-effect in fracture toughness, for example. In the vein of image-based computational materials science, we construct statistically optimal unit cells containing hundreds to thousands of particles. We show that these statistically representative unit cells are capable of capturing the first- and second-order probability functions of a given data-source with better accuracy than traditional inclusion packing techniques. In order to accomplish these large computations, we use a parallel multiscale cohesive formulation and extend it to finite strains including damage mechanics. The high-performance parallel computational framework is executed on up to 1024 processing cores. A mesh convergence and a representative unit cell study are performed. Quantifying the complex damage patterns in simulations consisting of tens of millions of computational cells and millions of highly nonlinear equations requires data-mining the parallel simulations, and we propose two damage metrics to quantify the damage patterns. A detailed study of volume fraction and filler size on the macroscopic traction-separation response of heterogeneous adhesives is presented.

  10. Study of an engine flow diverter system for a large scale ejector powered aircraft model

    NASA Technical Reports Server (NTRS)

    Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.

    1981-01-01

    Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.

  11. Local Scale Comparisons of Biodiversity as a Test for Global Protected Area Ecological Performance: A Meta-Analysis

    PubMed Central

    Coetzee, Bernard W. T.; Gaston, Kevin J.; Chown, Steven L.

    2014-01-01

    Terrestrial protected areas (PAs) are cornerstones of global biodiversity conservation. Their efficacy in terms of maintaining biodiversity is, however, much debated. Studies to date have been unable to provide a general answer as to PA conservation efficacy because of their typically restricted geographic and/or taxonomic focus, or qualitative approaches focusing on proxies for biodiversity, such as deforestation. Given the rarity of historical data to enable comparisons of biodiversity before/after PA establishment, many smaller scale studies over the past 30 years have directly compared biodiversity inside PAs to that of surrounding areas, which provides one measure of PA ecological performance. Here we use a meta-analysis of such studies (N = 86) to test if PAs contain higher biodiversity values than surrounding areas, and so assess their contribution to determining PA efficacy. We find that PAs generally have higher abundances of individual species, higher assemblage abundances, and higher species richness values compared with alternative land uses. Local scale studies in combination thus show that PAs retain more biodiversity than alternative land use areas. Nonetheless, much variation is present in the effect sizes, which underscores the context-specificity of PA efficacy. PMID:25162620
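    A minimal sketch of the effect-size calculation that this kind of inside-versus-outside meta-analysis typically relies on is shown below, using the log response ratio; the function and numbers are illustrative assumptions, and the paper's exact metric may differ.

```python
import math

def log_response_ratio(mean_pa, sd_pa, n_pa, mean_out, sd_out, n_out):
    """Log response ratio comparing biodiversity inside vs. outside a PA,
    with its approximate sampling variance (standard meta-analysis formulas)."""
    lnrr = math.log(mean_pa / mean_out)
    var = (sd_pa ** 2) / (n_pa * mean_pa ** 2) + (sd_out ** 2) / (n_out * mean_out ** 2)
    return lnrr, var

# Hypothetical study: species richness inside vs. outside a protected area.
effect, variance = log_response_ratio(24.0, 6.0, 30, 18.0, 5.0, 30)
print(f"lnRR = {effect:.3f} (positive => higher richness inside the PA), var = {variance:.4f}")
```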

  12. Performance-Based Service Quality Model: An Empirical Study on Japanese Universities

    ERIC Educational Resources Information Center

    Sultan, Parves; Wong, Ho

    2010-01-01

    Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using the Cronbach's alpha.…

  13. Validity and reliability study of the Turkish psychiatric nurses of job motivation scale.

    PubMed

    Engin, E; Cam, O

    2009-06-01

    This methodological study was planned to explore the validity and reliability of the evaluation scale for measuring the job motivation of nurses who work in psychiatric clinics. The sample was composed of 378 nurses who work in all psychiatric units or psychiatric hospitals located in Turkey's four large cities - Ankara, Istanbul, Izmir and Manisa. For testing reliability of 'job motivation scale', the internal consistency tests were executed with split scale analysis, Cronbach's alpha coefficient and item-total score correlation. For construct validity, factor analysis was used. For the first part of scale, Cronbach's alpha was determined to be 0.79. For the second part, Cronbach's alpha was 0.72. Factor analysis was performed in an attempt to establish validity and underlying associations between items in the scale. The first analysis produced nine eigenvalues (>1) and nine factors were extracted. The scree test indicated that a two-factor model would be suitable. The factor structure of the tool for measuring the job motivation of nurses who work in psychiatric clinics was parallel with motivation concepts. Validity and reliability levels of the scale for measuring the job motivation of nurses who work in psychiatric clinics were found to be sufficient in the Turkish population. PMID:19538603
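    For reference, the Cronbach's alpha used in this reliability analysis can be computed directly from an items-by-respondents score matrix; a minimal sketch with synthetic data (not the study's) is shown below.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Synthetic 5-point Likert responses: 378 respondents x 10 items.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(378, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(378, 10)), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```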

  14. Full scale subsonic wind tunnel requirements and design studies

    NASA Technical Reports Server (NTRS)

    Kelly, M. W.; Mort, K. W.; Hickey, D. H.

    1972-01-01

    The justification and requirements are summarized for a large subsonic wind tunnel capable of testing full-scale aircraft, rotor systems, and advanced V/STOL aircraft propulsion systems. The design considerations and constraints for such a facility are reviewed, and the trades between facility test capability and costs are discussed. The design studies showed that the structural cost of this facility is the most important cost factor. For this reason (and other considerations such as requirements for engine exhaust gas purging) an open-return wind tunnel having two test sections was selected. The major technical problem in the design of an open-return wind tunnel is maintaining good test section flow quality in the presence of external winds. This problem has been studied extensively, and inlet and exhaust systems which provide satisfactory attenuation of the effects of external winds on test section flow quality were developed.

  15. The evolution of CMS software performance studies

    NASA Astrophysics Data System (ADS)

    Kortelainen, M. J.; Elmer, P.; Eulisse, G.; Innocente, V.; Jones, C. D.; Tuura, L.

    2011-12-01

    CMS has had an ongoing and dedicated effort to optimize software performance for several years. Initially this effort focused primarily on cleaning up many issues arising from basic C++ errors, namely reducing dynamic memory churn and unnecessary copies/temporaries, and on tools to routinely monitor these issues. Over the past 1.5 years, however, the transition to 64-bit, newer versions of the gcc compiler, newer tools and the enabling of techniques like vectorization have made possible more sophisticated improvements to software performance. This presentation will cover this evolution and describe the current avenues being pursued for software performance, as well as the corresponding gains.

  16. Archaeal and bacterial community dynamics and bioprocess performance of a bench-scale two-stage anaerobic digester.

    PubMed

    Gonzalez-Martinez, Alejandro; Garcia-Ruiz, Maria Jesus; Rodriguez-Sanchez, Alejandro; Osorio, Francisco; Gonzalez-Lopez, Jesus

    2016-07-01

    Two-stage technologies have been developed for anaerobic digestion of waste-activated sludge. In this study, the archaeal and bacterial community structure dynamics and bioprocess performance of a bench-scale two-stage anaerobic digester treating urban sewage sludge were studied by means of high-throughput sequencing techniques and physicochemical parameters such as pH, dried sludge, volatile dried sludge, acid concentration, alkalinity, and biogas generation. The coupled analyses of archaeal and bacterial communities and physicochemical parameters showed a direct relationship between archaeal and bacterial populations and bioprocess performance during start-up and normal operation of the two-stage anaerobic digester. Moreover, the results demonstrated that archaeal and bacterial community structure was affected by changes in the acid/alkalinity ratio in the bioprocess. Thus, a predominance of the acetoclastic methanogen Methanosaeta was observed in the methanogenic bioreactor at a high acid/alkalinity ratio, while a predominance of Methanomassiliicoccaceae archaea and the Methanoculleus genus was observed at a low acid/alkalinity ratio. iTag biodiversity sequencing studies showed that methanogenic archaea can also be detected in the acidogenic bioreactor, although their biological activity decreased after 4 months of operation, as supported by the physicochemical analyses. Studies of the VFA-producing and VFA-consuming microbial populations showed that these microbiota were directly affected by the physicochemical conditions generated in the bioreactors. We suggest that the results obtained in our study could be useful for future implementations of two-stage anaerobic digestion processes at both bench and full scale. PMID:26940050

  17. Observational and numerical studies of extreme frontal scale contraction

    NASA Technical Reports Server (NTRS)

    Koch, Steven E.

    1995-01-01

    The general objective of this effort is to increase understanding of how frontal scale contraction processes may create and sustain intense mesoscale precipitation along intensifying cold fronts. The five-part project (an expansion of the originally proposed two-part project) employed conventional meteorological data, special mesoscale data, remote sensing measurements, and various numerical models. First an idealized hydrostatic modeling study of the scale contraction effects of differential cloud cover on low-level frontal structure and dynamics was completed and published in a peer-reviewed journal. The second objective was to complete and publish the results from a three dimensional numerical model simulation of a cold front in which differential sensible heating related to cloud coverage patterns was apparently crucial in the formation of a severe frontal squall line. The third objective was to use a nonhydrostatic model to examine the nonlinear interactions between the transverse circulation arising from inhomogeneous cloud cover, the adiabatic frontal circulation related to semi-geostrophic forcing, and diabatic effects related to precipitation processes, in the development of a density current-like microstructure at the leading edge of cold fronts. Although the development of a frontal model that could be used to initialize such a primitive equation model was begun, we decided to focus our efforts instead on a project that could be successfully completed in this short time, due to the lack of prospects for continued NASA funding beyond this first year (our proposal was not accepted for future funding). Thus, a fourth task was added, which was to use the nonhydrostatic model to test tentative hypotheses developed from the most detailed observations ever obtained on a density current (primarily sodar and wind profiler data). These simulations were successfully completed, the findings were reported at a scientific conference, and the results have recently been

  18. A Performance Study of Event Processing Systems

    NASA Astrophysics Data System (ADS)

    Mendes, Marcelo R. N.; Bizarro, Pedro; Marques, Paulo

    Event processing engines are used in diverse mission-critical scenarios such as fraud detection, traffic monitoring, or intensive care units. However, these scenarios have very different operational requirements in terms of, e.g., types of events, queries/patterns complexity, throughput, latency and number of sources and sinks. What are the performance bottlenecks? Will performance degrade gracefully with increasing loads? In this paper we make a first attempt to answer these questions by running several micro-benchmarks on three different engines, while we vary query parameters like window size, window expiration type, predicate selectivity, and data values. We also perform some experiments to assess engines scalability with respect to number of queries and propose ways for evaluating their ability in adapting to changes in load conditions. Lastly, we show that similar queries have widely different performances on the same or different engines and that no engine dominates the other two in all scenarios.

  19. Belief network algorithms: A study of performance

    SciTech Connect

    Jitnah, N.

    1996-12-31

    This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.

  20. Dynamic modeling and validation of a lignocellulosic enzymatic hydrolysis process--a demonstration scale study.

    PubMed

    Prunescu, Remus Mihail; Sin, Gürkan

    2013-12-01

    The enzymatic hydrolysis process is one of the key steps in second-generation biofuel production. After being thermally pretreated, the lignocellulosic material is liquefied by enzymes prior to fermentation. The scope of this paper is to evaluate a dynamic model of the hydrolysis process on a demonstration-scale reactor. The following novel features are included: the application of the Convection-Diffusion-Reaction equation to a hydrolysis reactor to assess transport and mixing effects; the extension of a competitive kinetic model with enzymatic pH dependency and hemicellulose hydrolysis; a comprehensive pH model; and viscosity estimation during the course of the reaction. The model is evaluated against real data extracted from a demonstration-scale biorefinery over several days of operation. All measurements are within the prediction uncertainty and, therefore, the model constitutes a valuable tool to support process optimization, performance monitoring, diagnosis and process control in full-scale studies. PMID:24212094
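    The Convection-Diffusion-Reaction balance referred to above has the generic form below, written in standard notation; the reaction term R(C) stands for the paper's extended competitive enzymatic kinetics.

```latex
% Convection-Diffusion-Reaction balance for a species concentration C(x,t)
% transported by the velocity field u with dispersion coefficient D and a
% reaction source/sink R(C) supplied by the enzymatic kinetic model.
\[
  \frac{\partial C}{\partial t} + \nabla\cdot\left(\mathbf{u}\,C\right)
  \;=\; \nabla\cdot\left(D\,\nabla C\right) + R(C)
\]
```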

  1. Parallel computing study for the large-scale generalized eigenvalue problems in modal analysis

    NASA Astrophysics Data System (ADS)

    Fan, XuanHua; Chen, Pu; Wu, RuiAn; Xiao, ShiFu

    2014-03-01

    In this paper we study algorithms, and their parallel implementation, for solving large-scale generalized eigenvalue problems in modal analysis. Three predominant subspace algorithms, i.e., the Krylov-Schur method, the implicitly restarted Arnoldi method and the Jacobi-Davidson method, are modified with complementary techniques to make them suitable for modal analysis. Detailed descriptions of the three algorithms are given. Based on these algorithms, a parallel solution procedure is established via the PANDA framework and its associated eigensolvers. Using this solution procedure on a machine equipped with up to 4800 processors, the parallel performance of the three methods is evaluated via numerical experiments with typical engineering structures, where the largest test case reaches twenty million degrees of freedom. The speedup curves for the different cases are obtained and compared. The results show that the three methods are suitable for modal analysis at the scale of ten million degrees of freedom with favorable parallel scalability.
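    For a sense of what these solvers compute, modal analysis leads to the generalized symmetric eigenproblem K x = λ M x for the stiffness and mass matrices. The small serial sketch below uses a shift-invert Lanczos solver from SciPy purely as a stand-in for the PANDA eigensolvers, on a toy problem rather than an engineering model.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Toy 1-D "structure": tridiagonal stiffness K and lumped (identity) mass M.
n = 2000
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = diags([1.0], [0], shape=(n, n), format="csc")

# Lowest 10 modes of K x = lambda M x via shift-invert about sigma = 0
# (ARPACK Lanczos); which="LM" returns eigenvalues nearest the shift.
eigenvalues, eigenvectors = eigsh(K, k=10, M=M, sigma=0.0, which="LM")
frequencies = np.sqrt(eigenvalues) / (2.0 * np.pi)  # natural frequencies if K, M were physical
print(frequencies[:5])
```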

  2. Effect of High-Fidelity Ice Accretion Simulations on the Performance of a Full-Scale Airfoil Model

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Bragg, Michael B.; Addy, Harold E., Jr.; Lee, Sam; Moens, Frederic; Guffond, Didier

    2010-01-01

    The simulation of ice accretion on a wing or other surface is often required for aerodynamic evaluation, particularly at small scale or low Reynolds number. While there are commonly accepted practices for ice simulation, there are no established and validated guidelines. The purpose of this article is to report the results of an experimental study establishing a high-fidelity, full-scale, iced-airfoil aerodynamic performance database. This research was conducted as a part of a larger program with the goal of developing subscale aerodynamic simulation methods for iced airfoils. Airfoil performance testing was carried out at the ONERA F1 pressurized wind tunnel using a 72-in. (1828.8-mm) chord NACA 23012 airfoil over a Reynolds number range of 4.5×10^6 to 16.0×10^6 and a Mach number range of 0.10 to 0.28. The high-fidelity ice-casting simulations had a significant impact on the aerodynamic performance. A spanwise-ridge ice shape resulted in a maximum lift coefficient of 0.56 compared to the clean value of 1.85 at Re = 15.9×10^6 and M = 0.20. Two roughness and streamwise shapes yielded maximum lift values in the range of 1.09 to 1.28, which was a relatively small variation compared to the differences in the ice geometry. The stalling characteristics of the two roughness and one streamwise ice simulation maintained the abrupt leading-edge stall type of the clean NACA 23012 airfoil, despite the significant decrease in maximum lift. Changes in Reynolds and Mach number over the large range tested had little effect on the iced-airfoil performance.

  3. Do Learning and Study Skills Affect Academic Performance?--An Empirical Investigation

    ERIC Educational Resources Information Center

    Griffin, Richard; MacKewn, Angie; Moser, Ernest; VanVuren, Ken W.

    2012-01-01

    Universities and colleges are very interested in understanding the factors that influence their students' academic performance. This paper describes a study that was conducted at a mid-sized public university in the mid-south, USA, to examine this issue. In this study, the 10-scale, Learning and Study Strategies Inventory (LASSI) (Weinstein et…

  4. Hydriding performances and modeling of a small-scale ZrCo bed

    SciTech Connect

    Koo, D.; Lee, J.; Park, J.; Paek, S.; Chung, H.; Chang, M.H.; Yun, S.H.; Cho, S.; Jung, K.J.

    2015-03-15

    In order to evaluate the hydriding performance of a ZrCo bed, a small-scale ZrCo getter bed was designed and fabricated. The results show that the hydriding time at room temperature was somewhat shorter, and the hydriding performance better, than at higher ZrCo temperatures. The experimental results for the hydrogen pressure during hydriding (to ZrCoH{sub 2.8}) at different temperatures were in agreement with values computed from a numerical model, with only a small difference during the first 10 minutes of hydriding. The model is based on the Kozeny-Carman equation. The effect of a helium blanket on hydriding was measured and analyzed. Hydriding with no helium blanket in the primary vessel is much faster than with a helium blanket, and hydriding at a helium concentration of 8% is slower than at 0%. As the helium concentration increases, the hydriding rate of ZrCo decreases. The experimental results for hydriding at 0%, 4%, and 8% helium concentration are in agreement with the calculated values, with minimal differences during the first 10 minutes.
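    The Kozeny-Carman relation cited as the basis of the model links the pressure drop through a packed bed to the flow velocity and bed geometry; one common textbook form is given below, with the coefficient and notation being the usual ones rather than necessarily those used in the paper.

```latex
% Kozeny-Carman pressure-drop relation for flow through a packed bed:
% \Delta P / L pressure gradient, \mu gas viscosity, u superficial velocity,
% \varepsilon bed porosity, d_p effective particle diameter.
\[
  \frac{\Delta P}{L} \;=\; \frac{180\,\mu\,(1-\varepsilon)^{2}}{\varepsilon^{3}\,d_p^{2}}\;u
\]
```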

  5. Array-scale performance of TES X-ray Calorimeters Suitable for Constellation-X

    NASA Technical Reports Server (NTRS)

    Kilbourne, C. A.; Bandler, S. R.; Brown, A. D.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Smith, S. J.; Doriese, W. B.; Irwin, K. D.

    2008-01-01

    Having developed a transition-edge-sensor (TES) calorimeter design that enables high spectral resolution in high fill-factor arrays, we now present array-scale results from 32-pixel arrays of identical, closely packed TES pixels. Each pixel in such an array contains a Mo/Au bilayer with a transition temperature of 0.1 K and an electroplated Au or Au/Bi X-ray absorber. The pixels in an array have highly uniform physical characteristics and performance. The arrays are easy to operate due to the range of bias voltages and heatsink temperatures over which resolution better than 3 eV at 6 keV can be obtained. Resolution better than 3 eV has also been obtained with 2x8 time-division SQUID multiplexing. We will present the detector characteristics and show spectra acquired through the read-out chain, from the multiplexer electronics through the demultiplexer software to real-time signal processing. We are working towards demonstrating this performance over the range of count rates expected in the observing program of the Constellation-X observatory. We will discuss the impact of increased counting rate on spectral resolution, including the effects of crosstalk and optimal-filtering dead time.

  6. Effect of blade planform variation on the forward-flight performance of small-scale rotors

    NASA Technical Reports Server (NTRS)

    Noonan, Kevin W.; Althoff, Susan L.; Samak, Dhananjay K.; Green, Michael D.

    1992-01-01

    An investigation was conducted in the Glenn L. Martin Wind Tunnel to determine the effect of blade planform variation on the forward-flight performance of four small-scale rotors. The rotors were 5.417 ft in diameter and differed only in blade planform geometry. The four planforms were: (1) rectangular; (2) 3:1 linear taper starting at 94 percent radius; (3) 3:1 linear taper starting at 75 percent radius; and (4) 3:1 linear taper starting at 50 percent radius. Each planform had a thrust-weighted solidity of 0.098. The investigation included forward-flight simulation at advance ratios from 0.14 to 0.43 for a range of rotor lift and drag coefficients. Among the four rotors, the rectangular rotor required the highest torque over the entire range of rotor drag coefficients attained at advance ratios greater than 0.14 for rotor lift coefficients C sub L from 0.004 to 0.007. Among the rotors with tapered blades and for C sub L = 0.004 to 0.007, either the 75 percent tapered rotor or the 50 percent tapered rotor required the least torque over the full range of rotor drag coefficients attained at each advance ratio. The performance of the 94 percent tapered rotor was generally between that of the rectangular rotor and the 75 and 50 percent tapered rotors at each advance ratio for this range of rotor lift coefficients.

  7. Experimental and Measurement Uncertainty Associated with Characterizing Slurry Mixing Performance of Pulsating Jets at Multiple Scales

    SciTech Connect

    Bamberger, Judith A.; Piepel, Gregory F.; Enderlin, Carl W.; Amidan, Brett G.; Heredia-Langner, Alejandro

    2015-09-10

    Understanding how uncertainty manifests itself in complex experiments is important for developing the testing protocol and interpreting the experimental results. This paper describes experimental and measurement uncertainties, and how they can depend on the order of performing experimental tests. Experiments with pulse-jet mixers in tanks at three scales were conducted to characterize the performance of transient-developing periodic flows in Newtonian slurries. Other test parameters included the simulant, solids concentration, and nozzle exit velocity. Critical suspension velocity and cloud height were the metrics used to characterize Newtonian slurry flow associated with mobilization and mixing. During testing, near-replicate and near-repeat tests were conducted. The experimental results were used to quantify the combined experimental and measurement uncertainties using standard deviations and percent relative standard deviations (%RSD). The uncertainties in critical suspension velocity and cloud height tend to increase with the values of these responses. Hence, the %RSD values are the more appropriate summary measure of near-replicate testing and measurement uncertainty.
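    A minimal sketch of the uncertainty summary used here, the percent relative standard deviation of near-replicate results, is shown below with synthetic numbers rather than test data.

```python
import statistics

def percent_rsd(values):
    """Percent relative standard deviation: 100 * sample std dev / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical near-replicate critical suspension velocities, m/s.
replicates = [4.8, 5.1, 4.9, 5.3]
print(f"mean = {statistics.mean(replicates):.2f} m/s, %RSD = {percent_rsd(replicates):.1f}%")
```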

  8. Performance of a full-scale ITER metal hydride storage bed in comparison with requirements

    SciTech Connect

    Beloglazov, S.; Glugla, M.; Fanghaenel, E.; Perevezentsev, A.; Wagner, R.

    2008-07-15

    The storage of hydrogen isotopes as metal hydride is the technique chosen for the ITER Tritium Plant Storage and Delivery System (SDS). A full-scale prototype storage bed has been designed, manufactured and intensively tested at the Tritium Laboratory, addressing the main performance parameters specified for the ITER application. The main requirements for the hydrogen storage bed are a strict physical limitation of the tritium storage capacity (currently 70 g T{sub 2}), a high supply flow rate of hydrogen isotopes, in-situ calorimetry with an accuracy of 1 g, and a fully tritium-compatible design. The pressure-composition isotherm of the ZrCo-hydrogen system, the reference material for ITER, is characterised by a significant slope. As a result, technical implementation of the ZrCo hydride bed in the SDS requires further consideration. The paper presents experience from the operation of the ZrCo getter bed, including loading/de-loading operation, calorimetric loop performance, and active gas cooling of the bed for fast absorption. The implications of the hydride material characteristics for the SDS configuration and design are discussed. (authors)

  9. The tendon network of the fingers performs anatomical computation at a macroscopic scale.

    PubMed

    Valero-Cuevas, Francisco J; Yi, Jae-Woong; Brown, Daniel; McNamara, Robert V; Paul, Chandana; Lipson, Hood

    2007-06-01

    Current thinking attributes information processing for neuromuscular control exclusively to the nervous system. Our cadaveric experiments and computer simulations show, however, that the tendon network of the fingers performs logic computation to preferentially change torque production capabilities. How this tendon network propagates tension to enable manipulation has been debated since the time of Vesalius and DaVinci and remains an unanswered question. We systematically changed the proportion of tension to the tendons of the extensor digitorum versus the two dorsal interosseous muscles of two cadaver fingers and measured the tension delivered to the proximal and distal interphalangeal joints. We find that the distribution of input tensions in the tendon network itself regulates how tensions propagate to the finger joints, acting like the switching function of a logic gate that nonlinearly enables different torque production capabilities. Computer modeling reveals that the deformable structure of the tendon networks is responsible for this phenomenon; and that this switching behavior is an effective evolutionary solution permitting a rich repertoire of finger joint actuation not possible with simpler tendon paths. We conclude that the structural complexity of this tendon network, traditionally oversimplified or ignored, may in fact be critical to understanding brain-body coevolution and neuromuscular control. Moreover, this form of information processing at the macroscopic scale is a new instance of the emerging principle of nonneural "somatic logic" found to perform logic computation such as in cellular networks. PMID:17549909

  10. Ecological Development and Validation of a Music Performance Rating Scale for Five Instrument Families

    ERIC Educational Resources Information Center

    Wrigley, William J.; Emmerson, Stephen B.

    2013-01-01

    This study investigated ways to improve the quality of music performance evaluation in an effort to address the accountability imperative in tertiary music education. An enhanced scientific methodology was employed incorporating ecological validity and using recognized qualitative methods involving grounded theory and quantitative methods…

  11. Monitoring Rater Performance over Time: A Framework for Detecting Differential Accuracy and Differential Scale Category Use

    ERIC Educational Resources Information Center

    Myford, Carol M.; Wolfe, Edward W.

    2009-01-01

    In this study, we describe a framework for monitoring rater performance over time. We present several statistical indices to identify raters whose standards drift and explain how to use those indices operationally. To illustrate the use of the framework, we analyzed rating data from the 2002 Advanced Placement English Literature and Composition…

  12. Solid Oxide Fuel Cell Performance Studies

    SciTech Connect

    Huebner, W.; Reed, D.M.; Anderson, H.U.

    1996-08-20

    Materials research in the area of SOFCs is also driven by the recognition that processing and operating at lower temperatures would circumvent most of the reliability problems which are currently preventing these devices from achieving wide-scale commercialization. These considerations have directed interdisciplinary research thrusts in this field, namely: alternate materials, processing, and reliability issues. This paper describes starting powder characteristics, electrical conductivity and overpotential measurements, and the resultant microstructures as a function of processing conditions (i.e., powder calcination temperature and annealing temperature) and composition for the electrolyte and cathodes.

  13. Scaling and Predictability in Stock Markets: A Comparative Study

    PubMed Central

    Zhang, Huishu; Wei, Jianrong; Huang, Jiping

    2014-01-01

    Most people who invest in stock markets want to be rich, thus, many technical methods have been created to beat the market. If one knows the predictability of the price series in different markets, it would be easier for him/her to make the technical analysis, at least to some extent. Here we use one of the most basic sold-and-bought trading strategies to establish the profit landscape, and then calculate the parameters to characterize the strength of predictability. According to the analysis of scaling of the profit landscape, we find that the Chinese individual stocks are harder to predict than US ones, and the individual stocks are harder to predict than indexes in both Chinese stock market and US stock market. Since the Chinese (US) stock market is a representative of emerging (developed) markets, our comparative study on the markets of these two countries is of potential value not only for conducting technical analysis, but also for understanding physical mechanisms of different kinds of markets in terms of scaling. PMID:24632944

  15. Experimental studies on methane-fuel laboratory scale ram combustor

    SciTech Connect

    Kinoshita, Y.; Kitajima, J.; Seki, Y.; Tatara, A.

    1995-07-01

    The laboratory-scale ram combustor test program has been investigating the fundamental combustion characteristics of a ram combustor, which operates from Mach 2.5 to 5 in the super/hypersonic transport propulsion system. In a previous study, combustion efficiency had been found to be poor, less than 70 percent, due to the low inlet air temperature and high velocity at the Mach 3 condition. To improve the low combustion efficiency, a fuel-zoning combustion concept was first investigated using a subscale combustor model. Combustion efficiency of more than 90 percent was achieved and the concept was found to be very effective. A laboratory-scale ram combustor was then fabricated and combustion tests were carried out mainly at the simulated Mach 5 condition. A vitiation technique was used to simulate the high inlet temperature of 1,263 K. The test results indicate that ignition, flame stability, and combustion efficiency were not significant problems, but that NO{sub x} emissions are a critical issue for the ram combustor at the Mach 5 condition.

  16. Conflicts in Performance Criteria and a Prioritization Methodology for Stream Restoration at the Watershed Scale

    NASA Astrophysics Data System (ADS)

    Goodwin, P.

    2005-05-01

    River restoration has become a significant consulting, agency and academic endeavor during the past decade with the driving force for these management activities including mitigation, total maximum daily load concerns, the preservation of critical habitat or recovery of endangered species. Review of restoration activities undertaken by agencies and academia in the 1980s showed that frequently it was difficult to identify the specific project goals or quantifiable metrics. Further, many projects lacked the resources for pre- and post- monitoring to establish project performance and to develop a feed-back mechanism into future design or management strategies. Now, a greater emphasis is being placed on the articulation of clear restoration objectives and post-implementation monitoring to evaluate the performance of individual projects. This trend has coincided with new advances in technologies that are allowing not only the success of individual projects to be quantified, but also how different projects interact. The objectives of specific restoration activities undertaken for different purposes may conflict at the reach or watershed scale. Through examples in Idaho and California, the difficulties in quantifying trends of both physical processes and linkages to ecological response will be illustrated. We look at some of the potential conflicts and benefits in restoration objectives within an individual project and the cumulative effects of a series of projects that may enhance or diminish some individual benefits. An analysis framework is outlined and the temporal and spatial frequency of sampling to detect the consequences of restoration actions is described. For example, two common actions in a channelized reach might be to (1) create a more natural channel section (reducing the width-depth ratio and enhancing geomorphic diversity), and (2) allow a more natural plan-form to restore the floodplain connectivity and sustain the geomorphic diversity. One performance

  17. EVALUATING BENCH-SCALE NANOFILTRATION STUDIES FOR PREDICTING FULL-SCALE PERFORMANCE

    EPA Science Inventory

    The Information Collection Rule (ICR) requires water utilities of a certain size and water quality to conduct bench or pilot testing of either granular activated carbon or membranes for the control of disinfection byproduct (DBP) precursors. his paper evaluates the effectiveness ...

  18. [French validation study of the levels of emotional awareness scale].

    PubMed

    Bydlowski, S; Corcos, M; Paterniti, S; Guilbaud, O; Jeammet, P; Consoli, S M

    2002-01-01

    According to a thesis based on the idea that cognitions influence the structuring of internal reality, emotional awareness, i.e. the capacity to represent one's own emotional experience and that of others, is a cognitive process that undergoes maturation. In defining this concept, Lane and Schwartz present a five-stage cognitive-developmental model of the processes of symbolization, accounting for the differences in levels of emotional awareness observed between individuals. The organization of these cognitive processes would thus be structured in well-differentiated stages, in which the development of the emotions is inseparable from the development of the ego and of the relation to others. These authors focus on the capacity to represent the emotional experience in a conscious way, and consider that the verbal representations used to describe what is experienced constitute a good reflection of the structural organization of emotional awareness. They therefore developed an evaluation instrument, the Levels of Emotional Awareness Scale (LEAS), which measures the capacity to describe one's own emotional experience and the experience attributed to others in an emotional situation. The scoring system of this scale is based on the analysis of the verbal content of the answers provided, in direct reference to the authors' theory of the levels of differentiation and integration of emotional experience. It is therefore an empirical measurement centered specifically on the structural organization of emotional experience. The various validation studies of this instrument show that it has solid metrological properties. This work presents the validation of the French version of Lane and Schwartz's LEAS. Validity and reliability were studied in a group of 121 healthy subjects. This setting is part of a larger clinical evaluation, also including the collection of socio-demographic and clinical data, and other instruments of self

  19. Multi-Scale Effects of Nestling Diet on Breeding Performance in a Terrestrial Top Predator Inferred from Stable Isotope Analysis

    PubMed Central

    Resano-Mayor, Jaime; Hernández-Matías, Antonio; Real, Joan; Moleón, Marcos; Parés, Francesc; Inger, Richard; Bearhop, Stuart

    2014-01-01

    Inter-individual diet variation within populations is likely to have important ecological and evolutionary implications. The diet-fitness relationships at the individual level and the emerging population processes are, however, poorly understood for most avian predators inhabiting complex terrestrial ecosystems. In this study, we use an isotopic approach to assess the trophic ecology of nestlings in a long-lived raptor, the Bonelli's eagle Aquila fasciata, and investigate whether nestling dietary breadth and consumption of the main prey can affect the species' reproductive performance at two spatial scales: territories within populations and populations over a large geographic area. At the territory level, those breeding pairs whose nestlings consumed diets similar to that of the overall population (i.e. moderate consumption of preferred prey, complemented by alternative prey categories) or that disproportionally consumed preferred prey were more likely to fledge two chicks. An increase in diet diversity, however, was negatively related to productivity. The age and replacements of breeding pair members also influenced productivity, with more fledglings associated with adult pairs with few replacements, as expected in long-lived species. At the population level, mean productivity was higher in those population-years with lower dietary breadth and higher diet similarity among territories, which was related to an overall higher consumption of preferred prey. Thus, we revealed a correspondence in diet-fitness relationships at two spatial scales: territories and populations. We suggest that stable isotope analyses may be a powerful tool to monitor the diet of terrestrial avian predators on large spatio-temporal scales, which could serve to detect potential changes in the availability of those prey on which predators depend for breeding. We encourage ecologists and evolutionary and conservation biologists concerned with the multi-scale fitness consequences of inter

  20. Emergency Locator Transmitter System Performance During Three Full-Scale General Aviation Crash Tests

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.; Stimson, Chad M.

    2016-01-01

    Full-scale crash tests were conducted on three Cessna 172 aircraft at NASA Langley Research Center's Landing and Impact Research facility during the summer of 2015. The purpose of the three tests was to evaluate the performance of commercially available Emergency Locator Transmitter (ELT) systems and support development of enhanced installation guidance. ELTs are used to provide location information to Search and Rescue (SAR) organizations in the event of an aviation distress situation, such as a crash. The crash tests simulated three differing severe but survivable crash conditions, in which it is expected that the onboard occupants have a reasonable chance of surviving the accident and would require assistance from SAR personnel. The first simulated an emergency landing onto a rigid surface, while the second and third simulated controlled flight into terrain. Multiple ELT systems were installed on each airplane according to federal regulations. The majority of the ELT systems performed nominally. In the systems which did not activate, post-test disassembly and inspection offered guidance for non-activation cause in some cases, while in others, no specific cause could be found. In a subset of installations purposely disregarding best practice guidelines, failure of the ELT-to-antenna cabling connections were found. Recommendations for enhanced installation guidance of ELT systems will be made to the Radio Technical Commission for Aeronautics (RTCA) Special Committee 229 for consideration for adoption in a future release of ELT minimum operational performance specifications. These recommendations will be based on the data gathered during this test series as well as a larger series of crash simulations using computer models that will be calibrated based on these data

  1. Development of EnergyPlus Utility to Batch Simulate Building Energy Performance on a National Scale

    SciTech Connect

    Valencia, Jayson F.; Dirks, James A.

    2008-08-29

    EnergyPlus is a simulation program that requires a large number of details to fully define and model a building. Hundreds or even thousands of lines in a text file are needed to run an EnergyPlus simulation, depending on the size of the building. Manually creating these files is a time-consuming process that would not be practical for the thousands of buildings needed to simulate national building energy performance. To streamline the creation of the EnergyPlus input files, two methods were created to work in conjunction with the National Renewable Energy Laboratory (NREL) Preprocessor; this reduced the hundreds of inputs needed to define a building in EnergyPlus to a small set of high-level parameters. The first method uses Java routines to perform all of the preprocessing on a Windows machine, while the second method carries out all of the preprocessing on the Linux cluster using an in-house utility called Generalized Parametrics (GPARM). A comma-delimited (CSV) input file is created to define the high-level parameters for any number of buildings. Each method then takes this CSV file and uses the data entered for each parameter to populate an extensible markup language (XML) file used by the NREL Preprocessor to automatically prepare EnergyPlus input data files (idf) using automatic building routines and macro templates. Using the Linux utility “make”, the idf files can then be run automatically through the Linux cluster and the desired data from each building can be aggregated into one table for analysis. Creating a large number of EnergyPlus input files in this way makes it possible to batch simulate building energy performance and scale the results to national energy consumption estimates.
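    A rough sketch of the batching pattern described above (read a CSV of high-level building parameters, expand each row into a per-building input file, and queue the runs) is shown below. The file names, CSV columns, template function, and command invocation are hypothetical placeholders, not the actual GPARM or NREL Preprocessor interfaces.

```python
import csv
import subprocess
from pathlib import Path

def expand_to_idf(params: dict, template: str) -> str:
    """Fill a plain-text template with the high-level parameters for one building.
    (Stand-in for the preprocessor's macro expansion; placeholder only.)"""
    return template.format(**params)

template_text = Path("building_template.idf.in").read_text()  # hypothetical template file
with open("buildings.csv", newline="") as f:                  # hypothetical CSV of parameters
    for row in csv.DictReader(f):
        idf_path = Path(f"runs/{row['building_id']}.idf")     # 'building_id' is an assumed column
        idf_path.parent.mkdir(parents=True, exist_ok=True)
        idf_path.write_text(expand_to_idf(row, template_text))
        # Queue the simulation; the command line below is only an illustrative example.
        subprocess.run(["energyplus", "-w", row["weather_file"], str(idf_path)], check=True)
```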

  2. Laser tomography adaptive optics: a performance study.

    PubMed

    Tatulli, Eric; Ramaprakash, A N

    2013-12-01

    We present an analytical derivation of the on-axis performance of adaptive optics systems using a given number of guide stars of arbitrary altitude, distributed at arbitrary angular positions in the sky. The expressions of the residual error are given for cases of both continuous and discrete turbulent atmospheric profiles. Assuming Shack-Hartmann wavefront sensing with circular apertures, we demonstrate that the error is formally described by integrals of products of three Bessel functions. We compare the performance of adaptive optics correction when using natural, sodium, or Rayleigh laser guide stars. For small diameter class telescopes (≲5 m), we show that a small number of Rayleigh beacons can provide similar performance to that of a single sodium laser, for a lower overall cost of the instrument. For bigger apertures, using Rayleigh stars may not be such a suitable alternative because of the too severe cone effect that drastically degrades the quality of the correction. PMID:24323009

  3. Large scale steam valve test: Performance testing of large butterfly valves and full scale high flowrate steam testing

    SciTech Connect

    Meadows, J.B.; Robbins, G.E.; Roselius, D.G.

    1995-05-01

    This report presents the results of the design testing of large (36-inch diameter) butterfly valves under high flow conditions. The two butterfly valves were pneumatically operated air-open, air-shut valves (termed valves 1 and 2). These butterfly valves were redesigned to improve their ability to function under high flow conditions. Concern was raised regarding the ability of the butterfly valves to function as required with high flow-induced torque imposed on the valve discs during high steam flow conditions. High flow testing was required to address the flow-induced torque concerns. The valve testing was done using a heavily instrumented piping system. This test program was called the Large Scale Steam Valve Test (LSSVT). The LSSVT program demonstrated that the redesigned valves operated satisfactorily under high flow conditions.

  4. Metal contact effect on the performance and scaling behavior of carbon nanotube thin film transistors

    NASA Astrophysics Data System (ADS)

    Xia, Jiye; Dong, Guodong; Tian, Boyuan; Yan, Qiuping; Zhang, Han; Liang, Xuelei; Peng, Lianmao

    2016-05-01

    Metal-tube contact is known to play an important role in carbon nanotube field-effect transistors (CNT-FETs) which are fabricated on individual CNTs. Less attention has been paid to the contact effect in network-type carbon nanotube thin film transistors (CNT-TFTs). In this study, we demonstrate that contact plays an even more important role in CNT-TFTs than in CNT-FETs. Although the Schottky barrier height at the metal-tube contact can be tuned by the work function of the metal, similar to the case in CNT-FETs, the contact resistance (Rc) forms a much higher proportion of the total resistance in CNT-TFTs. Interestingly, the contact resistivity was found to increase with channel length, which is a consequence of the percolating nature of the transport in CNT films, and this behavior does not exist in CNT-FETs and normal 2D Ohmic conductors. Electrical transport in CNT-TFTs has been predicted to scale with channel length by stick percolation theory. However, the scaling behavior is also affected, or even masked, by the effect of Rc. Once the contact effect is excluded, the masked scaling behavior can be recovered correctly. A possible way of reducing Rc in CNT-TFTs is proposed. We believe the findings in this paper will strengthen our understanding of CNT-TFTs, and even accelerate the commercialization of CNT-TFT technology.
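
    A common way to separate contact resistance from channel resistance in thin film transistors is a transmission-line-style fit of total resistance against channel length, where the zero-length intercept estimates 2Rc. The sketch below is a generic illustration of that idea under the usual assumption of a length-independent Rc (which, as this study points out, does not strictly hold for percolating CNT networks); the resistance values are invented for demonstration.

```python
import numpy as np

# Hypothetical width-normalized total resistances (kOhm) measured at several
# channel lengths (um); in a real experiment these come from transfer curves
# taken at a fixed gate overdrive.
channel_length_um = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
total_resistance_kohm = np.array([18.0, 26.5, 44.0, 79.0, 149.0])

# Linear fit R_total(L) = r_channel * L + 2*Rc, valid only if Rc is independent
# of L (an Ohmic-film assumption, not guaranteed for CNT percolation networks).
slope, intercept = np.polyfit(channel_length_um, total_resistance_kohm, 1)

print(f"channel resistance per unit length ~ {slope:.2f} kOhm/um")
print(f"extracted 2*Rc (zero-length intercept) ~ {intercept:.2f} kOhm")
print(f"estimated Rc ~ {intercept / 2:.2f} kOhm")
```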

  5. Atomic-scale studies of hydrogenated semiconductor surfaces

    NASA Astrophysics Data System (ADS)

    Mayne, A. J.; Riedel, D.; Comtet, G.; Dujardin, G.

    The adsorption of hydrogen on semiconductors strongly modifies the electronic and chemical properties of the surfaces, whether on the surface or in the sub-surface region. This has been the starting point, in recent years, of many new areas of research and technology. This paper will discuss the properties, at the atomic scale, of hydrogenated semiconductor surfaces studied with scanning tunnelling microscopy (STM) and synchrotron radiation. Four semiconductor surfaces will be described - germanium(1 1 1), silicon(1 0 0), silicon carbide(1 0 0) and diamond(1 0 0). Each surface has its particularities in terms of the physical and electronic structure and in regard to the adsorption of hydrogen. The manipulation of hydrogen on these surfaces by electronic excitation using electrons from the STM tip will be discussed in detail highlighting the excitation mechanisms. The reactivity of these surfaces towards various molecules and semiconductor nanocrystals will be illustrated.

  6. Scale effects on strength of geomaterials, case study: Coal

    NASA Astrophysics Data System (ADS)

    Scholtès, Luc; Donzé, Frédéric-Victor; Khanal, Manoj

    2011-05-01

    Scale effects on the strength of coal are studied using a discrete element model. The key point of the model is its capability to discriminate between the "strictly sample size" effect and the "Discrete Fracture Network (DFN) density" effect on the mechanical response. Simulations of true triaxial compression tests are carried out to identify their respective roles. The possible bias due to the discretization size distribution of the discrete element model is investigated in detail by considering low-resolution configurations. The model is shown to be capable of quantitatively reproducing the dependency of the maximum strength on the size of the sample. This relationship mainly relies on the DFN density. For all given sizes, as long as the DFN density remains constant with a uniform distribution or if discontinuities are absent in the considered medium, the maximum strength of the material remains constant.

  7. First quarter chemical borehole studies in the drift scale test

    SciTech Connect

    DeLoach, L., LLNL

    1998-05-19

    The chemistry boreholes of the Drift Scale Test (DST) have been designed to gather geochemical information and assess the impact of thermal perturbations on gas and liquid phases present in pore spaces and fractures within the rock. There are a total of ten boreholes dedicated to these chemical studies. Two arrays of five boreholes each were drilled from the access/observation drift (AOD) in planes which run normal to the heater drift and which are located approximately 15 and 45% of the way along the length of the drift as measured from the bulkhead. The boreholes each have a length of about 40 meters and have been drilled at low angles directed just above or just below the heater plane. In each array, three boreholes are directed at increasingly steeper angles (<25°) above the line of wing heaters and two are directed at shallow angles below the wing heater plane.

  8. Task 3 -- Bench-scale char upgrading and utilization study

    SciTech Connect

    Jha, M.C.; McCormick, R.L.

    1989-08-02

    This report describes the results of the bench-scale char upgrading study conducted as Task 3 of Development of an Advanced, Continuous Mild Gasification Process for the Production of Coproducts. A process in which the char is gasified to produce methane in a first-stage reactor was investigated. This methane is then decomposed in a second stage to produce carbon and hydrogen for recycle. The results indicate that both reaction steps are feasible using mild gasification char as the starting feedstock. Conditions for methanation are 700 to 800°C and 200 to 400 psig. Carbon formation conditions are 1,200 to 1,400°C at atmospheric pressure. The carbon produced has properties similar to those of carbons which are commercially marketed as carbon black.

  9. Correlation between motor performance scales, body composition, and anthropometry in patients with Duchenne muscular dystrophy.

    PubMed

    Bayram, Erhan; Topcu, Yasemin; Karakaya, Pakize; Bayram, Meral Torun; Sahin, Ebru; Gunduz, Nihan; Yis, Uluc; Peker, Ozlen; Kurul, Semra Hiz

    2013-06-01

    The aim of this study is to investigate the relationship between body composition, anthropometry, and motor scales in patients with Duchenne muscular dystrophy (DMD). Twenty-six patients with DMD were evaluated with the Expanded Hammersmith Functional Motor Scale (HFMSE), the gross motor function classification system (GMFCS), multifrequency bioelectrical impedance analysis, and anthropometric measurements. Seventeen healthy children served as the control group. The mean patient age was 9.5 ± 4.8 years. Ages and anthropometric measurements did not differ between groups. Of the 26 patients, nine were level I, seven were level II, two were level III, seven were level IV, and one was level V according to the GMFCS. Despite a similar percentage of total body water, the extracellular water/intracellular water ratio was significantly elevated in DMD patients (p = 0.001). Increased values of fat percentage and body fat mass index (BFMI) correlated positively with elevated GMFCS levels (r = 0.785 and 0.719, respectively). Increased fat-free mass index (FFMI) correlated negatively with elevated GMFCS levels (r = -0.401). Increased fat percentage and BFMI correlated negatively with HFMSE scores (r = -0.779 and -0.698, respectively). Increased values of FFMI correlated positively with HFMSE scores. There was also a negative correlation between increased skin fold measurements from the triceps and scapula and HFMSE scores (r = -0.618 and -0.683, respectively). Increased skin fold values from the same regions correlated positively with elevated GMFCS levels (r = 0.643 and 0.712, respectively). Significant body composition changes occur in patients with DMD. Anthropometric and multifrequency bioelectrical impedance measurements correlate well with motor function scales. These results may also be helpful in evaluating the effects of new treatment strategies. PMID:22975832

  10. Commercial-Scale Performance Predictions for High-Temperature Electrolysis Plants Coupled to Three Advanced Reactor Types

    SciTech Connect

    M. G. McKellar; J. E. O'Brien; J. S. Herring

    2007-09-01

    This report presents results of system analyses that have been developed to assess the hydrogen production performance of commercial-scale high-temperature electrolysis (HTE) plants driven by three different advanced reactor/power-cycle combinations: a high-temperature helium-cooled reactor coupled to a direct Brayton power cycle, a supercritical CO2-cooled reactor coupled to a direct recompression cycle, and a sodium-cooled fast reactor coupled to a Rankine cycle. The system analyses were performed using UniSim software. The work described in this report represents a refinement of previous analyses in that the process flow diagrams include realistic representations of the three advanced reactors directly coupled to the power cycles and integrated with the high-temperature electrolysis process loops. In addition, this report includes parametric studies in which the performance of each HTE concept is determined over a wide range of operating conditions. Results of the study indicate that overall thermal-to-hydrogen production efficiencies (based on the lower heating value of the produced hydrogen) in the 45-50% range can be achieved at reasonable production rates with the high-temperature helium-cooled reactor concept, 42-44% with the supercritical CO2-cooled reactor, and about 33-34% with the sodium-cooled reactor.
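
    For reference, the overall thermal-to-hydrogen efficiency quoted above is conventionally defined on a lower-heating-value basis, with electrical inputs counted at their thermal equivalent through the power-cycle efficiency. A hedged statement of that definition (the symbols are ours, not necessarily the report's) is:

```latex
\eta_{\mathrm{th}\rightarrow\mathrm{H_2}}
  \;=\;
  \frac{\dot{m}_{\mathrm{H_2}}\,\mathrm{LHV}_{\mathrm{H_2}}}
       {\dot{Q}_{\mathrm{thermal}} + P_{\mathrm{electric}}/\eta_{\mathrm{cycle}}}
```

    Under this definition, the 45-50% figure for the helium-cooled concept means that roughly half of the reactor's thermal output, direct plus electrical-equivalent, ends up as hydrogen chemical energy.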

  11. Evaluation of Reading Habits of Teacher Candidates: Study of Scale Development

    ERIC Educational Resources Information Center

    Erkan, Senem Seda Sahenk; Dagal, Asude Balaban; Tezcan, Özlem

    2016-01-01

    The main purpose of this study was to develop a valid and reliable scale for printed and digital competencies ("The Printed and Digital Reading Habits Scale"). The problem statement of this research can be expressed as: "The Printed and Digital Reading Habits Scale: is a valid and reliable scale?" In this study, the scale…

  12. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectra usually appear less broadband than measurements and decay faster at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation than its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied, and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.
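
    For orientation, the two model functions contrasted in this kind of analysis are typically of the forms below, where ξ is the spatial separation and ℓ a turbulence length scale; these are generic expressions, not necessarily the exact normalizations used in the paper:

```latex
R_{\mathrm{Gauss}}(\xi) = \exp\!\left(-\frac{\xi^{2}}{\ell^{2}}\right),
\qquad
R_{\mathrm{exp}}(\xi) = \exp\!\left(-\frac{\lvert\xi\rvert}{\ell}\right)
```

    The exponential form has an algebraically decaying (Lorentzian-like) Fourier transform, whereas the Gaussian transform falls off as a Gaussian, which is consistent with the broader high-frequency spectrum reported for the exponential model.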

  13. Laboratory Scale Antifoam Studies for the STTPB Process

    SciTech Connect

    Baich, M.A.

    2001-02-13

    Three candidate antifoam/defoam agents were tested on a laboratory scale with simulated KTPB slurry using the proposed STTPB process precipitation, concentration, and washing steps. The main conclusion is that, if air entrainment in the slurry is carefully avoided, little or no foam will be generated during normal precipitation, concentration, and washing operations; in all cases little or no foam formed during normal operation. Foam was produced by purposely introducing gas sub-surface into the slurry. Once produced, the IIT B52 antifoam was effective in defoaming the slurry. In separate foam column tests, all antifoam/defoam agents were effective in mitigating foam formation and in defoaming a foamed 10 wt% insoluble solids slurry. Based on the results in this report as well as foam column studies at IIT, it is recommended that IIT B52 antifoam at the 1000 ppmV level be used in subsequent STTPB work where foaming is a concern. This study indicates that the addition of an antifoam agent hinders the recovery of NaTPB during washing. Washing precipitate with no antifoam agent added gave the highest level of NaTPB recovery and the shortest overall washing time (approximately 19 hours), compared to 26-28 hours for the antifoam runs. The solubilities of the three candidate antifoam/defoam agents were measured in a 4.7 M sodium salt solution. The Surfynol DF-110D defoamer was essentially insoluble, while the two IIT antifoamers, Particle Modifier (PM) and B52, were soluble to at least the 2000 ppmV level.

  14. Examiners and Content and Site: Oh My! a National Organization's Investigation of Score Variation in Large-Scale Performance Assessments

    ERIC Educational Resources Information Center

    Sebok, Stefanie S.; Roy, Marguerite; Klinger, Don A.; De Champlain, André F.

    2015-01-01

    Examiner effects and content specificity are two well known sources of construct irrelevant variance that present great challenges in performance-based assessments. National medical organizations that are responsible for large-scale performance based assessments experience an additional challenge as they are responsible for administering…

  15. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbes...

  16. ASBESTOS IN DRINKING WATER PERFORMANCE EVALUATION STUDIES

    EPA Science Inventory

    Performance evaluations of laboratories testing for asbestos in drinking water according to USEPA Test Method 100.1 or 100.2 are complicated by the difficulty of providing stable sample dispersions of asbestos in water. Reference samples of a graduated series of chrysotile asbest...

  17. Energy Savings Performance Contract Case Studies.

    ERIC Educational Resources Information Center

    Lefevre, Jessica S.

    Building owners and managers can use performance-contracting Energy Service Companies (ESCOs) to partially or fully fund building renovations that include energy efficiency upgrades. This report provides building owners and managers with an introduction to the energy efficiency and building upgrade services provided by ESCOs. It uses 20 case…

  18. Performance of a system with full- and pilot-scale sludge drying reed bed units treating septic tank sludge in Brazil.

    PubMed

    Calderón-Vallejo, Luisa Fernanda; Andrade, Cynthia Franco; Manjate, Elias Sete; Madera-Parra, Carlos Arturo; von Sperling, Marcos

    2015-01-01

    This study investigated the performance of sludge drying reed beds (SDRB) at full and pilot scale treating sludge from septic tanks in the city of Belo Horizonte, Brazil. The treatment units, planted with Cynodon spp., were based on an adaptation of the first stage of the French vertical-flow constructed wetland, originally developed for treating sewage. Two different operational phases were investigated; in the first, the full-scale unit was used together with six pilot-scale columns in order to test different feeding strategies. For the second phase, only the full-scale unit was used, including recirculation of the filtered effluent (percolate) to one of the units of the French vertical wetland. Sludge was applied once a week, by emptying one full truck, over 25 weeks. The sludge was predominantly diluted, leading to low solids loading rates (median value of 18 kgTS m(-2) year(-1)). In the full-scale unit without recirculation, chemical oxygen demand removal efficiency was reasonable (median of 71%), but total solids removal was only moderate (median of 44%). Recirculation did not bring substantial improvements in overall performance. The other loading conditions implemented in the pilot columns also did not show statistically different performances. PMID:26067493

  19. Area Scales of the Navy Vocational Interest Inventory as Predictors of School Performance and Rating Assignment.

    ERIC Educational Resources Information Center

    Lau, Alan W.; Abrahams, Norman M.

    The purpose of this research is to evaluate the effectiveness of the area (homogeneous) scales of the Navy Vocational Interest Inventory (NVII) as predictors of Class "A" school achievement and as measures of rating differentiation by comparing specific occupational scales with more general interest measures--the NVII area scales. The NVII was…

  20. In-situ determination of field-scale NAPL mass transfer coefficients: Performance, simulation and analysis

    NASA Astrophysics Data System (ADS)

    Mobile, Michael; Widdowson, Mark; Stewart, Lloyd; Nyman, Jennifer; Deeb, Rula; Kavanaugh, Michael; Mercer, James; Gallagher, Daniel

    2016-04-01

    Better estimates of non-aqueous phase liquid (NAPL) mass, its persistence into the future, and the potential impact of source reduction are critical needs for determining the optimal path to clean up sites impacted by NAPLs. One impediment to constraining time estimates of source depletion is the uncertainty in the rate of mass transfer between NAPLs and groundwater. In this study, an innovative field test is demonstrated for the purpose of quantifying field-scale NAPL mass transfer coefficients (klN) within a source zone of a fuel-contaminated site. Initial evaluation of the test concept using a numerical model revealed that the aqueous phase concentration response to the injection of clean groundwater within a source zone was a function of NAPL mass transfer. Under rate limited conditions, NAPL dissolution together with the injection flow rate and the radial distance to monitoring points directly controlled time of travel. Concentration responses observed in the field test were consistent with the hypothetical model results allowing field-scale NAPL mass transfer coefficients to be quantified. Site models for groundwater flow and solute transport were systematically calibrated and utilized for data analysis. Results show klN for benzene varied from 0.022 to 0.60 d(-1). Variability in results was attributed to a highly heterogeneous horizon consisting of layered media of varying physical properties.
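
    The field-scale coefficient reported here is usually interpreted through a linear driving-force (first-order) dissolution model, in which the aqueous concentration relaxes toward the effective solubility of the compound in the NAPL. A generic statement of that model (our notation, not necessarily the authors') is:

```latex
\left.\frac{\partial C}{\partial t}\right|_{\mathrm{dissolution}}
  \;=\; k_{l}^{N}\,\bigl(C_{s} - C\bigr)
```

    Here C is the aqueous-phase concentration, C_s the effective solubility, and k_l^N the lumped mass transfer coefficient; the benzene values of 0.022 to 0.60 d(-1) correspond to characteristic equilibration times of roughly 2 to 45 days.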

  1. In-situ determination of field-scale NAPL mass transfer coefficients: Performance, simulation and analysis.

    PubMed

    Mobile, Michael; Widdowson, Mark; Stewart, Lloyd; Nyman, Jennifer; Deeb, Rula; Kavanaugh, Michael; Mercer, James; Gallagher, Daniel

    2016-04-01

    Better estimates of non-aqueous phase liquid (NAPL) mass, its persistence into the future, and the potential impact of source reduction are critical needs for determining the optimal path to clean up sites impacted by NAPLs. One impediment to constraining time estimates of source depletion is the uncertainty in the rate of mass transfer between NAPLs and groundwater. In this study, an innovative field test is demonstrated for the purpose of quantifying field-scale NAPL mass transfer coefficients (kl(N)) within a source zone of a fuel-contaminated site. Initial evaluation of the test concept using a numerical model revealed that the aqueous phase concentration response to the injection of clean groundwater within a source zone was a function of NAPL mass transfer. Under rate limited conditions, NAPL dissolution together with the injection flow rate and the radial distance to monitoring points directly controlled time of travel. Concentration responses observed in the field test were consistent with the hypothetical model results allowing field-scale NAPL mass transfer coefficients to be quantified. Site models for groundwater flow and solute transport were systematically calibrated and utilized for data analysis. Results show kl(N) for benzene varied from 0.022 to 0.60d(-1). Variability in results was attributed to a highly heterogeneous horizon consisting of layered media of varying physical properties. PMID:26855386

  2. Ethnography and Case Study Methodology: An Approach to Large-Scale Policy Studies of Federal Programs.

    ERIC Educational Resources Information Center

    Deslonde, James L.

    Studies of the use of ethnography as an evaluation tool in large-scale contract research studies were reviewed before data collection and design decisions were finalized in response to a Request for Proposal (RFP). Five tasks had to be accomplished before data collection could begin: study sites were selected, a system for data collection was…

  3. Continuing Education Program Administration: A Study of Competent Performance Indicators.

    ERIC Educational Resources Information Center

    Cookson, Peter S.; English, John

    1997-01-01

    A study had two parts: (1) construction of behaviorally anchored rating scales for continuing education administrative positions, using the DACUM (Developing a Curriculum) process and (2) professional development needs assessment of 11 directors and 22 area representatives. The utility of the scales for administrator self-assessment of…

  4. A Study of Scaling for Intercycle Ice Accretion Tests

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Botura, Galdemir C.; Broeren, Andy P.; Bond, Thomas H. (Technical Monitor)

    2003-01-01

    The Ruff method with matched scale and reference velocity was used to determine appropriate 1/2-scale test conditions to simulate a full-size icing encounter for an NACA 23012 wing section protected with a pneumatic boot deicing system. Intercycle ice accretions were recorded on a 36-in-chord model used to represent 1/2-scale and compared with a hybrid reference model (full-size leading-edge and truncated aft section) representing a 72-in-chord full-size airfoil. The intercycle ice thickness and extent of icing for the scale tests generally compared well with those from the reference model. However, the scale tests did not reproduce the location and number of feather rows seen in the reference tests aft of the main ice shape. Many of the differences observed were believed to result from not scaling the pneumatic boot design along with the model size for these tests.

  5. Scale effects in sliding friction: An experimental study

    SciTech Connect

    Blau, P.J.

    1991-07-24

    Solid friction is considered by some to be a fundamental property of two contacting materials, while others consider it to be a property of the larger tribosystem in which the materials are contained. A set of sliding friction experiments was designed to investigate the hypothesis that the unlubricated sliding friction between two materials is indeed a tribosystem-related property and that the relative influence on friction of the material properties, or of those of the machine, varies from one situation to another. Three tribometers were used: a friction microprobe (FMP), a typical laboratory-scale reciprocating pin-on-flat device, and a heavy-duty commercial wear tester. The slider material was stainless steel (AISI 440C) and the flat specimen material was an ordered alloy of Ni3Al (IC-50). Sphere-on-flat geometry was used at ambient conditions and at normal forces ranging from 0.01 N to 100 N and average sliding velocities of 0.01 to 100.0 mm/s. The nominal, steady-state sliding friction coefficient tended to decrease with increases in normal force for each of the three tribometers, and the steady-state value of sliding friction tended to increase as the mass of the machine increased. The variation of the friction force during sliding was also a characteristic of the test system. These studies provide further support to the idea that the friction of both laboratory-scale and engineering tribosystems should be treated as a parameter which may take on a range of characteristic values and not conceived as having a single, unique value for each material pair.

  6. Numerical study of dynamic behavior of contact line approaching a micro-scale particle

    NASA Astrophysics Data System (ADS)

    Miyazaki, Yusuke; Tsukahara, Takahiro; Ueno, Ichiro

    2014-11-01

    The behavior of the contact line (CL), the boundary line of the solid-liquid-gas interface, is one of the important topics in dynamic wetting. Many experimental and theoretical studies have addressed static and axisymmetric systems: e.g., Ally et al. (Langmuir 2010, vol. 26, 11797) measured the capillary force on a micro-scale particle attached to a liquid surface and compared it with their physical model. However, there are few numerical simulations of dynamic and asymmetric systems. Focusing on the CL passing micro-scale solid particles, we simulated solid-liquid-gas flows. The gas-liquid interface is captured by a VOF method, and surface tension is modeled with the CSF model. Solid-fluid interaction is treated by an immersed boundary method. We studied the broken-dam problem with a fixed sphere at either macro or micro scale. Our macro-scale results agree reasonably with the experimental result. At the micro scale, where the domain is 2.0 × 2.0 × 2.0 μm3 and the sphere diameter is 0.5 μm, we tested two types of sphere surface: hydrophobic and hydrophilic. We demonstrated that, as the liquid touches the hydrophilic sphere, the velocity of the CL is higher than in the hydrophobic case.

  7. An Empirical Study of a Solo Performance Assessment Model

    ERIC Educational Resources Information Center

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  8. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.

    2013-01-01

    Support of high-performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has emerged as a field during the past decade, where examination of high-resolution images of human tissue specimens enables more effective diagnosis, prediction, and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such enormous amounts of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719
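
    The partition-merge idea described above can be sketched without any MapReduce machinery: partition spatial objects into grid tiles, process each tile independently (as a mapper would), then merge the per-tile results. The snippet below is a toy illustration of that pattern for counting objects per tile; it is not the paper's framework, and the tile size and point data are invented.

```python
from collections import defaultdict
from typing import Iterable

Point = tuple[float, float]


def tile_of(p: Point, tile_size: float) -> tuple[int, int]:
    """Map a point to the integer grid tile that contains it."""
    return (int(p[0] // tile_size), int(p[1] // tile_size))


def partition(points: Iterable[Point], tile_size: float) -> dict:
    """'Map' phase: group points by tile so tiles can be processed independently."""
    tiles = defaultdict(list)
    for p in points:
        tiles[tile_of(p, tile_size)].append(p)
    return tiles


def merge_counts(tiles: dict) -> dict:
    """'Reduce' phase: merge per-tile partial results (here, simple object counts)."""
    return {tile: len(pts) for tile, pts in tiles.items()}


if __name__ == "__main__":
    # Hypothetical micro-anatomic object centroids in image coordinates.
    nuclei = [(12.3, 4.1), (12.9, 4.7), (55.0, 60.2), (57.1, 61.0), (90.5, 3.3)]
    print(merge_counts(partition(nuclei, tile_size=32.0)))
```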

  9. A Study on the Surface Shape of Fish Scales

    NASA Astrophysics Data System (ADS)

    Sudo, Seiichi; Tsuyuki, Koji; Ito, Yoshiyasu; Ikohagi, Toshiaki

    This paper is concerned with the functional design and hydrodynamic characteristics of fish scales. The rough surfaces of fish scales were measured with a three-dimensional, optical shape measuring system. The measurement was carried out on embiotocid fish Ditrema temminck, rockfish Sebastes inermis, and dogfish Mustelus manago Bleeker. Some scales of a dogfish Mustelus manago Bleeker were also observed microscopically making use of the scanning electron microscope as a faster fish. From the viewpoint of hydrodynamics, the microscopic structure and morphological characteristics of fish scales were discussed.

  10. Small-Scale Screening to Large-Scale Over-Expression of Human Membrane Proteins for Structural Studies.

    PubMed

    Chaudhary, Sarika; Saha, Sukanya; Thamminana, Sobrahani; Stroud, Robert M

    2016-01-01

    Membrane protein structural studies are frequently hampered by poor expression. The low natural abundance of these proteins implies a need for utilizing different heterologous expression systems. E. coli and yeast are commonly used expression systems due to rapid cell growth at high cell density, economical production, and ease of manipulation. Here we report a simplified, systematically developed robust strategy from small-scale screening to large-scale over-expression of human integral membrane proteins in the mammalian expression system for structural studies. This methodology streamlines small-scale screening of several different constructs utilizing fluorescence size-exclusion chromatography (FSEC) towards optimization of buffer, additives, and detergents for achieving stability and homogeneity. This is followed by the generation of stable clonal cell lines expressing desired constructs, and lastly large-scale expression for crystallization. These techniques are designed to rapidly advance the structural studies of eukaryotic integral membrane proteins including that of human membrane proteins. PMID:27485338

  11. Acoustic Performance of Drive Rig Mufflers for Model Scale Engine Testing

    NASA Technical Reports Server (NTRS)

    Stephens, David B.

    2013-01-01

    Aircraft engine component testing at the NASA Glenn Research Center (GRC) includes acoustic testing of scale model fans and propellers in the 9- by15-Foot Low Speed Wind Tunnel (LSWT). This testing utilizes air driven turbines to deliver power to the article being studied. These air turbines exhaust directly downstream of the model in the wind tunnel test section and have been found to produce significant unwanted noise that reduces the quality of the acoustic measurements of the engine model being tested. This report describes an acoustic test of a muffler designed to mitigate the extraneous turbine noise. The muffler was found to provide acoustic attenuation of at least 8 dB between 700 Hz and 20 kHz which significantly improves the quality of acoustic measurements in the facility.

  12. Characterization of pilot-scale dilute acid pretreatment performance using deacetylated corn stover

    PubMed Central

    2014-01-01

    Background Dilute acid pretreatment is a promising process technology for the deconstruction of low-lignin lignocellulosic biomass, capable of producing high yields of hemicellulosic sugars and enhancing enzymatic yields of glucose as part of a biomass-to-biofuels process. However, while it has been extensively studied, most work has historically been conducted at relatively high acid concentrations of 1 - 4% (weight/weight). Reducing the effective acid loading in pretreatment has the potential to reduce chemical costs both for pretreatment and subsequent neutralization. Additionally, if acid loadings are sufficiently low, capital requirements associated with reactor construction may be significantly reduced due to the relaxation of requirements for exotic alloys. Despite these benefits, past efforts have had difficulty obtaining high process yields at low acid loadings without supplementation of additional unit operations, such as mechanical refining. Results Recently, we optimized the dilute acid pretreatment of deacetylated corn stover at low acid loadings in a 1-ton per day horizontal pretreatment reactor. This effort included more than 25 pilot-scale pretreatment experiments executed at reactor temperatures ranging from 150 – 170°C, residence times of 10 – 20 minutes and hydrolyzer sulfuric acid concentrations between 0.15 – 0.30% (weight/weight). In addition to characterizing the process yields achieved across the reaction space, the optimization identified a pretreatment reaction condition that achieved total xylose yields from pretreatment of 73.5% ± 1.5% with greater than 97% xylan component balance closure across a series of five runs at the same condition. Feedstock reactivity at this reaction condition after bench-scale high solids enzymatic hydrolysis was 77%, prior to the inclusion of any additional conversion that may occur during subsequent fermentation. Conclusions This study effectively characterized a range of pretreatment reaction

  13. Evaluation of performance of full-scale duckweed and algal ponds receiving septage.

    PubMed

    Papadopoulos, Frantzis H; Metaxa, Eirini G; Iatrou, Miltos N; Papadopoulos, Aristotelis H

    2014-12-01

    The performance of duckweed and algal systems in removing fecal bacteria, organic matter, and nutrients was evaluated in three full-scale ponds operating in series. Trucks collected septage from holding tanks and discharged it into the system daily. The inflow rates varied between the warm and the cold season. Duckweed and algae naturally colonized the ponds in two successive periods of 10 and 13 months, respectively. Environmental conditions were determined at various pond depths. Without harvesting, the duckweed system was neutral and anoxic. Alkaline and oversaturated conditions were observed in the algal system. The overall removals of 5-day biochemical oxygen demand, total suspended solids, total nitrogen, and orthophosphate (ortho-PO4(3-)) ranged from 94 to 97, 62 to 84, 68 to 74, and 0 to 26%, respectively. The E. coli and enterococci reductions ranged from 2.2 to 3.0 and from 1.1 to 1.4 log units, respectively. The higher values were always associated with the algal system. PMID:25654933

  14. Performance of the chemical mass balance model with simulated local-scale aerosols

    NASA Astrophysics Data System (ADS)

    Javitz, H. S.; Watson, J. G.; Robinson, N.

    A general methodology for performing simulations of the Chemical Mass Balance (CMB) model is developed and applied to simple and complex local-scale scenarios. The simple scenario consists of crustal, coal-fired power plant, motor vehicle and vegetative burning sources; the complex scenario adds oil-fired power plant, ocean, steel mill, lead smelter, municipal incinerator and background aerosol sources. Daily receptor filter concentrations of the most commonly measured elements in the primary emissions are simulated. These simulations incorporate daily fluctuations in source strengths, daily fluctuations in source profiles (as parameterized by a coefficient of variation, or CV, of temporal source profiles) and measurement error at the receptor (as parameterized by a CV of measurement error). The CMB is applied to each daily measurement using a source library containing all sources and their long-term profiles (which, though correct on average, are incorrect on any particular day). The extent of agreement of the actual and CMB-estimated primary emission source strengths is measured as an average absolute error (AAE, the absolute difference between the daily actual and estimated primary emission source strengths averaged over 100 simulated days). These moderately realistic simulations provide an encouraging picture of CMB accuracy and precision. The CMB yields acceptable accuracy and precision (an AAE of 50% or less) even when the CV of temporal source profiles is 25% and the CV of measurement error is 10%.
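
    The simulation loop described above amounts to perturbing source profiles and receptor measurements, re-solving the receptor balance, and scoring the recovered source strengths. The sketch below is a stripped-down illustration of that loop with made-up species, sources, and CV values; it uses ordinary non-negative least squares rather than the effective-variance solution typically used in operational CMB software, and it reports the error as a percentage of the true strength.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical long-term source profiles: rows = chemical species, columns = sources
# (e.g., crustal, coal-fired power plant, motor vehicle, vegetative burning).
F = np.array([
    [0.30, 0.05, 0.02, 0.01],   # Si-like tracer
    [0.02, 0.10, 0.01, 0.00],   # S-like tracer
    [0.01, 0.01, 0.15, 0.02],   # Pb-like tracer
    [0.05, 0.02, 0.05, 0.40],   # K-like tracer
    [0.10, 0.03, 0.20, 0.10],   # OC-like tracer
])
true_strengths = np.array([10.0, 5.0, 8.0, 3.0])   # ug/m3, illustrative

cv_profile, cv_meas, n_days = 0.25, 0.10, 100
rel_errors = []
for _ in range(n_days):
    # Daily profile fluctuation and receptor measurement error.
    F_day = F * (1.0 + cv_profile * rng.standard_normal(F.shape))
    c_meas = (F_day @ true_strengths) * (1.0 + cv_meas * rng.standard_normal(F.shape[0]))
    # Apply CMB with the long-term (average) profiles, as in the study design.
    est, _residual = nnls(F, c_meas)
    rel_errors.append(np.abs(est - true_strengths) / true_strengths)

aae = 100.0 * np.mean(rel_errors, axis=0)   # average absolute error, % per source
print("AAE (%) per source:", np.round(aae, 1))
```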

  15. Aerodynamic performance of a full-scale lifting ejector system in a STOVL fighter aircraft

    NASA Technical Reports Server (NTRS)

    Smith, Brian E.; Garland, Doug; Poppen, William A.

    1992-01-01

    The aerodynamic characteristics of an advanced lifting ejector system incorporated into a full-scale, powered, fighter aircraft model were measured at statically and at transition airspeeds in the 40- by 80- and 80- by 120-Foot Wind Tunnels at NASA-Ames. The ejector system was installed in an ejector-lift/vectored thrust STOVL (Short Take-Off Vertical Landing) fighter aircraft configuration. Ejector thrust augmentation ratios approaching 1.6 were demonstrated during static testing. Changes in the internal aerodynamics and exit flow conditions of the ejector ducts are presented for a variety of wind-off and forward-flight test conditions. Wind-on test results indicate a small decrease in ejector performance and increase in exit flow nonuniformity with forward speed. Simulated ejector start-up at high speed, nose-up attitudes caused only small effects on overall vehicle forces and moments despite the fact that the ejector inlet flow was found to induce large regions of negative pressure on the upper surface of the wing apex adjacent to the inlets.

  16. Association between Dementia Rating Scale performance and neurocognitive domains in Alzheimer's disease.

    PubMed

    Knox, Michael R; Lacritz, Laura H; Chandler, Melanie J; Munro Cullum, C

    2003-05-01

    The Dementia Rating Scale (DRS; Mattis, 1976, 1988) is commonly used in the assessment of dementia, although little is known about the relationship of performance on this test to specific cognitive deficits in Alzheimer's disease (AD). Additionally, cognitive profiles have not been investigated across different levels of dementia as determined by the DRS. A sample of 133 individuals diagnosed with possible or probable AD was administered the DRS as part of a comprehensive neuropsychological evaluation. Composite scores for the cognitive domains of attention, executive functioning, visuospatial skills, language abilities, immediate recall, and delayed memory were derived by averaging demographically corrected T scores of key measures. Individual domain scores were also averaged to develop a global index score. Pearson correlations between composite and total DRS scores were highly significant (p<.001) for all domains and the global index score, with the exception of delayed memory, which showed a floor effect. When the sample was divided into mild and moderate-to-severe groups to examine the effects of disease severity on the relationship between the DRS and standard neurocognitive domain scores, the resulting mean neuropsychological profile scores were significantly different while maintaining a parallel pattern of impairment across domains. Results demonstrate the relationship between the DRS and standard cognitive domain functions, which appears to underscore the validity and robustness of the DRS in characterizing patterns of cognitive impairment across the AD spectrum. PMID:13680428

  17. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. PMID:12820130
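
    The kind of head-to-head comparison described above can be reproduced in miniature with off-the-shelf optimizers. The sketch below compares a limited-memory quasi-Newton method (L-BFGS) with a truncated Newton-type method (Newton-CG) on the Rosenbrock test function; it is a generic illustration of the comparison methodology, not the authors' hybrid Morales-Nocedal implementation, and it does not use the AMBER energy function.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.full(50, -1.2)   # a modestly sized test problem, not a protein

# Limited-memory BFGS: gradient-only, low memory per iteration.
res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")

# Truncated Newton (Newton-CG): uses curvature information via the Hessian.
res_newton = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method="Newton-CG")

for name, res in [("L-BFGS", res_lbfgs), ("Newton-CG", res_newton)]:
    print(f"{name:10s} f_min={res.fun:.3e}  iterations={res.nit}  f_evals={res.nfev}")
```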

  18. Large-Scale Synthesis of Metal-Ion-Doped Manganese Dioxide for Enhanced Electrochemical Performance.

    PubMed

    Peng, Ruichao; Wu, Nian; Zheng, Yu; Huang, Yangbo; Luo, Yunbai; Yu, Ping; Zhuang, Lin

    2016-04-01

    One-dimensional (1D) MnO2 has been widely applied in enzyme biosensors, industrial sieves, and energy storage materials owing to its excellent thermal, optical, magnetic, and chemical features. However, its practical application in energy storage devices is often hindered by its poor electronic conductivity (from 10(-5) to 10(-6) S cm(-1)). As is widely known, doping with hetero elements is an efficient way to enhance the electronic conductivity of metal oxides. Herein, a novel and simple molten-salt method is developed to achieve large-scale preparation of 1D MnO2 nanowires. Such an approach also allows easy tuning of electrical properties through doping with different transition metal ions. On the basis of first-principles calculations as well as four-probe measurements, we determined that the conductivity of the doped MnO2 nanowires can be promoted efficiently by utilizing such a protocol. Meanwhile, a possible doping route is discussed in detail. As a result, superior electrochemical performance is observed in such metal-ion (M(+))-doped nanowires. These high-quality M(+)-doped MnO2 nanowires can satisfy a broad range of application needs beyond electrochemical capacitors. PMID:26996352

  19. Large-scale performance and design for construction activity erosion control best management practices.

    PubMed

    Faucette, L B; Scholl, B; Beighley, R E; Governo, J

    2009-01-01

    The National Pollutant Discharge Elimination System (NPDES) Phase II requires construction activities to have erosion and sediment control best management practices (BMPs) designed and installed for site storm water management. Although BMPs are specified on storm water pollution prevention plans (SWPPPs) as part of the construction general permit (GP), there is little evidence in the research literature as to how BMPs perform or should be designed. The objectives of this study were to: (i) comparatively evaluate the performance of common construction activity erosion control BMPs under a standardized test method, (ii) evaluate the performance of compost erosion control blanket thickness, (iii) evaluate the performance of compost erosion control blankets (CECBs) on a variety of slope angles, and (iv) determine Universal Soil Loss Equation (USLE) cover management factors (C factors) for these BMPs to assist site designers and engineers. Twenty-three erosion control BMPs were evaluated using American Society of Testing and Materials (ASTM) D-6459, standard test method for determination of ECB performance in protecting hill slopes from rainfall induced erosion, on 4:1 (H:V), 3:1, and 2:1 slopes. Soil loss reduction for treatments exposed to 5 cm of rainfall on a 2:1 slope ranged from -7 to 99%. For rainfall exposure of 10 cm, treatment soil loss reduction ranged from 8 to 99%. The 2.5 and 5 cm CECBs significantly reduced erosion on slopes up to 2:1, while CECBs < 2.5 cm are not recommended on slopes ≥ 4:1 when rainfall totals reach 5 cm. Based on the soil loss results, USLE C factors ranged from 0.01 to 0.9. These performance and design criteria should aid site planners and designers in decision-making processes. PMID:19398523
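
    For readers unfamiliar with the C factor being reported, it enters the Universal Soil Loss Equation as the cover-management multiplier on predicted soil loss; a lower C means a more protective practice. The standard form of the equation is:

```latex
A \;=\; R \cdot K \cdot LS \cdot C \cdot P
```

    Here A is the predicted annual soil loss, R the rainfall erosivity factor, K the soil erodibility factor, LS the slope length-steepness factor, C the cover-management factor (0.01 to 0.9 for the BMPs tested here), and P the support-practice factor.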

  20. The Need for System Scale Studies in Polar Regions

    NASA Astrophysics Data System (ADS)

    Hinzman, L. D.; Newman, D.

    2010-12-01

    The understanding of polar regions has advanced tremendously in the past two decades and much of the improved insight into our knowledge of environmental dynamics is due to multidisciplinary and interdisciplinary studies conducted by coordinated and collaborative research programs supported by national funding agencies. Although much remains to be learned with respect to component processes, many of the most urgent scientific, engineering and social questions can only be addressed through the broader perspective of studies on system scales in which these components are coupled to each other. Questions such as quantifying feedbacks, understanding the implications of sea ice loss to adjacent land areas or society, resolving future predictions of ecosystem evolution or population dynamics all require consideration of complex interactions and interdependent linkages among system components. Research that has identified physical controls on biological processes, or quantified impact/response relationships in physical and biological systems is critically important, and must be continued; however we are approaching a limitation in our ability to accurately project how the Arctic and the Antarctic will respond to a continued warming climate. Complex issues, such as developing accurate model algorithms of feedback processes require higher level synthesis of multiple component interactions. Several examples of important questions that may only be addressed through coupled complex systems analyses will be addressed.

  1. Refracted x-ray fluorescence (RXF) applied to the study of thermally grown oxide scales

    SciTech Connect

    Koshelev, I.; Paulikas, A.P.; Veal, B.W.

    1996-12-31

    RXF is a new technique for studying thin films. Here, it is applied to the study of thermally grown oxide scales. The evolution of chromia scales on Fe-25Cr-20Ni-0.3Y alloys and the evolution of alumina scales on β-NiAl are investigated. The technique provides scale composition and depth-profile information, scale thicknesses and growth rates, and information about transient phase evolution.

  2. HCIT Broadband Contrast Performance Sensitivity Studies

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Shaklan, Stuart; Balasubramanian, Kunjithapatham

    2012-01-01

    One of the important milestones of the TPF Coronagraph project is to demonstrate the ability to predict the performance sensitivities of the system at levels consistent with exo-planet detection requirements. We want to gain a general understanding of the potential and the limitations of the current single-Deformable-Mirror (DM) High-contrast imaging testbed (HCIT) system through modeling and simulations. Specifically, we want to understand the effects of some common errors on the EFC-based control of the e-field over a half dark-hole region and on broadband contrast. Investigated errors include: (1) absorbing particles on a flat mirror, (2) defects on the occulter surface, and (3) dead actuators on the DM. We also investigated the effects of control bandwidth on the broadband contrast. We used a MACOS-based simulation algorithm which (1) combines a ray trace, a diffraction model, and a broadband wavefront control algorithm, and (2) is capable of performing full three-dimensional near-field diffraction analysis.

  3. Teaching Social Studies through the Performing Arts

    ERIC Educational Resources Information Center

    Colley, Binta M.

    2012-01-01

    In the past decade, there have been growing efforts to improve and enhance the delivery of social studies content in the classroom through arts integration. Some educators have used music as a method for teaching social studies and found that interdisciplinary work increases students' understanding of history and different cultures. This article…

  4. When Does Scale Anchoring Work? A Case Study

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Haberman, Shelby J.; Lee, Yi-Hsuan

    2011-01-01

    Providing information to test takers and test score users about the abilities of test takers at different score levels has been a persistent problem in educational and psychological measurement. Scale anchoring, a technique which describes what students at different points on a score scale know and can do, is a tool to provide such information.…

  5. Comparison of methanogenic community structure and anaerobic process performance treating swine wastewater between pilot and optimized lab scale bioreactors.

    PubMed

    Kim, Woong; Cho, Kyungjin; Lee, Seungyong; Hwang, Seokhwan

    2013-10-01

    To investigate the methanogenic community structure and process performance of anaerobic digestion treating swine wastewater at different scales, a pilot plant with 20 m(3) of effective working volume and a lab-scale methanogenic digester with a 6 L working volume were operated for 71 days and 6 turnover periods, respectively. During the steady state of anaerobic digestion, COD and VS removal efficiencies in the pilot plant were 65.3±3.2% and 51.6±4.3%, respectively, similar to those at lab scale. However, the calculated VFA removal efficiency and methane yield were lower in the pilot plant than in the lab-scale digester. Organics removal efficiencies, covering total carbohydrates, proteins, and lipids, also differed between the pilot and lab scales. These results were thought to be due to the ratio of carbohydrates to proteins in the raw swine wastewater. Qualitative microbial analysis showed that Methanoculleus receptaculii and Methanoculleus bourgensis were commonly associated with methane production. PMID:23489568

  6. Towards reversible basic linear algebra subprograms: A performance study

    SciTech Connect

    Perumalla, Kalyan S.; Yoginath, Srikanth B.

    2014-12-06

    Problems such as fault tolerance and scalable synchronization can be efficiently solved using reversibility of applications. Making applications reversible by relying on computation rather than on memory is ideal for large scale parallel computing, especially for the next generation of supercomputers in which memory is expensive in terms of latency, energy, and price. In this direction, a case study is presented here in reversing a computational core, namely, Basic Linear Algebra Subprograms, which is widely used in scientific applications. A new Reversible BLAS (RBLAS) library interface has been designed, and a prototype has been implemented with two modes: (1) a memory-mode in which reversibility is obtained by checkpointing to memory in forward and restoring from memory in reverse, and (2) a computational-mode in which nothing is saved in the forward, but restoration is done entirely via inverse computation in reverse. The article is focused on detailed performance benchmarking to evaluate the runtime dynamics and performance effects, comparing reversible computation with checkpointing on both traditional CPU platforms and recent GPU accelerator platforms. For BLAS Level-1 subprograms, data indicates over an order of magnitude better speed of reversible computation compared to checkpointing. For BLAS Level-2 and Level-3, a more complex tradeoff is observed between reversible computation and checkpointing, depending on computational and memory complexities of the subprograms.
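
    The two modes can be made concrete with the simplest Level-1 example, axpy (y ← a·x + y): the computational mode undoes the update by inverse arithmetic, while the memory mode saves and restores a checkpoint. The sketch below is our own minimal illustration of the idea, not the RBLAS library interface.

```python
import numpy as np

def axpy_forward(a: float, x: np.ndarray, y: np.ndarray) -> None:
    """Forward BLAS Level-1 axpy: y <- a*x + y (in place)."""
    y += a * x

def axpy_reverse_computational(a: float, x: np.ndarray, y: np.ndarray) -> None:
    """Computational-mode reversal: undo axpy by inverse computation, no storage."""
    y -= a * x

def axpy_with_checkpoint(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Memory-mode: checkpoint y before the update so it can be restored later."""
    checkpoint = y.copy()          # extra memory proportional to the vector size
    y += a * x
    return checkpoint

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, x = 2.5, rng.standard_normal(5)
    y = rng.standard_normal(5)
    y0 = y.copy()

    axpy_forward(a, x, y)
    axpy_reverse_computational(a, x, y)       # restore by recomputation
    print("computational mode restores y:", np.allclose(y, y0))

    cp = axpy_with_checkpoint(a, x, y)
    y[:] = cp                                 # restore from checkpoint
    print("memory mode restores y:       ", np.allclose(y, y0))
```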

  7. Towards reversible basic linear algebra subprograms: A performance study

    DOE PAGESBeta

    Perumalla, Kalyan S.; Yoginath, Srikanth B.

    2014-12-06

    Problems such as fault tolerance and scalable synchronization can be efficiently solved using reversibility of applications. Making applications reversible by relying on computation rather than on memory is ideal for large scale parallel computing, especially for the next generation of supercomputers in which memory is expensive in terms of latency, energy, and price. In this direction, a case study is presented here in reversing a computational core, namely, Basic Linear Algebra Subprograms, which is widely used in scientific applications. A new Reversible BLAS (RBLAS) library interface has been designed, and a prototype has been implemented with two modes: (1) a memory-mode in which reversibility is obtained by checkpointing to memory in forward and restoring from memory in reverse, and (2) a computational-mode in which nothing is saved in the forward, but restoration is done entirely via inverse computation in reverse. The article is focused on detailed performance benchmarking to evaluate the runtime dynamics and performance effects, comparing reversible computation with checkpointing on both traditional CPU platforms and recent GPU accelerator platforms. For BLAS Level-1 subprograms, data indicates over an order of magnitude better speed of reversible computation compared to checkpointing. For BLAS Level-2 and Level-3, a more complex tradeoff is observed between reversible computation and checkpointing, depending on computational and memory complexities of the subprograms.

  8. Research Studies Performed Using the Cairo Fourier Diffractometer Facility

    NASA Astrophysics Data System (ADS)

    Maayouf, R. M. A.

    2013-03-01

    This report presents the results of research studies performed using the Cairo Fourier diffractometer facility (CFDF) within 10 years after it was installed and put into operation at the beginning of 1996. The main components of the CFDF were supplied by the IAEA under technical assistance project EGY/1/022. Many measurements have been performed since then, yielding several publications in both local and international scientific periodicals and 8 M.Sc. and Ph.D. degrees from Egyptian universities. In addition, a new approach was developed for the analysis of the neutron spectra measured with the CFDF, applying a specially designed interface card and its associated software in place of the Finnish-made reverse time-of-flight (RTOF) analyzer originally attached to the facility. It has been verified that the new approach can successfully replace the RTOF analyzer, significantly decreasing measurement time and saving reactor operating time. A special fault-diagnostic program was developed and tested for handling possible failures of the CFDF. New developments required to apply the CFDF to industrial applications on a wide scale are also considered.

  9. Feasibility study of a Megaton-scale Underground Laboratory at the Frejus site

    SciTech Connect

    MOSCA, L.

    2007-11-08

    After a brief review of the main scientific motivations of LAGUNA, European cooperation projects (GLACIER, LENA and MEMPHYS), and of the virtues of the Frejus site, a preliminary feasibility study for a Megaton-scale Underground Laboratory at Frejus is presented and its positive results discussed. The need for a future more detailed investigation (Design Study), which will be performed in the framework of the LAGUNA collaboration, is stressed. The excellent opportunity presented by the recently approved Frejus Safety Tunnel (d = 8 m) project is also underlined.

  10. Cyanobacteria, Toxins and Indicators: Full-Scale Monitoring & Bench-Scale Treatment Studies

    EPA Science Inventory

    Summary of: 1) Lake Erie 2014 bloom season full-scale treatment plant monitoring data for cyanobacteria and cyanobacteria toxins; 2) Follow-up work to examine the impact of pre-oxidation on suspensions of intact toxin-producing cyanobacterial cells.

  11. Scaling studies and conceptual experiment designs for NGNP CFD assessment

    SciTech Connect

    D. M. McEligot; G. E. McCreery

    2004-11-01

    The objective of this report is to document scaling studies and conceptual designs for flow and heat transfer experiments intended to assess CFD codes and their turbulence models proposed for application to prismatic NGNP concepts. The general approach of the project is to develop new benchmark experiments for assessment in parallel with CFD and coupled CFD/systems code calculations for the same geometry. Two aspects of the complex flow in an NGNP are being addressed: (1) flow and thermal mixing in the lower plenum ("hot streaking" issue) and (2) turbulence and resulting temperature distributions in reactor cooling channels ("hot channel" issue). Current prismatic NGNP concepts are being examined to identify their proposed flow conditions and geometries over the range from normal operation to decay heat removal in a pressurized cooldown. Approximate analyses have been applied to determine key non-dimensional parameters and their magnitudes over this operating range. For normal operation, the flow in the coolant channels can be considered to be dominated by turbulent forced convection with slight transverse property variation. In a pressurized cooldown (LOFA) simulation, the flow quickly becomes laminar with some possible buoyancy influences. The flow in the lower plenum can locally be considered to be a situation of multiple hot jets into a confined crossflow -- with obstructions. Flow is expected to be turbulent, with momentum-dominated turbulent jets entering; buoyancy influences are estimated to be negligible in normal full power operation. Experiments are needed for the combined features of the lower plenum flows. Missing from the typical jet experiments available are interactions with nearby circular posts and with vertical posts in the vicinity of vertical walls -- with near-stagnant surroundings at one extreme and significant crossflow at the other. Two types of heat transfer experiments are being considered. One addresses the "hot channel" problem, if necessary
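
    As a rough illustration of the approximate analyses mentioned above, the sketch below computes a channel Reynolds number and the buoyancy parameter Gr/Re^2 commonly used to judge whether forced convection dominates; all property and geometry values are placeholders, not figures from the report.

```python
# Minimal sketch of the kind of approximate scaling analysis described above:
# estimate a channel Reynolds number and a buoyancy parameter (Gr/Re^2) to judge
# whether forced convection dominates. All property and geometry values below are
# illustrative placeholders, not numbers from the NGNP report.
import math

def reynolds(rho, u, d_h, mu):
    """Channel Reynolds number Re = rho * u * D_h / mu."""
    return rho * u * d_h / mu

def grashof(beta, dT, d_h, nu, g=9.81):
    """Grashof number Gr = g * beta * dT * D_h^3 / nu^2."""
    return g * beta * dT * d_h**3 / nu**2

if __name__ == "__main__":
    rho, mu = 5.0, 3.5e-5            # helium-like density [kg/m^3], viscosity [Pa s] (placeholders)
    u, d_h, dT = 20.0, 0.016, 50.0   # velocity [m/s], hydraulic diameter [m], wall-bulk dT [K]
    beta = 1.0 / 1100.0              # ideal-gas expansivity at ~1100 K
    nu = mu / rho
    re = reynolds(rho, u, d_h, mu)
    gr = grashof(beta, dT, d_h, nu)
    print(f"Re = {re:.3e}, Gr/Re^2 = {gr / re**2:.3e}")
    # Gr/Re^2 << 1 suggests buoyancy influences are negligible (forced convection);
    # as flow slows during a pressurized cooldown, Re drops and mixed convection may appear.
```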

  12. Performance Characteristics of a Partially Admitted Small-Scale Mixed-Type Turbine

    NASA Astrophysics Data System (ADS)

    Cho, Soo-Yong; Ahn, Kook-Young; Lee, Young-Duk

    2011-12-01

    In this study, a mixed-type turbine was designed and tested with double or single stages to improve the specific torque when operating at a low partial admission rate. The turbine consists of two stages, and the outer diameter of its rotor is 108 mm. The rotor blades were designed as axial-type blades along the mixed flow direction because the partial admission rate was 1.7-2.0%, depending on the flow direction. Performance characteristics were measured with the double- and single-stage rotors to investigate the effect of the second stage at low partial admission. In addition, turbine performance was studied with the flow direction at the nozzle either radially inward or outward. In this experiment, the specific power, torque, and total-to-static efficiency were measured at various rotational speeds to compare turbine performance under different operating conditions. The test results showed that the second stage should be adopted to increase the operating torque when the operating rotational speed is less than the critical rotational speed. The specific torque was improved by 7.8% by using the second stage with a radially inward flow direction.
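
    For reference, the quantities reported in the study can be formed from the measured torque, rotational speed, mass flow, and inlet/exit conditions; the sketch below uses standard turbomachinery definitions (the paper's exact definitions may differ) with placeholder numbers.

```python
# Minimal sketch of the performance quantities measured in the study above,
# using standard turbomachinery definitions. All numerical inputs are placeholders.
import math

def specific_torque(torque, mdot):
    """Torque per unit mass flow rate [N*m/(kg/s)]."""
    return torque / mdot

def specific_power(torque, rpm, mdot):
    """Shaft power per unit mass flow rate [W/(kg/s)]."""
    omega = 2.0 * math.pi * rpm / 60.0
    return torque * omega / mdot

def total_to_static_efficiency(torque, rpm, mdot, cp, T0_in, p0_in, p_exit, gamma):
    """Shaft power divided by ideal power for expansion to the static exit pressure."""
    omega = 2.0 * math.pi * rpm / 60.0
    w_actual = torque * omega / mdot
    w_ideal = cp * T0_in * (1.0 - (p_exit / p0_in) ** ((gamma - 1.0) / gamma))
    return w_actual / w_ideal

if __name__ == "__main__":
    torque, rpm, mdot = 0.45, 20000.0, 0.05     # N*m, rev/min, kg/s (placeholders)
    cp, gamma = 1005.0, 1.4                     # air
    T0_in, p0_in, p_exit = 400.0, 3.0e5, 1.0e5  # K, Pa, Pa (placeholders)
    print("specific torque :", specific_torque(torque, mdot), "N*m/(kg/s)")
    print("specific power  :", specific_power(torque, rpm, mdot), "W/(kg/s)")
    print("eta_ts          :", total_to_static_efficiency(torque, rpm, mdot,
                                                          cp, T0_in, p0_in, p_exit, gamma))
```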

  13. Developing an Attitude Scale for Cursive Handwriting: Validity and Reliability Study

    ERIC Educational Resources Information Center

    Karadag, Ruhan

    2013-01-01

    The aim of this study is to develop an attitude scale for determining the attitudes of primary school pre-service teachers towards cursive handwriting. In the scale development process, a 57-item draft scale on cursive handwriting was formed. While developing the scale, the related literature was searched, pre-service teachers'…
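
    Scale-development studies of this kind typically report an internal-consistency check alongside validity evidence; the sketch below computes Cronbach's alpha on randomly generated Likert responses purely as an illustration, not on the study's data.

```python
# Minimal sketch of an item-reliability check of the kind typically reported in
# scale-development studies such as the one above (Cronbach's alpha). The data
# below are randomly generated placeholders, not the study's responses.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = Likert items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                      # shared attitude factor
    responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 20))), 1, 5)
    print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```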

  14. Digestion performance and microbial community in full-scale methane fermentation of stillage from sweet potato-shochu production.

    PubMed

    Kobayashi, Tsutomu; Tang, Yueqin; Urakami, Toyoshi; Morimura, Shigeru; Kida, Kenji

    2014-02-01

    Sweet potato shochu is a traditional Japanese spirit produced mainly in the South Kyushu area of Japan. The amount of stillage reaches approximately 8 x 10^5 tons per year. Wastewater mainly containing stillage from the production of sweet potato-shochu was treated thermophilically in a full-scale treatment plant using fixed-bed reactors (8 reactors x 283 m^3). Following the addition of Ni2+ and Co2+, the reactors have been stably operated for six years at a high chemical oxygen demand (COD) loading rate of 14 kg/(m^3 x day). Analysis of coenzyme content and microbial communities indicated that similar microbial communities were present in the liquid phase and on the fiber carriers installed in the reactors. Bacteria in the phyla Firmicutes and Bacteroidetes were the dominant bacteria, and Methanosarcina thermophila and Methanothermobacter crinale were the dominant methanogens in the reactors. This study reveals that stillage from sweet potato-shochu production can be treated effectively in a full-scale fixed-bed reactor under thermophilic conditions with the help of Ni2+ and Co2+. The high diversity of the bacterial community and the coexistence of both aceticlastic and hydrogenotrophic methanogens contributed to the excellent fermentation performance. PMID:25076534
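
    A small worked check of the plant's total organic load follows, assuming the stated loading rate applies to the working volume of each of the eight reactors.

```python
# Small worked check of the plant's total organic load, assuming the stated COD
# loading rate applies to the working volume of each fixed-bed reactor.
n_reactors = 8
volume_m3 = 283.0                 # per reactor
loading_kg_per_m3_day = 14.0      # COD loading rate from the study

total_volume = n_reactors * volume_m3                  # 2264 m^3
daily_cod_load = loading_kg_per_m3_day * total_volume  # ~31,700 kg COD per day
print(f"Total reactor volume: {total_volume:.0f} m^3")
print(f"Approximate COD treated: {daily_cod_load / 1000:.1f} t/day")
```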

  15. An Experimental Study of Cutting Performances of Worn Picks

    NASA Astrophysics Data System (ADS)

    Dogruoz, Cihan; Bolukbasi, Naci; Rostami, Jamal; Acar, Cemil

    2016-01-01

    The best means of assessing rock cuttability and the efficiency of the cutting process in mechanical excavation is specific energy (SE), measured in a full-scale rock cutting test. This is especially true for the application of roadheaders, often fitted with drag-type cutting tools. Radial picks or drag bits are changed during operation as they reach a certain amount of wear and become blunt. In this study, full-scale cutting tests in different sedimentary rock types with bits having various degrees of wear were used to evaluate the influence of bit wear on cutting forces and specific energy. The relationship between the amount of wear, as represented by the size of the wear flats at the tip of the bit, and cutting forces as well as specific energy was examined. The influence of various rock properties such as mineral content, uniaxial compressive strength, tensile strength, indentation index, shore hardness, Schmidt hammer hardness, and density on the required SE of cutting at different levels of tool wear was also studied. A preliminary analysis of the data shows that the mean cutting forces increase 2-3 times and SE 4-5 times when cutting with a 4 mm wear flat as compared to cutting with new or sharp wedge-shaped bits. The grain size distribution of the muck from cutting different rock types at different levels of bit wear was analyzed and discussed. The best-fit prediction models for SE based on statistical analysis of the laboratory test results are introduced. These models can be used for estimating the performance of mechanical excavators using radial tools, especially roadheaders, continuous miners, and longwall drum shearers.
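
    The abstract does not give the form of the best-fit prediction models, so the sketch below shows only the general idea with a hypothetical multiple linear regression of SE on wear-flat size and uniaxial compressive strength, fitted to synthetic data.

```python
# Illustrative sketch of the kind of best-fit prediction model described above:
# a multiple linear regression of specific energy (SE) on wear-flat size and
# uniaxial compressive strength (UCS). The data below are synthetic placeholders,
# not the laboratory results from the study.
import numpy as np

rng = np.random.default_rng(1)
n = 40
wear_flat = rng.uniform(0.0, 4.0, n)          # mm
ucs = rng.uniform(20.0, 120.0, n)             # MPa
se = 2.0 + 3.5 * wear_flat + 0.15 * ucs + rng.normal(scale=2.0, size=n)  # MJ/m^3 (synthetic)

# Ordinary least squares fit
X = np.column_stack([np.ones(n), wear_flat, ucs])
coef, *_ = np.linalg.lstsq(X, se, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((se - pred) ** 2) / np.sum((se - se.mean()) ** 2)
print("SE ~ {:.2f} + {:.2f}*wear_flat + {:.3f}*UCS  (R^2 = {:.2f})".format(*coef, r2))
```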

  16. Assessing the Performance of Large Scale Green Roofs and Their Impact on the Urban Microclimate

    NASA Astrophysics Data System (ADS)

    Smalls-Mantey, L.; Foti, R.; Montalto, F. A.

    2015-12-01

    In ultra-urban environments, green roofs offer a feasible solution for adding green infrastructure (GI) in neighborhoods where space is limited. Green roofs offer the typical advantages of urban GI, such as stormwater reduction and management, while providing direct benefits to the buildings on which they are installed through thermal protection and mitigation of temperature fluctuations. At 6.8 acres, the Jacob K. Javits Convention Center (JJCC) in New York City hosts the second largest green roof in the United States. Since its installation in August 2013, the Sustainable Water Resource (SWRE) Laboratory at Drexel University has monitored the climate on and around the green roof by means of four weather stations situated at various roof and ground locations. Using two years of fine-scale climatic data collected at the JJCC, this study explores the energy balance of a large-scale green roof system. Temperature, radiation, evapotranspiration, and wind profiles pre- and post-installation of the JJCC green roof were analyzed and compared across monitored locations, with the goal of identifying the impact of the green roof on the building and the urban microclimate. Our findings indicate that the presence of the green roof not only altered the climatic conditions above the JJCC but also had a measurable impact on the climatic profile of the areas immediately surrounding it. Furthermore, as a result of the mitigation of roof temperature fluctuations and of the cooling provided during warmer months, an improvement in the building's thermal efficiency was also observed. Such findings support the installation of GI as an effective practice in urban settings and are important to the discussion of key issues including energy conservation measures, carbon emission reductions, and the mitigation of urban heat islands.
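
    As a rough illustration of the energy-balance bookkeeping that such monitoring supports, the sketch below converts a daily evapotranspiration depth into a latent heat flux and closes Rn = H + LE + G by residual; all inputs are placeholders, not JJCC measurements.

```python
# Minimal sketch of a surface energy balance for a vegetated roof: net radiation
# partitioned into sensible heat, latent heat (from evapotranspiration), and
# substrate heat flux, Rn = H + LE + G. All inputs are illustrative placeholders.
LAMBDA_V = 2.45e6          # latent heat of vaporization [J/kg] at ~20 C
RHO_WATER = 1000.0         # kg/m^3

def latent_heat_flux(et_mm_per_day):
    """Convert a daily evapotranspiration depth [mm/day] to a mean flux [W/m^2]."""
    et_m_per_s = et_mm_per_day / 1000.0 / 86400.0
    return LAMBDA_V * RHO_WATER * et_m_per_s

if __name__ == "__main__":
    rn = 180.0                      # daily-mean net radiation [W/m^2] (placeholder)
    g = 10.0                        # substrate heat flux [W/m^2] (placeholder)
    le = latent_heat_flux(3.5)      # 3.5 mm/day of ET (placeholder)
    h = rn - le - g                 # sensible heat as the residual
    print(f"LE = {le:.0f} W/m^2, H = {h:.0f} W/m^2 (residual of Rn = H + LE + G)")
```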

  17. Space Launch System Base Heating Test: Sub-Scale Rocket Engine/Motor Design, Development & Performance Analysis

    NASA Technical Reports Server (NTRS)

    Mehta, Manish; Seaford, Mark; Kovarik, Brian; Dufrene, Aaron; Solly, Nathan

    2014-01-01

    ATA-002 Technical Team has successfully designed, developed, tested and assessed the SLS Pathfinder propulsion systems for the Main Base Heating Test Program. Major outcomes of the Pathfinder Test Program: reached 90% of full-scale chamber pressure; achieved all engine/motor design parameter requirements; reached steady plume flow behavior in less than 35 msec; maintained steady chamber pressure for 60 to 100 msec during engine/motor operation; demonstrated model engine/motor performance similar to the full-scale SLS system; mitigated nozzle throat and combustor thermal erosion; test data show good agreement with numerical prediction codes. Next phase of the ATA-002 Test Program: design and development of the SLS OML for the Main Base Heating Test; tweak the BSRM design to optimize performance; tweak the CS-REM design to increase robustness. MSFC Aerosciences and CUBRC have the capability to develop sub-scale propulsion systems to meet desired performance requirements for short-duration testing.

  18. Performance on large-scale science tests: Item attributes that may impact achievement scores

    NASA Astrophysics Data System (ADS)

    Gordon, Janet Victoria

    Significant differences in achievement among ethnic groups persist on the eighth-grade science Washington Assessment of Student Learning (WASL). The WASL measures academic performance in science using both scenario and stand-alone question types. Previous research suggests that presenting target items connected to an authentic context, like scenario question types, can increase science achievement scores especially in underrepresented groups and thus help to close the achievement gap. The purpose of this study was to identify significant differences in performance between gender and ethnic subgroups by question type on the 2005 eighth-grade science WASL. MANOVA and ANOVA were used to examine relationships between gender and ethnic subgroups as independent variables with achievement scores on scenario and stand-alone question types as dependent variables. MANOVA revealed no significant effects for gender, suggesting that the 2005 eighth-grade science WASL was gender neutral. However, there were significant effects for ethnicity. ANOVA revealed significant effects for ethnicity and ethnicity by gender interaction in both question types. Effect sizes were negligible for the ethnicity by gender interaction. Large effect sizes between ethnicities on scenario question types became moderate to small effect sizes on stand-alone question types. This indicates the score advantage the higher performing subgroups had over the lower performing subgroups was not as large on stand-alone question types compared to scenario question types. A further comparison examined performance on multiple-choice items only within both question types. Similar achievement patterns between ethnicities emerged; however, achievement patterns between genders changed in boys' favor. Scenario question types appeared to register differences between ethnic groups to a greater degree than stand-alone question types. These differences may be attributable to individual differences in cognition
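
    The sketch below illustrates, on synthetic scores rather than WASL data, a two-way ANOVA with an ethnicity-by-gender interaction and eta-squared effect sizes of the kind reported above; the group labels and means are arbitrary placeholders.

```python
# Illustrative sketch (synthetic data, not WASL scores) of a two-way ANOVA with an
# ethnicity-by-gender interaction, plus eta-squared effect sizes from the sums of squares.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n_per_cell = 50
groups = [(e, g) for e in ["A", "B", "C"] for g in ["M", "F"]]
rows = []
for eth, gen in groups:
    base = {"A": 400, "B": 385, "C": 395}[eth]          # synthetic group means
    rows.extend({"ethnicity": eth, "gender": gen,
                 "score": base + rng.normal(scale=25)} for _ in range(n_per_cell))
df = pd.DataFrame(rows)

model = smf.ols("score ~ C(ethnicity) * C(gender)", data=df).fit()
table = anova_lm(model, typ=2)
table["eta_sq"] = table["sum_sq"] / table["sum_sq"].sum()   # effect size per term
print(table)
```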

  19. Scaling to 150K cores: recent algorithm and performance engineering developments enabling XGC1 to run at scale

    SciTech Connect

    Mark F. Adams; Seung-Hoe Ku; Patrick Worley; Ed D'Azevedo; Julian C. Cummings; C.S. Chang

    2009-10-01

    Particle-in-cell (PIC) methods have proven to be effective in discretizing the Vlasov-Maxwell system of equations describing the core of toroidal burning plasmas for many decades. Recent physical understanding of the importance of edge physics for stability and transport in tokamaks has led to the development of the first fully toroidal edge PIC code, XGC1. The edge region poses special problems in meshing for PIC methods due to the lack of closed flux surfaces, which makes field-line following meshes and coordinate systems problematic. We present a solution to this problem with a semi-field-line following mesh method in a cylindrical coordinate system. Additionally, modern supercomputers require highly concurrent algorithms and implementations, with all levels of the memory hierarchy being efficiently utilized to realize optimal code performance. This paper presents a mesh and particle partitioning method, suitable to our meshing strategy, for use on highly concurrent cache-based computing platforms.

  20. An Examination of Coach and Player Relationships According to the Adapted LMX 7 Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Caliskan, Gokhan

    2015-01-01

    The current study aims to test the reliability and validity of the Leader-Member Exchange (LMX 7) scale with regard to coach-player relationships in sports settings. A total of 330 professional soccer players from the Turkish Super League as well as from the First and Second Leagues participated in this study. Factor analyses were performed to…
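
    The abstract truncates before describing the factor analyses, so the sketch below shows only a generic first step on synthetic responses: inspecting the eigenvalues of the 7-item correlation matrix to gauge whether a single LMX factor suffices.

```python
# Illustrative sketch (synthetic responses, not the study's data) of a basic
# factor-analytic check: eigenvalues of the 7-item correlation matrix, used to
# judge whether the adapted LMX 7 items load on a single factor
# (Kaiser criterion: eigenvalues > 1).
import numpy as np

rng = np.random.default_rng(7)
n_players, n_items = 330, 7
latent = rng.normal(size=(n_players, 1))                      # one coach-player exchange factor
loadings = rng.uniform(0.6, 0.9, size=(1, n_items))
responses = latent @ loadings + rng.normal(scale=0.6, size=(n_players, n_items))
responses = np.clip(np.round(3 + responses), 1, 5)            # 5-point Likert items

corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Factors with eigenvalue > 1:", int(np.sum(eigenvalues > 1.0)))
```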