Field-scale experiments reveal persistent yield gaps in low-input and organic cropping systems
Kravchenko, Alexandra N.; Snapp, Sieglinde S.; Robertson, G. Philip
2017-01-01
Knowledge of production-system performance is largely based on observations at the experimental plot scale. Although yield gaps between plot-scale and field-scale research are widely acknowledged, their extent and persistence have not been experimentally examined in a systematic manner. At a site in southwest Michigan, we conducted a 6-y experiment to test the accuracy with which plot-scale crop-yield results can inform field-scale conclusions. We compared conventional versus alternative, that is, reduced-input and biologically based–organic, management practices for a corn–soybean–wheat rotation in a randomized complete block-design experiment, using 27 commercial-size agricultural fields. Nearby plot-scale experiments (0.02-ha to 1.0-ha plots) provided a comparison of plot versus field performance. We found that plot-scale yields well matched field-scale yields for conventional management but not for alternative systems. For all three crops, at the plot scale, reduced-input and conventional managements produced similar yields; at the field scale, reduced-input yields were lower than conventional. For soybeans at the plot scale, biological and conventional managements produced similar yields; at the field scale, biological yielded less than conventional. For corn, biological management produced lower yields than conventional in both plot- and field-scale experiments. Wheat yields appeared to be less affected by the experimental scale than corn and soybean. Conventional management was more resilient to field-scale challenges than alternative practices, which were more dependent on timely management interventions; in particular, mechanical weed control. Results underscore the need for much wider adoption of field-scale experimentation when assessing new technologies and production-system performance, especially as related to closing yield gaps in organic farming and in low-resourced systems typical of much of the developing world. PMID:28096409
Statistical analysis of microgravity experiment performance using the degrees of success scale
NASA Technical Reports Server (NTRS)
Upshaw, Bernadette; Liou, Ying-Hsin Andrew; Morilak, Daniel P.
1994-01-01
This paper describes an approach to identify factors that significantly influence microgravity experiment performance. Investigators developed the 'degrees of success' scale to provide a numerical representation of success. A degree of success was assigned to 293 microgravity experiments. Experiment information including the degree of success rankings and factors for analysis was compiled into a database. Through an analysis of variance, nine significant factors in microgravity experiment performance were identified. The frequencies of these factors are presented along with the average degree of success at each level. A preliminary discussion of the relationship between the significant factors and the degree of success is presented.
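The analysis-of-variance step described above can be sketched in a few lines. The factor levels and degree-of-success scores below are hypothetical illustrations, not the study's 293-experiment database.

```python
def one_way_anova(groups):
    """Return (F statistic, between-group df, within-group df) for a one-way ANOVA."""
    k = len(groups)                      # number of factor levels
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical degree-of-success scores at three levels of one factor
levels = [[5, 6, 6, 7], [3, 4, 4, 5], [6, 7, 7, 8]]
f, dfb, dfw = one_way_anova(levels)
print(f, dfb, dfw)
```

Comparing the F statistic against an F(df_between, df_within) critical value would indicate whether the factor significantly influences the outcome.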
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
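As a rough illustration of the abstract's idea of correcting a small-scale prediction with a handful of large-scale runs, here is a conjugate-normal Bayesian update. This is a deliberate simplification, not the paper's multivariate-spline/bootstrap method, and all numbers are hypothetical.

```python
def posterior_mean(prior_mean, prior_var, data, data_var):
    """Conjugate normal update of a mean, with known observation variance."""
    n = len(data)
    xbar = sum(data) / n
    w_prior = 1.0 / prior_var      # precision of the small-scale prior
    w_data = n / data_var          # precision contributed by large-scale runs
    return (w_prior * prior_mean + w_data * xbar) / (w_prior + w_data)

# Small-scale DoE predicts 80% dissolution; three large-scale runs come in lower
corrected = posterior_mean(80.0, 4.0, [76.0, 77.5, 76.5], 2.0)
print(round(corrected, 2))
```

The posterior lands between the small-scale prediction and the large-scale sample mean, weighted by their precisions, which is the qualitative behavior the abstract describes.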
Balestrieri, M; Giaroli, G; Mazzi, M; Bellantuono, C
2006-05-01
Several studies indicate that subjective experience toward antipsychotic drugs (APs) in schizophrenic patients is a key factor in ensuring a smooth recovery from the illness. The principal aim of this study was to establish the psychometric performance of the Subjective Well-being Under Neuroleptic (SWN) scale in its Italian version and to assess, through the SWN scale, the subjective experience of stabilized psychotic outpatients in maintenance with APs. The original short version of SWN, consisting of 20 items, was back translated, and a focus group was also conducted to better improve the comprehension of the scale. The results showed a good performance of the Italian version of the SWN as documented by the internal consistency (Cronbach's alpha; 0.85). A satisfactory subjective experience was reported in the sample of schizophrenic outpatients interviewed (SWN mean total score: 84.95, SD: 17.5). The performance of the SWN scale in the present study was very similar to that reported by Naber et al. in the original validation study. Large multi-center studies are needed to better establish differences in the subjective experience of schizophrenic patients treated with first- and second-generation APs.
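Cronbach's alpha, the internal-consistency statistic reported for the Italian SWN (0.85), can be computed directly. The item scores below are invented for illustration, not SWN data.

```python
def cronbach_alpha(rows):
    """Cronbach's alpha: rows = respondents, columns = item scores."""
    k = len(rows[0])                     # number of items
    def var(xs):                         # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-item questionnaire answered by 4 respondents
scores = [
    [4, 5, 4, 5],
    [2, 3, 3, 2],
    [5, 5, 4, 4],
    [3, 3, 2, 3],
]
print(round(cronbach_alpha(scores), 2))
```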
Detonation failure characterization of non-ideal explosives
NASA Astrophysics Data System (ADS)
Janesheski, Robert S.; Groven, Lori J.; Son, Steven
2012-03-01
Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Current characterization requires large-scale testing to obtain steady detonation wave data for analysis, due to the relatively thick reaction zones. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow for time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.
U.S. perspective on technology demonstration experiments for adaptive structures
NASA Technical Reports Server (NTRS)
Aswani, Mohan; Wada, Ben K.; Garba, John A.
1991-01-01
Evaluation of design concepts for adaptive structures is being performed in support of several focused research programs. These include programs such as Precision Segmented Reflector (PSR), Control Structure Interaction (CSI), and the Advanced Space Structures Technology Research Experiment (ASTREX). Although not specifically designed for adaptive structure technology validation, relevant experiments can be performed using the Passive and Active Control of Space Structures (PACOSS) testbed, the Space Integrated Controls Experiment (SPICE), the CSI Evolutionary Model (CEM), and the Dynamic Scale Model Test (DSMT) Hybrid Scale. In addition to the ground test experiments, several space flight experiments have been planned, including a reduced gravity experiment aboard the KC-135 aircraft, shuttle middeck experiments, and the Inexpensive Flight Experiment (INFLEX).
Bell, M A; Fox, N A
1997-12-01
This work was designed to investigate individual differences in hands-and-knees crawling and frontal brain electrical activity with respect to object permanence performance in 76 eight-month-old infants. Four groups of infants (one prelocomotor and 3 with varying lengths of hands-and-knees crawling experience) were tested on an object permanence scale in a research design similar to that used by Kermoian and Campos (1988). In addition, baseline EEG was recorded and used as an indicator of brain development, as in the Bell and Fox (1992) longitudinal study. Individual differences in frontal and occipital EEG power and in locomotor experience were associated with performance on the object permanence task. Infants successful at A-not-B exhibited greater frontal EEG power and greater occipital EEG power than unsuccessful infants. In contrast to Kermoian and Campos (1988), who noted that long-term crawling experience was associated with higher performance on an object permanence scale, infants in this study with any amount of hands and knees crawling experience performed at a higher level on the object permanence scale than prelocomotor infants. There was no interaction among brain electrical activity, locomotor experience, and object permanence performance. These data highlight the value of electrophysiological research and the need for a brain-behavior model of object permanence performance that incorporates both electrophysiological and behavioral factors.
ERIC Educational Resources Information Center
Coelho, Francisco Antonio, Jr.; Ferreira, Rodrigo Rezende; Paschoal, Tatiane; Faiad, Cristiane; Meneses, Paulo Murce
2015-01-01
The purpose of this study was twofold: to assess evidence of construct validity of the Brazilian Scale of Tutors Competences in the field of Open and Distance Learning, and to examine if variables such as professional experience, perception of the student's learning performance and prior experience influence the development of technical and…
Overview of large scale experiments performed within the LBB project in the Czech Republic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadecka, P.; Lauerova, D.
1997-04-01
During several recent years, NRI Rez has been performing LBB analyses of safety-significant primary circuit pipings of NPPs in the Czech and Slovak Republics. The analyses covered NPPs with reactors WWER 440 Type 230 and 213 and WWER 1000 Type 320. Within the relevant LBB projects, undertaken with the aim of proving that the requirements of LBB are fulfilled, a series of large-scale experiments were performed. The goal of these experiments was to verify the properties of the selected components and to prove the quality and/or conservatism of the assessments used in the LBB analyses. In this poster, a brief overview of the experiments performed in the Czech Republic under the guidance of NRI Rez is presented.
A Functional Measurement Study on Averaging Numerosity
ERIC Educational Resources Information Center
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Scaling of Performance in Liquid Propellant Rocket Engine Combustors
NASA Technical Reports Server (NTRS)
Hulka, James
2008-01-01
The objectives are: a) Re-introduce the concept of scaling; b) Describe the scaling research conducted in the 1950s and early 1960s, and present some of its conclusions; c) Narrow the focus to scaling for performance of combustion devices for liquid propellant rocket engines; and d) Present some results of subscale to full-scale performance from historical programs. Scaling is "the ability to develop new combustion devices with predictable performance on the basis of test experience with old devices." Scaling can be used to develop combustion devices of any thrust size from any thrust size, though it is applied mostly to increase thrust. The objective is to use scaling as a development tool, moving injector design from an "art" to a "science".
PILOT-SCALE STUDIES ON THE INCINERATION OF ELECTRONICS INDUSTRY WASTE
The paper describes experiments performed on a pilot-scale rotary kiln incinerator to investigate the emissions and operational behavior during the incineration of consumer electronics waste. These experiments were targeted at destroying the organic components of printed circuit ...
Fractal Tempo Fluctuation and Pulse Prediction
Rankin, Summer K.; Large, Edward W.; Fink, Philip W.
2010-01-01
We investigated people's ability to adapt to the fluctuating tempi of music performance. In Experiment 1, four pieces from different musical styles were chosen, and performances were recorded from a skilled pianist who was instructed to play with natural expression. Spectral and rescaled range analyses on interbeat interval time-series revealed long-range (1/f type) serial correlations and fractal scaling in each piece. Stimuli for Experiment 2 included two of the performances from Experiment 1, with mechanical versions serving as controls. Participants tapped the beat at ¼- and ⅛-note metrical levels, successfully adapting to large tempo fluctuations in both performances. Participants predicted the structured tempo fluctuations, with superior performance at the ¼-note level. Thus, listeners may exploit long-range correlations and fractal scaling to predict tempo changes in music. PMID:25190901
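Rescaled range (R/S) analysis of the kind applied to the interbeat-interval series can be sketched as follows. The input here is synthetic white noise rather than a performance recording, so the estimated Hurst exponent should sit near 0.5, below the larger values expected for a 1/f-type series.

```python
import math
import random

def rescaled_range(series):
    """R/S statistic for one window: range of cumulative deviations / std."""
    n = len(series)
    mean = sum(series) / n
    devs, running = [], 0.0
    for x in series:
        running += x - mean
        devs.append(running)
    r = max(devs) - min(devs)
    s = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    return r / s

def hurst_exponent(series, window_sizes):
    """Slope of log(mean R/S) vs. log(window size) estimates H."""
    xs, ys = [], []
    for w in window_sizes:
        rs = [rescaled_range(series[i:i + w])
              for i in range(0, len(series) - w + 1, w)]
        xs.append(math.log(w))
        ys.append(math.log(sum(rs) / len(rs)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(2048)]
h = hurst_exponent(noise, [16, 32, 64, 128, 256])
print(round(h, 2))  # theory: H near 0.5 for uncorrelated noise, > 0.5 for 1/f
```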
Multi-mode evaluation of power-maximizing cross-flow turbine controllers
Forbush, Dominic; Cavagnaro, Robert J.; Donegan, James; ...
2017-09-21
A general method for predicting and evaluating the performance of three candidate cross-flow turbine power-maximizing controllers is presented in this paper using low-order dynamic simulation, scaled laboratory experiments, and full-scale field testing. For each testing mode and candidate controller, performance metrics quantifying energy capture (ability of a controller to maximize power), variation in torque and rotation rate (related to drive train fatigue), and variation in thrust loads (related to structural fatigue) are quantified for two purposes. First, for metrics that could be evaluated across all testing modes, we considered the accuracy with which simulation or laboratory experiments could predict performance at full scale. Second, we explored the utility of these metrics to contrast candidate controller performance. For these turbines and set of candidate controllers, energy capture was found to only differentiate controller performance in simulation, while the other explored metrics were able to predict performance of the full-scale turbine in the field with various degrees of success. Finally, effects of scale between laboratory and full-scale testing are considered, along with recommendations for future improvements to dynamic simulations and controller evaluation.
Image quality scaling of electrophotographic prints
NASA Astrophysics Data System (ADS)
Johnson, Garrett M.; Patil, Rohit A.; Montag, Ethan D.; Fairchild, Mark D.
2003-12-01
Two psychophysical experiments were performed scaling overall image quality of black-and-white electrophotographic (EP) images. Six different printers were used to generate the images. There were six different scenes included in the experiment, representing photographs, business graphics, and test-targets. The two experiments were split into a paired-comparison experiment examining overall image quality, and a triad experiment judging overall similarity and dissimilarity of the printed images. The paired-comparison experiment was analyzed using Thurstone's Law, to generate an interval scale of quality, and with dual scaling, to determine the independent dimensions used for categorical scaling. The triad experiment was analyzed using multidimensional scaling to generate a psychological stimulus space. The psychophysical results indicated that the image quality was judged mainly along one dimension and that the relationships among the images can be described with a single dimension in most cases. Regression of various physical measurements of the images to the paired comparison results showed that a small number of physical attributes of the images could be correlated with the psychophysical scale of image quality. However, global image difference metrics did not correlate well with image quality.
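Thurstone's Law (Case V) scaling used for the paired-comparison data works by converting win proportions into z-scores and averaging them into an interval scale. The win counts below are hypothetical, not the experiment's data.

```python
from statistics import NormalDist

wins = [
    [0, 22, 27],   # wins[i][j]: times stimulus i was preferred over j
    [8, 0, 18],    # (out of 30 hypothetical judges)
    [3, 12, 0],
]
n_judges = 30
z = NormalDist().inv_cdf   # standard normal quantile function

k = len(wins)
scale = []
for i in range(k):
    # Mean z-score of stimulus i's win proportions against all others
    zs = [z(wins[i][j] / n_judges) for j in range(k) if j != i]
    scale.append(sum(zs) / len(zs))
print([round(s, 2) for s in scale])
```

With a complete comparison matrix the scale values are centered on zero; only the intervals between stimuli are meaningful.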
Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru
2018-04-01
Scale-up approaches for film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses for several decades. The objective of the present study was to establish a versatile scale-up approach for film coating process applicable to commercial production that is based on critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiment (DoE) approach were performed to find a suitable CQA from surface roughness, contact angle, color difference, and coating film properties by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for film coating process.
Realtime monitoring of bridge scour using remote monitoring technology
DOT National Transportation Integrated Search
2011-02-01
The research performed in this project focuses on the application of instruments including accelerometers and tiltmeters to monitor bridge scour. First, two large-scale laboratory experiments were performed. One experiment is the simulation of a ...
ERIC Educational Resources Information Center
Sebok, Stefanie S.; Roy, Marguerite; Klinger, Don A.; De Champlain, André F.
2015-01-01
Examiner effects and content specificity are two well known sources of construct irrelevant variance that present great challenges in performance-based assessments. National medical organizations that are responsible for large-scale performance based assessments experience an additional challenge as they are responsible for administering…
Analyses of 1/15 scale Creare bypass transient experiments. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kmetyk, L.N.; Buxton, L.D.; Cole, R.K. Jr.
1982-09-01
RELAP4 analyses of several 1/15 scale Creare H-series bypass transient experiments have been done to investigate the effect of using different downcomer nodalizations, physical scales, slip models, and vapor fraction donoring methods. Most of the analyses were thermal equilibrium calculations performed with RELAP4/MOD5, but a few such calculations were done with RELAP4/MOD6 and RELAP4/MOD7, which contain improved slip models. In order to estimate the importance of nonequilibrium effects, additional analyses were performed with TRAC-PD2, RELAP5 and the nonequilibrium option of RELAP4/MOD7. The purpose of these studies was to determine whether results from Westinghouse's calculation of the Creare experiments, which were done with a UHI-modified version of SATAN, were sufficient to guarantee SATAN would be conservative with respect to ECC bypass in full-scale plant analyses.
Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hales, J. D.; Tonks, M. R.; Chockalingam, K.
2015-03-01
Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.
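The flavor of homogenization, computing an effective macroscale constant from microscale phases, can be shown with the closed-form bounds for a 1D two-phase laminate. This is a much simpler analogue of the idea, not the AEH technique itself, and the conductivities and volume fractions are hypothetical.

```python
def effective_k_series(k1, k2, f1):
    """Layers stacked across the heat flux: harmonic (Reuss-type) average."""
    f2 = 1.0 - f1
    return 1.0 / (f1 / k1 + f2 / k2)

def effective_k_parallel(k1, k2, f1):
    """Layers aligned with the heat flux: arithmetic (Voigt-type) average."""
    return f1 * k1 + (1.0 - f1) * k2

# Hypothetical matrix phase (k = 3.0 W/m-K) with 10% low-conductivity phase
k_series = effective_k_series(3.0, 0.3, 0.9)
k_parallel = effective_k_parallel(3.0, 0.3, 0.9)
print(round(k_series, 2), round(k_parallel, 2))
```

Any true effective conductivity of the microstructure lies between these two bounds; AEH solves microscale cell problems to pin down where.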
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugh, C.E.
2001-01-29
Numerous large-scale fracture experiments have been performed over the past thirty years to advance fracture mechanics methodologies applicable to thick-wall pressure vessels. This report first identifies major factors important to nuclear reactor pressure vessel (RPV) integrity under pressurized thermal shock (PTS) conditions. It then covers 20 key experiments that have contributed to identifying fracture behavior of RPVs and to validating applicable assessment methodologies. The experiments are categorized according to four types of specimens: (1) cylindrical specimens, (2) pressurized vessels, (3) large plate specimens, and (4) thick beam specimens. These experiments were performed in laboratories in six different countries. This report serves as a summary of those experiments, and provides a guide to references for detailed information.
Khan, Anzalee; Keefe, Richard S. E.
2017-01-01
Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains of negative symptoms using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source of this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two Positive and Negative Syndrome Scale factors and then with a global negative symptom factor. Finally, we added neurocognitive performance and functional capacity as predictors. Results: The Positive and Negative Syndrome Scale reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total Positive and Negative Syndrome Scale negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The Positive and Negative Syndrome Scale expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the Positive and Negative Syndrome Scale, often referred to as “avolition and anhedonia,” specifically predicted impairments in social outcomes. Further, reduced experience predicted social impairments better than emotional expression or the total Positive and Negative Syndrome Scale negative symptom factor. 
In this cross-sectional study, reduced emotional experience was specifically related with social outcomes, accounting for essentially no variance in work or everyday activities, and being the sole meaningful predictor of impairment in social outcomes. PMID:29410933
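The "percent of variance accounted for" figures above come from regression; for a single predictor, variance explained reduces to the squared Pearson correlation, sketched here with invented scores (the real study used multiple predictors and n = 630).

```python
def r_squared(x, y):
    """Variance in y explained by x: square of the Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical scores: higher symptom factor, worse social functioning
experience_factor = [2, 3, 5, 4, 6, 1, 4, 5]
social_function = [8, 7, 4, 6, 3, 9, 5, 4]
print(round(r_squared(experience_factor, social_function), 2))
```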
QuickEval: a web application for psychometric scaling experiments
NASA Astrophysics Data System (ADS)
Van Ngo, Khai; Storvik, Jehans J.; Dokkeberg, Christopher A.; Farup, Ivar; Pedersen, Marius
2015-01-01
QuickEval is a web application for carrying out psychometric scaling experiments. It offers the possibility of running controlled experiments in a laboratory or large-scale experiments over the web for people all over the world. To the best of our knowledge, it is the first software of its kind in the image quality field, and the first to support the three most common scaling methods: paired comparison, rank order, and category judgement. Hopefully, a side effect of this new software is that it will lower the threshold for performing psychometric experiments, improve the quality of the experiments being carried out, make experiments easier to reproduce, and increase research on image quality in both academia and industry. The web application is available at www.colourlab.no/quickeval.
Familiar Tonal Context Improves Accuracy of Pitch Interval Perception.
Graves, Jackson E; Oxenham, Andrew J
2017-01-01
A fundamental feature of everyday music perception is sensitivity to familiar tonal structures such as musical keys. Many studies have suggested that a tonal context can enhance the perception and representation of pitch. Most of these studies have measured response time, which may reflect expectancy as opposed to perceptual accuracy. We instead used a performance-based measure, comparing participants' ability to discriminate between a "small, in-tune" interval and a "large, mistuned" interval in conditions that involved familiar tonal relations (diatonic, or major, scale notes), unfamiliar tonal relations (whole-tone or mistuned-diatonic scale notes), repetition of a single pitch, or no tonal context. The context was established with a brief sequence of tones in Experiment 1 (melodic context), and a cadence-like two-chord progression in Experiment 2 (harmonic context). In both experiments, performance significantly differed across the context conditions, with a diatonic context providing a significant advantage over no context; however, no correlation with years of musical training was observed. The diatonic tonal context also provided an advantage over the whole-tone scale context condition in Experiment 1 (melodic context), and over the mistuned scale or repetition context conditions in Experiment 2 (harmonic context). However, the relatively small benefit to performance suggests that the main advantage of tonal context may be priming of expected stimuli, rather than enhanced accuracy of pitch interval representation.
NASA Astrophysics Data System (ADS)
Chern, J. D.; Tao, W. K.; Lang, S. E.; Matsui, T.; Mohr, K. I.
2014-12-01
Four six-month (March-August 2014) experiments with the Goddard Multi-scale Modeling Framework (MMF) were performed to study the impacts of different Goddard one-moment bulk microphysical schemes and large-scale forcings on the performance of the MMF. Recently, a new Goddard one-moment bulk microphysics scheme with four ice classes (cloud ice, snow, graupel, and frozen drops/hail) was developed based on cloud-resolving model simulations with large-scale forcings from field campaign observations. The new scheme has been successfully implemented in the MMF, and two MMF experiments were carried out with this new scheme and the old three-ice-class (cloud ice, snow, graupel) scheme. The MMF has global coverage and can rigorously evaluate microphysics performance for different cloud regimes. The results show that the MMF with the new scheme outperformed the old one. The MMF simulations are also strongly affected by the interaction between large-scale and cloud-scale processes. Two MMF sensitivity experiments, with and without nudging large-scale forcings to those of the ERA-Interim reanalysis, were carried out to study the impacts of large-scale forcings. The model-simulated mean and variability of surface precipitation, cloud types, and cloud properties (such as cloud amount, hydrometeor vertical profiles, and cloud water contents) in different geographic locations and climate regimes are evaluated against GPM, TRMM, and CloudSat/CALIPSO satellite observations. The Goddard MMF has also been coupled with the Goddard Satellite Data Simulation Unit (G-SDSU), a system with multi-satellite, multi-sensor, and multi-spectrum satellite simulators. The statistics of MMF-simulated radiances and backscattering can be directly compared with satellite observations to assess the strengths and/or deficiencies of MMF simulations and provide guidance on how to improve the MMF and microphysics.
Sex differences in virtual navigation influenced by scale and navigation experience.
Padilla, Lace M; Creem-Regehr, Sarah H; Stefanucci, Jeanine K; Cashdan, Elizabeth A
2017-04-01
The Morris water maze is a spatial abilities test adapted from the animal spatial cognition literature and has been studied in the context of sex differences in humans because its standard design, which manipulates proximal (close) and distal (far) cues, applies to human navigation. However, virtual Morris water mazes test navigation skills on a scale that is vastly smaller than natural human navigation. Many researchers have argued that navigating in large and small scales is fundamentally different, and small-scale navigation might not simulate natural human navigation. Other work has suggested that navigation experience could influence spatial skills. To address the question of how individual differences influence navigational abilities in differently scaled environments, we employed both a large-scale (146.4 m in diameter) and a traditional-scale (36.6 m in diameter) virtual Morris water maze, along with a novel measure of navigation experience (lifetime mobility). We found sex differences on the small maze in the distal cue condition only, but in both cue conditions on the large maze. Individual differences in navigation experience also modulated performance on the virtual water maze: higher mobility was related to better performance with proximal cues only for females on the small maze, but for both males and females on the large maze.
Prediction of Gas Injection Performance for Heterogeneous Reservoirs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blunt, Martin J.; Orr, Franklin M.
This report describes research carried out in the Department of Petroleum Engineering at Stanford University from September 1997 to September 1998 under the second year of a three-year grant from the Department of Energy on the "Prediction of Gas Injection Performance for Heterogeneous Reservoirs." The research effort is an integrated study of the factors affecting gas injection, from the pore scale to the field scale, and involves theoretical analysis, laboratory experiments, and numerical simulation. The original proposal described research in four areas: (1) pore-scale modeling of three-phase flow in porous media; (2) laboratory experiments and analysis of factors influencing gas injection performance at the core scale, with an emphasis on the fundamentals of three-phase flow; (3) benchmark simulations of gas injection at the field scale; and (4) development of a streamline-based reservoir simulator. Each stage of the research is planned to provide input and insight into the next, so that the end result is an integrated understanding of the key factors affecting field-scale displacements.
Penning, David A; Dartez, Schuyler F
2016-03-01
Constriction is a prey-immobilization technique used by many snakes and is hypothesized to have been important to the evolution and diversification of snakes. However, very few studies have examined the factors that affect constriction performance. We investigated constriction performance in ball pythons (Python regius) by evaluating how peak constriction pressure is affected by snake size, sex, and experience. In one experiment, we tested the ontogenetic scaling of constriction performance and found that snake diameter was the only significant factor determining peak constriction pressure. The number of loops applied in a coil and its interaction with snake diameter did not significantly affect constriction performance. Constriction performance in ball pythons scaled differently than in other snakes that have been studied, and medium to large ball pythons are capable of exerting significantly higher pressures than those shown to cause circulatory arrest in prey. In a second experiment, we tested the effects of experience on constriction performance in hatchling ball pythons over 10 feeding events. By allowing snakes in one test group to gain constriction experience, and manually feeding snakes under sedation in another test group, we showed that experience did not affect constriction performance. During their final (10th) feedings, all pythons constricted similarly and with sufficiently high pressures to kill prey rapidly. At the end of the 10 feeding trials, snakes that were allowed to constrict were significantly smaller than their non-constricting counterparts. © 2016 Wiley Periodicals, Inc.
Harvey, Philip D; Khan, Anzalee; Keefe, Richard S E
2017-12-01
Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source for this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two PANSS factors and then with a global negative symptom factor. Finally, neurocognitive performance and functional capacity were added as predictors. Results: The PANSS reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total PANSS negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The PANSS expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the PANSS, often referred to as "avolition and anhedonia," specifically predicted impairments in social outcomes, and did so better than emotional expression or the total PANSS negative symptom factor. In this cross-sectional study, reduced emotional experience was specifically related to social outcomes, accounting for essentially no variance in work or everyday activities and being the sole meaningful predictor of impairment in social outcomes.
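The variance-partitioning comparison described in this record rests on ordinary regression R². A minimal illustrative sketch of how incremental variance explained is computed when a second predictor is added; the data here are synthetic stand-ins (an "experience" factor driving a "social" outcome, an unrelated "expression" factor), not the study's dataset:

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by an OLS fit on predictors X.

    X: (n, p) array of predictors; an intercept column is added internally.
    """
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return 1.0 - ss_res / ss_tot

# Synthetic illustration: the outcome depends on one factor only.
rng = np.random.default_rng(0)
experience = rng.normal(size=300)   # hypothetical "reduced experience" factor
expression = rng.normal(size=300)   # hypothetical "reduced expression" factor
social = 0.5 * experience + rng.normal(size=300)

r2_experience = r_squared(experience.reshape(-1, 1), social)
r2_both = r_squared(np.column_stack([experience, expression]), social)
```

In-sample R² can never decrease when a predictor is added, so the interesting quantity is how small the increment from `r2_experience` to `r2_both` is; a near-zero increment is the pattern the abstract reports for the expression factor.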
DOT National Transportation Integrated Search
2008-12-01
PROBLEM: The full-scale accelerated pavement testing (APT) provides a unique tool for pavement engineers to directly collect pavement performance and failure data under heavy wheel loading. However, running a full-scale APT experiment is very exp...
DOE Office of Scientific and Technical Information (OSTI.GOV)
N. R. Mann; T. A. Todd; K. N. Brewer
1999-04-01
Development of waste treatment processes for the remediation of radioactive wastes is currently underway. A number of experiments were performed at the Idaho Nuclear Technology and Environmental Center (INTEC), located at the Idaho National Engineering and Environmental Laboratory (INEEL), with the commercially available sorbent material IONSIV IE-911, a crystalline silicotitanate (CST) manufactured by UOP LLC. The purpose of this work was to evaluate the removal efficiency, sorbent capacity, and selectivity of CST for removing Cs-137 from actual and simulated acidic tank waste, in addition to dissolved pilot-plant calcine solutions. The scope of this work included batch contact tests performed with non-radioactive dissolved Al and Run-64 pilot-plant calcines, in addition to simulants representing the average composition of tank waste. Small-scale column tests were performed with actual INEEL tank WM-183 waste, tank waste simulant, and dissolved Al and Run-64 pilot-plant calcine solutions. Small-scale column experiments using actual WM-183 tank waste resulted in fifty-percent Cs-137 breakthrough at approximately 589 bed volumes. Small-scale column experiments using the tank waste simulant displayed fifty-percent Cs-137 breakthrough at approximately 700 bed volumes. Small-scale column experiments using dissolved Al calcine simulant displayed fifty-percent Cs-137 breakthrough at approximately 795 bed volumes. Column experiments with dissolved Run-64 pilot-plant calcine did not reach fifty-percent breakthrough during the test.
Approximate similarity principle for a full-scale STOVL ejector
NASA Astrophysics Data System (ADS)
Barankiewicz, Wendy S.; Perusek, Gail P.; Ibrahim, Mounir B.
1994-03-01
Full-scale ejector experiments are expensive and difficult to implement at engine exhaust temperatures. For this reason, the utility of using similarity principles, in particular the Munk and Prim principle for isentropic flow, was explored. Static performance test data for a full-scale thrust-augmenting ejector were analyzed for primary flow temperatures up to 1560 °R. At different primary temperatures, exit pressure contours were compared for similarity. A nondimensional flow parameter was then used to eliminate the primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments. Under the assumption that an appropriate similarity principle can be established, properly chosen performance parameters were found to be similar for both hot flow and cold flow model tests.
Are Psychotic Experiences Related to Poorer Reflective Reasoning?
Mækelæ, Martin J.; Moritz, Steffen; Pfuhl, Gerit
2018-01-01
Background: Cognitive biases play an important role in the formation and maintenance of delusions. These biases are indicators of a weak reflective mind, or reduced engagement in reflective and deliberate reasoning. In three experiments, we tested whether a bias to accept nonsense statements as profound, to treat metaphorical statements as literal, and to fail to suppress intuitive responses is related to psychotic-like experiences. Methods: We tested deliberate reasoning and psychotic-like experiences in the general population and in patients with a former psychotic episode. Deliberate reasoning was assessed with the bullshit receptivity scale, the ontological confabulation scale, and the cognitive reflection test (CRT). We also measured algorithmic performance with the Berlin numeracy test and the wordsum test. Psychotic-like experiences were measured with the Community Assessment of Psychic Experiences (CAPE-42) scale. Results: Psychotic-like experiences were positively correlated with greater receptivity toward bullshit, more ontological confabulations, and a lower score on the CRT, but not with algorithmic task performance. In the patient group, higher psychotic-like experiences significantly correlated with higher bullshit receptivity. Conclusion: Reduced deliberate reasoning may contribute to the formation of delusions and may be a general thinking bias largely independent of a person's general intelligence. Acceptance of bullshit may increase with the number of positive symptoms a patient has, contributing to the maintenance of the delusions. PMID:29483886
ERIC Educational Resources Information Center
Özenç, Emine Gül; Dogan, M. Cihangir
2014-01-01
This study aims to perform a validity-reliability test by developing the Functional Literacy Experience Scale based upon Ecological Theory (FLESBUET) for primary education students. The study group includes 209 fifth grade students at Sabri Taskin Primary School in the Kartal District of Istanbul, Turkey during the 2010-2011 academic year.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.; Moridis, G.J.; Pruess, K.
1994-01-01
The emplacement of liquids under controlled viscosity conditions is investigated by means of numerical simulations. Design calculations are performed for a laboratory experiment on a decimeter scale and a field experiment on a meter scale. The purpose of the laboratory experiment is to study the behavior of multiple grout plumes when injected into a porous medium. The calculations for the field trial aim at designing a grout injection test from a vertical well in order to create a grout plume of significant extent in the subsurface.
Irradiation performance of HTGR fuel rods in HFIR experiments HRB-7 and -8
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valentine, K.H.; Homan, F.J.; Long, E.L. Jr.
1977-05-01
The HRB-7 and -8 experiments were designed as a comprehensive test of mixed thorium-uranium oxide fissile particles with Th:U ratios from 0 to 8 for HTGR recycle application. In addition, fissile particles derived from weak-acid resin (WAR) were tested as a potential backup type of fissile particle for HTGR recycle. These experiments were conducted at two temperatures (1250 and 1500°C) to determine the influence of operating temperature on the performance parameters studied. The minor objectives were comparison of advanced coating designs in which ZrC replaced SiC in the Triso design, testing of fuel coated in laboratory-scale equipment against fuel coated in production-scale coaters, comparison of the performance of ²³³U-bearing particles with that of ²³⁵U-bearing particles, comparison of the performance of Biso coatings with Triso coatings for particles containing the same type of kernel, and testing of multijunction tungsten-rhenium thermocouples. All objectives were accomplished. As a result of these experiments, the mixed thorium-uranium oxide fissile kernel was replaced by a WAR-derived particle in the reference recycle design. A tentative decision to make this change had been reached before the HRB-7 and -8 capsules were examined, and the results of the examination confirmed that decision. Even maximum dilution (Th:U approximately equal to 8) of the mixed thorium-uranium oxide kernel was insufficient to prevent amoeba migration of the kernels at rates that are unacceptable in a large HTGR. Other results showed the performance of ²³³U-bearing particles to be identical to that of ²³⁵U-bearing particles, the performance of fuel coated in production-scale equipment to be at least as good as that of fuel coated in laboratory-scale coaters, the performance of ZrC coatings to be very promising, and Biso coatings to be inferior to Triso coatings with respect to fission product retention.
Barth, Gilbert R.; Illangasekare, T.H.; Rajaram, H.
2003-01-01
This work considers the applicability of conservative tracers for detecting high-saturation nonaqueous-phase liquid (NAPL) entrapment in heterogeneous systems. For this purpose, a series of experiments and simulations was performed using a two-dimensional heterogeneous system (10 × 1.2 m), which represents an intermediate scale between laboratory and field scales. Tracer tests performed prior to injecting the NAPL provide the baseline response of the heterogeneous porous medium. Two NAPL spill experiments were performed and the entrapped-NAPL saturation distribution measured in detail using a gamma-ray attenuation system. Tracer tests following each of the NAPL spills produced breakthrough curves (BTCs) reflecting the impact of entrapped NAPL on conservative transport. To evaluate significance, the impact of NAPL entrapment on the conservative-tracer breakthrough curves was compared to simulated breakthrough curve variability for different realizations of the heterogeneous distribution. Analysis of the results reveals that the NAPL entrapment has a significant impact on the temporal moments of conservative-tracer breakthrough curves. © 2003 Elsevier B.V. All rights reserved.
Suggestibility and signal detection performance in hallucination-prone students.
Alganami, Fatimah; Varese, Filippo; Wagstaff, Graham F; Bentall, Richard P
2017-03-01
Auditory hallucinations are associated with signal detection biases. We examine the extent to which suggestions influence performance on a signal detection task (SDT) in highly hallucination-prone and low hallucination-prone students. We also explore the relationship between trait suggestibility, dissociation and hallucination proneness. In two experiments, students completed on-line measures of hallucination proneness (the revised Launay-Slade Hallucination Scale; LSHS-R), trait suggestibility (Inventory of Suggestibility) and dissociation (Dissociative Experiences Scale-II). Students in the upper and lower tertiles of the LSHS-R performed an auditory SDT. Prior to the task, suggestions were made pertaining to the number of expected targets (Experiment 1, N = 60: high vs. low suggestions; Experiment 2, N = 62, no suggestion vs. high suggestion vs. no voice suggestion). Correlational and regression analyses indicated that trait suggestibility and dissociation predicted hallucination proneness. Highly hallucination-prone students showed a higher SDT bias in both studies. In Experiment 1, both bias scores were significantly affected by suggestions to the same degree. In Experiment 2, highly hallucination-prone students were more reactive to the high suggestion condition than the controls. Suggestions may affect source-monitoring judgments, and this effect may be greater in those who have a predisposition towards hallucinatory experiences.
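The signal detection bias discussed in this record comes from standard signal detection theory, in which sensitivity (d′) and response bias (criterion c) are computed from hit and false-alarm rates. A minimal stdlib-only sketch of those textbook formulas; the function, counts, and log-linear correction are illustrative assumptions, not the authors' exact analysis:

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (criterion c) from raw counts.

    Applies the log-linear correction (add 0.5 to each cell) so that a rate of
    exactly 0 or 1 never produces an infinite z-score.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

A negative criterion indicates a liberal bias (a tendency to report a signal), which is the kind of shift a suggestion about the number of expected targets would produce; d′ separates that bias from true perceptual sensitivity.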
Analytical investigation of critical phenomena in MHD power generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-07-31
Critical phenomena in the Arnold Engineering Development Center (AEDC) High Performance Demonstration Experiment (HPDE) and the US U-25 experiment are analyzed, along with the performance of a NASA-specified 500 MW(th) flow train and computations concerning critical issues for the scale-up of MHD generators. The HPDE is characterized by computational simulations of both the nominal conditions and the conditions during the experimental runs. The steady-state performance is discussed, along with the Hall voltage overshoots during the start-up and shutdown transients. The results of simulations of the HPDE runs with codes from the Q3D and TRANSIENT code families are compared to the experimental results, and the simulations are in good agreement with the experimental data. Additional critical phenomena analyzed in the AEDC/HPDE are the optimal load schedules, parametric variations, the parametric dependence of the electrode voltage drops, the boundary layer behavior, near-electrode phenomena with finite electrode segmentation, and current distribution in the end regions. The US U-25 experiment is characterized by computational simulations of the nominal operating conditions. The steady-state performance for the nominal design of the US U-25 experiment is analyzed, as is the dependence of performance on the mass flow rate. A NASA-specified 500 MW(th) MHD flow train is characterized for computer simulation, and the electrical, transport, and thermodynamic properties at the inlet plane are analyzed. Issues for the scale-up of MHD power trains are discussed, and the AEDC/HPDE performance is analyzed to compare the experimental results to scale-up rules.
Morrow, Kathleen M.; Fava, Joseph L.; Rosen, Rochelle K.; Vargas, Sara; Shaw, Julia G.; Kojic, E. Milu; Kiser, Patrick F.; Friend, David R.; Katz, David F.
2014-01-01
The effectiveness of any biomedical prevention technology relies on both biological efficacy and behavioral adherence. Microbicide trials have been hampered by low adherence, limiting the ability to draw meaningful conclusions about product effectiveness. Central to this problem may be an inadequate conceptualization of how product properties themselves impact user experience and adherence. Our goal is to expand the current microbicide development framework to include product “perceptibility,” the objective measurement of user sensory perceptions (i.e., sensations) and experiences of formulation performance during use. For vaginal gels, a set of biophysical properties, including rheological properties and measures of spreading and retention, may critically impact user experiences. Project LINK sought to characterize the user experience in this regard, and to validate measures of user sensory perceptions and experiences (USPEs) using four prototype topical vaginal gel formulations designed for pericoital use. Perceptibility scales captured a range of USPEs during the product application process (five scales), ambulation after product insertion (six scales), and during sexual activity (eight scales). Comparative statistical analyses provided empirical support for hypothesized relationships between gel properties, spreading performance, and the user experience. Project LINK provides preliminary evidence for the utility of evaluating USPEs, introducing a paradigm shift in the field of microbicide formulation design. We propose that these user sensory perceptions and experiences initiate cognitive processes in users resulting in product choice and willingness-to-use. By understanding the impact of USPEs on that process, formulation development can optimize both drug delivery and adherence. PMID:24180360
Scale effects and a method for similarity evaluation in micro electrical discharge machining
NASA Astrophysics Data System (ADS)
Liu, Qingyu; Zhang, Qinhe; Wang, Kan; Zhu, Guang; Fu, Xiuzhuo; Zhang, Jianhua
2016-08-01
Electrical discharge machining (EDM) is a promising non-traditional micro-machining technology that offers a vast array of applications in the manufacturing industry. However, scale effects occur when machining at the micro-scale, which can make it difficult to predict and optimize the machining performance of micro EDM. A new concept of "scale effects" in micro EDM is proposed; these scale effects capture the difference in machining performance between micro EDM and conventional macro EDM. Similarity theory is presented to evaluate the scale effects in micro EDM. Single-factor experiments are conducted, and the experimental results are analyzed by discussing the similarity difference and the similarity precision. The results show that the outputs of scale effects in micro EDM do not change linearly with the discharge parameters. The similarity precision of machining time increases significantly when the capacitance or open-circuit voltage is scaled down. This indicates that the lower the scale of the discharge parameter, the greater the deviation of the non-geometrical similarity degree from the geometrical similarity degree, meaning that a micro EDM system with lower discharge energy experiences stronger scale effects. The largest similarity difference is 5.34, while the largest similarity precision is as high as 114.03. The similarity precision is therefore more effective than the similarity difference in reflecting the scale effects and their fluctuation. Consequently, similarity theory is suitable for evaluating the scale effects in micro EDM. This research offers engineering value for optimizing the machining parameters and improving the machining performance of micro EDM.
Comparison of batch sorption tests, pilot studies, and modeling for estimating GAC bed life.
Scharf, Roger G; Johnston, Robert W; Semmens, Michael J; Hozalski, Raymond M
2010-02-01
Saint Paul Regional Water Services (SPRWS) in Saint Paul, Minnesota, experiences annual taste and odor episodes during the warm summer months. These episodes are attributed primarily to geosmin produced by cyanobacteria growing in the chain of lakes used to convey and store the source water pumped from the Mississippi River. Batch experiments, pilot-scale experiments, and model simulations were performed to determine the geosmin removal performance and bed life of a granular activated carbon (GAC) filter-sorber. Using batch adsorption isotherm parameters, the estimated bed life for the GAC filter-sorber ranged from 920 to 1241 days when challenged with a constant concentration of 100 ng/L of geosmin. The estimated bed life obtained using the AdDesignS model and the actual pilot-plant loading history was 594 days. Based on the pilot-scale GAC column data, the actual bed life (>714 days) was much longer than the simulated values because bed life was extended by biological degradation of geosmin. The continuous feeding of high concentrations of geosmin (100-400 ng/L) in the pilot-scale experiments enriched for a robust geosmin-degrading culture that was sustained when the geosmin feed was turned off for 40 days. It is unclear, however, whether a geosmin-degrading culture can be established in a full-scale filter that experiences taste and odor episodes for only 1 or 2 months per year. The results of this research indicate that care must be exercised in the design and interpretation of pilot-scale experiments and model simulations for predicting taste and odor removal in full-scale GAC filter-sorbers. Both adsorption and the potential for biological degradation must be considered to estimate GAC bed life under the intermittent geosmin loading typically experienced by full-scale systems. © 2009 Elsevier Ltd. All rights reserved.
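The isotherm-based bed-life estimates in this record follow from a simple equilibrium mass balance: time to exhaustion equals total adsorption capacity divided by the contaminant mass loading rate. A first-cut sketch of that balance; the function name, units, and example numbers are illustrative assumptions, not values from the study (which used measured isotherms and the AdDesignS model), and the calculation deliberately ignores kinetics, competing organic matter, and the biodegradation that extended the observed bed life:

```python
def gac_bed_life_days(q_ug_per_g, carbon_mass_kg, c0_ng_per_l, flow_l_per_day):
    """Idealized GAC bed life in days from an equilibrium mass balance.

    q_ug_per_g     : equilibrium adsorption capacity at the influent
                     concentration (ug adsorbate per g GAC)
    carbon_mass_kg : mass of GAC in the bed (kg)
    c0_ng_per_l    : influent concentration (ng/L)
    flow_l_per_day : volumetric flow treated (L/day)
    """
    capacity_ng = q_ug_per_g * 1e3 * carbon_mass_kg * 1e3  # ug/g -> ng/g, kg -> g
    loading_ng_per_day = c0_ng_per_l * flow_l_per_day
    return capacity_ng / loading_ng_per_day
```

The balance makes the record's qualitative point explicit: bed life scales inversely with the influent concentration, so a bed challenged continuously at 100 ng/L exhausts far sooner than one seeing short seasonal episodes.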
NASA Astrophysics Data System (ADS)
Kornev, V. A.; Askinazi, L. G.; Belokurov, A. A.; Chernyshev, F. V.; Lebedev, S. V.; Melnik, A. D.; Shabelsky, A. A.; Tukachinsky, A. S.; Zhubr, N. A.
2017-12-01
The paper presents DD neutron flux measurements in neutral beam injection (NBI) experiments aimed at optimizing the target plasma and heating beam parameters to achieve maximum neutron flux in the TUMAN-3M compact tokamak. Two ion sources of different design were used, which allowed the influences of the beam’s energy and power on the neutron rate to be separated. Using the database of experiments performed with the two ion sources, an empirical scaling was derived describing the dependence of the neutron rate on the target plasma and heating beam parameters. Numerical modeling of the neutron rate in the NBI experiments, performed using the ASTRA transport code, showed good agreement with the scaling.
CRBR pump water test experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, M.E.; Huber, K.A.
1983-01-01
The hydraulic design features and water testing of the hydraulic scale model and the prototype of the sodium pumps used in the primary and intermediate sodium loops of the Clinch River Breeder Reactor Plant (CRBRP) are described. The results of both the hydraulic scale model tests and the prototype pump tests are presented and discussed.
Pretest characterization of WIPP experimental waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.; Davis, H.; Drez, P.E.
The Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, is an underground repository designed for the storage and disposal of transuranic (TRU) wastes from US Department of Energy (DOE) facilities across the country. The Performance Assessment (PA) studies for WIPP address compliance of the repository with applicable regulations, and include full-scale experiments to be performed at the WIPP site. These experiments are the bin-scale and alcove tests to be conducted by Sandia National Laboratories (SNL). Prior to conducting these experiments, the waste to be used in these tests needs to be characterized to provide data on the initial conditions for these experiments. This characterization is referred to as the Pretest Characterization of WIPP Experimental Waste, and is also expected to provide input to other programmatic efforts related to waste characterization. The purpose of this paper is to describe the pretest waste characterization activities currently in progress for the WIPP bin-scale waste, and to discuss the program plan and specific analytical protocols being developed for this characterization. The relationship between different programs and documents related to waste characterization efforts is also highlighted in this paper.
Flooding Fragility Experiments and Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Curtis L.; Tahhan, Antonio; Muchmore, Cody
2016-09-01
This report describes the work that has been performed on flooding fragility: both the experimental tests being carried out and the probabilistic fragility predictive models being produced in order to use the test results. Flooding experiments involving full-scale doors have commenced in the Portal Evaluation Tank. The goal of these experiments is to develop a full-scale component flooding experiment protocol and to acquire data that can be used to create Bayesian regression models representing the fragility of these components. This work is in support of the Risk-Informed Safety Margin Characterization (RISMC) Pathway external hazards evaluation research and development.
ERIC Educational Resources Information Center
Sisco, Frankie H.; Anderson, Richard J.
1980-01-01
One hundred deaf children with deaf parents performed significantly better than 100 deaf children with hearing parents on all performance subtests of the Wechsler Intelligence Scale for Children-Revised. (CL)
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1982-01-01
The Jet Propulsion Laboratory has developed a number of photovoltaic test and measurement specifications to guide the development of modules toward the requirements of future large-scale applications. Experience with these specifications and the extensive module measurement and testing that has accompanied their use is examined. Conclusions are drawn relative to three aspects of product certification: performance measurement, endurance testing and safety evaluation.
NASA Astrophysics Data System (ADS)
Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi
2016-08-01
The simulation performance over complex building clusters of a wind simulation model (the Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (the Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel data and the NJU-FZ (Nanjing University-Fang Zhuang neighborhood) wind tunnel data. The results show that the wind model reproduces the vortices triggered by urban buildings well, and that the flow patterns in urban street canyons and building clusters are also well represented. Owing to the complex shapes of the buildings and their distributions, simulation discrepancies from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared with traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields for a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min on a personal computer.
A wind tunnel study on the effects of complex topography on wind turbine performance
NASA Astrophysics Data System (ADS)
Howard, Kevin; Hu, Stephen; Chamorro, Leonardo; Guala, Michele
2012-11-01
A set of wind tunnel experiments was conducted to study the response of a wind turbine under flow conditions typically observed at the wind farm scale, in complex terrain. A scale-model wind turbine was placed in a fully developed turbulent boundary layer flow obtained in the SAFL Wind Tunnel. Experiments focused on the performance of a turbine model under the effects induced by a second upwind turbine or by a three-dimensional, sinusoidal hill peaking at the turbine hub height. High-frequency measurements of fluctuating streamwise and wall-normal velocities were obtained with an X-wire anemometer simultaneously with the rotor angular velocity and the turbine(s) voltage output. Velocity measurements in the wake of the first turbine and of the hill were used to determine the inflow conditions for the downwind test turbine. Turbine performance was inferred from the mean and fluctuating voltage statistics. Specific experiments were devoted to relating the mean voltage to the mean hub velocity, and the fluctuating voltage to the unsteadiness in the rotor kinematics induced by the perturbed (hill or turbine) or unperturbed (boundary layer) large scales of the incoming turbulent flow. Results show that the voltage signal can be used to assess turbine performance in complex flows.
Centrifuge impact cratering experiments: Scaling laws for non-porous targets
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.
1987-01-01
This research is a continuation of an ongoing program whose objective is to perform experiments and to develop scaling relationships for large body impacts onto planetary surfaces. The development of the centrifuge technique has been pioneered by the present investigator and is used to provide experimental data for actual target materials of interest. With both powder and gas guns mounted on a rotor arm, it is possible to match various dimensionless similarity parameters, which have been shown to govern the behavior of large scale impacts. Current work is directed toward the determination of scaling estimates for nonporous targets. The results are presented in summary form.
Bullock, Robin J; Aggarwal, Srijan; Perkins, Robert A; Schnabel, William
2017-04-01
In the event of a marine oil spill in the Arctic, government agencies, industry, and the public have a stake in the successful implementation of oil spill response. Because large spills are rare events, oil spill response techniques are often evaluated with laboratory and meso-scale experiments. The experiments must yield scalable information sufficient to understand the operability and effectiveness of a response technique under actual field conditions. Since in-situ burning augmented with surface collecting agents ("herders") is one of the few viable response options in ice infested waters, a series of oil spill response experiments were conducted in Fairbanks, Alaska, in 2014 and 2015 to evaluate the use of herders to assist in-situ burning and the role of experimental scale. This study compares burn efficiency and herder application for three experimental designs for in-situ burning of Alaska North Slope crude oil in cold, fresh waters with ∼10% ice cover. The experiments were conducted in three project-specific constructed venues with varying scales (surface areas of approximately 0.09 square meters, 9 square meters and 8100 square meters). The results from the herder assisted in-situ burn experiments performed at these three different scales showed good experimental scale correlation and no negative impact due to the presence of ice cover on burn efficiency. Experimental conclusions are predominantly associated with application of the herder material and usability for a given experiment scale to make response decisions. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Neiman, Robert A.
2002-01-01
Describes the use of small-scale change projects by Philadelphia's Department of Human Services to generate new outcomes and new skills and experience that improved basic day-to-day operations, strategic planning, and cumulatively produced larger-scale changes in service, financing, and performance. (Author/LRW)
Diffraction-based analysis of tunnel size for a scaled external occulter testbed
NASA Astrophysics Data System (ADS)
Sirbu, Dan; Kasdin, N. Jeremy; Vanderbei, Robert J.
2016-07-01
For performance verification of an external occulter mask (also called a starshade), scaled testbeds have been developed to measure the suppression of the occulter shadow in the pupil plane and contrast in the image plane. For occulter experiments the scaling is typically performed by maintaining an equivalent Fresnel number. The original Princeton occulter testbed was oversized with respect to both input beam and shadow propagation to limit any diffraction effects due to finite testbed enclosure edges; however, to operate at realistic space-mission equivalent Fresnel numbers an extended testbed is currently under construction. With the longer propagation distances involved, diffraction effects due to the edge of the tunnel must now be considered in the experiment design. Here, we present a diffraction-based model of two separate tunnel effects. First, we consider the effect of tunnel-edge induced diffraction ringing upstream from the occulter mask. Second, we consider the diffraction effect due to clipping of the output shadow by the tunnel downstream from the occulter mask. These calculations are performed for a representative point design relevant to the new Princeton occulter experiment, but we also present an analytical relation that can be used for other propagation distances.
Bannai, Kurara; Kase, Takayoshi; Endo, Shintaro; Oishi, Kazou
2016-12-01
The purpose of this study was to investigate the relationships among anxiety prior to actual performance (music performance anxiety, MPA), mental and physical negative responses during performance (agari), and depressive tendencies in Japanese college students majoring in music. Participants were 171 music majors (33 males, 138 females, 20.6±1.7 yrs). They rated the degree of self-perceived MPA before their performance on a scale ranging from 0-100%. The Features of Agari Experience Questionnaire was used to assess agari response levels during standard performances, and the Japanese version of the Center for Epidemiologic Studies Depression Scale (CES-D) was used to measure depressive tendencies. Path analysis showed that MPA levels were positively related to agari scores, which were positively related to CES-D scores. Mediation analysis found a significant indirect effect of MPA scores on CES-D scores via the agari scores. These results suggest that MPA first occurs before an actual music performance and evokes agari, which in turn may cause an increase in depressive tendencies.
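The mediation analysis described above reduces, in its simplest form, to the product-of-coefficients estimate of an indirect effect. The sketch below is a minimal illustration on synthetic data; the effect sizes, noise levels, and variable names are invented for illustration and are not the study's values:

```python
import numpy as np

def indirect_effect(x, m, y):
    """Simple mediation: indirect effect of x on y through mediator m.

    a = slope of m ~ x; b = slope of y ~ m, controlling for x.
    The indirect effect is the classic product a * b.
    """
    # a-path: regress the mediator on the predictor (with intercept)
    A = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(A, m, rcond=None)[0][1]
    # b-path: regress the outcome on the mediator, controlling for the predictor
    B = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(B, y, rcond=None)[0][1]
    return a * b

rng = np.random.default_rng(0)
mpa = rng.uniform(0, 100, 200)               # self-rated anxiety, 0-100%
agari = 0.5 * mpa + rng.normal(0, 5, 200)    # mediator driven by MPA
cesd = 0.3 * agari + rng.normal(0, 3, 200)   # depression driven by agari
print(round(indirect_effect(mpa, agari, cesd), 2))  # close to 0.5 * 0.3 = 0.15
```

With real data one would also bootstrap a confidence interval for the product, since its sampling distribution is generally not normal.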
Performance data for a terrestrial solar photovoltaic/water electrolysis experiment
NASA Technical Reports Server (NTRS)
Costogue, E. N.; Yasui, R. K.
1977-01-01
A description is presented of the equipment used in the experiment, taking into account the surplus solar panel from the Mariner 4 spacecraft which was used as a solar array source and an electrolytic hydrogen generator. Attention is also given to operational considerations and performance data, system considerations and aspects of optimization, and large-scale hydrogen production considerations.
Haugum, Mona; Danielsen, Kirsten; Iversen, Hilde Hestad; Bjertnaes, Oyvind
2014-12-01
An important goal for national and large-scale surveys of user experiences is quality improvement. However, large-scale surveys are normally conducted by a professional external surveyor, creating an institutionalized division between the measurement of user experiences and the quality work that is performed locally. The aim of this study was to identify and describe scientific studies related to the use of national and large-scale surveys of user experiences in local quality work. Ovid EMBASE, Ovid MEDLINE, Ovid PsycINFO and the Cochrane Database of Systematic Reviews were searched. Scientific publications about user experiences and satisfaction were included if they addressed the extent to which data from national and other large-scale user-experience surveys are used for local quality work in the health services. Themes of interest were identified and a narrative analysis was undertaken. Thirteen publications were included; all differed substantially in several characteristics. The results show that large-scale surveys of user experiences are used in local quality work. The types of follow-up activity varied considerably, from conducting a follow-up analysis of user-experience survey data to information sharing and more systematic efforts to use the data as a basis for improving the quality of care. This review shows that large-scale surveys of user experiences are used in local quality work. However, there is a need for more, better and standardized research in this field. The considerable variation in follow-up activities points to the need for systematic guidance on how to use data in local quality work. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Study of the Influence of Key Process Parameters on Furfural Production.
Fele Žilnik, Ljudmila; Grilc, Viktor; Mirt, Ivan; Cerovečki, Željko
2016-01-01
The present work reports the influence of key process variables on furfural formation from leached chestnut-wood chips in a pressurized reactor. The effects of temperature, pressure, type and concentration of the catalyst solution, the steam flow rate or stripping module, the moisture content of the wood particles, and geometric characteristics such as size and type of the reactor, particle size and bed height were considered systematically. Only a one-stage process was taken into consideration. Lab-scale and pilot-scale studies were performed. The results of the non-catalysed laboratory experiments were compared with an actual non-catalysed (auto-catalysed) industrial process and with experiments at the pilot scale, the latter yielding 28% more furfural than the others. Application of sulphuric acid as catalyst, in an amount of 0.03-0.05 g (H2SO4 100%)/g d.m. (dry material), enables higher production of furfural at lower temperature and pressure of steam in a shorter reaction time. Pilot-scale catalysed experiments have revealed very good performance for furfural formation under less severe operating conditions, with a maximum furfural yield as much as 88% of the theoretical value.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
Increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. 
At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Microfluidic biolector-microfluidic bioprocess control in microtiter plates.
Funke, Matthias; Buchenauer, Andreas; Schnakenberg, Uwe; Mokwa, Wilfried; Diederichs, Sylvia; Mertens, Alan; Müller, Carsten; Kensy, Frank; Büchs, Jochen
2010-10-15
In industrial-scale biotechnological processes, the active control of the pH-value combined with the controlled feeding of substrate solutions (fed-batch) is the standard strategy to cultivate both prokaryotic and eukaryotic cells. In contrast, for small-scale cultivations, much simpler batch experiments with no process control are performed. This lack of process control often prevents researchers from scaling fermentation experiments up or down, because the microbial metabolism, and thereby the growth and production kinetics, change drastically depending on the cultivation strategy applied. While small-scale batches are typically performed highly parallel and in high throughput, large-scale cultivations demand sophisticated equipment for process control, which is in most cases costly and difficult to handle. Currently, there is no technical system on the market that realizes simple process control in high throughput. The novel concept of a microfermentation system described in this work combines a fiber-optic online-monitoring device for microtiter plates (MTPs)--the BioLector technology--with microfluidic control of cultivation processes in volumes below 1 mL. In the microfluidic chip, a micropump is integrated to realize distinct substrate flow rates during fed-batch cultivation at microscale. Hence, a cultivation system with several distinct advantages could be established: (1) high information output on a microscale; (2) many experiments can be performed in parallel and be automated using MTPs; (3) the system is user-friendly and can easily be transferred to a disposable single-use system. This article elucidates this new concept and illustrates applications in fermentations of Escherichia coli under pH-controlled and fed-batch conditions in shaken MTPs. Copyright 2010 Wiley Periodicals, Inc.
Preferential flow across scales: how important are plot scale processes for a catchment scale model?
NASA Astrophysics Data System (ADS)
Glaser, Barbara; Jackisch, Conrad; Hopp, Luisa; Klaus, Julian
2017-04-01
Numerous experimental studies have shown the importance of preferential flow for solute transport and runoff generation. As a consequence, various approaches exist to incorporate preferential flow in hydrological models. However, few studies have applied models that incorporate preferential flow at the hillslope scale, and even fewer at the catchment scale. Certainly, one main difficulty for progress is the determination of an adequate parameterization for preferential flow at these spatial scales. This study applies a 3D physically based model (HydroGeoSphere) of a headwater region (6 ha) of the Weierbach catchment (Luxembourg). The base model was implemented without preferential flow and was limited in simulating fast catchment responses. We thus hypothesized that the discharge performance could be improved by utilizing a dual permeability approach as a representation of preferential flow. We used the information from bromide irrigation experiments performed on three 1-m2 plots to parameterize preferential flow. In a first step we ran 20,000 Monte Carlo simulations of these irrigation experiments in a 1-m2 column of the headwater catchment model, varying the dual permeability parameters (15 variable parameters). These simulations identified many equifinal, yet very different, parameter sets that reproduced the bromide depth profiles well. Therefore, in the next step we chose 52 parameter sets (the 40 best and 12 low-performing sets) for testing the effect of incorporating preferential flow in the headwater catchment scale model. The variability of the flow pattern responses at the headwater catchment scale was small between the different parameterizations and did not coincide with the variability at the plot scale. The simulated discharge time series of the different parameterizations clustered into six groups of similar response, ranging from nearly unaffected to completely changed responses compared to the base case model without dual permeability. 
Yet in none of the groups did the simulated discharge response clearly improve compared to the base case. The same held true for some observed soil moisture time series, although at the plot scale the incorporation of preferential flow was necessary to simulate the irrigation experiments correctly. These results rejected our hypothesis and open a discussion on how important plot-scale processes and heterogeneities are at the catchment scale. Our preliminary conclusion is that vertical preferential flow is important for the irrigation experiments at the plot scale, while discharge generation at the catchment scale is largely controlled by lateral preferential flow. The lateral component, however, was already considered in the base case model through different hydraulic conductivities in different soil layers. This can explain why the internal behavior of the model at single spots seems not to be relevant for the overall hydrometric catchment response. Nonetheless, the inclusion of vertical preferential flow improved the realism of the internal processes of the model (fitting profiles at the plot scale, unchanged response at the catchment scale) and should be considered depending on the intended use of the model. Furthermore, we cannot yet exclude that the quantitative discharge performance at the catchment scale could be improved by utilizing a dual permeability approach; this will be tested in a parameter optimization process.
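The Monte Carlo screening step described in this record (sample many parameter sets, keep those that reproduce an observed depth profile) can be sketched with a deliberately tiny toy model. Everything below, including the two-parameter profile function, the acceptance threshold, and the synthetic "observation", is an invented stand-in for the 15-parameter dual permeability setup, meant only to show how equifinal parameter sets arise:

```python
import numpy as np

rng = np.random.default_rng(42)

def depth_profile(k_fast, frac_fast, depths):
    """Toy tracer depth profile: mixture of a slow (matrix) and a fast
    (preferential) flow domain, normalized to unit tracer mass."""
    slow = np.exp(-depths / 0.2)         # shallow, matrix-dominated decay
    fast = np.exp(-depths / k_fast)      # deeper preferential-flow tail
    profile = (1 - frac_fast) * slow + frac_fast * fast
    return profile / profile.sum()

depths = np.linspace(0.05, 1.0, 20)              # depth grid, m
observed = depth_profile(0.6, 0.3, depths)       # synthetic "measurement"

# Monte Carlo: sample parameter sets, keep the behavioral (well-fitting) ones
samples = rng.uniform([0.1, 0.0], [1.0, 1.0], size=(5000, 2))
errors = np.array([np.abs(depth_profile(k, f, depths) - observed).sum()
                   for k, f in samples])
behavioral = samples[errors < 0.05]
print(f"{len(behavioral)} equifinal parameter sets out of 5000")
```

Many distinct (k_fast, frac_fast) pairs pass the threshold because the two parameters partly compensate for each other, which is the equifinality the study reports at a much larger scale.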
Paying for performance: Performance incentives increase desire for the reward object.
Hur, Julia D; Nordgren, Loran F
2016-09-01
The current research examines how exposure to performance incentives affects one's desire for the reward object. We hypothesized that the flexible nature of performance incentives creates an attentional fixation on the reward object (e.g., money), which leads people to become more desirous of the rewards. Results from 5 laboratory experiments and 1 large-scale field study provide support for this prediction. When performance was incentivized with monetary rewards, participants reported being more desirous of money (Study 1), put in more effort to earn additional money in an ensuing task (Study 2), and were less willing to donate money to charity (Study 4). We replicated the result with nonmonetary rewards (Study 5). We also found that performance incentives increased attention to the reward object during the task, which in part explains the observed effects (Study 6). A large-scale field study replicated these findings in a real-world setting (Study 7). One laboratory experiment failed to replicate (Study 3). (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Hydro-scaling of DT implosions on the National Ignition Facility
NASA Astrophysics Data System (ADS)
Patel, Pravesh; Spears, Brian; Clark, Dan
2017-10-01
Recent implosion experiments on the National Ignition Facility (NIF) exceed 50 kJ in fusion yield and exhibit yield amplifications of >2.5-3x due to alpha-particle self-heating of the hot-spot. Two methods to increase the yield are (i) to improve the implosion quality, or stagnation pressure, at fixed target scale (by increasing implosion velocity, reducing 3D effects, etc.), and (ii) to hydrodynamically scale the capsule and absorbed energy. In the latter case the stagnation pressure remains constant, but the yield (in the absence of alpha-heating) increases as Y ∝ S^4.5, where the capsule radius is increased by the factor S and the absorbed energy by S^3. With alpha-heating the increase with scale is considerably stronger. We present projections of the performance of current DT experiments, and extrapolations to ignition, based on applying hydro-scaling theory and accounting for the effect of alpha-heating. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
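The hydro-scaling relation lends itself to a quick numerical sketch. The sketch below uses the S^4.5 no-alpha exponent and the ~50 kJ baseline quoted in the abstract; the function name and the specific scale factors are illustrative assumptions, and no alpha-heating amplification is modeled:

```python
def hydro_scaled_yield(y0_kj, s, exponent=4.5):
    """No-alpha yield of a hydrodynamically scaled implosion.

    The capsule radius grows by a factor s, the absorbed energy by s**3,
    and the yield (without alpha-heating) by s**exponent (4.5 per the text).
    """
    return y0_kj * s ** exponent

baseline_kj = 50.0  # current NIF-scale fusion yield quoted in the abstract
for s in (1.0, 1.1, 1.2, 1.3):
    y = hydro_scaled_yield(baseline_kj, s)
    print(f"S={s:.1f}: energy x{s**3:.2f}, no-alpha yield ~{y:.0f} kJ")
```

Since the abstract notes that alpha-heating makes the growth with scale considerably stronger, these numbers are a lower bound on the projected benefit of scaling up.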
Bacterial Transport in Heterogeneous Porous Media: Laboratory and Field Experiments
NASA Astrophysics Data System (ADS)
Fuller, M. E.
2001-12-01
A fully instrumented research site for examining field-scale bacterial transport has been established on the eastern shore of Virginia. Studies employing intact sediment cores from the South Oyster site have been performed to examine the effects of physical and chemical heterogeneity, to derive transport parameters, and to aid in the selection of bacterial strains for use in field experiments. A variety of innovative methods for tracking bacteria were developed and evaluated under both laboratory and field conditions, providing the tools to detect target cell concentrations in groundwater down to <20 cells/ml, and to perform real-time monitoring in the field. Comprehensive modeling efforts have provided a framework for the layout and instrumentation of the field site, and have aided in the design and interpretation of field-scale bacterial transport experiments. Field transport experiments were conducted in both aerobic and anoxic flow cells to determine the effects of physical and chemical heterogeneity on field-scale bacterial transport. The results of this research not only contribute to the development of more effective bioremediation strategies, but also have implications for a better understanding of bacterial movement in the subsurface as it relates to public health microbiology and general microbial ecology.
Solar Cells in the School Physics Laboratory.
ERIC Educational Resources Information Center
Mikulski, Kazimeirz
1996-01-01
Discusses the goals of experiments which show examples of the use of solar energy on a scale suitable for a school laboratory. Highlights the history of discoveries and developments in photoelectricity. Presents investigations and experiments that can be performed by students. (JRH)
2012-01-01
Objective. To evaluate preceptors’ perception of their ability to perform the Structured Practical Experiences in Pharmacy (SPEP) learning objectives through a self-assessment activity. Methods. A self-assessment instrument consisting of 28 learning objectives associated with clinic, community, and hospital pharmacy practice experiences was developed. Preceptors rated their performance ability for each of the learning objectives using a 3-point Likert scale. Results. Of the 116 preceptors, 89 (77%) completed the self-assessment survey instrument. The overall preceptor responses to the items on performance of the 28 SPEP learning objectives ranged from good to excellent. Years of experience, practice experience setting, and involvement as a SPEP or SPEP and PharmD preceptor had no influence on their self-reported capabilities. Conclusion. Most preceptors rated their ability to perform the learning objectives for the structured practical experiences in pharmacy as high. Competency areas requiring further preceptor development were identified. PMID:23193333
1980-08-01
Laboratory and Pilot Scale Evaluation of Coagulation, Clarification, and Filtration for Upgrading Sewage Lagoon Effluents. M. John Cullinane, Jr., and Richard A. Shafer, Environmental Laboratory, U.S. Army Engineer Waterways Experiment Station, Vicksburg.
An economy of scale system's mensuration of large spacecraft
NASA Technical Reports Server (NTRS)
Deryder, L. J.
1981-01-01
The systems technology and cost particulars of using multipurpose platforms versus several sizes of bus-type free-flyer spacecraft to accomplish the same space experiment missions are examined. Computer models of these spacecraft bus designs were created to obtain data on size, weight, power, performance, and cost. To answer the question of whether or not large scale does produce economy, the dominant cost factors were determined and the programmatic effect on individual experiment costs was evaluated.
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Gutsche, O.
The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of LHC data taking. The "Scale Test for the Experiment Program" (STEP'09) was performed mainly in June 2009, with further selected tests in September and October 2009, and emphasized the simultaneous testing of the computing systems of all 4 LHC experiments. CMS tested its Tier-0 tape writing and processing capabilities. The Tier-1 tape systems were stress tested using the complete range of Tier-1 workflows: transfer from Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites, as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this talk, we will report on the different tests performed and present their post-mortem analysis.
Simplified Summative Temporal Bone Dissection Scale Demonstrates Equivalence to Existing Measures.
Pisa, Justyn; Gousseau, Michael; Mowat, Stephanie; Westerberg, Brian; Unger, Bert; Hochman, Jordan B
2018-01-01
Emphasis on patient safety has created the need for quality assessment of fundamental surgical skills. Existing temporal bone rating scales are laborious, subject to evaluator fatigue, and contain inconsistencies when conferring points. To address these deficiencies, a novel binary assessment tool was designed and validated against a well-established rating scale. Residents completed a mastoidectomy with posterior tympanotomy on identical 3D-printed temporal bone models. Four neurotologists evaluated each specimen using a validated scale (Welling) and a newly developed "CanadaWest" scale, with scoring repeated after a 4-week interval. Nineteen participants were clustered into junior, intermediate, and senior cohorts. An ANOVA found significant differences between the performance of the junior-intermediate and junior-senior cohorts for both the Welling and CanadaWest scales (P < .05). Neither scale found a significant difference between intermediate and senior resident performance (P > .05). Cohen's kappa found strong intrarater reliability (0.711) and a high degree of interrater reliability (0.858) for the CanadaWest scale, similar to the Welling scale's values of 0.713 and 0.917, respectively. The CanadaWest scale was facile and delineated performance by experience level with strong intrarater reliability. Comparable to the validated Welling scale, it distinguished junior from senior trainees but was challenged in differentiating intermediate and senior trainee performance.
NASA Technical Reports Server (NTRS)
Opila, Elizabeth
1995-01-01
Pure coupons of chemically vapor deposited (CVD) SiC were oxidized for 100 h in dry flowing oxygen at 1300 C. The oxidation kinetics were monitored using thermogravimetry (TGA). The experiments were first performed using high-purity alumina reaction tubes. The experiments were then repeated using fused quartz reaction tubes. Differences in oxidation kinetics, scale composition, and scale morphology were observed. These differences were attributed to impurities in the alumina tubes. Investigators interested in high-temperature oxidation of silica formers should be aware that high-purity alumina can have significant effects on experiment results.
FULL-SCALE VIBRATING PERVAPORATION MEMBRANE UNIT: VOC REMOVAL FROM WATER AND SURFACTANT SOLUTIONS
A commercial-scale vibrating membrane system was evaluated for the separation of volatile organic compounds (VOCs) from aqueous solutions by pervaporation. Experiments with surrogate solutions of up to five VOCs in the presence and absence of a surfactant were performed to compar...
Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere processes and macroscale models. One essential research question is how to account for small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in the paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second experiment analyzed the scaling behavior of hydrologic responses during rain events and between rain events. The third experiment compared the hydrologic responses from distributed models with those from a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small-scale variations can be represented statistically if the scale is larger than a representative elementary area scale, which appears to be about 2-3 times the correlation length of the process. For natural catchments this appears to be about 1-2 sq km. The results concerning distributed versus lumped representations are more complicated. When the processes are nonlinear, lumping results in biases; otherwise a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.
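The distributed-versus-lumped finding, that lumping biases nonlinear processes, is a direct consequence of Jensen's inequality and can be seen in a few lines. The cubic "runoff" response below is a hypothetical convex stand-in, not a model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def runoff(soil_moisture):
    """Hypothetical nonlinear (convex) runoff response to soil moisture."""
    return soil_moisture ** 3

# Heterogeneous field: soil moisture varies from grid cell to grid cell
moisture = rng.uniform(0.1, 0.9, 10_000)

distributed = runoff(moisture).mean()   # average of per-cell responses
lumped = runoff(moisture.mean())        # response to the spatial average

# Jensen's inequality: a convex response makes the lumped estimate biased low
print(f"distributed={distributed:.3f}  lumped={lumped:.3f}")
```

For a linear response the two estimates would coincide, which is why the abstract's 'equivalent' parameters work well only in the near-linear regime.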
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldenburg, C.M.
2011-06-01
The need for risk-driven field experiments for CO2 geologic storage processes to complement ongoing pilot-scale demonstrations is discussed. These risk-driven field experiments would be aimed at understanding the circumstances under which things can go wrong with a CO2 capture and storage (CCS) project and cause it to fail, as distinguished from accomplishing this end using demonstration- and industrial-scale sites. Such risk-driven tests would complement risk-assessment efforts that have already been carried out by providing opportunities to validate risk models. In addition to experimenting with high-risk scenarios, these controlled field experiments could help validate monitoring approaches to improve performance assessment and guide development of mitigation strategies.
Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea
2015-01-01
Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data; these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model k-eff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system k-eff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in k-eff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries.
One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a c_k value for each experiment with each application. Several studies have analyzed typical c_k values for a range of critical experiments compared with hypothetical irradiated fuel applications. The c_k value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in c_k values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
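The uncertainty propagation described in this abstract combines a sensitivity vector with a covariance matrix. A minimal sketch of that "sandwich rule" is given below; the sensitivity and covariance numbers are illustrative placeholders, not real nuclear data, and this is not the TSUNAMI implementation itself.

```python
import numpy as np

# Sandwich rule used by sensitivity/uncertainty codes: the relative variance
# of k-eff is S C S^T, where S holds the sensitivities of k-eff to each
# nuclide-reaction cross section (dk/k per dsigma/sigma) and C is the relative
# covariance matrix of those cross sections. All numbers are hypothetical.
S = np.array([[0.30, -0.05, 0.12]])          # sensitivity row vector
C = np.array([[4.0e-4, 1.0e-5, 0.0],
              [1.0e-5, 9.0e-4, 0.0],
              [0.0,    0.0,    2.5e-4]])     # relative covariance matrix

var_k = (S @ C @ S.T).item()                 # relative variance of k-eff
sigma_k = np.sqrt(var_k)                     # relative standard deviation
print(f"uncertainty in k-eff: {100 * sigma_k:.3f} %")
```

The same sensitivity vectors, paired across an experiment and an application, underlie the c_k similarity index mentioned above.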
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Kellas, Sotiris; Morton, John
1992-01-01
The feasibility of using scale model testing for predicting the full-scale behavior of flat composite coupons loaded in tension and beam-columns loaded in flexure is examined. Classical laws of similitude are applied to fabricate and test replica model specimens to identify scaling effects in the load response, strength, and mode of failure. Experiments were performed on graphite-epoxy composite specimens having different laminate stacking sequences and a range of scaled sizes. From the experiments it was deduced that the elastic response of scaled composite specimens was independent of size. However, a significant scale effect in strength was observed. In addition, a transition in failure mode was observed among scaled specimens of certain laminate stacking sequences. A Weibull statistical model and a fracture mechanics based model were applied to predict the strength scale effect since standard failure criteria cannot account for the influence of absolute specimen size on strength.
Flow topologies and turbulence scales in a jet-in-cross-flow
Oefelein, Joseph C.; Ruiz, Anthony M.; Lacaze, Guilhem
2015-04-03
This study presents a detailed analysis of the flow topologies and turbulence scales in the jet-in-cross-flow experiment of Su and Mungal (JFM, 2004). The analysis is performed using the Large Eddy Simulation (LES) technique with a highly resolved grid and time step and well-controlled boundary conditions. This enables quantitative agreement with the first and second moments of turbulence statistics measured in the experiment. LES is used to perform the analysis since experimental measurements of time-resolved 3D fields are still in their infancy and because sampling periods are generally limited with direct numerical simulation. A major focal point is the comprehensive characterization of the turbulence scales and their evolution. Time-resolved probes are used with long sampling periods to obtain maps of the integral scales, Taylor microscales, and turbulent kinetic energy spectra. Scalar-fluctuation scales are also quantified. In the near field, coherent structures are clearly identified, both in physical and spectral space. Along the jet centerline, turbulence scales grow according to a classical one-third power law. However, the derived maps of turbulence scales reveal strong inhomogeneities in the flow. From the modeling perspective, these insights are useful for designing optimized grids and improving numerical predictions in similar configurations.
On the relation between personality and job performance of airline pilots.
Hormann, H J; Maschke, P
1996-01-01
The validity of a personality questionnaire for predicting the job success of airline pilots is compared to the validities of a simulator checkflight and of flying-experience data. During selection, 274 pilots applying for employment with a European charter airline were examined with a multidimensional personality questionnaire (Temperament Structure Scales, TSS). Additionally, the applicants were graded in a simulator checkflight. On the basis of training records, the pilots were classified as performing at standard or below standard after about 3 years of employment in the hiring company. In a multiple-regression model, this dichotomous criterion for job success can be predicted with 73.8% accuracy from the simulator checkflight and flying experience prior to employment. Adding the personality questionnaire to the regression equation increases the number of correct classifications to 79.3%. On average, successful pilots score substantially higher on interpersonal scales and lower on emotional scales of the TSS.
Chemical disinfection of combined sewer overflow waters using performic acid or peracetic acid.
Chhetri, Ravi Kumar; Thornberg, Dines; Berner, Jesper; Gramstad, Robin; Öjstedt, Ulrik; Sharma, Anitha Kumari; Andersen, Henrik Rasmus
2014-08-15
We investigated the possibility of applying performic acid (PFA) and peracetic acid (PAA) for disinfection of combined sewer overflow (CSO) in existing CSO management infrastructure. The disinfection power of PFA and PAA towards Escherichia coli (E. coli) and Enterococcus was studied in batch-scale and pre-field experiments. In the batch-scale experiment, 2.5 mg L(-1) PAA removed approximately 4 log units of E. coli and Enterococcus from CSO with a 360 min contact time. The removal of E. coli and Enterococcus from CSO was always around or above 3 log units using 2-4 mg L(-1) PFA with a 20 min contact time, in both batch-scale and pre-field experiments. There was no toxicological effect measured by Vibrio fischeri when CSO was disinfected with PFA; a slight toxic effect was observed on CSO disinfected with PAA. When the design for PFA-based disinfection was applied to CSO collected from an authentic event, the disinfection efficiencies were confirmed and degradation rates were slightly higher than predicted in simulated CSO. Copyright © 2014 Elsevier B.V. All rights reserved.
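The "log unit" removals reported in this abstract are base-10 reductions in organism counts. A short sketch of the arithmetic, with hypothetical counts (the paper reports removals, not these raw numbers):

```python
import math

# A "4 log unit" reduction means the surviving count is 10^-4 of the initial
# count. Counts below are hypothetical, for illustration only.
def log_removal(n_initial: float, n_final: float) -> float:
    """Return the log10 reduction achieved by a disinfection step."""
    return math.log10(n_initial / n_final)

e_coli_before = 2.0e6   # CFU per 100 mL before dosing (hypothetical)
e_coli_after = 2.0e2    # CFU per 100 mL after contact time (hypothetical)
print(f"{log_removal(e_coli_before, e_coli_after):.1f} log units removed")
```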
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark comprising 100 challenging video sequences, the average precision of the proposed method is 82.8%.
The Amazon Boundary-Layer Experiment (ABLE 2B) - A meteorological perspective
NASA Technical Reports Server (NTRS)
Garstang, Michael; Greco, Steven; Scala, John; Swap, Robert; Ulanski, Stanley; Fitzjarrald, David; Martin, David; Browell, Edward; Shipman, Mark; Connors, Vickie
1990-01-01
The Amazon Boundary-Layer Experiments (ABLE) 2A and 2B, which were performed near Manaus, Brazil, in July-August 1985 and April-May 1987, are discussed. The experiments were performed to study the sources, sinks, concentrations, and transports of trace gases and aerosols in rain forest soils, wetlands, and vegetation. Consideration is given to the design and preliminary results of the experiments, focusing on the relationships between meteorological scales of motion and the flux, transport, and reactions of chemical species and aerosols embedded in the atmospheric fluid. Meteorological results are presented, and their role in the atmospheric chemistry experiment is examined.
42 CFR 425.502 - Calculating the ACO quality performance score.
Code of Federal Regulations, 2014 CFR
2014-10-01
... four domains: (i) Patient/care giver experience. (ii) Care coordination/Patient safety. (iii... year. (1) For the first performance year of an ACO's agreement, CMS defines the quality performance... a point scale for the measures. (2)(i) CMS will define the quality benchmarks using fee-for-service...
NASA Glenn Research Center Experience with LENR Phenomenon
NASA Technical Reports Server (NTRS)
Wrbanek, Susan Y.; Fralick, Gustave C.; Wrbanek, John D.; Niedra, Janis M.
2012-01-01
Since 1989, NASA Glenn Research Center (GRC) has performed some small-scale, limited experiments that show effects claimed by some to be evidence of Low Energy Nuclear Reactions (LENR). The research at GRC has involved observations and work on measurement techniques for observing the temperature effects in reactions of isotopes of hydrogen with palladium hydrides. The various experiments performed involved loading Pd with gaseous H2 and D2, and exposing Pd thin films to multi-bubble sonoluminescence in regular and deuterated water. An overview of these experiments and their results is presented.
Fluid dynamics structures in a fire environment observed in laboratory-scale experiments
J. Lozano; W. Tachajapong; D.R. Weise; S. Mahalingam; M. Princevac
2010-01-01
Particle Image Velocimetry (PIV) measurements were performed in laboratory-scale experimental fires spreading across horizontal fuel beds composed of aspen (Populus tremuloides Michx) excelsior. The continuous flame, intermittent flame, and thermal plume regions of a fire were investigated. Utilizing a PIV system, instantaneous velocity fields for...
A protocol was developed to rapidly assess the efficiency of chemical washing for the removal of excess biomass from biotrickling filters for waste air treatment. Although the experiment was performed on a small scale, conditions were chosen to simulate application in full-scale ...
Micromechanisms of deformation in shales
NASA Astrophysics Data System (ADS)
Bonnelye, A.; Gharbi, H.; Hallais, S.; Dimanov, A.; Bornert, M.; Picard, D.; Mezni, M.; Conil, N.
2017-12-01
One of the envisaged solutions for nuclear waste disposal is underground repository in shales. For this purpose, the Callovo-Oxfordian (Cox) argillaceous formation is extensively studied. The hydro-mechanical behavior of the argillaceous rock is complex, like the multiphase and multi-scale structured material itself. The argillaceous matrix is composed of interstratified illite-smectite particles; it contains detritic quartz and calcite and accessory pyrite, and the rock porosity ranges from micrometre to nanometre scales. Besides the bedding anisotropy, structural variabilities exist at all scales, from the decametric-metric scales of the geological formation to the respectively millimetric and micrometric scales of the aggregates of particles and the clay particles. Our study aims at understanding the complex mechanisms which are activated at the micro-scale and are involved in the macroscopic inelastic deformation of such a complex material. Two sets of experiments were performed, at two scales, on three bedding orientations (90°, 45° and 0°). The first set was dedicated to uniaxial deformation followed with an optical set-up with a pixel resolution of 0.55 µm. These experiments allowed us to see fracture propagation with different patterns depending on the bedding orientation. For the second set of experiments, an experimental protocol was developed in order to perform uniaxial deformation experiments at controlled displacement rate inside an environmental scanning electron microscope (ESEM), under controlled relative humidity, in order to preserve as much as possible the natural saturation state of the shales. We aimed at characterizing the mechanical anisotropy and the mechanisms involved in the deformation, with an image resolution below the micrometer. The observed sample surfaces were polished by broad ion beam in order to reveal the fine microstructures of the argillaceous matrix.
In both cases, digital images were acquired at different loading stages during the deformation process and Digital Image Correlation Technique (DIC) was applied in order to retrieve full strain fields at various scales from sample scale to microstructure scale. The analysis allows for identification of the active mechanisms, their relationships to the microstructure and their interactions.
NASA Astrophysics Data System (ADS)
Frolov, V. L.; Komrakov, G. P.; Glukhov, Ya. V.; Andreeva, E. S.; Kunitsyn, V. E.; Kurbatov, G. A.
2016-07-01
We consider the experimental results obtained by studying the large-scale structure of the HF-disturbed ionospheric region. The experiments were performed using the SURA heating facility. The disturbed ionospheric region was sounded by signals radiated by GPS navigation satellite beacons as well as by signals of low-orbit satellites (radio tomography). The results of the experiments show that large-scale plasma density perturbations induced at altitudes higher than the F2 layer maximum can contribute significantly to the measured variations of the total electron density and can, with a certain arrangement of the reception points, be measured by the GPS sounding method.
The Full Scale Seal Experiment - A Seal Industrial Prototype for Cigeo - 13106
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebon, P.; Bosgiraud, J.M.; Foin, R.
2013-07-01
The Full Scale Seal (FSS) Experiment is one of various experiments implemented by Andra, within the framework of the Cigeo (the French Deep Geological Repository) Project development, to demonstrate the technical construction feasibility and performance of seals to be constructed at the time of the progressive closure of Repository components (shafts, ramps, drifts, disposal vaults). FSS is built inside a drift model fabricated on the surface for the purpose. Prior to the scale 1:1 seal construction test, various design tasks are scheduled. They include the engineering work on the drift model to make it fit the experimental needs, on the various work sequences anticipated for the swelling clay core emplacement and the concrete containment plug construction, on the specialized handling tools (and installation equipment) manufactured and delivered for the purpose, and of course on the various swelling clay materials and low-pH (below 11) concrete formulations developed for the application. The engineering of the 'seal-as-built' commissioning means (tools and methodology) must also be dealt with. The FSS construction experiment is a technological demonstrator; it is thus not focused on phenomenological survey (and, by consequence, on performance and behaviour forecasting). As such, no hydration (forced or natural) is planned. However, the FSS implementation (in particular via the construction and commissioning activities carried out) is a key milestone in view of supporting phenomenological extrapolation in time and scale. The FSS experiment also allows for qualifying the commissioning methods of a real sealing system in the Repository, as built, at the time of industrial operations. (authors)
Quality Metrics Of Digitally Derived Imagery And Their Relation To Interpreter Performance
NASA Astrophysics Data System (ADS)
Burke, James J.; Snyder, Harry L.
1981-12-01
Two hundred fifty transparencies, displaying a new digital database consisting of 25 degraded versions (5 blur levels x 5 noise levels) of each of 10 digitized, first-generation positive transparencies, were used in two experiments involving 15 trained military photo-interpreters. Each image is 86 mm square and represents 4096 x 4096 8-bit pixels. In the "interpretation" experiment, each photo-interpreter (judge) spent approximately two days extracting Essential Elements of Information (EEIs) from one degraded version of each scene at a constant blur level (FWHM = 40, 84 or 322 μm). In the scaling experiment, each judge assigned a numerical value to each of the 250 images, to the nearest 0.1 unit, according to its perceived position on a 10-point NATO-standardized scale (0 = useless through 9 = nearly perfect). Eighty-eight of the 100 possible values were used by the judges, indicating that 62 categories are needed to scale these hardcopy images. The overall correlation between the scaling and interpretation results was 0.9. Though the main effect of blur was not significant (p = 0.146) in the interpretation experiment, that of noise was significant (p = 0.005), and all main factors (blur, noise, scene, order of battle) and most interactions were statistically significant in the scaling experiment.
Extrasensory Perception Experiences and Childhood Trauma: A Rorschach Investigation.
Scimeca, Giuseppe; Bruno, Antonio; Pandolfo, Gianluca; La Ciura, Giulia; Zoccali, Rocco A; Muscatello, Maria R A
2015-11-01
This study investigated whether people who report recurrent extrasensory perception (ESP) experiences (telepathy, clairvoyance, and precognition) have suffered more traumatic experiences and traumatic intrusions. Thirty-one nonclinical participants reporting recurrent ESP experiences were compared with a nonclinical sample of 31 individuals who did not report recurrent ESP phenomena. Past traumatic experiences were assessed via a self-report measure of trauma history (Childhood Trauma Questionnaire); traumatic intrusions were assessed via a performance-based personality measure (Rorschach Traumatic Content Index). Participants also completed the Anomalous Experience Inventory, the Minnesota Multiphasic Personality Inventory-2, the Dissociative Experience Scale, and the Revised Paranormal Belief Scale. The ESP group reported higher levels of emotional abuse, sexual abuse, emotional neglect, physical neglect, and traumatic intrusions. The association between ESP experiences and trauma was partly mediated by the effects of dissociation and emotional distress. Implications for health professionals are discussed. Results also showed the reliability of the twofold method of assessment of trauma.
A full-scale STOVL ejector experiment
NASA Technical Reports Server (NTRS)
Barankiewicz, Wendy S.
1993-01-01
The design and development of thrust augmenting short take-off and vertical landing (STOVL) ejectors has typically been an iterative process. In this investigation, static performance tests of a full-scale vertical lift ejector were performed at primary flow temperatures up to 1560 R (1100 F). Flow visualization (smoke generators, yarn tufts and paint dots) was used to assess inlet flowfield characteristics, especially around the primary nozzle and end plates. Performance calculations are presented for ambient temperatures close to 480 R (20 F) and 535 R (75 F) which simulate 'seasonal' aircraft operating conditions. Resulting thrust augmentation ratios are presented as functions of nozzle pressure ratio and temperature. Full-scale experimental tests such as this are expensive, and difficult to implement at engine exhaust temperatures. For this reason the utility of using similarity principles -- in particular, the Munk and Prim similarity principle for isentropic flow -- was explored. At different primary temperatures, exit pressure contours are compared for similarity. A nondimensional flow parameter is then shown to eliminate primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments. Under the assumption that an appropriate similarity principle can be established, then properly chosen performance parameters should be similar for both hot flow and cold flow model tests.
Scaling, soil moisture and evapotranspiration in runoff models
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (such as HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere processes and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a second-order linearization scheme. The performance of the algorithm is evaluated.
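The lumping bias discussed in these scaling studies is, at heart, Jensen's inequality: for a nonlinear response, the mean of the local responses differs from the response to the mean state. A toy Monte Carlo illustration is sketched below; the saturating evaporation-type function and the moisture range are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy demonstration: mean(f(x)) != f(mean(x)) for nonlinear f.
# evap() is a hypothetical supply-limited/energy-limited response,
# and soil_moisture a hypothetical heterogeneous field.
rng = np.random.default_rng(0)
soil_moisture = rng.uniform(0.05, 0.45, size=100_000)

def evap(s):
    # supply-limited below s = 0.25, energy-limited (capped at 1.0) above
    return np.minimum(1.0, 4.0 * s)

distributed = evap(soil_moisture).mean()   # average the local responses
lumped = evap(soil_moisture.mean())        # respond to the average state

print(f"distributed = {distributed:.3f}, lumped = {lumped:.3f}")
# The lumped value exceeds the distributed one: averaging the state first
# hides the dry, supply-limited fraction of the field.
```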
Fabrication and performance analysis of 4-sq cm indium tin oxide/InP photovoltaic solar cells
NASA Technical Reports Server (NTRS)
Gessert, T. A.; Li, X.; Phelps, P. W.; Coutts, T. J.; Tzafaras, N.
1991-01-01
Large-area photovoltaic solar cells based on direct-current magnetron sputter deposition of indium tin oxide (ITO) onto single-crystal p-InP substrates have demonstrated both the radiation hardness and the high performance necessary for extraterrestrial applications. A small-scale production project was initiated in which approximately 50 ITO/InP cells are being produced. The procedures used in this small-scale production of 4-sq cm ITO/InP cells are presented and discussed. The discussion includes analyses of the performance range of all available production cells and device performance data for the best cells produced thus far. Additionally, processing experience gained from the production of these cells is discussed, indicating other issues that may be encountered when large-scale production begins.
PIC Simulations of Hypersonic Plasma Instabilities
NASA Astrophysics Data System (ADS)
Niehoff, D.; Ashour-Abdalla, M.; Niemann, C.; Decyk, V.; Schriver, D.; Clark, E.
2013-12-01
The plasma sheaths formed around hypersonic aircraft (Mach number, M > 10) are relatively unexplored and of interest today to both further the development of new technologies and solve long-standing engineering problems. Both laboratory experiments and analytical/numerical modeling are required to advance the understanding of these systems; it is advantageous to perform these tasks in tandem. There has already been some work done to study these plasmas by experiments that create a rapidly expanding plasma through ablation of a target with a laser. In combination with a preformed magnetic field, this configuration leads to a magnetic "bubble" formed behind the front as particles travel at about Mach 30 away from the target. Furthermore, the experiment was able to show the generation of fast electrons which could be due to instabilities on electron scales. To explore this, future experiments will have more accurate diagnostics capable of observing time- and length-scales below typical ion scales, but simulations are a useful tool to explore these plasma conditions theoretically. Particle in Cell (PIC) simulations are necessary when phenomena are expected to be observed at these scales, and also have the advantage of being fully kinetic with no fluid approximations. However, if the scales of the problem are not significantly below the ion scales, then the initialization of the PIC simulation must be very carefully engineered to avoid unnecessary computation and to select the minimum window where structures of interest can be studied. One method of doing this is to seed the simulation with either experiment or ion-scale simulation results. Previous experiments suggest that a useful configuration for studying hypersonic plasma configurations is a ring of particles rapidly expanding transverse to an external magnetic field, which has been simulated on the ion scale with an ion-hybrid code. 
This suggests that the PIC simulation should have an equivalent configuration; however, modeling a plasma expanding radially in every direction is computationally expensive. In order to reduce the computational expense, we use a radial density profile from the hybrid simulation results to seed a self-consistent PIC simulation in one direction (x), while creating a current in the direction (y) transverse to both the drift velocity and the magnetic field (z) to create the magnetic bubble observed in experiment. The simulation will be run in two spatial dimensions but retain three velocity dimensions, and the results will be used to explore the growth of micro-instabilities present in hypersonic plasmas in the high-density region as it moves through the simulation box. This will still require a significantly large box in order to compare with experiment, as the experiments are being performed over distances of 10^4 λ_De and durations of 10^5 ω_pe^-1.
Meteor Crater: Energy of formation - Implications of centrifuge scaling
NASA Technical Reports Server (NTRS)
Schmidt, R. M.
1980-01-01
Recent work on explosive cratering has demonstrated the utility of performing subscale experiments on a geotechnical centrifuge to develop scaling rules for very large energy events. The present investigation is concerned with extending this technique to impact cratering. Experiments have been performed using a projectile gun mounted directly on the centrifuge rotor to launch projectiles into a suitable soil container undergoing centripetal accelerations in excess of 500 G. The pump tube of a two-stage light-gas gun was used to attain impact velocities of approximately 2 km/sec. The results of the experiments indicate that the energy of formation of any large impact crater depends upon the impact velocity. This dependence, shown for the case of Meteor Crater, is consistent with analogous results for the specific energy dependence of explosives and is expected to persist to impact velocities in excess of 25 km/sec.
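The centrifuge similitude underlying this kind of test can be sketched in one line: gravity-dominated cratering preserves the gravity-scaled size, so a model of linear scale 1/N tested at N gravities corresponds to a full-scale event at 1 g. The numbers below are illustrative, not from the Meteor Crater analysis.

```python
# Centrifuge similitude sketch: a model at g_level gravities simulates a
# 1-g prototype whose linear dimensions are g_level times larger.
# Hypothetical example values only.
def prototype_length(model_length_m: float, g_level: float) -> float:
    """Full-scale (1 g) length simulated by a model tested at g_level gravities."""
    return model_length_m * g_level

crater_model = 0.2   # 20 cm crater measured on the centrifuge (hypothetical)
g_level = 500.0      # centripetal acceleration, in gravities
print(f"simulated crater diameter: {prototype_length(crater_model, g_level):.0f} m")
```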
Two-probe STM experiments at the atomic level.
Kolmer, Marek; Olszowski, Piotr; Zuzak, Rafal; Godlewski, Szymon; Joachim, Christian; Szymonski, Marek
2017-11-08
Direct characterization of planar atomic- or molecular-scale devices and circuits on a supporting surface by multi-probe measurements requires unprecedented stability of single-atom contacts and manipulation of scanning probes over large, nanometer-scale areas with atomic precision. In this work, we describe the full methodology behind atomically defined two-probe scanning tunneling microscopy (STM) experiments performed on a model system: a dangling-bond dimer wire supported on a hydrogenated germanium (0 0 1) surface. We show that a 70 nm long atomic wire can be simultaneously approached by two independent STM scanners with an exact probe-to-probe distance reaching down to 30 nm. This allows direct wire characterization by two-probe I-V characteristics at distances below 50 nm. The technical results presented in this work open a new area for multi-probe research, which can now be performed with precision so far accessible only by single-probe scanning probe microscopy (SPM) experiments.
NASA Astrophysics Data System (ADS)
Herrington, A. R.; Reed, K. A.
2018-02-01
A set of idealized experiments is developed using the Community Atmosphere Model (CAM) to understand the vertical velocity response to reductions in forcing scale that are known to occur when the horizontal resolution of the model is increased. The test consists of a set of rising-bubble experiments in which the horizontal radius of the bubble and the model grid spacing are simultaneously reduced. The test is performed with moisture, by incorporating moist physics routines of varying complexity, although convection schemes are not considered. Results confirm that the vertical velocity in CAM is, to first order, proportional to the inverse of the horizontal forcing scale, which is consistent with a scale analysis of the dry equations of motion. In contrast, experiments in which the coupling time step between the moist physics routines and the dynamical core (i.e., the "physics" time step) is relaxed back to more conventional values result in severely damped vertical motion at high resolution, degrading the scaling. A set of aqua-planet simulations using different physics time steps is found to be consistent with the results of the idealized experiments.
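The first-order result above, vertical velocity proportional to the inverse of the horizontal forcing scale (w ~ 1/L), can be sketched as a quick consistency check; the reference velocity and radii are hypothetical, chosen only to show the halving sequence of the bubble tests.

```python
# w ~ 1/L scaling: halving the bubble radius doubles the expected
# vertical velocity. Hypothetical reference values for illustration.
radii_km = [100.0, 50.0, 25.0]       # forcing radius halved at each step
w_ref, L_ref = 1.0, radii_km[0]      # reference vertical velocity (m/s)

for L in radii_km:
    w = w_ref * (L_ref / L)          # velocity implied by the 1/L scaling
    print(f"radius {L:6.1f} km -> w = {w:.1f} m/s")
```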
A novel computational approach towards the certification of large-scale boson sampling
NASA Astrophysics Data System (ADS)
Huh, Joonsuk
Recent proposals of boson sampling and the corresponding experiments exhibit a possible disproof of the extended Church-Turing Thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier components can show the fingerprint of large-scale boson sampling. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and a Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.
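The classical hardness motivating such certification protocols comes from the matrix permanent, which governs boson-sampling output probabilities. A brute-force permanent for tiny matrices is sketched below purely for illustration; its O(n!) cost is exactly why certification at 20-30 photons must rely on indirect signatures (mode correlations, characteristic functionals) rather than on computing these probabilities directly.

```python
import itertools
import math

# Brute-force matrix permanent: like the determinant but with all signs
# positive. Feasible only for very small n; shown for illustration.
def permanent(m):
    n = len(m)
    return sum(
        math.prod(m[i][p[i]] for i in range(n))
        for p in itertools.permutations(range(n))
    )

# The permanent of the n x n all-ones matrix is n! (here 3! = 6).
print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))
```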
NASA Astrophysics Data System (ADS)
Pak, A.; Dewald, E. L.; Landen, O. L.; Milovich, J.; Strozzi, D. J.; Berzak Hopkins, L. F.; Bradley, D. K.; Divol, L.; Ho, D. D.; MacKinnon, A. J.; Meezan, N. B.; Michel, P.; Moody, J. D.; Moore, A. S.; Schneider, M. B.; Town, R. P. J.; Hsing, W. W.; Edwards, M. J.
2015-12-01
Temporally resolved measurements of the hohlraum radiation flux asymmetry incident onto a bismuth coated surrogate capsule have been made over the first two nanoseconds of ignition relevant laser pulses. Specifically, we study the P2 asymmetry of the incoming flux as a function of cone fraction, defined as the inner-to-total laser beam power ratio, for a variety of hohlraums with different scales and gas fills. This work was performed to understand the relevance of recent experiments, conducted in new reduced-scale neopentane gas filled hohlraums, to full scale helium filled ignition targets. Experimental measurements, matched by 3D view factor calculations, are used to infer differences in symmetry, relative beam absorption, and cross beam energy transfer (CBET), employing an analytic model. Despite differences in hohlraum dimensions and gas fill, as well as in laser beam pointing and power, we find that laser absorption, CBET, and the cone fraction, at which a symmetric flux is achieved, are similar to within 25% between experiments conducted in the reduced and full scale hohlraums. This work demonstrates a close surrogacy in the dynamics during the first shock between reduced-scale and full scale implosion experiments and is an important step in enabling the increased rate of study for physics associated with inertial confinement fusion.
Experiments in autonomous robotics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamel, W.R.
1987-01-01
The Center for Engineering Systems Advanced Research (CESAR) is performing basic research in autonomous robotics for energy-related applications in hazardous environments. The CESAR research agenda includes a strong experimental component to assure practical evaluation of new concepts and theories. An evolutionary sequence of mobile research robots has been planned to support research in robot navigation, world sensing, and object manipulation. A number of experiments have been performed in studying robot navigation and path planning with planar sonar sensing. Future experiments will address more complex tasks involving three-dimensional sensing, dexterous manipulation, and human-scale operations.
Adventure Behavior Seeking Scale
Próchniak, Piotr
2017-01-01
This article presents a new tool—the Adventure Behavior Seeking Scale (ABSS). The Adventure Behavior Seeking Scale was developed to assess individuals’ highly stimulating behaviors in natural environments. An exploratory factor analysis was conducted with 466 participants and resulted in one factor. The internal consistency was 0.80. A confirmatory factor analysis was performed using another sample of 406 participants, and results verified the one-factor structure. The findings indicate that people with a lot of experience in outdoor adventure have a higher score on the ABSS scale than control groups without such experience. The results also suggest that the 8-item ABSS scores were highly related to sensation seeking. The author discusses findings in regard to the ABSS as an instrument to measure outdoor adventure. However, further studies need to be carried out in other sample groups to further validate the scale. PMID:28555018
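The internal consistency of 0.80 reported above is Cronbach's alpha. As an aside, a minimal stdlib-only sketch of that computation (the item scores below are hypothetical illustrations, not ABSS data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of per-item score lists,
    with one entry per respondent in each list."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item, 4-respondent example
print(round(cronbach_alpha([[3, 4, 4, 5], [2, 4, 5, 5], [3, 3, 4, 5]]), 2))  # 0.9
```

Perfectly correlated items give alpha = 1; uncorrelated items drive it toward 0.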
Choice with frequently changing food rates and food ratios.
Baum, William M; Davison, Michael
2014-03-01
In studies of operant choice, when one schedule of a concurrent pair is varied while the other is held constant, the constancy of the constant schedule may exert discriminative control over performance. In our earlier experiments, schedules varied reciprocally across components within sessions, so that while food ratio varied food rate remained constant. In the present experiment, we held one variable-interval (VI) schedule constant while varying the concurrent VI schedule within sessions. We studied five conditions, each with a different constant left VI schedule. On the right key, seven different VI schedules were presented in seven different unsignaled components. We analyzed performances at several different time scales. At the longest time scale, across conditions, behavior ratios varied with food ratios as would be expected from the generalized matching law. At shorter time scales, effects due to holding the left VI constant became more and more apparent, the shorter the time scale. In choice relations across components, preference for the left key leveled off as the right key became leaner. Interfood choice approximated strict matching for the varied right key, whereas interfood choice hardly varied at all for the constant left key. At the shortest time scale, visit patterns differed for the left and right keys. Much evidence indicated the development of a fix-and-sample pattern. In sum, the procedural difference made a large difference to performance, except for choice at the longest time scale and the fix-and-sample pattern at the shortest time scale. © Society for the Experimental Analysis of Behavior.
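The generalized matching law invoked above relates log behavior ratios linearly to log food (reinforcer) ratios, log(B1/B2) = s·log(r1/r2) + log b, with sensitivity s and bias b. A minimal sketch of fitting those parameters (the data below are synthetic, not from this experiment):

```python
import math

def matching_fit(behavior_ratios, food_ratios):
    """Fit log(B1/B2) = s*log(r1/r2) + log(b); return (sensitivity s, bias b)."""
    xs = [math.log(r) for r in food_ratios]
    ys = [math.log(B) for B in behavior_ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    s = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    log_b = my - s * mx
    return s, math.exp(log_b)

# Synthetic data obeying strict matching (s = 1) with no bias (b = 1)
food = [0.25, 0.5, 1.0, 2.0, 4.0]
behavior = [r ** 1.0 for r in food]
s, b = matching_fit(behavior, food)
print(round(s, 3), round(b, 3))  # 1.0 1.0
```

"Strict matching" in the abstract corresponds to a fitted sensitivity of 1.0; undermatching appears as s < 1.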
Crater size estimates for large-body terrestrial impact
NASA Technical Reports Server (NTRS)
Schmidt, Robert M.; Housen, Kevin R.
1988-01-01
Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980; Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws that were among the first to include the gravity dependence associated with increasing event size. At that time, using the results of experiments in dry sand and in water to bound crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments conducted in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy scaling exponent is not a universal constant but depends upon the target media. More recently, Holsapple and Schmidt (1987) included results for non-porous materials, providing a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest is presented. These include results for various target media and the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.
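The simple power-law form described above can be sketched as follows. All constants here (prefactor and exponents) are illustrative placeholders, not the fitted Schmidt-Holsapple values, which depend on the target medium:

```python
def crater_diameter(d_imp, v, g=9.81, k=1.3, alpha=0.78, beta=0.44, gamma=-0.22):
    """Illustrative gravity-regime power law:
    D = k * d_imp**alpha * v**beta * g**gamma
    (d_imp: impactor diameter, v: impact speed, g: surface gravity).
    The constants are hypothetical, for demonstrating the functional form only."""
    return k * d_imp ** alpha * v ** beta * g ** gamma

# A power law implies simple ratio scaling: doubling the impactor diameter
# scales the crater by 2**alpha regardless of the prefactor k.
ratio = crater_diameter(2.0, 20000.0) / crater_diameter(1.0, 20000.0)
print(round(ratio, 4))  # 2**0.78 ~= 1.7171
```

The gravity-dependence term is what distinguishes these laws from pure energy scaling, and the abstract's point is precisely that the exponents are medium-dependent rather than universal.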
Dykema, John A; Keith, David W; Anderson, James G; Weisenstein, Debra
2014-12-28
Although solar radiation management (SRM) through stratospheric aerosol methods has the potential to mitigate impacts of climate change, our current knowledge of stratospheric processes suggests that these methods may entail significant risks. In addition to the risks associated with current knowledge, the possibility of 'unknown unknowns' exists that could significantly alter the risk assessment relative to our current understanding. While laboratory experimentation can improve the current state of knowledge and atmospheric models can assess large-scale climate response, they cannot capture possible unknown chemistry or represent the full range of interactive atmospheric chemical physics. Small-scale, in situ experimentation under well-regulated circumstances can begin to remove some of these uncertainties. This experiment, provisionally titled the stratospheric controlled perturbation experiment, is under development and will only proceed with transparent and predominantly governmental funding and independent risk assessment. We describe the scientific and technical foundation for performing, under external oversight, small-scale experiments to quantify the risks posed by SRM to activation of halogen species and subsequent erosion of stratospheric ozone. The paper's scope includes selection of the measurement platform, relevant aspects of stratospheric meteorology, operational considerations and instrument design and engineering.
Performance Assessment of a Large Scale Pulsejet- Driven Ejector System
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.
2006-01-01
Unsteady thrust augmentation was measured on a large-scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100-lbf pulsejet was tested with a series of straight, cylindrical ejectors of varying length and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the ejector dimensions that maximize thrust augmentation, and to compare the dimensions and augmentation levels so obtained with those of other, similarly maximized, but smaller-scale systems on which much of the recent unsteady ejector thrust augmentation research has been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than, the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet; this ratio closely matches those of the small-scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet, also nearly the same as in the small-scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for general scaling of pulsed-thrust ejector systems.
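Using the maximizing ratios reported above (ejector diameter 2.46 times, and straight-ejector length 10 times, the pulsejet diameter), a trivial sizing helper; treat this as a restatement of these specific experimental results, not a general design rule:

```python
def ejector_dims(pulsejet_diameter):
    """Ejector dimensions that maximized augmentation in this test series:
    D_ejector = 2.46 * D_pulsejet, L_ejector = 10 * D_pulsejet (straight ejector).
    Units follow whatever unit pulsejet_diameter is given in."""
    return 2.46 * pulsejet_diameter, 10.0 * pulsejet_diameter

# Applied to the 6.5 in. diameter pulsejet actually tested:
d_ej, l_ej = ejector_dims(6.5)
print(round(d_ej, 2), round(l_ej, 1))  # 15.99 65.0
```

The point of the abstract is that these nondimensional ratios carried over from small-scale rigs to this large-scale system nearly unchanged.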
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Zhijie; Lai, Canhai; Marcy, Peter William
2017-05-01
A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.
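The closing statistic above (a minimum flow rate achieving 90% capture with 95% confidence) can be illustrated with a toy Monte Carlo sketch. The efficiency model, parameter range, grid step, and units here are hypothetical placeholders, not the paper's calibrated multiphase model:

```python
import math
import random

def min_flow_for_capture(target_eff=0.90, confidence=0.95, n_samples=2000, seed=0):
    """Toy UQ sketch: with an uncertain rate parameter k ~ U(0.8, 1.2) and a
    hypothetical monotone efficiency model eff(q, k) = 1 - exp(-k * q),
    grid-search the smallest flow q at which at least `confidence` of the
    sampled parameter values reach `target_eff`."""
    rng = random.Random(seed)
    ks = [rng.uniform(0.8, 1.2) for _ in range(n_samples)]
    q = 0.0
    while True:
        q += 0.01
        ok = sum(1 for k in ks if 1 - math.exp(-k * q) >= target_eff)
        if ok / n_samples >= confidence:
            return q

q = min_flow_for_capture()
print(q)
```

The structure, propagating a calibrated parameter distribution through a forward model and reading off a quantile, is the generic pattern; the actual study uses an ensemble of multiphase reactive flow simulations in place of the closed-form model.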
Development and Validation of a Mathematics Anxiety Scale for Students
ERIC Educational Resources Information Center
Ko, Ho Kyoung; Yi, Hyun Sook
2011-01-01
This study developed and validated a Mathematics Anxiety Scale for Students (MASS) that can be used to measure the level of mathematics anxiety that students experience in school settings and help them overcome anxiety and perform better in mathematics achievement. We conducted a series of preliminary analyses and panel reviews to evaluate quality…
A PILOT-SCALE STUDY ON THE COMBUSTION OF WASTE ...
Symposium Paper. Post-consumer carpet is a potential substitute fuel for high-temperature thermal processes such as cement kilns and boilers. This paper reports on results examining emissions of PCDDs/Fs from a series of pilot-scale experiments performed on the EPA's rotary kiln incinerator simulator facility in Research Triangle Park, NC.
MULTICOMPONENT AEROSOL DYNAMICS OF THE PB-O2 SYSTEM IN A BENCH SCALE FLAME INCINERATOR
A study was carried out to understand the formation and growth of lead particles in a flame incinerator. A bench scale flame incinerator was used to perform controlled experiments with lead acetate as a test compound. A dilution probe in conjunction with real-time aerosol instrum...
Bostelmann, Friederike; Hammer, Hans R.; Ortensi, Javier; ...
2015-12-30
Within the framework of the IAEA Coordinated Research Project on HTGR Uncertainty Analysis in Modeling, criticality calculations of the Very High Temperature Critical Assembly experiment were performed as the validation reference to the prismatic MHTGR-350 lattice calculations. Criticality measurements performed at several temperature points at this Japanese graphite-moderated facility were recently included in the International Handbook of Evaluated Reactor Physics Benchmark Experiments, and represent one of the few data sets available for the validation of HTGR lattice physics. This work compares VHTRC criticality simulations utilizing the Monte Carlo codes Serpent and SCALE/KENO-VI. Reasonable agreement was found between Serpent and KENO-VI, but only the use of the latest ENDF cross section library release, namely the ENDF/B-VII.1 library, led to an improved match with the measured data. Furthermore, the fourth beta release of SCALE 6.2/KENO-VI showed significant improvements over the current SCALE 6.1.2 version when compared to the experimental values and Serpent.
Scaling Relations for Intercalation Induced Damage in Electrodes
Chen, Chien-Fan; Barai, Pallab; Smith, Kandler; ...
2016-04-02
Mechanical degradation, owing to intercalation-induced stress and microcrack formation, is a key contributor to electrode performance decay in lithium-ion batteries (LIBs). The stress generation and formation of microcracks are caused by the solid-state diffusion of lithium in the active particles. In this work, scaling relations are constructed for diffusion-induced damage in intercalation electrodes based on an extensive set of numerical experiments with a particle-level description of microcrack formation under disparate operating and cycling conditions, such as temperature, particle size, C-rate, and drive cycle. The microcrack formation and evolution in active particles is simulated based on a stochastic methodology. A reduced-order scaling law is constructed based on an extensive set of data from the numerical experiments. The scaling relations include combinatorial constructs of concentration gradient, cumulative strain energy, and microcrack formation. Lastly, the reduced-order relations are further employed to study the influence of mechanical degradation on cell performance and are validated against the high-order model for the case of damage evolution during variable-current vehicle drive cycle profiles.
Impact of scaling on the nitric-glycolic acid flowsheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, D.
Savannah River Remediation (SRR) is considering using glycolic acid as a replacement for formic acid in Sludge Receipt and Adjustment Tank (SRAT) processing in the Defense Waste Processing Facility (DWPF). Catalytic decomposition of formic acid is responsible for the generation of hydrogen, a potentially flammable gas, during processing. To prevent the formation of a flammable mixture in the offgas, an air purge is used to dilute the hydrogen concentration below 60% of the Composite Lower Flammability Limit (CLFL). The offgas is continuously monitored for hydrogen using Gas Chromatographs (GCs). Since formic acid is much more volatile and toxic than glycolic acid, a formic acid spill would lead to the release of much larger quantities to the environment. Switching from formic acid to glycolic acid is expected to eliminate the hydrogen flammability hazard, leading to lower air purges, thus downgrading of Safety Significant GCs to Process Support GCs, and minimizing the consequence of a glycolic acid tank leak in DWPF. Overall this leads to a reduction in process operation costs and an increase in safety margin. Experiments were completed at three different scales to demonstrate that the nitric-glycolic acid flowsheet scales from the 4-L lab scale to the 22-L bench scale and 220-L engineering scale. Ten process demonstrations of the sludge-only flowsheet for SRAT and Slurry Mix Evaporator (SME) cycles were performed using Sludge Batch 8 (SB8)-Tank 40 simulant. No Actinide Removal Process (ARP) product or strip effluent was added during the runs. Six experiments were completed at the 4-L scale, two experiments were completed at the 22-L scale, and two experiments were completed at the 220-L scale. Experiments completed at the 4-L scale (100 and 110% acid stoichiometry) were repeated at the 22-L and 220-L scale for scale comparisons.
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Boone, Marijn A.; Boone, Matthieu N.; De Schryver, Thomas; Masschaele, Bert; Van Hoorebeke, Luc; Cnudde, Veerle
2016-09-01
Over the past decade, the widespread implementation of laboratory-based X-ray micro-computed tomography (micro-CT) scanners has revolutionized both the experimental and numerical research on pore-scale transport in geological materials. The availability of these scanners has opened up the possibility to image a rock's pore space in 3D almost routinely to many researchers. While challenges do persist in this field, we treat the next frontier in laboratory-based micro-CT scanning: in-situ, time-resolved imaging of dynamic processes. Extremely fast (even sub-second) micro-CT imaging has become possible at synchrotron facilities over the last few years; however, the restricted accessibility of synchrotrons limits the number of experiments which can be performed. The much smaller X-ray flux in laboratory-based systems bounds the time resolution which can be attained at these facilities. Nevertheless, progress is being made to improve the quality of measurements performed on the sub-minute time scale. We illustrate this by presenting cutting-edge pore-scale experiments visualizing two-phase flow and solute transport in real-time with a lab-based environmental micro-CT set-up. To outline the current state of this young field and its relevance to pore-scale transport research, we critically examine its current bottlenecks and their possible solutions, both on the hardware and the software level. Further developments in laboratory-based, time-resolved imaging could prove greatly beneficial to our understanding of transport behavior in geological materials and to the improvement of pore-scale modeling by providing valuable validation.
DOT National Transportation Integrated Search
2009-03-01
The thirteenth full-scale Accelerated Pavement Test (APT) experiment at the Civil Infrastructure Laboratory (CISL) of Kansas State University aimed to determine the response and the failure mode of thin concrete overlays. Four pavement structures...
Predictors of Computer Anxiety and Performance in Information Systems.
ERIC Educational Resources Information Center
Anderson, Alastair A.
1996-01-01
Reports on the results of a study of business undergraduates in Australia that was conducted to determine whether or not perceived knowledge of software, microcomputer experience, overall knowledge of computers, programming experience, and gender were predictors of computer anxiety. Use of the Computer Anxiety Rating Scale is discussed.…
The Disposable Syringe: More Experiments and Uses
ERIC Educational Resources Information Center
Farmer, Andrew
1973-01-01
Describes a variety of experiments that can be performed using the disposable syringe. Among others, these include the removal of oxygen during rusting, convection in a liquid and in air, gas collection in an electrolysis cell, small scale production of a fog, and hydrogen/oxygen extraction from a voltameter. (JR)
NASA Astrophysics Data System (ADS)
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines, with diameters of O(1 m), using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
Wongpakaran, Tinakon; Wongpakaran, Nahathai
2012-01-01
This study seeks to investigate the psychometric properties of the short version of the revised 'Experience of Close Relationships' questionnaire, comparing non-clinical and clinical samples. In total 702 subjects participated in this study, of whom 531 were non-clinical participants and 171 were psychiatric patients. They completed the short version of the revised 'Experience of Close Relationships' questionnaire (ECR-R-18), the Perceived Stress Scale-10 (PSS-10), the Rosenberg Self-Esteem Scale (RSES) and the UCLA Loneliness Scale. A retest of the ECR-R-18 was then performed at a four-week interval. Confirmatory factor analyses were performed to test the validity of the new scale. The ECR-R-18 showed fair to good internal consistency (α 0.77 to 0.87) for both samples, and the test-retest reliability was found to be satisfactory (ICC = 0.75). The anxiety sub-scale demonstrated concurrent validity with the PSS-10 and RSES, while the avoidance sub-scale showed concurrent validity with the UCLA Loneliness Scale. Confirmatory factor analysis using method factors yielded two factors with an acceptable model fit for both groups. An invariance test revealed that the ECR-R-18 performed differently in the clinical group than in the non-clinical group. The ECR-R-18 questionnaire revealed an overall better level of fit than the original 36-item questionnaire, indicating its suitability for use with a broader group of samples, including clinical samples. The reliability of the ECR-R-18 might be increased if a modified scoring system is used and if our suggestions with regard to future studies are followed up.
NASA Astrophysics Data System (ADS)
Kim, E.; Tedesco, M.; de Roo, R.; England, A. W.; Gu, H.; Pham, H.; Boprie, D.; Graf, T.; Koike, T.; Armstrong, R.; Brodzik, M.; Hardy, J.; Cline, D.
2004-12-01
The NASA Cold Land Processes Field Experiment (CLPX-1) was designed to provide microwave remote sensing observations and ground truth for studies of snow and frozen ground remote sensing, particularly issues related to scaling. CLPX-1 was conducted in 2002 and 2003 in Colorado, USA. One of the goals of the experiment was to test the capabilities of microwave emission models at different scales. Initial forward model validation work has concentrated on the Local-Scale Observation Site (LSOS), a 0.8-ha study site consisting of open meadows separated by trees where the most detailed measurements were made of snow depth and temperature, density, and grain size profiles. Results obtained in the case of the 3rd Intensive Observing Period (IOP3) (February, 2003, dry snow) suggest that a model based on Dense Medium Radiative Transfer (DMRT) theory is able to model the recorded brightness temperatures using snow parameters derived from field measurements. This paper focuses on the ability of forward DMRT modelling, combined with snowpack measurements, to reproduce the radiobrightness signatures observed by the University of Michigan's Truck-Mounted Radiometer System (TMRS) at 19 and 37 GHz during the 4th IOP (IOP4) in March, 2003. Unlike in IOP3, conditions during IOP4 include both wet and dry periods, providing a valuable test of DMRT model performance. In addition, a comparison will be made for the one day of coincident observations by the University of Tokyo's Ground-Based Microwave Radiometer-7 (GBMR-7) and the TMRS. The plot-scale study in this paper establishes a baseline of DMRT performance for later studies at successively larger scales. These scaling studies will help guide the choice of future snow retrieval algorithms and the design of future Cold Lands observing systems.
The effects of practice on tracking and subjective workload
NASA Technical Reports Server (NTRS)
Hancock, P. A.; Robinson, M. A.; Chu, A. L.; Hansen, D. R.; Vercruyssen, M.
1989-01-01
Six college-age male subjects performed one hundred two-minute trials on a second-order tracking task. After each trial, subjects estimated perceived workload using both the NASA TLX and SWAT workload assessment procedures. Results confirmed an expected performance improvement on the tracking task which followed traditional learning curves within the performance of each individual. Perceived workload also decreased for both scales across trials. While performance variability significantly decreased across trials, workload variability remained constant. One month later, the same subjects returned to complete the second experiment in the sequence, which was a retention replication of the first experiment. Results replicated those for the first experiment except that both performance error and workload were at reduced overall levels. Results in general affirm a parallel workload reduction with performance improvement, an observation consistent with a resource-based view of automaticity.
An experimental investigation of the flow physics of high-lift systems
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.
1995-01-01
This progress report is a series of overviews outlining experiments on the flow physics of confluent boundary layers for high-lift systems. The research objectives include establishing the role of confluent boundary layer flow physics in high-lift production; contrasting confluent boundary layer structures for optimum and non-optimum C(sub L) cases; forming a high quality, detailed archival data base for CFD/modelling; and examining the role of relaminarization and streamline curvature. Goals of this research include completing LDV study of an optimum C(sub L) case; performing detailed LDV confluent boundary layer surveys for multiple non-optimum C(sub L) cases; obtaining skin friction distributions for both optimum and non-optimum C(sub L) cases for scaling purposes; data analysis and inner and outer variable scaling; setting-up and performing relaminarization experiments; and a final report establishing the role of leading edge confluent boundary layer flow physics on high-lift performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallarno, George; Rogers, James H; Maxwell, Don E
The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second-fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.
CONTROL OF CRYPTOSPORIDIUM OOCYSTS BY STEADY-STATE CONVENTIONAL TREATMENT
Pilot-scale experiments have been performed to assess the ability of conventional treatment to control Cryptosporidium oocysts under steady-state conditions. The work was performed with a pilot plant that was designed to minimize flow rates and, as a result, the number of oocyst...
Unsteady Aerodynamics of a Wortmann FX-63-137 Wing in a Fluctuating Wind Field.
1987-11-01
Liu, H.-T.
Experiments were conducted in the atmospheric boundary layer to examine the effects of atmospheric unsteadiness on the performance of a full-scale Wortmann FX-63-137 wing.
Modeling of impulsive propellant reorientation
NASA Technical Reports Server (NTRS)
Hochstein, John I.; Patag, Alfredo E.; Chato, David J.
1988-01-01
The impulsive propellant reorientation process is modeled using the Energy Calculations for Liquid Propellants in a Space Environment (ECLIPSE) code. A brief description of the process and the computational model is presented. Code validation is documented via comparison to experimentally derived data for small-scale tanks. Predictions of reorientation performance are presented for two tanks designed for use in flight experiments and for a proposed full-scale OTV tank. A new dimensionless parameter is developed to correlate reorientation performance in geometrically similar tanks. Its success is demonstrated.
Bonnet, J L; Bohatier, J; Pépin, D
1999-06-01
Two experiments were performed to assess the impact of cadmium on the sewage lagoon wastewater treatment process. For each one, three laboratory-scale pilot plants with one tank receiving the same raw effluent were used; one plant served as control and the other two were contaminated once only with cadmium. In the first study, the effects of a shock load of two concentrations of cadmium chloride (60 and 300 micrograms/l) on the plant performance, microbial populations (protists and bacteria) and enzyme activities were determined. Initially, most of the performance parameters were affected in a concentration-dependent manner. A reduction in the protist population density and some influence on the total bacterial population were observed, and the potential enzymatic activities were also modified. A second experiment with a lower cadmium concentration (30 micrograms/l), supplied as chloride or sulphate, still perturbed most of the parameters studied, and the effects of the two cadmium salts were identical.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tessier, Francois; Vishwanath, Venkatram
2017-11-28
Reading and writing data efficiently from different tiers of storage is necessary for most scientific simulations to achieve good performance at scale. Many software solutions have been developed to decrease the I/O bottleneck. One well-known strategy, in the context of collective I/O operations, is the two-phase I/O scheme. This strategy consists of selecting a subset of processes to aggregate contiguous pieces of data before performing reads/writes. In our previous work, we implemented the two-phase I/O scheme with an MPI-based topology-aware algorithm. Our algorithm showed very good performance at scale compared to standard I/O libraries such as POSIX I/O and MPI I/O. However, the algorithm had several limitations hindering satisfying reproducibility of our experiments. In this paper, we extend our work by 1) identifying the obstacles we face in reproducing our experiments and 2) discovering solutions that reduce the unpredictability of our results.
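A toy, single-process sketch of the two-phase aggregation idea described above (the function name and the offset-range routing policy are illustrative; real implementations such as ROMIO exchange the data between ranks with MPI communication before the aggregators issue large contiguous writes):

```python
def two_phase_write(chunks, n_agg):
    """chunks: list of (offset, bytes) pieces from all ranks, non-overlapping.
    Phase 1: route each chunk to the aggregator owning its offset range.
    Phase 2: each aggregator assembles its region in offset order.
    Returns the reassembled file contents, simulating one contiguous write
    per aggregator instead of many small scattered writes."""
    total = sum(len(data) for _, data in chunks)
    region = -(-total // n_agg)  # ceil-divide the file extent into regions
    buckets = [[] for _ in range(n_agg)]
    for off, data in chunks:  # phase 1: routing
        buckets[min(off // region, n_agg - 1)].append((off, data))
    out = b""
    for bucket in buckets:  # phase 2: ordered, contiguous assembly
        for _, data in sorted(bucket):
            out += data
    return out

# Three ranks contribute out-of-order pieces of a 10-byte "file"
parts = [(4, b"efgh"), (0, b"abcd"), (8, b"ij")]
print(two_phase_write(parts, 2))  # b'abcdefghij'
```

The benefit in practice is that the file system sees a few large contiguous requests from the aggregators rather than many small interleaved ones.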
Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion
Calabro, Finnegan J.; Vaina, Lucia Maria
2016-01-01
Background: Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods: Sixteen right-handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods, we investigated the functional specificity of this cue by measuring the ability of observers to perform heading (direction of self-motion) and 3D trajectory discrimination tasks on the basis of scale changes and optic flow. Results: Statistical analyses of performance on the test experiments, in comparison with the control experiments, suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and thus do not support correct discrimination of 3D object trajectories. Conclusions: These results have important implications for the type of visually guided navigation that can be performed by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114
A new method for testing the scale-factor performance of fiber optical gyroscope
NASA Astrophysics Data System (ADS)
Zhao, Zhengxin; Yu, Haicheng; Li, Jing; Li, Chao; Shi, Haiyang; Zhang, Bingxin
2015-10-01
Fiber optic gyroscope (FOG) is a solid-state optical gyroscope with good environmental adaptability that has been widely used in national defense, aviation, aerospace, and other civilian areas. In some applications, a FOG experiences environmental conditions such as vacuum, radiation, and vibration, and its scale-factor performance is an important accuracy indicator. However, the scale-factor performance of a FOG under these conditions is difficult to test using conventional methods, because a turntable cannot operate in such environments. Based on the observation that the physical effect produced in a FOG by a sawtooth voltage signal under static conditions is consistent with the physical effect produced by a turntable in uniform rotation, a new turntable-free method for testing the scale-factor performance of a FOG is proposed in this paper. In this method, the test system consists of an external operational-amplifier circuit and a FOG whose modulation signal and Y waveguide are disconnected. The external circuit superimposes an externally generated sawtooth voltage signal on the modulation signal of the FOG and applies the combined signal to the Y waveguide. The test system produces different equivalent angular velocities by changing the period of the sawtooth signal during the scale-factor performance test. This paper analyzes the system model of a FOG with a superimposed external sawtooth and concludes that the equivalent input angular velocity produced by the sawtooth voltage signal has the same effect as an input angular velocity produced by a turntable.
The relationship between the equivalent angular velocity and parameters such as the sawtooth period is presented, and a correction method for the equivalent angular velocity is derived by analyzing the influence of each parameter error on the equivalent angular velocity. A comparative experiment between the proposed method and turntable calibration was conducted, and the scale-factor performance test results of the same FOG were consistent between the two methods. Because the input angular velocity is an equivalent effect produced by a sawtooth voltage signal, with no need for a turntable to produce mechanical rotation, the proposed method can be used to test FOG performance under ambient conditions in which a turntable cannot operate.
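The equivalence the method rests on can be sketched with the standard Sagnac scale-factor formula; this is a hedged illustration, and the coil parameters and the assumption of a 2*pi-amplitude sawtooth are mine, not taken from the paper:

```python
import math

# Sagnac scale factor: delta_phi = K * omega, with K = 2*pi*L*D / (lambda*c),
# for coil length L (m), coil diameter D (m), wavelength lam (m).
def sagnac_scale_factor(L, D, lam, c=2.998e8):
    return 2 * math.pi * L * D / (lam * c)

# A sawtooth phase ramp of amplitude 2*pi and period T (s) produces the
# same phase rate as a uniform rotation omega_eq = (2*pi / T) / K.
def equivalent_rate(T, L, D, lam):
    return (2 * math.pi / T) / sagnac_scale_factor(L, D, lam)

# Illustrative (invented) values: 1 km coil, 8 cm diameter, 1550 nm
# source, 1 s sawtooth period.
omega_eq = equivalent_rate(T=1.0, L=1000.0, D=0.08, lam=1550e-9)
```

Sweeping the sawtooth period T then sweeps the emulated rotation rate, which is how the test system exercises the scale factor without mechanical rotation.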
Pannell, J Scott; Santiago-Dieppa, David R; Wali, Arvin R; Hirshman, Brian R; Steinberg, Jeffrey A; Cheung, Vincent J; Oveisi, David; Hallstrom, Jon; Khalessi, Alexander A
2016-08-29
This study establishes performance metrics for angiography and neuroendovascular surgery procedures based on longitudinal improvement in individual trainees with differing levels of training and experience. Over the course of 30 days, five trainees performed 10 diagnostic angiograms, coiled 10 carotid terminus aneurysms in the setting of subarachnoid hemorrhage, and performed 10 left middle cerebral artery embolectomies on a Simbionix Angio Mentor™ simulator. All procedures were nonconsecutive. Total procedure time, fluoroscopy time, contrast dose, heart rate, blood pressures, medications administered, packing densities, the number of coils used, and the number of stent-retriever passes were recorded. Image quality was rated, and the number of technically unsafe events was recorded. The trainees' device selection, macrovascular access, microvascular access, clinical management, and overall performance were rated during each procedure on a traditional Likert scale of 1=fail, 2=poor, 3=satisfactory, 4=good, and 5=excellent. These ordinal values correspond with published assessment scales of surgical technique. After performing five diagnostic angiograms and five embolectomies, all participants demonstrated marked decreases in procedure time, fluoroscopy doses, contrast doses, and adverse technical events; marked improvements in image quality, device selection, access scores, and overall technical performance were additionally observed (p < 0.05). Similarly, trainees demonstrated marked improvement in technical performance and clinical management after five coiling procedures (p < 0.05). However, trainees with less prior experience deploying coils continued to experience intra-procedural ruptures up to the eighth embolization procedure, likely reflecting that limited tactile experience led to the exertion of greater force than appropriate for coil placement.
Trainees across all levels of training and prior experience demonstrated a significant performance improvement after completion of our simulator curriculum consisting of five diagnostic angiograms, five embolectomy cases, and 10 aneurysm coil embolizations.
Channeling of multikilojoule high-intensity laser beams in an inhomogeneous plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivancic, S.; Haberberger, D.; Habara, H.
Channeling experiments were performed that demonstrate the transport of high-intensity (>10¹⁸ W/cm²), multikilojoule laser light through a millimeter-sized, inhomogeneous (~300-μm density scale length) laser-produced plasma up to overcritical density, which is an important step forward for the fast-ignition concept. The background plasma density and the density depression inside the channel were characterized with a novel optical probe system. The channel progression velocity was measured and agrees well with theoretical predictions based on large-scale particle-in-cell simulations, confirming scaling laws for the required channeling laser energy and laser pulse duration, which are important parameters for future integrated fast-ignition channeling experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Linfeng
A literature survey has been conducted to collect information on international R&D activities in the extraction of uranium from seawater for the period from the 1960s through 2010. The reported activities, covering both laboratory-scale bench experiments and large-scale marine experiments, are summarized by country/region in this report. Among all countries where such activities have been reported, Japan has carried out the most advanced large-scale marine experiments with the amidoxime-based system, achieving a collection efficiency (1.5 g-U/kg-adsorbent for 30 days of soaking in the ocean) that could justify the development of industrial-scale marine systems to produce uranium from seawater at a price competitive with conventional uranium resources. R&D opportunities are discussed for improving the system performance (selectivity for uranium, loading capacity, chemical stability and mechanical durability in the sorption-elution cycle, and sorption kinetics) and making the collection of uranium from seawater more economically competitive.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart Zweben; Samuel Cohen; Hantao Ji
Small ''concept exploration'' experiments have for many years been an important part of the fusion research program at the Princeton Plasma Physics Laboratory (PPPL). This paper describes some of the present and planned fusion concept exploration experiments at PPPL. These experiments are at a university-scale research level, in contrast with the larger fusion devices at PPPL such as the National Spherical Torus Experiment (NSTX) and the Tokamak Fusion Test Reactor (TFTR), which are at ''proof-of-principle'' and ''proof-of-performance'' levels, respectively.
Pilots strategically compensate for display enlargements in surveillance and flight control tasks.
Stelzer, Emily Muthard; Wickens, Christopher D
2006-01-01
Experiments were conducted to assess the impact of display size on flight control, airspace surveillance, and goal-directed target search. Research on 3-D displays has shown that display scale compression influences the perception of flight path deviation, though less is known about the causes that drive this effect. In addition, research on attention-based tasks has shown that information displaced to significant eccentricities can amplify effort, but it is unclear whether this effect generates a performance difference in complex displays. In Experiment 1, 16 pilots completed a low-fidelity flight control task under single- and dual-axis control. In Experiment 2, the control task from Experiment 1 was scaled up to a more realistic flight environment, and pilots performed hazard surveillance and target search tasks. For flight control, pilots exhibited less path error and greater stick activity with a large display, which was attributed both to enhanced resolution and to the fact that larger depictions of error lead to greater urgency in correcting deviations. Size did not affect hazard surveillance or search, as pilots adaptively altered their scanning patterns in response to the enlargement of the displays. Although pilots were adaptive to display changes in search and surveillance, display size reduction diminished estimates of flight path deviation and control performance because of lowered resolution and control urgency. Care should be taken when manipulating display size, as size reduction can diminish control performance.
McRae, Marion E; Chan, Alice; Hulett, Renee; Lee, Ai Jin; Coleman, Bernice
2017-06-01
There are few reports of the effectiveness of, or satisfaction with, simulation for learning cardiac surgical resuscitation skills. This study tested the effect of simulation on nurses' self-confidence to perform cardiac surgical resuscitation and their satisfaction with the simulation experience. A convenience sample of 60 nurses rated their self-confidence to perform cardiac surgical resuscitation skills before and after two simulations. Simulation performance was assessed. Subjects completed the Satisfaction with Simulation Experience scale and demographics. Self-confidence scores to perform all cardiac surgical skills were significantly higher after the simulation, as measured by paired t-tests (d=-0.50 to 1.78). Self-confidence and cardiac surgical work experience were not correlated with time to performance. Total satisfaction scores were high (mean 80.2, SD 1.06), indicating satisfaction with the simulation. Satisfaction scores were not correlated with cardiac surgical work experience (τ=-0.05, ns). Self-confidence scores to perform cardiac surgical resuscitation procedures were higher after the simulation, and nurses were highly satisfied with the simulation experience.
Test of φ² model predictions near the ³He liquid-gas critical point
NASA Technical Reports Server (NTRS)
Barmatz, M.; Zhong, F.; Hahn, I.
2000-01-01
NASA is supporting the development of an experiment called MISTE (Microgravity Scaling Theory Experiment) for a future International Space Station mission. The main objective of this flight experiment is to perform in-situ measurements of PVT, the heat capacity at constant volume C_v, and χ_τ in the asymptotic region near the ³He liquid-gas critical point.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.
It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
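A toy illustration of the optimization-based idea (my own minimal sketch in plain NumPy, not the paper's PETSc/TAO implementation): rather than accepting a Galerkin-style solution that may go negative, solve a bound-constrained problem, here by projected gradient descent.

```python
import numpy as np

# Instead of taking the solution of K c = f directly, solve
#   min ||K c - f||^2  subject to  c >= 0
# by projected gradient descent: gradient step, then clip to the bounds.

def nonneg_solve(K, f, iters=5000, lr=None):
    if lr is None:
        lr = 1.0 / np.linalg.norm(K, 2) ** 2   # safe step size
    c = np.zeros(K.shape[1])
    for _ in range(iters):
        c = c - lr * K.T @ (K @ c - f)         # least-squares gradient step
        c = np.maximum(c, 0.0)                 # projection: non-negativity
    return c

# A small system whose unconstrained solution has a negative entry:
K = np.array([[2.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, -0.5])
plain = np.linalg.solve(K, f)    # contains a negative "concentration"
bounded = nonneg_solve(K, f)     # all entries >= 0
```

At scale, a bound-constrained solve of this kind is what TAO's optimization solvers handle; the projection step is what enforces the non-negative constraint that the plain Galerkin solution violates.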
Ogawa, S.; Komini Babu, S.; Chung, H. T.; ...
2016-08-22
The nano/micro-scale geometry of polymer electrolyte fuel cell (PEFC) catalyst layers critically affects cell performance. The small length scales and complex structure of these composite layers make it challenging to analyze cell performance and physics at the particle scale by experiment. We present a computational method to simulate transport and chemical reaction phenomena at the pore/particle scale and apply it to a PEFC cathode with a platinum-group-metal-free (PGM-free) catalyst. Here, we numerically solve the governing equations for the physics with heterogeneous oxygen diffusion coefficient and proton conductivity evaluated using the actual electrode structure and ionomer distribution obtained using nano-scale resolution X-ray computed tomography (nano-CT). Using this approach, the oxygen concentration and electrolyte potential distributions imposed by the oxygen reduction reaction are solved, and the impact of the catalyst layer structure on performance is evaluated.
Fritt-Rasmussen, Janne; Brandvik, Per Johan
2011-08-01
This paper compares the ignitability of Troll B crude oil weathered under simulated Arctic conditions (0%, 50% and 90% ice cover). The experiments were performed at different scales at SINTEF's laboratories in Trondheim, at the field research station on Svalbard, and in broken ice (70-90% ice cover) in the Barents Sea. Samples from the weathering experiments were tested for ignitability using the same laboratory burning cell. The measured ignitability from the experiments at these different scales showed good agreement for samples with similar weathering. The ice conditions clearly affected the weathering process, and 70% ice cover or more reduces the weathering and allows a longer time window for in situ burning. The results from the Barents Sea revealed that weathering and ignitability can vary within an oil slick. This field use of the burning cell demonstrated that it can be used as an operational tool to monitor the ignitability of oil spills.
ERIC Educational Resources Information Center
Ali, Shainna; Lambie, Glenn; Bloom, Zachary D.
2017-01-01
The Sexual Orientation Counselor Competency Scale (SOCCS), developed by Bidell in 2005, measures counselors' levels of skills, awareness, and knowledge in assisting lesbian, gay, or bisexual (LGB) clients. In an effort to gain an increased understanding of the construct validity of the SOCCS, researchers performed an exploratory factor analysis on…
Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D
2017-12-01
In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) method is presented for calculating microvascular images of human skin. The DSDLI algorithm contrasts blood flow by calculating the variance in difference images of two consecutive log-scale intensity-based structural images from the same position along the depth direction. The en face microvascular images are then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in improved spatial resolution and SNR in microvascular images compared to speckle-variance OCT and the power-intensity-differential method. The performance of DSDLI was verified in both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was applied to remove bulk motion noise, in which a 2D Fourier transform was utilized to generate new images with spatial interval equal to half the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals from flowing particles were improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of in vivo blood vessel images was increased by 21%.
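A minimal reading of the DSDLI computation in NumPy (my interpretation of the abstract, not the authors' code; the array shapes, synthetic data, and function name are invented, and the sub-pixel motion-registration step is omitted):

```python
import numpy as np

# DSDLI sketch: take log-scale intensity B-scans acquired at the same
# position, difference consecutive frames, then contrast flow as the
# standard deviation of those differences over a chosen depth range.

def dsdli_en_face(bscans, depth_range):
    """bscans: (n_frames, depth, width) linear-intensity array from one
    position; returns an en face flow-contrast line of length `width`."""
    log_i = np.log10(np.clip(bscans, 1e-12, None))   # log-scale intensity
    diffs = np.diff(log_i, axis=0)                   # consecutive-frame differences
    z0, z1 = depth_range
    flow = diffs[:, z0:z1, :].std(axis=0)            # std across frame pairs
    return flow.mean(axis=0)                         # aggregate along depth

# Synthetic example: static tissue everywhere except one decorrelating
# "vessel" column at lateral index 10.
rng = np.random.default_rng(0)
frames = np.ones((8, 64, 32)) * 100.0
frames[:, 20:30, 10] *= rng.lognormal(0.0, 0.5, (8, 10))
line = dsdli_en_face(frames, depth_range=(16, 34))
# the flow column (index 10) stands out against the static background
```

Static voxels give identical log intensities across frames, so their differences, and hence their standard deviation, vanish; only decorrelating (flowing) voxels contribute contrast.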
Los Alamos Explosives Performance Key to Stockpile Stewardship
Dattelbaum, Dana
2018-02-14
As the U.S. nuclear deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives used in most weapons. At Los Alamos National Laboratory, explosives research includes a wide variety of both large- and small-scale experiments, including small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and, at the Nevada Nuclear Security Site, underground sub-critical experiments.
Additional experiments on flowability improvements of aviation fuels at low temperatures, volume 2
NASA Technical Reports Server (NTRS)
Stockemer, F. J.; Deane, R. L.
1982-01-01
An investigation was performed to study flow-improver additives and scale-model fuel heating systems for use with aviation hydrocarbon fuel at low temperatures. Tests were performed in a facility that simulated the heat transfer and temperature profiles anticipated in wing fuel tanks during flight of long-range commercial aircraft. The results are presented of experiments conducted in a test tank simulating a section of an outer wing integral fuel tank, approximately full-scale in height, chilled through heat-exchange panels bonded to the upper and lower horizontal surfaces. A separate system heated lubricating oil externally with a controllable electric heater, transferred the heat to fuel pumped from the test tank through an oil-to-fuel heat exchanger, and recirculated the heated fuel back to the test tank.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pak, A.; Dewald, E. L.; Landen, O. L.
2015-12-15
Temporally resolved measurements of the hohlraum radiation flux asymmetry incident onto a bismuth-coated surrogate capsule have been made over the first two nanoseconds of ignition-relevant laser pulses. Specifically, we study the P2 asymmetry of the incoming flux as a function of cone fraction, defined as the inner-to-total laser beam power ratio, for a variety of hohlraums with different scales and gas fills. This work was performed to understand the relevance of recent experiments, conducted in new reduced-scale neopentane gas-filled hohlraums, to full-scale helium-filled ignition targets. Experimental measurements, matched by 3D view factor calculations, are used to infer differences in symmetry, relative beam absorption, and cross beam energy transfer (CBET), employing an analytic model. Despite differences in hohlraum dimensions and gas fill, as well as in laser beam pointing and power, we find that laser absorption, CBET, and the cone fraction at which a symmetric flux is achieved are similar to within 25% between experiments conducted in the reduced- and full-scale hohlraums. This work demonstrates a close surrogacy in the dynamics during the first shock between reduced-scale and full-scale implosion experiments and is an important step in enabling an increased rate of study for physics associated with inertial confinement fusion.
Steigerwald, Sarah N.; Park, Jason; Hardy, Krista M.; Gillman, Lawrence; Vergis, Ashley S.
2015-01-01
Background: Considerable resources have been invested in both low- and high-fidelity simulators in surgical training. The purpose of this study was to investigate whether the Fundamentals of Laparoscopic Surgery (FLS, low-fidelity box trainer) and LapVR (high-fidelity virtual reality) training systems correlate with operative performance on the Global Operative Assessment of Laparoscopic Skills (GOALS) global rating scale using a porcine cholecystectomy model in a novice surgical group with minimal laparoscopic experience. Methods: Fourteen postgraduate year 1 surgical residents with minimal laparoscopic experience performed tasks from the FLS program and the LapVR simulator as well as a live porcine laparoscopic cholecystectomy. Performance was evaluated using standardized FLS metrics, automatic computer evaluations, and a validated global rating scale. Results: Overall FLS score did not show an association with GOALS global rating scale score on the porcine cholecystectomy. None of the five LapVR task scores were significantly associated with GOALS score on the porcine cholecystectomy. Conclusions: Neither the low-fidelity box trainer nor the high-fidelity virtual simulator demonstrated significant correlation with GOALS operative scores. These findings offer caution against the use of these modalities for brief assessments of novice surgical trainees, especially for predictive or selection purposes. PMID:26641071
Simulating Small-Scale Experiments of In-Tunnel Airblast Using STUN and ALE3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuscamman, Stephanie; Glenn, Lewis; Schebler, Gregory
2011-09-12
This report details continuing validation efforts for the Sphere and Tunnel (STUN) and ALE3D codes. STUN has been validated previously for blast propagation through tunnels using several sets of experimental data with varying charge sizes and tunnel configurations, including the MARVEL nuclear driven shock tube experiment (Glenn, 2001). The DHS-funded STUNTool version is compared to experimental data and the LLNL ALE3D hydrocode. In this particular study, we compare the performance of the STUN and ALE3D codes in modeling an in-tunnel airblast to experimental results obtained by Lunderman and Ohrt in a series of small-scale high explosive experiments (1997).
Combernoux, Nicolas; Schrive, Luc; Labed, Véronique; Wyart, Yvan; Carretier, Emilie; Moulin, Philippe
2017-10-15
The recent use of the reverse osmosis (RO) process at the damaged Fukushima-Daiichi nuclear power plant generated growing interest in the application of this process for decontamination purposes. This study focused on the development of a robust RO process for decontamination of two kinds of liquid effluents: contaminated groundwater after a nuclear disaster and contaminated seawater during a nuclear accident. The SW30 HR membrane was selected among others in this study due to its higher retentions (96% for Cs and 98% for Sr) in a true groundwater. Significant fouling and scaling phenomena, attributed to calcium and strontium precipitation, were evidenced in this work, underscoring the importance of the lab-scale experiment in the process. Validation of the separation performances at trace radionuclide concentrations was performed, with similar retentions around 96% for the surrogate Cs (inactive) and 137Cs (radioactive). The scale-up to a 2.6 m² spiral wound membrane led to equivalent retentions (around 96% for Cs and 99% for Sr) but lower flux values, underlining that the hydrodynamic parameters (flowrate/cross-flow velocity) should be optimized. This methodology was also applied to the reconstituted seawater effluent: retentions were slightly lower than for the groundwater, and the same hydrodynamic effects were observed at the pilot scale. Ageing of the membrane was then assessed through irradiation experiments. Results showed that the composition of the membrane active layer influenced the membrane's resistance to γ irradiation: the SW30 HR membrane's performance (retention and permeability) was better than that of the Osmonics SE at 1 MGy. Finally, to supplement the scale-up approach, irradiation of a spiral wound membrane revealed a limited effect on permeability and retention, indicating that irradiation conditions need to be controlled for further development of the process.
Summary of engineering-scale experiments for the Solar Detoxification of Water project
NASA Astrophysics Data System (ADS)
Pacheco, J. E.; Yellowhorse, L.
1992-03-01
This report contains a summary of large-scale experiments conducted at Sandia National Laboratories under the Solar Detoxification of Water project. The objectives of the work performed were to determine the potential of using solar radiation to destroy organic contaminants in water by photocatalysis and to develop the process and improve its performance. For these experiments, we used parabolic troughs to focus sunlight onto glass pipes mounted at the trough's focus. Water spiked with a contaminant and containing suspended titanium dioxide catalyst was pumped through the illuminated glass pipe, activating the catalyst with the ultraviolet portion of the solar spectrum. The activated catalyst creates oxidizers that attack and destroy the organics. Included in this report are a summary and discussion of the implications of experiments conducted to determine: the effect of process kinetics on the destruction of chlorinated solvents (such as trichloroethylene, perchloroethylene, trichloroethane, methylene chloride, chloroform and carbon tetrachloride), the enhancement due to added hydrogen peroxide, the optimal catalyst loading, the effect of light intensity, the inhibition due to bicarbonates, and catalyst issues.
NASA Astrophysics Data System (ADS)
Sinsky, E.; Zhu, Y.; Li, W.; Guan, H.; Melhauser, C.
2017-12-01
Optimal forecast quality is crucial for the preservation of life and property. Improving monthly forecast performance over both the tropics and extra-tropics requires attention to various physical aspects such as the representation of the underlying SST, the model physics, and the representation of model-physics uncertainty in an ensemble forecast system. This work focuses on the impact of stochastic physics, SST, and the convection scheme on forecast performance at the sub-seasonal scale over the tropics and extra-tropics, with emphasis on the Madden-Julian Oscillation (MJO). A 2-year period is evaluated using the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS). Three experiments with configurations different from the operational GEFS were performed to illustrate the impact of the stochastic physics, SST, and convection scheme. These experiments are compared against a control experiment (CTL), which consists of the operational GEFS with its integration extended from 16 to 35 days. The three configurations are: 1) SPs, which uses Stochastically Perturbed Physics Tendencies (SPPT), Stochastic Perturbed Humidity (SHUM), and Stochastic Kinetic Energy Backscatter (SKEB); 2) SPs+SST_bc, which combines SPs with a bias-corrected forecast SST from the NCEP Climate Forecast System Version 2 (CFSv2); and 3) SPs+SST_bc+SA_CV, which combines SPs, the bias-corrected forecast SST, and a scale-aware convection scheme. Compared to the CTL experiment, SPs shows substantial improvement: MJO skill improved by about 4 lead days during the 2-year period. Improvement is also seen over the extra-tropics due to the updated stochastic physics, with gains of 3.1% and 4.2% during weeks 3 and 4 over the northern and southern hemispheres, respectively. Further improvement is seen when the bias-corrected CFSv2 SST is combined with SPs.
Additionally, forecast performance improves when the scale-aware convection scheme (SPs+SST_bc+SA_CV) is added, especially over the tropics. Among the three experiments, SPs+SST_bc+SA_CV is the best configuration in terms of MJO forecast skill.
The Design of PSB-VVER Experiments Relevant to Accident Management
NASA Astrophysics Data System (ADS)
Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander
Experimental programs carried out in integral test facilities are relevant for validating the best-estimate thermal-hydraulic codes(1), which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing measured and calculated parameters and determining whether a computer code has an adequate capability to predict the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility(2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the EC-financed TACIS 2.03/97 Contract 3.03.03 Part A(3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix through the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of thermal losses to the environment. This issue particularly affects small-scale facilities and has relevance to the scaling approach related to the power and volume of the facility.
Radjenović, Jelena; Sirtori, Carla; Petrović, Mira; Barceló, Damià; Malato, Sixto
2010-04-01
In the present study, the mechanisms of solar photodegradation of the H2-receptor antagonist ranitidine (RNTD) were studied in the well-defined system of a pilot-plant-scale Compound Parabolic Collector (CPC) reactor. Two types of heterogeneous photocatalytic experiments were performed, one catalysed by the titanium dioxide (TiO2) semiconductor and one by the Fenton reagent (Fe2+/H2O2), each with distilled water and a synthetic wastewater effluent matrix. Complete disappearance of the parent compound and modest mineralization were attained in all experiments. Furthermore, kinetic parameters, main intermediate products, release of heteroatoms, and formation of carboxylic acids are discussed. The main intermediate products of photocatalytic degradation of RNTD were structurally elucidated by tandem mass spectrometry (MS2) experiments performed on a quadrupole-time-of-flight (QqToF) mass analyzer coupled to an ultra-performance liquid chromatograph (UPLC). RNTD displayed high reactivity towards OH radicals, although a product of reduction by conduction-band electrons was also present in the experiment with TiO2. In the absence of standards, quantification of intermediates was not possible and only qualitative profiles of their evolution could be determined. The proposed TiO2 and photo-Fenton degradation routes of RNTD are reported for the first time. (c) 2010 Elsevier Ltd. All rights reserved.
How Mathematics Propels the Development of Physical Knowledge
ERIC Educational Resources Information Center
Schwartz, Daniel L.; Martin, Taylor; Pfaffman, Jay
2005-01-01
Three studies examined whether mathematics can propel the development of physical understanding. In Experiment 1, 10-year-olds solved balance scale problems that used easy-to-count discrete quantities or hard-to-count continuous quantities. Discrete quantities led to age typical performances. Continuous quantities caused performances like those of…
Homesickness at College: Its Impact on Academic Performance and Retention
ERIC Educational Resources Information Center
Sun, Jie; Hagedorn, Linda Serra; Zhang, Yi
2016-01-01
For this study we identified factors exerting significant influence on homesickness and explored the impact of the homesick experience on students' academic performance and retention in the first year in college. The findings reveal 2 constructs underlying the homesickness scale: homesick separation and homesick distress. Demographic variables…
Simulating flow around scaled model of a hypersonic vehicle in wind tunnel
NASA Astrophysics Data System (ADS)
Markova, T. V.; Aksenov, A. A.; Zhluktov, S. V.; Savitsky, D. V.; Gavrilov, A. D.; Son, E. E.; Prokhorov, A. N.
2016-11-01
A prospective hypersonic HEXAFLY aircraft is considered in this paper. In order to obtain the aerodynamic characteristics of a new construction design of the aircraft, experiments with a scaled model were carried out in a wind tunnel under different conditions. The runs were performed at different angles of attack, with and without hydrogen combustion in the scaled propulsion engine. However, the measured physical quantities do not provide all the information about the flowfield. Numerical simulation can complement the experimental data and reduce the number of wind tunnel experiments. Besides that, reliable CFD software can be used to calculate the aerodynamic characteristics of any possible design of the full-scale aircraft under different operating conditions. The reliability of the numerical predictions must be confirmed in a verification study of the software. The given work is aimed at numerical investigation of the flowfield around and inside the scaled model of the HEXAFLY-CIAM module under wind tunnel conditions. A cold run (without combustion) was selected for this study. The calculations were performed in the FlowVision CFD software. The flow characteristics are compared against the available experimental data. The verification study carried out confirms the capability of the FlowVision CFD software to calculate the flows discussed.
Sánchez, Renata; Rodríguez, Omaira; Rosciano, José; Vegas, Liumariel; Bond, Verónica; Rojas, Aram; Sanchez-Ismayel, Alexis
2016-09-01
The objective of this study is to determine the ability of the GEARS (Global Evaluative Assessment of Robotic Skills) scale to differentiate individuals with different levels of experience in robotic surgery, as a fundamental validation. This cross-sectional study included three groups of individuals with different levels of experience in robotic surgery (expert, intermediate, novice), whose performance was assessed with GEARS by two reviewers. The difference between groups was determined by the Mann-Whitney test, and the consistency between the reviewers was studied by Kendall's W coefficient. The agreement between the reviewers of the GEARS scale was 0.96. The score was 29.8 ± 0.4 for experts, 24 ± 2.8 for intermediates, and 16 ± 3 for novices, with statistically significant differences between all of them (p < 0.05). All parameters of the scale allow discriminating between different levels of experience, with the exception of the depth perception item. We conclude that the GEARS scale was able to differentiate between individuals with different levels of experience in robotic surgery and is therefore a validated and useful tool to evaluate surgeons in training.
Song, Zhiyong; Zhu, Weiyao; Sun, Gangzheng; Blanckaert, Koen
2015-08-01
Microbial enhanced oil recovery (MEOR) depends on the in situ microbial activity to release trapped oil in reservoirs. In practice, undesired consumption is a universal phenomenon but cannot be observed effectively in small-scale physical simulations due to the scale effect. The present paper investigates the dynamics of oil recovery, biomass and nutrient consumption in a series of flooding experiments in a dedicated large-scale sand-pack column. First, control experiments of nutrient transportation with and without microbial consumption were conducted, which characterized the nutrient loss during transportation. Then, a standard microbial flooding experiment was performed recovering additional oil (4.9 % Original Oil in Place, OOIP), during which microbial activity mostly occurred upstream, where oil saturation declined earlier and steeper than downstream in the column. Subsequently, more oil remained downstream due to nutrient shortage. Finally, further research was conducted to enhance the ultimate recovery by optimizing the injection strategy. An extra 3.5 % OOIP was recovered when the nutrients were injected in the middle of the column, and another additional 11.9 % OOIP were recovered by altering the timing of nutrient injection.
Irradiation of materials with short, intense ion pulses at NDCX-II
NASA Astrophysics Data System (ADS)
Seidl, P. A.; Barnard, J. J.; Feinberg, E.; Friedman, A.; Gilson, E. P.; Grote, D. P.; Ji, Q.; Kaganovich, I. D.; Ludewigt, B.; Persaud, A.; Sierra, C.; Silverman, M.; Stepanov, A. D.; Sulyman, A.; Treffert, F.; Waldron, W. L.; Zimmer, M.; Schenkel, T.
2017-06-01
We present an overview of the performance of the Neutralized Drift Compression Experiment-II (NDCX-II) accelerator at Berkeley Lab, and report on recent target experiments on beam driven melting and transmission ion energy loss measurements with nanosecond and millimeter-scale ion beam pulses and thin tin foils. Bunches with around 10^11 ions, 1-mm radius, and 2-30 ns FWHM duration have been created with corresponding fluences in the range of 0.1 to 0.7 J/cm^2. To achieve these short pulse durations and mm-scale focal spot radii, the 1.1 MeV He+ ion beam is neutralized in a drift compression section, which removes the space charge defocusing effect during final compression and focusing. The beam space charge and drift compression techniques resemble necessary beam conditions and manipulations in heavy ion inertial fusion accelerators. Quantitative comparison of detailed particle-in-cell simulations with the experiment play an important role in optimizing accelerator performance.
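The quoted fluence range is consistent with a simple energy-over-area estimate from the bunch parameters given in the abstract; a back-of-the-envelope sketch (ion count and spot radius taken as the round numbers stated):

```python
import math

EV_TO_J = 1.602e-19  # electron-volt to joule

def fluence_J_per_cm2(n_ions, energy_eV, spot_radius_cm):
    """Beam energy fluence: total kinetic energy / focal-spot area."""
    total_energy_J = n_ions * energy_eV * EV_TO_J
    area_cm2 = math.pi * spot_radius_cm ** 2
    return total_energy_J / area_cm2

# ~1e11 He+ ions at 1.1 MeV focused to a 1 mm (0.1 cm) radius spot:
print(fluence_J_per_cm2(1e11, 1.1e6, 0.1))  # ~0.56 J/cm^2, inside the 0.1-0.7 range
```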
Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús
2009-01-01
Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Moreover, large-scale test experiments to validate the null hypothesis are lacking. Although the study of the mechanisms underlying the biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed only after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short-term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large-scale test experiment performed on these cells, producing false expression changes. A random-effects model was therefore developed, including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at the scan and peptide levels was negligible in three large-scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative for quantitative proteomics studies at a depth of several thousand proteins.
PMID:19181660
Log-polar mapping-based scale space tracking with adaptive target response
NASA Astrophysics Data System (ADS)
Li, Dongdong; Wen, Gongjian; Kuai, Yangliu; Zhang, Ximing
2017-05-01
Correlation filter-based tracking has exhibited impressive robustness and accuracy in recent years. Standard correlation filter-based trackers are restricted to translation estimation and equipped with a fixed target response. These trackers produce inferior performance when confronted with significant scale variation or appearance change. We propose a log-polar mapping-based scale space tracker with an adaptive target response. This tracker transforms the scale variation of the target in Cartesian space into a shift along the logarithmic axis in log-polar space. A one-dimensional scale correlation filter is learned online to estimate the shift along the logarithmic axis. With the log-polar representation, scale estimation is achieved accurately without a multiresolution pyramid. To achieve an adaptive target response, the variance of the Gaussian function is computed from the response map and updated online with a learning-rate parameter. Our log-polar mapping-based scale correlation filter and adaptive target response can be combined with any correlation filter-based tracker. In addition, the scale correlation filter can be extended to a two-dimensional correlation filter to achieve joint estimation of scale variation and in-plane rotation. Experiments performed on the OTB50 benchmark demonstrate that our tracker achieves superior performance against state-of-the-art trackers.
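The core property the tracker exploits is that, in log-polar coordinates, a radial scaling of the image becomes an additive shift along the log-radius axis, so a 1-D correlation filter can recover scale. A minimal numerical illustration of that property (not the tracker itself):

```python
import numpy as np

# Sample radii of some image points and a scale change of the target
r = np.array([1.0, 2.0, 4.0])
scale = 1.5

# In log-polar coordinates the radial coordinate is rho = log(r).
# Scaling every radius by `scale` shifts rho by the constant log(scale):
shift = np.log(scale * r) - np.log(r)
print(shift)  # every element equals log(1.5): scale became a pure shift
```

Estimating that constant shift with a 1-D correlation filter is what replaces the usual multiresolution scale pyramid.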
ORNL Pre-test Analyses of A Large-scale Experiment in STYLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T; Yin, Shengjun; Klasky, Hilda B
Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management of non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and lifetime management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and exercised to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and in assessing the level of confidence that can be placed in the best-estimate finite-element solutions.
Oilfield scales: controls on precipitation and crystal morphology of barite (barium sulphate)
NASA Astrophysics Data System (ADS)
Stark, A. I. R.; Wogelius, R. A.; Vaughan, D. J.
2003-04-01
The precipitation and subsequent build-up of barite (barium sulphate) inside extraction tubing presents a costly problem for offshore oil wells that use seawater to mobilize oil during hydrocarbon recovery. Mixing of reservoir formation water containing Ba²⁺ ions with seawater containing SO₄²⁻ ions results in barite precipitation within the reservoir, well-bore region, and piping. Great effort has been expended in designing strategies to minimize scale formation, but details of the reaction mechanism and its sensitivity to thermodynamic variables are poorly constrained. Furthermore, few detailed studies have been carried out under simulated field conditions. Hence an experimental programme was designed to study barite formation under environmentally relevant conditions with control of several system variables during the precipitation reaction. Synthetic sea-water and formation-water brines containing sodium sulphate and barium chloride, respectively, were mixed to induce BaSO₄ precipitation. Experiments were carried out at high temperature (100 °C) and high pressure (500 bars) in double rocking autoclave bombs. Barite formation as a function of the addition of calcium, magnesium, and a generic phosphonate-based scale inhibitor was investigated whilst maintaining constant pH, temperature, and ionic strength (0.5159). Additional experiments were performed at ambient conditions for comparison. Data concerning nucleation, growth rates, and crystal morphology were obtained. ICP-AES data from the supernatant product solutions showed considerable variation in the quantity of barium sulphate precipitated as a function of the listed experimental variables.
For example, ESEM analysis of barium sulphate crystals showed a dramatic shift in crystal habit from the typical tabular habit produced in control experiments; experiments performed in the presence of foreign cations produced more equant crystals, while those experiments completed in the presence of the phosphonate scale inhibitor produced precipitates with distorted anhedral shapes. Based on these preliminary results, further experiments which monitor rate and morphology as a function of Ba/Ca ratio, ionic strength, and ion activity product for barite will also be completed.
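Scaling risk in such brine-mixing scenarios is commonly screened with a saturation index for barite. A minimal sketch, with two loudly flagged simplifications: the log Ksp value (≈ −9.97 at 25 °C) is an illustrative literature-style figure, not from this study, and concentrations stand in for activities:

```python
import math

# Assumed solubility product for barite at 25 degC (illustrative value);
# concentrations (mol/L) are used in place of activities in this sketch.
LOG_KSP_BARITE = -9.97

def saturation_index(ba_molar, so4_molar):
    """SI = log10(IAP / Ksp); SI > 0 means supersaturated (scaling risk)."""
    iap = ba_molar * so4_molar  # ion activity product, approximated
    return math.log10(iap) - LOG_KSP_BARITE

# Mixing Ba-rich formation water with sulphate-rich seawater:
print(saturation_index(1e-3, 1e-2) > 0)  # strongly supersaturated -> True
```

At reservoir temperature, pressure, and ionic strength the true Ksp and activity coefficients differ substantially, which is precisely why the autoclave experiments above are needed.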
Afterbody External Aerodynamic and Performance Prediction at High Reynolds Numbers
NASA Technical Reports Server (NTRS)
Carlson, John R.
1999-01-01
This CFD experiment concludes that the differences between the flow in a flight-Reynolds-number test and in a sub-scale wind tunnel test are substantial for this particular nozzle boattail geometry. The earlier study was performed using a linear k-epsilon turbulence model. The present study was performed using the Girimaji formulation of an algebraic Reynolds stress turbulence simulation.
NASA Astrophysics Data System (ADS)
Egbers, Christoph; Futterer, Birgit; Zaussinger, Florian; Harlander, Uwe
2014-05-01
Baroclinic waves are responsible for the transport of heat and momentum in the oceans, in the Earth's atmosphere, as well as in other planetary atmospheres. The talk will give an overview of possibilities for simulating such large-scale as well as co-existing small-scale structures with the help of well-defined laboratory experiments like the baroclinic wave tank (annulus experiment). The analogy between the Earth's atmosphere and the rotating cylindrical annulus experiment, driven only by rotation and differential heating between polar and equatorial regions, is obvious. From the Gulf Stream, single vortices separate from time to time. The same dynamics, and the co-existence of small- and large-scale structures and their separation, can also be observed in laboratory experiments such as the rotating cylindrical annulus experiment. This experiment represents the mid-latitude dynamics quite well and serves as a central reference experiment in the Germany-wide DFG priority research programme ("METSTRÖM", SPP 1276), acting as a benchmark for many different numerical methods. On the other hand, those laboratory experiments in cylindrical geometry are limited by the fact that the surface and the real interaction between polar and equatorial regions and their different dynamics cannot really be studied. Therefore, I demonstrate how to use the very successful Geoflow I and Geoflow II space experiment hardware on the ISS, with future modifications, for simulations of small- and large-scale planetary atmospheric motion in spherical geometry with differential heating between inner and outer spheres as well as between the polar and equatorial regions. References: Harlander, U., Wenzel, J., Wang, Y., Alexandrov, K. & Egbers, Ch., 2012, Simultaneous PIV- and thermography measurements of partially blocked flow in a heated rotating annulus, Exp. in Fluids, 52 (4), 1077-1087; Futterer, B., Krebs, A., Plesa, A.-C., Zaussinger, F., Hollerbach, R., Breuer, D. & Egbers, Ch., 2013, Sheet-like and plume-like thermal flow in a spherical convection experiment performed under microgravity, J. Fluid Mech., vol. 75, p 647-683
NASA Astrophysics Data System (ADS)
Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.
2002-11-01
The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l min⁻¹ (120 gal min⁻¹) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and initial contaminant concentration. Possible reasons for observed differences such as a dose rate effect are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
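The "dose constant" referred to above comes from an exponential contaminant-removal model, C(D) = C0·exp(−k·D), where D is the absorbed dose. A sketch of fitting k from dose-response data (the values below are synthetic, not from the paper):

```python
import numpy as np

def dose_constant(doses_kGy, concentrations):
    """Fit C(D) = C0 * exp(-k * D) by least squares on ln(C);
    return the dose constant k in 1/kGy."""
    slope, _intercept = np.polyfit(doses_kGy, np.log(concentrations), 1)
    return -slope

# Synthetic dose-response data generated with k = 0.5 /kGy:
D = np.array([0.0, 2.0, 4.0, 8.0])
C = 100.0 * np.exp(-0.5 * D)
print(dose_constant(D, C))  # recovers 0.5
```

Comparing k fitted from bench-scale gamma runs with k from electron-beam runs is the core of the comparison the abstract describes.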
Shock compression response of cold-rolled Ni/Al multilayer composites
Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.
2017-01-06
Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites, and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two-dimensional (2D) meso-scale calculations. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free-surface velocities and stress profiles. These simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.
Combined climate and carbon-cycle effects of large-scale deforestation
Bala, G.; Caldeira, K.; Wickett, M.; Phillips, T. J.; Lobell, D. B.; Delire, C.; Mirin, A.
2007-01-01
The prevention of deforestation and promotion of afforestation have often been cited as strategies to slow global warming. Deforestation releases CO2 to the atmosphere, which exerts a warming influence on Earth's climate. However, biophysical effects of deforestation, which include changes in land surface albedo, evapotranspiration, and cloud cover also affect climate. Here we present results from several large-scale deforestation experiments performed with a three-dimensional coupled global carbon-cycle and climate model. These simulations were performed by using a fully three-dimensional model representing physical and biogeochemical interactions among land, atmosphere, and ocean. We find that global-scale deforestation has a net cooling influence on Earth's climate, because the warming carbon-cycle effects of deforestation are overwhelmed by the net cooling associated with changes in albedo and evapotranspiration. Latitude-specific deforestation experiments indicate that afforestation projects in the tropics would be clearly beneficial in mitigating global-scale warming, but would be counterproductive if implemented at high latitudes and would offer only marginal benefits in temperate regions. Although these results question the efficacy of mid- and high-latitude afforestation projects for climate mitigation, forests remain environmentally valuable resources for many reasons unrelated to climate. PMID:17420463
Assessing Performance in Shoulder Arthroscopy: The Imperial Global Arthroscopy Rating Scale (IGARS).
Bayona, Sofia; Akhtar, Kash; Gupte, Chinmay; Emery, Roger J H; Dodds, Alexander L; Bello, Fernando
2014-07-02
Surgical training is undergoing major changes with reduced resident work hours and an increasing focus on patient safety and surgical aptitude. The aim of this study was to create a valid, reliable method for an assessment of arthroscopic skills that is independent of time and place and is designed for both real and simulated settings. The validity of the scale was tested using a virtual reality shoulder arthroscopy simulator. The study consisted of two parts. In the first part, an Imperial Global Arthroscopy Rating Scale for assessing technical performance was developed using a Delphi method. Application of this scale required installing a dual-camera system to synchronously record the simulator screen and body movements of trainees to allow an assessment that is independent of time and place. The scale includes aspects such as efficient portal positioning, angles of instrument insertion, proficiency in handling the arthroscope and adequately manipulating the camera, and triangulation skills. In the second part of the study, a validation study was conducted. Two experienced arthroscopic surgeons, blinded to the identities and experience of the participants, each assessed forty-nine subjects performing three different tests using the Imperial Global Arthroscopy Rating Scale. Results were analyzed using two-way analysis of variance with measures of absolute agreement. The intraclass correlation coefficient was calculated for each test to assess inter-rater reliability. The scale demonstrated high internal consistency (Cronbach alpha, 0.918). The intraclass correlation coefficient demonstrated high agreement between the assessors: 0.91 (p < 0.001). Construct validity was evaluated using Kruskal-Wallis one-way analysis of variance (chi-square test, 29.826; p < 0.001), demonstrating that the Imperial Global Arthroscopy Rating Scale distinguishes significantly between subjects with different levels of experience utilizing a virtual reality simulator. 
The Imperial Global Arthroscopy Rating Scale has a high internal consistency and excellent inter-rater reliability and offers an approach for assessing technical performance in basic arthroscopy on a virtual reality simulator. The Imperial Global Arthroscopy Rating Scale provides detailed information on surgical skills. Although it requires further validation in the operating room, this scale, which is independent of time and place, offers a robust and reliable method for assessing arthroscopic technical skills. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
Role of substrate quality on IC performance and yields
NASA Technical Reports Server (NTRS)
Thomas, R. N.
1981-01-01
The development of silicon and gallium arsenide crystal growth for the production of large-diameter substrates is discussed. Large-area substrates of significantly improved compositional purity, dopant distribution, and structural perfection on microscopic as well as macroscopic scales are important requirements. The exploratory use of magnetic fields to suppress convection effects in Czochralski crystal growth is addressed. The growth of large crystals in space appears impractical at present; however, efforts to improve substrate quality could benefit from the experience gained in smaller-scale growth experiments conducted in the zero-gravity environment of space.
Porter, William; Gallagher, Sean; Torma-Krajewski, Janet
2010-05-01
Hand scaling is a physically demanding task responsible for numerous overexertion injuries in underground mining. Scaling requires the miner to use a long pry bar to remove loose rock, reducing the likelihood of rock-fall injuries. The experiments described in this article simulated "rib" scaling (scaling a mine wall) from an elevated bucket to examine force generation and electromyographic responses using two types of scaling bars (steel and fiberglass-reinforced aluminum) at five target heights ranging from floor level to 176 cm. Ten male and six female subjects were tested in separate experiments. Peak and average force applied at the scaling bar tip and normalized electromyography (EMG) of the left and right pairs of the deltoid and erector spinae muscles were obtained. Work height significantly affected peak prying force during scaling activities, with the highest force capacity at the lower levels. Bar type did not affect force generation; however, use of the lighter fiberglass bar required significantly more muscle activity to achieve the same force. Results of these studies suggest that miners scale points on the rock face that are below their knees, and reposition the bucket as often as necessary to do so. Published by Elsevier Ltd.
Complex Plasmas under free fall conditions aboard the International Space Station
NASA Astrophysics Data System (ADS)
Konopka, Uwe; Thomas, Edward, Jr.; Funk, Dylan; Doyle, Brandon; Williams, Jeremiah; Knapek, Christina; Thomas, Hubertus
2017-10-01
Complex plasmas are dynamically dominated by massive, highly negatively charged, micron-sized particles. They are usually strongly coupled and as a result can show fluid-like behavior or undergo phase transitions to form crystalline structures. The dynamical time scale of these systems is easily accessible in experiments because of the relatively high mass/inertia of the particles. However, the high mass also leads to sedimentation effects and as a result prevents the conduct of large-scale, fully three-dimensional experiments that are necessary to utilize complex plasmas as model systems in the transition to continuous media. To reduce sedimentation influences it becomes necessary to perform experiments in a free-fall ("microgravity") environment, such as the ISS-based experiment facility "Plasma-Kristall-4" ("PK-4"). In our paper we will present our recently started research activities to investigate the basic properties of complex plasmas by utilizing the PK-4 experiment facility aboard the ISS. We further give an overview of developments towards the next-generation experiment facility "Ekoplasma" (formerly named "PlasmaLab") and discuss potential additional small-scale space-based experiment scenarios. This work was supported by JPL/NASA (JPL-RSA 1571699), the US Dept. of Energy (DE-SC0016330), and the NSF (PHY-1613087).
Improving Design Efficiency for Large-Scale Heterogeneous Circuits
NASA Astrophysics Data System (ADS)
Gregerson, Anthony
Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment.
Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
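The min-cut objective behind the partitioning work above (fewer inter-chip connections) can be illustrated with a deliberately simple sketch. This is not the Multi-Personality or Information-Aware algorithm from the abstract; it is a toy greedy bisection over an invented six-node netlist, with node names and the swap heuristic chosen purely for illustration:

```python
def cut_size(adj, part_a):
    """Count undirected edges crossing the two-way partition (inter-chip wires)."""
    return sum(1 for u in adj for v in adj[u]
               if u < v and ((u in part_a) != (v in part_a)))

def greedy_bipartition(adj):
    """Balanced bisection by greedy node swaps that strictly reduce the cut."""
    nodes = sorted(adj)
    part_a = set(nodes[:len(nodes) // 2])
    best = cut_size(adj, part_a)
    improved = True
    while improved:
        improved = False
        for u in sorted(part_a):
            for v in sorted(set(nodes) - part_a):
                trial = (part_a - {u}) | {v}
                c = cut_size(adj, trial)
                if c < best:
                    part_a, best, improved = trial, c, True
                    break  # restart the scan after an accepted swap
            if improved:
                break
    return part_a, best

# Invented netlist: two tightly coupled triples {a,c,e} and {b,d,f} plus one bridge wire a-b
adj = {
    'a': {'b', 'c', 'e'}, 'b': {'a', 'd', 'f'},
    'c': {'a', 'e'},      'd': {'b', 'f'},
    'e': {'a', 'c'},      'f': {'b', 'd'},
}
part, cut = greedy_bipartition(adj)
```

Starting from the naive alphabetical split (cut of 4), the greedy swaps recover the natural modules with a single crossing wire, the flavor of result that real partitioners pursue at much larger scale.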
Los Alamos Explosives Performance Key to Stockpile Stewardship
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dattelbaum, Dana
2014-11-03
As the U.S. nuclear deterrent ages, one essential factor in making sure that the weapons will continue to perform as designed is understanding the fundamental properties of the high explosives that are part of a nuclear weapons system. As nuclear weapons go through life extension programs, some changes may be advantageous, particularly through the addition of what are known as "insensitive" high explosives that are much less likely to accidentally detonate than the already very safe "conventional" high explosives used in most weapons. At Los Alamos National Laboratory, explosives research includes a wide variety of both large- and small-scale experiments, including small contained detonations, gas and powder gun firings, larger outdoor detonations, large-scale hydrodynamic tests, and, at the Nevada National Security Site, underground subcritical experiments.
Thatcher, T L; Wilson, D J; Wood, E E; Craig, M J; Sextro, R G
2004-08-01
Scale modeling is a useful tool for analyzing complex indoor spaces. Scale model experiments can reduce experimental costs, improve control of flow and temperature conditions, and provide a practical method for pretesting full-scale system modifications. However, changes in physical scale and working fluid (air or water) can complicate interpretation of the equivalent effects in the full-scale structure. This paper presents a detailed scaling analysis of a water tank experiment designed to model a large indoor space, and experimental results obtained with this model to assess the influence of furniture and people on the pollutant concentration field at breathing height. Theoretical calculations are derived for predicting the effects of losses in molecular diffusion, small-scale eddies, turbulent kinetic energy, and turbulent mass diffusivity in a scale model, even without Reynolds number matching. Pollutant dispersion experiments were performed in a water-filled 30:1 scale model of a large room, using uranine dye injected continuously from a small point source. Pollutant concentrations were measured in a plane, using laser-induced fluorescence techniques, for three interior configurations: unobstructed, table-like obstructions, and table-like and figure-like obstructions. Concentrations within the measurement plane varied by more than an order of magnitude, even after the concentration field was fully developed. Objects in the model interior had a significant effect on both the concentration field and fluctuation intensity in the measurement plane. Practical implications: This scale model study demonstrates both the utility of scale models for investigating dispersion in indoor environments and the significant impact of turbulence created by furnishings and people on pollutant transport from floor-level sources. In a room with no furniture or occupants, the average concentration can vary by about a factor of 3 across the room.
Adding furniture and occupants can increase this spatial variation by another factor of 3.
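The trade-off analyzed above, running a full-scale air flow in a 30:1 water model, stems from Reynolds-number similarity, Re = U·L/ν. The sketch below makes the scaling concrete using textbook kinematic viscosities (air ≈ 1.5e-5 m²/s, water ≈ 1.0e-6 m²/s at room temperature) and an illustrative full-scale velocity; these numbers are assumptions for illustration, not values from the paper:

```python
def matched_model_velocity(u_full, scale, nu_full, nu_model):
    """Model velocity preserving Reynolds number Re = U*L/nu when the model
    is geometrically smaller by the given scale factor (L_model = L_full/scale)."""
    return u_full * scale * (nu_model / nu_full)

# Illustrative: a 0.1 m/s room airflow reproduced in a 30:1 water model.
# The 30x length reduction is largely offset by water's ~15x lower viscosity,
# so the required model velocity stays modest (0.2 m/s).
u_model = matched_model_velocity(0.1, scale=30, nu_full=1.5e-5, nu_model=1.0e-6)
```

This is one reason water tanks are attractive for indoor-airflow studies: exact or near Reynolds matching is often feasible at bench-top speeds, and the paper's scaling analysis addresses the residual effects when matching is relaxed.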
Multi-scale modeling in cell biology
Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick
2009-01-01
Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems, for instance the scale of intracellular molecular interactions to the scale of cellular behavior and beyond, to the behavior of cell populations. Computational modeling efforts that aim to explore such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques because of the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808
NASA Astrophysics Data System (ADS)
Khatri, Kshitij; Pu, Yi; Klein, Joshua A.; Wei, Juan; Costello, Catherine E.; Lin, Cheng; Zaia, Joseph
2018-04-01
Analysis of singly glycosylated peptides has evolved to a point where large-scale LC-MS analyses can be performed at almost the same scale as proteomics experiments. While collisionally activated dissociation (CAD) remains the mainstay of bottom-up analyses, it performs poorly for the middle-down analysis of multiply glycosylated peptides. With improvements in instrumentation, electron-activated dissociation (ExD) modes are becoming increasingly prevalent for proteomics experiments and for the analysis of fragile modifications such as glycosylation. While these methods have been applied for glycopeptide analysis in isolated studies, an organized effort to compare their efficiencies, particularly for analysis of multiply glycosylated peptides (termed here middle-down glycoproteomics), has not been made. We therefore compared the performance of different ExD modes for middle-down glycopeptide analyses. We identified key features among the different dissociation modes and show that increased electron energy and supplemental activation provide the most useful data for middle-down glycopeptide analysis.
Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Wenbin
2014-08-29
This report documents the work performed by General Motors (GM) under Cooperative Agreement No. DE-EE0000470, “Investigation of Micro- and Macro-Scale Transport Processes for Improved Fuel Cell Performance,” in collaboration with Penn State University (PSU), the University of Tennessee Knoxville (UTK), Rochester Institute of Technology (RIT), and the University of Rochester (UR) via subcontracts. The overall objectives of the project are to investigate and synthesize fundamental understanding of transport phenomena at both the macro- and micro-scales for the development of a down-the-channel model that accounts for all transport domains in a broad operating space. GM as the prime contractor focused on cell-level experiments and modeling, and the universities as subcontractors worked toward fundamental understanding of each component and associated interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Rene Gerardo; Hutchinson, Jesson D.; Mcclure, Patrick Ray
2015-08-20
The intent of the integral experiment request IER 299 (called KiloPower by NASA) is to assemble and evaluate the operational performance of a compact reactor configuration that closely resembles the flight unit to be used by NASA to execute a deep space exploration mission. The reactor design will include heat pipes coupled to Stirling engines to demonstrate how one can generate electricity when extracting energy from a “nuclear generated” heat source. This series of experiments is a larger-scale follow-up to the DUFF series of experiments [1, 2] that were performed using the Flat-Top assembly.
Accelerating research into bio-based FDCA-polyesters by using small scale parallel film reactors.
Gruter, Gert-Jan M; Sipos, Laszlo; Adrianus Dam, Matheus
2012-02-01
High-throughput experimentation is well established today as a tool in early-stage catalyst development and in catalyst and process scale-up. One of the more challenging areas of catalytic research is polymer catalysis. The main difference from most non-polymer catalytic conversions is that the product is not a well-defined molecule, and catalytic performance cannot easily be expressed in terms of catalyst activity and selectivity alone. In polymerization reactions, the polymer chains formed can have various lengths (resulting in a molecular weight distribution rather than a defined molecular weight), different compositions (when random or block co-polymers are produced), cross-linking (often significantly affecting physical properties), different end groups (often affecting subsequent processing steps), and several other variations. In addition, for polyolefins, mass and heat transfer, oxygen and moisture sensitivity, stereoregularity, and many other intrinsic features make relevant high-throughput screening in this field an incredible challenge. For polycondensation reactions performed in the melt, the viscosity often becomes high at modest molecular weights, which greatly influences mass transfer of the condensation product (often water or methanol). When reactions become mass-transfer limited, comparison of catalyst performance is often no longer relevant. This does not mean, however, that relevant experiments for these application areas cannot be performed on a small scale: relevant catalyst-screening experiments for polycondensation reactions can be performed in very efficient small-scale parallel equipment. Transesterification and polycondensation, as well as post-condensation through solid-stating, have been developed in parallel equipment. Next to polymer synthesis, polymer characterization also needs to be accelerated, without concessions to quality, in order to draw relevant conclusions.
The Geophysical Fluid Flow Cell Experiment
NASA Technical Reports Server (NTRS)
Hart, J. E.; Ohlsen, D.; Kittleman, S.; Borhani, N.; Leslie, F.; Miller, T.
1999-01-01
The Geophysical Fluid Flow Cell (GFFC) experiment performed visualizations of thermal convection in a rotating, differentially heated spherical shell of fluid. In these experiments, dielectric polarization forces are used to generate a radially directed buoyancy force. This enables the laboratory simulation of a number of geophysically and astrophysically important situations in which sphericity and rotation both impose strong constraints on global-scale fluid motions. During USML-2, a large set of experiments with spherically symmetric heating was carried out. These enabled the determination of critical points for the transition to various forms of nonaxisymmetric convection and, for highly turbulent flows, the transition latitudes separating the different modes of motion. This paper presents a first analysis of these experiments as well as data on the general performance of the instrument during the USML-2 flight.
The impact of climate change measured at relevant spatial scales: new hope for tropical lizards.
Logan, Michael L; Huynh, Ryan K; Precious, Rachel A; Calsbeek, Ryan G
2013-10-01
Much attention has been given to recent predictions that widespread extinctions of tropical ectotherms, and tropical forest lizards in particular, will result from anthropogenic climate change. Most of these predictions, however, are based on environmental temperature data measured at a maximum resolution of 1 km², whereas individuals of most species experience thermal variation on a much finer scale. To address this disconnect, we combined thermal performance curves for five populations of Anolis lizard from the Bay Islands of Honduras with high-resolution temperature distributions generated from physical models. Previous research has suggested that open-habitat species are likely to invade forest habitat and drive forest species to extinction. We test this hypothesis, and compare the vulnerabilities of closely related, but allopatric, forest species. Our data suggest that the open-habitat populations we studied will not invade forest habitat and may actually benefit from predicted warming for many decades. Conversely, one of the forest species we studied should experience reduced activity time as a result of warming, while two others are unlikely to experience a significant decline in performance. Our results suggest that global-scale predictions generated using low-resolution temperature data may overestimate the vulnerability of many tropical ectotherms to climate change. © 2013 John Wiley & Sons Ltd.
Bioreactor design studies for a hydrogen-producing bacterium.
Wolfrum, Edward J; Watt, Andrew S
2002-01-01
Carbon monoxide (CO) can be metabolized by a number of microorganisms along with water to produce hydrogen (H2) and carbon dioxide. National Renewable Energy Laboratory researchers have isolated a number of bacteria that perform this so-called water-gas shift reaction at ambient temperatures. We performed experiments to measure the rate of CO conversion and H2 production in a trickle-bed reactor (TBR). The liquid recirculation rate and the reactor support material both affected the mass transfer coefficient, which controls the overall performance of the reactor. A simple reactor model taken from the literature was used to quantitatively compare the performance of the TBR geometry at two different size scales. Good agreement between the two reactor scales was obtained.
Improving resolution of dynamic communities in human brain networks through targeted node removal
Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.
2017-01-01
Current approaches to dynamic community detection in complex networks can fail to identify multi-scale community structure, or to resolve key features of community dynamics. We propose a targeted node removal technique to improve the resolution of community detection. Using synthetic oscillator networks with well-defined “ground truth” communities, we quantify the community detection performance of a common modularity maximization algorithm. We show that the performance of the algorithm on communities of a given size deteriorates when these communities are embedded in multi-scale networks with communities of different sizes, compared to the performance in a single-scale network. We demonstrate that targeted node removal during community detection improves performance on multi-scale networks, particularly when removing the most functionally cohesive nodes. Applying this approach to network neuroscience, we compare dynamic functional brain networks derived from fMRI data taken during both repetitive single-task and varied multi-task experiments. After the removal of regions in visual cortex, the most coherent functional brain area during the tasks, community detection is better able to resolve known functional brain systems into communities. In addition, node removal enables the algorithm to distinguish clear differences in brain network dynamics between these experiments, revealing task-switching behavior that was not identified with the visual regions present in the network. These results indicate that targeted node removal can improve spatial and temporal resolution in community detection, and they demonstrate a promising approach for comparison of network dynamics between neuroscientific data sets with different resolution parameters. PMID:29261662
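The effect of targeted node removal can be mimicked with a deliberately simplified stand-in. In the toy graph below, a single high-degree hub (playing the role of the highly coherent visual regions) fuses two otherwise clear communities, and deleting it before detection lets even a plain connected-components pass resolve them. The graph and the components-based "detector" are illustrative assumptions, not the modularity-maximization pipeline used in the study:

```python
def communities_after_removal(adj, remove=()):
    """Connected components of the graph with the targeted nodes deleted
    (a minimal stand-in for a real community-detection algorithm)."""
    nodes = set(adj) - set(remove)
    seen, comps = set(), []
    for start in sorted(nodes):
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(m for m in adj[n] if m in nodes)
        seen |= comp
        comps.append(comp)
    return comps

# Two tight clusters {1,2,3} and {4,5,6} joined only through hub node 0,
# which connects to everything (the "most functionally cohesive" node).
adj = {
    0: {1, 2, 3, 4, 5, 6},
    1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2},
    4: {0, 5, 6}, 5: {0, 4, 6}, 6: {0, 4, 5},
}
merged = communities_after_removal(adj)                # hub present: one blob
resolved = communities_after_removal(adj, remove={0})  # hub removed: two communities
```

With the hub present everything collapses into a single community; removing it recovers the two underlying groups, the same qualitative gain in resolution the abstract reports for multi-scale brain networks.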
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Dantzig, Jonathan A.; Goldenfeld, Nigel
2001-01-01
Modeling solidification microstructures has become an area of intense study in recent years. The properties of large scale cast products, ranging from automobile engine blocks to aircraft components and other industrial applications, are strongly dependent on the physics that occur at the mesoscopic and microscopic length scales during solidification. The predominant morphology found in solidification microstructures is the dendrite, a tree-like pattern of solid around which solidification proceeds. The microscopic properties of cast products are determined by the length scales of these dendrites, and their associated segregation profiles. For this reason understanding the mechanisms for pattern selection in dendritic growth has attracted a great deal of interest from the experimental and theoretical communities. In particular, a great deal of research has been undertaken to understand such issues as dendrite morphology, shape and growth speed. Experiments on dendrite evolution in pure materials by Glicksman and coworkers on succinonitrile (SCN), and more recently pivalic acid (PVA), as well as other transparent analogs of metals, have provided tests of theories for dendritic growth, and have stimulated considerable theoretical progress. These experiments have clearly demonstrated that in certain parameter ranges the physics of the dendrite tip can be characterized by a steady value for the dendrite tip velocity, radius of curvature and shape. Away from the tip, the time-dependent dendrite exhibits a characteristic sidebranching as it propagates, which is not yet well understood. These experiments are performed by observing individual dendrites growing into an undercooled melt. The experiments are characterized by the dimensionless undercooling. Most experiments are performed at low undercooling.
A second generation experiment in fault-tolerant software
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
Information was collected on the efficacy of fault-tolerant software by conducting two large-scale controlled experiments. In the first, an empirical study of multi-version software (MVS) was conducted. The second was an empirical evaluation of self-testing as a method of error detection (STED). The purpose of the MVS experiment was to obtain empirical measurements of the performance of multi-version systems. Twenty versions of a program were prepared at four different sites under reasonably realistic development conditions from the same specifications. The purpose of the STED experiment was to obtain empirical measurements of the performance of assertions in error detection. Eight versions of a program were modified to include assertions at two different sites under controlled conditions. The overall structure of the testing environment for the MVS experiment and its status are described. Work to date in the STED experiment is also presented.
Experiments on integral length scale control in atmospheric boundary layer wind tunnel
NASA Astrophysics Data System (ADS)
Varshney, Kapil; Poddar, Kamal
2011-11-01
Accurate prediction of turbulent characteristics in the atmospheric boundary layer (ABL) depends on understanding the effects of surface roughness on the spatial distribution of velocity, turbulence intensity, and turbulence length scales. Simulations of ABL characteristics have been performed in a wind tunnel with a short test section to determine the appropriate length-scale factor for modeling, which ensures correct aeroelastic behavior of structural models for non-aerodynamic applications. The ABL characteristics were simulated using various configurations of passive devices such as vortex generators, air barriers, and a slot in the test-section floor that extended into the contraction cone. Mean velocity and velocity fluctuations were measured using a hot-wire anemometry system. Mean velocity, turbulence intensity, turbulence scale, and the power spectral density of velocity fluctuations were obtained from the experiments for various configurations of the passive devices. It is shown that the integral length scale factor can be controlled using various combinations of the passive devices.
Flourishing-at-Work: The Role of Positive Organizational Practices.
Redelinghuys, Kleinjan; Rothmann, Sebastiaan; Botha, Elrie
2018-01-01
The first aim of the study was to investigate the effects of flourishing at work (as measured by the Flourishing-at-Work Scale-Short Form) on intention to leave, performance, and organizational citizenship behavior. The second aim was to determine the prevalence of workplace flourishing and to examine differences in the perceived flourishing levels of teachers based on the positive practices they experience in their organization. A sample of 258 secondary school educators in the Gauteng province of South Africa was used in the cross-sectional design. The Flourishing-at-Work Scale-Short Form, Turnover Intention Scale, In-Role Behavior Scale, Organizational Citizenship Behavior Scale, and the Positive Practices Questionnaire were administered. The results showed acceptable psychometric properties for the short scale which measures flourishing. Workplace flourishing negatively predicted intention to leave, while positively predicting in-role performance and organizational citizenship behavior. A total of 44.19% of the population flourished, while 49.22% were moderately mentally healthy and 6.59% languished. Positive organizational practices were associated with flourishing at work.
Experimentally Modeling Black and White Hole Event Horizons via Fluid Flow
NASA Astrophysics Data System (ADS)
Manheim, Marc E.; Lindner, John F.; Manz, Niklas
We will present a scaled-down experiment that hydrodynamically models the interaction between electromagnetic waves and black/white holes. It has been mathematically proven that gravity waves in water can behave analogously to electromagnetic waves traveling through spacetime. In this experiment, gravity waves are generated in a water tank and propagate against an opposing flow of varying rate. We observe a noticeable change in the wave's spreading behavior as it travels through the simulated horizon, with wave speeds decreasing, down to standing waves, depending on the opposing flow rate. Such an experiment has already been performed in a 97.2-cubic-meter tank; we reduced the size significantly to be able to perform the experiment under normal lab conditions.
NASA Astrophysics Data System (ADS)
Ostermayr, T. M.; Gebhard, J.; Haffa, D.; Kiefer, D.; Kreuzer, C.; Allinger, K.; Bömer, C.; Braenzel, J.; Schnürer, M.; Cermak, I.; Schreiber, J.; Hilz, P.
2018-01-01
We report on a Paul-trap system with large access angles that allows positioning of fully isolated micrometer-scale particles with micrometer precision as targets in high-intensity laser-plasma interactions. This paper summarizes theoretical and experimental concepts of the apparatus as well as supporting measurements that were performed for the trapping process of single particles.
NASA Astrophysics Data System (ADS)
Lo Iudice, N.; Bianco, D.; Andreozzi, F.; Porrino, A.; Knapp, F.
2012-10-01
Large scale shell model calculations based on a new diagonalization algorithm are performed in order to investigate the mixed symmetry states in chains of nuclei in the proximity of N=82. The resulting spectra and transitions are in agreement with the experiments and consistent with the scheme provided by the interacting boson model.
The Zero Boil-Off Tank Experiment Contributions to the Development of Cryogenic Fluid Management
NASA Technical Reports Server (NTRS)
Chato, David J.; Kassemi, Mohammad
2015-01-01
The Zero Boil-Off Technology (ZBOT) experiment involves performing a small-scale ISS experiment to study tank pressurization and pressure control in microgravity. The ZBOT experiment consists of a vacuum-jacketed test tank filled with an inert fluorocarbon simulant liquid. Heaters and thermoelectric coolers are used in conjunction with an axial-jet mixer flow loop to study a range of thermal conditions within the tank. The objective is to provide a high-quality database of low-gravity fluid motions and thermal transients that will be used to validate computational fluid dynamics (CFD) modeling. The CFD models can then be used in turn to predict behavior in larger systems with cryogens. This paper will discuss the current status of the ZBOT experiment as it approaches flight and installation on the International Space Station, how its findings can be scaled to larger and more ambitious cryogenic fluid management experiments, and ideas for follow-on investigations using ZBOT-like hardware to study other aspects of cryogenic fluid management.
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
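The "intractable matrix functions" referred to above are matrix permanents: the probability of a boson sampling outcome is proportional to |Perm(A)|² for a submatrix A of the interferometer's unitary. A direct evaluation via Ryser's formula, shown below as an illustrative baseline (not the paper's Metropolised independence sampler), makes the exponential scaling concrete:

```python
from itertools import combinations

def permanent(matrix):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n^2).
    The exponential cost is exactly why boson sampling probabilities
    are believed hard to compute classically at scale."""
    n = len(matrix)
    total = 0
    for r in range(1, n + 1):              # sum over non-empty column subsets
        for cols in combinations(range(n), r):
            prod = 1
            for row in matrix:             # product of row sums over the subset
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Sanity check: the permanent of the n x n all-ones matrix is n! (here 3! = 6)
p = permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
```

Unlike the determinant, the permanent has no known polynomial-time algorithm (it is #P-hard), and Ryser's 2^n-term sum is close to the best exact method known, which is the obstacle the classical sampling algorithms in the paper work around.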
Lamellae spatial distribution modulates fracture behavior and toughness of African pangolin scales
Chon, Michael J.; Daly, Matthew; Wang, Bin; ...
2017-06-10
Pangolin scales form a durable armor whose hierarchical structure offers an avenue towards high performance bio-inspired materials design. In this paper, the fracture resistance of African pangolin scales is examined using single edge crack three-point bend fracture testing in order to understand toughening mechanisms arising from the structures of natural mammalian armors. In these mechanical tests, the influence of material orientation and hydration level are examined. The fracture experiments reveal an exceptional fracture resistance due to crack deflection induced by the internal spatial orientation of lamellae. An order of magnitude increase in the measured fracture resistance due to scale hydration, reaching up to ~25 kJ/m², was measured. Post-mortem analysis of the fracture samples was performed using a combination of optical and electron microscopy, and X-ray computerized tomography. Interestingly, the crack profile morphologies are observed to follow paths outlined by the keratinous lamellae structure of the pangolin scale. Most notably, the inherent structure of pangolin scales offers a pathway for crack deflection and fracture toughening. Finally, the results of this study are expected to be useful as design principles for high performance biomimetic applications.
Temperature Evolution During Plane Strain Compression Of Tertiary Oxide Scale On Steel
NASA Astrophysics Data System (ADS)
Suarez, L.; Vanden Eynde, X.; Lamberigts, M.; Houbaert, Y.
2007-04-01
An oxide scale layer always forms at the steel surface during hot rolling; this layer separates the work roll from the metal substrate. Understanding the deformation behaviour and mechanical properties of the scale is of great interest because it affects the frictional conditions during hot rolling and the heat-transfer behaviour at the strip-roll interface. A thin wustite scale layer (<20 μm) was created under controlled conditions in an original laboratory device positioned in a compression testing machine to investigate plane strain compression. Oxidation tests were performed on an ULC steel grade. After oxide growth at 1050°C, plane strain compression (PSC) was performed immediately to simulate the hot rolling process. PSC experiments were performed at a deformation temperature of 1050°C, with reduction ratios from 5 to 70% and strain rates of 10 s-1, under controlled gas atmospheres. The results show that wustite is clearly ductile at 1050°C. Even after deformation, the oxide layers exhibit good adhesion to the substrate and homogeneity over the thickness. The tool/sample temperature difference seems to be the reason for the unexpected ductile behaviour of the scale layer.
Periodontal Examination Profiles and Treatment Approaches of a Group of Turkish General Dentists.
Ercan, Esra; Uysal, Cihan; Uzun, Cansu; Yılmaz, Mümün
2015-01-01
To investigate the periodontal examination profiles and treatment approaches of a group of Turkish general dentists. 457 general dentists were called, and 173 agreed to participate in the study. The questionnaire comprised 10 questions covering gender, years of experience, periodontal probing during examination, oral hygiene motivation (whether performed, yes/no; method used: verbal explanation or visual materials), periodontal treatments performed (supragingival scaling, subgingival scaling and root planing, or surgery), and knowledge about the diagnosis and treatment of aggressive and chronic periodontitis. The participants were grouped according to their years of clinical experience: group 1, 0 to 10 years of clinical practice (n = 58); group 2, 10 to 20 years (n = 68); group 3, >20 years (n = 47). The periodontal probing performance percentages were 70.69%, 26.47% and 40.43% in groups 1, 2 and 3, respectively. The oral hygiene motivation rate was high in the first 10 years of clinical practice (60.3%), and 72.4% of the dentists in group 1 used visual materials in addition to verbal explanation during oral hygiene motivation. 72.25% of the general dentists performed supragingival scaling. Knowledge of the diagnosis and treatment of chronic periodontitis was present in >90% of the dentists surveyed; in contrast, >50% of the general dentists were not knowledgeable about the diagnosis and treatment of aggressive periodontitis. Periodontal probing is the gold standard for periodontal diagnosis, but as dentists' clinical experience increases, the frequency of its performance decreases. Knowledge of the diagnosis and treatment of chronic periodontitis is greater than that of aggressive periodontitis. Postgraduate education in periodontology is important to keep general dentists up to date on current periodontal practice and to improve awareness of periodontal diseases.
Reactor antineutrino shoulder explained by energy scale nonlinearities?
NASA Astrophysics Data System (ADS)
Mention, G.; Vivier, M.; Gaffiot, J.; Lasserre, T.; Letourneau, A.; Materna, T.
2017-10-01
The Daya Bay, Double Chooz and RENO experiments recently observed a significant distortion in their detected reactor antineutrino spectra, at odds with current predictions. Although this result suggests revisiting current reactor antineutrino spectrum modeling, an alternative scenario which could potentially explain the anomaly is explored in this letter. Using an appropriate statistical method, a study of the Daya Bay experiment's energy scale is performed. It is shown that an O(1%) deviation of the energy scale reproduces the distortion observed in the Daya Bay spectrum while remaining in agreement with the γ calibration data and the measured 12B spectrum, within the quoted calibration uncertainties. Potential origins of such a deviation, which challenge the energy calibration of these detectors, are finally discussed.
ERIC Educational Resources Information Center
Maretzki, A.; Shimabukuro, S.
Nutrition curriculum design research was undertaken to address the issue of linkage between school food experiences and home food experiences of elementary school children in Hawaii. One hundred and forty-four parents judged the relative importance of seventeen food-related activities. The sample consisted of Asian, Caucasian, and Polynesian…
Agents Overcoming Resource Independent Scaling Threats (AORIST)
2004-10-01
Excerpt (Table 8: Tilted Consumer Preferences Experiment; m=8, N=61, G=2, C=60; mean over 13 experiments): … non-uniform consumer preferences create a new potential for sub-optimal system performance and thus require an additional adaptive …; the distribution of the capacities across the supplier population must match the non-uniform consumer preferences.
Simulation of pump-turbine prototype fast mode transition for grid stability support
NASA Astrophysics Data System (ADS)
Nicolet, C.; Braun, O.; Ruchonnet, N.; Hell, J.; Béguin, A.; Avellan, F.
2017-04-01
The paper explores the additional services that a Full Size Frequency Converter (FSFC) solution can provide for an existing 2x210 MW pumped storage power plant, for which conversion from fixed speed to variable speed is investigated with a focus on fast mode transition. First, reduced-scale model experiments on fast transitions of a Francis pump-turbine, performed at the ANDRITZ HYDRO hydraulic laboratory in Linz, Austria, are presented. The tests consist of linear speed transitions from pump to turbine and vice versa, performed at constant guide vane opening. The existing pumped storage power plant, with pump-turbines quasi-homologous to the reduced-scale model, is then modelled using the simulation software SIMSEN, considering the reservoirs, penstocks, the two Francis pump-turbines, the two downstream surge tanks, and the tailrace tunnel. For the electrical part, an FSFC configuration is considered with a detailed electrical model. The transitions from turbine to pump and vice versa are simulated, and similarities between prototype simulation results and reduced-scale model experiments are highlighted.
NASA Astrophysics Data System (ADS)
Tawfik, M. S.; Karpyn, Z.
2017-12-01
Carbonate reservoirs host more than half of the remaining oil reserves worldwide. Due to their complex pore structure and intermediate- to oil-wet nature, it is challenging to produce the remaining oil from these formations. For two decades, chemically tuned waterflooding (CTWF) has attracted the attention of researchers. Experimental, numerical, and field studies suggest that changes in the ion composition of injected brine can increase oil recovery in carbonate reservoirs via wettability alteration. However, previous studies explaining the improvement in oil recovery by wettability alteration deduce wettability from indirect measurements, including sessile-drop contact angle measurements on polished rocks, relative permeability, and chromatographic separation of SCN- and potential determining ions (PDIs). The CTWF literature offers no direct measurement of wettability alteration at the pore scale. This study proposes a direct pore-scale measurement of changes in interfacial curvatures before and after CTWF. Micro-coreflood experiments are performed to investigate the effect of injection brine salinity, ion composition and temperature on rock wettability at the pore scale. X-ray micro-CT scanning is used to obtain 3D image sets from which in-situ contact angle distributions are calculated. The study also aims to correlate the magnitude of the improvement in oil recovery at the macro-scale with the corresponding contact angle distribution at the pore scale under different experimental conditions. Hence, macro-scale coreflood experiments are performed using the same conditions as the micro-corefloods. Macro-scale coreflood experiments have shown that brines with higher concentrations of Ca2+, Mg2+ and SO42- ions give higher recoveries than standard seawater, which translates to wettability alteration toward a more intermediate-wet state.
This study enhances the understanding of the pore-scale physico-chemical mechanisms controlling wettability alteration via CTWF, which helps tune existing CTWF models, and therefore results in more well-informed candidate reservoir selection and the development of a workflow to determine the optimum injection brine properties for a given crude oil-brine-rock system.
Asteroid entry in Venusian atmosphere: Pressure and density fields effect on crater formation
NASA Technical Reports Server (NTRS)
Schmidt, Robert
1995-01-01
The objectives are to look at time scales of overpressure compared to cratering and to determine: what are the transient pressure and density due to atmospheric entry; do shock waves evacuate ambient gas; do transient atmospheric disturbances 'settle down' during cratering; can the pressure/density field be approximated as quasi-static; how does disturbance scale with impactor size; and what is the role of atmospheric thickness. The general approach is to perform inexpensive exploratory calculations, perform experiments to validate code and observe crater growth, and to follow up with more realistic coupling calculations. This viewgraph presentation presents progress made with the objective to obtain useful scaling relationships for crater formation when atmospheric effects are important.
Munro, Sarah A; Lund, Steven P; Pine, P Scott; Binder, Hans; Clevert, Djork-Arné; Conesa, Ana; Dopazo, Joaquin; Fasold, Mario; Hochreiter, Sepp; Hong, Huixiao; Jafari, Nadereh; Kreil, David P; Łabaj, Paweł P; Li, Sheng; Liao, Yang; Lin, Simon M; Meehan, Joseph; Mason, Christopher E; Santoyo-Lopez, Javier; Setterquist, Robert A; Shi, Leming; Shi, Wei; Smyth, Gordon K; Stralis-Pavese, Nancy; Su, Zhenqiang; Tong, Weida; Wang, Charles; Wang, Jian; Xu, Joshua; Ye, Zhan; Yang, Yong; Yu, Ying; Salit, Marc
2014-09-25
There is a critical need for standard approaches to assess, report and compare the technical performance of genome-scale differential gene expression experiments. Here we assess technical performance with a proposed standard 'dashboard' of metrics derived from analysis of external spike-in RNA control ratio mixtures. These control ratio mixtures with defined abundance ratios enable assessment of diagnostic performance of differentially expressed transcript lists, limit of detection of ratio (LODR) estimates and expression ratio variability and measurement bias. The performance metrics suite is applicable to analysis of a typical experiment, and here we also apply these metrics to evaluate technical performance among laboratories. An interlaboratory study using identical samples shared among 12 laboratories with three different measurement processes demonstrates generally consistent diagnostic power across 11 laboratories. Ratio measurement variability and bias are also comparable among laboratories for the same measurement process. We observe different biases for measurement processes using different mRNA-enrichment protocols.
NASA Astrophysics Data System (ADS)
Hu, R.; Wan, J.; Chen, Y.
2016-12-01
Wettability is a factor controlling the fluid-fluid displacement pattern in porous media and significantly affects the flow and transport of supercritical (sc) CO2 in geologic carbon sequestration. Using a high-pressure micromodel-microscopy system, we performed drainage experiments of scCO2 invasion into brine-saturated water-wet and intermediate-wet micromodels; we visualized the scCO2 invasion morphology at pore-scale under reservoir conditions. We also performed pore-scale numerical simulations of the Navier-Stokes equations to obtain 3D details of fluid-fluid displacement processes. Simulation results are qualitatively consistent with the experiments, showing wider scCO2 fingering, higher percentage of scCO2 and more compact displacement pattern in intermediate-wet micromodel. Through quantitative analysis based on pore-scale simulation, we found that the reduced wettability reduces the displacement front velocity, promotes the pore-filling events in the longitudinal direction, delays the breakthrough time of invading fluid, and then increases the displacement efficiency. Simulated results also show that the fluid-fluid interface area follows a unified power-law relation with scCO2 saturation, and show smaller interface area in intermediate-wet case which suppresses the mass transfer between the phases. These pore-scale results provide insights for the wettability effects on CO2 - brine immiscible displacement in geologic carbon sequestration.
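The reported power-law relation between fluid-fluid interface area and scCO2 saturation can be recovered from simulation output by linear regression in log-log space. A minimal sketch on synthetic data (the coefficients below are invented for illustration, not the study's fitted values):

```python
import numpy as np

def fit_power_law(S, A):
    """Fit A = a * S**b by least squares in log-log space; returns (a, b).

    S: saturations (0 < S <= 1), A: interface areas; both positive arrays.
    """
    b, log_a = np.polyfit(np.log(S), np.log(A), 1)
    return np.exp(log_a), b
```

On noiseless synthetic data the prefactor and exponent are recovered exactly; with simulation noise, the log-log fit weights relative (not absolute) errors, which is usually what one wants for a power law.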
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buljubasich, Lisandro; Dente, Axel D.; Levstein, Patricia R.
2015-10-28
We performed Loschmidt echo nuclear magnetic resonance experiments to study decoherence under a scaled dipolar Hamiltonian by means of a symmetrical time-reversal pulse sequence denominated the Proportionally Refocused Loschmidt (PRL) echo. The many-spin system, represented by the protons in polycrystalline adamantane, evolves through two steps of evolution characterized by the secular part of the dipolar Hamiltonian, scaled down by a factor |k| with opposite signs. The scaling factor can be varied continuously from 0 to 1/2, giving access to a range of complexity in the dynamics. The experimental results for the Loschmidt echoes showed a spreading of the decay rates that correlates directly with the scaling factors |k|, giving evidence that the decoherence is partially governed by the coherent dynamics. Average Hamiltonian theory was applied to give insight into the spin dynamics during the pulse sequence. The calculations were performed for every single radio-frequency block, in contrast to the most widely used form. The first order of the average Hamiltonian, numerically computed for an 8-spin system, showed decay rates that progressively decrease as the secular dipolar Hamiltonian becomes weaker. Notably, the first-order Hamiltonian term neglected by conventional calculations yielded an explanation for the ordering of the experimental decoherence rates. However, there is a strong overall decoherence observed in the experiments which is not reflected by the theoretical results. The fact that the non-inverted terms do not account for this effect is a challenging topic. A number of experiments to further explore the relation of the complete Hamiltonian with this dominant decoherence rate are proposed.
NASA Astrophysics Data System (ADS)
Goodman, H.
2017-12-01
This investigation seeks to develop sealant technology that can restore containment to completed wells suffering CO2 gas leakage that is currently untreatable with conventional technologies. Experimentation is performed at the Mont Terri Underground Research Laboratory (MT-URL) in NW Switzerland, which affords investigators an intermediate-scale test site bridging the gap between the laboratory bench and full field-scale conditions. The project focus is the development of CO2 leakage remediation capability using sealant technology. The experimental concept includes design and installation of a field-scale completion package designed to mimic the heating-cooling conditions of well systems that may result in the development of micro-annuli detachments between the casing-cement-formation boundaries (Figure 1). Of particular interest is the testing of novel sealants that can be injected into relatively narrow micro-annuli flow paths of less than 120 microns aperture. Per a special report on CO2 storage submitted to the IPCC [1], active injection wells, along with inactive wells that have been abandoned, are identified as one of the most probable sources of leakage pathways for CO2 escape to the surface. Pressure leakage in injection well and completion architectures often originates from tensile cracking due to temperature cycles, micro-annulus formation by casing contraction (differential casing to cement sheath movement), and cement sheath channel development. This discussion summarizes the experimental capability and sealant testing results. The experiment concludes with overcoring of the entire mock-completion test site to assess sealant performance in 2018. [1] IPCC Special Report on Carbon Dioxide Capture and Storage (September 2005), section 5.7.2 Processes and pathways for release of CO2 from geological storage sites, page 244
Development of a survey instrument to measure patient experience of integrated care.
Walker, Kara Odom; Stewart, Anita L; Grumbach, Kevin
2016-06-01
Healthcare systems are working to move towards more integrated, patient-centered care. This study describes the development and testing of a multidimensional self-report measure of patients' experiences of integrated care. Random-digit-dial telephone survey in 2012 of 317 adults aged 40 years or older in the San Francisco region who had used healthcare at least twice in the past 12 months. One-time cross-sectional survey; psychometric evaluation to confirm dimensions and create multi-item scales. Survey data were analyzed using VARCLUS and confirmatory factor analysis and internal consistency reliability testing. Scales measuring five domains were confirmed: coordination within and between care teams, navigation (arranging appointments and visits), communication between specialist and primary care doctor, and communication between primary care doctor and specialist. Four of these demonstrated excellent internal consistency reliability. Mean scale scores indicated low levels of integration. These scales measuring integrated care capture meaningful domains of patients' experiences of health care. The low levels of care integration reported by patients in the study sample suggest that these types of measures should be considered in ongoing evaluations of health system performance and improvement. Further research should examine whether differences in patient experience of integrated care are associated with differences in the processes and outcomes of care received.
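The internal consistency reliability mentioned above is conventionally quantified with Cronbach's alpha. A minimal sketch of the standard formula, assuming a respondents-by-items score matrix (illustrative only, not the study's data or code):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()  # per-item variances, summed
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the scale total
    return k / (k - 1) * (1 - item_var / total_var)
```

Values near 1 indicate items measuring the same construct; values near 0 indicate unrelated items, which is one reason multi-item scales are confirmed with factor analysis before reliability is reported.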
The effect of wind and currents on gas exchange in an estuarine system
NASA Technical Reports Server (NTRS)
Broecker, W. S.; Ledwell, J. R.; Bopp, R.
1987-01-01
The objectives were to develop a non-volatile tracer for gas exchange experiments in laterally unconfined systems and to study applications of deliberate tracers in limnology and oceanography. Progress was made on both fronts, but development of the non-volatile tracer proved more difficult and labor-intensive than anticipated, so no field experiments using non-volatile tracers have been performed yet. In the search for a suitable non-volatile tracer for an ocean-scale gas exchange experiment, a tracer was discovered which does not have the required sensitivity for a large-scale experiment but is very easy to analyze and is well suited for smaller experiments, such as gas exchange determinations on rivers and streams. Sulfur hexafluoride (SF6) was used successfully as a volatile tracer, along with tritium as a non-volatile tracer, to study gas exchange rates in a primary stream. This is the first gas exchange experiment in which exchange rates were determined on a headwater stream with significant groundwater input along the reach. In conjunction with SF6, Radon-222 measurements were performed on the groundwater and in the stream. The feasibility of using a combination of SF6 and radon to determine groundwater inputs and gas exchange rates in streams with significant groundwater input, without using a non-volatile tracer, is being studied.
ERIC Educational Resources Information Center
Bobbett, Gordon C.; And Others
This study examines the relationships among a variety of secondary/postsecondary experiences and activities and postsecondary students' musical independence (MI). The paper reports on the impact Instrumental Performance Skills (IPSs) have on the students" MI development during private lessons, band rehearsal, and individual practicing. The study…
Flow among Musicians: Measuring Peak Experiences of Student Performers
ERIC Educational Resources Information Center
Sinnamon, Sarah; Moran, Aidan; O'Connell, Michael
2012-01-01
"Flow" is a highly coveted yet elusive state of mind that is characterized by complete absorption in the task at hand as well as by enhanced skilled performance. Unfortunately, because most measures of this construct have been developed in physical activity and sport settings, little is known about the applicability of flow scales to the…
Hossack, Blake R.; Corn, P. Stephen; , Winsor H. Lowe; , Molly A. H. Webb; , Mariah J. Talbott; , Kevin M. Kappenman
2013-01-01
Our experiments with a cold-water species show that population-level performance varies across small geographic scales and is linked to local environmental heterogeneity. This variation could influence the rate and mode of species-level responses to climate change, both by facilitating local persistence in the face of change
Experiment-scale molecular simulation study of liquid crystal thin films
NASA Astrophysics Data System (ADS)
Nguyen, Trung Dac; Carrillo, Jan-Michael Y.; Matheson, Michael A.; Brown, W. Michael
2014-03-01
Supercomputers have now reached a performance level adequate for studying thin films with molecular detail at the relevant scales. By exploiting the power of GPU accelerators on Titan, we have been able to perform simulations of characteristic liquid crystal films that provide remarkable qualitative agreement with experimental images. We have demonstrated that key features of spinodal instability can only be observed with sufficiently large system sizes, which were not accessible with previous simulation studies. Our study emphasizes the capability and significance of petascale simulations in providing molecular-level insights in thin film systems as well as other interfacial phenomena.
Shock compression response of cold-rolled Ni/Al multilayer composites
NASA Astrophysics Data System (ADS)
Specht, Paul E.; Weihs, Timothy P.; Thadhani, Naresh N.
2017-01-01
Uniaxial strain, plate-on-plate impact experiments were performed on cold-rolled Ni/Al multilayer composites and the resulting Hugoniot was determined through time-resolved measurements combined with impedance matching. The experimental Hugoniot agreed with that previously predicted by two dimensional (2D) meso-scale calculations [Specht et al., J. Appl. Phys. 111, 073527 (2012)]. Additional 2D meso-scale simulations were performed using the same computational method as the prior study to reproduce the experimentally measured free surface velocities and stress profiles. These simulations accurately replicated the experimental profiles, providing additional validation for the previous computational work.
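Impedance matching of the kind used above combines the measured impact conditions with known Hugoniots, commonly parameterised by a linear shock velocity-particle velocity fit, Us = c0 + s*up. A minimal sketch, assuming generic linear-fit coefficients rather than the actual Ni/Al composite values:

```python
def hugoniot_pressure(mat, up):
    """Hugoniot pressure P = rho0 * Us * up with a linear Us = c0 + s*up fit.
    Units: rho0 in g/cm^3, velocities in km/s -> P in GPa."""
    rho0, c0, s = mat
    return rho0 * (c0 + s * up) * up

def impedance_match(flyer, sample, v_impact, iters=100):
    """Solve for the particle velocity up transmitted into the sample when a
    flyer at v_impact strikes it: the flyer's reflected Hugoniot must balance
    the sample's Hugoniot at the common interface pressure."""
    rf, cf, sf = flyer
    rs, cs, ss = sample
    f = lambda up: (rf * (cf + sf * (v_impact - up)) * (v_impact - up)
                    - rs * (cs + ss * up) * up)
    lo, hi = 1e-9, v_impact - 1e-9  # f(lo) > 0, f(hi) < 0
    for _ in range(iters):  # bisection on the pressure balance
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

A quick sanity check on any such implementation is the symmetric impact: identical flyer and sample materials must give up = v_impact / 2 regardless of the coefficients.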
NASA Astrophysics Data System (ADS)
Le Touz, N.; Toullier, T.; Dumoulin, J.
2017-05-01
The present study addresses the thermal behaviour of a modified pavement structure intended to prevent icing of its surface in adverse winter conditions and overheating in hot summer conditions. First, a multi-physics model based on the infinite element method was built to predict the evolution of the surface temperature. Laboratory experiments on small specimens were then carried out, with the surface temperature monitored by infrared thermography. The results obtained are analyzed, and the performance of the numerical model for real-scale outdoor applications is discussed. Finally, conclusions and perspectives are proposed.
Large Scale Flutter Data for Design of Rotating Blades Using Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.
2012-01-01
A procedure to compute flutter boundaries of rotating blades is presented, based on a) the Navier-Stokes equations and b) a frequency-domain method compatible with industry practice. The procedure is first validated against a) unsteady loads from a flapping-wing experiment and b) the flutter boundary from a fixed-wing experiment. Large-scale flutter computation is then demonstrated for a rotating blade: a) single job-submission script; b) flutter boundary obtained in 24 hours of wall-clock time with 100 cores; c) linearly scalable with the number of cores. Tested with 1000 cores, producing data for 10 flutter boundaries in 25 hours. Further wall-clock speed-up is possible by performing parallel computations within each case.
Absolute Scale Quantitative Off-Axis Electron Holography at Atomic Resolution
NASA Astrophysics Data System (ADS)
Winkler, Florian; Barthel, Juri; Tavabi, Amir H.; Borghardt, Sven; Kardynal, Beata E.; Dunin-Borkowski, Rafal E.
2018-04-01
An absolute scale match between experiment and simulation in atomic-resolution off-axis electron holography is demonstrated, with unknown experimental parameters determined directly from the recorded electron wave function using an automated numerical algorithm. We show that the local thickness and tilt of a pristine thin WSe2 flake can be measured uniquely, whereas some electron optical aberrations cannot be determined unambiguously for a periodic object. The ability to determine local specimen and imaging parameters directly from electron wave functions is of great importance for quantitative studies of electrostatic potentials in nanoscale materials, in particular when performing in situ experiments and considering that aberrations change over time.
Accelerated testing for studying pavement design and performance (FY 2004) : research summary.
DOT National Transportation Integrated Search
2009-03-01
The thirteenth full-scale Accelerated Pavement Test (APT) experiment at the Civil Infrastructure Laboratory (CISL) of Kansas State University aimed to determine the response and the failure mode of thin concrete overlays.
Windsor, John A; Diener, Scott; Zoha, Farah
2008-06-01
People learn in different ways, and training techniques and technologies should accommodate individual learning needs. This pilot study examines the relationship between learning style, as measured with the Multiple Intelligences Developmental Assessment Scales (MIDAS), laparoscopic surgery experience, and psychomotor skill performance on the MIST VR surgical simulator. Five groups of volunteer subjects were selected: undergraduate tertiary students, medical students, novice surgical trainees, advanced surgical trainees, and experienced laparoscopic surgeons. Each group was administered the MIDAS followed by two simulated surgical tasks on the MIST VR simulator. There was a striking homogeneity of learning styles amongst experienced laparoscopic surgeons. Significant differences in the distribution of primary learning styles were found (P < .01) between subjects with minimal surgical training and those with considerable experience. A bodily-kinesthetic learning style, irrespective of experience, was associated with the best performance of the laparoscopic tasks. This is the first study to highlight the relationship between learning style, psychomotor skill, and laparoscopic surgical experience, with implications for surgeon selection, training, and credentialing.
Brain-Computer Interface Based on Generation of Visual Images
Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander
2011-01-01
This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications and may allow for large-scale EEG experiments in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that utilization of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy similar to that of a more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier. PMID:21695206
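The covariance-based classification the abstract mentions reduces to a simple template idea: estimate a mean channel-covariance matrix per mental state and assign new trials to the nearest template. The sketch below is illustrative only (pure Python, synthetic two-channel data, nearest-template assignment in Frobenius distance rather than the paper's actual Bayesian formulation):

```python
import random

def covariance(trial):
    """Channel-covariance matrix of one trial (list of channels, each a list of samples)."""
    n = len(trial[0])
    means = [sum(ch) / n for ch in trial]
    c = len(trial)
    return [[sum((trial[i][k] - means[i]) * (trial[j][k] - means[j])
                 for k in range(n)) / (n - 1)
             for j in range(c)] for i in range(c)]

def train(trials, labels):
    """One template per mental state: the class-mean covariance matrix."""
    templates = {}
    for cl in sorted(set(labels)):
        mats = [covariance(t) for t, l in zip(trials, labels) if l == cl]
        c = len(mats[0])
        templates[cl] = [[sum(m[i][j] for m in mats) / len(mats)
                          for j in range(c)] for i in range(c)]
    return templates

def classify(trial, templates):
    """Assign the state whose template covariance is nearest (Frobenius distance)."""
    cov = covariance(trial)
    def dist(a, b):
        return sum((a[i][j] - b[i][j]) ** 2
                   for i in range(len(a)) for j in range(len(a)))
    return min(templates, key=lambda cl: dist(cov, templates[cl]))

def make_trial(correlated, n=200):
    """Synthetic two-channel trial; inter-channel coupling stands in for a mental state."""
    a = [random.gauss(0, 1) for _ in range(n)]
    if correlated:
        b = [x + random.gauss(0, 0.3) for x in a]
    else:
        b = [random.gauss(0, 1) for _ in range(n)]
    return [a, b]
```

Real EEG pipelines operate on band-filtered multichannel epochs with a proper likelihood model; the point here is only that second-order statistics alone can separate states.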
Progress of LMJ-relevant implosions experiments on OMEGA
NASA Astrophysics Data System (ADS)
Casner, A.; Philippe, F.; Tassin, V.; Seytor, P.; Monteil, M.-C.; Gauthier, P.; Park, H. S.; Robey, H.; Ross, J.; Amendt, P.; Girard, F.; Villette, B.; Reverdin, C.; Loiseau, P.; Caillaud, T.; Landoas, O.; Li, C. K.; Petrasso, R.; Seguin, F.; Rosenberg, M.; Renaudin, P.
2013-11-01
In preparation for the first ignition attempts on the Laser Mégajoule (LMJ), an experimental program is being pursued on OMEGA to investigate LMJ-relevant hohlraums. First, radiation temperature levels close to 300 eV were recently achieved in reduced-scale hohlraums with modest backscatter losses. Regarding the baseline target design for fusion experiments on LMJ, an extensive experimental database has also been collected for scaled implosion experiments in both empty and gas-filled rugby-shaped hohlraums. We acquired a full picture of hohlraum energetics and implosion dynamics. Not only did the rugby hohlraums show significantly higher x-ray drive energy than the cylindrical hohlraums, but symmetry control by power balance was demonstrated, as well as high-performance D2 implosions enabling the use of a complete suite of neutron diagnostics. Charged-particle diagnostics provide complementary insights into the physics of these x-ray driven implosions. An overview of these results demonstrates our ability to control the key parameters driving the implosion, lending more confidence in extrapolations to ignition-scale targets.
NASA Astrophysics Data System (ADS)
Park, Young Seo; Jang, Yeong Min; Joo, Kyung Kwang
2018-04-01
This paper briefly describes the features of various experimental devices constructed for half-ton synthesis of gadolinium (Gd)-loaded liquid scintillator (GdLS), and includes the performance and detailed chemical and physical results of a 0.5% high-concentration GdLS. Various feasibility studies on the apparatus used for loading Gd into solvents have been carried out. The transmittance, Gd concentration, density, light yield, and moisture content were measured for quality control. We show that, with the help of adequate automated experimental devices and tools, it is possible to perform ton-scale synthesis of GdLS at moderate laboratory scale without difficulty. The synthesized GdLS satisfied the required chemical, optical, and physical properties and various safety requirements. These synthesizing devices can be scaled up for next-generation neutrino experiments requiring several hundred tons.
Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A
2018-06-22
Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
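The L1-regularized propensity score model contrasted with the hdPS can be sketched without any statistics library: a logistic regression fitted by proximal gradient descent, where the soft-thresholding step performs the covariate selection that the hdPS does with a univariate screen. A minimal illustrative version, not the authors' implementation (which relies on large-scale solvers):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_l1_logistic(X, y, lam=0.1, lr=0.1, iters=1500):
    """L1-penalized logistic regression via proximal gradient descent.
    Returns (intercept, weights); the intercept is left unpenalized."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(iters):
        # gradient of the average negative log-likelihood
        gb, gw = 0.0, [0.0] * d
        for xi, yi in zip(X, y):
            err = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) - yi
            gb += err / n
            for j in range(d):
                gw[j] += err * xi[j] / n
        b -= lr * gb
        for j in range(d):
            wj = w[j] - lr * gw[j]
            # soft-thresholding: the proximal operator of the L1 penalty
            w[j] = math.copysign(max(abs(wj) - lr * lam, 0.0), wj)
    return b, w

def propensity(x, b, w):
    """Estimated probability of treatment given covariates x."""
    return sigmoid(b + sum(wj * xj for wj, xj in zip(w, x)))
```

Coordinates whose gradients never exceed the penalty stay exactly at zero, which is how L1 regularization can incorporate all covariates simultaneously while still producing a sparse propensity model.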
MIT-NASA/KSC space life science experiments - A telescience testbed
NASA Technical Reports Server (NTRS)
Oman, Charles M.; Lichtenberg, Byron K.; Fiser, Richard L.; Vordermark, Deborah S.
1990-01-01
Experiments performed at MIT to better define Space Station information system telescience requirements for effective remote coaching of astronauts by principal investigators (PIs) on the ground are described. The experiments were conducted via satellite video, data, and voice links to surrogate crewmembers working in a laboratory at NASA's Kennedy Space Center. Teams of two PIs and two crewmembers performed two different space life sciences experiments. During 19 three-hour interactive sessions, a variety of test conditions were explored. Since bit-rate limits are necessarily imposed on Space Station video experiments, surveillance video was varied down to 50 kb/s, and the effectiveness of PI-controlled frame rate, resolution, grey scale, and color decimation was investigated. It is concluded that remote coaching by voice works and that dedicated crew-PI voice loops would be of great value on the Space Station.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
2015-07-14
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Sterile Neutrino Search with the PROSPECT Experiment
NASA Astrophysics Data System (ADS)
Surukuchi Venkata, Pranava Teja
2017-01-01
PROSPECT is a multi-phased short-baseline reactor antineutrino experiment with the primary goals of performing a search for sterile neutrinos and making a precise measurement of the 235U reactor antineutrino spectrum from the High Flux Isotope Reactor at Oak Ridge National Laboratory. PROSPECT will provide a model-independent oscillation measurement of electron antineutrinos by performing relative spectral comparisons across a wide range of baselines. By covering baselines of 7-12 m with Phase-I and extending the coverage to 19 m with Phase-II, the PROSPECT experiment will be able to address the current eV-scale sterile neutrino oscillation best-fit region within a single year of data-taking and cover a major portion of the suggested parameter space within 3 years of Phase-II data-taking. Additionally, with a Phase-II detector PROSPECT will be able to distinguish between 3+1 mixing, 3+N mixing and other non-standard oscillations. In this talk, we describe the PROSPECT oscillation fitting framework and the expected detector sensitivity to oscillations arising from eV-scale sterile neutrinos.
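The relative spectral comparison rests on the L/E dependence of the standard two-flavor survival probability, P = 1 - sin^2(2θ)·sin^2(1.27 Δm² L/E). A minimal sketch, using an illustrative hypothetical parameter point rather than any PROSPECT best-fit value:

```python
import math

def survival_probability(L_m, E_MeV, sin2_2theta, dm2_eV2):
    """Two-flavor electron-antineutrino survival probability,
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with baseline L in meters and energy E in MeV."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# Illustrative parameter point (NOT a PROSPECT result): sin^2(2theta)=0.1, dm^2=1.8 eV^2
p_near = survival_probability(7.0, 4.0, 0.1, 1.8)
p_far = survival_probability(12.0, 4.0, 0.1, 1.8)
```

At a fixed antineutrino energy, the 7 m and 12 m baselines sample different phases of the oscillation, which is what makes a baseline-to-baseline spectral ratio independent of the absolute flux model.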
Irradiation of materials with short, intense ion pulses at NDCX-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seidl, P. A.; Barnard, J. J.; Feinberg, E.; ...
2017-05-31
We present an overview of the performance of the Neutralized Drift Compression Experiment-II (NDCX-II) accelerator at Berkeley Lab, and report on recent target experiments on beam-driven melting and transmission ion energy loss measurements with nanosecond and millimeter-scale ion beam pulses and thin tin foils. Bunches with around 10^11 ions, 1 mm radius, and 2–30 ns full width at half maximum duration have been created, with corresponding fluences in the range of 0.1–0.7 J/cm^2. To achieve these short pulse durations and mm-scale focal spot radii, the 1.1 MeV He+ ion beam is neutralized in a drift compression section, which removes the space-charge defocusing effect during final compression and focusing. The beam space charge and drift compression techniques resemble necessary beam conditions and manipulations in heavy ion inertial fusion accelerators. Quantitative comparison of detailed particle-in-cell simulations with the experiment plays an important role in optimizing accelerator performance.
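The quoted fluence range can be cross-checked from the stated bunch parameters. A short arithmetic sketch, assuming an idealized uniform circular focal spot:

```python
import math

EV_TO_J = 1.602176634e-19  # exact SI conversion, J per eV

def fluence_J_per_cm2(n_ions, energy_eV, spot_radius_cm):
    """Energy fluence of one bunch deposited over an idealized uniform circular spot."""
    energy_J = n_ions * energy_eV * EV_TO_J
    area_cm2 = math.pi * spot_radius_cm ** 2
    return energy_J / area_cm2

# 1e11 ions at 1.1 MeV focused to a 1 mm (0.1 cm) radius spot
f = fluence_J_per_cm2(1e11, 1.1e6, 0.1)
```

This gives roughly 0.56 J/cm^2, consistent with the reported 0.1–0.7 J/cm^2 range; less-compressed pulses or larger spots give the lower end.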
Using Application-Domain Knowledge in the Runtime Support of Multi-Experiment Computational Studies
2009-01-01
2011-07-01
... Technologies into Defense Acquisition University Learning Assets
Dabbagh, Nada; Clark, Kevin; Dass, Susan; Al Waaili, Salim; Byrd, Sally; ...
The survey included demographic data, four Likert-scale questions that targeted respondents' familiarity with ALT, and one Likert-scale question addressing the use of technology in learning with under-served populations.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...
2017-01-28
Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on the x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-sized workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for the implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared-memory and (process-level) distributed-memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have made to the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide a 158x speedup (using 32 compute nodes) over the single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.
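The replicated reconstruction object is essentially a lock-free parallel reduction: each worker accumulates into its own copy of the reconstruction array and the copies are merged afterwards. A minimal thread-level sketch of that pattern (illustrative only, far simpler than Trace's actual middleware):

```python
from concurrent.futures import ThreadPoolExecutor

def accumulate_into_replica(chunk, size):
    """Each worker updates its own private replica, so no locking is needed."""
    replica = [0.0] * size
    for index, value in chunk:
        replica[index] += value
    return replica

def parallel_accumulate(updates, size, workers=4):
    """Split the updates across workers, then reduce the replicas into one object."""
    chunks = [updates[i::workers] for i in range(workers)]
    result = [0.0] * size
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for replica in pool.map(accumulate_into_replica, chunks, [size] * workers):
            for i, v in enumerate(replica):
                result[i] += v
    return result
```

Replication trades memory for the removal of synchronization on every voxel update, which is why the pattern pays off at high thread counts.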
Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor
NASA Astrophysics Data System (ADS)
Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei
2013-08-01
Fiber-Optic Gyroscope (FOG) scale factor nonlinear error will result in errors in a Strapdown Inertial Navigation System (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method based on piecewise curve fitting of the FOG output is proposed in this paper. Firstly, the causes of FOG scale factor error are introduced and a definition of nonlinearity degree is provided. Then the output range of the FOG is divided into several small pieces, and curve fitting is performed in each piece to obtain its scale factor parameter. Different scale factor parameters are used in different pieces to improve FOG output precision. These parameters are identified using a three-axis turntable, and the nonlinear error of the FOG scale factor can be reduced. Finally, a three-axis swing experiment of SINS verifies that the proposed method can reduce attitude output errors of SINS by compensating the nonlinear error of the FOG scale factor and improve the precision of navigation. The results of the experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of the FOG scale factor with only slightly increased computational complexity. This method can be used in FOG-based inertial technology to improve precision.
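The piecewise compensation idea can be sketched directly: partition the FOG output range, fit a separate curve per piece, and apply the parameters of whichever piece contains the current output. A minimal version with linear pieces (the fitting order per piece is an assumption here, not taken from the paper):

```python
def linear_fit(xs, ys):
    """Closed-form least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def piecewise_fit(xs, ys, breakpoints):
    """Fit one line per piece; breakpoints are the right edges of all pieces but the last."""
    pieces, lo = [], float("-inf")
    for hi in list(breakpoints) + [float("inf")]:
        seg = [(x, y) for x, y in zip(xs, ys) if lo < x <= hi]
        a, b = linear_fit([p[0] for p in seg], [p[1] for p in seg])
        pieces.append((hi, a, b))
        lo = hi
    return pieces

def compensate(x, pieces):
    """Apply the scale-factor parameters of the piece containing x."""
    for hi, a, b in pieces:
        if x <= hi:
            return a * x + b
```

Using separate parameters per piece keeps each local fit simple while capturing a scale factor that differs across the output range.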
Dyer, Bryce
2015-06-01
This study introduces the importance of aerodynamics to prosthetic limb design for athletes with either a lower-limb or upper-limb amputation. The study comprises two elements: 1) an initial experiment investigating the stability of outdoor velodrome-based field tests, and 2) an experiment evaluating the application of outdoor velodrome aerodynamic field tests to detect small-scale changes in aerodynamic drag arising from prosthetic limb componentry changes. An outdoor field-testing method is used to detect small and repeatable changes in the aerodynamic drag of an able-bodied cyclist. These changes were made at levels typical of alterations in prosthetic componentry. The field-based test method of assessment is used at a smaller level of resolution than previously reported. With a carefully applied protocol, the field-test method proved to be statistically stable. The results of the field-test experiments demonstrate a noticeable change in overall athlete performance. Aerodynamic refinement of artificial limbs is worthwhile for athletes looking to maximise their competitive performance. A field-testing method illustrates the importance of the aerodynamic optimisation of prosthetic limb components. The field-testing protocol undertaken in this study gives prosthetists and sports engineers an accessible and affordable means of doing so. Using simple and accessible field-testing methods, this exploratory experiment demonstrates how small changes to riders' equipment, commensurate with the scale of a small change in prosthetics componentry, can affect the performance of an athlete. Prosthetists should consider such opportunities for performance enhancement when possible. © The International Society for Prosthetics and Orthotics 2014.
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, and multiple-region advection-dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
A dynamic multi-scale Markov model based methodology for remaining life prediction
NASA Astrophysics Data System (ADS)
Yan, Jihong; Guo, Chaozhong; Wang, Xing
2011-05-01
The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model by a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment designed on a Bently-RK4 rotor testbed validates the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
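Two Markov ingredients of such a prognostic scheme, estimating a transition matrix from observed degradation states and computing the expected time to the failure state, can be sketched compactly. The expected remaining life satisfies E[i] = 1 + Σ_j P[i][j]·E[j] with E[failure] = 0; the sketch below solves it by value iteration (illustrative only, without the paper's FCM state division or multi-scale weighting):

```python
def transition_matrix(states, n_states):
    """Estimate Markov transition probabilities from an observed state sequence."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / sum(row) if sum(row) else 0.0 for c in row] for row in counts]

def remaining_life(P, failure_state, tol=1e-10, max_iter=100000):
    """Expected number of steps to reach the absorbing failure state from each
    state, solving E[i] = 1 + sum_j P[i][j]*E[j] with E[failure] = 0 by value iteration."""
    n = len(P)
    E = [0.0] * n
    for _ in range(max_iter):
        new = [0.0 if i == failure_state
               else 1.0 + sum(P[i][j] * E[j] for j in range(n))
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, E)) < tol:
            return new
        E = new
    return E
```

For a small degradation chain the iteration converges quickly, since the transient part of the chain is a contraction.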
Fire extinguishing tests -80 with methyl alcohol gasoline
NASA Astrophysics Data System (ADS)
Holmstedt, G.; Ryderman, A.; Carlsson, B.; Lennmalm, B.
1980-10-01
Large-scale tests and laboratory experiments were carried out to estimate the extinguishing effectiveness of three alcohol-resistant aqueous film-forming foams (AFFF), two alcohol-resistant fluoroprotein foams and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl-ethyl ketone, methyl alcohol and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large-scale tests to develop a reliable laboratory method was especially examined. The tests were performed with semidirect foam application, in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams; AFFF gave the best results, and performance with small pools can hardly be correlated with results from large-scale fires.
Cascading failure in the wireless sensor scale-free networks
NASA Astrophysics Data System (ADS)
Liu, Hao-Ran; Dong, Ming-Ru; Yin, Rong-Rong; Han, Li
2015-05-01
In practical wireless sensor networks (WSNs), the cascading failure caused by a single failed node has a serious impact on network performance. In this paper, we investigate in depth the cascading failure of scale-free topologies in WSNs. Firstly, a cascading failure model for scale-free topology in WSNs is studied. Through analyzing the influence of the node load on cascading failure, the critical load triggering large-scale cascading failure is obtained. Then, based on the critical load, a control method for cascading failure is presented. In addition, simulation experiments are performed to validate the effectiveness of the control method. The results show that the control method can effectively prevent cascading failure. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. F2014203239), the Autonomous Research Fund of Young Teachers at Yanshan University (Grant No. 14LGB017) and the Yanshan University Doctoral Foundation, China (Grant No. B867).
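The load-redistribution mechanism behind such cascades can be sketched in a few lines: give each node a capacity proportional to its initial load, fail one node, and push its load onto surviving neighbours until no node exceeds capacity. A minimal model, where equal-share redistribution is an assumption and the paper's model details differ:

```python
def cascade(adjacency, loads, alpha, trigger):
    """Load-redistribution cascading-failure model: each node's capacity is
    (1 + alpha) * initial load; a failed node's load is split equally among
    its surviving neighbours. Returns the set of failed nodes."""
    capacity = {n: (1 + alpha) * loads[n] for n in adjacency}
    load = dict(loads)
    failed, queue = set(), [trigger]
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        alive = [m for m in adjacency[node] if m not in failed]
        if alive:
            share = load[node] / len(alive)
            for m in alive:
                load[m] += share
                if load[m] > capacity[m]:
                    queue.append(m)
    return failed
```

The tolerance parameter alpha plays the role of the critical load: below a threshold, a single failure propagates network-wide, above it the damage stays local.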
Self-similarity of waiting times in fracture systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niccolini, G.; Bosia, F.; Carpinteri, A.
2009-08-15
Experimental and numerical results are presented for a fracture experiment carried out on a fiber-reinforced element under flexural loading, and a statistical analysis is performed for acoustic emission waiting-time distributions. By an optimization procedure, a recently proposed scaling law describing these distributions for different event magnitude scales is confirmed by both experimental and numerical data, thus reinforcing the idea that fracture of heterogeneous materials has scaling properties similar to those found for earthquakes. Analysis of the different scaling parameters obtained for experimental and numerical data leads us to formulate the hypothesis that the type of scaling function obtained depends on the level of correlation among fracture events in the system.
VIBRATING PERVAPORATION MODULES: EFFECT OF MODULE DESIGN ON PERFORMANCE
A third commercial-scale vibrating pervaporation membrane module was fabricated and evaluated for the separation of volatile organic compounds (VOCs) from aqueous solutions. Experiments with surrogate solutions of four hydrophobic VOCs (1,1,1-trichloroethane (TCA), trichloroethy...
A strategy for clone selection under different production conditions.
Legmann, Rachel; Benoit, Brian; Fedechko, Ronald W; Deppeler, Cynthia L; Srinivasan, Sriram; Robins, Russell H; McCormick, Ellen L; Ferrick, David A; Rodgers, Seth T; Russo, A Peter
2011-01-01
Top-performing clones have failed at the manufacturing scale, while the true best performer may have been rejected early in the screening process. Therefore, the ability to screen multiple clones in complex fed-batch processes using multiple process variations can be used to assess robustness and to identify critical factors. This dynamic clone-ranking strategy requires the execution of many more parallel experiments than traditional approaches. It is therefore best suited to micro-bioreactor models, which can perform hundreds of experiments quickly and efficiently. In this study, a fully monitored and controlled small-scale platform was used to screen eight CHO clones producing a recombinant monoclonal antibody across several process variations, including different feeding strategies, temperature shifts and pH control profiles. The first screen utilized 240 micro-bioreactors run for two weeks to assess the scale-down model as a high-throughput tool for clone evaluation. The richness of the outcome data enabled clear identification of the best and worst clone, as well as the best and worst process, in terms of maximum monoclonal antibody titer. The follow-up comparison study utilized 180 micro-bioreactors in a full factorial design, and a subset of 12 clone/process combinations was selected to be run in parallel in duplicate shake flasks. The micro-bioreactor predictions correlated well with those made in shake flasks, with a Pearson correlation value of 0.94. The results also demonstrate that this micro-scale system can perform clone screening and process optimization simultaneously, gaining significant titer improvements. This dynamic ranking strategy can support better choices of production clones. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
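A full factorial design like the 180-bioreactor study can be enumerated mechanically. The sketch below builds a clone x process-variation run list; the factor names and levels are illustrative, not the study's actual settings:

```python
from itertools import product

def full_factorial(factors):
    """Every combination of factor levels, one dict per experimental run."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[name] for name in names))]

# Hypothetical factors: 8 clones x 3 feeds x 2 temperature shifts x 2 pH profiles = 96 runs
design = full_factorial({
    "clone": ["C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8"],
    "feed": ["bolus", "continuous", "hybrid"],
    "temp_shift": ["none", "day4"],
    "pH": ["fixed", "shifted"],
})
```

Crossing every clone with every process variation is what enables dynamic ranking: a clone's robustness is read off its performance spread across all rows that contain it.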
DEEP UNDERGROUND NEUTRINO EXPERIMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Robert J.
2016-03-03
The Deep Underground Neutrino Experiment (DUNE) collaboration will perform an experiment centered on accelerator-based long-baseline neutrino studies along with nucleon decay and topics in neutrino astrophysics. It will consist of a modular 40-kt (fiducial) mass liquid argon TPC detector located deep underground at the Sanford Underground Research Facility in South Dakota and a high-resolution near detector at Fermilab in Illinois. This configuration provides a 1300-km baseline in a megawatt-scale neutrino beam provided by the Fermilab-hosted international Long-Baseline Neutrino Facility.
Overview and First Results of an In-situ Stimulation Experiment in Switzerland
NASA Astrophysics Data System (ADS)
Amann, F.; Gischig, V.; Doetsch, J.; Jalali, M.; Valley, B.; Evans, K. F.; Krietsch, H.; Dutler, N.; Villiger, L.
2017-12-01
A decameter-scale in-situ stimulation and circulation (ISC) experiment is currently being conducted at the Grimsel Test Site in Switzerland with the objective of improving our understanding of key seismo-hydro-mechanical coupled processes associated with high-pressure fluid injections in a moderately fractured crystalline rock mass. The ISC experiment activities aim to support the development of EGS technology by 1) advancing the understanding of fundamental processes that occur within the rock mass in response to relatively large-volume fluid injections at high pressures, 2) improving the ability to estimate and model induced seismic hazard and risks, 3) assessing the potential of different injection protocols to keep seismic event magnitudes below an acceptable threshold, 4) developing novel monitoring and imaging techniques for pressure, temperature, stress, strain and displacement, as well as geophysical methods such as ground-penetrating radar and passive and active seismics, and 5) generating high-quality benchmark datasets that facilitate the development and validation of numerical modelling tools. The ISC experiment includes six fault-slip and five hydraulic fracturing experiments at an intermediate scale (i.e., 20 × 20 × 20 m) at 480 m depth, which allows high-resolution monitoring of the evolution of pore pressure in the stimulated fault zone and the surrounding rock matrix, fault dislocations including shear and dilation, and micro-seismicity in an exceptionally well characterized structural setting. In February 2017 we performed the fault-slip experiments on interconnected faults. Subsequently, an intense phase of post-stimulation hydraulic characterization was performed. In May 2017 we performed hydraulic fracturing tests within test intervals that were free of natural fractures. In this contribution we give an overview and show first results of the above-mentioned stimulation tests.
Effects of laser-plasma instabilities on hydro evolution in an OMEGA-EP long-scale-length experiment
Li, J.; Hu, S. X.; Ren, C.
2017-02-28
Laser-plasma instabilities and hydro evolution of the coronal plasma in an OMEGA EP long-scale-length experiment with planar targets were studied with particle-in-cell (PIC) and hydrodynamic simulations. Plasma and laser conditions were first obtained in a two-dimensional DRACO hydro simulation with only inverse-bremsstrahlung absorption. Using these conditions, an OSIRIS PIC simulation was performed to study laser absorption and hot-electron generation caused by laser-plasma instabilities (LPIs) near the quarter-critical region. The obtained PIC information was subsequently coupled to another DRACO simulation to examine how the LPIs affect the overall hydrodynamics. Lastly, the results showed that the LPI-induced laser absorption increased the electron temperature but did not significantly change the density scale length in the corona.
Dykema, John A.; Keith, David W.; Anderson, James G.; Weisenstein, Debra
2014-01-01
Although solar radiation management (SRM) through stratospheric aerosol methods has the potential to mitigate impacts of climate change, our current knowledge of stratospheric processes suggests that these methods may entail significant risks. In addition to the risks associated with current knowledge, the possibility of ‘unknown unknowns’ exists that could significantly alter the risk assessment relative to our current understanding. While laboratory experimentation can improve the current state of knowledge and atmospheric models can assess large-scale climate response, they cannot capture possible unknown chemistry or represent the full range of interactive atmospheric chemical physics. Small-scale, in situ experimentation under well-regulated circumstances can begin to remove some of these uncertainties. This experiment—provisionally titled the stratospheric controlled perturbation experiment—is under development and will only proceed with transparent and predominantly governmental funding and independent risk assessment. We describe the scientific and technical foundation for performing, under external oversight, small-scale experiments to quantify the risks posed by SRM to activation of halogen species and subsequent erosion of stratospheric ozone. The paper's scope includes selection of the measurement platform, relevant aspects of stratospheric meteorology, operational considerations and instrument design and engineering. PMID:25404681
On effects of topography in rotating flows
NASA Astrophysics Data System (ADS)
Burmann, Fabian; Noir, Jerome; Jackson, Andrew
2017-11-01
Both seismological studies and geodynamic arguments suggest that there is significant topography at the core-mantle boundary (CMB). This raises the question of whether the topography of the CMB could influence the flow in the Earth's outer core. As a preliminary experiment, we investigate the effects of bottom topography on so-called spin-up, in which motion of a contained fluid is created by a sudden increase in rotation rate. Experiments are performed in a cylindrical container mounted on a rotating table, and quantitative results are obtained with particle image velocimetry. Several horizontal length scales of topography (λ) are investigated, ranging from cases where λ is much smaller than the lateral extent of the experiment (R) to cases where λ is a fraction of R. We find that there is an optimal λ that creates maximum dissipation of kinetic energy. Depending on the length scale of the topography, kinetic energy is dissipated either in the boundary layer or in the bulk of the fluid. Two different phases of fluid motion are present: a starting flow in the form of solid-body rotation (phase I), which is later replaced by mesoscale vortices on the length scale of the bottom topography (phase II).
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
ERIC Educational Resources Information Center
Chou, Huey-Wen; Wang, Yu-Fang
1999-01-01
Compares the effects of two training methods on computer attitude and performance in a World Wide Web page design program in a field experiment with high school students in Taiwan. Discusses individual differences, Kolb's Experiential Learning Theory and Learning Style Inventory, Computer Attitude Scale, and results of statistical analyses.…
High subsonic flow tests of a parallel pipe followed by a large area ratio diffuser
NASA Technical Reports Server (NTRS)
Barna, P. S.
1975-01-01
Experiments were performed on a pilot model duct system in order to explore its aerodynamic characteristics. The model was scaled from a design projected for the high speed operation mode of the Aircraft Noise Reduction Laboratory. The test results show that the model performed satisfactorily and therefore the projected design will most likely meet the specifications.
Optimizing height presentation for aircraft cockpit displays
NASA Astrophysics Data System (ADS)
Jordan, Chris S.; Croft, D.; Selcon, Stephen J.; Markin, H.; Jackson, M.
1997-02-01
This paper describes an experiment conducted to investigate the type of display symbology that most effectively conveys height information to users of head-down plan-view radar displays. The experiment also investigated the use of multiple information sources (redundancy) in the design of such displays. Subjects were presented with eight different height display formats. These formats were constructed from a control and/or one, two, or three sources of redundant information. The three redundant information sources were letter coding, analogue scaling, and toggling (spatially switching the position of the height information from above to below the aircraft symbol). Subjects were required to indicate altitude awareness via a four-key, forced-choice keyboard response. Error scores and response times were taken as performance measures. There were three main findings. First, there was a significant performance advantage when the altitude information was presented above and below the symbol to aid the representation of height information. Second, the analogue scale, a line whose length indicated altitude, proved significantly detrimental to performance. Finally, no relationship was found between the number of redundant information sources employed and performance. The implications for future aircraft and displays are discussed in relation to current aircraft tactical displays and in the context of perceptual psychological theory.
From the Binet-Simon to the Wechsler-Bellevue: tracing the history of intelligence testing.
Boake, Corwin
2002-05-01
The history of David Wechsler's intelligence scales is reviewed by tracing the origins of the subtests in the 1939 Wechsler-Bellevue Intelligence Scale. The subtests originated from tests developed between 1880 and World War I and were based on approaches to mental testing including anthropometrics, association psychology, the Binet-Simon scales, language-free performance testing of immigrants and school children, and group testing of military recruits. Wechsler's subtest selection can be understood partly from his clinical experiences during World War I. The structure of the Wechsler-Bellevue Scale, which introduced major innovations in intelligence testing, has remained almost unchanged through later revisions.
Enhancing the Performance of Passive Teleoperation Systems via Cutaneous Feedback.
Pacchierotti, Claudio; Tirmizi, Asad; Bianchini, Gianni; Prattichizzo, Domenico
2015-01-01
We introduce a novel method to improve the performance of passive teleoperation systems with force reflection. It consists of integrating kinesthetic haptic feedback provided by common grounded haptic interfaces with cutaneous haptic feedback. The proposed approach can be used on top of any time-domain control technique that ensures a stable interaction by scaling down kinesthetic feedback when this is required to satisfy stability conditions (e.g., passivity) at the expense of transparency. Performance is recovered by providing a suitable amount of cutaneous force through custom wearable cutaneous devices. The viability of the proposed approach is demonstrated through an experiment of perceived stiffness and an experiment of teleoperated needle insertion in soft tissue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corradin, Michael; Anderson, M.; Muci, M.
This experimental study investigates the thermal-hydraulic behavior and heat removal performance of a scaled Reactor Cavity Cooling System (RCCS) with air. A quarter-scale RCCS facility was designed and built based on a full-scale General Atomics (GA) RCCS design concept for the Modular High Temperature Gas Reactor (MHTGR). The GA RCCS is a passive cooling system that draws in air as the cooling fluid, removes heat radiated from the reactor pressure vessel to the air-cooled riser tubes, and discharges the heated air into the atmosphere. Scaling laws were used to preserve key aspects and to maintain similarity. The scaled air RCCS facility at UW-Madison is a quarter-scale, reduced-length experiment housing six riser ducts that represent a 9.5° sector slice of the full-scale GA air RCCS concept. Radiant heaters were used to simulate the heat radiation from the reactor pressure vessel. The maximum power that can be achieved with the radiant heaters is 40 kW, with a peak heat flux of 25 kW/m². The quarter-scale RCCS was run under different heat loading cases and operated successfully. Instabilities were observed in some experiments, in which one of the two exhaust ducts experienced a flow reversal for a period of time. The data and analysis presented show that the RCCS has promising potential as a decay heat removal system during an accident scenario.
Defraeye, Thijs; Blocken, Bert; Koninckx, Erwin; Hespel, Peter; Carmeliet, Jan
2010-08-26
This study aims at assessing the accuracy of computational fluid dynamics (CFD) for applications in sports aerodynamics, for example for drag predictions of swimmers, cyclists or skiers, by evaluating the applied numerical modelling techniques by means of detailed validation experiments. In this study, a wind-tunnel experiment on a scale model of a cyclist (scale 1:2) is presented. In addition to three-component forces and moments, high-resolution surface pressure measurements on the scale model's surface, at 115 locations, are performed to provide detailed information on the flow field. These data are used to compare the performance of different turbulence-modelling techniques, such as steady Reynolds-averaged Navier-Stokes (RANS), with several k-epsilon and k-omega turbulence models, and unsteady large-eddy simulation (LES), and also boundary-layer modelling techniques, namely wall functions and low-Reynolds number modelling (LRNM). The commercial CFD code Fluent 6.3 is used for the simulations. The RANS shear-stress transport (SST) k-omega model shows the best overall performance, followed by the more computationally expensive LES. Furthermore, LRNM is clearly preferred over wall functions to model the boundary layer. This study showed that there are more accurate alternatives for evaluating flow around bluff bodies with CFD than the standard k-epsilon model combined with wall functions, which is often used in CFD studies in sports. © 2010 Elsevier Ltd. All rights reserved.
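Wind-tunnel force measurements like those above are conventionally reduced to a drag coefficient, and a sub-scale model must be run faster to match the full-scale Reynolds number. The formulas below are standard; all numerical values and function names are illustrative assumptions, not data from the study:

```python
def drag_coefficient(force_n, rho_kg_m3, v_m_s, frontal_area_m2):
    """C_d from a measured drag force: C_d = 2F / (rho * v^2 * A)."""
    return 2.0 * force_n / (rho_kg_m3 * v_m_s ** 2 * frontal_area_m2)

def matched_tunnel_speed(full_scale_v, scale_factor):
    """Tunnel speed that matches the full-scale Reynolds number
    (Re = v * L / nu): a 1:2 model (scale_factor 0.5) needs twice
    the full-scale speed, since the length scale is halved."""
    return full_scale_v / scale_factor

# Hypothetical numbers: 20 N of drag at 15 m/s on a 0.2 m^2 frontal area.
cd = drag_coefficient(20.0, 1.2, 15.0, 0.2)
v_tunnel = matched_tunnel_speed(15.0, 0.5)
```

Reynolds-number matching is one reason sub-scale sports aerodynamics tests are often run at elevated tunnel speeds.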
Acoustic Experiment to Measure the Bulk Viscosity of Near-Critical Xenon in Microgravity
NASA Technical Reports Server (NTRS)
Gillis, K. A.; Shinder, I.; Moldover, M. R.; Zimmerli, G. A.
2002-01-01
We plan a rigorous test of the theory of dynamic scaling by accurately measuring the bulk viscosity of xenon in microgravity 50 times closer to the critical temperature Tc than previous experiments. The bulk viscosity ζ (or "second viscosity" or "dilational viscosity") will be determined by measuring the sound attenuation per wavelength αλ and also measuring the frequency dependence of the speed of sound. For these measurements, we developed a unique Helmholtz resonator and specialized electro-acoustic transducers. We describe the resonator, the transducers, their performance on Earth, and their expected performance in microgravity.
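The link between the measured attenuation per wavelength and the bulk viscosity can be sketched from the classical (Stokes-Kirchhoff) absorption formula, neglecting the thermal-conduction term for brevity; the fluid properties used below are illustrative placeholders, not values from the experiment:

```python
import math

def bulk_viscosity(alpha_lambda, rho, c, freq_hz, eta_shear):
    """Invert the classical attenuation-per-wavelength relation
        alpha * lambda = pi * omega * (4/3 * eta + zeta) / (rho * c**2)
    (thermal-conduction contribution neglected) to recover the
    bulk viscosity zeta from a measured alpha*lambda."""
    omega = 2.0 * math.pi * freq_hz
    return rho * c ** 2 * alpha_lambda / (math.pi * omega) - 4.0 * eta_shear / 3.0

# Illustrative placeholder properties (dense near-critical fluid, low
# sound speed, kHz drive): rho in kg/m^3, c in m/s, eta in Pa*s.
rho, c, f, eta = 1100.0, 100.0, 1000.0, 5e-5
```

Near the critical point ζ grows strongly and frequency-dependently, which is why measuring αλ over a range of frequencies is the natural route to testing dynamic scaling.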
Evaluating drywells for stormwater management and enhanced aquifer recharge
USDA-ARS?s Scientific Manuscript database
Drywells are increasingly used for stormwater management and enhanced aquifer recharge, but only limited research has quantitatively determined the performance of drywells. Numerical and field scale experiments were, therefore, conducted to improve our understanding and ability to characterize the d...
Evaluating drywells for stormwater management and enhanced aquifer recharge
Drywells are increasingly used for stormwater management and enhanced aquifer recharge, but only limited research has quantitatively determined drywells' performance. Numerical and field scale experiments were conducted to characterize the drywell behavior. HYDRUS (2D/3D) was mod...
Development of processes for the production of solar grade silicon from halides and alkali metals
NASA Technical Reports Server (NTRS)
Dickson, C. R.; Gould, R. K.
1980-01-01
High temperature reactions of silicon halides with alkali metals for the production of solar grade silicon in volume at low cost were studied. Experiments were performed to evaluate product separation and collection processes, measure heat release parameters for scaling purposes, determine the effects of reactants and/or products on materials of reactor construction, and make preliminary engineering and economic analyses of a scaled-up process.
Structural Analysis and Testing of the Inflatable Re-entry Vehicle Experiment (IRVE)
NASA Technical Reports Server (NTRS)
Lindell, Michael C.; Hughes, Stephen J.; Dixon, Megan; Wiley, Cliff E.
2006-01-01
The Inflatable Re-entry Vehicle Experiment (IRVE) is a 3.0 meter, 60 degree half-angle sphere cone, inflatable aeroshell experiment designed to demonstrate various aspects of inflatable technology during Earth re-entry. IRVE will be launched on a Terrier-Improved Orion sounding rocket from NASA's Wallops Flight Facility in the fall of 2006 to an altitude of approximately 164 kilometers and re-enter the Earth's atmosphere. The experiment will demonstrate exo-atmospheric inflation, inflatable structure leak performance throughout the flight regime, structural integrity under aerodynamic pressure and associated deceleration loads, thermal protection system performance, and aerodynamic stability. Structural integrity and dynamic response of the inflatable will be monitored with photogrammetric measurements of the leeward side of the aeroshell during flight. Aerodynamic stability and drag performance will be verified with on-board inertial measurements and radar tracking from multiple ground radar stations. In addition to demonstrating inflatable technology, IRVE will help validate structural, aerothermal, and trajectory modeling and analysis techniques for the inflatable aeroshell system. This paper discusses the structural analysis and testing of the IRVE inflatable structure. Equations are presented for calculating fabric loads in sphere cone aeroshells, and finite element results are presented which validate the equations. Fabric material properties and testing are discussed along with aeroshell fabrication techniques. Stiffness and dynamics tests conducted on a small-scale development unit and a full-scale prototype unit are presented along with correlated finite element models to predict the in-flight fundamental modes.
Learning to control an SSVEP-based BCI speller in naïve subjects.
Zhihua Tang; Yijun Wang; Guoya Dong; Weihua Pei; Hongda Chen
2017-07-01
High-speed steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) has been demonstrated in several recent studies. This study aimed to investigate some issues regarding the feasibility of learning to control an SSVEP-based BCI speller in naïve subjects. An experiment with new BCI users was designed to answer the following questions: (1) How many people can use the SSVEP-BCI speller? (2) How much time is required to train the user? (3) Does continuous system use lead to user fatigue and deteriorated BCI performance? The experiment consisted of three tasks: a 40-class BCI spelling task, a psychomotor vigilance test (PVT) task, and a test of sleepiness scale. Subjects' reaction time (RT) in the PVT task and the fatigue rank in the sleepiness scale test were used as objective and subjective parameters to evaluate subjects' alertness level. Among 11 naïve subjects, 10 completed the 9-block experiment. Four of them showed clear learning effects (i.e., an increasing trend of classification accuracy and information transfer rate (ITR)) over time. The remaining subjects showed stable BCI performance during the whole experiment. The results of RT and fatigue rank showed a gradually increasing trend that was not significant across blocks. In summary, the results of this study suggest that controlling an SSVEP-based BCI speller is in general feasible for naïve subjects to learn after a short training procedure, showing no clear performance deterioration related to fatigue.
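The information transfer rate mentioned above is conventionally computed with Wolpaw's formula. A minimal sketch for an N-target speller; the 40-class case matches the task above, but the accuracy and trial length below are hypothetical:

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_s):
    """Wolpaw information transfer rate for an N-class BCI.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by selections per minute (60 / trial duration in seconds).
    Valid for accuracy P in (1/N, 1]; at P = 1 the entropy terms vanish.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_s

# Hypothetical: a 40-target speller at 90% accuracy with 2 s selections.
itr = itr_bits_per_min(40, 0.9, 2.0)
```

Note that the Wolpaw formula assumes uniform target probabilities and uniformly distributed errors, which real spellers only approximate.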
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
NASA Astrophysics Data System (ADS)
Kostopoulos, Vassilis; Vavouliotis, Antonios; Baltopoulos, Athanasios; Sotiririadis, George; Masouras, Athanasios; Pambaguian, Laurent
2014-06-01
Over the past decade, extensive efforts have been invested in understanding the nano-scale and in revealing the capabilities that nanotechnology products offer to structural materials. Nevertheless, a major issue faced more seriously of late, owing to industry interest, is how to incorporate these nano-species into the final composite structure through existing manufacturing processes and infrastructure. In this work, we present the experience obtained from the latest nanotechnology research activities supported by ESA. The paper focuses on prepreg composite manufacturing technology and addresses: approaches for nano-enabling of composites; up-scaling strategies towards final structures; and the latest results on the performance of nano-enabled fiber-reinforced composites. Several approaches for the utilization of nanotechnology products in structural composite structures have been proposed and are reviewed in brief, along with the respective achieved results. A variety of nano-fillers has been proposed and employed, individually or in combination in hybrid forms, to approach the desired performance. A major part of the work deals with the up-scaling routes of these technologies to reach final products and industrial scales and processes while meeting end-user performance.
NASA Astrophysics Data System (ADS)
Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.
2013-12-01
In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. 
TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed on JUQUEEN with processor counts on the order of 10,000. The instrumentation is used in weak and strong scaling studies with real data cases and hypothetical idealized numerical experiments for detailed profiling and tracing analysis. The profiling is not only useful in identifying wait states that are due to the MPMD execution model, but also in fine-tuning resource allocation to the component models in search of the most suitable load balancing. This is especially necessary, as with numerical experiments that cover multiple (high resolution) spatial scales, the time stepping, coupling frequencies, and communication overheads are constantly shifting, which makes it necessary to re-determine the model setup with each new experimental design.
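The weak- and strong-scaling studies described above reduce, at their simplest, to speedup and parallel-efficiency ratios over measured wall times. A minimal strong-scaling sketch (the timings are hypothetical, not JUQUEEN measurements):

```python
def strong_scaling(timings):
    """Strong-scaling metrics from wall-clock timings of a fixed problem.

    timings: {process_count: wall_time_seconds}; the smallest process
    count serves as the baseline. Returns {p: (speedup, efficiency)},
    where efficiency 1.0 means ideal linear scaling.
    """
    p0 = min(timings)
    t0 = timings[p0]
    return {p: (t0 / t, (t0 / t) * p0 / p)
            for p, t in sorted(timings.items())}

# Hypothetical timings for one coupled-model run at three core counts.
metrics = strong_scaling({1024: 400.0, 2048: 210.0, 4096: 120.0})
```

For a coupled MPMD system like TerrSysMP, such per-component efficiency curves are what guide the load-balancing choice of how many ranks each component model receives.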
Wilkerson, J. Michael; Smolensk, Derek J.; Brady, Sonya S.; Rosser, B. R. Simon
2012-01-01
Religiosity is associated with behaviors that reduce the risk of HIV/STI infection among general-population and heterosexual-specific samples. Whether this association is similar for homosexual persons is unknown. Measures of religiosity have not been evaluated psychometrically among men who have sex with men (MSM), a population who, because of stigma, experience religiosity differently than heterosexual persons. We assessed the DUREL and the SWB (short form) in two samples of MSM. Neither instrument produced adequate model fit. To study the association between religiosity and HIV/STI risk behaviors among MSM, scales are needed that measure the religious and spiritual experiences of MSM. PMID:22441843
Wilkerson, J Michael; Smolensk, Derek J; Brady, Sonya S; Rosser, B R Simon
2013-06-01
Religiosity is associated with behaviors that reduce the risk of HIV/STI infection among general-population and heterosexual-specific samples. Whether this association is similar for homosexual persons is unknown. Measures of religiosity have not been evaluated psychometrically among men who have sex with men (MSM), a population who, because of stigma, experience religiosity differently than heterosexual persons. We assessed the Duke Religion Index and the Spiritual Well-Being Scale in two samples of MSM. Neither instrument produced adequate model fit. To study the association between religiosity and HIV/STI risk behaviors among MSM, scales are needed that measure the religious and spiritual experiences of MSM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peipho, R.R.; Dougan, D.R.
1981-01-01
Experience has shown that the grinding characteristics of low-rank coals are best determined by testing them in a pulverizer. Test results from a small Babcock & Wilcox MPS-32 pulverizer, used to predict large, full-scale pulverizer performance, are presented. The MPS-32 apparatus, test procedure, and evaluation of test results are described. The test data show that the Hardgrove apparatus and the ASTM test method must be used with great caution when considering low-rank fuels. The MPS-32 meets the needs for real-machine simulation, but with some disadvantages; a smaller pulverizer is desirable. 1 ref.
Emotional Intelligence and Emotions Associated with Optimal and Dysfunctional Athletic Performance
Lane, Andrew M.; Devonport, Tracey J.; Soos, Istvan; Karsai, Istvan; Leibinger, Eva; Hamar, Pal
2010-01-01
This study investigated relationships between self-report measures of emotional intelligence and memories of pre-competitive emotions before optimal and dysfunctional athletic performance. Participant-athletes (n = 284) completed a self-report measure of emotional intelligence and two measures of pre-competitive emotions; a) emotions experienced before an optimal performance, and b) emotions experienced before a dysfunctional performance. Consistent with theoretical predictions, repeated MANOVA results demonstrated pleasant emotions associated with optimal performance and unpleasant emotions associated with dysfunctional performance. Emotional intelligence correlated with pleasant emotions in both performances with individuals reporting low scores on the self-report emotional intelligence scale appearing to experience intense unpleasant emotions before dysfunctional performance. We suggest that future research should investigate relationships between emotional intelligence and emotion-regulation strategies used by athletes. Key points Athletes reporting high scores of self-report emotional intelligence tend to experience pleasant emotions. Optimal performance is associated with pleasant emotions and dysfunctional performance is associated with unpleasant emotions. Emotional intelligence might help athletes recognize which emotional states help performance. PMID:24149631
Development of the Italian Version of the Near-Death Experience Scale.
Pistoia, Francesca; Mattiacci, Giulia; Sarà, Marco; Padua, Luca; Macchi, Claudio; Sacco, Simona
2018-01-01
Near-death experiences (NDEs) have been defined as any conscious perceptual experience occurring in individuals pronounced clinically dead or who came very close to physical death. They are frequently reported by patients surviving a critical injury and, intriguingly, they show common features across different populations. The tool traditionally used to assess NDEs is the NDE Scale, which is available in the original English version. The aim of this study was to develop the Italian version of the NDE Scale and to assess its reliability in a specific clinical setting. A process of translation of the original scale was performed in different stages in order to obtain a fully comprehensible and accurate Italian translation. Later, the scale was administered to a convenience sample of patients who had experienced a condition of coma and were, at the time of assessment, fully conscious and able to provide information as requested by the scale. Inter-rater and test-retest reliability, assessed by the weighted Cohen's kappa (Kw), were estimated. A convenience sample of 20 subjects [mean age ± standard deviation (SD) 51.6 ± 17.1, median time from injury 3.5 months, interquartile range (IQR) 2-10] was included in the study. Inter-rater [Kw 0.77 (95% CI 0.67-0.87)] and test-retest reliability [Kw 0.96 (95% CI 0.91-1.00)] showed good to excellent values for the total scores of the Italian NDE Scale and for subanalyses of each single cluster of the scale. An Italian Version of the NDE Scale is now available to investigate the frequency of NDE, the causes for NDE heterogeneity across different life-threatening conditions, and the possible neural mechanisms underlying NDE phenomenology. PMID:29479314
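The reliability statistic used above, the weighted Cohen's kappa, can be computed directly from two raters' ordinal scores. Below is a minimal illustrative sketch (not the study's actual analysis code; the example ratings are invented):

```python
import numpy as np

def weighted_kappa(r1, r2, n_categories, weights="linear"):
    """Weighted Cohen's kappa for two raters' ordinal ratings coded 0..n-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # observed joint-rating proportions
    O = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    # expected proportions under independence (outer product of marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # disagreement weight matrix: |i-j| (linear) or (i-j)^2 (quadratic)
    i, j = np.indices((n_categories, n_categories))
    W = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# perfect agreement on a 4-point scale gives kappa = 1
print(weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # → 1.0
```

Linear weights penalize disagreements in proportion to their distance on the ordinal scale, which is why the statistic suits graded clinical ratings better than unweighted kappa.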
Nano-Scale Devices for Frequency-Based Magnetic Biosensing
2017-01-31
We demonstrate via experiment and simulation that the magnetic-field-dependent frequency… Period of performance: 06/05/2015 – 11/04/2016. …particle-induced changes to the (quasi-)static magnetization within the active layer of the device. This project however focuses on particle-induced…
Stilp, Christian E; Kiefte, Michael; Alexander, Joshua M; Kluender, Keith R
2010-10-01
Some evidence, mostly drawn from experiments using only a single moderate rate of speech, suggests that low-frequency amplitude modulations may be particularly important for intelligibility. Here, two experiments investigated intelligibility of temporally distorted sentences across a wide range of simulated speaking rates, and two metrics were used to predict results. Sentence intelligibility was assessed when successive segments of fixed duration were temporally reversed (exp. 1), and when sentences were processed through four third-octave-band filters, the outputs of which were desynchronized (exp. 2). For both experiments, intelligibility decreased with increasing distortion. However, in exp. 2, intelligibility recovered modestly with longer desynchronization. Across conditions, performances measured as a function of proportion of utterance distorted converged to a common function. Estimates of intelligibility derived from modulation transfer functions predict a substantial proportion of the variance in listeners' responses in exp. 1, but fail to predict performance in exp. 2. By contrast, a metric of potential information, quantified as relative dissimilarity (change) between successive cochlear-scaled spectra, is introduced. This metric reliably predicts listeners' intelligibility across the full range of speaking rates in both experiments. Results support an information-theoretic approach to speech perception and the significance of spectral change rather than physical units of time.
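The "potential information" metric described above quantifies change between successive spectral slices. The following is a minimal sketch of that idea under simplifying assumptions: band energies stand in for cochlear-scaled spectra, and plain Euclidean distance is used as the dissimilarity measure (the authors' exact implementation may differ):

```python
import numpy as np

def spectral_change(spectra):
    """Total spectral change across an utterance: sum of Euclidean
    distances between successive spectral slices.
    `spectra` rows are time frames, columns are frequency bands."""
    spectra = np.asarray(spectra, dtype=float)
    diffs = np.diff(spectra, axis=0)              # frame-to-frame differences
    return float(np.linalg.norm(diffs, axis=1).sum())

# a static signal (identical frames) carries no spectral change
static = [[1, 2, 3, 4]] * 3
print(spectral_change(static))  # → 0.0
```

On this view, intelligibility tracks how much the spectrum changes over the utterance rather than elapsed physical time, which is consistent with the convergence the authors report when performance is plotted against proportion of utterance distorted.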
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaminski, Michael
The Irreversible Wash Aid Additive process has been under development by the U.S. Environmental Protection Agency (EPA) and Argonne National Laboratory (Argonne). This process for radioactive cesium mitigation consists of a solution to wash down contaminated structures, roadways, and vehicles and a sequestering agent to bind the radionuclides from the wash water and render them environmentally immobile. The purpose of this process is to restore functionality to basic services and immediately reduce the consequences of a radiologically-contaminated urban environment. Research and development have resulted in a down-selection of technologies for integration and demonstration at the pilot-scale level as part of the Wide Area Recovery and Resiliency Program (WARRP) under the Department of Homeland Security and the Denver Urban Area Security Initiative. As part of developing the methods for performing a pilot-scale demonstration at the WARRP conference in Denver in 2012, Argonne conducted small-scale field experiments at Separmatic Systems. The main purpose of these experiments was to refine the wash water collection and separations systems and demonstrate key unit operations to help in planning for the large scale demonstration in Denver. Since the purpose of these tests was to demonstrate the operations of the system, we used no radioactive materials. After a brief set of experiments with the LAKOS unit to familiarize ourselves with its operation, two experiments were completed on two separate dates with the Separmatic systems.
Posttraumatic stress symptoms following forensic dental identification: Mt. Carmel, Waco, Texas.
McCarroll, J E; Fullerton, C S; Ursano, R J; Hermsen, J M
1996-06-01
This study was conducted to determine risk factors for posttraumatic stress in medical care professionals who perform postmortem identifications. Thirty-one dentists (29 men and two women) who had identified the dead from the fire at the Branch Davidian compound in April 1993 were compared to 47 dentists (45 men and two women) who lived in the area but had not identified any of these remains. Posttraumatic symptoms in both groups were measured by using the Impact of Event Scale and the Brief Symptom Inventory. For the remains handlers only, the subjective distress of handling remains and the social support received during the procedure were reported. Higher scores on the Impact of Event Scale intrusion subscale, the overall Impact of Event Scale, and the obsessive-compulsive subscale of the Brief Symptom Inventory were found for the remains handlers than for the comparison group. Within the remains handler group, distress was significantly related to the hours of exposure to the remains, prior experience handling remains, age, and the support received from spouses and co-workers during the identifications. Posttraumatic stress symptoms can be expected in some health professionals who perform postmortem identifications. Prior experience and social support may mitigate some of these responses.
Papaevangelou, Vassiliki A; Gikas, Georgios D; Tsihrintzis, Vassilios A
2017-02-01
The current experimental work aimed at the investigation of the overall chromium removal capacity of constructed wetlands (CWs) and the chromium fate-distribution within a wetland environment. For this purpose, the experimental setup included the parallel operation and monitoring of two horizontal subsurface flow (HSF) pilot-scale CWs and two vertical flow (VF) pilot-scale CWs treating Cr-bearing wastewater. Samples were collected from the influent, the effluent, the substrate and the plants. Apart from the continuous experiment, batch experiments (kinetics and isotherm) were conducted in order to investigate the chromium adsorption capacity of the substrate material. According to the findings, HSF-CWs demonstrated higher removal capacities in comparison to VF-CWs, while in both types the planted units indicated better performance compared to the unplanted ones. Analysis in various wetland compartments and annual mass balance calculation highlighted the exceptional contribution of substrate to chromium retention, while Cr accumulation in plant was not so high. Finally, experimental data fitted better to the pseudo-second-order and Langmuir models regarding kinetics and isotherm simulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Study of Rapid-Regression Liquefying Hybrid Rocket Fuels
NASA Technical Reports Server (NTRS)
Zilliac, Greg; DeZilwa, Shane; Karabeyoglu, M. Arif; Cantwell, Brian J.; Castellucci, Paul
2004-01-01
A report describes experiments directed toward the development of paraffin-based hybrid rocket fuels that burn at regression rates greater than those of conventional hybrid rocket fuels like hydroxyl-terminated polybutadiene. The basic approach followed in this development is to use materials such that a hydrodynamically unstable liquid layer forms on the melting surface of a burning fuel body. Entrainment of droplets from the liquid/gas interface can substantially increase the rate of fuel mass transfer, leading to surface regression faster than can be achieved using conventional fuels. The higher regression rate eliminates the need for the complex multi-port grain structures of conventional solid rocket fuels, making it possible to obtain acceptable performance from single-port structures. The high-regression-rate fuels contain no toxic or otherwise hazardous components and can be shipped commercially as non-hazardous commodities. Among the experiments performed on these fuels were scale-up tests using gaseous oxygen. The data from these tests were found to agree with data from small-scale, low-pressure and low-mass-flux laboratory tests and to confirm the expectation that these fuels would burn at high regression rates, chamber pressures, and mass fluxes representative of full-scale rocket motors.
Defense Waste Processing Facility Simulant Chemical Processing Cell Studies for Sludge Batch 9
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Tara E.; Newell, J. David; Woodham, Wesley H.
The Savannah River National Laboratory (SRNL) received a technical task request from Defense Waste Processing Facility (DWPF) and Saltstone Engineering to perform simulant tests to support the qualification of Sludge Batch 9 (SB9) and to develop the flowsheet for SB9 in the DWPF. These efforts pertained to the DWPF Chemical Process Cell (CPC). CPC experiments were performed using SB9 simulant (SB9A) to qualify SB9 for sludge-only and coupled processing using the nitric-formic flowsheet in the DWPF. Two simulant batches were prepared, one representing SB8 Tank 40H and another representing SB9 Tank 51H. The simulant used for SB9 qualification testing was prepared by blending the SB8 Tank 40H and SB9 Tank 51H simulants. The blended simulant is referred to as SB9A. Eleven CPC experiments were run with an acid stoichiometry ranging between 105% and 145% of the Koopman minimum acid equation (KMA), which is equivalent to 109.7% and 151.5% of the Hsu minimum acid factor. Three runs were performed in the 1L laboratory scale setup, whereas the remainder were in the 4L laboratory scale setup. Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) cycles were performed on nine of the eleven. The other two were SRAT cycles only. One coupled flowsheet and one extended run were performed for SRAT and SME processing. Samples of the condensate, sludge, and off-gas were taken to monitor the chemistry of the CPC experiments.
Senn, Olivier; Kilchenmann, Lorenz; von Georgi, Richard; Bullerjahn, Claudia
2016-01-01
This study tested the influence of expert performance microtiming on listeners' experience of groove. Two professional rhythm section performances (bass/drums) in swing and funk style were recorded, and the performances' original microtemporal deviations from a regular metronomic grid were scaled to several levels of magnitude. Music expert (n = 79) and non-expert (n = 81) listeners rated the groove qualities of stimuli using a newly developed questionnaire that measures three dimensions of the groove experience (Entrainment, Enjoyment, and the absence of Irritation). Findings show that music expert listeners were more sensitive to microtiming manipulations than non-experts. Across both expertise groups and for both styles, groove ratings were high for microtiming magnitudes equal or smaller than those originally performed and decreased for exaggerated microtiming magnitudes. In particular, both the fully quantized music and the music with the originally performed microtiming pattern were rated equally high on groove. This means that neither the claims of PD theory (that microtiming deviations are necessary for groove) nor the opposing exactitude hypothesis (that microtiming deviations are detrimental to groove) were supported by the data. PMID:27761117
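The stimulus manipulation described above, scaling a performance's microtemporal deviations from a metronomic grid, amounts to interpolating (or extrapolating) each onset between its quantized grid position and its played position. A minimal sketch, with invented onset times and a hypothetical scaling factor `k`:

```python
def scale_microtiming(onsets, grid, k):
    """Scale each onset's deviation from its quantized grid position by k.
    k = 0 fully quantizes; k = 1 reproduces the original performance;
    k > 1 exaggerates the microtiming (all times in seconds)."""
    return [g + k * (o - g) for o, g in zip(onsets, grid)]

grid = [0.0, 0.5, 1.0, 1.5]          # metronomic grid positions
played = [0.01, 0.48, 1.03, 1.49]    # performed onsets (illustrative)
print(scale_microtiming(played, grid, 0.0))  # → [0.0, 0.5, 1.0, 1.5]
```

The study's finding that ratings fell only for k above the originally performed magnitude corresponds, in this sketch, to the exaggerated k > 1 conditions.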
Menon, Samir; Zhu, Jack; Goyal, Deeksha; Khatib, Oussama
2017-07-01
Haptic interfaces compatible with functional magnetic resonance imaging (Haptic fMRI) promise to enable rich motor neuroscience experiments that study how humans perform complex manipulation tasks. Here, we present a large-scale study (176 scan runs, 33 scan sessions) that characterizes the reliability and performance of one such electromagnetically actuated device, Haptic fMRI Interface 3 (HFI-3). We outline engineering advances that ensured HFI-3 did not interfere with fMRI measurements. Observed fMRI temporal noise levels with HFI-3 operating were at the fMRI baseline (0.8% noise to signal). We also present results from HFI-3 experiments demonstrating that high resolution fMRI can be used to study spatio-temporal patterns of blood oxygenation level-dependent (BOLD) activation. These experiments include motor planning, goal-directed reaching, and visually-guided force control. Observed fMRI responses are consistent with existing literature, which supports Haptic fMRI's effectiveness at studying the brain's motor regions.
NASA Technical Reports Server (NTRS)
1976-01-01
Results of studies performed on the magnetospheric and plasma portion of the AMPS are presented. Magnetospheric and plasma in space experiments and instruments are described along with packaging (palletization) concepts. The described magnetospheric and plasma experiments were considered as separate entities. Instrumentation requirements and operations were formulated to provide sufficient data for unambiguous interpretation of results without relying upon other experiments of the series. Where ground observations are specified, an assumption was made that large-scale additions or modifications to existing facilities were not required.
NASA Technical Reports Server (NTRS)
Deyoung, James A.; Klepczynski, William J.; Mckinley, Angela Davis; Powell, William M.; Mai, Phu V.; Hetzel, P.; Bauch, A.; Davis, J. A.; Pearce, P. R.; Baumont, Francoise S.
1995-01-01
The international transatlantic time and frequency transfer experiment was designed by participating laboratories and was implemented during 1994 to test the international communications path involving a large number of transmitting stations. This paper presents empirically determined clock and time scale differences, time and frequency domain instabilities, and a representative power spectral density analysis. Co-location experiments, which will allow absolute calibration of the participating laboratories, have been performed. Absolute time differences and accuracy levels of this experiment will be assessed in the near future.
CFD Analysis in Advance of the NASA Juncture Flow Experiment
NASA Technical Reports Server (NTRS)
Lee, H. C.; Pulliam, T. H.; Neuhart, D. H.; Kegerise, M. A.
2017-01-01
NASA through its Transformational Tools and Technologies Project (TTT) under the Advanced Air Vehicle Program, is supporting a substantial effort to investigate the formation and origin of separation bubbles found on wing-body juncture zones. The flow behavior in these regions is highly complex, difficult to measure experimentally, and challenging to model numerically. Multiple wing configurations were designed and evaluated using Computational Fluid Dynamics (CFD), and a series of wind tunnel risk reduction tests were performed to further down-select the candidates for the final experiment. This paper documents the CFD analysis done in conjunction with the 6 percent scale risk reduction experiment performed in NASA Langley's 14- by 22-Foot Subsonic Tunnel. The combined CFD and wind tunnel results ultimately helped the Juncture Flow committee select the wing configurations for the final experiment.
The Pitch Imagery Arrow Task: Effects of Musical Training, Vividness, and Mental Control
Gelding, Rebecca W.; Thompson, William Forde; Johnson, Blake W.
2015-01-01
Musical imagery is a relatively unexplored area, partly because of deficiencies in existing experimental paradigms, which are often difficult, unreliable, or do not provide objective measures of performance. Here we describe a novel protocol, the Pitch Imagery Arrow Task (PIAT), which induces and trains pitch imagery in both musicians and non-musicians. Given a tonal context and an initial pitch sequence, arrows are displayed to elicit a scale-step sequence of imagined pitches, and participants indicate whether the final imagined tone matches an audible probe. It is a staircase design that accommodates individual differences in musical experience and imagery ability. This new protocol was used to investigate the roles that musical expertise, self-reported auditory vividness and mental control play in imagery performance. Performance on the task was significantly better for participants who employed a musical imagery strategy compared to participants who used an alternative cognitive strategy and positively correlated with scores on the Control subscale from the Bucknell Auditory Imagery Scale (BAIS). Multiple regression analysis revealed that Imagery performance accuracy was best predicted by a combination of strategy use and scores on the Vividness subscale of BAIS. These results confirm that competent performance on the PIAT requires active musical imagery and is very difficult to achieve using alternative cognitive strategies. Auditory vividness and mental control were more important than musical experience in the ability to perform manipulation of pitch imagery. PMID:25807078
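The staircase design mentioned above adapts task difficulty to each participant's ability. A generic 1-up/1-down sketch is shown below; the step rules, starting level, and bounds are illustrative assumptions, not the PIAT's actual parameters:

```python
def staircase(responses, start=2, min_len=1, max_len=8):
    """Generic 1-up/1-down staircase: lengthen the imagined pitch sequence
    after a correct response, shorten it after an error.
    Returns the difficulty level presented on each trial."""
    level, levels = start, []
    for correct in responses:
        levels.append(level)
        level = min(level + 1, max_len) if correct else max(level - 1, min_len)
    return levels

# two correct answers raise the difficulty; an error lowers it again
print(staircase([True, True, False, True]))  # → [2, 3, 4, 3]
```

A staircase of this kind converges on the sequence length each participant can just manage, which is what lets a single task accommodate both musicians and non-musicians.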
V&V Of CFD Modeling Of The Argonne Bubble Experiment: FY15 Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyt, Nathaniel C.; Wardle, Kent E.; Bailey, James L.
2015-09-30
In support of the development of accelerator-driven production of the fission product Mo-99, computational fluid dynamics (CFD) simulations of an electron-beam irradiated, experimental-scale bubble chamber have been conducted in order to aid in interpretation of existing experimental results, provide additional insights into the physical phenomena, and develop predictive thermal hydraulic capabilities that can be applied to full-scale target solution vessels. Toward that end, a custom hybrid Eulerian-Eulerian-Lagrangian multiphase solver was developed, and simulations have been performed on high-resolution meshes. Good agreement between experiments and simulations has been achieved, especially with respect to the prediction of the maximum temperature of the uranyl sulfate solution in the experimental vessel. These positive results suggest that the simulation methodology that has been developed will prove to be suitable to assist in the development of full-scale production hardware.
Theoretical Studies of Liquid He-4 Near the Superfluid Transition
NASA Technical Reports Server (NTRS)
Manousakis, Efstratios
2002-01-01
We performed theoretical studies of liquid helium by applying state of the art simulation and finite-size scaling techniques. We calculated universal scaling functions for the specific heat and superfluid density for various confining geometries relevant for experiments such as the confined helium experiment and other ground based studies. We also studied microscopically how the substrate imposes a boundary condition on the superfluid order parameter as the superfluid film grows layer by layer. Using path-integral Monte Carlo, a quantum Monte Carlo simulation method, we investigated the rich phase diagram of helium monolayer, bilayer and multilayer on a substrate such as graphite. We find excellent agreement with the experimental results using no free parameters. Finally, we carried out preliminary calculations of transport coefficients such as the thermal conductivity for bulk or confined helium systems and of their scaling properties. All our studies provide theoretical support for various experimental studies in microgravity.
Properties of piezoresistive silicon nano-scale cantilevers with applications to BioNEMS
NASA Astrophysics Data System (ADS)
Arlett, Jessica Lynn
Over the last decade a great deal of interest has been raised in applications of Microelectromechanical Sensors [MEMS] for the detection of biological molecules and to the study of their forces of interaction. Experiments in these areas have included Force Spectroscopy (Chemical Force Microscopy), MEMS patch clamp technology, and surface stress sensors. All of these technologies suffer from limitations on temporal response and involve devices with active surface areas that are large compared to molecular dimensions. Biofunctionalized nanoelectromechanical systems (BioNEMS) have the potential to overcome both of these hurdles, offering important new prospects for single-molecule force assays that are amenable to large scale integration. Results are presented here on the characterization of piezoresistive silicon cantilevers with applications to BioNEMS devices. The cantilevers were characterized by studying their response in gaseous ambients under a number of drive conditions including magnetic, piezoelectric, and thermal actuation, in addition to passive detection of the thermomechanical response. The measurements were performed at liquid helium temperature, at room temperature, and over a range of pressures (atmospheric pressure to 30mT). Theoretical studies have been performed on the response of these devices to Brownian fluctuations in fluid, on the feasibility of these devices as surface stress sensors, and on improvements in device design as compared to piezoresistive surface stress sensors currently discussed in the literature. The devices were encapsulated in microfluidics and measurements were performed to show the noise floor in fluid. The piezoresistive response of the device in fluid was shown through the use of pulsatory fluidic drive. As a proof of concept, biodetection experiments are presented for biotin labeled beads. The biofunctionalization for the latter experiment was performed entirely within the microfluidics. 
A discussion of how these experiments can be extended to other cells, spores, and molecules is presented.
Eustachian Tube Mucosal Inflammation Scale Validation Based on Digital Video Images.
Kivekäs, Ilkka; Pöyhönen, Leena; Aarnisalo, Antti; Rautiainen, Markus; Poe, Dennis
2015-12-01
The most common cause for Eustachian tube dilatory dysfunction is mucosal inflammation. The aim of this study was to validate a scale for Eustachian tube mucosal inflammation, based on digital video clips obtained during diagnostic rigid endoscopy. A previously described four-step scale for grading the degree of inflammation of the mucosa of the Eustachian tube lumen was used for this validation study. A tutorial for use of the scale, including static images and 10 second video clips, was presented to 26 clinicians with various levels of experience. Each clinician then reviewed 35 short digital video samples of Eustachian tubes from patients and rated the degree of inflammation. A subset of the clinicians performed a second rating of the same video clips at a subsequent time. Statistical analysis of the ratings provided inter- and intrarater reliability scores. Twenty-six clinicians with various levels of experience rated a total of 35 videos. Thirteen clinicians rated the videos twice. The overall correlation coefficient for the rating of inflammation severity was relatively good (0.74; 95% confidence interval, 0.72-0.76). The intraclass correlation coefficient for intrarater reliability was high (0.86). For those who rated videos twice, the intraclass correlation coefficient improved after the first rating (0.73 to 0.76), but the improvement was not statistically significant. The scale used for Eustachian tube mucosal inflammation is reliable and can be used with a high level of consistency by clinicians with various levels of experience.
A search for anisotropy in the cosmic microwave background on intermediate angular scales
NASA Technical Reports Server (NTRS)
Alsop, D. C.; Cheng, E. S.; Clapp, A. C.; Cottingham, D. A.; Fischer, M. L.; Gundersen, J. O.; Kreysa, E.; Lange, A. E.; Lubin, P. M.; Meinhold, P. R.
1992-01-01
The results of a search for anisotropy in the cosmic microwave background on angular scales near 1 deg are presented. Observations were simultaneously performed in bands centered at frequencies of 6, 9, and 12 per cm with a multifrequency bolometric receiver mounted on a balloon-borne telescope. The statistical sensitivity of the data is the highest reported to date at this angular scale, which is of critical importance for understanding the formation of structure in the universe. Signals in excess of random were observed in the data. The experiment, data analysis, and interpretation are described.
Design and performance of the spin asymmetries of the nucleon experiment
NASA Astrophysics Data System (ADS)
Maxwell, J. D.; Armstrong, W. R.; Choi, S.; Jones, M. K.; Kang, H.; Liyanage, A.; Meziani, Z.-E.; Mulholland, J.; Ndukum, L.; Rondón, O. A.; Ahmidouch, A.; Albayrak, I.; Asaturyan, A.; Ates, O.; Baghdasaryan, H.; Boeglin, W.; Bosted, P.; Brash, E.; Brock, J.; Butuceanu, C.; Bychkov, M.; Carlin, C.; Carter, P.; Chen, C.; Chen, J.-P.; Christy, M. E.; Covrig, S.; Crabb, D.; Danagoulian, S.; Daniel, A.; Davidenko, A. M.; Davis, B.; Day, D.; Deconinck, W.; Deur, A.; Dunne, J.; Dutta, D.; El Fassi, L.; Elaasar, M.; Ellis, C.; Ent, R.; Flay, D.; Frlez, E.; Gaskell, D.; Geagla, O.; German, J.; Gilman, R.; Gogami, T.; Gomez, J.; Goncharenko, Y. M.; Hashimoto, O.; Higinbotham, D. W.; Horn, T.; Huber, G. M.; Jones, M.; Kalantarians, N.; Kang, H. K.; Kawama, D.; Keith, C.; Keppel, C.; Khandaker, M.; Kim, Y.; King, P. M.; Kohl, M.; Kovacs, K.; Kubarovsky, V.; Li, Y.; Liyanage, N.; Luo, W.; Mamyan, V.; Markowitz, P.; Maruta, T.; Meekins, D.; Melnik, Y. M.; Mkrtchyan, A.; Mkrtchyan, H.; Mochalov, V. V.; Monaghan, P.; Narayan, A.; Nakamura, S. N.; Nuruzzaman; Pentchev, L.; Pocanic, D.; Posik, M.; Puckett, A.; Qiu, X.; Reinhold, J.; Riordan, S.; Roche, J.; Sawatzky, B.; Shabestari, M.; Slifer, K.; Smith, G.; Soloviev, L.; Solvignon, P.; Tadevosyan, V.; Tang, L.; Vasiliev, A. N.; Veilleux, M.; Walton, T.; Wesselmann, F.; Wood, S. A.; Yao, H.; Ye, Z.; Zhu, L.
2018-03-01
The Spin Asymmetries of the Nucleon Experiment (SANE) performed inclusive, double-polarized electron scattering measurements of the proton at the Continuous Electron Beam Accelerator Facility at Jefferson Lab. A novel detector array observed scattered electrons of four-momentum transfer 2.5…
Karam, Amanda L; McMillan, Catherine C; Lai, Yi-Chun; de Los Reyes, Francis L; Sederoff, Heike W; Grunden, Amy M; Ranjithan, Ranji S; Levis, James W; Ducoste, Joel J
2017-06-14
The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software.
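The live data-acquisition step described above, continuous pH, temperature, and light monitoring, can be sketched as a simple periodic logger. The sensor-reading callable, file name, and sampling interval below are illustrative assumptions; real deployments would wire `read_sensors` to the hardware-specific driver for their analog sensors:

```python
import csv
import time

def log_readings(read_sensors, path, interval_s=60, n_samples=3):
    """Append timestamped pH/temperature/light readings to a CSV log.
    `read_sensors` is a user-supplied callable returning (pH, temp_C, light);
    the actual sensor interface is hardware-specific and assumed here."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(n_samples):
            ph, temp, light = read_sensors()
            writer.writerow([time.time(), ph, temp, light])
            time.sleep(interval_s)

# example with a stubbed sensor function (two immediate samples)
log_readings(lambda: (7.1, 25.0, 150.0), "pbr_log.csv",
             interval_s=0, n_samples=2)
```

Logging to an append-only file keeps a growth experiment's record intact across restarts, which matters when batch runs span days.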
NMR spectroscopy of single sub-nL ova with inductive ultra-compact single-chip probes
Grisi, Marco; Vincent, Franck; Volpe, Beatrice; Guidetti, Roberto; Harris, Nicola; Beck, Armin; Boero, Giovanni
2017-01-01
Nuclear magnetic resonance (NMR) spectroscopy enables non-invasive chemical studies of intact living matter. However, the use of NMR at the volume scale typical of microorganisms is hindered by sensitivity limitations, and experiments on single intact organisms have so far been limited to entities having volumes larger than 5 nL. Here we show NMR spectroscopy experiments conducted on single intact ova of 0.1 and 0.5 nL (i.e. 10 to 50 times smaller than previously achieved), thereby reaching the relevant volume scale where life development begins for a broad variety of organisms, humans included. Performing experiments with inductive ultra-compact (1 mm²) single-chip NMR probes, consisting of a low-noise transceiver and a multilayer 150 μm planar microcoil, we demonstrate that the achieved limit of detection (about 5 pmol of ¹H nuclei) is sufficient to detect endogenous compounds. Our findings suggest that single-chip probes are promising candidates to enable NMR-based study and selection of microscopic entities at biologically relevant volume scales. PMID:28317887
Karam, Amanda L.; McMillan, Catherine C.; Lai, Yi-Chun; de los Reyes, Francis L.; Sederoff, Heike W.; Grunden, Amy M.; Ranjithan, Ranji S.; Levis, James W.; Ducoste, Joel J.
2017-01-01
The optimal design and operation of photosynthetic bioreactors (PBRs) for microalgal cultivation is essential for improving the environmental and economic performance of microalgae-based biofuel production. Models that estimate microalgal growth under different conditions can help to optimize PBR design and operation. To be effective, the growth parameters used in these models must be accurately determined. Algal growth experiments are often constrained by the dynamic nature of the culture environment, and control systems are needed to accurately determine the kinetic parameters. The first step in setting up a controlled batch experiment is live data acquisition and monitoring. This protocol outlines a process for the assembly and operation of a bench-scale photosynthetic bioreactor that can be used to conduct microalgal growth experiments. This protocol describes how to size and assemble a flat-plate, bench-scale PBR from acrylic. It also details how to configure a PBR with continuous pH, light, and temperature monitoring using a data acquisition and control unit, analog sensors, and open-source data acquisition software. PMID:28654054
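The live data acquisition and monitoring step this protocol describes can be sketched minimally in Python. The `read_*` functions below are invented stand-ins for a real DAQ driver; the protocol's actual hardware interface would supply them.

```python
import math

# Minimal sketch of the monitoring loop described in the protocol: poll pH,
# temperature, and light sensors each cycle and keep the readings as rows.
# The read_* functions simulate sensors and are NOT a real DAQ API.
def read_ph(t):
    return 7.0 + 0.1 * math.sin(t)       # simulated pH sensor

def read_temp_c(t):
    return 25.0 + 0.5 * math.cos(t)      # simulated temperature sensor

def read_light(t):
    return 200.0                          # simulated light sensor (umol/m2/s)

def acquire(n_samples):
    """Collect n_samples timestamped readings from all three channels."""
    rows = []
    for t in range(n_samples):
        rows.append({"t": t, "pH": read_ph(t),
                     "temp_C": read_temp_c(t), "light_umol": read_light(t)})
    return rows

data = acquire(10)
assert len(data) == 10
assert all(6.8 < r["pH"] < 7.2 for r in data)
```

In a real setup the loop body would call the data acquisition unit's driver and append each row to a log file rather than an in-memory list.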
NASA Astrophysics Data System (ADS)
Cao, Chao
2009-03-01
Nano-scale physical phenomena and processes, especially those in electronics, have drawn great attention in the past decade. Experiments have shown that electronic and transport properties of functionalized carbon nanotubes are sensitive to adsorption of gas molecules such as H2, NO2, and NH3. Similar measurements have also been performed to study adsorption of proteins on other semiconductor nano-wires. These experiments suggest that nano-scale systems can be useful for making future chemical and biological sensors. Aiming to understand the physical mechanisms underlying and governing property changes at the nano-scale, we start off by investigating, via first-principles methods, the electronic structure of Pd-CNT before and after hydrogen adsorption, and continue with coherent electronic transport using non-equilibrium Green's function techniques combined with density functional theory. Once our results are fully analyzed they can be used to interpret and understand experimental data, with a few difficult issues to be addressed. Finally, we discuss a newly developed multi-scale computing architecture, OPAL, that coordinates simultaneous execution of multiple codes. Inspired by the capabilities of this computing framework, we present a scenario of future modeling and simulation of multi-scale, multi-physical processes.
NASA Astrophysics Data System (ADS)
Huenges, Ernst; Trautwein, Ute; Legarth, Björn; Zimmermann, Günter
2006-10-01
The Rotliegend of the North German basin is the target reservoir of an interdisciplinary investigation program to develop a technology for the generation of geothermal electricity from low-enthalpy reservoirs. An in situ downhole laboratory was established in the 4.3 km deep well Groß Schönebeck with the purpose of developing appropriate stimulation methods to increase permeability of deep aquifers by enhancing or creating secondary porosity and flow paths. The goal is to learn how to enhance the inflow performance of a well from a variety of rock types in low-permeability geothermal reservoirs. A change in effective stress due to fluid pressure was observed to be one of the key parameters influencing flow properties both downhole and in laboratory experiments on reservoir rocks. Fluid pressure variation was induced using proppant-gel-frac techniques as well as waterfrac techniques in several different new experiments in the borehole. A pressure step test indicates generation and extension of multiple fractures with closure pressures between 6 and 8.4 MPa above formation pressure. In a 24-hour production test, 859 m³ of water was produced from depth, indicating an increase of productivity in comparison with former tests. Different depth sections and transmissibility values were observed in the borehole depending on fluid pressure. In addition, laboratory experiments were performed on core samples from the sandstone reservoir under uniaxial strain conditions, i.e., no lateral strain, constant axial load. The experiments on the borehole and the laboratory scale were realized on the same rock types under comparable stress conditions with similar pore pressure variations. Nevertheless, stress dependences of permeability are not easy to compare from scale to scale. Laboratory investigations reflect permeability variations due to microstructural heterogeneities, whereas the behavior in the borehole is dominated by the generation of connections to large-scale structural patterns.
Friction and Wear on the Atomic Scale
NASA Astrophysics Data System (ADS)
Gnecco, Enrico; Bennewitz, Roland; Pfeiffer, Oliver; Socoliuc, Anisoara; Meyer, Ernst
Friction has long been the subject of research: the empirical da Vinci-Amontons friction laws have been common knowledge for centuries. Macroscopic experiments performed by the school of Bowden and Tabor revealed that macroscopic friction can be related to the collective action of small asperities. Over the last 15 years, experiments performed with the atomic force microscope have provided new insights into the physics of single asperities sliding over surfaces. This development, together with the results from complementary experiments using surface force apparatus and the quartz microbalance, have led to the new field of nanotribology. At the same time, increasing computing power has permitted the simulation of processes that occur during sliding contact involving several hundreds of atoms. It has become clear that atomic processes cannot be neglected when interpreting nanotribology experiments. Even on well-defined surfaces, experiments have revealed that atomic structure is directly linked to friction force. This chapter will describe friction force microscopy experiments that reveal, more or less directly, atomic processes during sliding contact.
Kluwe-Schiavon, Bruno; Viola, Thiago Wendt; Grassi-Oliveira, Rodrigo
2016-01-01
There is strong evidence to indicate that childhood maltreatment can negatively affect both physical and mental health, and there is increasing interest in understanding the occurrence and consequences of such experiences. While several tools have been developed to retrospectively investigate childhood maltreatment experiences, most of them do not investigate the experience of witnessing family violence during childhood or bullying exposure. Moreover, the majority of scales do not identify when these experiences may have occurred, who was involved, or the feelings evoked, such as helplessness or terror. The Maltreatment and Abuse Chronology of Exposure (MACE) scale was developed to overcome these limitations. In view of the improvements this new tool offers over previous self-report instruments, and of the small number of self-report questionnaires for childhood maltreatment assessment available in Brazil, this study was conducted to perform a cross-cultural adaptation of the MACE scale for Brazilian Portuguese. The following steps were performed: translation, back-translation, committee review for semantic and conceptual evaluation, and an acceptability trial for equivalence. Semantic and structural changes were made to the interview to adapt it for the Brazilian culture, and all 75 of the items that comprise the longer version of the MACE were translated. The results of the acceptability trial suggest that the items are comprehensible. The MACE scales may be useful tools for investigation of childhood maltreatment and make a valuable contribution to research in Brazil. Future studies should consider testing the availability and reliability of the three versions of the instrument translated into Brazilian Portuguese.
NASA Technical Reports Server (NTRS)
Barankiewicz, Wendy S.; Perusek, Gail P.; Ibrahim, Mounir B.
1992-01-01
Full temperature ejector model simulations are expensive, and difficult to implement experimentally. If an approximate similarity principle could be established, properly chosen performance parameters should be similar for both hot and cold flow tests if the initial Mach number and total pressures of the flow field are held constant. Existing ejector data is used to explore the utility of one particular similarity principle; the Munk and Prim similarity principle for isentropic flows. Static performance test data for a full-scale thrust augmenting ejector are analyzed for primary flow temperatures up to 1560 R. At different primary temperatures, exit pressure contours are compared for similarity. A nondimensional flow parameter is then used to eliminate primary nozzle temperature dependence and verify similarity between the hot and cold flow experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM.
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
2017-11-29
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable–region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM.
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
NASA Astrophysics Data System (ADS)
Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.
2017-11-01
Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. 
The results of this work indicate it is unlikely that a single evaluation measure for all variables in an IAM exists, and therefore sector-by-sector evaluation may be necessary.
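The masking effect of global aggregation that the authors report can be illustrated with a toy deviation-based measure. The region names and numbers below are invented, not GCAM output: two regional biases of opposite sign cancel exactly in the global sum.

```python
import numpy as np

# Hypothetical hindcast: modeled vs. observed land allocation in two regions
# over three time periods. Regional errors offset, so the global aggregate
# looks perfect even though both regions are wrong.
observed = {"region_a": np.array([10.0, 11.0, 12.0]),
            "region_b": np.array([20.0, 21.0, 22.0])}
modeled  = {"region_a": np.array([12.0, 13.0, 14.0]),   # biased +2 everywhere
            "region_b": np.array([18.0, 19.0, 20.0])}   # biased -2 everywhere

def rmse(x, y):
    """Root-mean-square deviation, one simple deviation-based measure."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

# Per-region measures reveal the error...
regional = {r: rmse(modeled[r], observed[r]) for r in observed}

# ...while the globally aggregated measure hides it entirely.
global_rmse = rmse(sum(modeled.values()), sum(observed.values()))

print(regional)      # {'region_a': 2.0, 'region_b': 2.0}
print(global_rmse)   # 0.0
```

This is exactly why the paper argues for applying the measures at regional as well as global scales.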
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field-based fracture model is utilized to capture the complex crack-growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO₂ and comparing the predictions with experiments.
Medical students' clerkship experiences and self-perceived competence in clinical skills.
Katowa-Mukwato, P; Andrews, B; Maimbolwa, M; Lakhi, S; Michelo, C; Mulla, Y; Banda, S S
2014-01-01
In a traditional curriculum, medical students are expected to acquire clinical competence through the apprenticeship model using the Halstedian "see one, do one, teach one" approach. The University of Zambia School of Medicine used a traditional curriculum model from 1966 until 2011, when a competence-based curriculum was implemented. To explore medical students' clerkship experiences and self-perceived competence in clinical skills, a cross-sectional survey was conducted on 5th-, 6th-, and 7th-year medical students of the University of Zambia School of Medicine two months prior to final examinations. Students were asked to rate their clerkship experiences with respect to specific skills on a scale of 1 to 4 and their level of self-perceived competence on a scale of 1 to 3. Skills evaluated were in four main domains: history taking and communication; physical examination; procedural skills; and professionalism, teamwork, and medical decision making. Using the Statistical Package for the Social Sciences (SPSS), correlations were performed between experience and self-perceived competence on specific skills, within domains, and overall. Out of 197 clinical students, 138 (70%) participated in the survey. The results showed a significant increase in the proportion of students performing different skills and reporting feeling very competent with each additional clinical year. The overall correlation between experience and self-perceived competence was moderate (0.55). On individual skills, the highest correlations between experience and self-perceived competence were observed mainly on medical and surgical procedural skills, the highest being 0.82 for nasogastric tube insertion and 0.76 for endotracheal intubation. Despite the general improvement in skills experiences and self-perceived competence, some deficiencies were noted, as significant numbers of final-year students had never attempted common important procedures, especially those performed in emergency situations.
Deficiencies in certain skills may call for incorporation of teaching/learning methods that broaden students' exposure to such skills.
Large-scale boiling experiments of the flooded cavity concept for in-vessel core retention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, T.Y.; Slezak, S.E.; Bentz, J.H.
1994-03-01
This paper presents results of ex-vessel boiling experiments performed in the CYBL (CYlindrical BoiLing) facility. CYBL is a reactor-scale facility for confirmatory research of the flooded cavity concept for accident management. CYBL has a tank-within-a-tank design; the inner tank simulates the reactor vessel and the outer tank simulates the reactor cavity. Experiments with uniform and edge-peaked heat flux distributions up to 20 W/cm² across the vessel bottom were performed. Boiling outside the reactor vessel was found to be subcooled nucleate boiling. The subcooling is mainly due to the gravity head which results from flooding the sides of the reactor vessel. The boiling process exhibits a cyclic pattern with four distinct phases: direct liquid/solid contact, bubble nucleation and growth, coalescence, and vapor mass dispersion (ejection). The results suggest that under prototypic heat load and heat flux distributions, the flooded cavity in a passive pressurized water reactor like the AP-600 should be capable of cooling the reactor pressure vessel in the central region of the lower head that is addressed by these tests.
Irradiation of Materials using Short, Intense Ion Beams
NASA Astrophysics Data System (ADS)
Seidl, Peter; Ji, Q.; Persaud, A.; Feinberg, E.; Silverman, M.; Sulyman, A.; Waldron, W. L.; Schenkel, T.; Barnard, J. J.; Friedman, A.; Grote, D. P.; Gilson, E. P.; Kaganovich, I. D.; Stepanov, A.; Zimmer, M.
2016-10-01
We present experiments studying material properties created with nanosecond and millimeter-scale ion beam pulses on the Neutralized Drift Compression Experiment-II at Berkeley Lab. The explored scientific topics include the dynamics of ion-induced damage in materials, materials synthesis far from equilibrium, warm dense matter, and intense beam-plasma physics. We describe the improved accelerator performance, diagnostics, and results of beam-induced irradiation of thin samples of, e.g., tin and silicon. Bunches with >3×10¹⁰ ions/pulse, 1-mm radius, and 2-30 ns FWHM duration have been created. To achieve the short pulse durations and mm-scale focal spot radii, the 1.2 MeV He+ ion beam is neutralized in a drift compression section which removes the space-charge defocusing effect during the final compression and focusing. Quantitative comparison of detailed particle-in-cell simulations with the experiment plays an important role in optimizing the accelerator performance and keeping pace with the accelerator repetition rate of <1/minute. This work was supported by the Office of Science of the US Department of Energy under contracts DE-AC0205CH11231 (LBNL), DE-AC52-07NA27344 (LLNL) and DE-AC02-09CH11466 (PPPL).
Scaled Eagle Nebula Experiments on NIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pound, Marc W.
We performed scaled laboratory experiments at the National Ignition Facility laser to assess models for the creation of pillar structures in star-forming clouds of molecular hydrogen, in particular the famous Pillars of the Eagle Nebula. Because pillars typically point towards nearby bright ultraviolet stars, sustained directional illumination appears to be critical to pillar formation. The experiments mock up illumination from a cluster of ultraviolet-emitting stars, using a novel long-duration (30-60 ns), directional, laser-driven x-ray source consisting of multiple radiation cavities illuminated in series. Our pillar models are assessed using the morphology of the Eagle Pillars observed with the Hubble Space Telescope, and measurements of column density and velocity in Eagle Pillar II obtained at the BIMA and CARMA millimeter-wave facilities. In the first experiments we assess a shielding model for pillar formation. The experimental data suggest that a shielding pillar can match the observed morphology of Eagle Pillar II, and the observed Pillar II column density and velocity, if augmented by late-time cometary growth.
Comparative performance of rubber modified hot mix asphalt under ALF loading.
DOT National Transportation Integrated Search
2003-08-01
Experiment 2 at the Louisiana ALF site involved determining the engineering benefits of using powdered rubber (PRM) in hot mix asphalt mixes. Three full scale test sections were constructed and subjected to increasing loads from the ALF. Lane 2-1 inc...
ENHANCED FORMATION OF CHLORINATED PICS BY THE ADDITION OF BROMINE
A systematic series of experiments were performed on a pilot-scale rotary kiln incinerator simulator in which liquid surrogate wastes containing varied levels of chlorine and bromine were burned. The surrogate wastes used were a series of mixtures of methylene chloride and methyl...
Validating a Geographical Image Retrieval System.
ERIC Educational Resources Information Center
Zhu, Bin; Chen, Hsinchun
2000-01-01
Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…
WISC-R Information and Digit Span Scores of American and Canadian Children.
ERIC Educational Resources Information Center
Beauchamp, David P.; And Others
1979-01-01
Differences were investigated in performance between third-grade American and Canadian children on two subtests of the Wechsler Intelligence Scale for Children Revised. Results were discussed in terms of Canadian and American curriculum contents and test-taking experiences. (Author/JKS)
A numerical tool for reproducing driver behaviour: experiments and predictive simulations.
Casucci, M; Marchitto, M; Cacciabue, P C
2010-03-01
This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a full-scale Virtual Reality simulator for validating the simulation. The predictive potential of the tool is then shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.
NASA Astrophysics Data System (ADS)
Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.
2003-12-01
Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment, in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and an Apple Cinema Display were used for stimulus presentation. Observers performed rank-order and paired-comparison tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19-51 years of age, participated in this experiment. Using a paired-comparison model and a ranking model, scales were estimated for each display and image combination, yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired-comparison data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding the ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
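The Bradley-Terry fit mentioned above can be sketched with the standard minorization-maximization (MM) updates. The win counts below are invented, and this sketch omits the study's likelihood-ratio testing; it only shows how preference strengths are estimated from paired-comparison counts.

```python
import numpy as np

# Toy paired-comparison data for 3 images: wins[i, j] = times i was
# preferred over j. (Counts invented for illustration.)
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]], dtype=float)

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths p_i, where P(i beats j) = p_i/(p_i+p_j),
    by the classic MM (Zermelo) iteration."""
    n = wins.shape[0]
    p = np.ones(n)
    games = wins + wins.T                 # total comparisons per pair (i, j)
    for _ in range(iters):
        for i in range(n):
            denom = sum(games[i, j] / (p[i] + p[j]) for j in range(n) if j != i)
            p[i] = wins[i].sum() / denom  # MM update: wins_i / expected exposure
        p /= p.sum()                      # fix the arbitrary scale
    return p

p = bradley_terry(wins)
# Image 0 wins the most comparisons, so it gets the largest strength.
assert p[0] > p[1] > p[2]
```

The estimated strengths, or their logarithms, serve as the interval-scale values compared across the two elicitation methods.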
NASA Astrophysics Data System (ADS)
Ren, Jie
2017-12-01
The process by which a kinesin motor couples its ATPase activity with concerted mechanical hand-over-hand steps is a foremost topic of molecular motor physics. Two major routes toward elucidating kinesin mechanisms are the motility performance characterization of velocity and run length, and single-molecular state detection experiments. However, these two sets of experimental approaches are largely uncoupled to date. Here, we introduce an integrative motility state analysis based on a theorized kinetic graph theory for kinesin, which, on one hand, is validated by a wealth of accumulated motility data, and, on the other hand, allows for rigorous quantification of state occurrences and chemomechanical cycling probabilities. An interesting linear scaling for kinesin motility performance across species is discussed as well. An integrative kinetic graph theory analysis provides a powerful tool to bridge motility and state characterization experiments, so as to forge a unified effort for the elucidation of the working mechanisms of molecular motors.
NASA Astrophysics Data System (ADS)
Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul
2015-09-01
Localization microscopy such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging of densely activated molecules can improve temporal resolution, which has been considered a major limitation of localization microscopy. However, such high-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON, using a quasi-continuous localization model with a sparsity prior on image space, and demonstrated it in both 2D and 3D live-cell imaging. However, it left several aspects to be improved. Here, we propose a new localization algorithm using an annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, the new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all of these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live-cell imaging experiment. The results confirmed higher localization performance in both experiments in terms of accuracy and detection rate.
Rotor Performance at High Advance Ratio: Theory versus Test
NASA Technical Reports Server (NTRS)
Harris, Franklin D.
2008-01-01
Five analytical tools have been used to study rotor performance at high advance ratio. One is representative of autogyro rotor theory in 1934 and four are representative of helicopter rotor theory in 2008. The five theories are measured against three sets of well documented, full-scale, isolated rotor performance experiments. The major finding of this study is that the decades spent by many rotorcraft theoreticians to improve prediction of basic rotor aerodynamic performance have paid off. This payoff, illustrated by comparing the CAMRAD II comprehensive code and Wheatley & Bailey theory to H-34 test data, shows that rational rotor lift-to-drag ratios are now predictable. The 1934 theory predicted L/D ratios as high as 15. CAMRAD II predictions compared well with H-34 test data having L/D ratios more on the order of 7 to 9. However, detailed examination of the selected codes compared to H-34 test data indicates that not one of the codes can predict, to engineering accuracy above an advance ratio of 0.62, the control positions and shaft angle of attack required for a given lift. No full-scale rotor performance data are available for advance ratios above 1.0, and extrapolation of currently available data to advance ratios on the order of 2.0 is unreasonable despite the needs of future rotorcraft. Therefore, it is recommended that an overly strong full-scale rotor blade set be obtained and tested in a suitable wind tunnel to at least an advance ratio of 2.5. A tail rotor from a Sikorsky CH-53 or other large single rotor helicopter should be adequate for this exploratory experiment.
Effect of Wall Shear Stress on Corrosion Inhibitor Film Performance
NASA Astrophysics Data System (ADS)
Canto Maya, Christian M.
In oil and gas production, internal corrosion of pipelines causes the highest incidence of recurring failures. Ensuring the integrity of ageing pipeline infrastructure is an increasingly important requirement. One of the most widely applied methods to reduce internal corrosion rates is the continuous injection of chemicals in very small quantities, called corrosion inhibitors. These chemical substances form thin films at the pipeline internal surface that reduce the magnitude of the cathodic and/or anodic reactions. However, the efficacy of such corrosion inhibitor films can be reduced by different factors, such as multiphase flow, due to enhanced shear stress and mass transfer effects; loss of inhibitor due to adsorption on other interfaces such as solid particles, bubbles and droplets entrained by the bulk phase; and chemical interaction with other incompatible substances present in the stream. The first part of the present project investigated the electrochemical behavior of two organic corrosion inhibitors (a TOFA/DETA imidazolinium and an alkylbenzyl dimethyl ammonium chloride), with and without an inorganic salt (sodium thiosulfate), and the resulting enhancement of inhibitor performance. The second part of the work explored the performance of corrosion inhibitors under multiphase (gas/liquid, solid/liquid) flow. The effect of gas/liquid multiphase flow was investigated using small and large scale apparatus. The small scale tests were conducted using a glass cell and a submersed jet impingement attachment with three different hydrodynamic patterns (water jet, CO2 bubble impact, and water vapor cavitation). The large scale experiments were conducted using different flow loops (hilly terrain and standing slug systems).
Measurements of weight loss, linear polarization resistance (LPR), and adsorption mass (using an electrochemical quartz crystal microbalance, EQCM) were used to quantify the effect of wall shear stress on the performance and integrity of corrosion inhibitor films. Different scenarios were evaluated in this section of the work, such as the loss of corrosion inhibitor due to the formation of foam, and the effect of different substrates on the adsorption of corrosion inhibitor. Erosion/corrosion effects due to solids carried by a multiphase flow were investigated both on a small and large scale. Small scale experiments were performed in order to determine whether the corrosion inhibitor concentration was diminished because of adsorption onto the large surface area of entrained solid particles. The large scale experiments were done to evaluate the effect of mechanical erosion corrosion on inhibitor film performance, and vice versa. The analysis of the results obtained by electrochemical characterization shows that the adsorption mechanism having a corrosion inhibitor competing with water molecules for a place on the steel surface is an accurate approach to describe this phenomenon. From the experimental results obtained in the multiphase part of this research project, it can be concluded that the performance of corrosion inhibitor films is not significantly impacted by mechanical forces alone; even under the worst case scenarios tested here (standing slug and erosion/corrosion). Reduction of inhibitor performance was found to be primarily due to the loss of inhibitor due to consumption by adsorption particularly when a gas phase was present, leading to foam formation.
A Purpose-Driven Fourth Year of Medical School.
Dewan, Mantosh; Norcini, John
2017-10-03
The fourth year of medical school has been repeatedly found to be ineffective, and concerns exist about its purpose and academic quality, as well as grade inflation. Since Flexner, the purpose of undergraduate medical training has moved from readiness for independent practice to readiness for postgraduate training. However, training directors report that medical graduates are inadequately prepared to enter residency. The authors propose a fourth year with two components: first, a yearlong, longitudinal ambulatory experience of at least three days each week on an interprofessional team with consistent faculty supervision and mentoring, increasing independence, and a focus on education; and second, rigorous clinical-scales-based assessment of meaningful outcomes. In the proposed model, the medical student has generous time with a limited panel of patients, and increasing autonomy, with faculty moving from supervising physicians to collaborating physicians. There is regular assessment and formative feedback. This more independent, longitudinal clinical experience uniquely allows assessment of the most meaningful work-based performance outcomes, that is, patient outcomes assessed by validated clinical scales. The proposed fourth year will require a realignment of resources and faculty time; however, models already exist. Barriers and possible solutions are discussed. A purpose-driven, assessment-rich fourth year with patient and supervisor continuity will provide real-world experience, making medical graduates more competent and confident on the first day of residency. Use of clinical scales will also allow educators new confidence that the performance-based competence of these more experienced and expert graduates leads to demonstrable collaboration, healing, and good patient outcomes.
Faksness, Liv-Guri; Brandvik, Per Johan; Daae, Ragnhild L; Leirvik, Frode; Børseth, Jan Fredrik
2011-05-01
A large-scale field experiment took place in the marginal ice zone in the Barents Sea in May 2009. Fresh oil (7000 L) was released uncontained between the ice floes to study oil weathering and spreading in ice and surface water. A detailed monitoring of oil-in-water and ice interactions was performed throughout the six-day experiment. In addition, meteorological and oceanographic data were recorded to monitor wind speed and direction, air temperature, currents and ice floe movements. The monitoring showed low concentrations of dissolved hydrocarbons, and the predicted acute toxicity was low. The ice field drifted nearly 80 km during the experimental period, and although the oil drifted with the ice, it remained contained between the ice floes. Copyright © 2011 Elsevier Ltd. All rights reserved.
Performance of ultrafiltration membrane process combined with coagulation/sedimentation.
Jang, N Y; Watanabe, Y; Minegishi, S
2005-01-01
Effects of coagulation/sedimentation as a pre-treatment on the dead-end ultrafiltration (UF) membrane process were studied in terms of membrane fouling and removal efficiency of natural dissolved organic matter, using Chitose River water. Two types of experiment were carried out: one was bench-scale membrane filtration with a jar test, and the other was a membrane filtration pilot plant combined with the Jet Mixed Separator (JMS) as a pre-coagulation/sedimentation unit. In the bench-scale experiment, the effects of coagulant dosage, pH and membrane operating pressure on membrane fouling and the removal efficiency of natural dissolved organic matter were investigated. In the pilot plant experiment, we also investigated the effect of pre-coagulation/sedimentation on membrane fouling and the removal efficiency of natural dissolved organic matter. Coagulation/sedimentation prior to the membrane filtration process controlled membrane fouling and increased the removal efficiency of natural dissolved organic matter.
Processing of the WLCG monitoring data using NoSQL
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
Enhancing Self-Efficacy and Performance: An Experimental Comparison of Psychological Techniques.
Wright, Bradley James; O'Halloran, Paul Daniel; Stukas, Arthur Anthony
2016-01-01
We assessed how 6 psychological performance enhancement techniques (PETs) differentially improved self-efficacy (SE) and skill performance. We also assessed whether vicarious experiences and verbal persuasion as posited sources of SE (Bandura, 1982) were supported and, further, if the effects of the 6 PETs remained after controlling for achievement motivation traits and self-esteem. A within-subject design assessed each individual across 2 trials for 3 disparate PETs. A between-groups design assessed differences between PETs paired against each other for 3 similar novel tasks. Participants (N = 96) performed 2 trials of 10 attempts at each of the tasks (kick, throw, golf putt) in a counterbalanced sequence using their nondominant limb. Participants completed the Sport Orientation Questionnaire, Rosenberg Self-Esteem Scale, and General Self-Efficacy Scale and were randomly allocated to either the modeling or imagery, goal-setting or instructional self-statement, or knowledge-of-results or motivational feedback conditions aligned with each task. An instructional self-statement improved performance better than imagery, modeling, goal setting, and motivational and knowledge-of-results augmented feedback. Motivational auditory feedback most improved SE. Increased SE change scores were related to increased performance difference scores on all tasks after controlling for age, sex, achievement motivation, and self-esteem. Some sources of SE may be more influential than others on both SE and performance improvements. We provide partial support for the sources of SE proposed by Bandura's social-cognitive theory with verbal persuasion but not vicarious experiences improving SE.
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
Earthquake source properties from instrumented laboratory stick-slip
Kilgore, Brian D.; McGarr, Arthur F.; Beeler, Nicholas M.; Lockner, David A.; Thomas, Marion Y.; Mitchell, Thomas M.; Bhat, Harsha S.
2017-01-01
Stick-slip experiments were performed to determine the influence of the testing apparatus on source properties, develop methods to relate stick-slip to natural earthquakes, and examine the hypothesis of McGarr [2012] that the product of stiffness, k, and slip duration, Δt, is scale-independent and the same order as for earthquakes. The experiments use the double-direct shear geometry, Sierra White granite at 2 MPa normal stress and a remote slip rate of 0.2 µm/sec. To determine apparatus effects, disc springs were added to the loading column to vary k. Duration, slip, slip rate, and stress drop decrease with increasing k, consistent with a spring-block slider model. However, neither for the data nor the model is kΔt constant; this results from varying stiffness at fixed scale. In contrast, additional analysis of laboratory stick-slip studies from a range of standard testing apparatuses is consistent with McGarr's hypothesis. kΔt is scale-independent, similar to that of earthquakes, equivalent to the ratio of static stress drop to average slip velocity, and similar to the ratio of shear modulus to wavespeed of rock. These properties result from conducting experiments over a range of sample sizes, using rock samples with the same elastic properties as the Earth, and scale-independent design practices.
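A minimal, idealized spring-block slider (inertial block, constant dynamic friction; all parameters purely illustrative, not the laboratory values) reproduces two of the trends above: slip duration and slip decrease with increasing stiffness k, and kΔt is not constant when k is varied at fixed scale. Rate-dependent friction, which this sketch omits, would be needed to reproduce the observed stress-drop dependence on k.

```python
import numpy as np

def stick_slip_event(k, m=1.0, F_s=1.0, F_d=0.6):
    """One slip event of an idealized inertial spring-block slider.

    Slip begins when the spring force reaches the static friction F_s;
    with constant dynamic friction F_d, the motion is half a harmonic
    cycle, giving analytically:
        slip duration  dt   = pi * sqrt(m / k)
        total slip     slip = 2 * (F_s - F_d) / k
        force drop          = k * slip = 2 * (F_s - F_d)  (k-independent here)
    """
    dt = np.pi * np.sqrt(m / k)
    slip = 2.0 * (F_s - F_d) / k
    return dt, slip, k * slip

for k in (0.5, 1.0, 2.0, 4.0):
    dt, slip, drop = stick_slip_event(k)
    # dt and slip decrease with stiffness, but k*dt = pi*sqrt(m*k)
    # grows with k, so the product is not constant at fixed scale
    print(f"k={k:3.1f}  dt={dt:.3f}  slip={slip:.3f}  k*dt={k * dt:.3f}")
```

The kΔt product varying as the square root of k at fixed mass is the model counterpart of the paper's observation that varying stiffness at fixed scale does not keep kΔt constant.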
Penetration of tungsten-alloy rods into composite ceramic targets: Experiments and 2-D simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Z.; Dekel, E.; Hohler, V.
1998-07-10
A series of terminal ballistics experiments, with scaled tungsten-alloy penetrators, was performed on composite targets consisting of ceramic tiles glued to thick steel backing plates. Tiles of silicon carbide, aluminum nitride, titanium diboride and boron carbide were 20-80 mm thick, and impact velocity was 1.7 km/s. 2-D numerical simulations, using the PISCES code, were performed in order to simulate these shots. It is shown that a simplified version of the Johnson-Holmquist failure model can account for the penetration depths of the rods but is not enough to capture the effect of lateral release waves on these penetrations.
ATLAS I/O performance optimization in as-deployed environments
NASA Astrophysics Data System (ADS)
Maier, T.; Benjamin, D.; Bhimji, W.; Elmsheuser, J.; van Gemmeren, P.; Malon, D.; Krumnack, N.
2015-12-01
This paper provides an overview of an integrated program of work underway within the ATLAS experiment to optimise I/O performance for large-scale physics data analysis in a range of deployment environments. It proceeds to examine in greater detail one component of that work, the tuning of job-level I/O parameters in response to changes to the ATLAS event data model, and considers the implications of such tuning for a number of measures of I/O performance.
Zecevic, Damir E; Wagner, Karl G
2013-07-01
Effective and predictive small-scale selection tools are indispensable during the development of a solubility-enhanced drug product. For hot-melt extrusion, this selection process can start with a microscale performance evaluation on a hot-stage microscope (HSM). A batch size of 400 mg can provide sufficient material to assess drug product attributes such as solid-state properties, solubility enhancement, and physical stability, as well as process-related attributes such as processing temperature in a twin-screw extruder (TSE). Prototype formulations will then be fed into a 5 mm TSE (~1-2 g) to confirm performance from the HSM under additional shear stress. Small-scale stress stability testing might be performed with these samples or a larger batch (20-40 g) made by a 9 or 12 mm TSE. Simultaneously, numeric process simulations are performed using process data as well as rheological and thermal properties of the formulations. Further scale-up work to 16 and 18 mm TSEs confirmed and refined the simulation model. Thus, at the end of laboratory-scale development, not only could the clinical trial supply be manufactured, but one can also form a sound risk assessment to support further scale-up even without decades of process experience. Copyright © 2013 Wiley Periodicals, Inc.
Lab and Pore-Scale Study of Low Permeable Soils Diffusional Tortuosity
NASA Astrophysics Data System (ADS)
Lekhov, V.; Pozdniakov, S. P.; Denisova, L.
2016-12-01
Diffusion plays an important role in contaminant spreading in low permeable units. The effective diffusion coefficient of a saturated porous medium depends on the diffusion coefficient in water, the porosity, and a structural parameter of the pore space: tortuosity. Theoretical models of the relationship between porosity and diffusional tortuosity are usually derived for conceptual granular models of a medium filled with solid particles of simple geometry. These models usually do not represent soils with complex microstructure. Empirical models, such as Archie's law, based on experimental electrical conductivity data, are most useful for practical applications. Such models contain empirical parameters that should be determined experimentally for a given soil type. In this work, we compared tortuosity values obtained in lab-scale diffusional experiments and pore-scale diffusion simulation for the studied soil microstructure and examined the relationship between tortuosity and porosity. Samples for the study were taken from borehole cores of a low-permeable silt-clay formation. Using samples of 50 cm3, we performed lab-scale diffusional experiments and estimated the lab-scale tortuosity. Next, using these samples, we studied the microstructure with an X-ray microtomograph. Imaging was performed on undisturbed microsamples of size 1.53 mm at ×300 magnification (1024³ voxels). After binarization of each obtained 3-D structure, its spatial correlation analysis was performed. This analysis showed that the spatial correlation scale of the indicator variogram is considerably smaller than the microsample length. Then the Laplace equation with binary coefficients was solved numerically for each microsample. The total number of simulations on the finite-difference grid of 175³ cells was 3500. As a result, the effective diffusion coefficient, tortuosity and porosity values were obtained for all studied microsamples. The results were analyzed as a graph of tortuosity versus porosity.
The 6 experimental tortuosity values agree well with the pore-scale simulations, falling in the general pattern of a nonlinear decrease of tortuosity with decreasing porosity. Fitting this graph with the Archie model, we found exponent values in the range between 1.8 and 2.4. This work was supported by RFBR via grant 14-05-00409.
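The Archie-type fit described above reduces to a least-squares regression on log-transformed data. The sketch below uses synthetic data: the convention that the tortuosity factor is the diffusivity ratio De/D0 = φ^m is one of several in the literature, and the true exponent 2.1 is merely chosen inside the reported 1.8-2.4 range.

```python
import numpy as np

def fit_archie_exponent(phi, d_ratio):
    """Fit m in the Archie-type law  De/D0 = phi**m  by least squares
    on log-transformed data; returns the exponent estimate."""
    slope, _ = np.polyfit(np.log(phi), np.log(d_ratio), 1)
    return slope

# Synthetic illustration: true exponent 2.1 (inside the reported range),
# with ~5% multiplicative noise standing in for measurement scatter
rng = np.random.default_rng(0)
phi = rng.uniform(0.10, 0.40, 30)                    # porosity
d_ratio = phi**2.1 * np.exp(rng.normal(0.0, 0.05, 30))
m_hat = fit_archie_exponent(phi, d_ratio)
```

With real lab-scale and pore-scale points in place of the synthetic arrays, the same regression yields the soil-specific Archie exponent.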
Simulations of hypervelocity impacts for asteroid deflection studies
NASA Astrophysics Data System (ADS)
Heberling, T.; Ferguson, J. M.; Gisler, G. R.; Plesko, C. S.; Weaver, R.
2016-12-01
The possibility of kinetic-impact deflection of threatening near-Earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving two independent spacecraft, NASA's DART (Double Asteroid Redirection Test) and ESA's AIM (Asteroid Impact Mission). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos, at a speed of 5 to 7 km/s, is expected to alter the mutual orbit by an observable amount. The velocity imparted to the secondary depends on the geometry and dynamics of the impact, and especially on the momentum enhancement factor, conventionally called beta. We use the Los Alamos hydrocodes Rage and Pagosa to estimate beta in laboratory-scale benchmark experiments and in the large-scale asteroid deflection test. Simulations are performed in two and three dimensions, using a variety of equations of state and strength models for both the lab-scale and large-scale cases. This work is being performed as part of a systematic benchmarking study for the AIDA mission that includes other hydrocodes.
Development of the trickle roof cooling and heating system: Experimental plan
NASA Astrophysics Data System (ADS)
Haves, P.; Jankovic, T.; Doderer, E.
1982-07-01
A passive system applicable both to retrofit and new construction was developed. This system (the trickle roof system) dissipates heat from a thin film of water flowing over the roof. A small-scale trickle roof dissipator was tested at Trinity University under a range of ambient conditions and operating configurations. The results suggest that trickle roof systems should have performance comparable to roof pond systems. A review of the trickle roof system concept, several possible configurations, and the benefits the systems can provide is given. Test module experiments and results are presented in detail. The requirements for full-scale testing are discussed and a plan is outlined using the two identical residential-scale passive test facility buildings at Trinity University, San Antonio, Texas. Full-scale experimental results would be used to validate computer algorithms, provide system optimization, and produce a nationwide performance assessment and design guidelines. This would provide industry with the information necessary to determine the commercial potential of the trickle roof system.
Fire extinguishing tests -80 with methyl alcohol gasoline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmstedt, G.; Ryderman, A.; Carlsson, B.
1980-01-01
Large scale tests and laboratory experiments were carried out for estimating the extinguishing effectiveness of three alcohol resistant aqueous film forming foams (AFFF), two alcohol resistant fluoroprotein foams and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl-ethyl ketone, methyl alcohol and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large scale tests for developing a reliable laboratory method was especially examined. The tests were performed with semidirect foam application, in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams. AFFF give the best results, and performance with small pools can hardly be correlated with results from large scale fires.
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well tailored to urban areas over restricted extents but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
Poirier, Frédéric J A M; Gurnsey, Rick
2005-08-01
Eccentricity-dependent resolution losses are sometimes compensated for in psychophysical experiments by magnifying (scaling) stimuli at each eccentricity. The use of either pre-selected scaling factors or unscaled stimuli sometimes leads to non-monotonic changes in performance as a function of eccentricity. We argue that such non-monotonic changes arise when performance is limited by more than one type of constraint at each eccentricity. Building on current methods developed to investigate peripheral perception [e.g., Watson, A. B. (1987). Estimation of local spatial scale. Journal of the Optical Society of America A, 4 (8), 1579-1582; Poirier, F. J. A. M., & Gurnsey, R. (2002). Two eccentricity dependent limitations on subjective contour discrimination. Vision Research, 42, 227-238; Strasburger, H., Rentschler, I., & Harvey Jr., L. O. (1994). Cortical magnification theory fails to predict visual recognition. European Journal of Neuroscience, 6, 1583-1588], we show how measured scaling can deviate from a linear function of eccentricity in a grating acuity task [Thibos, L. N., Still, D. L., & Bradley, A. (1996). Characterization of spatial aliasing and contrast sensitivity in peripheral vision. Vision Research, 36(2), 249-258]. This framework can also explain the central performance drop [Kehrer, L. (1989). Central performance drop on perceptual segregation tasks. Spatial Vision, 4, 45-62] and a case of "reverse scaling" of the integration window in symmetry [Tyler, C. W. (1999). Human symmetry detection exhibits reverse eccentricity scaling. Visual Neuroscience, 16, 919-922]. These cases of non-monotonic performance are shown to be consistent with multiple sources of resolution loss, each of which increases linearly with eccentricity. 
We conclude that most eccentricity research, including "oddities", can be explained by multiple-scaling theory as extended here, where the receptive field properties of all underlying mechanisms in a task increase in size with eccentricity, but not necessarily at the same rate.
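The "multiple sources of resolution loss, each linear in eccentricity" account can be sketched numerically: when performance is set by whichever of several linearly scaling constraints is currently worse, the combined threshold is piecewise linear rather than a single linear function of eccentricity, so no single scaling factor can equate performance everywhere. All parameters below are illustrative, not fitted to any data set.

```python
import numpy as np

def threshold(E, S0, E2):
    """Linear size scaling of one limiting mechanism: foveal threshold S0,
    doubling at eccentricity E2 (deg)."""
    return S0 * (1.0 + E / E2)

E = np.linspace(0.0, 20.0, 201)              # eccentricity, deg
fine = threshold(E, S0=0.1, E2=0.5)          # steeply scaling constraint
coarse = threshold(E, S0=0.8, E2=10.0)       # shallowly scaling constraint
combined = np.maximum(fine, coarse)          # worse constraint limits performance

# Near the fovea the coarse mechanism dominates; beyond the crossover the
# fine one does, so measured scaling deviates from a single linear law
cross = E[np.argmax(fine >= coarse)]
```

With suitable illustrative parameters, the same max-of-constraints construction can produce the non-monotonic patterns discussed above, such as the central performance drop.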
Characterizing the dynamic strength of materials for ballistic applications
NASA Astrophysics Data System (ADS)
Cazamias, James Ulysses
We unambiguously verified the hypothesis that normal penetration in brittle materials may be represented as a bi-modal process. The first mode is governed by fundamental strength properties of the target, while the second mode is governed by the fracture kinetics. We investigated the failure response of glass under impact loading. We observed a drop in the failure wave velocity by a factor of 1/2 after unloading. While not unexpected, this drop had not been clearly observed previously. In contradiction to literature values, we observed a drop in sound speed behind the failure wave. Finally, despite the common perception that the failed material is comminuted, we observed a finite tensile strength. We proposed a new variant of the Taylor test using scaled rods to examine strain rate effects. For armor steel, we observed changes in strength greater than what would be expected from a logarithmic dependence of strength on strain rate although not enough to account for scale effects. For tungsten penetrators, we observed that smaller scale tungsten rods appeared to have more work hardening than the large scale rods which might account for scale effects. We examined the square Taylor impact problem. We showed that the square Taylor test is a new way to study shear localization under compressive-shear loading. We performed the first shock characterization of AlON. We observed that the bar impact experiment appears to differentiate between different thicknesses of ceramic tile in qualitative agreement with subscale and full scale penetration experiments. We present data supporting the lower yield strength estimate of 4.3 GPa for alumina. We performed the first bar impact characterization of AlON.
Life Out There: An Astrobiological Multimedia Experience for the Digital Planetarium
NASA Astrophysics Data System (ADS)
Yu, K. C.; Grinspoon, D.
2013-04-01
Planetariums have a long history of experimentation with audio and visuals to create new multimedia experiences. We report on a series of innovative experiences in the Gates Planetarium at the Denver Museum of Nature & Science in 2009-2011 combining live performances of music and navigation through scientific visualizations. The Life Out There productions featured a story showcasing astrobiology concepts at scales ranging from galactic to molecular, and told using VJ-ing of immersive visualizations and musical performances from the House Band to the Universe. Funded by the NASA Astrobiology Institute's JPL-Titan Team, these hour-long shows were broken into four separate themed musical movements, with an improvisatory mix of music, dome visuals, and spoken science narrative which resulted in no two performances being exactly alike. Post-performance dissemination is continuing via a recorded version of the performance available as a DVD and online streaming video. Written evaluations from visitors who were present at the live shows reveal high satisfaction, while one of the Life Out There concerts was used to inaugurate a new evening program to draw in a younger audience demographic to DMNS.
Using Performance Tools to Support Experiments in HPC Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naughton, III, Thomas J; Boehm, Swen; Engelmann, Christian
2014-01-01
The high performance computing (HPC) community is working to address fault tolerance and resilience concerns for current and future large scale computing platforms. This is driving enhancements in the programming environments, specifically research on enhancing message passing libraries to support fault tolerant computing capabilities. The community has also recognized that tools for resilience experimentation are greatly lacking. However, we argue that there are several parallels between performance tools and resilience tools. As such, we believe the rich set of HPC performance-focused tools can be extended (repurposed) to benefit the resilience community. In this paper, we describe the initial motivation to leverage standard HPC performance analysis techniques to aid in developing diagnostic tools to assist fault tolerance experiments for HPC applications. These diagnosis procedures help to provide context for the system when the errors (failures) occurred. We describe our initial work in leveraging an MPI performance trace tool to assist in providing global context during fault injection experiments. Such tools will assist the HPC resilience community as they extend existing and new application codes to support fault tolerance.
NASA Astrophysics Data System (ADS)
Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.
2010-10-01
We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.
Krause, S; Herzog, G; Schlenhoff, A; Sonntag, A; Wiesendanger, R
2011-10-28
The influence of a highly spin-polarized tunnel current on the switching behavior of a superparamagnetic nanoisland on a nonmagnetic substrate is investigated by means of spin-polarized scanning tunneling microscopy. A detailed lifetime analysis allows for a quantification of the effective temperature rise of the nanoisland and the modification of the activation energy barrier for magnetization reversal, thereby using the nanoisland as a local thermometer and spin-transfer torque analyzer. Both the Joule heating and the spin-transfer torque are found to scale linearly with the tunnel current. The results are compared to experiments performed on lithographically fabricated magneto-tunnel junctions, revealing a very high spin-transfer torque switching efficiency in our experiments.
ERIC Educational Resources Information Center
Harsh, Joseph; Esteb, John J.; Maltese, Adam V.
2017-01-01
National calls in science, technology, engineering, and mathematics education reform efforts have advanced the wide-scale engagement of students in undergraduate research for the preparation of a workforce and citizenry able to attend to the challenges of the 21st century. Awareness of the potential benefits and costs of these experiences has led…
Solving Navier-Stokes equations on a massively parallel processor; The 1 GFLOP performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saati, A.; Biringen, S.; Farhat, C.
This paper reports on experience in solving large-scale fluid dynamics problems on the Connection Machine model CM-2. The authors have implemented a parallel version of the MacCormack scheme for the solution of the Navier-Stokes equations. By using triad floating point operations and reducing the number of interprocessor communications, they have achieved a sustained performance rate of 1.42 GFLOPS.
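The MacCormack scheme parallelized above is a standard explicit predictor-corrector method. As a minimal, self-contained sketch (1D linear advection in Python, not the paper's CM-2 Navier-Stokes implementation):

```python
import numpy as np

def maccormack_advect(u, c, dx, dt, steps):
    """Advance u_t + c u_x = 0 with the two-step MacCormack scheme
    (periodic boundaries). For linear advection this reduces to the
    second-order Lax-Wendroff method."""
    nu = c * dt / dx  # Courant number; stable for |nu| <= 1
    for _ in range(steps):
        # Predictor: forward difference
        up = u - nu * (np.roll(u, -1) - u)
        # Corrector: backward difference on the predicted field
        u = 0.5 * (u + up - nu * (up - np.roll(up, 1)))
    return u

# Advect a Gaussian pulse exactly one period around a unit domain
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
dx, dt = 1.0 / n, 0.5 / n          # Courant number 0.5 for c = 1
u1 = maccormack_advect(u0.copy(), 1.0, dx, dt, steps=2 * n)
```

The scheme is conservative (the discrete sum of u is unchanged) and second-order accurate, so the pulse returns close to its starting position with little distortion.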
Gong, Yang; Zhang, Jiajie
2011-04-01
In a distributed information search task, data representation and cognitive distribution jointly affect user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered framework, we proposed a search model and task taxonomy. The model defines its application in the context of a healthcare setting. The taxonomy clarifies the legitimate operations for each type of search task on relational data. We then developed experimental prototypes of hyperlipidemia data displays. Based on the displays, we tested search task performance through two experiments. The experiments used a within-subject design with a random sample of 24 participants. The results support our hypotheses and validate the predictions of the model and task taxonomy. In this study, representation dimensions, data scales, and search task types are the main factors determining search efficiency and effectiveness. Specifically, the more external representations provided on the interface, the better the users' search task performance. The results also suggest that ideal search performance occurs when the question type and its corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which could be more effectively designed in electronic medical records.
WHY DOES FLUE GAS ELEMENTAL MERCURY CONCENTRATION INCREASE ACROSS A WET SCRUBBER?
The paper describes the results of research investigating the potential reduction of oxidized mercury (Hg2+) to elemental mercury (Hg0) and subsequent emission of Hg0 from wet scrubbers. Experiments were performed in a bench-scale, wet scrubber simulator containing solutions used...
Hands-On Exercise in Environmental Structural Geology Using a Fracture Block Model.
ERIC Educational Resources Information Center
Gates, Alexander E.
2001-01-01
Describes the use of a scale analog model of an actual fractured rock reservoir to replace paper copies of fracture maps in the structural geology curriculum. Discusses the merits of the model in enabling students to gain experience performing standard structural analyses. (DDR)
Electron and photon identification in the D0 experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abazov, V. M.; Abbott, B.; Acharya, B. S.
2014-06-01
The electron and photon reconstruction and identification algorithms used by the D0 Collaboration at the Fermilab Tevatron collider are described. The determination of the electron energy scale and resolution is presented. Studies of the performance of the electron and photon reconstruction and identification are summarized.
Dust Sensor with Large Detection Area Using Polyimide Film and Piezoelectric Elements
NASA Astrophysics Data System (ADS)
Kobayashi, M.; Okudaira, O.; Kurosawa, K.; Okamoto, T.; Matsui, T.
2016-10-01
We describe the development of a dust particle sensor for use in space with a large detection area (1 m × 1 m scale). The sensor consists simply of a thin polyimide film fitted with small chips of piezoelectric elements. We performed experiments to characterize the sensor.
Anchorage strength and slope stability of a landfill liner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villard, P.; Gourc, J.P.; Feki, N.
1997-11-01
In order to determine reliable dimensions of an anchorage system and satisfactory operation of the watertight liner in a waste landfill, it is essential to make an accurate assessment of the tensions acting on the geosynthetics on the top of the slope. Experimental and theoretical studies have been carried out in parallel. The former concern a full-scale experiment undertaken in Montreuil sur Barse on a waste storage site with instrumented slope. The latter concern anchorage tests performed on a scale model for different anchorage geometries.
Numerical evaluation of the scale problem on the wind flow of a windbreak
Liu, Benli; Qu, Jianjun; Zhang, Weimin; Tan, Lihai; Gao, Yanhong
2014-01-01
The airflow field around wind fences with different porosities, which are important in determining the efficiency of fences as a windbreak, is typically studied via scaled wind tunnel experiments and numerical simulations. However, the scale problem in wind tunnels or numerical models is rarely researched. In this study, we perform a numerical comparison between a scaled wind-fence experimental model and an actual-sized fence via computational fluid dynamics simulations. The results show that although the general field pattern can be captured in a reduced-scale wind tunnel or numerical model, several flow characteristics near obstacles are not proportional to the size of the model and thus cannot be extrapolated directly. For example, the small vortex behind a low-porosity fence with a scale of 1:50 is approximately 4 times larger than that behind a full-scale fence. PMID:25311174
NASA Astrophysics Data System (ADS)
Stavrianaki, K.; Vallianatos, F.; Sammonds, P. R.; Ross, G. J.
2014-12-01
Fracturing is the most prevalent deformation mechanism in rocks deformed in the laboratory under simulated upper crustal conditions. Fracturing produces acoustic emissions (AE) at the laboratory scale and earthquakes on a crustal scale. The AE technique provides a means to analyse microcracking activity inside the rock volume, and since experiments can be performed under confining pressure to simulate depth of burial, AE can be used as a proxy for natural processes such as earthquakes. Experimental rock deformation provides us with several ways to investigate time-dependent brittle deformation. Two main types of experiments can be distinguished: (1) "constant strain rate" experiments, in which stress varies as a result of deformation, and (2) "creep" experiments, in which deformation and deformation rate vary over time as a result of an imposed constant stress. We conducted constant strain rate experiments on air-dried Darley Dale sandstone samples at a variety of confining pressures (30 MPa, 50 MPa, 80 MPa) and on water-saturated samples with 20 MPa initial pore fluid pressure. The results from these experiments were used to determine the initial loading in the creep experiments. A non-extensive statistical physics approach was applied to the AE data in order to investigate the spatio-temporal pattern of cracks close to failure. A more detailed study was performed for the data from the creep experiments. When axial stress is plotted against time we obtain the trimodal creep curve. The Tsallis entropic index q is calculated for each stage of the curve, and the results are compared with those from the constant strain rate experiments. The Epidemic Type Aftershock Sequence (ETAS) model is also applied to each stage of the creep curve and the ETAS parameters are calculated. We investigate whether these parameters are constant across all stages of the curve, or whether there are interesting patterns of variation.
This research has been co-funded by the European Union (European Social Fund) and Greek national resources under the framework of the "THALES Program: SEISMO FEAR HELLARC" project of the "Education & Lifelong Learning" Operational Programme.
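The ETAS model applied above describes seismicity (or AE) rates as a background rate plus magnitude-weighted Omori-Utsu aftershock cascades. A minimal sketch of its conditional intensity, in the standard parameterization with purely illustrative parameter values (not values fitted in this work):

```python
import math

def etas_intensity(t, events, mu, K, c, alpha, p, m0):
    """Conditional intensity of the ETAS model at time t:
    lambda(t) = mu + sum over past events (t_i, m_i) of
    K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p,
    i.e. Omori-Utsu decay modulated by each event's magnitude."""
    rate = mu
    for t_i, m_i in events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

# Toy catalogue: background rate plus two past events (time, magnitude)
events = [(0.0, 4.0), (1.0, 3.0)]
lam = etas_intensity(2.0, events, mu=0.1, K=0.05, c=0.01,
                     alpha=1.0, p=1.1, m0=3.0)
```

Fitting the parameters (mu, K, c, alpha, p) to each stage of a creep curve, as done above, then reveals whether the triggering behavior changes between stages.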
Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent
NASA Astrophysics Data System (ADS)
Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.
2014-06-01
The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within the CMS experiment at the LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the actions of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.
Simulating Extraterrestrial Ices in the Laboratory
NASA Astrophysics Data System (ADS)
Berisford, D. F.; Carey, E. M.; Hand, K. P.; Choukroun, M.
2017-12-01
Several ongoing experiments at JPL attempt to simulate the ice environment for various regimes associated with icy moons. The Europa Penitent Ice Experiment (EPIX) simulates the surface environment of an icy moon, to investigate the physics of ice surface morphology growth. This experiment features half-meter-scale cryogenic ice samples, a cryogenic radiative sink environment, vacuum conditions, and diurnal-cycling solar simulation. The experiment also includes several smaller fixed-geometry vacuum chambers for ice simulation at Earth-like and intermediate temperature and vacuum conditions for development of surface morphology growth scaling relations. Additionally, an ice cutting facility built on a similar platform provides qualitative data on the mechanical behavior of cryogenic ice with impurities under vacuum, and allows testing of ice cutting/sampling tools relevant for landing spacecraft. A larger cutting facility is under construction at JPL, which will provide more quantitative data and allow full-scale sampling tool tests. Another facility, the JPL Ice Physics Laboratory, features icy analog simulant preparation capabilities spanning icy solar system objects such as Mars, Ceres, and the icy satellites of Saturn and Jupiter. In addition, the Ice Physics Lab has unique facilities for icy analog tidal simulation and rheological studies of cryogenic icy slurries, as well as equipment to perform thermal and mechanical properties testing on icy analog materials and to measure their response to sinusoidal tidal stresses.
Challenges in Ocean Data Assimilation for the US West Coast
NASA Astrophysics Data System (ADS)
Li, Z.; Chao, Y.; Farrara, J.; Wang, X.
2006-12-01
A three-dimensional variational data assimilation (3DVAR) system, called ROMS-DAS, has been developed for the Regional Ocean Modeling System (ROMS). This system provides a capability for predicting meso- to small-scale variations with temporal scales from hours to days in the coastal oceans. To cope with the particular difficulties that result from complex coastlines and bottom topography, unbalanced flows, and sparse observations, ROMS-DAS utilizes several novel strategies. These strategies include the implementation of three-dimensional anisotropic and inhomogeneous error correlations, application of particular weak dynamic constraints, and implementation of efficient and reliable algorithms for minimizing the cost function. The ROMS-DAS system was applied in field experiments for Monterey Bay during both 2003 (Autonomous Ocean Sampling Network - AOSN) and 2006 (MB06). These two experiments included intensive data collection from a variety of observational platforms, including satellites, airplanes, High Frequency radars, Acoustic Doppler Current Profilers, ships, drifters, buoys, autonomous underwater vehicles (AUV), and particularly a fleet of undersea gliders. Using these data sets, various data assimilation experiments were performed to address several major data assimilation challenges that arise from multi-scale structures, inhomogeneous properties, dynamical imbalance of the flow, and tides. Based on these experiments, a set of strategies was formulated to deal with those challenges.
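The cost function minimized in 3DVAR takes the standard form J(x) = ½(x − xb)ᵀB⁻¹(x − xb) + ½(Hx − y)ᵀR⁻¹(Hx − y). A small sketch with illustrative covariances and a linear observation operator (not the ROMS-DAS configuration):

```python
import numpy as np

def threedvar_cost(x, xb, B, y, H, R):
    """Standard 3DVAR cost function: background term plus observation
    term, where xb is the background state, B the background-error
    covariance, y the observations, H the (linear) observation operator,
    and R the observation-error covariance."""
    db = x - xb
    do = H @ x - y
    return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * do @ np.linalg.solve(R, do)

# Tiny example: 3 state variables, 2 observations
xb = np.array([1.0, 2.0, 3.0])
B = np.diag([1.0, 1.0, 1.0])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
y = np.array([1.5, 2.5])
R = np.diag([0.25, 0.25])

# For linear H the minimizer of J solves the normal equations
# (B^-1 + H^T R^-1 H) xa = B^-1 xb + H^T R^-1 y
A = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
b = np.linalg.solve(B, xb) + H.T @ np.linalg.solve(R, y)
xa = np.linalg.solve(A, b)
```

In operational systems like ROMS-DAS the minimization is done iteratively rather than by direct solve, since B is far too large to form explicitly; the analysis xa blends the background toward the observations in proportion to their relative error variances.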
Effects of Psychiatric Symptoms on Attention in North Korean Refugees.
Lee, Yu Jin; Jun, Jin Yong; Park, Juhyun; Kim, Soohyun; Gwak, Ah Reum; Lee, So Hee; Yoo, So Young; Kim, Seog Ju
2016-09-01
We investigated the performance of North Korean refugees on attention tasks, and the relationship between that performance and psychiatric symptoms. Sustained and divided attention was assessed using the computerized Comprehensive Attention Test in North Korean refugees and in South Koreans. All participants also completed the Beck Depression Inventory, the Beck Anxiety Inventory, the Impact of Event Scale-Revised and the Dissociative Experiences Scale-II (DES-II). The North Korean refugees showed slower reaction times (RTs) on the visual sustained attention task compared to the South Koreans after controlling for age and sex. North Korean refugees had a greater number of omission errors (OEs) on the divided attention task and a higher standard deviation (SD) of RT. Total DES-II scores of the North Korean refugees were associated with the number of OEs and the SD of RT on the sustained attention task, and with the number of OEs on the divided attention task. North Korean refugees showed poorer performance on computerized attention tasks. In addition, attention deficit among North Korean refugees was associated with their dissociative experiences. Our results suggest that refugees may have attention deficits, which may be related to their psychiatric symptoms, particularly dissociation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, D.; Powell, B.; Barber, K.
The SRNL Radiological Field Lysimeter Experiment (RadFLEx) is a one-of-a-kind test bed facility designed to study radionuclide geochemical processes in the Savannah River Site (SRS) vadose zone at a larger spatial scale (from grams to tens of kilograms of sediment) and temporal scale (from months to a decade) than is readily afforded through laboratory studies. RadFLEx is a decade-long project that was initiated on July 5, 2012 and is funded by six different sources. The objectives of this status report are as follows: 1) to report findings to date that have an impact on SRS performance assessment (PA) calculations, and 2) to provide performance metrics of the RadFLEx program. The PA results are focused on measurements of transport parameters, such as distribution coefficients (Kd values), solubility, and unsaturated flow values. As this is an interim report, additional information from subsequent research may influence our interpretation of current results. Research related to basic understanding of radionuclide geochemistry in these vadose zone soils and other source terms is not described here but is referenced for the interested reader.
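Distribution coefficients such as the Kd values measured in RadFLEx enter PA transport calculations through the standard linear-sorption retardation factor R = 1 + (ρb/θ)Kd. A one-function sketch with illustrative, not site-specific, values:

```python
def retardation_factor(kd_ml_g, bulk_density_g_cm3, water_content):
    """Retardation factor for linear-equilibrium sorption:
    R = 1 + (rho_b / theta) * Kd,
    where Kd is the distribution coefficient (mL/g = cm3/g), rho_b the
    dry bulk density (g/cm3), and theta the volumetric water content
    (cm3/cm3). A solute with factor R migrates R times slower than water."""
    return 1.0 + (bulk_density_g_cm3 / water_content) * kd_ml_g

# Illustrative values for an unsaturated sandy sediment (assumed,
# not RadFLEx measurements): Kd = 5 mL/g, rho_b = 1.6 g/cm3, theta = 0.25
R = retardation_factor(kd_ml_g=5.0, bulk_density_g_cm3=1.6, water_content=0.25)
# R = 1 + (1.6 / 0.25) * 5 = 33
```

This is why field-scale Kd measurements matter for PA: the predicted travel time of a radionuclide through the vadose zone scales linearly with R.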
Elder, Edmund J; Evans, Jonathan C; Scherzer, Brian D; Hitt, James E; Kupperblatt, Gary B; Saghir, Shakil A; Markham, Dan A
2007-07-01
Many new molecular entities targeted for pharmaceutical applications face serious development challenges because of poor water solubility. Although particle engineering technologies such as controlled precipitation have been shown to enhance aqueous dissolution and bioavailability of poorly water soluble active pharmaceutical ingredients, the data available are the results of laboratory-scale experiments. These technologies must be evaluated at larger scale to ensure that the property enhancement is scalable and that the modified drugs can be processed on conventional equipment. In experiments using ketoconazole as the model drug, the controlled precipitation process was shown to produce kg-scale modified drug powder with enhanced dissolution comparable to that of lab-scale powder. Ketoconazole was demonstrated to be stable throughout the controlled precipitation process, with a residual methanol level below the ICH limit. The modified crystalline powder can be formulated, and then compressed using conventional high-speed tableting equipment, and the resulting tablets showed bioavailability more than double that of commercial tablets. When appropriately protected from moisture, both the modified powder and tablets prepared from the modified powder showed no change in dissolution performance for at least 6 months following storage at accelerated conditions and for at least 18 months following storage at room temperature.
Fundamental Scalings of Zonal Flows in a Basic Plasma Physics Experiment
NASA Astrophysics Data System (ADS)
Sokolov, Vladimir; Wei, Xiao; Sen, Amiya K.
2007-11-01
A basic physics experimental study of zonal flows (ZF) associated with ITG (ion temperature gradient) drift modes has been performed in the Columbia Linear Machine (CLM) and ZF has been definitively identified [1]. However, in contrast to most tokamak experiments, the stabilizing effect of ZF shear on ITG appears to be small in CLM. We now report on the study of important scaling behavior of ZF. First and most importantly, we report on the collisional damping scaling of ZF, which is considered to be its saturation mechanism [2]. By varying the sum of ion-ion and ion-neutral collision frequency over nearly half an order of magnitude, we find no change in the amplitude of ZF. Secondly, we study the scaling of ZF amplitude with ITG amplitude via increasing ITG drive through ηi, as well as feedback (stabilizing / destabilizing). We have observed markedly different scaling near and far above marginal stability. [1] V. Sokolov, X. Wei, A.K. Sen and K. Avinash, Plasma Phys. Controlled Fusion 48, S111 (2006). [2] P.H. Diamond, S.-I. Itoh, K. Itoh and T.S. Hahm, Plasma Phys. Controlled Fusion 47, R35 (2005).
NASA Astrophysics Data System (ADS)
Huang, Shiquan; Yi, Youping; Li, Pengchuan
2011-05-01
In recent years, multi-scale simulation techniques for metal forming have gained significant attention for predicting the whole deformation process and the microstructure evolution of the product. Advances in macro-scale numerical simulation of metal forming are remarkable, and commercial FEM software such as Deform2D/3D has found wide application in metal forming. However, multi-scale simulation has seen little application due to the non-linearity of microstructure evolution during forming and the difficulty of modeling at the micro-scale level. This work deals with the modeling of microstructure evolution and a new method of multi-scale simulation in the forging process. The aviation material 7050 aluminum alloy has been used as an example for modeling microstructure evolution. The corresponding thermal simulation experiments were performed on a Gleeble 1500 machine. The tested specimens were analyzed to model dislocation density and the nucleation and growth of dynamic recrystallization (DRX). A source program using the cellular automaton (CA) method was developed to simulate grain nucleation and growth, in which the change of grain topology caused by the metal deformation was considered. The physical fields at the macro-scale level, such as the temperature, stress, and strain fields, which can be obtained with the commercial software Deform 3D, are coupled with the micro-scale stored deformation energy through a dislocation model to realize the multi-scale simulation. The method is illustrated by a forging process simulation of an aircraft wheel hub forging. By coupling the Deform 3D results with the CA results, the forging deformation progress and the microstructure evolution at any point of the forging could be simulated. To verify the efficiency of the simulation, experiments on aircraft wheel hub forging were performed in the laboratory, and the comparison of simulation and experimental results is discussed in detail.
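The cellular-automaton treatment of nucleation and grain growth described above can be sketched minimally as follows; the grid size, nucleus count, and growth rule here are illustrative only and are not the authors' Deform 3D-coupled model:

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_recrystallize(n=50, n_nuclei=8, steps=100):
    """Minimal cellular-automaton sketch of recrystallization:
    seed a few nuclei with distinct grain IDs on an n x n periodic grid,
    then let each grain grow one cell per step into untransformed
    (ID 0) von Neumann neighbours."""
    grid = np.zeros((n, n), dtype=int)
    for gid in range(1, n_nuclei + 1):
        i, j = rng.integers(0, n, size=2)
        grid[i, j] = gid
    for _ in range(steps):
        grown = grid.copy()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbour = np.roll(grid, (di, dj), axis=(0, 1))
            # untransformed cells adopt the ID of a transformed neighbour
            mask = (grown == 0) & (neighbour > 0)
            grown[mask] = neighbour[mask]
        if (grown > 0).all():
            break
        grid = grown
    return grown

grains = ca_recrystallize()
fraction = (grains > 0).mean()  # recrystallized volume fraction
```

In a coupled simulation like the one described, the nucleation rate and growth velocity at each cell would be driven by the local stored deformation energy (via the dislocation density model) rather than fixed rules.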
Acute Exercise Improves Mood and Motivation in Young Men with ADHD Symptoms.
Fritz, Kathryn M; O'Connor, Patrick J
2016-06-01
Little is known about whether acute exercise affects signs or symptoms of attention deficit/hyperactivity disorder (ADHD) in adults. This experiment sought to determine the effects of a single bout of moderate-intensity leg cycling exercise on measures of attention, hyperactivity, mood, and motivation to complete mental work in adult men reporting elevated ADHD symptoms. A repeated-measures crossover experiment was conducted with 32 adult men (18-33 yr) with symptoms consistent with adult ADHD assessed by the Adult Self-Report Scale V1.1. Measures of attention (continuous performance task and Bakan vigilance task), motivation to perform the mental work (visual analog scale), lower leg physical activity (accelerometry), and mood (Profile of Mood States and Addiction Research Center Inventory amphetamine scale) were measured before and twice after a 20-min seated rest control or exercise condition involving cycling at 65% V̇O2peak. Condition (exercise vs rest) × time (baseline, post 1, and post 2) ANOVA was used to test the hypothesized exercise-induced improvements in all outcomes. Statistically significant condition-time interactions were observed for vigor (P < 0.001), amphetamine (P < 0.001), motivation (P = 0.027), and Profile of Mood States depression (P = 0.027), fatigue (P = 0.030), and confusion (P = 0.046) scales. No significant interaction effects were observed for leg hyperactivity, simple reaction time, or vigilance task performance (accuracy, errors, or reaction time). In young men reporting elevated symptoms of ADHD, a 20-min bout of moderate-intensity cycle exercise transiently enhances motivation for cognitive tasks, increases feelings of energy, and reduces feelings of confusion, fatigue, and depression, but this has no effect on the behavioral measures of attention or hyperactivity used.
Elliott, Mark; Stauber, Christine E.; DiGiano, Francis A.; Fabiszewski de Aceituno, Anna; Sobsey, Mark D.
2015-01-01
The biosand filter (BSF) is an intermittently operated, household-scale slow sand filter for which little data are available on the effect of sand composition on treatment performance. Therefore, bench-scale columns were prepared according to the then-current (2006–2007) guidance on BSF design and run in parallel to conduct two microbial challenge experiments of eight-week duration. Triplicate columns were loaded with Accusand silica or crushed granite to compare virus and E. coli reduction performance. Bench-scale experiments provided confirmation that increased schmutzdecke growth, as indicated by decline in filtration rate, is the primary factor causing increased E. coli reductions of up to 5-log10. However, reductions of challenge viruses improved only modestly with increased schmutzdecke growth. Filter media type (Accusand silica vs. crushed granite) did not influence reduction of E. coli bacteria. The granite media without backwashing yielded superior virus reductions when compared to Accusand. However, for columns in which the granite media was first backwashed (to yield a more consistent distribution of grains and remove the finest size fraction), virus reductions were not significantly greater than in columns with Accusand media. It was postulated that a decline in surface area with backwashing decreased the sites and surface area available for virus sorption and/or biofilm growth and thus decreased the extent of virus reduction. Additionally, backwashing caused preferential flow paths and deviation from plug flow; backwashing is not part of standard BSF field preparation and is not recommended for BSF column studies. Overall, virus reductions were modest and did not meet the 5- or 3-log10 World Health Organization performance targets. PMID:26308036
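The log10 reduction values quoted here follow the usual convention LRV = log10(Cin/Cout); for example, a 5-log10 reduction means the effluent concentration is 10^5 times lower than the influent:

```python
import math

def log10_reduction(c_in, c_out):
    """Log10 reduction value: LRV = log10(C_in / C_out)."""
    return math.log10(c_in / c_out)

# A 5-log10 E. coli reduction: 1e6 CFU/100 mL in, 10 CFU/100 mL out
lrv = log10_reduction(1e6, 10.0)
percent_removed = 100.0 * (1.0 - 10.0 ** (-lrv))  # 99.999% removal
```

The WHO performance targets mentioned above are stated on this scale, which is why a filter achieving "only" 99% virus removal (2-log10) can still miss a 3- or 5-log10 target by orders of magnitude.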
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Philip E.; Banfield, Jill; Chandler, Darrell P.
The Rifle IFRC continued to make excellent progress during the last 12 months. As noted above, a key field experiment (Best Western) was performed during 2011 as a logical follow-on to the Super 8 field experiment performed in 2010. In the Super 8 experiment, we successfully combined desorption and bioreduction and deployed a number of novel tracer techniques to enhance our ability to interpret the biogeochemistry of the experiment. In the Best Western experiment, we used the same experimental plot (Plot C) as was used for Super 8. The overarching objective of the Best Western field experiment was to compare the impacts of abiotic vs. biotic increases in alkalinity and to assess the mass of the sorbed pool of U(VI) at Rifle at the field scale. Both of these objectives were met. Preliminary analysis indicates that biogeochemical data sets were obtained that will support a mechanistic understanding of the underlying processes, including remarkable insight into previously unrecognized microbial processes taking place during acetate amendment of the subsurface for a second time.
ARCADE-R2 experiment on board BEXUS 17 stratospheric balloon
NASA Astrophysics Data System (ADS)
Barbetta, Marco; Boesso, Alessandro; Branz, Francesco; Carron, Andrea; Olivieri, Lorenzo; Prendin, Jacopo; Rodeghiero, Gabriele; Sansone, Francesco; Savioli, Livia; Spinello, Fabio; Francesconi, Alessandro
2015-09-01
This paper provides an overview of the ARCADE-R2 experiment, a technology demonstrator that aimed to prove the feasibility of small-scale satellite and/or aircraft systems with automatic (a) attitude determination, (b) control and (c) docking capabilities. The experiment embodies a simplified scenario in which an unmanned vehicle mock-up performs rendezvous and docking operations with a fixed complementary unit. The experiment is composed of a supporting structure, which holds a small vehicle with one translational and one rotational degree of freedom, and its fixed target. The dual system features three main custom subsystems: a relative infrared navigation sensor, an attitude control system based on a reaction wheel, and a small-scale docking mechanism. The experiment bus is equipped with pressure and temperature sensors, and wind probes to monitor the external environmental conditions. The experiment flew on board the BEXUS 17 stratospheric balloon on October 10, 2013, where several navigation-control-docking sequences were executed and data on the external pressure, temperature, wind speed and direction were collected, characterizing the atmospheric loads applied to the vehicle. This paper describes the critical components of ARCADE-R2 as well as the main results obtained from the balloon flight.
Kelly, Laura; Ziebland, Sue; Jenkinson, Crispin
2015-11-01
Health-related websites have developed to be much more than information sites: they are used to exchange experiences and find support as well as information and advice. This paper documents the development of a tool to compare the potential consequences and experiences a person may encounter when using health-related websites. Questionnaire items were developed following a review of relevant literature and qualitative secondary analysis of interviews relating to experiences of health. Item reduction steps were performed on pilot survey data (n=167). Tests of validity and reliability were subsequently performed (n=170) to determine the psychometric properties of the questionnaire. Two independent item pools entered psychometric testing: (1) items relating to general views of using the internet in relation to health, and (2) items relating to the consequences of using a specific health-related website. The identified sub-scales were found to have high construct validity, internal consistency, and test-retest reliability. Analyses confirmed good psychometric properties in the eHIQ-Part 1 (11 items) and the eHIQ-Part 2 (26 items). This tool will facilitate the measurement of the potential consequences of using websites containing different types of material (scientific facts and figures, blogs, experiences, images) across a range of health conditions.
NASA Astrophysics Data System (ADS)
Madsen, A.; Als-Nielsen, J.; Hallmann, J.; Roth, T.; Lu, W.
2016-07-01
β-brass exhibits an archetypical example of an order-disorder transition with a critical behavior that was previously investigated by neutron scattering. The data were well described by the three-dimensional (3d) Ising model, but the relatively crude experimental resolution prevented an in-depth examination of the single-length scaling hypothesis, a cornerstone of the theory of critical phenomena. With the development of synchrotron x-ray experiments, high-resolution data could be recorded, and surprisingly it was found that single-length scaling did not hold in most critical systems, possibly due to strain originating from surface defects and/or impurities. In this paper we demonstrate single-length critical behavior using high-resolution x-ray scattering in β-brass. The investigations confirm that β-brass behaves like a 3d Ising system over a wide range of length scales comprising correlated clusters of millions of atoms. To vary the surface sensitivity, experiments have been performed both in Bragg reflection and Laue transmission geometries, but without any substantial differences observed in the scaling and critical behavior.
Why do lab-scale experiments ever resemble geological scale patterning?
NASA Astrophysics Data System (ADS)
Ferdowsi, Behrooz; Jones, Brandon C.; Stein, Jeremy L.; Shinbrot, Troy
2017-11-01
The Earth and other planets are abundant with curious and poorly understood surface patterns. Examples include sand dunes, periodic and aperiodic ridges and valleys, and networks of river and submarine channels. We make the minimalist proposition that the dominant mechanism governing these varied patterns is mass conservation: notwithstanding detailed particulars, the universal rule is mass conservation and there are only a finite number of surface patterns that can result from this process. To test this minimalist proposition, we perform experiments in a vertically vibrated bed of fine grains, and we show that every one of a wide variety of patterns seen in the laboratory is also seen in recorded geomorphologies. We explore a range of experimental driving frequencies and amplitudes, and we complement these experimental results with a non-local cellular automata model that reproduces the surface patterns seen using a minimalist approach that allows a free surface to deform subject to mass conservation and simple known forces such as gravity. These results suggest a common cause for the effectiveness of lab-scale models for geological scale patterning that otherwise ought to have no reasonable correspondence.
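The paper's non-local cellular automata model is not reproduced here, but its core idea, a free surface that deforms under gravity subject only to mass conservation, can be sketched in a few lines. The 1-D toppling rule and slope threshold below are illustrative assumptions, not the authors' model.

```python
# Minimal 1-D mass-conserving surface relaxation (sandpile-style sketch).
def relax(heights, max_slope=2):
    h = list(heights)
    moved = True
    while moved:
        moved = False
        for i in range(len(h) - 1):
            if h[i] - h[i + 1] > max_slope:        # too steep: one grain hops down
                h[i] -= 1; h[i + 1] += 1; moved = True
            elif h[i + 1] - h[i] > max_slope:
                h[i + 1] -= 1; h[i] += 1; moved = True
    return h

surface = [9, 1, 5, 0, 7, 2]
settled = relax(surface)
print(sum(settled) == sum(surface))  # True: mass is conserved by every move
```

Each grain hop strictly decreases the sum of squared heights, so the loop terminates with all local slopes at or below the threshold, a toy version of pattern selection constrained purely by mass conservation.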
Direct and inverse energy cascades in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Campagne, Antoine; Gallet, Basile; Moisy, Frédéric; Cortet, Pierre-Philippe
2014-12-01
We present experimental evidence for a double cascade of kinetic energy in a statistically stationary rotating turbulence experiment. Turbulence is generated by a set of vertical flaps, which continuously injects velocity fluctuations towards the center of a rotating water tank. The energy transfers are evaluated from two-point third-order three-component velocity structure functions, which we measure using stereoscopic particle image velocimetry in the rotating frame. Without global rotation, the energy is transferred from large to small scales, as in classical three-dimensional turbulence. For nonzero rotation rates, the horizontal kinetic energy presents a double cascade: a direct cascade at small horizontal scales and an inverse cascade at large horizontal scales. By contrast, the vertical kinetic energy is always transferred from large to small horizontal scales, a behavior reminiscent of the dynamics of a passive scalar in two-dimensional turbulence. At the largest rotation rate, the flow is nearly two-dimensional, and a pure inverse energy cascade is found for the horizontal energy. To describe the scale-by-scale energy budget, we consider a generalization of the Kármán-Howarth-Monin equation to inhomogeneous turbulent flows, in which the energy input is explicitly described as the advection of turbulent energy from the flaps through the surface of the control volume where the measurements are performed.
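For reference, the homogeneous-turbulence form of the Kármán-Howarth-Monin budget that the paper generalizes relates third-order structure functions to the energy flux; in the inertial range it reduces to the four-thirds law (a textbook form, not reproduced from the paper):

$$-\frac{1}{4}\,\nabla_{\mathbf r}\cdot\left\langle |\delta \mathbf u|^{2}\,\delta \mathbf u \right\rangle = \varepsilon,\qquad \left\langle \delta u_{L}\,|\delta \mathbf u|^{2} \right\rangle = -\frac{4}{3}\,\varepsilon\, r,$$

where $\delta \mathbf u = \mathbf u(\mathbf x + \mathbf r) - \mathbf u(\mathbf x)$, $\delta u_{L}$ is its longitudinal component, and $\varepsilon$ is the mean dissipation rate. A sign change of the measured flux term at a given scale is what signals the inverse cascade reported above.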
Stenhouse, Rosie; Snowden, Austyn; Young, Jenny; Carver, Fiona; Carver, Hannah; Brown, Norrie
2016-08-01
Reports of poor nursing care have focused attention on values-based selection of candidates onto nursing programmes. Values-based selection lacks clarity and valid measures. Previous caring experience might lead to better care. Emotional intelligence (EI) is conceptualised and measurable, and might be associated with performance. To examine the impact of 1) previous caring experience, 2) emotional intelligence, and 3) social connection scores on performance and retention in a cohort of first year nursing and midwifery students in Scotland. A longitudinal, quasi-experimental design. Adult and mental health nursing, and midwifery programmes in a Scottish University. Adult, mental health and midwifery students (n=598) completed the Trait Emotional Intelligence Questionnaire-short form and Schutte's Emotional Intelligence Scale on entry to their programmes at a Scottish University, alongside demographic and previous caring experience data. Social connection was calculated from a subset of questions identified within the TEIQue-SF in a prior factor and Rasch analysis. Student performance was calculated as the mean mark across the year. Withdrawal data were gathered. 598 students completed baseline measures. 315 students declared previous caring experience, 277 not. An independent-samples t-test identified that those without previous caring experience scored higher on performance (57.33±11.38) than those with previous caring experience (54.87±11.19), a statistically significant difference of 2.47 (95% CI, 0.54 to 4.38), t(533)=2.52, p=.012. Emotional intelligence scores were not associated with performance. Social connection scores for those withdrawing (mean rank=249) and those remaining (mean rank=304.75) were statistically significantly different, U=15,300, z=-2.61, p<0.009. Previous caring experience led to worse performance in this cohort. Emotional intelligence was not a useful indicator of performance. 
Lower scores on the social connection factor were associated with withdrawal from the course. Copyright © 2016 Elsevier Ltd. All rights reserved.
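The headline comparison above can be reproduced from the summary statistics alone. A minimal sketch follows, assuming a pooled-variance (Student) t-test; the 270/265 per-group split is hypothetical, chosen only to be consistent with the reported df of 533 (only group sizes at baseline, not at assessment, are given).

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Two-sample Student t statistic and df from summary statistics."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / df  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# No-prior-experience group vs prior-experience group (means/SDs as reported;
# group sizes are an assumed split summing to 535 students with mark data).
t, df = pooled_t(57.33, 11.38, 270, 54.87, 11.19, 265)
print(df, round(t, 2))
```

The recovered statistic is close to the reported t(533)=2.52, illustrating that the abstract's summary numbers are internally consistent.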
Lin, Wei-Quan; Wu, Jiang; Yuan, Le-Xin; Zhang, Sheng-Chao; Jing, Meng-Juan; Zhang, Hui-Shan; Luo, Jia-Li; Lei, Yi-Xiong; Wang, Pei-Xi
2015-11-20
To explore the impact of workplace violence on job performance and quality of life of community healthcare workers in China, especially the relationships among these three variables. From December 2013 to April 2014, a total of 1404 healthcare workers were recruited by using the random cluster sampling method from Community Health Centers in Guangzhou and Shenzhen. The workplace violence scale, the job performance scale and the quality of life scale (SF-36) were self-administered. A structural equation model constructed in Amos 17.0 was employed to assess the relationships among these variables. Our study found that 51.64% of the respondents had experienced workplace violence. Both job performance and quality of life were negatively correlated with workplace violence, and a positive association was identified between job performance and quality of life. The path analysis showed the total effect (β = -0.243) of workplace violence on job performance consisted of a direct effect (β = -0.113) and an indirect effect (β = -0.130), which was mediated by quality of life. Workplace violence among community healthcare workers is prevalent in China and had negative effects on the job performance and quality of life of CHC workers. The study suggests that improvement in quality of life may effectively reduce the damage to job performance caused by workplace violence.
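The reported path decomposition follows the standard mediation identity, total effect = direct effect + indirect effect, which the abstract's coefficients satisfy:

```python
# Mediation decomposition check using the standardized path coefficients
# reported in the abstract (workplace violence -> job performance).
direct = -0.113    # direct path
indirect = -0.130  # indirect path, mediated by quality of life
total = direct + indirect
print(round(total, 3))  # -0.243, matching the reported total effect
```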
Ishimori, Yuu; Mitsunobu, Fumihiro; Yamaoka, Kiyonori; Tanaka, Hiroshi; Kataoka, Takahiro; Sakoda, Akihiro
2011-07-01
A radon test facility for small animals was developed in order to increase the statistical validity of differences in biological response across various radon environments. This paper describes the performance of this facility, the first large-scale facility of its kind in Japan. The facility can accommodate approximately 150 mouse-scale tests simultaneously. The apparatus for exposing small animals to radon has six animal chamber groups with five independent cages each, and each chamber group can be maintained at a different radon concentration. Because the first target of this study is to examine the in vivo behaviour of radon and its effects, the major functions to control radon and to eliminate thoron were examined experimentally. Additionally, radon progeny concentrations and their particle size distributions in the cages were examined experimentally for consideration in future projects.
Mindful learning can promote connectedness to nature: Implicit and explicit evidence.
Wang, Xue; Geng, Liuna; Zhou, Kexin; Ye, Lijuan; Ma, Yinglin; Zhang, Shuhao
2016-08-01
Environmental problems have attracted increasing attention, yet individuals' connectedness to nature remains a significant concern for potential solutions to these problems. In this article, we propose a novel method to promote connectedness to nature: mindful learning. One hundred and thirty-four students participated in the experiment. First, baseline measurements using the Connectedness to Nature Scale were obtained. Participants were then assigned to either a mindful or mindless learning condition. Finally, as a posttest, participants completed the Implicit Association Test and the Inclusion of Nature in the Self Scale. Participants in the mindful-learning condition performed better on the Implicit Association Test and scored higher on the Inclusion of Nature in the Self Scale. These results provide empirical evidence that mindful learning may promote connectedness to nature, both implicitly and explicitly. Copyright © 2016 Elsevier Inc. All rights reserved.
McGeown, Sarah P; Gray, Eleanor A; Robinson, Jamey L; Dewhurst, Stephen A
2014-06-01
Two experiments investigated the cognitive skills that underlie children's susceptibility to semantic and phonological false memories in the Deese/Roediger-McDermott procedure (Deese, 1959; Roediger & McDermott, 1995). In Experiment 1, performance on the Verbal Similarities subtest of the British Ability Scales (BAS) II (Elliott, Smith, & McCulloch, 1997) predicted correct and false recall of semantic lures. In Experiment 2, performance on the Yopp-Singer Test of Phonemic Segmentation (Yopp, 1988) did not predict correct recall, but inversely predicted the false recall of phonological lures. Auditory short-term memory was a negative predictor of false recall in Experiment 1, but not in Experiment 2. The findings are discussed in terms of the formation of gist and verbatim traces as proposed by fuzzy trace theory (Reyna & Brainerd, 1998) and the increasing automaticity of associations as proposed by associative activation theory (Howe, Wimmer, Gagnon, & Plumpton, 2009). Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzi, Silvio; Hereld, Mark; Insley, Joseph
In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for covisualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.
Effects of Density Stratification in Compressible Polytropic Convection
NASA Astrophysics Data System (ADS)
Manduca, Cathryn M.; Anders, Evan H.; Bordwell, Baylee; Brown, Benjamin P.; Burns, Keaton J.; Lecoanet, Daniel; Oishi, Jeffrey S.; Vasil, Geoffrey M.
2017-11-01
We study compressible convection in polytropically-stratified atmospheres, exploring the effect of varying the total density stratification. Using the Dedalus pseudospectral framework, we perform 2D and 3D simulations. In these experiments we vary the number of density scale heights, studying atmospheres with little stratification (1 density scale height) and significant stratification (5 density scale heights). We vary the level of convective driving (quantified by the Rayleigh number), and study flows at similar Mach numbers by fixing the initial superadiabaticity. We explore the differences between 2D and 3D simulations, and in particular study the equilibration between different reservoirs of energy (kinetic, potential and internal) in the evolved states.
The Thomas Self-Concept Values Test.
ERIC Educational Resources Information Center
Thomas, Walter L.
A test was developed to assess personal self-concept values of preprimary and primary aged children. If large scale preschool programs are to be justified, effects in the areas of intellectual growth, achievement performance, and personal-social growth must be observable in children several years after preschool experience and must be measurable…
USDA-ARS?s Scientific Manuscript database
Juice production is a multibillion dollar industry and an economical way to use fruit past seasonal harvests. To evaluate how production steps influence not-from-concentrate (NFC) blueberry (Vaccinium sp.) juice recovery, bench top and pilot scale experiments were performed. In bench-top, southern h...
2011-12-31
current methods used for aluminum-skinned aircraft. To this end, a series of medium-scale fire experiments were performed on aerospace composite materials...
Research on Potential Environmental Impacts of Oxy-fuel Combustion at EPA
An existing 35kW laboratory-scale combustor located at the U.S. EPA’s National Risk Management Research Laboratory, Research Triangle Park, North Carolina, has been modified for performing oxy-natural gas and oxy-coal experiments by adding O2 operation and flue gas recycling capa...
Experiments were performed on a 73 kW rotary kiln incinerator simulator equipped with a 73 kW secondary combustion chamber (SCC) to examine emissions of products of incomplete combustion (PICs) resulting from incineration of carbon tetrachloride (CCl4) and dichlorometh...
General Purpose Sampling in the Domain of Higher Education.
ERIC Educational Resources Information Center
Creager, John A.
The experience of the American Council on Education's Cooperative Institutional Research Program indicates that large-scale national surveys in the domain of higher education can be performed with scientific integrity within the constraints of costs, logistics, and technical resources. The purposes of this report are to provide complete and…
A Measurement of Alienation in College Student Marihuana Users and Non-Users.
ERIC Educational Resources Information Center
Harris, Eileen M.
A three part questionnaire was administered to 1380 Southern Illinois University students to: (1) elicit demographic data; (2) determine the extent of experience with marihuana; and (3) measure alienation utilizing Dean's scale. In addition, the Minnesota Multiphasic Personality Lie Inventory was given. Statistical analyses were performed to…
THE EFFECT OF ACTIVATED CARBON SURFACE MOISTURE ON LOW TEMPERATURE MERCURY ADSORPTION
Experiments with elemental mercury (Hg0) adsorption by activated carbons were performed using a bench-scale fixed-bed reactor at room temperature (27 degrees C) to determine the role of surface moisture in capturing Hg0. A bituminous-coal-based activated carbon (BPL) and an activ...
de Carvalho, Laura Maria Araújo; Gonsalez, Elisiane Crestani de Miranda; Iorio, Maria Cecília Martineli
The difficulty the elderly experience in understanding speech may be related to several factors including cognitive and perceptual performance. To evaluate the influence of cognitive performance, depressive symptoms, and education on speech perception in noise of elderly hearing aid users. The sample consisted of 25 elderly bilateral hearing aid users of both sexes, mean age 69.7 years. Subjects underwent cognitive assessment using the Mini-Mental State Examination and the Alzheimer's Disease Assessment Scale-cognitive, and depressive symptoms were evaluated using the Geriatric Depression Scale. The assessment of speech perception in noise (S/N ratio) was performed in free field using the Portuguese Sentence List test. Statistical analysis included the Spearman correlation calculation and a multiple linear regression model, with 95% confidence level and 0.05 significance level. In the study of speech perception in noise (S/N ratio), there was a statistically significant correlation with education (p=0.018), as well as with the Mini-Mental State Examination (p=0.002), Alzheimer's Disease Assessment Scale-cognitive (p=0.003), and Geriatric Depression Scale (p=0.022) scores. We found that for a one-unit increase in Alzheimer's Disease Assessment Scale-cognitive score, the S/N ratio increased on average 0.15dB, and for an increase of one year in education, the S/N ratio decreased on average 0.40dB. Level of education, cognitive performance, and depressive symptoms influence the speech perception in noise of elderly hearing aid users. The better the cognitive level and the higher the education, the better the elderly communicative performance in noise. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
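The two reported regression coefficients imply a simple linear prediction for the change in signal-to-noise ratio (lower S/N is better performance). The additive form and the scenario values below are assumptions for illustration, not the study's full model:

```python
# Predicted change in speech-in-noise S/N ratio from the reported coefficients:
# +0.15 dB per ADAS-cog point (worse cognition -> worse S/N),
# -0.40 dB per additional year of education (more education -> better S/N).
def delta_snr(d_adas_cog, d_education_years):
    return 0.15 * d_adas_cog - 0.40 * d_education_years

print(delta_snr(2, 0))   # 0.3 dB worse S/N with 2 extra ADAS-cog points
print(delta_snr(0, 4))   # -1.6 dB, i.e. better S/N with 4 more years of education
```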
Miskovic, Ljubisa; Alff-Tuomala, Susanne; Soh, Keng Cher; Barth, Dorothee; Salusjärvi, Laura; Pitkänen, Juha-Pekka; Ruohonen, Laura; Penttilä, Merja; Hatzimanikatis, Vassily
2017-01-01
Recent advancements in omics measurement technologies have led to an ever-increasing amount of available experimental data that necessitate systems-oriented methodologies for efficient and systematic integration of data into consistent large-scale kinetic models. These models can help us to uncover new insights into cellular physiology and also to assist in the rational design of bioreactor or fermentation processes. The Optimization and Risk Analysis of Complex Living Entities (ORACLE) framework for the construction of large-scale kinetic models can be used as guidance for formulating alternative metabolic engineering strategies. We used ORACLE in a metabolic engineering problem: improvement of the xylose uptake rate during mixed glucose-xylose consumption in a recombinant Saccharomyces cerevisiae strain. Using the data from bioreactor fermentations, we characterized network flux and concentration profiles representing possible physiological states of the analyzed strain. We then identified enzymes that could lead to improved flux through xylose transporters (XTR). For some of the identified enzymes, including hexokinase (HXK), we could not deduce whether their control over XTR was positive or negative. We thus performed a follow-up experiment and found that HXK2 deletion improves the xylose uptake rate. The data from these experiments were then used to prune the kinetic models, and the predictions of the pruned population of kinetic models were in agreement with the experimental data collected on the HXK2-deficient S. cerevisiae strain. We present a design-build-test cycle composed of modeling efforts and experiments with a glucose-xylose co-utilizing recombinant S. cerevisiae and its HXK2-deficient mutant that allowed us to uncover interdependencies between upper glycolysis and the xylose uptake pathway. Through this cycle, we also obtained kinetic models with improved prediction capabilities. 
The present study demonstrates the potential of integrated "modeling and experiments" systems biology approaches that can be applied for diverse applications ranging from biotechnology to drug discovery.
Shallow cumulus rooted in photosynthesis
NASA Astrophysics Data System (ADS)
Vila-Guerau Arellano, J.; Ouwersloot, H.; Horn, G.; Sikma, M.; Jacobs, C. M.; Baldocchi, D.
2014-12-01
We investigate the interaction between plant evapotranspiration, controlled by photosynthesis (for a low vegetation cover by C3 and C4 grasses), and the moist thermals that are responsible for the formation and development of shallow cumulus clouds (SCu). We perform systematic numerical experiments at fine spatial scales using large-eddy simulations explicitly coupled to a plant-physiology model. To break down the complexity of the vegetation-atmosphere system at diurnal scales, we design the following experiments with increasing complexity: (a) clouds that are transparent to radiation, (b) clouds that shade the surface from the incoming shortwave radiation and (c) plant stomata whose apertures react with an adjustment in time to cloud perturbations. The shading by SCu leads to a strong spatial variability in photosynthesis and the surface energy balance. As a result, experiment (b) simulates SCu that are characterized by less extreme and less skewed values of the liquid water path and cloud-base height. These findings are corroborated by the calculation of characteristic length scales of the thermals and clouds using autocorrelation and spectral analysis methods. We find that experiments (a) and (b) are characterized by similar cloud cover evolution, but different cloud population characteristics. Experiment (b), including cloud shading, is characterized by smaller clouds that are closer together. By performing a sensitivity analysis on the exchange of water vapor and carbon dioxide at the canopy level, we show that the larger water-use efficiency of C4 grass leads to two opposing effects that directly influence boundary-layer clouds: the thermals below the clouds are more vigorous and deeper, driven by a larger surface buoyancy flux (positive effect), but are characterized by less moisture content (negative effect). 
We conclude that under the investigated mid-latitude atmospheric and well-watered soil conditions, SCu over C4 grass fields is characterized by larger cloud cover and an enhanced liquid water path compared to C3 grass fields.
Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula
2016-08-20
Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and already existing and curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used to understand the growth requirements of the human pathogen S. pyogenes and define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis in order to find strategies to reduce the growth of the pathogen and propose drug targets. Copyright © 2016 Elsevier B.V. All rights reserved.
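The stoichiometric backbone of such a genome-scale model is the steady-state mass balance S·v = 0 over internal metabolites. A minimal sketch on a toy three-reaction pathway follows; the network is illustrative and not part of the S. pyogenes reconstruction.

```python
# Steady-state check S @ v = 0 for internal metabolites of a toy pathway:
#   R1: -> A        R2: A -> B        R3: B ->
S = {                 # rows: internal metabolites; columns: R1, R2, R3
    "A": [1, -1, 0],
    "B": [0, 1, -1],
}

def is_steady_state(S, v, tol=1e-9):
    """True if every internal metabolite is mass-balanced under flux vector v."""
    return all(abs(sum(c * f for c, f in zip(row, v))) < tol
               for row in S.values())

print(is_steady_state(S, [2.0, 2.0, 2.0]))  # True: equal flux along the chain
print(is_steady_state(S, [2.0, 1.0, 1.0]))  # False: metabolite A accumulates
```

Flux balance analysis and the curation steps described above amount to finding flux vectors v that satisfy this balance while respecting measured uptake and production rates.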
The high velocity, high adiabat, ``Bigfoot'' campaign and tests of indirect-drive implosion scaling
NASA Astrophysics Data System (ADS)
Casey, Daniel
2017-10-01
To achieve hotspot ignition, inertial confinement fusion (ICF) implosions must achieve high hotspot internal energy that is inertially confined by a dense shell of DT fuel. To accomplish this, implosions are designed to achieve high peak implosion velocity, good energy coupling between the hotspot and imploding shell, and high areal density at stagnation. However, experiments have shown that achieving these simultaneously is extremely challenging, partly because of inherent tradeoffs between these three interrelated requirements. The Bigfoot approach is to intentionally trade off high convergence, and therefore areal density, in favor of high implosion velocity and good coupling between the hotspot and shell. This is done by intentionally colliding the shocks in the DT ice layer, which results in a short laser pulse that improves hohlraum symmetry and predictability, while the reduced compression improves hydrodynamic stability. The results of this campaign will be reviewed and include demonstrated low-mode symmetry control at two different hohlraum geometries (5.75 mm and 5.4 mm diameters) and at two different target scales (5.4 mm and 6.0 mm hohlraum diameters), spanning 300-430 TW in laser power and 0.8-1.7 MJ in laser energy. Results of the 10% scaling between these designs for the hohlraum and capsule will be presented. Hydrodynamic instability growth from engineering features like the capsule fill tube is currently thought to be a significant perturbation and a major factor in reducing target performance relative to calculations. Evidence supporting this hypothesis as well as plans going forward will be presented. Ongoing experiments are attempting to measure the impact on target performance from an increase in target scale, and the preliminary results will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Drive Scaling of hohlraums heated with 2ω light
NASA Astrophysics Data System (ADS)
Oades, Kevin; Foster, John; Slark, Gary; Stevenson, Mark; Kauffman, Robert; Suter, Larry; Hinkel, Denise; Miller, Mike; Schneider, Marilyn; Springer, Paul
2002-11-01
We report on experiments using a single beam of AWE's HELEN laser to study the scaling of hohlraum drive with hohlraum scale size. The hohlraums were heated with 400 J in a 1 ns square pulse, with and without a phase plate. The drive was measured using a PCD and an FRD, and scattered light was measured using a full-aperture backscatter system. Drive is consistent with hohlraum scaling and LASNEX modeling using the absorbed laser energy. Bremsstrahlung from fast electrons and M-shell x-ray production were also measured. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
Numerical Modeling of STARx for Ex Situ Soil Remediation
NASA Astrophysics Data System (ADS)
Gerhard, J.; Solinger, R. L.; Grant, G.; Scholes, G.
2016-12-01
Growing stockpiles of soils contaminated with petroleum hydrocarbons are an outstanding problem worldwide. Self-sustaining Treatment for Active Remediation (STAR) is an emerging technology based on smouldering combustion that has been successfully deployed for in situ remediation. STAR has also been developed for ex situ applications (STARx). This work used a two-dimensional numerical model to systematically explore the sensitivity of ex situ remedial performance to key design and operational parameters. First, the model was calibrated and validated against pilot-scale experiments, providing confidence that the rate and extent of treatment were correctly predicted. Simulations then investigated the sensitivity of remedial performance to injected air flux, contaminant saturation, system configuration, heterogeneity of intrinsic permeability, heterogeneity of contaminant saturation, and system scale. Remedial performance was predicted to be most sensitive to the injected air flux, with higher air fluxes achieving higher treatment rates and remediating larger fractions of the initial contaminant mass. The uniformity of the advancing smouldering front was predicted to be highly dependent on effective permeability contrasts between treated and untreated sections of the contaminant pack. As a result, increased heterogeneity (of intrinsic permeability in particular) is predicted to lower remedial performance. Full-scale systems were predicted to achieve treatment rates an order of magnitude higher than the pilot scale for similar contaminant saturation and injected air flux. This work contributed to the large-scale STARx treatment system being tested at a field site in Fall 2016.
Validation Results for Core-Scale Oil Shale Pyrolysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staten, Josh; Tiwari, Pankaj
2015-03-01
This report summarizes a study of oil shale pyrolysis at various scales and the subsequent development of a model for in situ production of oil from oil shale. Oil shale from the Mahogany zone of the Green River formation was used in all experiments. Pyrolysis experiments were conducted at four scales: powdered samples (100 mesh) and core samples of 0.75”, 1” and 2.5” diameters. The batch, semibatch and continuous flow pyrolysis experiments were designed to study the effect of temperature (300°C to 500°C), heating rate (1°C/min to 10°C/min), pressure (ambient and 500 psig) and sample size on product formation. Comprehensive analyses were performed on reactants and products - liquid, gas and spent shale. These experimental studies were designed to understand the relevant coupled phenomena (reaction kinetics, heat transfer, mass transfer, thermodynamics) at multiple scales. A model for oil shale pyrolysis was developed in the COMSOL multiphysics platform. A general kinetic model was integrated with important physical and chemical phenomena that occur during pyrolysis, and the secondary reactions of coking and cracking in the product phase were addressed. The multiscale experimental data generated and the models developed provide an understanding of the simultaneous effects of chemical kinetics, and heat and mass transfer on oil quality and yield. The comprehensive data collected in this study will help advance the move to large-scale in situ oil production from the pyrolysis of oil shale.
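As a sketch of the kind of kinetic sub-model such a pyrolysis code integrates, a single first-order decomposition under a constant heating rate can be stepped explicitly. The Arrhenius parameters A and E below are illustrative round numbers, not the report's fitted kerogen kinetics.

```python
import math

# First-order non-isothermal kinetics: dalpha/dt = A exp(-E/RT) (1 - alpha),
# with T ramped at a constant heating rate (explicit Euler integration).
A = 1.0e13          # pre-exponential factor, 1/s (illustrative)
E = 200.0e3         # activation energy, J/mol (illustrative)
R = 8.314           # gas constant, J/(mol K)
beta = 10.0 / 60.0  # heating rate: 10 degC/min expressed in K/s

alpha, T, dt = 0.0, 300.0, 1.0   # conversion, temperature (K), time step (s)
while T < 800.0:
    k = A * math.exp(-E / (R * T))
    alpha += k * (1.0 - alpha) * dt
    T += beta * dt

print(round(alpha, 3))  # conversion is essentially complete by 800 K
```

The same structure, with several parallel reactions and coupled heat/mass transfer, is what a multiphysics pyrolysis model solves at each point of the domain.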
NASA Astrophysics Data System (ADS)
Strand, T. E.; Wang, H. F.
2003-12-01
Immiscible displacement protocols have long been used to infer the geometric properties of the void space in granular porous media. The three most commonly used experimental techniques are the measurement of soil-moisture retention curves, relative permeability-capillary pressure-saturation relations, and mercury intrusion porosimetry. A coupled theoretical and computational investigation was performed that provides insight into the limitations associated with each technique and quantifies the relationship between experimental observations and the geometric properties of the void space. It is demonstrated that the inference of the pore space geometry from both mercury porosimetry experiments and measurements of capillary pressure curves is influenced by trapping/mobilization phenomena and subject to scaling behavior. In addition, both techniques also assume that the capillary pressure at a location on the meniscus can be approximated by a pressure difference across a region or sample. For example, when performing capillary pressure measurements, the capillary pressure, taken to be the difference between the injected fluid pressure at the inlet and the defending fluid pressure at the outlet, is increased in a series of small steps and the fluid saturation is measured each time the system reaches steady state. Regions of defending fluid that become entrapped by the invading fluid can be subsequently mobilized at higher flow rates (capillary pressures), contributing to a scale-dependence of the capillary pressure-saturation curve that complicates the determination of the properties of the pore space. This scale-dependence is particularly problematic for measurements performed at the core scale. Mercury porosimetry experiments are subject to similar limitations. 
Trapped regions of defending fluid are also present during the measurement of soil-moisture retention curves, but the effects of scaling behavior on the evaluation of the pore space properties from the immiscible displacement structure are much simpler to account for due to the control of mobilization phenomena. Some mobilization may occur due to film flow, but this can be limited by keeping time scales relatively small or exploited at longer time scales in order to quantify the rate of film flow. Computer simulations of gradient-stabilized drainage and imbibition to the (respective) equilibrium positions were performed using a pore-scale modified invasion percolation (MIP) model in order to quantify the relationship between the saturation profile and the geometric properties of the void space. These simulations are similar to the experimental measurement of soil-moisture retention curves. Results show that the equilibrium height and the width of the equilibrium fringe depend on two length scale distributions, one controlling the imbibition equilibrium structure and the other controlling the drainage structure. The equilibrium height is related to the mean value of the appropriate distribution as described by Jurin's law, and the width of the equilibrium fringe scales as a function of a combined parameter, the Bond number, Bo, divided by the coefficient of variation (cov). Simulations also demonstrate that the apparent radius distribution obtained from saturation profiles using direct inversion by Jurin's law is a subset of the actual distribution in the porous medium. The relationship between the apparent and actual radius distributions is quantified in terms of the combined parameter, Bo/cov, and the mean coordination number of the porous medium.
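Jurin's law, invoked above for the direct inversion of saturation profiles, has the standard form (with surface tension $\gamma$, contact angle $\theta$, fluid density difference $\rho$, gravitational acceleration $g$ and pore radius $r$):

```latex
h = \frac{2\gamma\cos\theta}{\rho g r}, \qquad
\mathrm{Bo} = \frac{\rho g \ell^{2}}{\gamma}
```

Here the Bond number Bo entering the combined parameter Bo/cov is written with a characteristic pore-scale length $\ell$; the specific choice of $\ell$ is not stated in this abstract.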
The Experience of Cognitive Intrusion of Pain: scale development and validation
Attridge, Nina; Crombez, Geert; Van Ryckeghem, Dimitri; Keogh, Edmund; Eccleston, Christopher
2015-01-01
Abstract Patients with chronic pain often report their cognition to be impaired by pain, and this observation has been supported by numerous studies measuring the effects of pain on cognitive task performance. Furthermore, cognitive intrusion by pain has been identified as one of 3 components of pain anxiety, alongside general distress and fear of pain. Although cognitive intrusion is a critical characteristic of pain, no specific measure designed to capture its effects exists. In 3 studies, we describe the initial development and validation of a new measure of pain interruption: the Experience of Cognitive Intrusion of Pain (ECIP) scale. In study 1, the ECIP scale was administered to a general population sample to assess its structure and construct validity. In study 2, the factor structure of the ECIP scale was confirmed in a large general population sample experiencing no pain, acute pain, or chronic pain. In study 3, we examined the predictive value of the ECIP scale in pain-related disability in fibromyalgia patients. The ECIP scale scores followed a normal distribution with good variance in a general population sample. The scale had high internal reliability and a clear 1-component structure. It differentiated between chronic pain and control groups, and it was a significant predictor of pain-related disability over and above pain intensity. Repairing attentional interruption from pain may become a novel target for pain management interventions, both pharmacologic and nonpharmacologic. PMID:26067388
The 6-foot-4-inch Wind Tunnel at the Washington Navy Yard
NASA Technical Reports Server (NTRS)
Desmond, G L; Mccrary, J A
1935-01-01
The 6-foot-4-inch wind tunnel and its auxiliary equipment have proven themselves capable of continuous and reliable output of data. The real value of the tunnel will increase as experience is gained in checking the observed tunnel performance against full-scale performance. Such has been the case with the 8- by 8-foot tunnel, and for that reason the comparisons in the calibration tests have been presented.
Inverter performance comparison at PVUSA
NASA Astrophysics Data System (ADS)
Farmer, Brian K.; Stolte, Walter J.; Reyes, Antonio B.
1996-01-01
The paper is a summary of the Photovoltaics for Utility Scale Applications (PVUSA) Project's experience with procurement, testing, operation and maintenance of photovoltaic (PV) power conditioning units (PCUs) at the PVUSA Davis and Kerman sites. Brief descriptions of each of five different PCU models are included to explain tests and operational characteristics. A comparison of the PCUs' performances is presented, and conclusions are offered. Further details are in a forthcoming PVUSA report on PCUs and Power Quality [1].
Knight, Sophie; Aggarwal, Rajesh; Agostini, Aubert; Loundou, Anderson; Berdah, Stéphane; Crochet, Patrice
2018-01-01
Total laparoscopic hysterectomy (LH) requires an advanced level of operative skills and training. The aim of this study was to develop an objective scale specific to the assessment of technical skills for LH (H-OSATS) and to demonstrate its feasibility of use and validity in a virtual reality setting. The scale was developed using a hierarchical task analysis and a panel of international experts. A Delphi method obtained consensus among experts on the relevant steps that should be included in the H-OSATS scale for assessment of operative performance. Feasibility of use and validity of the scale were evaluated by reviewing video recordings of LH performed on a virtual reality laparoscopic simulator. Three groups of operators of different levels of experience were assessed in a Marseille teaching hospital (10 novices, 8 intermediates and 8 experienced surgeons). Correlations with scores obtained using a recognised generic global rating tool (OSATS) were calculated. A total of 76 discrete steps were identified by the hierarchical task analysis. Fourteen experts completed the two rounds of the Delphi questionnaire; 64 steps reached consensus and were integrated into the scale. During the validation process, the median time to rate each video recording was 25 minutes. There was a significant difference between the novice, intermediate and experienced groups for total H-OSATS scores (133, 155.9 and 178.25, respectively; p = 0.002). The H-OSATS scale demonstrated high inter-rater reliability (intraclass correlation coefficient [ICC] = 0.930; p<0.001) and test-retest reliability (ICC = 0.877; p<0.001). High correlations were found between total H-OSATS scores and OSATS scores (rho = 0.928; p<0.001). The H-OSATS scale displayed evidence of validity for assessment of technical performance for LH performed on a virtual reality simulator. The implementation of this scale is expected to facilitate deliberate practice.
Next steps should focus on evaluating the validity of the scale in the operating room.
The Zero Boil-Off Tank Experiment Ground Testing and Verification of Fluid and Thermal Performance
NASA Technical Reports Server (NTRS)
Chato, David J.; Kassemi, Mohammad; Kahwaji, Michel; Kieckhafer, Alexander
2016-01-01
The Zero Boil-Off Technology (ZBOT) Experiment involves performing a small-scale International Space Station (ISS) experiment to study tank pressurization and pressure control in microgravity. The ZBOT experiment consists of a vacuum-jacketed test tank filled with an inert fluorocarbon simulant liquid. Heaters and thermo-electric coolers are used in conjunction with an axial jet mixer flow loop to study a range of thermal conditions within the tank. The objective is to provide a high-quality database of low-gravity fluid motions and thermal transients which will be used to validate Computational Fluid Dynamic (CFD) modeling. This CFD modeling can then be used in turn to predict behavior in larger systems with cryogens. This paper discusses the work that has been done to demonstrate that the ZBOT experiment is capable of performing the functions required to produce meaningful and accurate results, prior to its launch to the International Space Station. The main systems discussed include the thermal control system, the optical imaging system, and the tank filling system. This work is sponsored by NASA's Human Exploration Mission Directorate's Physical Sciences Research program.
Resonant scattering experiments with radioactive nuclear beams - Recent results and future plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teranishi, T.; Sakaguchi, S.; Uesaka, T.
2013-04-19
Resonant scattering with low-energy radioactive nuclear beams of E < 5 MeV/u have been studied at CRIB of CNS and at RIPS of RIKEN. As an extension to the present experimental technique, we will install an advanced polarized proton target for resonant scattering experiments. A Monte-Carlo simulation was performed to study the feasibility of future experiments with the polarized target. In the Monte-Carlo simulation, excitation functions and analyzing powers were calculated using a newly developed R-matrix calculation code. A project of a small-scale radioactive beam facility at Kyushu University is also briefly described.
Recent results from the ARGO-YBJ experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camarri, P.
2010-03-26
The ARGO-YBJ experiment at YangBaJing in Tibet (4300 m a.s.l.) has been taking data with its full layout since October 2007. Here we present the first significant results obtained in gamma-ray astronomy and cosmic-ray physics. Emphasis is placed on the analysis of gamma-ray emission from point-like sources (Crab Nebula, MRK 421), on the preliminary limit on the antiproton/proton flux ratio, on the large-scale cosmic-ray anisotropy and on the proton-air cross section. The performance of the detector is also discussed, and the perspectives of the experiment are outlined.
Reformulations of practice: beyond experience in paramedic airway management.
Mausz, Justin; Donovan, Seanan; McConnell, Meghan; Lapalme, Corey; Webb, Andrea; Feres, Elizabeth; Tavares, Walter
2017-07-01
"Deliberate practice" and "feedback" are necessary for the development of expertise. We explored clinical performance in settings where these features are inconsistent or limited, hypothesizing that even in algorithmic domains of practice, clinical performance reformulates in ways that may threaten patient safety, and that experience fails to predict performance. Paramedics participated in two recorded simulation sessions involving airway management, which were analyzed in three ways. First, we identified variations in "decision paths" by coding the actions of the participants according to an airway management algorithm. Second, we identified the cognitive schemas driving behavior using qualitative descriptive analysis. Third, clinical performances were evaluated using a global rating scale, a checklist, and time to achieve ventilation; the relationship between experience and these metrics was assessed using Pearson's correlation. Thirty participants completed a total of 59 simulations. Mean experience was 7.2 (SD=5.8) years. We observed highly variable practice patterns and identified idiosyncratic decision paths and schemas governing practice. We revealed problematic performance deficiencies related to situation awareness, decision making, and procedural skills. There was no association between experience and clinical performance (Scenario 1: r=0.13, p=0.47; Scenario 2: r=-0.10, p=0.58), the number of errors (Scenario 1: r=0.10, p=0.57; Scenario 2: r=0.25, p=0.17), or the time to achieve ventilation (Scenario 1: r=0.53, p=0.78; Scenario 2: r=0.27, p=0.15). Clinical performance was highly variable when approaching an algorithmic problem, and procedural and cognitive errors were not attenuated by provider experience. These findings suggest reformulations of practice emerge in settings where feedback and deliberate practice are limited.
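The experience-performance associations above are plain Pearson correlation coefficients. As a reminder of the statistic being reported, here is a minimal pure-Python sketch with made-up example data (the study's raw data are not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical data: years of experience vs. a global rating score.
experience = [1, 3, 5, 7, 9, 12, 15]
score = [62, 58, 71, 65, 60, 69, 63]
r = pearson_r(experience, score)  # near zero when experience does not predict score
```

A value of r near zero, as reported in both scenarios, indicates essentially no linear association between experience and the performance metric.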
Aoun, Bachir; Pellegrini, Eric; Trapp, Marcus; Natali, Francesca; Cantù, Laura; Brocca, Paola; Gerelli, Yuri; Demé, Bruno; Marek Koza, Michael; Johnson, Mark; Peters, Judith
2016-04-01
Neutron scattering techniques have been employed to investigate 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC) membranes in the form of multilamellar vesicles (MLVs) and deposited, stacked multilamellar bilayers (MLBs), covering transitions from the gel to the liquid phase. Neutron diffraction was used to characterise the samples in terms of transition temperatures, whereas elastic incoherent neutron scattering (EINS) demonstrates that the dynamics on the sub-macromolecular length scale and pico- to nano-second time scale are correlated with the structural transitions through a discontinuity in the observed elastic intensities and the derived mean square displacements. Molecular dynamics simulations have been performed in parallel, focussing on the length, time and temperature scales of the neutron experiments. They correctly reproduce the structural features of the main gel-liquid phase transition. Particular emphasis is placed on the dynamical amplitudes derived from experiment and simulations. Two methods are used to analyse the experimental data and mean square displacements. They agree within a factor of 2 irrespective of the probed time scale, i.e. the instrument utilized. Mean square displacements computed from simulations show a comparable level of agreement with the experimental values, although the best match with the two methods varies for the two instruments. Consequently, experiments and simulations together give a consistent picture of the structural and dynamical aspects of the main lipid transition and provide a basis for future theoretical modelling of dynamics and phase behaviour in membranes. The need for more detailed analytical models is pointed out by the remaining variation of the dynamical amplitudes derived in two different ways from experiments on the one hand and simulations on the other.
Falås, P; Longrée, P; la Cour Jansen, J; Siegrist, H; Hollender, J; Joss, A
2013-09-01
Removal of organic micropollutants in a hybrid biofilm-activated sludge process was investigated through batch experiments, modeling, and full-scale measurements. Batch experiments with carriers and activated sludge from the same full-scale reactor were performed to assess the micropollutant removal rates of the carrier biofilm under oxic conditions and the sludge under oxic and anoxic conditions. Clear differences in the micropollutant removal kinetics of the attached and suspended growth were demonstrated, often with considerably higher removal rates for the biofilm compared to the sludge. For several micropollutants, the removal rates were also affected by the redox conditions, i.e. oxic and anoxic. Removal rates obtained from the batch experiments were used to model the micropollutant removal in the full-scale process. The results from the model and plant measurements showed that the removal efficiency of the process can be predicted with acceptable accuracy (± 25%) for most of the modeled micropollutants. Furthermore, the model estimations indicate that the attached growth in hybrid biofilm-activated sludge processes can contribute significantly to the removal of individual compounds, such as diclofenac. Copyright © 2013 Elsevier Ltd. All rights reserved.
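Batch removal rates of the kind measured here are commonly expressed as pseudo-first-order rate constants normalized to biomass concentration. A minimal sketch of how such a constant maps to batch decay and to steady-state removal in a completely mixed reactor follows; the parameter values are purely illustrative, not taken from the paper:

```python
import math

def batch_residual(c0, k_biol, x_ss, t):
    """Residual concentration after batch time t, assuming pseudo-first-order
    kinetics: dC/dt = -k_biol * X * C.
    k_biol: rate constant [L/(gSS*d)], x_ss: biomass [gSS/L], t: time [d]."""
    return c0 * math.exp(-k_biol * x_ss * t)

def cstr_removal(k_biol, x_ss, hrt):
    """Steady-state removal fraction in a completely mixed reactor
    with hydraulic retention time hrt [d]."""
    kxt = k_biol * x_ss * hrt
    return kxt / (1.0 + kxt)

# Illustrative values only (hypothetical compound and plant):
c = batch_residual(1.0, 1.5, 3.0, 0.5)   # residual concentration after half a day
eff = cstr_removal(1.5, 3.0, 0.5)        # predicted removal fraction
```

In a hybrid process like the one studied, separate rate constants for the carrier biofilm and the suspended sludge would each contribute a term of this form.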
Tuarob, Suppawong; Tucker, Conrad S; Salathe, Marcel; Ram, Nilam
2014-06-01
The role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0. Multiple studies have illustrated the use of information in social media to discover biomedical and health-related knowledge. Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words. These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data, where sparsity and noise are the norm. This paper aims to address the limitations posed by the traditional bag-of-words based methods and proposes to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large-scale social media data. Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data. Social media data are characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically. The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise. We first identify the limitations of the traditional methods, which train machines with N-gram word features, then propose to overcome such limitations by utilizing a collaboration of machine-learning-based classifiers, each of which is trained to learn a semantically different aspect of the data. The parameter analysis for tuning each classifier is also reported.
Three data sets are used in this research. The first data set comprises approximately 5000 hand-labeled tweets, and is used for cross-validation of the classification models in the small-scale experiment, and for training the classifiers in the real-world large-scale experiment. The second data set is a random sample of real-world Twitter data in the US. The third data set is a random sample of real-world Facebook Timeline posts. Two sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small-scale and large-scale evaluations. The small-scale evaluation employs 10-fold cross-validation on the labeled data, and aims to tune parameters of the proposed models and to compare them with the state-of-the-art method. The large-scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media. The small-scale experiment reveals that the proposed method is able to mitigate the limitations of well-established techniques in the literature, resulting in a performance improvement of 18.61% (F-measure). The large-scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method is able to yield reasonably good performance and outperforms the baseline by 46.62% (F-measure) on average. Copyright © 2014 Elsevier Inc. All rights reserved.
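The ensemble idea described above — separate classifiers trained on semantically different feature views of the same text, combined by vote — can be sketched as follows. The toy classifiers, labels and example message here are placeholders for illustration, not the paper's actual models or data:

```python
from collections import Counter

def ensemble_predict(classifiers, text):
    """Majority vote over classifiers, each trained on a different
    feature view of the text (n-grams, metadata, embeddings, ...)."""
    votes = [clf(text) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Toy stand-in classifiers, each capturing a different "aspect" of the text:
ngram_clf = lambda t: "health" if "flu" in t or "doctor" in t else "other"
length_clf = lambda t: "health" if len(t.split()) > 4 else "other"
emoji_clf = lambda t: "other"

label = ensemble_predict([ngram_clf, length_clf, emoji_clf],
                         "got the flu, seeing a doctor tomorrow")
```

The point of the combination is that a message failing one view (e.g. no health n-grams) can still be caught by another, which is what makes the approach robust to the sparsity and noise of short social media posts.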
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smalyuk, V. A.; Atherton, L. J.; Benedetti, L. R.; ...
2013-10-19
Radiation-driven, low-adiabat, cryogenic DT-layered plastic capsule implosions were carried out on the National Ignition Facility (NIF) to study the sensitivity of performance to peak power and drive duration. An implosion with extended drive at a reduced peak power of 350 TW achieved the highest compression, with a fuel areal density of ~1.3±0.1 g/cm², representing a significant step from the previously measured ~1.0 g/cm² toward the goal of 1.5 g/cm². Future experiments will focus on understanding and mitigating hydrodynamic instabilities and mix, and on improving the symmetry required to reach the threshold for thermonuclear ignition on NIF.
Digital 8-DPSK Modem For Trellis-Coded Communication
NASA Technical Reports Server (NTRS)
Jedrey, T. C.; Lay, N. E.; Rafferty, W.
1989-01-01
Digital real-time modem processes octuple differential-phase-shift-keyed trellis-coded modulation. Intended for use in communicating data at rates up to 4.8 kb/s in land-mobile satellite channel (Rician fading) of 5-kHz bandwidth at carrier frequency of 1 to 2 GHz. Modulator and demodulator contain digital signal processors performing modem functions. Design flexible in that functions altered via software. Modem successfully tested and evaluated in both laboratory and field experiments, including recent full-scale satellite experiment. In all cases, modem performed within 1 dB of theory. Other communication systems benefitting from this type of modem include land mobile (without satellites), paging, digitized voice, and frequency-modulation subcarrier data broadcasting.
Formal thought disorder, neuropsychology and insight in schizophrenia.
Barrera, Alvaro; McKenna, Peter J; Berrios, German E
2009-01-01
Information provided by patients with schizophrenia and their respective carers is used to study the descriptive psychopathology and neuropsychology of formal thought disorder (FTD). Relatively intellectually preserved schizophrenia patients (n = 31) exhibiting from no to severe positive FTD completed a self-report scale of FTD, a scale of insight as well as several tests of executive and semantic function. The patients' carers completed another scale of FTD to assess the patients' speech. FTD as self-reported by patients was significantly associated with the synonyms test performance and severity of the reality distortion dimension. FTD as assessed by a clinician and by the patients' carers was significantly associated with executive test performance and performance in a test of associative semantics. Overall insight was significantly associated with severity of the reality distortion dimension and graded naming test performance, but was not associated with self-reported FTD or severity of FTD as assessed by the clinician or carers. The self-reported experience of FTD has different clinical and neuropsychological correlates from those of FTD as assessed by clinicians and carers. The assessment of FTD by patients and carers used along with the clinician's assessment may further the study of this group of symptoms. 2009 S. Karger AG, Basel.
Detection of tunnel excavation using fiber optic reflectometry: experimental validation
NASA Astrophysics Data System (ADS)
Linker, Raphael; Klar, Assaf
2013-06-01
Cross-border smuggling tunnels enable unmonitored movement of people and goods, and pose a severe threat to homeland security. In recent years, we have been working on the development of a system based on fiber-optic Brillouin time domain reflectometry (BOTDR) for detecting tunnel excavation. In two previous SPIE publications we reported the initial development of the system as well as its validation using small-scale experiments. This paper reports, for the first time, results of full-scale experiments and discusses the system performance. The results confirm that distributed measurement of strain profiles in fiber cables buried at shallow depth enables detection of tunnel excavation, and that, with proper data processing, these measurements enable precise localization of the tunnel as well as a reasonable estimation of its depth.
Magneto-Optical Signature of Massless Kane Electrons in Cd3As2
Akrap, A.; Hakl, M.; Tchoumakov, S.; ...
2016-09-21
Here, we report on optical reflectivity experiments performed on Cd3As2 over a broad range of photon energies and magnetic fields. The presence of 3D massless charge carriers is clearly indicated in the observed response. The specific cyclotron resonance absorption in the quantum limit implies that we are probing massless Kane electrons rather than symmetry-protected 3D Dirac particles. The latter may appear at a smaller energy scale and are not directly observed in our infrared experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, David Edward
A description of the development of the mc_runjob software package used to manage large-scale computing tasks for the D0 Experiment at Fermilab is presented, along with a review of the Digital Front End Trigger electronics and the software used to control them. A tracking study is performed on detector data to determine that the D0 Experiment can detect charged B mesons, and that these results are in accordance with current results. B mesons are found by searching for the decay channel B± → J/ψ K±.
NASA Technical Reports Server (NTRS)
Blanchard, M. B.; Oberbeck, V. R.; Bunch, T. E.; Reynolds, R. T.; Canning, T. N.; Jackson, R. W.
1976-01-01
The feasibility of employing penetrators for exploring Mars was examined. Eight areas of interest for key scientific experiments were identified. These include: seismic activity, imaging, geochemistry, water measurement, heatflow, meteorology, magnetometry, and biochemistry. In seven of the eight potential experiment categories this year's progress included: conceptual design, instrument fabrication, instrument performance evaluation, and shock loading of important components. Most of the components survived deceleration testing with negligible performance changes. Components intended to be placed inside the penetrator forebody were tested up to 3,500 g and components intended to be placed on the afterbody were tested up to 21,000 g. A field test program was conducted using tentative Mars penetrator mission constraints. Drop tests were performed at two selected terrestrial analog sites to determine the range of penetration depths for anticipated common Martian materials. Minimum penetration occurred in basalt at Amboy, California. Three full-scale penetrators penetrated 0.4 to 0.9 m into the basalt after passing through 0.3 to 0.5 m of alluvial overburden. Maximum penetration occurred in unconsolidated sediments at McCook, Nebraska. Two full-scale penetrators penetrated 2.5 to 8.5 m of sediment. Impact occurred in two kinds of sediment: loess and layered clay. Deceleration g loads of nominally 2,000 for the forebody and 20,000 for the afterbody did not present serious design problems for potential experiments. Penetrators have successfully impacted into terrestrial analogs of the probable extremes of potential Martian sites.
Metal-loaded organic scintillators for neutrino physics
Buck, Christian; Yeh, Minfang
2016-08-03
Organic liquid scintillators are used in many neutrino physics experiments of the past and present. In particular for low-energy neutrinos, when real-time and energy information are required, liquid scintillators have several advantages compared to other technologies. In many cases the organic liquid needs to be loaded with metal to enhance the neutrino signal over background events. Several metal-loaded scintillators of the past suffered from chemical and optical instabilities, limiting the performance of these neutrino detectors. Different ways of metal loading are described in the article, with a focus on recent techniques providing metal-loaded scintillators that can be used under stable conditions for many years even in ton-scale experiments. Lastly, we review applications of metal-loaded scintillators in neutrino experiments and compare the performance as well as the prospects of different scintillator types.
Scale Space for Camera Invariant Features.
Puig, Luis; Guerrero, José J; Daniilidis, Kostas
2014-09-01
In this paper we propose a new approach to compute the scale space of any central projection system, such as catadioptric, fisheye or conventional cameras. Since these systems can be explained using a unified model, the single parameter that defines each type of system is used to automatically compute the corresponding Riemannian metric. This metric, combined with the partial differential equations framework on manifolds, allows us to compute the Laplace-Beltrami (LB) operator, enabling the computation of the scale space of any central projection system. Scale space is essential for intrinsic scale selection and neighborhood description in features like SIFT. We perform experiments with synthetic and real images to validate the generalization of our approach to any central projection system. We compare our approach with the best existing methods, showing competitive results for all types of cameras: catadioptric, fisheye, and perspective.
NASA Technical Reports Server (NTRS)
Starr, D. O'C.; Cox, S. K.
1985-01-01
A simplified cirrus cloud model is presented which may be used to investigate the role of various physical processes in the life cycle of a cirrus cloud. The model is a two-dimensional, time-dependent, Eulerian numerical model where the focus is on cloud-scale processes. Parametrizations are developed to account for phase changes of water, radiative processes, and the effects of microphysical structure on the vertical flux of ice water. The results of a simulation of a thin cirrostratus cloud are given. The results of numerical experiments performed with the model are described in order to demonstrate the important role of cloud-scale processes in determining the cloud properties maintained in response to larger scale forcing. The effects of microphysical composition and radiative processes are considered, as well as their interaction with thermodynamic and dynamic processes within the cloud. It is shown that cirrus clouds operate in an entirely different manner than liquid phase stratiform clouds.
Visual texture perception via graph-based semi-supervised learning
NASA Astrophysics Data System (ADS)
Zhang, Qin; Dong, Junyu; Zhong, Guoqiang
2018-04-01
Perceptual features, for example direction, contrast and repetitiveness, are important visual factors for humans perceiving a texture. However, a psychophysical experiment must be performed to quantify the scale of these perceptual features, which requires a large amount of human labor and time. This paper focuses on the task of obtaining the perceptual-feature scales of textures from a small number of textures whose perceptual scales were obtained through a rating psychophysical experiment (what we call labeled textures), together with a mass of unlabeled textures. This is a scenario to which semi-supervised learning is naturally suited. It is meaningful for texture perception research, and helpful for expanding perceptual texture databases. A graph-based semi-supervised learning method called random multi-graphs, RMG for short, is proposed to deal with this task. We evaluate different kinds of features including LBP, Gabor, and a kind of unsupervised deep features extracted by a PCA-based deep network. The experimental results show that our method can achieve satisfactory effects no matter what kind of texture features are used.
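The paper's random multi-graphs method is not detailed in this abstract, but the underlying graph-based semi-supervised idea — propagating known perceptual-scale values from labeled textures to their unlabeled neighbors in a similarity graph — can be sketched in its simplest form. The graph and rated values below are toy placeholders:

```python
def propagate_scales(adjacency, labeled, iters=200):
    """Graph-based semi-supervised regression: repeatedly set each
    unlabeled node's value to the mean of its neighbors' values,
    clamping the labeled nodes (textures with known perceptual scales)."""
    values = {n: labeled.get(n, 0.0) for n in adjacency}
    for _ in range(iters):
        for n, nbrs in adjacency.items():
            if n not in labeled:
                values[n] = sum(values[m] for m in nbrs) / len(nbrs)
    return values

# Toy similarity graph over five textures; two have rated perceptual scales.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
known = {0: 0.0, 4: 1.0}  # psychophysically rated textures
est = propagate_scales(graph, known)
```

On this chain graph the estimates converge to a smooth interpolation between the two rated endpoints, which is the intuition behind expanding a perceptual texture database from few ratings.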
OARE flight maneuvers and calibration measurements on STS-58
NASA Technical Reports Server (NTRS)
Blanchard, Robert C.; Nicholson, John Y.; Ritter, James R.; Larman, Kevin T.
1994-01-01
The Orbital Acceleration Research Experiment (OARE), which has flown on STS-40, STS-50, and STS-58, contains a three axis accelerometer with a single, nonpendulous, electrostatically suspended proofmass which can resolve accelerations to the nano-g level. The experiment also contains a full calibration station to permit in situ bias and scale factor calibration. This on-orbit calibration capability eliminates the large uncertainty of ground-based calibrations encountered with accelerometers flown in the past on the orbiter, thus providing absolute acceleration measurement accuracy heretofore unachievable. This is the first time accelerometer scale factor measurements have been performed on orbit. A detailed analysis of the calibration process is given along with results of the calibration factors from the on-orbit OARE flight measurements on STS-58. In addition, the analysis of OARE flight maneuver data used to validate the scale factor measurements in the sensor's most sensitive range is also presented. Estimates on calibration uncertainties are discussed. This provides bounds on the STS-58 absolute acceleration measurements for future applications.
Initial validation of a numeric zero to ten scale to measure children's state anxiety.
Crandall, Margie; Lammers, Cathy; Senders, Craig; Savedra, Marilyn; Braun, Jerome V
2007-11-01
Although children experience physical and behavioral consequences from anxiety in many health care settings, anxiety assessment and subsequent management are not often performed because of the lack of clinically useful subjective scales. Current state anxiety scales are either observational or multidimensional self-report measures requiring significant clinician and patient time. Because anxiety is subjective, in this pilot study we evaluated the validity of a self-report numeric 0-10 anxiety scale that is easy to administer to children in the clinical setting. A descriptive correlation research design was used to determine the concurrent validity of a numeric 0-10 anxiety scale against the state portion of the State-Trait Anxiety Inventory for Children (STAIC). During clinic preoperative visits, 60 children aged 7-13 yr provided anxiety scores for the 0-10 scale and the STAIC pre- and posteducation. Simple linear regression and Pearson correlation were performed to determine the strength of the relationship. STAIC was associated with the anxiety scale both preeducation (beta = 1.20, SE[beta] = 0.34, F[1,58] = 12.74, P = 0.0007) and posteducation (beta = 1.97, SE[beta] = 0.31, F[1,58] = 40.11, P < 0.0001). Correlations were moderate preeducation (r = 0.424) and posteducation (r = 0.639). This initial study supports the validity of the numeric 0-10 anxiety self-report scale to assess state anxiety in children as young as 7 yr.
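The concurrent-validity analysis described reduces to fitting STAIC = a + beta * NRS by least squares and computing the Pearson correlation. A sketch with hypothetical paired scores (the arrays below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical paired scores: each child's 0-10 numeric anxiety rating (NRS)
# and STAIC state score. Values are illustrative only.
nrs = np.array([2, 5, 7, 3, 8, 1, 6, 4, 9, 5], dtype=float)
staic = np.array([31, 38, 44, 33, 47, 29, 41, 35, 50, 39], dtype=float)

# Simple linear regression STAIC = a + beta * NRS (least squares fit).
beta, a = np.polyfit(nrs, staic, 1)

# Pearson correlation, the study's measure of concurrent validity.
r = np.corrcoef(nrs, staic)[0, 1]
```

A positive slope together with a high r is the pattern the study reports as evidence that the 0-10 scale tracks the STAIC.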
Design and performance of the spin asymmetries of the nucleon experiment
Maxwell, J. D.; Armstrong, W. R.; Choi, S.; ...
2018-03-01
The Spin Asymmetries of the Nucleon Experiment (SANE) performed inclusive, double-polarized electron scattering measurements of the proton at the Continuous Electron Beam Accelerator Facility at Jefferson Lab. A novel detector array observed scattered electrons of four-momentum transfer 2.5 < Q² < 6.5 GeV² and Bjorken scaling 0.3 < x < 0.8 from initial beam energies of 4.7 and 5.9 GeV. Employing a polarized proton target which could be rotated with respect to the incident electron beam, both parallel and near-perpendicular spin asymmetries were measured, allowing model-independent access to the polarization observables A₁, A₂, g₁, g₂ and the moment d₂ of the proton. This article summarizes the operation and performance of the polarized target, polarized electron beam, and novel detector systems used during the course of the experiment, and describes analysis techniques utilized to access the physics observables of interest.
Rothen, Nicolas; Meier, Beat
2010-04-01
In synaesthesia, the input of one sensory modality automatically triggers an additional experience not normally triggered by the input of that modality. Therefore, compared to non-synaesthetes, additional experiences exist, and these may be used as retrieval cues when memory is tested. Previous case studies have suggested that synaesthesia may yield extraordinary memory abilities; however, group studies found either a task-specific memory advantage or no performance advantage at all. The aim of the present study was to test whether grapheme-colour synaesthesia gives rise to a general memory benefit using a standardised memory test (the Wechsler Memory Scale). The synaesthetes showed a performance advantage in episodic memory tests, but not in short-term memory tests; performance was nonetheless still within the ordinary range. The results support the hypothesis that synaesthesia provides a richer world of experience, and as a consequence additional retrieval cues may be available and beneficial, but not to the point of extraordinary memory ability.
Felipe-Sesé, Luis; López-Alba, Elías; Hannemann, Benedikt; Schmeer, Sebastian; Diaz, Francisco A
2017-06-28
A quasistatic indentation numerical analysis of a round-section specimen made of a soft material has been performed and validated with a full-field experimental technique, i.e., 3D Digital Image Correlation. The contact experiment consisted of loading a 25 mm diameter rubber cylinder up to a 5 mm indentation and then unloading. Experimental strain fields measured at the surface of the specimen during the experiment were compared with those obtained from two numerical analyses employing two different hyperelastic material models. The comparison was performed using a new Image Decomposition methodology that makes possible a direct comparison of full-field data independently of their scale or orientation. Numerical results show a good level of agreement with those measured during the experiments. Moreover, since image decomposition allows the differences to be quantified, it was observed that one of the adopted material models shows smaller differences from the experimental results.
NASA Technical Reports Server (NTRS)
Saunders, J. D.; Stueber, T. J.; Thomas, S. R.; Suder, K. L.; Weir, L. J.; Sanders, B. W.
2012-01-01
The status of an effort to develop Turbine Based Combined Cycle (TBCC) propulsion is described. This propulsion technology can enable reliable and reusable space launch systems. TBCC propulsion offers improved performance and safety over rocket propulsion; the potential to realize aircraft-like operations and reduced maintenance are additional benefits. Among the most critical TBCC enabling technologies are: 1) mode transition from turbine to scramjet propulsion, 2) high Mach turbine engines, and 3) TBCC integration. To address these TBCC challenges, the effort is centered on a propulsion mode transition experiment and includes analytical research. The test program, the Combined-Cycle Engine Large Scale Inlet Mode Transition Experiment (CCE LIMX), was conceived to integrate TBCC propulsion with proposed hypersonic vehicles. The goals address: (1) dual inlet operability and performance, (2) mode-transition sequences enabling a switch between turbine and scramjet flow paths, and (3) turbine engine transients during transition. Four test phases are planned, from which a database can be used both to validate design and analysis codes and to characterize operability and integration issues for TBCC propulsion. In this paper we discuss the research objectives, features of the CCE hardware and test plans, and the status of the parametric inlet characterization testing that began in 2011. This effort is sponsored by the NASA Fundamental Aeronautics Hypersonics project.
Bulnes-Abundis, David; Carrillo-Cocom, Leydi M; Aráiz-Hernández, Diana; García-Ulloa, Alfonso; Granados-Pastor, Marisa; Sánchez-Arreola, Pamela B; Murugappan, Gayathree; Alvarez, Mario M
2013-04-01
In industrial practice, stirred tank bioreactors are the most common mammalian cell culture platform. However, research and screening protocols at the laboratory scale (i.e., 5-100 mL) rely primarily on Petri dishes, culture bottles, or Erlenmeyer flasks. There is a clear need for simple (easy to assemble, easy to use, easy to clean) cell culture mini-bioreactors for lab-scale and/or screening applications. Here, we study the mixing performance and culture adequacy of a 30 mL eccentric stirred tank mini-bioreactor. A detailed mixing characterization of the proposed bioreactor is presented. Laser induced fluorescence (LIF) experiments and computational fluid dynamics (CFD) computations are used to identify the operational conditions required for adequate mixing. Mammalian cell culture experiments were conducted with two different cell models. The specific growth rate and the maximum cell density of Chinese hamster ovary (CHO) cell cultures grown in the mini-bioreactor were comparable to those observed for 6-well culture plates, Erlenmeyer flasks, and 1 L fully instrumented bioreactors. Human hematopoietic stem cells were successfully expanded tenfold in suspension conditions using the eccentric mini-bioreactor system. Our results demonstrate good mixing performance and suggest the practicality and adequacy of the proposed mini-bioreactor. Copyright © 2012 Wiley Periodicals, Inc.
FLOW TESTING AND ANALYSIS OF THE FSP-1 EXPERIMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawkes, Grant L.; Jones, Warren F.; Marcum, Wade
The U.S. High Performance Research Reactor Conversions fuel development team is focused on developing and qualifying the uranium-molybdenum (U-Mo) alloy monolithic fuel to support conversion of domestic research reactors to low-enriched uranium. Several previous irradiations have demonstrated the favorable behavior of the monolithic fuel. The Full Scale Plate 1 (FSP-1) fuel plate experiment will be irradiated in the northeast (NE) flux trap of the Advanced Test Reactor (ATR). This fueled experiment contains six aluminum-clad fuel plates consisting of monolithic U-Mo fuel meat. Flow testing and hydraulic analysis have been performed on the FSP-1 experiment to be irradiated in the ATR at the Idaho National Laboratory (INL). A flow test mockup of the FSP-1 experiment was completed at Oregon State University. Results of several flow test experiments are compared with analyses; this paper shows that the hydraulic analyses are nearly identical to the flow test results. A water velocity of 14.0 meters per second is targeted between the fuel plates, and comparisons between FSP-1 measurements and this target are discussed. This flow rate dominates the flow characteristics of the experiment and model; separate branch flows have minimal effect on the overall experiment. A square flow orifice was placed to control the flow rate through the experiment. Four different orifices were tested, and a flow versus ΔP curve for each orifice is reported herein. Fuel plates with depleted uranium in the fuel meat zone were used in one of the flow tests, which was performed to evaluate flow-induced vibration with actual fuel meat densities. Fuel plate deformation tests were also performed and are reported.
Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe
2017-01-01
Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in laboratory tanks, where the wave environments differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motion and load measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed on these laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, a suite of advanced remote control and telemetry experimental systems was developed in-house to allow the implementation of large-scale model seakeeping measurements at sea. The experimental system includes a series of sensors, e.g., a Global Position System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicate that the proposed large-scale model testing scheme is capable and feasible. Meaningful data, including ocean environment parameters, ship navigation state, motions and loads, were obtained through the sea trial campaign. PMID:29109379
ALFA: The new ALICE-FAIR software framework
NASA Astrophysics Data System (ADS)
Al-Turany, M.; Buncic, P.; Hristov, P.; Kollegger, T.; Kouzinopoulos, C.; Lebedev, A.; Lindenstruth, V.; Manafov, A.; Richter, M.; Rybalchenko, A.; Vande Vyvre, P.; Winckler, N.
2015-12-01
The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and of extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system, which balances reliability and ease of development with performance using multi-processing and multithreading. A message-based approach has been adopted; such an approach will support the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication and reliance on other processes. Such a design adds horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads to meet computing and throughput demands. ALFA does not dictate any application protocols; potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware and software languages.
NASA Astrophysics Data System (ADS)
Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R.
2012-01-01
State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because the large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data are assimilated into the gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, in which streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with the nine basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may differ between streamflow analysis and prediction in all three streamflow assimilation scenarios. The most preferred adjustment scale for seven of the nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation.
In comparison to outlet flow assimilation, interior flow assimilation at any adjustment scale produces streamflow predictions with a spatial correlation structure more consistent with that of streamflow observations. We also describe diagnosing the complexity of the assimilation problem using the spatial correlation information associated with the streamflow process, and discuss the effect of timing errors in a simulated hydrograph on the performance of the data assimilation procedure.
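The control-vector adjustment described can be illustrated in miniature: treat a single multiplicative forcing-bias term as the control variable and choose it to minimize the squared mismatch with outlet observations. The toy linear-reservoir model, parameter values, and synthetic data below are all assumptions for illustration, not RDHM or the paper's variational solver.

```python
import numpy as np

def linear_reservoir(precip, k=0.3, s0=10.0):
    """Toy lumped runoff model: storage drains at fraction k per step."""
    s, q = s0, []
    for p in precip:
        s += p          # add precipitation to storage
        out = k * s     # outflow proportional to storage
        s -= out
        q.append(out)
    return np.array(q)

rng = np.random.default_rng(1)
precip_true = rng.gamma(2.0, 1.5, 48)        # "true" forcing, 48 h
truth = linear_reservoir(precip_true)        # synthetic true streamflow
q_obs = truth + rng.normal(0.0, 0.1, 48)     # noisy outlet observations

# The available forcing carries a 25% low bias; assimilation seeks the
# multiplier that minimizes the squared mismatch to q_obs (a scalar
# stand-in for adjusting the precipitation-bias component of the
# control vector).
precip_biased = 0.8 * precip_true
mults = np.linspace(0.5, 2.0, 151)
costs = [np.sum((linear_reservoir(m * precip_biased) - q_obs) ** 2)
         for m in mults]
best = mults[int(np.argmin(costs))]
```

The minimizing multiplier recovers roughly 1/0.8 = 1.25, undoing the imposed bias; with many such control variables and sparse observations, the same minimization becomes under-determined, which is the overfitting risk the study examines.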
Graphics performance in rich Internet applications.
Hoetzlein, Rama C
2012-01-01
Rendering performance for rich Internet applications (RIAs) has recently focused on the debate between using Flash and HTML5 for streaming video and gaming on mobile devices. A key area not widely explored, however, is the scalability of raw bitmap graphics performance for RIAs. Does Flash render animated sprites faster than HTML5? How much faster is WebGL than Flash? Answers to these questions are essential for developing large-scale data visualizations, online games, and truly dynamic websites. A new test methodology analyzes graphics performance across RIA frameworks and browsers, revealing specific performance outliers in existing frameworks. The results point toward a future in which all online experiences might be GPU accelerated.
Analytical and Experimental Verification of a Flight Article for a Mach-8 Boundary-Layer Experiment
NASA Technical Reports Server (NTRS)
Richards, W. Lance; Monaghan, Richard C.
1996-01-01
Preparations for a boundary-layer transition experiment to be conducted on a future flight mission of the air-launched Pegasus(TM) rocket are underway. The experiment requires a flight-test article called a glove to be attached to the wing of the Mach-8 first-stage booster. A three-dimensional, nonlinear finite-element analysis has been performed and significant small-scale laboratory testing has been accomplished to ensure the glove design integrity and quality of the experiment. Reliance on both the analysis and experiment activities has been instrumental in the success of the flight-article design. Results obtained from the structural analysis and laboratory testing show that all glove components are well within the allowable thermal stress and deformation requirements to satisfy the experiment objectives.
Lange, Gudrun; Leonhart, Rainer; Gruber, Harald
2018-01-01
Creation is an important part of many interventions in creative arts therapies (art, music, dance, and drama therapy). This active part of art-making in arts therapies has not yet been closely investigated. The present study commits to this field of research using a mixed-methods design to investigate the effects of active creation on health-related psychological outcomes. In an artistic inquiry within an experimental design, N = 44 participants engaged in active art-making for eight minutes in the presence of the researcher (first author) with a choice of artistic materials: paper and colors for drawing and writing, musical instruments, space for moving or performing. Before and after the creation, participants completed a well-being, a self-efficacy and an experience of creation scale, and in addition found their own words to express the experiences during the activity. We hypothesized that the experience of empowerment, freedom, impact, and creativity (Experience of Creation Scale) mediates the positive effect of active creation on the outcomes of self-efficacy and well-being, and evaluated this assumption with a mediation analysis. Results suggest that the effect of active creation on both self-efficacy and well-being is significantly mediated by the Experience of Creation Scale. This article focuses on the quantitative side of the investigation. During the process, qualitative and quantitative results were triangulated for a more valid evaluation and jointly contribute to the emerging theory frame of embodied aesthetics. PMID:29439541
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T; Hollenbach, Daniel F
2009-02-01
The Radiochemical Development Facility at Oak Ridge National Laboratory has been storing solid materials containing ²³³U for decades. Preparations are under way to process these materials into a form that is inherently safe from a nuclear criticality safety perspective. This will be accomplished by down-blending the ²³³U materials with depleted or natural uranium. At the request of the U.S. Department of Energy, a study has been performed using the SCALE sensitivity and uncertainty analysis tools to demonstrate how these tools could be used to validate nuclear criticality safety calculations of selected process and storage configurations. ISOTEK nuclear criticality safety staff provided four models that are representative of the criticality safety calculations for which validation will be needed. The SCALE TSUNAMI-1D and TSUNAMI-3D sequences were used to generate energy-dependent k_eff sensitivity profiles for each nuclide and reaction present in the four safety analysis models, also referred to as the applications, and in a large set of critical experiments. The SCALE TSUNAMI-IP module was used together with the sensitivity profiles and the cross-section uncertainty data contained in the SCALE covariance data files to propagate the cross-section uncertainties (Δσ/σ) to k_eff uncertainties (Δk/k) for each application model. The SCALE TSUNAMI-IP module was also used to evaluate the similarity of each of the 672 critical experiments with each application. Results of the uncertainty analysis and similarity assessment are presented in this report. A total of 142 experiments were judged to be similar to application 1, and 68 experiments were judged to be similar to application 2. None of the 672 experiments were judged to be adequately similar to applications 3 and 4. Discussion of the uncertainty analysis and similarity assessment is provided for each of the four applications.
Example upper subcritical limits (USLs) were generated for application 1 based on trending of the energy of average lethargy of neutrons causing fission, trending of the TSUNAMI similarity parameters, and use of data adjustment techniques.
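The propagation step described is the standard first-order "sandwich rule": the relative k_eff variance is S C Sᵀ for a sensitivity vector S and a relative cross-section covariance matrix C. A two-parameter numerical illustration (the values of S and C below are invented for illustration, not SCALE data):

```python
import numpy as np

# Hypothetical sensitivities: % change in k_eff per % change in each
# cross section (one positive, one negative contribution).
S = np.array([0.35, -0.12])

# Hypothetical relative covariance matrix of the two cross sections,
# (Δσ/σ)² on the diagonal, with a small positive correlation term.
C = np.array([[4.0e-4, 1.0e-4],
              [1.0e-4, 9.0e-4]])

# Sandwich rule: (Δk/k)² = S C Sᵀ.
dk_over_k = float(np.sqrt(S @ C @ S))
```

Here the two uncertainty contributions partially cancel through the off-diagonal term, giving a combined Δk/k of under one percent; TSUNAMI-IP performs the same propagation with thousands of nuclide-reaction-energy sensitivities.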
A "Cyber Wind Facility" for HPC Wind Turbine Field Experiments
NASA Astrophysics Data System (ADS)
Brasseur, James; Paterson, Eric; Schmitz, Sven; Campbell, Robert; Vijayakumar, Ganesh; Lavely, Adam; Jayaraman, Balaji; Nandi, Tarak; Jha, Pankaj; Dunbar, Alex; Motta-Mena, Javier; Craven, Brent; Haupt, Sue
2013-03-01
The Penn State "Cyber Wind Facility" (CWF) is a high-fidelity, multi-scale, high performance computing (HPC) environment in which "cyber field experiments" are designed and "cyber data" collected from wind turbines operating within the atmospheric boundary layer (ABL). Conceptually, the "facility" is akin to a high-tech wind tunnel with a controlled physical environment, but unlike a wind tunnel it replicates commercial-scale wind turbines operating in the field, forced by true atmospheric turbulence with controlled stability state. The CWF is built from state-of-the-art, high-accuracy geometry and grid design and numerical methods, with high-resolution simulation strategies that blend unsteady RANS near the surface with high-fidelity large-eddy simulation (LES) in separated boundary layer, blade and rotor wake regions, embedded within a high-resolution LES of the ABL. CWF experiments complement physical field-facility experiments, which can capture wider ranges of meteorological events but with minimal control over the environment and with very small numbers of sensors at low spatial resolution. I shall report on the first CWF experiments, aimed at dynamical interactions between ABL turbulence and space-time wind turbine loadings. Supported by DOE and NSF.
Emotional Intelligence of Malaysian Academia towards Work Performance
ERIC Educational Resources Information Center
Ngah, Rohana; Jusoff, Kamaruzaman; Rahman, Zanariah Abdul
2009-01-01
This paper describes research relating the emotional intelligence of university staff to work attitudes. The Emotional Intelligence (EI) Scale devised by Schutte et al. (1998) is used in this study, being more suitable than the BarOn Emotional Quotient Inventory. Besides their experiences, knowledge and skills, emotion plays an…
Process and Learning Outcomes from Remotely-Operated, Simulated, and Hands-on Student Laboratories
ERIC Educational Resources Information Center
Corter, James E.; Esche, Sven K.; Chassapis, Constantin; Ma, Jing; Nickerson, Jeffrey V.
2011-01-01
A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the…
USDA-ARS?s Scientific Manuscript database
To investigate the coupled effects of solution chemistry and vadose zone processes on the mobility of quantum dot (QD) nanoparticles, laboratory scale transport experiments were performed. The complex coupled effects of ionic strength, size of QD aggregates, surface tension, contact angle, infiltrat...
Combustion experiments in a laboratory-scale fixed bed reactor were performed to determine the role of temperature and time in PCDD/F formation allowing a global kinetic expression to be written for PCDD/F formation due to soot oxidation in fly ash deposits. Rate constants were c...
Three-Dimensional Scale-Model Tank Experiment of the Hudson Canyon Region
2013-09-30
methodology to infer information about seabed properties. This work is performed in collaboration with Dr. David Knobles (ARL:UT).
ERIC Educational Resources Information Center
Chyung, Seung Youn; Winiecki, Donald J.; Hunt, Gary; Sevier, Carol M.
2017-01-01
Team projects are increasingly used in engineering courses. Students may develop attitudes toward team projects from prior experience, and their attitudinal responses could influence their performance during team project-based learning in the future. Thus, instructors need to measure students' attitudes toward team projects during their learner…
Advances in time-scale algorithms
NASA Technical Reports Server (NTRS)
Stein, S. R.
1993-01-01
The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for time keeping in mission-critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable, and simple mean time scales are not used. First, previous time-scale developments are reviewed, and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the US Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
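The two-clock thought experiment above is easy to reproduce numerically; a minimal sketch (time units and the ±1 ppm offsets are illustrative):

```python
import numpy as np

# Two noiseless clocks with equal and opposite frequency offsets.
t = np.arange(100.0)             # true time, arbitrary units
clock_a = t * (1 + 1e-6)         # +1 ppm fast
clock_b = t * (1 - 1e-6)         # -1 ppm slow

# The simple mean time scale: offsets cancel exactly while both run.
mean_scale = (clock_a + clock_b) / 2

# Clock B fails at t = 50; the naive scale falls back to clock A alone,
# so the scale frequency jumps to the survivor's frequency.
failed_scale = np.where(t < 50, mean_scale, clock_a)

drift_before = failed_scale[49] - t[49]   # ~0: offsets cancel
drift_after = failed_scale[99] - t[99]    # survivor's accumulated drift
```

The drift is zero right up to the failure and then grows at the surviving clock's rate, which is exactly the step change in scale frequency that the text calls unacceptable.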
Graph processing platforms at scale: practices and experiences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Seung-Hwan; Lee, Sangkeun; Brown, Tyler C
2015-01-01
Graph analysis unveils hidden associations of data in many phenomena and artifacts, such as road networks, social networks, genomic information, and scientific collaboration. Unfortunately, the wide diversity in the characteristics of graphs and graph operations makes it challenging to find the right combination of tools and implementation of algorithms to discover the desired knowledge from a target data set. This study presents an extensive empirical study of three representative graph processing platforms: Pegasus, GraphX, and Urika. Each system represents a combination of options in data model, processing paradigm, and infrastructure. We benchmarked each platform using three popular graph operations, degree distribution, connected components, and PageRank, over a variety of real-world graphs. Our experiments show that each graph processing platform has different strengths depending on the type of graph operation. While Urika performs best in non-iterative operations like degree distribution, GraphX outperforms the others in iterative operations like connected components and PageRank. In addition, we discuss challenges in optimizing the performance of each platform over large-scale real-world graphs.
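The two classes of benchmarked operations behave very differently: degree distribution is a single pass over the edges, while PageRank iterates to a fixed point. A toy numpy sketch of both (the graph and damping factor below are illustrative, not the study's datasets):

```python
import numpy as np

# A small directed graph as an edge list (src, dst); every node here
# has at least one outgoing edge, so there are no dangling nodes.
edges = np.array([[0, 1], [0, 2], [1, 2], [2, 0], [3, 2]])
n = 4

# Degree distribution: non-iterative, one pass over the edges.
out_deg = np.bincount(edges[:, 0], minlength=n)

# PageRank: iterative power method with damping d = 0.85.
d, rank = 0.85, np.full(n, 1.0 / n)
for _ in range(100):
    new = np.full(n, (1 - d) / n)
    for src, dst in edges:
        new[dst] += d * rank[src] / out_deg[src]  # share rank along edges
    rank = new
```

The one-pass versus fixed-point distinction is what drives the observed platform differences: a single scan favors Urika's style of execution, while repeated supersteps favor GraphX.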
Moore, Jonathan W.; Carlson, Stephanie M.; Twardochleb, Laura A.; Hwan, Jason L.; Fox, Justin M.; Hayes, Sean A.
2012-01-01
Omnivores can impact ecosystems via opposing direct or indirect effects. For example, omnivores that feed on herbivores and plants could either increase plant biomass due to the removal of herbivores or decrease plant biomass due to direct consumption. Thus, empirical quantification of the relative importance of direct and indirect impacts of omnivores is needed, especially the impacts of invasive omnivores. Here we investigated how an invasive omnivore (signal crayfish, Pacifastacus leniusculus) impacts stream ecosystems. First, we performed a large-scale experiment to examine the short-term (three month) direct and indirect impacts of crayfish on a stream food web. Second, we performed a comparative study of un-invaded areas and areas invaded 90 years ago to examine whether patterns from the experiment scaled up to longer time frames. In the experiment, crayfish increased leaf litter breakdown rate, decreased the abundance and biomass of other benthic invertebrates, and increased algal production. Thus, crayfish controlled detritus via direct consumption and likely drove a trophic cascade through predation on grazers. Consistent with the experiment, the comparative study also found that benthic invertebrate biomass decreased with crayfish. However, contrary to the experiment, crayfish presence was not significantly associated with higher leaf litter breakdown in the comparative study. We posit that during invasion, generalist crayfish replace the more specialized native detritivores (caddisflies), thereby leading to little long-term change in net detrital breakdown. A feeding experiment revealed that these native detritivores and the crayfish were both effective consumers of detritus. Thus, the impacts of omnivores represent a temporally-shifting interplay between direct and indirect effects that can control basal resources. PMID:23209810
Modeling the effects of contrast enhancement on target acquisition performance
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Fanning, Jonathan D.
2008-04-01
Contrast enhancement and dynamic range compression are currently used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, making better use of the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well-contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight-target set using an uncooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and various contrast-enhancement-processed images. Contrast enhancement is modeled in the US Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance based on any improved target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.
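A minimal example of the global variant of such processing is plain histogram equalization, which stretches the occupied gray levels across the full dynamic range. This is a generic textbook sketch, not one of the NVESD algorithms evaluated in the paper; the toy image is an invented low-contrast example.

```python
def equalize(image, levels=256):
    """Global histogram equalization for a 2-D list of integer gray levels."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(flat)
    # Map each level to its cumulative frequency, stretched over the full range
    lut = [round((c / n) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in image]

# Low-contrast toy image occupying only levels 100-103 (illustrative)
img = [[100, 101], [102, 103]]
out = equalize(img)
```

The original 4-level spread is remapped onto nearly the whole 0-255 range, which is precisely the target-to-background contrast gain the model captures as a change in scene contrast temperature.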
Exploration–exploitation trade-off features a saltatory search behaviour
Volchenkov, Dimitri; Helbach, Jonathan; Tscherepanow, Marko; Kühnel, Sina
2013-01-01
Searching experiments conducted in different virtual environments over a gender-balanced group of people revealed a gender irrelevant scale-free spread of searching activity on large spatio-temporal scales. We have suggested and solved analytically a simple statistical model of the coherent-noise type describing the exploration–exploitation trade-off in humans (‘should I stay’ or ‘should I go’). The model exhibits a variety of saltatory behaviours, ranging from Lévy flights occurring under uncertainty to Brownian walks performed by a treasure hunter confident of the eventual success. PMID:23782535
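The contrast between the two saltatory regimes can be illustrated by their step-length distributions alone: Lévy flights draw heavy-tailed (e.g. Pareto) step lengths, while Brownian walks draw Gaussian ones. The sketch below uses made-up parameters and is not the coherent-noise model of the paper.

```python
import random

def brownian_steps(n, sigma=1.0, seed=1):
    """Absolute Gaussian step lengths: a Brownian-walk searcher."""
    rng = random.Random(seed)
    return [abs(rng.gauss(0.0, sigma)) for _ in range(n)]

def levy_steps(n, alpha=1.5, x_min=1.0, seed=1):
    """Pareto step lengths via inverse-transform sampling: heavy tail,
    so occasional very long relocations dominate the search."""
    rng = random.Random(seed)
    return [x_min * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

brown = brownian_steps(10_000)
levy = levy_steps(10_000)
```

The heavy tail shows up in the ratio of the largest step to the median step, which is orders of magnitude larger for the Lévy sampler, the signature of scale-free searching activity.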
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Yilin; Wilkins, Michael J.; Yabusaki, Steven B.
2012-12-12
Biomass and shotgun global proteomics data that reflected relative protein abundances from samples collected during the 2008 experiment at the U.S. Department of Energy Integrated Field-Scale Subsurface Research Challenge site in Rifle, Colorado, provided an unprecedented opportunity to validate a genome-scale metabolic model of Geobacter metallireducens and assess its performance with respect to prediction of metal reduction, biomass yield, and growth rate under dynamic field conditions. Reconstructed from annotated genomic sequence, biochemical, and physiological data, the constraint-based in silico model of G. metallireducens relates an annotated genome sequence to the physiological functions with 697 reactions controlled by 747 enzyme-coding genes. Proteomic analysis showed that 180 of the 637 G. metallireducens proteins detected during the 2008 experiment were associated with specific metabolic reactions in the in silico model. When the field-calibrated Fe(III) terminal electron acceptor process reaction in a reactive transport model for the field experiments was replaced with the genome-scale model, the model predicted that the largest metabolic fluxes through the in silico model reactions generally correspond to the highest abundances of proteins that catalyze those reactions. Central metabolism predicted by the model agrees well with protein abundance profiles inferred from proteomic analysis. Model discrepancies with the proteomic data, such as the relatively low fluxes through amino acid transport and metabolism, revealed pathways or flux constraints in the in silico model that could be updated to more accurately predict metabolic processes that occur in the subsurface environment.
NASA Astrophysics Data System (ADS)
Frolov, Sergey; Garau, Bartolame; Bellingham, James
2014-08-01
Regular grid ("lawnmower") survey is a classical strategy for synoptic sampling of the ocean. Is it possible to achieve a more effective use of available resources if one takes into account a priori knowledge about variability in magnitudes of uncertainty and decorrelation scales? In this article, we develop and compare the performance of several path-planning algorithms: optimized "lawnmower," a graph-search algorithm (A*), and a fully nonlinear genetic algorithm. We use the machinery of the best linear unbiased estimator (BLUE) to quantify the ability of a vehicle fleet to synoptically map distribution of phytoplankton off the central California coast. We used satellite and in situ data to specify covariance information required by the BLUE estimator. Computational experiments showed that two types of sampling strategies are possible: a suboptimal space-filling design (produced by the "lawnmower" and the A* algorithms) and an optimal uncertainty-aware design (produced by the genetic algorithm). Unlike the space-filling designs that attempted to cover the entire survey area, the optimal design focused on revisiting areas of high uncertainty. Results of the multivehicle experiments showed that fleet performance predictors, such as cumulative speed or the weight of the fleet, predicted the performance of a homogeneous fleet well; however, these were poor predictors for comparing the performance of different platforms.
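The BLUE machinery the authors use reduces, for a known covariance model, to a closed-form posterior error variance per candidate layout. The sketch below scores sample layouts on a 1-D transect with a synthetic exponential covariance; all positions, length scales, and noise values are illustrative, not the California covariance data.

```python
import math
import numpy as np

def exp_cov(xs, ys, sill=1.0, length=10.0):
    """Exponential covariance matrix between two lists of 1-D positions."""
    return np.array([[sill * math.exp(-abs(x - y) / length) for y in ys]
                     for x in xs])

def mean_posterior_variance(grid, samples, noise=0.01):
    """Mean BLUE (kriging-type) posterior variance over the grid."""
    c_ss = exp_cov(samples, samples) + noise * np.eye(len(samples))
    c_gs = exp_cov(grid, samples)
    # Variance reduction at grid point i is c_i^T C_ss^{-1} c_i
    reduction = np.einsum('ij,jk,ik->i', c_gs, np.linalg.inv(c_ss), c_gs)
    return float(np.mean(1.0 - reduction))   # prior variance = sill = 1

grid = [float(x) for x in range(0, 100, 2)]
v_sparse = mean_posterior_variance(grid, [10.0, 50.0, 90.0])
v_dense = mean_posterior_variance(grid, [10.0, 30.0, 50.0, 70.0, 90.0])
```

Adding the two extra stations strictly lowers the mapped uncertainty; a planner ("lawnmower", A*, or genetic) searches over such layouts, and an uncertainty-aware design concentrates revisits where the residual variance stays high.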
Self-consistency tests of large-scale dynamics parameterizations for single-column modeling
Edman, Jacob P.; Romps, David M.
2015-03-18
Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
Fine-Granularity Functional Interaction Signatures for Characterization of Brain Conditions
Hu, Xintao; Zhu, Dajiang; Lv, Peili; Li, Kaiming; Han, Junwei; Wang, Lihong; Shen, Dinggang; Guo, Lei; Liu, Tianming
2014-01-01
In the human brain, functional activity occurs at multiple spatial scales. Current studies on functional brain networks and their alterations in brain diseases via resting-state functional magnetic resonance imaging (rs-fMRI) are generally either at local scale (regionally confined analysis and inter-regional functional connectivity analysis) or at global scale (graph theoretic analysis). In contrast, inferring functional interaction at fine-granularity sub-network scale has not been adequately explored yet. Here our hypothesis is that functional interaction measured at fine-granularity subnetwork scale can provide new insight into the neural mechanisms of neurological and psychological conditions, thus offering complementary information for healthy and diseased population classification. In this paper, we derived fine-granularity functional interaction (FGFI) signatures in subjects with Mild Cognitive Impairment (MCI) and Schizophrenia by diffusion tensor imaging (DTI) and rsfMRI, and used patient-control classification experiments to evaluate the distinctiveness of the derived FGFI features. Our experimental results have shown that the FGFI features alone can achieve comparable classification performance compared with the commonly used inter-regional connectivity features. However, the classification performance can be substantially improved when FGFI features and inter-regional connectivity features are integrated, suggesting the complementary information achieved from the FGFI signatures. PMID:23319242
Large-scale seismic waveform quality metric calculation using Hadoop
NASA Astrophysics Data System (ADS)
Magana-Zook, S.; Gaylord, J. M.; Knapp, D. R.; Dodge, D. A.; Ruppert, S. D.
2016-09-01
In this work we investigated the suitability of Hadoop MapReduce and Apache Spark for large-scale computation of seismic waveform quality metrics by comparing their performance with that of a traditional distributed implementation. The Incorporated Research Institutions for Seismology (IRIS) Data Management Center (DMC) provided 43 terabytes of broadband waveform data of which 5.1 TB of data were processed with the traditional architecture, and the full 43 TB were processed using MapReduce and Spark. Maximum performance of 0.56 terabytes per hour was achieved using all 5 nodes of the traditional implementation. We noted that I/O dominated processing, and that I/O performance was deteriorating with the addition of the 5th node. Data collected from this experiment provided the baseline against which the Hadoop results were compared. Next, we processed the full 43 TB dataset using both MapReduce and Apache Spark on our 18-node Hadoop cluster. These experiments were conducted multiple times with various subsets of the data so that we could build models to predict performance as a function of dataset size. We found that both MapReduce and Spark significantly outperformed the traditional reference implementation. At a dataset size of 5.1 terabytes, both Spark and MapReduce were about 15 times faster than the reference implementation. Furthermore, our performance models predict that for a dataset of 350 terabytes, Spark running on a 100-node cluster would be about 265 times faster than the reference implementation. We do not expect that the reference implementation deployed on a 100-node cluster would perform significantly better than on the 5-node cluster because the I/O performance cannot be made to scale. 
Finally, we note that although Big Data technologies clearly provide a way to process seismic waveform datasets in a high-performance and scalable manner, the technology is still rapidly changing, requires a high degree of investment in personnel, and will likely require significant changes in other parts of our infrastructure. Nevertheless, we anticipate that as the technology matures and third-party tool vendors make it easier to manage and operate clusters, Hadoop (or a successor) will play a large role in our seismic data processing.
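The performance models mentioned above, runtime as a function of dataset size fitted from repeated runs on data subsets, can be sketched as an ordinary least-squares fit and extrapolation. The sizes and runtimes below are invented for illustration and are not the paper's measurements.

```python
# Fit runtime ≈ intercept + slope * size from benchmark runs on subsets,
# then extrapolate the linear scaling model to a larger dataset.
sizes = [1.0, 2.5, 5.1, 10.0, 20.0]      # dataset sizes in TB (illustrative)
runtimes = [0.9, 1.8, 3.4, 6.4, 12.5]    # wall-clock hours (illustrative)

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(runtimes) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, runtimes))
         / sum((x - mean_x) ** 2 for x in sizes))
intercept = mean_y - slope * mean_x

predicted_350tb = intercept + slope * 350.0   # extrapolated hours
```

Comparing such fitted curves for the Spark, MapReduce, and reference implementations is what yields statements like "265 times faster at 350 terabytes" without actually running the full workload on each system.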
Computational Fluid Dynamics (CFD) Simulations of Jet Mixing in Tanks of Different Scales
NASA Technical Reports Server (NTRS)
Breisacher, Kevin; Moder, Jeffrey
2010-01-01
For long-duration in-space storage of cryogenic propellants, an axial jet mixer is one concept for controlling tank pressure and reducing thermal stratification. Extensive ground-test data from the 1960s to the present exist for tank diameters of 10 ft or less. The design of axial jet mixers for tanks on the order of 30 ft diameter, such as those planned for the Ares V Earth Departure Stage (EDS) LH2 tank, will require scaling of available experimental data from much smaller tanks, as well as designing for microgravity effects. This study assesses the ability of Computational Fluid Dynamics (CFD) to handle a change of scale of this magnitude by performing simulations of existing ground-based axial jet mixing experiments at two tank sizes differing by a factor of ten. Simulations of several axial jet configurations for an Ares V scale EDS LH2 tank during low Earth orbit (LEO) coast are evaluated and selected results are also presented. Data from jet mixing experiments performed in the 1960s by General Dynamics with water at two tank sizes (1 and 10 ft diameter) are used to evaluate CFD accuracy. Jet nozzle diameters ranged from 0.032 to 0.25 in. for the 1 ft diameter tank experiments and from 0.625 to 0.875 in. for the 10 ft diameter tank experiments. Thermally stratified layers were created in both tanks prior to turning on the jet mixer. Jet mixer efficiency was determined by monitoring the temperatures on thermocouple rakes in the tanks to determine when the stratified layer was mixed out. Dye was frequently injected into the stratified tank and its penetration recorded. There were no velocities or turbulence quantities available in the experimental data. A commercially available, time-accurate, multi-dimensional CFD code with free-surface tracking (FLOW-3D from Flow Science, Inc.) is used for the simulations presented. Comparisons are made between computed temperatures at various axial locations in the tank at different times and those observed experimentally.
The effect of various modeling parameters on the agreement obtained is assessed.
NASA Astrophysics Data System (ADS)
Rajib, A.; Merwade, V.; Liu, Z.; Lane, C.; Golden, H. E.; Tavakoly, A. A.; Follum, M. L.
2017-12-01
There have been many initiatives to develop frameworks for continental-scale modeling and mapping of floodplain dynamics. The choice of a model for such needs should be governed by its suitability for execution on high-performance cyber platforms, its ability to integrate supporting hydraulic/hydrodynamic tools, and its ability to assimilate earth observations. Furthermore, dissemination of large volumes of outputs for public use and interoperability with similar frameworks should be considered. Considering these factors, we have conducted a series of modeling experiments and developed a suite of cyber-enabled platforms that have transformed the Soil and Water Assessment Tool (SWAT) into an appropriate model for use in a continental-scale, high-resolution, near real-time flood information framework. Our first experiment uses a medium-size watershed in Indiana, USA and attempts burning-in the high-resolution National Hydrography Dataset Plus (NHDPlus) into the SWAT model. This is crucial to make the outputs comparable with other global/national initiatives. The second experiment builds upon the first to add a modified landscape representation in the model which differentiates between upland and floodplain processes. Our third experiment involves two separate efforts: coupling SWAT with the hydrodynamic model LISFLOOD-FP and with AutoRoute, a new-generation, low-complexity hydraulic model. We have executed the prototype "loosely coupled" models for the Upper Mississippi-Ohio River Basin in the USA, encompassing 1 million square km of drainage area and nearly 0.2 million NHDPlus river reaches. The preliminary results suggest reasonable accuracy for both streamflow and flood inundation. 
In this presentation, we will also showcase three cyber-enabled platforms, including SWATShare to run and calibrate large scale SWAT models online using high performance computational resources, HydroGlobe to automatically extract and assimilate multiple remotely sensed earth observations in model sub-basins, and SWATFlow to visualize/download streamflow and flood inundation maps through an interactive interface. With all these transformational changes to enhance and support SWAT, it is expected that the model can be a sustainable alternative in the Global Flood Partnership program.
Ishizu, Hidenori; Sekiguchi, Toshio; Ikari, Takahiro; Kitamura, Kei-Ichiro; Kitani, Yoichiro; Endo, Masato; Urata, Makoto; Kinoshita, Yasuko; Hattori, Atsuhiko; Srivastav, Ajai K; Mishima, Hiroyuki; Mizusawa, Kanta; Takahashi, Akiyoshi; Suzuki, Nobuo
2018-06-01
We examined the effects of α-melanocyte-stimulating hormone (α-MSH) on bone metabolism using regenerating goldfish scales. Normally developed scales on the bodies of goldfish were removed to allow the regeneration of scales under anesthesia. Thereafter, the influence of α-MSH on the regeneration of goldfish scales was investigated in vivo. In brief, α-MSH was injected at a low dose (0.1 μg/g body weight) or a high dose (1 μg/g body weight) into goldfish every other day. Ten days after removing the scales, we collected regenerating scales and analyzed osteoblastic and osteoclastic activities as respective marker enzyme (alkaline phosphatase for osteoblasts, tartrate-resistant acid phosphatase for osteoclasts) activity in the regenerating scales as well as plasma calcium levels. At both doses, osteoblastic and osteoclastic activities in the regenerating scales increased significantly. Plasma calcium concentrations in the α-MSH-treated group (high dose) were significantly higher than those in the control group. Next, in vitro experiments were performed to confirm the results of the in vivo experiments. In the cultured regenerating scales, osteoblastic and osteoclastic activities significantly increased with α-MSH (10⁻⁷ and 10⁻⁶ M) treatment. In addition, real-time PCR analysis indicated that osteoclastogenesis in α-MSH-treated scales was induced by the receptor activator of NF-κB/receptor activator of NF-κB ligand/osteoprotegerin pathway. Furthermore, we found that α-MSH receptors (melanocortin receptors 4 and 5) were detected in the regenerating scales. Thus, in teleosts, we are the first to demonstrate that α-MSH functions in bone metabolism and promotes bone resorption via melanocortin receptors 4 and/or 5. Copyright © 2018 Elsevier Inc. All rights reserved.
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
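The core quantity in nuclear-norm matrix regression is the nuclear norm of the reconstruction-error image, and the multi-scale idea amounts to scoring that norm over patches of several sizes and fusing the results. Below is a minimal sketch with a random stand-in residual; the patch sizes and equal fusion weights are illustrative, not the paper's learned ensemble.

```python
import numpy as np

def nuclear_norm(m):
    """Sum of singular values: the norm that promotes low-rank residuals."""
    return float(np.linalg.svd(m, compute_uv=False).sum())

def patch_nuclear_norms(residual, patch):
    """Nuclear norm of each non-overlapping patch x patch block."""
    h, w = residual.shape
    return [nuclear_norm(residual[i:i + patch, j:j + patch])
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

rng = np.random.default_rng(0)
residual = rng.normal(size=(8, 8))    # stand-in reconstruction-error image
# Scores at two patch scales; a multi-scale scheme weights and fuses these
coarse = patch_nuclear_norms(residual, 8)   # one 8x8 patch (global)
fine = patch_nuclear_norms(residual, 4)     # four 4x4 patches (local)
fused = 0.5 * float(np.mean(coarse)) + 0.5 * float(np.mean(fine))
```

A contiguous occlusion inflates the nuclear norm of the global patch and of the few local patches it covers, which is why combining scales is more robust than committing to a single patch size.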
JET DT Scenario Extrapolation and Optimization with METIS
NASA Astrophysics Data System (ADS)
Urban, Jakub; Jaulmes, Fabien; Artaud, Jean-Francois
2017-10-01
Prospective JET (Joint European Torus) DT operation scenarios are modelled by the fast integrated code METIS. METIS combines scaling laws, e.g. for global and pedestal energy or density peaking, with simplified transport and source models, while retaining fundamental nonlinear couplings, in particular in the fusion power. We have tuned METIS parameters to match JET-ILW high-performance experiments, including baseline and hybrid. Based on recent observations, we assume a weaker input power scaling than IPB98 and a 10% confinement improvement due to the higher ion mass. The rapidity of METIS is utilized to scan the performance of JET DT scenarios with respect to fundamental parameters, such as plasma current, magnetic field, density or heating power. Simplified, easily parameterized waveforms are used to study the effect of ramp-up speed or heating timing. Finally, an efficient Bayesian optimizer is employed to seek the most performant scenarios in terms of fusion power or gain.
Source localization in electromyography using the inverse potential problem
NASA Astrophysics Data System (ADS)
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2011-02-01
We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.
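The "careful regularization technique which adds a priori information" is typically of Tikhonov type: minimize ||Ax − b||² + λ||Lx||². A small self-contained sketch on a synthetic near-singular system is given below; the matrix, noise level, and λ are illustrative, and the paper's functionals encode EMG-specific priors rather than this generic identity penalty.

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve min ||Ax - b||^2 + lam * ||L x||^2 via the normal equations."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)               # zeroth-order: penalize solution size
    lhs = A.T @ A + lam * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ b)

# Ill-conditioned toy system (illustrative)
rng = np.random.default_rng(42)
A = rng.normal(size=(20, 10))
A[:, 1] = A[:, 0] + 1e-8 * rng.normal(size=20)   # near-duplicate column
x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=20)

x_reg = tikhonov(A, b, lam=1e-3)                  # regularized solution
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]       # plain least squares
```

The plain least-squares solution is blown up by noise along the near-singular direction, while the regularized one stays physically reasonable, the behavior the regularization functionals are chosen to enforce.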
Composting in small laboratory pilots: performance and reproducibility.
Lashermes, G; Barriuso, E; Le Villio-Poitrenaud, M; Houot, S
2012-02-01
Small-scale reactors (<10 l) have been employed in composting research, but few attempts have assessed the performance of composting considering the transformations of organic matter. Moreover, composting at small scales is often performed by imposing a fixed temperature, thus creating artificial conditions, and the reproducibility of composting has rarely been reported. The objectives of this study are to design an innovative small-scale composting device safeguarding self-heating to drive the composting process and to assess the performance and reproducibility of composting in small-scale pilots. The experimental setup included six 4-l reactors used for composting a mixture of sewage sludge and green wastes. The performance of the process was assessed by monitoring the temperature, O₂ consumption and CO₂ emissions, and characterising the biochemical evolution of organic matter. Good reproducibility was found for the six replicates, with coefficients of variation for all parameters generally lower than 19%. An intense self-heating ensured the existence of a spontaneous thermophilic phase in all reactors. The average loss of total organic matter (TOM) was 46% of the initial content. Compared to the initial mixture, the hot-water-soluble fraction decreased by 62%, the hemicellulose-like fraction by 68%, the cellulose-like fraction by 50% and the lignin-like fraction by 12% in the final compost. The TOM losses, compost stabilisation and evolution of the biochemical fractions were similar to those observed in large reactors or on-site experiments, except for lignin degradation, which was less extensive than in full-scale systems. The reproducibility of the process and the quality of the final compost make it possible to propose the use of this experimental device for research requiring a mass reduction of the initial composted waste mixtures. Copyright © 2011 Elsevier Ltd. All rights reserved.
Modified Moral Distress Scale (MDS-11): Validation Study Among Italian Nurses.
Badolamenti, Sondra; Fida, Roberto; Biagioli, Valentina; Caruso, Rosario; Zaghini, Francesco; Sili, Alessandro; Rea, Teresa
2017-01-01
Moral distress (MD) has significant implications for individual and organizational health. However, there is a lack of an instrument to assess it among Italian nurses. The main aim of this study was to validate a brief instrument to assess MD, developed from Corley's Moral Distress Scale (MDS). The modified MDS was subjected to content and cultural validity assessment. The scale was administered to 347 nurses. Psychometric analyses were performed to assess construct validity. The scale consists of 11 items, investigating MD in nursing practice in different clinical settings. The dimensionality of the scale was investigated through exploratory factor analysis (EFA), which showed a two-dimensional structure labeled futility and potential damage. Futility refers to feelings of powerlessness and ineffectiveness in some clinical situations; the potential damage dimension captures feelings of powerlessness when nurses are forced to tolerate or perform clinical proceedings perceived as abusive. Nurses who experienced higher MD were more likely to experience burnout. The modified MDS showed good psychometric properties, and it is valid and reliable for assessing moral distress among Italian nurses. Hence, the modified MDS allows monitoring of the distress experienced by nurses and is an important contribution to the scientific community and all those dealing with the well-being of health workers.
NASA Astrophysics Data System (ADS)
Javaherchi, Teymour; Stelzenmuller, Nick; Seydel, Joseph; Aliseda, Alberto
2013-11-01
We investigate, through a combination of scale model experiments and numerical simulations, the evolution of the flow field around the rotor and in the wake of Marine Hydrokinetic (MHK) turbines. Understanding the dynamics of this flow field is the key to optimizing the energy conversion of single devices and the arrangement of turbines in commercially viable arrays. This work presents a comparison between numerical and experimental results from two different case studies of scaled horizontal axis MHK turbines (45:1 scale). In the first case study, we investigate the effect of Reynolds number (Re = 40,000 to 100,000) and Tip Speed Ratio (TSR = 5 to 12) variation on the performance and wake structure of a single turbine. In the second case, we study the effect of the turbine downstream spacing (5d to 14d) on the performance and wake development in a coaxial configuration of two turbines. These results provide insights into the dynamics of Horizontal Axis Hydrokinetic Turbines, and by extension to Horizontal Axis Wind Turbines in close proximity to each other, and highlight the capabilities and limitations of the numerical models. Once validated at laboratory scale, the numerical model can be used to address other aspects of MHK turbines at full scale. Supported by DOE through the National Northwest Marine Renewable Energy Center.
Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard
2013-09-06
Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample-scarcity or due to duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, including standard t test, moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. 
This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
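The rank product statistic itself is simple: rank features within each replicate, then take the geometric mean of the ranks across replicates; a missing-value-tolerant variant averages over only the replicates in which a feature was observed. The sketch below is a minimal illustration with invented toy data; it omits the permutation-based significance step and is not the authors' released R implementation.

```python
import math

def rank_products(replicates):
    """replicates: list of dicts feature -> value (features may be missing).

    Returns feature -> geometric mean of within-replicate ranks
    (rank 1 = smallest value, i.e. most down-regulated), computed over
    the replicates in which the feature was observed.
    """
    log_ranks = {}
    for replicate in replicates:
        ordered = sorted(replicate, key=replicate.get)  # ascending by value
        for rank, feat in enumerate(ordered, start=1):
            log_ranks.setdefault(feat, []).append(math.log(rank))
    return {f: math.exp(sum(v) / len(v)) for f, v in log_ranks.items()}

# Three replicates; "g1" is consistently the most down-regulated feature
# and is missing from the third replicate (illustrative values)
reps = [{"g1": -3.0, "g2": 0.1, "g3": 0.4},
        {"g1": -2.5, "g2": 0.2, "g3": -0.1},
        {"g2": 0.3, "g3": 0.2}]
rp = rank_products(reps)
```

A consistently extreme feature keeps a rank product near 1 even with a missing observation, which is why the statistic remains usable on the incomplete data sets the study targets.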
Ho, Andy H Y; Potash, Jordan S; Fong, Ted C T; Ho, Vania F L; Chen, Eric Y H; Lau, Robert H W; Au Yeung, Friendly S W; Ho, Rainbow T H
2015-01-01
Stigma of mental illness is a global public health concern, but a standardized and cross-culturally validated instrument for assessing the complex experience of stigma among people living with mental illness (PLMI) in the Chinese context has been lacking. This study examines the psychometric properties of a Chinese version of the Stigma Scale (CSS), and explores the relationships between stigma, self-esteem and depression. A cross-sectional survey was conducted with a community sample of 114 Chinese PLMI in Hong Kong. Participants completed the CSS, the Chinese Self-Stigma of Mental Illness Scale, the Chinese Rosenberg Self-Esteem Scale, and the Chinese Patient Health Questionnaire-9. An exploratory factor analysis was conducted to identify the underlying factors of the CSS; concurrent validity assessment was performed via correlation analysis. The original 28-item three-factor structure of the Stigma Scale was found to be a poor fit to the data, whereas a revised 14-item three-factor model provided a good fit, with all 14 items loading significantly onto the original factors: discrimination, disclosure and positive aspects of mental illness. The revised model also displayed moderate to good internal consistency and good construct validity. Further findings revealed that the total stigma scale score and all three of its subscale scores correlated negatively with self-esteem, but only total stigma, discrimination and disclosure correlated positively with depression. The CSS is a short and user-friendly self-administered questionnaire that proves valuable for understanding the multifaceted stigma experiences among PLMI as well as their impact on psychiatric recovery and community integration in Chinese communities. Copyright © 2014 Elsevier Inc. All rights reserved.
Fluvial experiments using inertial sensors.
NASA Astrophysics Data System (ADS)
Maniatis, Georgios; Valyrakis, Manousos; Hodge, Rebecca; Drysdale, Tim; Hoey, Trevor
2017-04-01
During the last four years we have reported results on the development of a smart pebble, constructed and calibrated specifically for capturing the dynamics of coarse sediment motion in river beds at the grain scale. In this presentation we report details of our experimental validation across a range of flow regimes. The smart pebble contains Inertial Measurement Units (IMUs): sensors capable of recording the inertial acceleration and the angular velocity of the rigid bodies to which they are attached. IMUs are available across a range of performance levels, with commensurate increases in size, cost and performance as one progresses from integrated-circuit devices used in commercial applications such as gaming and mobile phones, to larger brick-sized systems sometimes found in industrial applications such as vibration monitoring and quality control, or even the rack-mount equipment used in some aerospace and navigation applications (which can go as far as to include lasers and optical components). In parallel with developments in commercial and industrial settings, geomorphologists have recently begun to explore means of deploying IMUs in smart pebbles. The less-expensive, chip-scale IMUs have been shown to have adequate performance for this application, as well as offering a sufficiently compact form factor. Four prototype sensors have been developed so far, and the latest (400 g acceleration range, 50-200 Hz sampling frequency) has been tested in fluvial laboratory experiments. We present results from three experimental regimes designed to evaluate this sensor: (a) an entrainment threshold experiment; (b) a bed impact experiment; and (c) a rolling experiment. All experiments used a 100 mm spherical sensor, and set (a) was repeated using an equivalent-size elliptical sensor. The experiments were conducted in the fluvial laboratory of the University of Glasgow (0.9 m wide flume) under different hydraulic conditions.
The use of IMUs enables direct parametrization of the inertial forces on grains, which for the tested grain sizes were, as expected, always comparable to the independently measured hydrodynamic forces. However, the validity of IMU measurements is subject to specific design, processing and experimental considerations, and we present the results of our analysis of these.
Lin, Wei-Quan; Wu, Jiang; Yuan, Le-Xin; Zhang, Sheng-Chao; Jing, Meng-Juan; Zhang, Hui-Shan; Luo, Jia-Li; Lei, Yi-Xiong; Wang, Pei-Xi
2015-01-01
Objective: To explore the impact of workplace violence on the job performance and quality of life of community healthcare workers in China, and especially the relationships among these three variables. Methods: From December 2013 to April 2014, a total of 1404 healthcare workers were recruited by random cluster sampling from Community Health Centers in Guangzhou and Shenzhen. The workplace violence scale, the job performance scale and the quality of life scale (SF-36) were self-administered. A structural equation model constructed in Amos 17.0 was employed to assess the relationships among these variables. Results: Our study found that 51.64% of the respondents had experienced workplace violence. Both job performance and quality of life correlated negatively with workplace violence, and a positive association was identified between job performance and quality of life. The path analysis showed that the total effect (β = −0.243) of workplace violence on job performance consisted of a direct effect (β = −0.113) and an indirect effect (β = −0.130), which was mediated by quality of life. Conclusions: Workplace violence against community healthcare workers is prevalent in China and had negative effects on their job performance and quality of life. The study suggests that improving quality of life may effectively reduce the damage to job performance caused by workplace violence. PMID:26610538
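The path coefficients in the record above decompose additively. As a quick arithmetic check, using only the numbers quoted in the abstract:

```python
# Effect decomposition from the abstract's path analysis:
# total effect = direct effect + indirect effect (mediated by quality of life).
direct = -0.113    # workplace violence -> job performance, direct path
indirect = -0.130  # workplace violence -> quality of life -> job performance
total = round(direct + indirect, 3)
print(total)  # -0.243, matching the reported total effect
```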
NASA Technical Reports Server (NTRS)
Swanson, Gregory T.; Cassell, Alan M.
2011-01-01
Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles are reached. The Inflatable Re-entry Vehicle Experiment (IRVE) successfully demonstrated this technology as a viable candidate with a sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large-scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large-scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.
Channel optimization of high-intensity laser beams in millimeter-scale plasmas.
Ceurvorst, L; Savin, A; Ratan, N; Kasim, M F; Sadler, J; Norreys, P A; Habara, H; Tanaka, K A; Zhang, S; Wei, M S; Ivancic, S; Froula, D H; Theobald, W
2018-04-01
Channeling experiments were performed at the OMEGA EP facility using relativistic-intensity (>10^18 W/cm^2) kilojoule laser pulses through large density-scale-length (∼390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.
Turbulence measurements in high Reynolds number boundary layers
NASA Astrophysics Data System (ADS)
Vallikivi, Margit; Smits, Alexander
2013-11-01
Measurements are conducted in zero pressure gradient turbulent boundary layers for Reynolds numbers from Reθ = 9,000 to 225,000. The experiments were performed in the High Reynolds number Test Facility (HRTF) at Princeton University, which uses compressed air as the working fluid. Nano-Scale Thermal Anemometry Probes (NSTAPs) are used to acquire data with very high spatial and temporal precision. These new data are used to study the scaling behavior of the streamwise velocity fluctuations in the boundary layer and make comparisons with the scaling of other wall-bounded turbulent flows. Supported under ONR Grant N00014-09-1-0263 (program manager Ron Joslin) and NSF Grant CBET-1064257 (program manager Henning Winter).
Scaling dependence and synchronization of forced mercury beating heart systems
NASA Astrophysics Data System (ADS)
Biswas, Animesh; Das, Dibyendu; Parmananda, P.
2017-04-01
We perform experiments on a nonautonomous mercury beating heart system, which is forced to pulsate using an external square-wave potential. At suitable frequencies and volumes, the drop exhibits pulsation with polygonal shapes having n corners. We find the scaling dependence of the forcing frequency ν_n on the volume V of the drop and establish the relationship ν_n ∝ n/√V. It is shown that the geometrical shape of the substrate is important for obtaining a closer match to these scaling relationships. Furthermore, we study synchronization of two nonidentical drops driven by the same frequency and establish that synchrony occurs when the relationship n_2/n_1 = √(V_2/V_1) is satisfied.
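The synchronization condition above follows from requiring the same forcing frequency for both drops. A minimal sketch applying it (the function name and units are illustrative; the relation itself is the abstract's):

```python
import math

def matched_mode(n1, v1, v2):
    """Mode number n2 at which a drop of volume v2 synchronizes with a
    drop of volume v1 pulsating in mode n1, via n2/n1 = sqrt(v2/v1).

    Follows from equal forcing frequency: nu_n is proportional to
    n/sqrt(V), so n1/sqrt(v1) = n2/sqrt(v2)."""
    return n1 * math.sqrt(v2 / v1)

# Quadrupling the volume doubles the matched mode number.
print(matched_mode(3, 1.0, 4.0))  # 6.0
```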
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Meyer, P. J.
1984-01-01
Structure and correlation functions are used to describe atmospheric variability during the 10-11 April day of AVE-SESAME 1979 that coincided with the Red River Valley tornado outbreak. The special mesoscale rawinsonde data are employed in calculations involving temperature, geopotential height, horizontal wind speed and mixing ratio. Functional analyses are performed in both the lower and upper troposphere for the composite 24 h experiment period and at individual 3 h observation times. Results show that mesoscale features are prominent during the composite period. Fields of mixing ratio and horizontal wind speed exhibit the greatest amounts of small-scale variance, whereas temperature and geopotential height contain the least. Results for the nine individual times show that small-scale variance is greatest during the convective outbreak. The functions also are used to estimate random errors in the rawinsonde data. Finally, sensitivity analyses are presented to quantify confidence limits of the structure functions.
High Resolution Model Intercomparison Project (HighResMIP v1.0) for CMIP6
NASA Astrophysics Data System (ADS)
Haarsma, Reindert J.; Roberts, Malcolm J.; Vidale, Pier Luigi; Senior, Catherine A.; Bellucci, Alessio; Bao, Qing; Chang, Ping; Corti, Susanna; Fučkar, Neven S.; Guemas, Virginie; von Hardenberg, Jost; Hazeleger, Wilco; Kodama, Chihiro; Koenigk, Torben; Leung, L. Ruby; Lu, Jian; Luo, Jing-Jia; Mao, Jiafu; Mizielinski, Matthew S.; Mizuta, Ryo; Nobre, Paulo; Satoh, Masaki; Scoccimarro, Enrico; Semmler, Tido; Small, Justin; von Storch, Jin-Song
2016-11-01
Robust projections and predictions of climate variability and change, particularly at regional scales, rely on the driving processes being represented with fidelity in model simulations. The role of enhanced horizontal resolution in improved process representation in all components of the climate system is of growing interest, particularly as some recent simulations suggest both the possibility of significant changes in large-scale aspects of circulation as well as improvements in small-scale processes and extremes. However, such high-resolution global simulations at climate timescales, with resolutions of at least 50 km in the atmosphere and 0.25° in the ocean, have been performed at relatively few research centres and generally without overall coordination, primarily due to their computational cost. Assessing the robustness of the response of simulated climate to model resolution requires a large multi-model ensemble using a coordinated set of experiments. The Coupled Model Intercomparison Project 6 (CMIP6) is the ideal framework within which to conduct such a study, due to the strong link to models being developed for the CMIP DECK experiments and other model intercomparison projects (MIPs). Increases in high-performance computing (HPC) resources, as well as the revised experimental design for CMIP6, now enable a detailed investigation of the impact of increased resolution up to synoptic weather scales on the simulated mean climate and its variability. The High Resolution Model Intercomparison Project (HighResMIP) presented in this paper applies, for the first time, a multi-model approach to the systematic investigation of the impact of horizontal resolution. A coordinated set of experiments has been designed to assess both a standard and an enhanced horizontal-resolution simulation in the atmosphere and ocean. 
The set of HighResMIP experiments is divided into three tiers consisting of atmosphere-only and coupled runs and spanning the period 1950-2050, with the possibility of extending to 2100, together with some additional targeted experiments. This paper describes the experimental set-up of HighResMIP, the analysis plan, the connection with the other CMIP6 endorsed MIPs, as well as the DECK and CMIP6 historical simulations. HighResMIP thereby focuses on one of the CMIP6 broad questions, "what are the origins and consequences of systematic model biases?", but we also discuss how it addresses the World Climate Research Program (WCRP) grand challenges.
NASA Astrophysics Data System (ADS)
Clark, Daniel
2015-11-01
In order to achieve the several hundred Gbar stagnation pressures necessary for inertial confinement fusion ignition, implosion experiments on the National Ignition Facility (NIF) require the compression of deuterium-tritium fuel layers by a convergence ratio as high as forty. Such high convergence implosions are subject to degradation by a range of perturbations, including the growth of small-scale defects due to hydrodynamic instabilities, as well as longer scale modulations due to radiation flux asymmetries in the enclosing hohlraum. Due to the broad range of scales involved, and also the genuinely three-dimensional (3-D) character of the flow, accurately modeling NIF implosions remains at the edge of current radiation hydrodynamics simulation capabilities. This talk describes the current state of progress of 3-D, high-resolution, capsule-only simulations of NIF implosions aimed at accurately describing the performance of specific NIF experiments. Current simulations include the effects of hohlraum radiation asymmetries, capsule surface defects, the capsule support tent and fill tube, and use a grid resolution shown to be converged in companion two-dimensional simulations. The results of detailed simulations of low foot implosions from the National Ignition Campaign are contrasted against results for more recent high foot implosions. While the simulations suggest that low foot performance was dominated by ablation front instability growth, especially the defect seeded by the capsule support tent, high foot implosions appear to be dominated by hohlraum flux asymmetries, although the support tent still plays a significant role. Most importantly, it is found that a single, standard simulation methodology appears adequate to model both implosion types and gives confidence that such a model can be used to guide future implosion designs toward ignition. This work performed under the auspices of the U.S. 
Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data
NASA Astrophysics Data System (ADS)
Hua, H.
2016-12-01
Next-generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are orders of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require fast turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the processing needs. Additionally, traditional on-premise hardware procurement is already limited by facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments: at large cloud scales, we need to deal with scaling and cost issues. We present our experiences deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing, as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment driven by market forces.
Track-based event recognition in a realistic crowded environment
NASA Astrophysics Data System (ADS)
van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.
2014-10-01
Automatic detection of abnormal behavior in CCTV cameras is important for improving security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), by the pickpocket following the victim, or by interaction with an accomplice before and after the incident (longer time scale). This paper focuses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single-track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup, where 10 actors perform these actions. The method is also applied to all tracks generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false-positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assist operators in finding threatening behavior and enrich the selection of videos to be observed.
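The pipeline above ends in rule-based classification over track features. As a toy sketch of that final step (thresholds and feature choice are invented here for illustration, not taken from the paper):

```python
def classify_track(points, fps=10.0, run_speed=2.0, stop_speed=0.2):
    """Toy rule-based classifier for a single pedestrian track.

    points: list of (x, y) positions in metres, one per frame.
    Speed thresholds (m/s) are illustrative defaults, not the paper's.
    Returns one of 'stop', 'walk', 'run'.
    """
    if len(points) < 2:
        return "stop"
    # Mean speed: total path length divided by elapsed time.
    dist = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
    speed = dist * fps / (len(points) - 1)
    if speed < stop_speed:
        return "stop"
    return "run" if speed > run_speed else "walk"

print(classify_track([(0, 0), (0.1, 0), (0.2, 0)]))  # walk
```

A real system would add features such as heading changes (for "turn") and dwell time (for "loiter"), plus pairwise track features for the interactions.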
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing near-infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development approach (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop near-infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
Rethinking key–value store for parallel I/O optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kougkas, Anthony; Eslami, Hassan; Sun, Xian-He
2015-01-26
Key-value stores are widely used as the storage system for large-scale internet services and cloud storage systems. However, they are rarely used in HPC systems, where parallel file systems are the dominant storage solution. In this study, we examine the architectural differences and performance characteristics of parallel file systems and key-value stores. We propose using key-value stores to optimize overall Input/Output (I/O) performance, especially for workloads that parallel file systems cannot handle well, such as those with intense data synchronization or heavy metadata operations. We conducted experiments with several synthetic benchmarks, an I/O benchmark, and a real application. We modeled the performance of these two systems using data collected from our experiments, and we provide a predictive method to identify which system offers better I/O performance for a given workload. The results show that we can optimize I/O performance in HPC systems by utilizing key-value stores.
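The abstract does not specify the form of the predictive method, so purely as an illustrative sketch (the weights, thresholds, and features below are invented, not the paper's fitted model), a workload-based decision rule might look like:

```python
def better_io_system(metadata_frac, sync_ops_per_gb, sequential_frac):
    """Toy decision rule in the spirit of the study's predictive method.
    All weights and thresholds here are invented for illustration; the
    paper's actual model is fit to measured benchmark data.

    Returns 'key-value store' for metadata- and synchronization-heavy
    workloads, 'parallel file system' for large sequential I/O.
    """
    score = (0.6 * metadata_frac                      # metadata pressure
             + 0.4 * min(sync_ops_per_gb / 100.0, 1)  # sync intensity, capped
             - 0.5 * sequential_frac)                 # PFS strength
    return "key-value store" if score > 0 else "parallel file system"

print(better_io_system(0.8, 200, 0.1))  # metadata-heavy workload
print(better_io_system(0.05, 5, 0.9))   # streaming sequential workload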
Numerical investigation of design and operation parameters on CHI spheromak performance
NASA Astrophysics Data System (ADS)
O'Bryan, J. B.; Romero-Talamás, C. R.; Woodruff, S.
2017-10-01
Nonlinear, numerical computation with the NIMROD code is used to explore magnetic self-organization in spheromaks formed with coaxial helicity injection, particularly with regard to how externally controllable parameters affect the resulting spheromak performance. The overall goal of our study is to inform the design and operational parameters of a future proof-of-principle spheromak experiment. Our calculations start from vacuum magnetic fields and model multiple distinct phases of evolution. Results indicate that modest changes to the design and operation of past experiments, e.g. SSPX [E.B. Hooper et al. PPCF 2012], could have significantly improved the plasma-current injector coupling efficiency and performance, particularly with respect to peak temperature and lifetime. While we frequently characterize performance relative to SSPX, our conclusions extrapolate to fundamentally different experimental designs. We also explore adiabatic magnetic compression of spheromaks, which may allow for a small-scale, high-performance and high-yield pulsed neutron source. This work is supported by DARPA under Grant No. N66001-14-1-4044.
Performance evaluation of bimodal thermite composites: nano- vs. micron-scale particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, K. M.; Pantoya, M.; Son, S. F.
2004-01-01
In recent years many studies of metastable interstitial composites (MIC) have shown vast combustion improvements over traditional thermite materials. The main difference between these two materials is the size of the fuel particles in the mixture. Decreasing the fuel size from the micron to the nanometer range significantly increases the combustion wave speed and ignition sensitivity. Little is known, however, about the critical level of nano-sized fuel particles needed to enhance the performance of the traditional thermite. Ignition sensitivity experiments were performed using Al/MoO3 pellets at a theoretical maximum density of 50% (2 g/cm^3). The Al fuel particles were prepared as bimodal size distributions with micron-scale (i.e., 4 and 20 μm diameter) and nano-scale Al particles. The micron-scale Al was replaced in 10% increments by 80 nm Al particles until the fuel was 100% 80 nm Al. These bimodal distributions allow the unique characteristics of nano-scale materials to be better understood. The pellets were ignited using a 50-W CO2 laser. High-speed imaging diagnostics were used to measure ignition delay times, and micro-thermocouples were used to measure ignition temperatures. Combustion wave speeds were also examined.
A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.
Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang
2016-04-01
Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
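The SSP approach above builds on link-based ranking. For reference, standard PageRank by power iteration (plain PageRank only, not the paper's semi-supervised variant) can be sketched as:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Standard PageRank via power iteration on a dense adjacency matrix.
    adj[i][j] = 1 if node i links to node j. Dangling nodes distribute
    their mass uniformly. (Not the semi-supervised SSP method.)"""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling rows become uniform.
    P = np.where(out[:, None] > 0, A / np.maximum(out, 1)[:, None], 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - damping) / n + damping * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Three-node chain 0 -> 1 -> 2: rank accumulates toward node 2.
ranks = pagerank([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
```

Semi-supervised variants such as SSP replace the uniform teleportation and fixed edge weights with parameterized terms learned from node/edge features and prior knowledge.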
Capillary Driven Flows Along Differentially Wetted Interior Corners
NASA Technical Reports Server (NTRS)
Golliher, Eric L. (Technical Monitor); Nardin, C. L.; Weislogel, M. M.
2005-01-01
Closed-form analytic solutions useful for the design of capillary flows in a variety of containers possessing interior corners were recently collected and reviewed. Low-g drop tower and aircraft experiments performed at NASA to date show excellent agreement between theory and experiment for perfectly wetting fluids. The analytical expressions are general in terms of contact angle, but do not account for variations in contact angle between the various surfaces within the system. Such conditions may be desirable for capillary containment or to compute the behavior of capillary corner flows in containers consisting of different materials with widely varying wetting characteristics. A simple coordinate rotation is employed to recast the governing system of equations for flows in containers with interior corners with differing contact angles on the faces of the corner. The result is that a large number of capillary driven corner flows may be predicted with only slightly modified geometric functions dependent on corner angle and the two (or more) contact angles of the system. A numerical solution is employed to verify the new problem formulation. The benchmarked computations support the use of the existing theoretical approach to geometries with variable wettability. Simple experiments to confirm the theoretical findings are recommended. Favorable agreement between such experiments and the present theory may argue well for the extension of the analytic results to predict fluid performance in future large length scale capillary fluid systems for spacecraft as well as for small scale capillary systems on Earth.
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2017-01-31
We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.
Hsu, Li-Ling; Hsieh, Suh-Ing
2011-11-01
This article is a report of a quasi-experimental study of the effects of blended modules on nursing students' learning of ethics course content. There is not yet an empirically supported mix of strategies on which a working blended learning model can be built for nursing education. This was a two-group pretest and post-test quasi-experimental study conducted in 2008 with a total of 233 students. Two of the five clusters were designated the experimental group and experienced a blended learning model; the rest were designated the control group and were given classroom lectures only. The Case Analysis Attitude Scale, Case Analysis Self-Evaluation Scale, Blended Learning Satisfaction Scale, and Metacognition Scale were used at pretest and post-test for the students to rate their own performance. In this study, the experimental group did not register significantly higher mean scores on the Case Analysis Attitude Scale at post-test, or higher mean ranks on the Case Analysis Self-Evaluation Scale, the Blended Learning Satisfaction Scale, and the Metacognition Scale at post-test, than the control group. However, the experimental group registered significant progress in mean ranks on the Case Analysis Self-Evaluation Scale and the Metacognition Scale from pretest to post-test. No between-subjects effects on the four scales at post-test were found. Newly developed course modules, whether blended learning or a combination of traditional and innovative components, should be tested repeatedly for effectiveness and popularity in order to facilitate the eventual creation of a maximally effective course module for nursing education. © 2011 Blackwell Publishing Ltd.
Measured Boundary Layer Transition and Rotor Hover Performance at Model Scale
NASA Technical Reports Server (NTRS)
Overmeyer, Austin D.; Martin, Preston B.
2017-01-01
An experiment involving a Mach-scaled, 11.08 ft diameter rotor was performed in hover during the summer of 2016 at NASA Langley Research Center. The experiment investigated hover performance as a function of the laminar-to-turbulent transition state of the boundary layer, including both natural and fixed transition cases. The boundary layer transition locations were measured on both the upper and lower aerodynamic surfaces simultaneously. The measurements were enabled by recent advances in infrared sensor sensitivity and stability. The infrared thermography measurement technique was enhanced by a paintable blade surface heater, as well as a new high-sensitivity long-wave infrared camera. The measured transition locations showed extensive amounts (x/c > 0.90) of laminar flow on the lower surface at moderate to high thrust (CT/σ > 0.068) for the full blade radius. The upper surface showed large amounts (x/c > 0.50) of laminar flow at the blade tip for low thrust (CT/σ < 0.045). The objective of this paper is to provide an experimental data set for comparisons to newly developed and implemented rotor boundary layer transition models in CFD and rotor design tools. The data is expected to be used as part of the AIAA Rotorcraft Simulation Working Group.
A Reactor Development Scenario for the FuZE Sheared-Flow Stabilized Z-pinch
NASA Astrophysics Data System (ADS)
McLean, Harry S.; Higginson, D. P.; Schmidt, A.; Tummel, K. K.; Shumlak, U.; Nelson, B. A.; Claveau, E. L.; Forbes, E. G.; Golingo, R. P.; Stepanov, A. D.; Weber, T. R.; Zhang, Y.
2017-10-01
We present a conceptual design, scaling calculations, and development path for a pulsed fusion reactor based on a flow-stabilized Z-pinch. Experiments performed on the ZaP and ZaP-HD devices have largely demonstrated the basic physics of sheared-flow stabilization at pinch currents up to 100 kA. Initial experiments on the FuZE device, a high-power upgrade of ZaP, have achieved 20 µs of stability at pinch currents of 100-200 kA and a pinch diameter of a few mm for a pinch length of 50 cm. Scaling calculations based on a quasi-steady-state power balance show that extending the stable duration to 100 µs at a pinch current of 1.5 MA and a pinch length of 50 cm results in a reactor plant Q of approximately 5. Future performance milestones are proposed for pinch currents of: 300 kA, where Te and Ti are calculated to exceed 1-2 keV; 700 kA, where DT fusion power would be expected to exceed pinch input power; and 1 MA, where fusion energy per pulse exceeds input energy per pulse. This work was funded by USDOE ARPA-E and performed under the auspices of Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-734770.
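As an illustrative cross-check (not a calculation taken from the abstract), the 300 kA temperature milestone can be sanity-checked against the Bennett equilibrium relation for a Z-pinch; the line density of 1e19 ions/m used below is an assumed, representative value for this class of device, not a reported figure.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
EV = 1.602176634e-19       # joules per electron-volt

def bennett_temperature_keV(current_A, line_density_per_m):
    """Sum of electron and ion temperatures (keV) for a Z-pinch in
    Bennett equilibrium: N * kB * (Te + Ti) = mu0 * I**2 / (8 * pi)."""
    kT_joules = MU0 * current_A**2 / (8 * math.pi * line_density_per_m)
    return kT_joules / EV / 1e3

# At the proposed 300 kA milestone, the assumed line density of
# 1e19 ions/m gives Te + Ti of roughly 2.8 keV, i.e. on the order
# of 1-2 keV per species, consistent with the abstract's estimate.
print(bennett_temperature_keV(300e3, 1e19))
```

Under these assumptions the equilibrium temperature lands in the same range as the calculated milestone values quoted above.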
NASA Astrophysics Data System (ADS)
Cho, Y. J.; Zullah, M. A.; Faizal, M.; Choi, Y. D.; Lee, Y. H.
2012-11-01
A variety of technologies have been proposed to capture the energy from waves. Some of the more promising designs are undergoing demonstration testing at commercial scales. Due to the complexity of most offshore wave energy devices and their motion response in different sea states, physical tank tests are common practice for WEC design. Full-scale tests are also necessary, but are expensive and only considered once the design has been optimized. Computational Fluid Dynamics (CFD) is now recognized as an important complement to traditional physical testing techniques in offshore engineering. Once properly calibrated and validated for the problem, CFD offers a high density of test data and results in a reasonable timescale to assist with design changes and improvements to the device. The purpose of this study is to investigate the performance of a newly developed direct drive hydro turbine (DDT), which will be built in a caisson for extraction of wave energy. Experiments and CFD analysis are conducted to clarify the turbine performance and internal flow characteristics. The results show that commercial CFD code can be applied successfully to the simulation of the wave motion in the water tank. The performance of the turbine for the wave energy converter is being studied further as part of an ongoing project.
An experimental study of windturbine noise from blade-tower wake interaction
NASA Astrophysics Data System (ADS)
Marcus, E. N.; Harris, W. L.
1983-04-01
A program of experiments has been conducted to study the impulsive noise of a horizontal axis windturbine. These tests were performed on a 1/53 scale model of the DOE-NASA MOD-1 windturbine. Experiments were performed in the M.I.T. 5 x 7-1/2 ft Anechoic Windtunnel Facility. The impulsive noise of a horizontal axis windturbine is observed to result from repeated blade passage through the mean velocity deficit induced in the lee of the windturbine support tower. The two factors which most influence this noise are rotation speed and tower drag coefficient. The intensity of noise from blade-tower wake interaction is predicted to increase with the fourth power of the RPM and the second power of the tower drag coefficient. These predictions are confirmed by the experiments. Further experiments are also presented to examine the directionality of the acoustic field as well as the acoustic influence of tower shape and blade number.
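The stated scaling law (intensity proportional to RPM to the fourth power times tower drag coefficient squared) implies simple decibel trade-offs, sketched below; the specific ratios chosen are illustrative, not operating points from the experiments.

```python
import math

def relative_intensity(rpm_ratio, cd_ratio):
    """Scaling law from the abstract: impulsive-noise intensity grows
    with the fourth power of RPM and the square of tower drag Cd."""
    return rpm_ratio**4 * cd_ratio**2

def delta_dB(rpm_ratio, cd_ratio):
    """Change in sound level (dB) implied by the intensity ratio."""
    return 10 * math.log10(relative_intensity(rpm_ratio, cd_ratio))

# Doubling rotor speed alone raises the level by 40*log10(2), about
# 12 dB; halving the tower drag coefficient recovers about 6 dB.
print(delta_dB(2.0, 1.0))
print(delta_dB(1.0, 0.5))
```

The asymmetry of the two exponents is why rotation speed dominates: a modest RPM increase costs far more acoustically than a comparable change in tower drag recovers.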
NASA Technical Reports Server (NTRS)
Stutzman, Warren L.; Safaai-Jazi, A.; Pratt, Timothy; Nelson, B.; Laster, J.; Ajaz, H.
1993-01-01
Virginia Tech has performed a comprehensive propagation experiment using the Olympus satellite beacons at 12.5, 19.77, and 29.66 GHz (which we refer to as 12, 20, and 30 GHz). Four receive terminals were designed and constructed, one terminal at each frequency plus a portable one with 20 and 30 GHz receivers for microscale and scintillation studies. Total power radiometers were included in each terminal in order to set the clear-air reference level for each beacon and also to predict path attenuation. More details on the equipment and the experiment design are found elsewhere. Statistical results for one year of data collection were analyzed. In addition, the following studies were performed: a microdiversity experiment in which two closely spaced 20 GHz receivers were used; a comparison of total power and Dicke-switched radiometer measurements; frequency scaling of scintillations; and adaptive power control algorithm development. Statistical results are reported.
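One common basis for the kind of scintillation frequency-scaling study mentioned above is the weak-turbulence prediction that log-amplitude scintillation variance scales roughly as frequency to the 7/12 power. The sketch below applies that standard theoretical exponent to the two beacon frequencies; the exponent is an assumption from general propagation theory, not a result reported in this experiment.

```python
def scintillation_variance_ratio(f_low_GHz, f_high_GHz, exponent=7.0 / 12.0):
    """Ratio of log-amplitude scintillation variances between two
    frequencies under the weak-turbulence f**(7/12) scaling."""
    return (f_high_GHz / f_low_GHz) ** exponent

# Scaling from the 20 GHz Olympus beacon to the 30 GHz beacon gives
# a variance ratio of roughly 1.27 under this assumed exponent.
print(scintillation_variance_ratio(20.0, 30.0))
```

A measured variance ratio from the two co-located receivers could then be compared against this theoretical value to test the scaling law.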
Time scales of foam stability in shallow conduits: Insights from analogue experiments
NASA Astrophysics Data System (ADS)
Spina, L.; Scheu, B.; Cimarelli, C.; Arciniega-Ceballos, A.; Dingwell, D. B.
2016-10-01
Volcanic systems can exhibit periodical trends in degassing activity, characterized by a wide range of time scales. Understanding the dynamics that control such periodic behavior can provide a picture of the processes occurring in the feeding system. Toward this end, we analyzed the periodicity of outgassing in a series of decompression experiments performed on analogue material (argon-saturated silicone oil plus glass beads/fibers) scaled to serve as models of basaltic magma. To define the effects of liquid viscosity and crystal content on the time scale of outgassing, we investigated both: (1) pure liquid systems, at differing viscosities (100 and 1000 Pa s), and (2) particle-bearing suspensions (diluted and semidiluted). The results indicate that under dynamic conditions (e.g., decompressive bubble growth and fluid ascent within the conduit), the periodicity of foam disruption may be up to several orders of magnitude less than estimates based on the analysis of static conditions. This difference in foam disruption time scale is inferred to result from the contribution of bubble shear and bubble growth to inter-bubble film thinning. The presence of particles in the semidiluted regime is further linked to shorter bubble bursting times, likely resulting from contributions of the presence of a solid network and coalescence processes to the relative increase in bubble breakup rates. Finally, it is argued that these experiments represent a good analogue of gas-piston activity (i.e., the periodical rise-and-fall of a basaltic lava lake surface), implying a dominant role for shallow foam accumulation as a source process for these phenomena.
NASA Technical Reports Server (NTRS)
Thomas, Randy; Stueber, Thomas J.
2013-01-01
The System Identification (SysID) Rack is a real-time hardware-in-the-loop data acquisition (DAQ) and control instrument rack that was designed and built to support inlet testing in the NASA Glenn Research Center 10- by 10-Foot Supersonic Wind Tunnel. This instrument rack is used to support experiments on the Combined-Cycle Engine Large-Scale Inlet for Mode Transition Experiment (CCE-LIMX). The CCE-LIMX is a testbed for an integrated dual flow-path inlet configuration with the two flow paths in an over-and-under arrangement such that the high-speed flow path is located below the low-speed flow path. The CCE-LIMX includes multiple actuators that are designed to redirect airflow from one flow path to the other; this action is referred to as "inlet mode transition." Multiple phases of experiments have been planned to support research that investigates inlet mode transition: inlet characterization (Phase-1) and system identification (Phase-2). The SysID Rack hardware design met the following requirements to support Phase-1 and Phase-2 experiments: safely and effectively move multiple actuators individually or synchronously; sample and save effector control and position sensor feedback signals; automate control of actuator positioning based on a mode transition schedule; sample and save pressure sensor signals; and perform DAQ and control processes operating at 2.5 kHz. This document describes the hardware components used to build the SysID Rack, including their function, specifications, and system interface. Also provided in this document are a SysID Rack effector signal list (signal flow); the system identification experiment setup; illustrations of a typical SysID Rack experiment; and a SysID Rack performance overview for Phase-1 and Phase-2 experiments. The SysID Rack described in this document was a useful tool to meet the project objectives.