Chasing the silver bullet: measuring driver fatigue using simple and complex tasks.
Baulk, S D; Biggs, S N; Reid, K J; van den Heuvel, C J; Dawson, D
2008-01-01
Driver fatigue remains a significant cause of motor-vehicle accidents worldwide. New technologies are increasingly utilised to improve road safety, but there are no effective on-road measures for fatigue. While simulated driving tasks are sensitive, and simple performance tasks have been used in industrial fatigue management systems (FMS) to quantify risk, little is known about the relationship between such measures. Establishing a simple, on-road measure of fatigue, as a fitness-to-drive tool, is an important issue for road safety and accident prevention, particularly as many fatigue-related accidents are preventable. This study aimed to measure fatigue-related performance decrements using a simple task (reaction time - RT) and a complex task (driving simulation), and to determine the potential for a link between such measures, thus improving FMS success. Fifteen volunteer participants (7 m, 8 f) aged 22-56 years (mean 33.6 years) underwent 26 h of supervised wakefulness before an 8 h recovery sleep opportunity. Participants were tested using a 30-min interactive driving simulation test, bracketed by a 10-min psychomotor vigilance task (PVT) at 4, 8, 18 and 24 h of wakefulness, and following recovery sleep. Extended wakefulness caused significant decrements in PVT and driving performance. Although these measures are clearly linked, our analyses suggest that driving simulation cannot be replaced by a simple PVT. Further research is needed to closely examine links between performance measures, and to facilitate accurate management of fitness to drive, which requires more complex assessments of performance than RT alone.
Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT
NASA Astrophysics Data System (ADS)
Agarwal, M.; Hendriks, E. A.; Stoel, B. C.; Bakker, M. E.; Reiber, J. H. C.; Staring, M.
2012-02-01
For multi-atlas-based segmentation approaches, a segmentation fusion scheme which considers local performance measures may be more accurate than a method which uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm's performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median fissure overlap of 0.45, 0.48, 0.55, and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
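As an illustration of the general idea behind SIMPLE-style fusion, the sketch below (Python/NumPy) implements a minimal global, binary variant: atlases are fused by majority vote, each atlas is scored against the current fused estimate with a Dice overlap, and poorly scoring atlases are dropped before re-fusing. The array shapes, the rejection rule, and the iteration count are illustrative assumptions, not the authors' localized, multi-label algorithm.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def simple_fusion(atlas_labels, n_iter=5, alpha=1.0):
    """Global SIMPLE-style fusion of binary atlas segmentations.

    atlas_labels: (n_atlases, *image_shape) boolean array of propagated labels.
    Returns the fused binary segmentation.
    """
    keep = np.ones(len(atlas_labels), dtype=bool)
    for _ in range(n_iter):
        # Fuse the currently retained atlases by majority vote.
        fused = atlas_labels[keep].mean(axis=0) >= 0.5
        # Score every retained atlas against the current estimate.
        scores = np.array([dice(a, fused) for a in atlas_labels[keep]])
        # Discard atlases performing worse than (mean - alpha * std).
        threshold = scores.mean() - alpha * scores.std()
        new_keep = np.zeros_like(keep)
        new_keep[np.flatnonzero(keep)[scores >= threshold]] = True
        if new_keep.sum() == 0 or np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return atlas_labels[keep].mean(axis=0) >= 0.5

# Toy example: five noisy 8x8 atlases of a square object.
rng = np.random.default_rng(0)
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
atlases = np.array([np.logical_xor(truth, rng.random((8, 8)) < 0.1) for _ in range(5)])
print(dice(simple_fusion(atlases), truth))
```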
Contribution of strategy use to performance on complex and simple span tasks.
Bailey, Heather; Dunlosky, John; Kane, Michael J
2011-04-01
Simple and complex span tasks are widely thought to measure related but separable memory constructs. Recently, however, research has demonstrated that simple and complex span tasks may tap, in part, the same construct because both similarly predict performance on measures of fluid intelligence (Gf) when the number of items retrieved from secondary memory (SM) is equated (Unsworth & Engle, Journal of Memory and Language 54:68-80 2006). Two studies (n = 105 and n = 152) evaluated whether retrieval from SM is influenced by individual differences in the use of encoding strategies during span tasks. Results demonstrated that, after equating the number of items retrieved from SM, simple and complex span performance similarly predicted Gf performance, but rates of effective strategy use did not mediate the span-Gf relationships. Moreover, at the level of individual differences, effective strategy use was more highly related to complex span performance than to simple span performance. Thus, even though individual differences in effective strategy use influenced span performance on trials that required retrieval from SM, strategic behavior at encoding cannot account for the similarities between simple and complex span tasks.
Modeling Age-Related Differences in Immediate Memory Using SIMPLE
ERIC Educational Resources Information Center
Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.
2006-01-01
In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…
Measuring Drag Force in Newtonian Liquids
ERIC Educational Resources Information Center
Mawhinney, Matthew T.; O'Donnell, Mary Kate; Fingerut, Jonathan; Habdas, Piotr
2012-01-01
The experiments described in this paper have two goals. The first goal is to show how students can perform simple but fundamental measurements of objects moving through simple liquids (such as water, oil, or honey). In doing so, students can verify Stokes' law, which governs the motion of spheres through simple liquids, and see how it fails at…
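To make the underlying physics concrete, the hedged sketch below computes the Stokes drag force and the predicted terminal velocity of a small sphere settling in a viscous liquid; the sphere and fluid parameters are placeholder values, not measurements from the article.

```python
import math

def stokes_drag(viscosity, radius, velocity):
    """Stokes' law drag force F = 6*pi*eta*r*v (valid at low Reynolds number)."""
    return 6.0 * math.pi * viscosity * radius * velocity

def terminal_velocity(viscosity, radius, rho_sphere, rho_fluid, g=9.81):
    """Terminal settling velocity from the balance of gravity, buoyancy, and drag."""
    return 2.0 * radius**2 * (rho_sphere - rho_fluid) * g / (9.0 * viscosity)

# Placeholder example: a 1 mm diameter steel sphere falling in honey (~10 Pa*s).
eta, r = 10.0, 0.5e-3
v_t = terminal_velocity(eta, r, rho_sphere=7800.0, rho_fluid=1400.0)
print(f"terminal velocity = {v_t*1000:.2f} mm/s, drag = {stokes_drag(eta, r, v_t)*1e6:.1f} uN")
```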
NASA Technical Reports Server (NTRS)
Dabney, Philip W.; Harding, David J.; Valett, Susan R.; Vasilyev, Aleksey A.; Yu, Anthony W.
2012-01-01
The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is a multi-beam, micropulse airborne laser altimeter that acquires active and passive polarimetric optical remote sensing measurements at visible and near-infrared wavelengths. SIMPL was developed to demonstrate advanced measurement approaches of potential benefit for improved, more efficient spaceflight laser altimeter missions. SIMPL data have been acquired for a wide diversity of forest types in the summers of 2010 and 2011 in order to assess the potential of its novel capabilities for characterization of vegetation structure and composition. On each of its four beams SIMPL provides highly-resolved measurements of forest canopy structure by detecting single photons with 15 cm ranging precision using a narrow-beam system operating at a laser repetition rate of 11 kHz. Associated with that ranging data, SIMPL provides eight amplitude parameters per beam, unlike the single amplitude provided by typical laser altimeters. Those eight parameters are received energy that is parallel and perpendicular to that of the plane-polarized transmit pulse at 532 nm (green) and 1064 nm (near IR), for both the active laser backscatter retro-reflectance and the passive solar bi-directional reflectance. This poster presentation will cover the instrument architecture and highlight the performance of the SIMPL instrument with examples taken from measurements for several sites with distinct canopy structures and compositions. Specific performance areas such as probability of detection, afterpulsing, and dead time will be highlighted and addressed, along with examples of their impact on the measurements and how they limit the ability to accurately model and recover the canopy properties. To assess the sensitivity of SIMPL's measurements to canopy properties an instrument model has been implemented in the FLIGHT radiative transfer code, based on Monte Carlo simulation of photon transport. SIMPL data collected in 2010 over the Smithsonian Environmental Research Center, MD are currently being modelled and compared to other remote sensing and in situ data sets. Results on the adaptation of FLIGHT to model micropulse, single-photon ranging measurements are presented elsewhere at this conference. NASA's ICESat-2 spaceflight mission, scheduled for launch in 2016, will utilize a multi-beam, micropulse, single-photon ranging measurement approach (although non-polarimetric and only at 532 nm). Insights gained from the analysis and modelling of SIMPL data will help guide preparations for that mission, including development of calibration/validation plans and algorithms for the estimation of forest biophysical parameters.
Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.
2015-01-01
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
The Use of Conjunctions in Cognitively Simple versus Complex Oral L2 Tasks
ERIC Educational Resources Information Center
Michel, Marije C.
2013-01-01
The present study explores the use of conjunctions in simple versus complex argumentative tasks performed by second language (L2) learners as a specific measure for the amount of reasoning involved in task performance. The Cognition Hypothesis (Robinson, 2005) states that an increase in cognitive task complexity promotes improvements in L2…
Simple arithmetic: not so simple for highly math anxious individuals.
Chang, Hyesang; Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G
2017-12-01
Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low- compared to high-math anxious individuals perform better when they activate this network less, a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. © The Author (2017). Published by Oxford University Press.
Simple arithmetic: not so simple for highly math anxious individuals
Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G
2017-01-01
Abstract Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low—compared to high—math anxious individuals perform better when they activate this network less—a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. PMID:29140499
Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J
2015-09-30
To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
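The paper's central claim, that behavior is predicted by learned weighted sums of IT firing rates, amounts to a regularized linear readout. The sketch below shows that idea on synthetic data with scikit-learn: a linear classifier is trained on simulated population rate vectors, and its cross-validated accuracy stands in for predicted task performance. The population size, task structure, and regularization are placeholder choices, not the study's actual decoding pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 200          # placeholder sizes, far smaller than ~60,000 IT sites
labels = rng.integers(0, 2, n_trials)   # a two-object discrimination "test"

# Simulated mean firing rates: a weak object signal buried in trial-to-trial noise.
signal = rng.normal(0, 1, n_neurons)
rates = rng.normal(5, 1, (n_trials, n_neurons)) + 0.3 * np.outer(labels, signal)

# A learned weighted sum of firing rates = a linear readout with an L2 penalty.
readout = LogisticRegression(C=1.0, max_iter=1000)
acc = cross_val_score(readout, rates, labels, cv=5).mean()
print(f"cross-validated 'behavioral' accuracy predicted by the readout: {acc:.2f}")
```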
FY2017 Report on NISC Measurements and Detector Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Madison Theresa; Meierbachtol, Krista Cruse; Jordan, Tyler Alexander
FY17 work focused on automation, both of the measurement analysis and comparison of simulations. The experimental apparatus was relocated and weeks of continuous measurements of the spontaneous fission source 252Cf were performed. Programs were developed to automate the conversion of measurements into ROOT data framework files with a simple terminal input. The complete analysis of the measurement (which includes energy calibration and the identification of correlated counts) can now be completed with a documented process which involves one simple execution line as well. Finally, the hurdles of slow MCNP simulations resulting in low simulation statistics have been overcome with the generation of multi-run suites which make use of the high-performance computing resources at LANL. Preliminary comparisons of measurements and simulations have been performed and will be the focus of FY18 work.
Ground temperature measurement by PRT-5 for maps experiment
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
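A minimal sketch of the regression step described above, assuming model-computed corrections are already available for a grid of parameter deviations: numpy.polyfit builds a simple polynomial relation between a hypothetical parameter deviation and the corresponding brightness-temperature correction, which can then stand in for full radiative transfer runs. All numbers are placeholders, not values from the report.

```python
import numpy as np

# Hypothetical training data: deviation of an atmospheric parameter (e.g., water vapor, in %)
# and the brightness-temperature correction (K) computed from a radiative transfer model.
deviation = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])
correction = np.array([-1.8, -1.3, -0.7, 0.0, 0.8, 1.7, 2.7])   # placeholder values

# Fit a low-order polynomial relation: correction = f(deviation).
coeffs = np.polyfit(deviation, correction, deg=2)
f = np.poly1d(coeffs)

# Apply it: correct an effective brightness temperature of 295.4 K measured by the radiometer
# when the parameter deviates by +15% from the model atmosphere.
t_brightness = 295.4
t_surface_estimate = t_brightness + f(15.0)
print(f"estimated surface temperature: {t_surface_estimate:.2f} K")
```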
Multicenter validation of a bedside antisaccade task as a measure of executive function
Hellmuth, J.; Mirsky, J.; Heuer, H.W.; Matlin, A.; Jafari, A.; Garbutt, S.; Widmeyer, M.; Berhel, A.; Sinha, L.; Miller, B.L.; Kramer, J.H.
2012-01-01
Objective: To create and validate a simple, standardized version of the antisaccade (AS) task that requires no specialized equipment for use as a measure of executive function in multicenter clinical studies. Methods: The bedside AS (BAS) task consisted of 40 pseudorandomized AS trials presented on a laptop computer. BAS performance was compared with AS performance measured using an infrared eye tracker in normal elders (NE) and individuals with mild cognitive impairment (MCI) or dementia (n = 33). The neuropsychological domain specificity of the BAS was then determined in a cohort of NE, MCI, and dementia (n = 103) at UCSF, and the BAS was validated as a measure of executive function in a 6-center cohort (n = 397) of normal adults and patients with a variety of brain diseases. Results: Performance on the BAS and laboratory AS task was strongly correlated and BAS performance was most strongly associated with neuropsychological measures of executive function. Even after controlling for disease severity and processing speed, BAS performance was associated with multiple assessments of executive function, most strongly the informant-based Frontal Systems Behavior Scale. Conclusions: The BAS is a simple, valid measure of executive function in aging and neurologic disease. PMID:22573640
Jet engine performance enhancement through use of a wave-rotor topping cycle
NASA Technical Reports Server (NTRS)
Wilson, Jack; Paxson, Daniel E.
1993-01-01
A simple model is used to calculate the thermal efficiency and specific power of simple jet engines and jet engines with a wave-rotor topping cycle. The performance of the wave rotor is based on measurements from a previous experiment. Applied to the case of an aircraft flying at Mach 0.8, the calculations show that an engine with a wave rotor topping cycle may have gains in thermal efficiency of approximately 1 to 2 percent and gains in specific power of approximately 10 to 16 percent over a simple jet engine with the same overall compression ratio. Even greater gains are possible if the wave rotor's performance can be improved.
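For intuition about the cycle bookkeeping involved, the sketch below evaluates an ideal air-standard Brayton cycle and a deliberately crude "topped" variant in which the wave rotor is represented only as an extra effective pressure ratio at the same turbine-inlet temperature. This is a textbook ideal-cycle calculation with placeholder numbers, not the authors' wave-rotor model or the measured component performance they used.

```python
# Ideal (air-standard) Brayton cycle with and without an extra "topping" pressure ratio.
GAMMA, CP = 1.4, 1005.0   # air, J/(kg K)

def brayton(overall_pr, t_inlet=288.0, t_turbine_inlet=1500.0):
    """Return (thermal efficiency, specific work in kJ/kg) of an ideal Brayton cycle."""
    tau = overall_pr ** ((GAMMA - 1.0) / GAMMA)
    w_comp = CP * t_inlet * (tau - 1.0)                  # compressor work
    w_turb = CP * t_turbine_inlet * (1.0 - 1.0 / tau)    # turbine work
    q_in = CP * (t_turbine_inlet - t_inlet * tau)        # heat added in the combustor
    return (w_turb - w_comp) / q_in, (w_turb - w_comp) / 1000.0

# Baseline engine, and the same engine with a hypothetical wave-rotor stage that
# raises the effective overall pressure ratio (a deliberately crude surrogate).
base = brayton(overall_pr=25.0)
topped = brayton(overall_pr=25.0 * 1.2)
print(f"baseline : eta = {base[0]:.3f}, specific work = {base[1]:.0f} kJ/kg")
print(f"topped   : eta = {topped[0]:.3f}, specific work = {topped[1]:.0f} kJ/kg")
```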
Improved perceptual-motor performance measurement system
NASA Technical Reports Server (NTRS)
Parker, J. F., Jr.; Reilly, R. E.
1969-01-01
Battery of tests determines the primary dimensions of perceptual-motor performance. Eighteen basic measures range from simple tests to sophisticated electronic devices. Improved system has one unit for the subject containing test display and response elements, and one for the experimenter where test setups, programming, and scoring are accomplished.
ERIC Educational Resources Information Center
Espin, Christine A.; Busch, Todd W.; Lembke, Erica S.; Hampton, David D.; Seo, Kyounghee; Zukowski, Beth A.
2013-01-01
The technical adequacy of curriculum-based measures in the form of short and simple vocabulary-matching probes to predict students' performance and progress in science at the secondary level was investigated. Participants were 198 seventh-grade students from 10 science classrooms. Curriculum-based measurements (CBM) were 5-min vocabulary-matching…
Performance measurement for information systems: Industry perspectives
NASA Technical Reports Server (NTRS)
Bishop, Peter C.; Yoes, Cissy; Hamilton, Kay
1992-01-01
Performance measurement has become a focal topic for information systems (IS) organizations. Historically, IS performance measures have dealt with the efficiency of the data processing function. Today, the function of most IS organizations goes beyond simple data processing. To understand how IS organizations have developed meaningful performance measures that reflect their objectives and activities, industry perspectives on IS performance measurement were studied. The objectives of the study were to understand the state of the practice in techniques for IS performance measurement; to gather approaches and measures of actual performance measures used in industry; and to report patterns, trends, and lessons learned about performance measurement to NASA/JSC. Examples of how some of the most forward-looking companies are shaping their IS processes through measurement are provided. Thoughts on the presence of a life cycle in performance measure development and a suggested taxonomy for performance measurements are included in the appendices.
Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A
2008-12-01
Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similar to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined "a priori" based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency. On the more complex PPT trials (problems 5 to 8), as well as the Lhermitte logical tasks, the FASD group performed the worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.
ERIC Educational Resources Information Center
Stevens, Olinger; Leigh, Erika
2012-01-01
Scope and Method of Study: The purpose of the study is to use an empirical approach to identify a simple, economical, efficient, and technically adequate performance measure that teachers can use to assess student growth in mathematics. The current study has been designed to expand the body of research for math CBM to further examine technical…
A Few Simple Classroom Experiments with a Permanent U-Shaped Magnet
ERIC Educational Resources Information Center
Babovic, Miloš; Babovic, Vukota
2017-01-01
A few simple experiments in the magnetic field of a permanent U-shaped magnet are described. Among them, pin oscillations inside the magnet are particularly interesting. These easy to perform and amusing measurements can help pupils understand magnetic phenomena and mutually connect knowledge of various physics branches.
Performance tests and quality control of cathode ray tube displays.
Roehrig, H; Blume, H; Ji, T L; Browne, M
1990-08-01
Spatial resolution, noise, characteristic curve, and absolute luminance are the essential parameters that describe the physical image quality of a display. This paper presents simple procedures for assessing the performance of a cathode ray tube (CRT) in terms of these parameters, as well as easy setup techniques. The procedures can be used in the environment where the CRT is used. The procedures are based on a digital representation of the Society of Motion Picture and Television Engineers pattern plus a few other simple digital patterns. Additionally, measurement techniques are discussed for estimating brightness uniformity, veiling glare, and distortion. Apart from the absolute luminance, all performance features can be assessed with an uncalibrated photodetector and the eyes of a human observer. The measurement techniques especially enable the user to perform comparisons of different display systems.
Swallowing performance in patients with head and neck cancer: a simple clinical test.
Patterson, Joanne M; McColl, Elaine; Carding, Paul N; Kelly, Charles; Wilson, Janet A
2009-10-01
Few simple clinical measures are available to monitor swallowing performance in head and neck cancer. Water swallow tests (WST) have been used as a part of clinical assessments in neurological dysphagia. The aim of this paper is to evaluate the utility of the 100 mL WST in head and neck cancer patients. The 100 mL WST was performed on 167 head and neck cancer patients. Measures were compared with respect to tumor site/stage, gender and age. The cohort was compared with published healthy controls. The test was quick to administer with excellent compliance. Patients had significantly poorer swallows than the published control group (mean reduction 1.6 mL/s). Function worsened with increased tumor stage and for patients with pharyngeal tumors. The 100 mL WST is an effective swallowing performance measure and is easily incorporated into a clinical examination. This paper provides benchmark data on the 100 mL WST for individuals with head and neck cancer.
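The 100 mL WST yields a handful of simple derived measures. As a hedged illustration (the example timings are invented, not patient data), the snippet below computes the three quantities typically reported: volume per swallow, time per swallow, and swallow capacity in mL/s.

```python
def wst_measures(volume_ml, total_time_s, n_swallows):
    """Standard 100 mL water swallow test summary measures."""
    return {
        "volume_per_swallow_ml": volume_ml / n_swallows,
        "time_per_swallow_s": total_time_s / n_swallows,
        "swallow_capacity_ml_per_s": volume_ml / total_time_s,
    }

# Invented example: 100 mL swallowed in 12.5 s using 7 swallows.
print(wst_measures(100.0, 12.5, 7))
```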
Reducing failures of working memory with performance feedback.
Adam, Kirsten C S; Vogel, Edward K
2016-10-01
Fluctuations in attentional control can lead to failures of working memory (WM), in which the subject is no better than chance at reporting items from a recent display. In three experiments, we used a whole-report measure of visual WM to examine the impact of feedback on the rate of failures. In each experiment, subjects remembered an array of colored items across a blank delay, and then reported the identity of items using a whole-report procedure. In Experiment 1, we gave subjects simple feedback about the number of items they correctly identified at the end of each trial. In Experiment 2, we gave subjects additional information about the cumulative number of items correctly identified within each block. Finally, in Experiment 3, we gave subjects weighted feedback in which poor trials resulted in lost points and consistent successful performance received "streak" points. Surprisingly, simple feedback (Exp. 1) was ineffective at improving average performance or decreasing the rate of poor-performance trials. Simple cumulative feedback (Exp. 2) modestly decreased poor-performance trials (by 7 %). Weighted feedback produced the greatest benefits, decreasing the frequency of poor-performance trials by 28 % relative to baseline performance. This set of results demonstrates the usefulness of whole-report WM measures for investigating the effects of feedback on WM performance. Further, we showed that only a feedback structure that specifically discouraged lapses using negative feedback led to large reductions in WM failures.
Lautz, L S; Struijs, J; Nolte, T M; Breure, A M; van der Grinten, E; van de Meent, D; van Zelm, R
2017-02-01
In this study, the removal of pharmaceuticals from wastewater as predicted by SimpleTreat 4.0 was evaluated. Field data obtained from literature of 43 pharmaceuticals, measured in 51 different activated sludge WWTPs were used. Based on reported influent concentrations, the effluent concentrations were calculated with SimpleTreat 4.0 and compared to measured effluent concentrations. The model predicts effluent concentrations mostly within a factor of 10, using the specific WWTP parameters as well as SimpleTreat default parameters, while it systematically underestimates concentrations in secondary sludge. This may be caused by unexpected sorption, resulting from variability in WWTP operating conditions, and/or QSAR applicability domain mismatch and background concentrations prior to measurements. Moreover, variability in detection techniques and sampling methods can cause uncertainty in measured concentration levels. To find possible structural improvements, we also evaluated SimpleTreat 4.0 using several specific datasets with different degrees of uncertainty and variability. This evaluation verified that the most influencing parameters for water effluent predictions were biodegradation and the hydraulic retention time. Results showed that model performance is highly dependent on the nature and quality, i.e. degree of uncertainty, of the data. The default values for reactor settings in SimpleTreat result in realistic predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
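A sketch of the evaluation criterion described above, assuming paired predicted and measured effluent concentrations are already in hand: the prediction/measurement ratio is computed per compound and the fraction falling within a factor of 10 is reported. The concentrations below are placeholders, not data from the study.

```python
import numpy as np

predicted = np.array([0.12, 0.80, 2.5, 0.05, 1.1])   # placeholder effluent predictions (ug/L)
measured = np.array([0.10, 1.50, 0.9, 0.30, 1.0])    # placeholder measured effluents (ug/L)

ratio = predicted / measured
within_10x = np.mean((ratio >= 0.1) & (ratio <= 10.0))
print(f"fraction of compounds predicted within a factor of 10: {within_10x:.2f}")
print("log10 prediction error per compound:", np.round(np.log10(ratio), 2))
```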
Conservation of Mechanical and Electric Energy: Simple Experimental Verification
ERIC Educational Resources Information Center
Ponikvar, D.; Planinsic, G.
2009-01-01
Two similar experiments on conservation of energy and transformation of mechanical into electrical energy are presented. Both can be used in classes, as they offer numerous possibilities for discussion with students and are simple to perform. Results are presented and are precise within 20% for the version of the experiment where measured values…
ERIC Educational Resources Information Center
Valdez, Pablo; Reilly, Thomas; Waterhouse, Jim
2008-01-01
Cognitive performance is affected by an individual's characteristics and the environment, as well as by the nature of the task and the amount of practice at it. Mental performance tests range in complexity and include subjective estimates of mood, simple objective tests (reaction time), and measures of complex performance that require decisions to…
Motor-cognitive dual-task deficits in individuals with early-mid stage Huntington disease.
Fritz, Nora E; Hamana, Katy; Kelson, Mark; Rosser, Anne; Busse, Monica; Quinn, Lori
2016-09-01
Huntington disease (HD) results in a range of cognitive and motor impairments that progress throughout the disease stages; however, little research has evaluated specific dual-task abilities in this population, and the degree to which they may be related to functional ability. The purpose of this study was to a) examine simple and complex motor-cognitive dual-task performance in individuals with HD, b) determine relationships between dual-task walking ability and disease-specific measures of motor, cognitive and functional ability, and c) examine the relationship of dual-task measures to falls in individuals with HD. Thirty-two individuals with HD were evaluated for simple and complex dual-task ability using the Walking While Talking Test. Demographics and disease-specific measures of motor, cognitive and functional ability were also obtained. Individuals with HD had impairments in simple and complex dual-task ability. Simple dual-task walking was correlated to disease-specific motor scores as well as cognitive performance, but complex dual-task walking was correlated with total functional capacity, as well as a range of cognitive measures. Number of prospective falls was moderately-strongly correlated to dual-task measures. Our results suggest that individuals with HD have impairments in cognitive-motor dual-task ability that are related to disease progression and specifically functional ability. Dual-task measures appear to evaluate a unique construct in individuals with early to mid-stage HD, and may have value in improving the prediction of falls risk in this population. Copyright © 2016 Elsevier B.V. All rights reserved.
Low-Cost Detection of Thin Film Stress during Fabrication
NASA Technical Reports Server (NTRS)
Nabors, Sammy A.
2015-01-01
NASA's Marshall Space Flight Center has developed a simple, cost-effective optical method for thin film stress measurements during growth and/or subsequent annealing processes. Stress arising in thin film fabrication presents production challenges for electronic devices, sensors, and optical coatings; it can lead to substrate distortion and deformation, impacting the performance of thin film products. NASA's technique measures in-situ stress using a simple, noncontact fiber optic probe in the thin film vacuum deposition chamber. This enables real-time monitoring of stress during the fabrication process and allows for efficient control of deposition process parameters. By modifying process parameters in real time during fabrication, thin film stress can be optimized or controlled, improving thin film product performance.
Brain Modularity Mediates the Relation between Task Complexity and Performance
NASA Astrophysics Data System (ADS)
Ye, Fengdan; Yue, Qiuhai; Martin, Randi; Fischer-Baum, Simon; Ramos-Nuñez, Aurora; Deem, Michael
Recent work in cognitive neuroscience has focused on analyzing the brain as a network, rather than a collection of independent regions. Prior studies taking this approach have found that individual differences in the degree of modularity of the brain network relate to performance on cognitive tasks. However, inconsistent results concerning the direction of this relationship have been obtained, with some tasks showing better performance as modularity increases, and other tasks showing worse performance. A recent theoretical model suggests that these inconsistencies may be explained on the grounds that high-modularity networks favor performance on simple tasks whereas low-modularity networks favor performance on complex tasks. The current study tests these predictions by relating modularity from resting-state fMRI to performance on a set of behavioral tasks. Complex and simple tasks were defined on the basis of whether they drew on executive attention. Consistent with predictions, we found a negative correlation between individuals' modularity and their performance on the complex tasks but a positive correlation with performance on the simple tasks. The results presented here provide a framework for linking measures of whole brain organization to cognitive processing.
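To make the modularity measure concrete, the sketch below computes Newman modularity for small synthetic networks with networkx and correlates per-subject modularity with task scores using scipy; the random graphs and scores are stand-ins for the resting-state networks and behavioral data described above, constructed so that the sign pattern mirrors the reported result.

```python
import networkx as nx
import numpy as np
from networkx.algorithms import community
from scipy import stats

rng = np.random.default_rng(0)

def subject_modularity(p_out, seed):
    """Modularity of a synthetic 4-module network; lower p_out = stronger modules."""
    g = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=p_out, seed=seed)
    parts = community.greedy_modularity_communities(g)
    return community.modularity(g, parts)

# Synthetic "subjects": each gets a network plus task scores that (by construction)
# improve with modularity for a simple task and worsen for a complex task.
mods = np.array([subject_modularity(p, s) for s, p in enumerate(rng.uniform(0.02, 0.15, 20))])
simple_score = 0.6 + 0.5 * mods + rng.normal(0, 0.03, 20)
complex_score = 0.9 - 0.5 * mods + rng.normal(0, 0.03, 20)

print("simple task r = %.2f" % stats.pearsonr(mods, simple_score)[0])
print("complex task r = %.2f" % stats.pearsonr(mods, complex_score)[0])
```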
Lorenzetti, Silvio; Lamparter, Thomas; Lüthy, Fabian
2017-12-06
The velocity of a barbell can provide important insights on the performance of athletes during strength training. The aim of this work was to assess the validity and reliability of four simple measurement devices that were compared to 3D motion capture measurements during squatting. Nine participants were assessed when performing 2 × 5 traditional squats with a weight of 70% of the 1 repetition maximum and ballistic squats with a weight of 25 kg. Simultaneously, data were recorded from three linear position transducers (T-FORCE, Tendo Power and GymAware), an accelerometer-based system (Myotest) and a 3D motion capture system (Vicon) as the gold standard. Correlations between the simple measurement devices and 3D motion capture of the mean and the maximal velocity of the barbell, as well as the time to maximal velocity, were calculated. The correlations during traditional squats were significant and very high (r = 0.932 to 0.990, p < 0.01) and significant and moderate to high (r = 0.552 to 0.860, p < 0.01). The Myotest could only be used during the ballistic squats and was less accurate. All the linear position transducers were able to assess squat performance, particularly during traditional squats and especially in terms of mean velocity and time to maximal velocity.
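For context, the quantities being compared (mean velocity, maximal velocity, and time to maximal velocity) can all be derived from a sampled bar-position trace by numerical differentiation, as in the sketch below; the sampling rate and motion profile are assumptions for illustration, not data from any of the devices above.

```python
import numpy as np

fs = 200.0                                         # assumed sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
position = 0.6 * (1.0 - np.cos(np.pi * t)) / 2.0   # synthetic 0.6 m concentric lift

velocity = np.gradient(position, 1.0 / fs)         # numerical derivative of bar position
mean_v = velocity.mean()
max_v = velocity.max()
t_max_v = t[np.argmax(velocity)]

print(f"mean velocity  : {mean_v:.3f} m/s")
print(f"max velocity   : {max_v:.3f} m/s")
print(f"time to max vel: {t_max_v:.3f} s")
```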
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
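The two statistics underlying the visual performance measures are standard and easy to reproduce. The sketch below computes the Nash-Sutcliffe Efficiency and the mean absolute error for a pair of hydrographs; in the paper these are evaluated over the matched observed/simulated pairs produced by the Hydrograph Matching Algorithm, which is not reimplemented here. The flow values are placeholders.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 means no better than the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def mean_absolute_error(observed, simulated):
    return np.mean(np.abs(np.asarray(observed) - np.asarray(simulated)))

# Placeholder hourly flows (m^3/s) for a small storm event.
obs = np.array([1.0, 1.2, 2.5, 5.8, 9.0, 7.2, 4.1, 2.3, 1.5, 1.1])
sim = np.array([1.0, 1.1, 1.9, 4.9, 8.2, 8.0, 4.8, 2.6, 1.6, 1.2])

print(f"NSE = {nash_sutcliffe(obs, sim):.3f}, MAE = {mean_absolute_error(obs, sim):.3f} m^3/s")
```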
NASA Astrophysics Data System (ADS)
Ivković, Saša S.; Marković, Marija Z.; Ivković, Dragica Ž.; Cvetanović, Nikola
2017-09-01
Equivalent series resistance (ESR) is a measure of the total energy loss in a capacitor. In this paper a simple method for measuring the ESR of ceramic capacitors, based on the analysis of the oscillations of an LCR circuit, is proposed. It is shown that at frequencies under 3300 Hz, the ESR is directly proportional to the period of oscillation. Based on the determined dependence of the ESR on the period, a method for measuring coil inductance is devised and tested. All measurements were performed using the standard equipment found in student laboratories, which makes both methods very suitable for implementation at high school and university levels.
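For background on extracting resistance from a damped LCR discharge, one standard route is the logarithmic-decrement relation R_total = 2L*ln(A_n/A_{n+1})/T, where A_n are successive peak amplitudes and T is the period; subtracting the known coil and wiring resistance leaves an estimate of the capacitor's ESR. The sketch below applies that textbook relation with placeholder values; it is offered for context and is not the proportional-to-period calibration proposed by the authors.

```python
import math

def series_resistance(peak1, peak2, period_s, inductance_h):
    """Total series resistance of an underdamped LCR circuit from the logarithmic decrement."""
    decrement = math.log(peak1 / peak2)          # ln(A_n / A_{n+1})
    return 2.0 * inductance_h * decrement / period_s

# Placeholder measurement: successive peaks of 1.00 V and 0.82 V,
# period 0.45 ms, 10 mH coil with 3.1 ohm of known wire/coil resistance.
r_total = series_resistance(1.00, 0.82, 0.45e-3, 10e-3)
esr = r_total - 3.1
print(f"total series resistance = {r_total:.2f} ohm, estimated capacitor ESR = {esr:.2f} ohm")
```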
Cyanuric acid (CA) is widely used as a chlorine stabilizer in outdoor pools. No simple method exists for CA measurement in the urine of exposed swimmers. The high hydrophilicity of CA makes the use of solid-phase sorbents to extract it from urine nearly impossible because of samp...
Spectral properties of thermal fluctuations on simple liquid surfaces below shot-noise levels.
Aoki, Kenichiro; Mitsui, Takahisa
2012-07-01
We study the spectral properties of thermal fluctuations on simple liquid surfaces, sometimes called ripplons. Analytical properties of the spectral function are investigated and are shown to be composed of regions with simple analytic behavior with respect to the frequency or the wave number. The derived expressions are compared to spectral measurements performed orders of magnitude below shot-noise levels, which is achieved using a novel noise reduction method. The agreement between the theory of thermal surface fluctuations and the experiment is found to be excellent, elucidating the spectral properties of the surface fluctuations. The measurement method requires only a relatively small sample, both spatially (a few μm) and temporally (~20 s). The method also requires only relatively weak light power (~0.5 mW), so it has a broad range of applicability, including local measurements, investigations of time-dependent phenomena, and noninvasive measurements.
A simple rain attenuation model for earth-space radio links operating at 10-35 GHz
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Yon, K. M.
1986-01-01
The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
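A common backbone of such models is the power-law relation between specific attenuation and rain rate, gamma = k*R^alpha (dB/km), integrated over an effective path length. The sketch below implements that generic relation; the coefficients and the path-reduction factor are illustrative placeholders, not the simple attenuation model's regression constants or its polarization correction.

```python
def rain_attenuation_db(rain_rate_mm_h, k, alpha, slant_path_km, path_reduction=0.8):
    """Generic power-law rain attenuation: A = k * R^alpha * L_eff (dB)."""
    specific_attenuation = k * rain_rate_mm_h ** alpha      # dB/km
    effective_path = slant_path_km * path_reduction         # crude effective path length
    return specific_attenuation * effective_path

# Placeholder coefficients roughly representative of Ku-band; not values from the paper.
for rain in (5.0, 25.0, 50.0):
    a = rain_attenuation_db(rain, k=0.03, alpha=1.1, slant_path_km=6.0)
    print(f"R = {rain:5.1f} mm/h -> attenuation ~ {a:5.1f} dB")
```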
Analysis of Aptitude, Training, and Job Performance Measures
1982-02-01
provisions have been made for students with unsatisfactory performance to be "recycled", that is, to retake a module or part of a course (e.g., 2841, Marine...clear, simple language; questions avoided trivia and were related to task performance; response choices were parallel and realistic. The assessment
Microburst vertical wind estimation from horizontal wind measurements
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1994-01-01
The vertical wind or downdraft component of a microburst-generated wind shear can significantly degrade airplane performance. Doppler radar and lidar are two sensor technologies being tested to provide flight crews with early warning of the presence of hazardous wind shear. An inherent limitation of Doppler-based sensors is the inability to measure velocities perpendicular to the line of sight, which results in an underestimate of the total wind shear hazard. One solution to the line-of-sight limitation is to use a vertical wind model to estimate the vertical component from the horizontal wind measurement. The objective of this study was to assess the ability of simple vertical wind models to improve the hazard prediction capability of an airborne Doppler sensor in a realistic microburst environment. Both simulation and flight test measurements were used to test the vertical wind models. The results indicate that in the altitude region of interest (at or below 300 m), the simple vertical wind models improved the hazard estimate. The radar simulation study showed that the magnitude of the performance improvement was altitude dependent. The altitude of maximum performance improvement occurred at about 300 m.
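One simple family of vertical-wind estimates rests on mass continuity: with an incompressible approximation and w = 0 at the surface, the vertical wind at height z follows from integrating the measured horizontal divergence, w(z) = -integral from 0 to z of (du/dx + dv/dy) dz'. The sketch below applies that relation to a synthetic divergence profile; it is a generic continuity-based estimate for intuition, not the specific vertical wind models evaluated in the study.

```python
import numpy as np

# Synthetic microburst outflow: strong positive horizontal divergence near the ground
# that decays with height (placeholder profile, units 1/s).
z = np.linspace(0.0, 600.0, 61)                 # height above ground (m)
divergence = 2.0e-2 * np.exp(-z / 250.0)

# Incompressible mass continuity with w(0) = 0 at the surface:
#   w(z) = -integral_0^z (du/dx + dv/dy) dz'   (trapezoidal integration)
w = -np.concatenate(([0.0], np.cumsum(0.5 * (divergence[1:] + divergence[:-1]) * np.diff(z))))

idx = np.argmin(np.abs(z - 300.0))
print(f"estimated vertical wind at 300 m: {w[idx]:.2f} m/s (negative = downdraft)")
```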
Working Memory in L2 Reading: Does Capacity Predict Performance?
ERIC Educational Resources Information Center
Harrington, Michael; Sawyer, Mark
A study was conducted at the International University of Japan to see if second language (L2) working capacity correlates with L2 reading ability in advanced English-as-a-Second-Language (ESL) learners. The study consisted of a set of memory tests (Simple Digit, Simple Word, and Complex Span Test) and a set of measures of reading skills given to…
Bohnen, Jordan D; George, Brian C; Williams, Reed G; Schuller, Mary C; DaRosa, Debra A; Torbeck, Laura; Mullen, John T; Meyerson, Shari L; Auyang, Edward D; Chipman, Jeffrey G; Choi, Jennifer N; Choti, Michael A; Endean, Eric D; Foley, Eugene F; Mandell, Samuel P; Meier, Andreas H; Smink, Douglas S; Terhune, Kyla P; Wise, Paul E; Soper, Nathaniel J; Zwischenberger, Joseph B; Lillemoe, Keith D; Dunnington, Gary L; Fryer, Jonathan P
Intraoperative performance assessment of residents is of growing interest to trainees, faculty, and accreditors. Current approaches to collect such assessments are limited by low participation rates and long delays between procedure and evaluation. We deployed an innovative, smartphone-based tool, SIMPL (System for Improving and Measuring Procedural Learning), to make real-time intraoperative performance assessment feasible for every case in which surgical trainees participate, and hypothesized that SIMPL could be feasibly integrated into surgical training programs. Between September 1, 2015 and February 29, 2016, 15 U.S. general surgery residency programs were enrolled in an institutional review board-approved trial. SIMPL was made available after 70% of faculty and residents completed a 1-hour training session. Descriptive and univariate statistics analyzed multiple dimensions of feasibility, including training rates, volume of assessments, response rates/times, and dictation rates. The 20 most active residents and attendings were evaluated in greater detail. A total of 90% of eligible users (1267/1412) completed training. Further, 13/15 programs began using SIMPL. In total, 6024 assessments were completed by 254 categorical general surgery residents (n = 3555 assessments) and 259 attendings (n = 2469 assessments), and 3762 unique operations were assessed. There was significant heterogeneity in participation within and between programs. The mean percentages (ranges) of users who completed ≥1, 5, and 20 assessments were 62% (21%-96%), 34% (5%-75%), and 10% (0%-32%) across all programs, and 96%, 75%, and 32% in the most active program. Overall, the response rate was 70%, the dictation rate was 24%, and the mean response time was 12 hours. Assessments increased from 357 (September 2015) to 1146 (February 2016). The 20 most active residents each received a mean of 46 assessments from 10 attendings for 20 different procedures. SIMPL can be feasibly integrated into surgical training programs to enhance the frequency and timeliness of intraoperative performance assessment. We believe SIMPL could help facilitate a national competency-based surgical training system, although local and systemic challenges still need to be addressed. Copyright © 2016. Published by Elsevier Inc.
The Developmental Influence of Primary Memory Capacity on Working Memory and Academic Achievement
2015-01-01
In this study, we investigate the development of primary memory capacity among children. Children between the ages of 5 and 8 completed 3 novel tasks (split span, interleaved lists, and a modified free-recall task) that measured primary memory by estimating the number of items in the focus of attention that could be spontaneously recalled in serial order. These tasks were calibrated against traditional measures of simple and complex span. Clear age-related changes in these primary memory estimates were observed. There were marked individual differences in primary memory capacity, but each novel measure was predictive of simple span performance. Among older children, each measure shared variance with reading and mathematics performance, whereas for younger children, the interleaved lists task was the strongest single predictor of academic ability. We argue that these novel tasks have considerable potential for the measurement of primary memory capacity and provide new, complementary ways of measuring the transient memory processes that predict academic performance. The interleaved lists task also shared features with interference control tasks, and our findings suggest that young children have a particular difficulty in resisting distraction and that variance in the ability to resist distraction is also shared with measures of educational attainment. PMID:26075630
The developmental influence of primary memory capacity on working memory and academic achievement.
Hall, Debbora; Jarrold, Christopher; Towse, John N; Zarandi, Amy L
2015-08-01
In this study, we investigate the development of primary memory capacity among children. Children between the ages of 5 and 8 completed 3 novel tasks (split span, interleaved lists, and a modified free-recall task) that measured primary memory by estimating the number of items in the focus of attention that could be spontaneously recalled in serial order. These tasks were calibrated against traditional measures of simple and complex span. Clear age-related changes in these primary memory estimates were observed. There were marked individual differences in primary memory capacity, but each novel measure was predictive of simple span performance. Among older children, each measure shared variance with reading and mathematics performance, whereas for younger children, the interleaved lists task was the strongest single predictor of academic ability. We argue that these novel tasks have considerable potential for the measurement of primary memory capacity and provide new, complementary ways of measuring the transient memory processes that predict academic performance. The interleaved lists task also shared features with interference control tasks, and our findings suggest that young children have a particular difficulty in resisting distraction and that variance in the ability to resist distraction is also shared with measures of educational attainment. (c) 2015 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
dos Santos, J. M. F.; Veloso, J. F. C. A.; Monteiro, C. M. B.
2004-01-01
We describe a simple experiment intended for didactic laboratory vacuum classes of undergraduate courses, using a helium leak detector. The helium throughput flowing into the vacuum volume due to the permeability of materials can be taken as a real leak, which can be measured with the helium leak detector. The experiment allows students to perform actual measurements of helium permeability constants of different materials, and to assess the dependence of the helium permeability throughput on the material thickness, area and helium pressure differential. As an example, a set of measurements is presented for Kapton foils, exhibiting results that are in good agreement with those presented in the literature.
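The quantity extracted in such an experiment follows from the steady-state permeation relation Q = K*A*Δp/d, where Q is the helium throughput read from the leak detector, A and d are the foil area and thickness, and Δp is the helium pressure differential. A hedged sketch with invented numbers (not the Kapton values from the article) is below.

```python
def permeability_constant(throughput, area_m2, thickness_m, delta_p_pa):
    """Solve Q = K * A * dp / d for the permeability constant K."""
    return throughput * thickness_m / (area_m2 * delta_p_pa)

# Invented example reading: 2.0e-8 Pa*m^3/s of helium through a 25 um foil of 1 cm^2
# with a 1 atm helium pressure differential.
K = permeability_constant(2.0e-8, 1.0e-4, 25e-6, 101325.0)
print(f"permeability constant ~ {K:.2e} m^2/s (units follow the throughput units used)")
```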
Souissi, Makram; Abedelmalek, Salma; Chtourou, Hamdi; Atheymen, Rim; Hakim, Ahmed; Sahnoun, Zouhair
2012-01-01
Purpose The purpose of the present study was to evaluate the ergogenic effect of caffeine ingestion on mood state, simple reaction time, and muscle power during the Wingate test recorded in the morning in elite judoists. Methods Twelve elite judoists (age: 21.08 ± 1.16 years, body mass: 83.75 ± 20.2 kg, height: 1.76 ± 6.57 m) participated in this study. Mood states, simple reaction time, and muscle power during the Wingate test were measured during two test sessions at 07:00 h, after placebo or caffeine ingestion (i.e., 5 mg/kg). Plasma concentrations of caffeine were measured before (T0) and 1 h after caffeine ingestion (T1) and after the Wingate test (T3). Results Our results revealed an increase in anxiety and vigor (P<0.01), a reduction in simple reaction time (P<0.001), and an improvement in peak and mean power during the Wingate test. However, the fatigue index during this test was unaffected by caffeine ingestion. In addition, the plasma concentration of caffeine was significantly higher at T1 than at T0. Conclusions In conclusion, the results of this study suggest that morning caffeine ingestion has ergogenic properties with the potential to benefit performance, increase anxiety and vigor, and decrease simple reaction time. PMID:23012635
Thornberg, Steven [Peralta, NM; Brown, Jason [Albuquerque, NM
2012-06-19
A method of detecting leaks and measuring volumes, as well as an apparatus, the Power-free Pump Module (PPM): a self-contained leak test and volume measurement apparatus that requires no external sources of electrical power during leak testing or volume measurement. The invention is a portable, pneumatically-controlled instrument capable of generating a vacuum, calibrating volumes, and performing quantitative leak tests on a closed test system or device, all without the use of alternating current (AC) power. Capabilities include the ability to provide a modest vacuum (less than 10 Torr), perform a pressure rise leak test, measure the gas's absolute pressure, and perform volume measurements. All operations are performed through a simple rotary control valve which controls pneumatically-operated manifold valves.
Thornberg, Steven M; Brown, Jason
2015-02-17
A method of detecting leaks and measuring volumes as well as a device, the Power-free Pump Module (PPM), provides a self-contained leak test and volume measurement apparatus that requires no external sources of electrical power during leak testing or volume measurement. The PPM is a portable, pneumatically-controlled instrument capable of generating a vacuum, calibrating volumes, and performing quantitative leak tests on a closed test system or device, all without the use of alternating current (AC) power. Capabilities include the ability to provide a modest vacuum (less than 10 Torr) using a venturi pump, perform a pressure rise leak test, measure the gas's absolute pressure, and perform volume measurements. All operations are performed through a simple rotary control valve which controls pneumatically-operated manifold valves.
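The pressure-rise leak test mentioned in both records reduces to a simple calculation: with the closed, evacuated volume V isolated, the leak rate is Q = V*ΔP/Δt. The sketch below applies that relation and adds the companion volume measurement by gas expansion from a calibrated reference volume; all numbers are illustrative and not taken from the patents.

```python
def pressure_rise_leak_rate(volume_l, dp_torr, dt_s):
    """Leak rate Q = V * dP/dt, returned in Torr*L/s."""
    return volume_l * dp_torr / dt_s

def unknown_volume_by_expansion(v_ref_l, p_ref_torr, p_final_torr):
    """Expand gas from a calibrated volume (at p_ref) into an evacuated unknown volume.
    Boyle's law: p_ref * V_ref = p_final * (V_ref + V_unknown)."""
    return v_ref_l * (p_ref_torr / p_final_torr - 1.0)

# Illustrative numbers: 2.5 L test volume rising 0.12 Torr over 10 minutes,
# and a 0.50 L reference at 760 Torr equilibrating to 152 Torr.
print(f"leak rate ~ {pressure_rise_leak_rate(2.5, 0.12, 600.0):.1e} Torr*L/s")
print(f"unknown volume ~ {unknown_volume_by_expansion(0.50, 760.0, 152.0):.2f} L")
```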
USDA-ARS's Scientific Manuscript database
An ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed capable of simultaneously measuring chlortetracycline (CTC), epi-chlortetracycline (epi-CTC), isochlortetracycline (ICTC), oxytetracycline, and tetracycline in swine manure. A simple sample pr...
Application of Support Vector Machine to Forex Monitoring
NASA Astrophysics Data System (ADS)
Kamruzzaman, Joarder; Sarker, Ruhul A.
Previous studies have demonstrated the superior performance of artificial neural network (ANN) based forex forecasting models over traditional regression models. This paper applies support vector machines to build a forecasting model from historical data using six simple technical indicators and presents a comparison with an ANN-based model trained by the scaled conjugate gradient (SCG) learning algorithm. The models are evaluated and compared on the basis of five commonly used performance metrics that measure closeness of prediction as well as correctness in directional change. Forecasting results for six different currencies against the Australian dollar reveal superior performance of the SVM model using a simple linear kernel over the ANN-SCG model in terms of all the evaluation metrics. The effect of SVM parameter selection on prediction performance is also investigated and analyzed.
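A minimal reproduction of this modeling setup on synthetic data is sketched below with scikit-learn: a few simple technical indicators (moving averages and momentum, chosen here as stand-ins rather than the paper's six indicators) are computed from a simulated exchange-rate series, and a linear-kernel support vector regressor is fitted to predict the next week's rate.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
rate = 0.65 + np.cumsum(rng.normal(0, 0.004, 500))      # synthetic weekly exchange-rate series

def indicators(series, i):
    """Stand-in technical indicators at week i (not the paper's exact six)."""
    return [
        series[i],                                       # current rate
        series[i - 5:i].mean(),                          # 5-week moving average
        series[i - 10:i].mean(),                         # 10-week moving average
        series[i] - series[i - 1],                       # 1-week momentum
        series[i] - series[i - 5],                       # 5-week momentum
    ]

X = np.array([indicators(rate, i) for i in range(10, len(rate) - 1)])
y = rate[11:]                                            # next week's rate

split = int(0.8 * len(X))
model = SVR(kernel="linear", C=1.0, epsilon=0.001).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test MAE: {mean_absolute_error(y[split:], pred):.4f}")
```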
Comparing Motor Skills in Autism Spectrum Individuals With and Without Speech Delay
Barbeau, Elise B.; Meilleur, Andrée‐Anne S.; Zeffiro, Thomas A.
2015-01-01
Movement atypicalities in speed, coordination, posture, and gait have been observed across the autism spectrum (AS) and atypicalities in coordination are more commonly observed in AS individuals without delayed speech (DSM‐IV Asperger) than in those with atypical or delayed speech onset. However, few studies have provided quantitative data to support these mostly clinical observations. Here, we compared perceptual and motor performance between 30 typically developing and AS individuals (21 with speech delay and 18 without speech delay) to examine the associations between limb movement control and atypical speech development. Groups were matched for age, intelligence, and sex. The experimental design included: an inspection time task, which measures visual processing speed; the Purdue Pegboard, which measures finger dexterity, bimanual performance, and hand‐eye coordination; the Annett Peg Moving Task, which measures unimanual goal‐directed arm movement; and a simple reaction time task. We used analysis of covariance to investigate group differences in task performance and linear regression models to explore potential associations between intelligence, language skills, simple reaction time, and visually guided movement performance. AS participants without speech delay performed slower than typical participants in the Purdue Pegboard subtests. AS participants without speech delay showed poorer bimanual coordination than those with speech delay. Visual processing speed was slightly faster in both AS groups than in the typical group. Altogether, these results suggest that AS individuals with and without speech delay differ in visually guided and visually triggered behavior and show that early language skills are associated with slower movement in simple and complex motor tasks. Autism Res 2015, 8: 682–693. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research PMID:25820662
Takeuchi, Hikaru; Sugiura, Motoaki; Sassa, Yuko; Sekiguchi, Atsushi; Yomogida, Yukihito; Taki, Yasuyuki; Kawashima, Ryuta
2012-01-01
The difference between the speed of simple cognitive processes and the speed of complex cognitive processes has various psychological correlates. However, the neural correlates of this difference have not yet been investigated. In this study, we focused on working memory (WM) for typical complex cognitive processes. Functional magnetic resonance imaging data were acquired during the performance of an N-back task, which is a measure of WM for typical complex cognitive processes. In our N-back task, task speed and memory load were varied to identify the neural correlates responsible for the difference between the speed of simple cognitive processes (estimated from the 0-back task) and the speed of WM. Our findings showed that this difference was characterized by the increased activation in the right dorsolateral prefrontal cortex (DLPFC) and the increased functional interaction between the right DLPFC and right superior parietal lobe. Furthermore, the local gray matter volume of the right DLPFC was correlated with participants' accuracy during fast WM tasks, which in turn correlated with a psychometric measure of participants' intelligence. Our findings indicate that the right DLPFC and its related network are responsible for the execution of the fast cognitive processes involved in WM. Identified neural bases may underlie the psychometric differences between the speed with which subjects perform simple cognitive tasks and the speed with which subjects perform more complex cognitive tasks, and explain the previous traditional psychological findings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herchko, S; Ding, G
2016-06-15
Purpose: To develop an accurate, straightforward, and user-independent method for performing light versus radiation field coincidence quality assurance utilizing EPID images, a simple phantom made of readily-accessible materials, and a free software program. Methods: A simple phantom consisting of a blocking tray, graph paper, and high-density wire was constructed. The phantom was used to accurately set the size of a desired light field and imaged on the electronic portal imaging device (EPID). A macro written for use in ImageJ, a free image processing software, was then used to determine the radiation field size, utilizing the high-density wires on the phantom for a pixel-to-distance calibration. The macro also performs an analysis of the measured radiation field utilizing the tolerances recommended in AAPM Task Group #142. To verify the accuracy of this method, radiochromic film was used to qualitatively demonstrate agreement between the film and EPID results, and an additional ImageJ macro was used to quantitatively compare the radiation field sizes measured with both the EPID and film images. Results: The results of this technique were benchmarked against film measurements, which have been the gold standard for testing light versus radiation field coincidence. The agreement between this method and film measurements was within 0.5 mm. Conclusion: Due to the operator dependency associated with tracing light fields and measuring radiation fields by hand when using film, this method allows for a more accurate comparison between the light and radiation fields with minimal operator dependency. Removing the need for radiographic or radiochromic film also eliminates a recurring cost and increases procedural efficiency.
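The abstract does not reproduce the macro itself; the sketch below only illustrates the two calculations it describes, a pixel-to-distance calibration from the known wire spacing and a 50%-of-central-value field-edge search compared against a 2 mm (TG-142-style) tolerance. The threshold, tolerance, and data here are assumptions.

```python
# Illustrative sketch (not the authors' ImageJ macro): calibrate pixels to mm
# from the known wire spacing, find field edges at 50% of the central value,
# and compare the measured width to a 2 mm tolerance.
import numpy as np

def field_width_mm(profile, wire_px_positions, wire_spacing_mm, threshold=0.5):
    mm_per_px = wire_spacing_mm / np.mean(np.diff(wire_px_positions))
    half_max = threshold * profile[len(profile) // 2]
    above = np.where(profile >= half_max)[0]
    return (above[-1] - above[0]) * mm_per_px

# Synthetic 1D EPID profile for a nominal 100 mm field, wires every 50 mm.
px = np.arange(512)
profile = 1.0 / (1 + np.exp(-(px - 156) / 2)) / (1 + np.exp((px - 356) / 2))
width = field_width_mm(profile, wire_px_positions=[106, 206, 306, 406],
                       wire_spacing_mm=50.0)
print(width, abs(width - 100.0) <= 2.0)   # measured width, pass/fail
```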
Hyperoxia and Hypoxic Hypoxia Effects on Simple and Choice Reaction Times.
Dart, Todd; Gallo, Megan; Beer, Jeremy; Fischer, Joseph; Morgan, Thomas; Pilmanis, Andrew
2017-12-01
Effects of exposure to hyperoxia (PiO2 > 105 mmHg), normoxia (PiO2 95-105 mmHg) and hypoxia (PiO2 < 95 mmHg) on simple and choice reaction performance tasks were evaluated. Ten subjects performed simple and choice reaction time tests (SRT and CRT, respectively) at ground level for 40 min (20 min normoxic, 20 min hyperoxic, randomly assigned), 3048 m (10,000 ft) for 75 min (15 min hyperoxic, 60 min hypoxic), 4572 m (15,000 ft) for 60 min (15 min hyperoxic, 45 min hypoxic), and 6096 m (20,000 ft) for 35 min (15 min hyperoxic, 20 min hypoxic). SRT and CRT tests were also conducted at ground level 1 h after normoxic rest (recovery) to assess any recovery time effect on these psychomotor tasks. Total response time (TRT) significantly increased by 15 ms to 25 ms at all three altitudes for both the SRT and CRT tasks. At and below 4572 m, the performance changes were gradual over the duration of the exposures, whereas at 6096 m these changes were immediate. After 1 h, no performance decrement was measured. There was no statistical evidence that ground-level performance on these tasks was improved in hyperoxic vs. normoxic conditions. Results suggest mild decrements in reaction time due to hypoxia may occur as low as 3048 m (10,000 ft) while hyperoxia showed no positive effect on accuracy or reaction time at ground level or higher when performing simple and choice psychomotor reaction tasks. Dart T, Gallo M, Beer J, Fischer J, Morgan T, Pilmanis A. Hyperoxia and hypoxic hypoxia effects on simple and choice reaction times. Aerosp Med Hum Perform. 2017; 88(12):1073-1080.
Beyond Benchmarking: Value-Adding Metrics
ERIC Educational Resources Information Center
Fitz-enz, Jac
2007-01-01
HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…
How to use 3D shadows for simple microscopy and vibrometry
NASA Astrophysics Data System (ADS)
Parikesit, Gea O. F.; Kusumaningtyas, Indraswari
2017-07-01
In 2014, we reported that shadows can be displayed in 3D using a stereoscopic setup. We now report that the 3D shadows can also be used to perform simple measurements, which are suitable for physics education in schools and colleges. Two different types of measurements are demonstrated, i.e. microscopy and vibrometry. Both types of measurements take advantage of the geometrical optics of the 3D shadows, where the 3D position of an object can be estimated using the coordinates of the colored light sources and the coordinates of the colored shadow images. We also include several student activities that can raise the students’ curiosity and capability.
The Slope Imaging Multi-Polarization Photon-Counting Lidar: Development and Performance Results
NASA Technical Reports Server (NTRS)
Dabney, Phillip
2010-01-01
The Slope Imaging Multi-polarization Photon-counting Lidar (SIMPL) is an airborne instrument developed to demonstrate laser altimetry measurement methods that will enable more efficient observations of topography and surface properties from space. The instrument was developed through the NASA Earth Science Technology Office Instrument Incubator Program with a focus on cryosphere remote sensing. The SIMPL transmitter is an 11 kHz, 1064 nm, plane-polarized micropulse laser transmitter that is frequency doubled to 532 nm and split into four push-broom beams. The receiver employs single-photon, polarimetric ranging at 532 and 1064 nm using Single Photon Counting Modules in order to achieve simultaneous sampling of surface elevation, slope, roughness and depolarizing scattering properties, the latter used to differentiate surface types. Data acquired over ice-covered Lake Erie in February 2009 are documenting SIMPL's measurement performance and capabilities, demonstrating differentiation of open water and several ice cover types. ICESat-2 will employ several of the technologies advanced by SIMPL, including micropulse, single-photon ranging in a multi-beam, push-broom configuration operating at 532 nm.
van den Boer, Cindy; Muller, Sara H; Vincent, Andrew D; Züchner, Klaus; van den Brekel, Michiel W M; Hilgers, Frans J M
2013-09-01
Breathing through a tracheostomy results in insufficient warming and humidification of inspired air. This loss of air-conditioning can be partially compensated for with the application of a heat and moisture exchanger (HME) over the tracheostomy. In vitro (International Organization for Standardization [ISO] standard 9360-2:2001) and in vivo measurements of the effects of an HME are complex and technically challenging. The aim of this study was to develop a simple method to measure ex vivo HME performance comparable with previous in vitro and in vivo results. HMEs were weighed at the end of inspiration and at the end of expiration at different breathing volumes. Four HMEs (Atos Medical, Hörby, Sweden) with known in vivo humidity and in vitro water loss values were tested. The associations between weight change, volume, and absolute humidity were determined using both linear and non-linear mixed effects models. The ranking of the 4 HMEs by weighing correlated with previous intra-tracheal measurements (R(2) = 0.98) and with the ISO standard (R(2) = 0.77). Assessment of the weight change between end of inhalation and end of exhalation is a valid and simple method of measuring the water exchange performance of an HME.
Work Measurement as a Generalized Quantum Measurement
NASA Astrophysics Data System (ADS)
Roncaglia, Augusto J.; Cerisola, Federico; Paz, Juan Pablo
2014-12-01
We present a new method to measure the work w performed on a driven quantum system and to sample its probability distribution P(w). The method is based on a simple fact that remained unnoticed until now: work on a quantum system can be measured by performing a generalized quantum measurement at a single time. Such a measurement, which is technically a positive operator valued measure (POVM), reduces to an ordinary projective measurement on an enlarged system. This observation not only demystifies work measurement but also suggests a new quantum algorithm to efficiently sample the distribution P(w). This can be used, in combination with fluctuation theorems, to estimate free energies of quantum states on a quantum computer.
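For context, the work statistics such a scheme must reproduce are usually written in the standard two-projective-measurement form (the notation below is the conventional one, not necessarily the authors'):

P(w) = \sum_{n,m} p_n^{0} \, \bigl| \langle \epsilon_m^{\tau} | U(\tau) | \epsilon_n^{0} \rangle \bigr|^{2} \, \delta\!\bigl( w - (\epsilon_m^{\tau} - \epsilon_n^{0}) \bigr),

where p_n^{0} is the initial population of the energy eigenstate |\epsilon_n^{0}\rangle, U(\tau) is the driving evolution, and \epsilon_m^{\tau} are the eigenvalues of the final Hamiltonian.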
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
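As a toy illustration of the kind of "simple policy" the abstract refers to (not the authors' policies or cost model), one can remap whenever the performance penalty accumulated since the last remap exceeds a fixed remapping cost:

```python
# Toy sketch of a simple remapping policy under gradual performance
# deterioration: remap once the accumulated penalty (lost work relative to a
# freshly mapped system) exceeds the fixed remapping cost. Illustrative only.

def total_work(steps, remap_cost, degradation_per_step, base_rate=1.0):
    work, penalty, rate = 0.0, 0.0, base_rate
    for _ in range(steps):
        work += rate
        penalty += base_rate - rate
        rate = max(0.0, rate - degradation_per_step)  # gradual deterioration
        if penalty > remap_cost:
            work -= remap_cost                        # pay the remapping cost
            rate, penalty = base_rate, 0.0
    return work

print(total_work(steps=1000, remap_cost=5.0, degradation_per_step=0.01))
```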
Assessment of simple colorimetric procedures to determine smoking status of diabetic subjects.
Smith, R F; Mather, H M; Ellard, G A
1998-02-01
The performance of a simple colorimetric assay for urinary nicotine metabolites to assess smoking status in diabetic subjects (n = 251) was investigated. Several variations of the colorimetric assay and a qualitative extraction procedure were evaluated in comparison with a cotinine immunoassay as the "gold standard." Among these, the best overall performance was achieved with the qualitative test (sensitivity 95%; specificity 100%). The quantitative measurement of total nicotine metabolites performed less well (sensitivity 92%; specificity 97%) but could be improved by incorporating a blank extraction (sensitivity 98%; specificity 98%). Allowance for diuresis appeared to offer no advantage over the other methods. These results support previous findings regarding the use of these colorimetric procedures in nondiabetic subjects and, contrary to other recent observations, their performance was not impaired in diabetic patients.
Osofundiya, Olufunmilola; Benden, Mark E; Dowdy, Diane; Mehta, Ranjana K
2016-06-01
Recent evidence of obesity-related changes in the prefrontal cortex during cognitive and seated motor activities has surfaced; however, the impact of obesity on neural activity during ambulation remains unclear. The purpose of this study was to determine the obesity-specific neural cost of simple and complex ambulation in older adults. Twenty non-obese and obese individuals, 65 years and older, performed three tasks varying in the type and complexity of ambulation (simple walking, walking+cognitive dual-task, and precision walking). Maximum oxygenated hemoglobin, a measure of neural activity, was measured bilaterally using a portable functional near infrared spectroscopy system, and gait speed and performance on the complex tasks were also obtained. Complex ambulatory tasks were associated with ~2-3.5 times greater cerebral oxygenation levels and ~30-40% slower gait speeds when compared to the simple walking task. Additionally, obesity was associated with three times greater oxygenation levels, particularly during the precision gait task, despite obese adults demonstrating gait speeds and performances on the complex gait tasks similar to those of non-obese adults. Compared to existing studies that focus solely on biomechanical outcomes, the present study is one of the first to examine obesity-related differences in neural activity during ambulation in older adults. In order to maintain gait performance, obesity was associated with higher neural costs, and this was augmented during ambulatory tasks requiring greater precision control. These preliminary findings have clinical implications in identifying individuals who are at greater risk of mobility limitations, particularly when performing complex ambulatory tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.
Design and performance of optimal detectors for guided wave structural health monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dib, G.; Udpa, L.
2016-01-01
Ultrasonic guided wave measurements in a long term structural health monitoring system are affected by measurement noise, environmental conditions, transducer aging and malfunction. This results in measurement variability which affects detection performance, especially in complex structures where baseline data comparison is required. This paper derives the optimal detector structure, within the framework of detection theory, where a guided wave signal at the sensor is represented by a single feature value that can be used for comparison with a threshold. Three different types of detectors are derived depending on the underlying structure's complexity: (i) simple structures where defect reflections can be identified without the need for baseline data; (ii) simple structures that require baseline data due to overlap of defect scatter with scatter from structural features; (iii) complex structures with dense structural features that require baseline data. The detectors are derived by modeling the effects of variabilities and uncertainties as random processes. Analytical solutions for the performance of the detectors in terms of the probability of detection and false alarm are derived. A finite element model is used to generate guided wave signals, and the performance results of a Monte-Carlo simulation are compared with the theoretical performance. Initial results demonstrate that the problems of signal complexity and environmental variability can in fact be exploited to improve detection performance.
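For the simplest of the three cases (a known defect signature s in zero-mean Gaussian noise of standard deviation \sigma), the matched-filter detector T(x) = s^{T}x with threshold \gamma has the familiar closed-form performance (standard detection-theory notation, not taken from the paper):

P_{FA} = Q\!\left(\frac{\gamma}{\sigma\,\lVert s\rVert}\right), \qquad
P_{D} = Q\!\left(Q^{-1}(P_{FA}) - \frac{\lVert s\rVert}{\sigma}\right),

where Q is the Gaussian tail function; the baseline-dependent detectors in cases (ii) and (iii) are more involved.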
Electromagnetic properties of ice coated surfaces
NASA Technical Reports Server (NTRS)
Dominek, A.; Walton, E.; Wang, N.; Beard, L.
1989-01-01
The electromagnetic scattering from ice coated structures is examined. The influence of ice is shown from a measurement standpoint and related to a simple analytical model. A hardware system for the realistic measurement of ice coated structures is also being developed to use in an existing NASA Lewis icing tunnel. Presently, initial measurements have been performed with a simulated tunnel to aid in the development.
New simple and low-cost methods for periodic checks of Cyclone® Plus Storage Phosphor System.
Edalucci, Elisabetta; Maffione, Anna Margherita; Fornasier, Maria Rosa; De Denaro, Mario; Scian, Giovanni; Dore, Franca; Rubello, Domenico
2017-01-01
The recent large use of the Cyclone® Plus Storage Phosphor System, especially in European countries, as imaging system for quantification of radiochemical purity of radiopharmaceuticals raised the problem of setting the periodic controls as required by European Legislation. We described simple, low-cost methods for Cyclone® Plus quality controls, which can be useful to evaluate the performance measurement of this imaging system.
Measuring Student and School Progress with the California API. CSE Technical Report.
ERIC Educational Resources Information Center
Thum, Yeow Meng
This paper focuses on interpreting the major conceptual features of California's Academic Performance Index (API) as a coherent set of statistical procedures. To facilitate a characterization of its statistical properties, the paper casts the index as a simple weighted average of the subjective worth of students' normative performance and presents…
USDA-ARS's Scientific Manuscript database
Colostrum affects gut and uterine gland development in the neonatal piglet, suggesting that subsequent growth and reproductive performance may be affected. Measuring immunoglobulin in piglet serum using the immunoglobulin immunocrit on day 1 of age provides a simple inexpensive indication of the amo...
Rotor Hover Performance and Flowfield Measurements with Untwisted and Highly-Twisted Blades
NASA Technical Reports Server (NTRS)
Ramasamy, Manikandan; Gold, Nili P.; Bhagwat, Mahendra J.
2010-01-01
The flowfield and performance characteristics of highly-twisted blades were analyzed at various thrust conditions to improve the fundamental understanding relating the wake effects on rotor performance. Similar measurements made using untwisted blades served as the baseline case. Twisted blades are known to give better hover performance than untwisted blades at high thrust coefficients typical of those found in full-scale rotors. However, the present experiments were conducted at sufficiently low thrust (beginning from zero thrust), where the untwisted blades showed identical, if not better, performance when compared with the highly-twisted blades. The flowfield measurements showed some key wake differences between the two rotors, as well. These observations when combined with simple blade element momentum theory (also called annular disk momentum theory) helped further the understanding of rotor performance characteristics.
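The "simple blade element momentum theory" invoked above reduces, for ideal hover, to the textbook annular-disk relations (symbols standard, values not taken from the paper):

v_i = \sqrt{\frac{T}{2\rho A}}, \qquad
P_{\mathrm{ideal}} = T\,v_i = \frac{T^{3/2}}{\sqrt{2\rho A}}, \qquad
\mathrm{FM} = \frac{P_{\mathrm{ideal}}}{P_{\mathrm{measured}}},

with thrust T, air density \rho, rotor disk area A, induced velocity v_i, and figure of merit FM.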
Hasar, U C
2009-05-01
A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of the complex permittivity of lossy liquid materials has been proposed. The system comprises two main sections--microwave and electronic. While the microwave section provides for measuring only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and only requires microwave components to perform reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge of the sample thickness. In order to reduce systematic errors in the system, we propose a simple calibration technique, which employs simple and readily available standards. The measurement system can be a good candidate for industrial-based applications.
Friction Coefficient Determination by Electrical Resistance Measurements
ERIC Educational Resources Information Center
Tunyagi, A.; Kandrai, K.; Fülöp, Z.; Kapusi, Z.; Simon, A.
2018-01-01
A simple and low-cost, DIY-type, Arduino-driven experiment is presented for the study of friction and measurement of the friction coefficient, using a conductive rubber cord as a force sensor. It is proposed for high-school or college/university-level students. We strongly believe that it is worthwhile planning, designing and performing Arduino…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busby, L.
This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
TEACHING PHYSICS: Atwood's machine: experiments in an accelerating frame
NASA Astrophysics Data System (ADS)
Teck Chee, Chia; Hong, Chia Yee
1999-03-01
Experiments in an accelerating frame are often difficult to perform, but simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine.
Atwood's Machine: Experiments in an Accelerating Frame.
ERIC Educational Resources Information Center
Chee, Chia Teck; Hong, Chia Yee
1999-01-01
Experiments in an accelerating frame are hard to perform. Illustrates how simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine. (Author/CCM)
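For reference, the ideal (massless string, frictionless pulley) predictions that the measured values can be compared against are

a = \frac{(m_1 - m_2)\,g}{m_1 + m_2}, \qquad T = \frac{2\,m_1 m_2\,g}{m_1 + m_2},

where m_1 > m_2 are the hanging masses, a the common acceleration, and T the string tension.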
Measuring cognitive load: performance, mental effort and simulation task complexity.
Haji, Faizal A; Rojas, David; Childs, Ruth; de Ribaupierre, Sandrine; Dubrowski, Adam
2015-08-01
Interest in applying cognitive load theory in health care simulation is growing. This line of inquiry requires measures that are sensitive to changes in cognitive load arising from different instructional designs. Recently, mental effort ratings and secondary task performance have shown promise as measures of cognitive load in health care simulation. We investigate the sensitivity of these measures to predicted differences in intrinsic load arising from variations in task complexity and learner expertise during simulation-based surgical skills training. We randomly assigned 28 novice medical students to simulation training on a simple or complex surgical knot-tying task. Participants completed 13 practice trials, interspersed with computer-based video instruction. On trials 1, 5, 9 and 13, knot-tying performance was assessed using time and movement efficiency measures, and cognitive load was assessed using subjective rating of mental effort (SRME) and simple reaction time (SRT) on a vibrotactile stimulus-monitoring secondary task. Significant improvements in knot-tying performance (F(1.04,24.95) = 41.1, p < 0.001 for movements; F(1.04,25.90) = 49.9, p < 0.001 for time) and reduced cognitive load (F(2.3,58.5) = 57.7, p < 0.001 for SRME; F(1.8,47.3) = 10.5, p < 0.001 for SRT) were observed in both groups during training. The simple-task group demonstrated superior knot tying (F(1,24) = 5.2, p = 0.031 for movements; F(1,24) = 6.5, p = 0.017 for time) and a faster decline in SRME over the first five trials (F(1,26) = 6.45, p = 0.017) compared with their peers. Although SRT followed a similar pattern, group differences were not statistically significant. Both secondary task performance and mental effort ratings are sensitive to changes in intrinsic load among novices engaged in simulation-based learning. These measures can be used to track cognitive load during skills training. Mental effort ratings are also sensitive to small differences in intrinsic load arising from variations in the physical complexity of a simulation task. The complementary nature of these subjective and objective measures suggests their combined use is advantageous in simulation instructional design research. © 2015 John Wiley & Sons Ltd.
Smirr, Jean-Loup; Guilbaud, Sylvain; Ghalbouni, Joe; Frey, Robert; Diamanti, Eleni; Alléaume, Romain; Zaquine, Isabelle
2011-01-17
Fast characterization of pulsed spontaneous parametric down conversion (SPDC) sources is important for applications in quantum information processing and communications. We propose a simple method to perform this task, which only requires measuring the counts on the two output channels and the coincidences between them, as well as modeling the filter used to reduce the source bandwidth. The proposed method is experimentally tested and used for a complete evaluation of SPDC sources (pair emission probability, total losses, and fidelity) of various bandwidths. This method can find applications in the setting up of SPDC sources and in the continuous verification of the quality of quantum communication links.
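Neglecting dark counts and multi-pair emission (and the filter model the authors add), the measured rates relate to the per-pulse pair probability \mu and the channel transmissions \eta_1, \eta_2 in the usual Klyshko-style way:

N_1 \approx R\,\mu\,\eta_1, \quad N_2 \approx R\,\mu\,\eta_2, \quad C \approx R\,\mu\,\eta_1\eta_2
\;\Longrightarrow\; \eta_1 \approx \frac{C}{N_2}, \quad \eta_2 \approx \frac{C}{N_1}, \quad \mu \approx \frac{N_1 N_2}{R\,C},

where R is the pump repetition rate, N_1 and N_2 are the singles rates, and C is the coincidence rate.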
Saffar, Saber; Abdullah, Amir
2014-03-01
The vibration amplitude of a transducer's elements is an influential parameter in the performance of high power airborne ultrasonic transducers, and it must be controlled to achieve optimum vibration without material yielding. The vibration amplitude of the elements of the high power airborne transducer provided was determined by measuring the temperature of the transducer's elements. The results showed that simple thermocouples can be used both to measure the vibration amplitude of the transducer's elements and as an indicator of power transmission to the air. To verify our approach, the power transmission to the air was also investigated experimentally by another common method. The experimental results displayed good agreement with the presented approach. Copyright © 2013 Elsevier B.V. All rights reserved.
Measuring and characterizing beat phenomena with a smartphone
NASA Astrophysics Data System (ADS)
Osorio, M.; Pereyra, C. J.; Gau, D. L.; Laguarda, A.
2018-03-01
Nowadays, smartphones are in everyone's life. Apart from being excellent tools for work and communication, they can also be used to perform several measurements of simple physical magnitudes, serving as a mobile and inexpensive laboratory, ideal for use in physics lectures in high schools or universities. In this article, we use a smartphone to analyse the acoustic beat phenomenon using a simple experimental setup, which can complement lessons in the classroom. The beats were created by the superposition of the waves generated by two tuning forks, with their natural frequencies previously characterized using different applications. After the characterization, we recorded the beats and analysed the oscillations in time and frequency.
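The quantities extracted from the recording follow from the textbook superposition of two equal-amplitude tones:

x(t) = A\cos(2\pi f_1 t) + A\cos(2\pi f_2 t)
     = 2A\,\cos\!\left(2\pi\,\tfrac{f_1 - f_2}{2}\,t\right)\cos\!\left(2\pi\,\tfrac{f_1 + f_2}{2}\,t\right),

so the envelope (beat) frequency is f_{\mathrm{beat}} = |f_1 - f_2| and the carrier sits at (f_1 + f_2)/2.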
Some Simple Formulas for Posterior Convergence Rates
2014-01-01
We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model, and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit, involve no essential assumptions, and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278
Hyponatremia in liver cirrhosis: pathophysiological principles of management.
Castello, L; Pirisi, M; Sainaghi, P P; Bartoli, E
2005-02-01
Hyponatremia is common in cirrhosis, where it worsens encephalopathy. It can be due to excess water, reduced Na, or a combination of both. The diagnosis can be established with clinical skills aided by simple data such as weight, blood pressure and plasma electrolytes. Quantitative estimates of the water surfeit or solute deficit, easily obtained with simple formulas and measurements, guide accurate and planned treatment, avoiding the ominous complication of central pontine myelinolysis.
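The abstract does not reproduce the formulas; commonly used bedside estimates of this kind (which may differ in detail from the authors' exact expressions) are

\mathrm{Na\ deficit} \approx \mathrm{TBW}\times\bigl([\mathrm{Na}]_{\mathrm{target}} - [\mathrm{Na}]_{\mathrm{measured}}\bigr), \qquad
\mathrm{free\ water\ excess} \approx \mathrm{TBW}\times\left(1 - \frac{[\mathrm{Na}]_{\mathrm{measured}}}{140}\right),

with total body water TBW taken as roughly 0.6 times body weight (kg) in men and 0.5 in women, and sodium concentrations in mmol/L.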
Measurement of the shock front velocity produced in a T-tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djurović, S.; Mijatović, Z.; Vujičić, B.
2015-01-15
A set of shock front velocity measurements is described in this paper. The shock waves were produced in a small electromagnetically driven shock T-tube. Most of the measurements were performed in hydrogen. The shock front velocity measurements in other gases and the velocity of the gas behind the shock front were also analyzed, as well as the dependence of the velocity on the applied input energy. Some measurements with an applied external magnetic field were also performed. The method used for measuring the shock front velocity is simple and was shown to be very reliable. Measured values were compared with calculated ones for the incident and reflected shock waves.
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
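A minimal sketch of the anthropometric-adjustment idea, assuming synthetic data and a plain linear model (the study's exact mixed-model specification is not reproduced here):

```python
# Sketch: predict motion-analysis trunk flexion from the inclinometer angle
# alone, then with lower-arm length added, and compare explained variance.
# Data are synthetic placeholders, not the study's measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_subjects, n_trials = 8, 36
inclin = rng.uniform(0, 60, n_subjects * n_trials)            # inclination, degrees
arm_cm = np.repeat(rng.normal(26, 2, n_subjects), n_trials)   # lower-arm length, cm
flexion = 0.8 * inclin + 1.5 * (arm_cm - 26) + rng.normal(0, 5, inclin.size)

base = LinearRegression().fit(inclin[:, None], flexion)
full = LinearRegression().fit(np.column_stack([inclin, arm_cm]), flexion)
print(base.score(inclin[:, None], flexion),                    # R^2, inclinometer only
      full.score(np.column_stack([inclin, arm_cm]), flexion))  # R^2 with anthropometry
```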
Evidence for a Cognitive Control Network for Goal-Directed Attention in Simple Sustained Attention
ERIC Educational Resources Information Center
Hilti, Caroline C.; Jann, Kay; Heinemann, Doerthe; Federspiel, Andrea; Dierks, Thomas; Seifritz, Erich; Cattapan-Ludewig, Katja
2013-01-01
The deterioration of performance over time is characteristic for sustained attention tasks. This so-called "performance decrement" is measured by the increase of reaction time (RT) over time. Some behavioural and neurobiological mechanisms of this phenomenon are not yet fully understood. Behaviourally, we examined the increase of RT over time and…
Biofeedback and Performance: An Update
1984-12-01
low tension level), Sabourin and Rioux (1979) measured performance on standard laboratory tasks: memorization of nonsense syllables, simple reaction... Sabourin and Rioux point out, however, most subjects undergoing laboratory experimentation are not usually (in the absence of specific treatment to...and extraneous muscle activity interferes with efficient acquisition of a psychomotor skill. Keeping in mind the effect noted by Sabourin and Rioux
Morrow, Sarah A; Menon, Suresh; Rosehart, Heather; Sharma, Manas
2017-02-01
One of the most frequently disabling symptoms in Multiple Sclerosis (MS) is cognitive impairment, which is often insidious in onset and therefore difficult to recognize in the early stages, for both persons with MS and clinicians. A biomarker that would help identify those at risk of cognitive impairment, or with only mild impairment, would be a useful tool for clinicians. Using MRI, already an integral tool in the diagnosis and monitoring of disease activity in MS, would be ideal. Thus, this study aimed to determine if simple measures on routine MRI could serve as potential biomarkers for cognitive impairment in MS. We retrospectively identified 51 persons with MS who had a cognitive assessment and an MRI performed within six months of each other. Simple linear measurements of the hippocampi, bifrontal and third ventricular width, bicaudate width and the anterior, mid and posterior corpus callosum were made. Pearson's correlations examined the relationship between these MRI measures and cognitive tests, and MRI measures were compared between persons with MS who were either normal or cognitively impaired on objective cognitive tests using Analysis of Covariance (ANCOVA). Bicaudate span and third ventricular width were negatively correlated, while corpus callosal measures were positively correlated, with cognitive test performance. After controlling for potential confounders, bicaudate span was significantly different on measures of immediate recall. Both the anterior and posterior corpus callosal measures were significantly different on measures of verbal fluency, immediate recall and higher executive function, while the anterior corpus callosum was also significantly different on processing speed. The middle corpus callosal measure was significantly different on immediate recall and higher executive function. This study presents data demonstrating that simple-to-apply MRI measures of atrophy may serve as biomarkers for cognitive impairment in persons with MS. Further prospective studies are needed to validate these findings. Copyright © 2016 Elsevier B.V. All rights reserved.
Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki
2010-12-01
To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < 0.001) regardless of the degree of hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.
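The double-echo (in-phase/opposed-phase) acquisition is commonly reduced to a signal fat fraction before applying a calibration such as the phantom curve described above; the generic two-point estimate (not necessarily the authors' exact expression) is

\mathrm{FF} \approx \frac{S_{\mathrm{IP}} - S_{\mathrm{OP}}}{2\,S_{\mathrm{IP}}},

where S_{\mathrm{IP}} and S_{\mathrm{OP}} are the in-phase and opposed-phase signal intensities.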
Methods for quantifying simple gravity sensing in Drosophila melanogaster.
Inagaki, Hidehiko K; Kamikouchi, Azusa; Ito, Kei
2010-01-01
Perception of gravity is essential for animals: most animals possess specific sense organs to detect the direction of the gravitational force. Little is known, however, about the molecular and neural mechanisms underlying their behavioral responses to gravity. Drosophila melanogaster, having a rather simple nervous system and a large variety of molecular genetic tools available, serves as an ideal model for analyzing the mechanisms underlying gravity sensing. Here we describe an assay to measure simple gravity responses of flies behaviorally. This method can be applied for screening genetic mutants of gravity perception. Furthermore, in combination with recent genetic techniques to silence or activate selective sets of neurons, it serves as a powerful tool to systematically identify neural substrates required for the proper behavioral responses to gravity. The assay requires 10 min to perform, and two experiments can be performed simultaneously, enabling 12 experiments per hour.
Heritability of Cognitive Abilities as Measured by Mental Chronometric Tasks: A Meta-Analysis
ERIC Educational Resources Information Center
Beaujean, A.A.
2005-01-01
The purpose of this study is to meta-analyze the published studies that measure the performance differences in mental chronometric tasks using a behavioral genetic research design. Because chronometric tasks are so simple, individual differences in the time it takes to complete them are largely due to underlying biological and physiological…
Measuring Drag Force in Newtonian Liquids
NASA Astrophysics Data System (ADS)
Mawhinney, Matthew T.; O'Donnell, Mary Kate; Fingerut, Jonathan; Habdas, Piotr
2012-03-01
The experiments described in this paper have two goals. The first goal is to show how students can perform simple but fundamental measurements of objects moving through simple liquids (such as water, oil, or honey). In doing so, students can verify Stokes' law, which governs the motion of spheres through simple liquids, and see how it fails at higher object speeds. Moreover, they can qualitatively study fluid patterns at various object speeds (Reynolds numbers). The second goal is to help students make connections between physics and other sciences. Specifically, the results of these experiments can be used to help students understand the role of fluid motion in determining the shape of an organism, or where it lives. At Saint Joseph's University we have developed these experiments as part of a newly developed course in biomechanics where both physics and biology undergraduate students bring their ideas and expertise to enrich a shared learning environment.
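The measurements can be checked against the textbook relations (standard symbols, not values from the article):

F_d = 6\pi\,\eta\,r\,v, \qquad
v_t = \frac{2\,r^2\,(\rho_s - \rho_f)\,g}{9\,\eta}, \qquad
Re = \frac{\rho_f\,v\,(2r)}{\eta},

where \eta is the fluid viscosity, r the sphere radius, \rho_s and \rho_f the sphere and fluid densities, and v_t the terminal velocity; Stokes' law holds only for Re \lesssim 1.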
ERIC Educational Resources Information Center
Gilbert, George L., Ed.
1983-01-01
Provides directions for setup and performance of two demonstrations. The first demonstrates the principles of Raoult's law, using a simple apparatus designed to measure vapor pressure. The second illustrates the energy available from alcohol combustion (includes safety precautions) using an alcohol-fueled missile. (JM)
A measurement-based performability model for a multiprocessor system
NASA Technical Reports Server (NTRS)
Ilsueh, M. C.; Iyer, Ravi K.; Trivedi, K. S.
1987-01-01
A measurement-based performability model based on real error data collected on a multiprocessor system is described. Model development from the raw error data to the estimation of cumulative reward is described. Both normal and failure behavior of the system are characterized. The measured data show that the holding times in key operational and failure states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different failure types and recovery procedures.
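A minimal sketch of the reward calculation such a model supports, assuming an illustrative three-state semi-Markov model (states, holding times, and reward rates below are placeholders, not the measured data):

```python
# Sketch of a semi-Markov performability computation: the long-run expected
# reward rate is the per-state reward rate averaged over the stationary
# distribution of the embedded chain, weighted by the mean holding times.
import numpy as np

P = np.array([[0.00, 0.90, 0.10],   # embedded transition probabilities:
              [0.95, 0.00, 0.05],   # 0 = normal, 1 = degraded, 2 = failure/recovery
              [1.00, 0.00, 0.00]])
hold = np.array([3600.0, 600.0, 120.0])   # mean holding times (s), illustrative
reward = np.array([1.0, 0.6, 0.0])        # per-state reward rates, illustrative

# Stationary distribution of the embedded chain: solve pi = pi P, sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

expected_reward_rate = (pi * hold * reward).sum() / (pi * hold).sum()
print(expected_reward_rate)
```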
The CAHPER Fitness-Performance Test Manual: For Boys and Girls 7 to 17 Years of Age.
ERIC Educational Resources Information Center
Canadian Association for Health, Physical Education, and Recreation, Ottawa (Ontario).
Outlined in this manual is Canada's first National Test of Physical Fitness. Each test item is a valid and reliable measure of fitness, simple enough for any teacher not trained in fitness measurement to administer. Each of the six tests measures a different aspect of fitness: (1) the one-minute speed sit-up tests the strength and endurance of the…
Uematsu, Masahiro; Ito, Makiko; Hama, Yukihiro; Inomata, Takayuki; Fujii, Masahiro; Nishio, Teiji; Nakamura, Naoki; Nakagawa, Keiichi
2012-01-01
In this paper, we suggest a new method for verifying the motion of a binary multileaf collimator (MLC) in helical tomotherapy. For this we used a combination of a cylindrical scintillator and a general-purpose camcorder. The camcorder records the light from the scintillator following photon irradiation, which we use to track the motion of the binary MLC. The purpose of this study is to demonstrate the feasibility of this method as a binary MLC quality assurance (QA) tool. First, the verification was performed using a simple binary MLC pattern with a constant leaf open time; second, verification using a binary MLC pattern used in a clinical setting was also performed. For the sinograms of the simple binary MLC patterns, the sensitivity, defined as the fraction of open leaves detected as “open” from the measured light, was 1.000. The specificity, which gives the fraction of closed leaves detected as “closed”, was 0.919. The leaf open error identified by our method was −1.3±7.5%, and 68.6% of the observed leaves were within ±3% relative error. The leaf open error was expressed by the relative errors calculated on the sinogram. For the clinical binary MLC pattern, the sensitivity and specificity were 0.994 and 0.997, respectively. The measurement could be performed with a −3.4±8.0% leaf open error, and 77.5% of the observed leaves were within ±3% relative error. With this method, we can easily verify the motion of the binary MLC, and the measurement unit developed was found to be an effective QA tool. PACS numbers: 87.56.Fc, 87.56.nk PMID:22231222
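A minimal sketch of the agreement metrics reported above, computed from planned and measured sinograms (the arrays here are synthetic placeholders, not the study's data):

```python
# Sketch: sensitivity/specificity of open-leaf detection and relative
# leaf-open-time error for a binary-MLC sinogram comparison (synthetic data).
import numpy as np

planned_open = np.array([[1, 0, 1, 1], [0, 1, 1, 0]], dtype=bool)   # planned sinogram
measured_open = np.array([[1, 0, 1, 1], [0, 1, 0, 0]], dtype=bool)  # from the video

sensitivity = (planned_open & measured_open).sum() / planned_open.sum()
specificity = (~planned_open & ~measured_open).sum() / (~planned_open).sum()

planned_t = np.array([0.20, 0.35, 0.50])    # planned leaf-open times (s)
measured_t = np.array([0.19, 0.36, 0.48])   # measured leaf-open times (s)
relative_error_pct = 100.0 * (measured_t - planned_t) / planned_t
print(sensitivity, specificity, relative_error_pct)
```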
A discrete-time adaptive control scheme for robot manipulators
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.
High-Throughput Density Measurement Using Magnetic Levitation.
Ge, Shencheng; Wang, Yunzhe; Deshler, Nicolas J; Preston, Daniel J; Whitesides, George M
2018-06-20
This work describes the development of an integrated analytical system that enables high-throughput density measurements of diamagnetic particles (including cells) using magnetic levitation (MagLev), 96-well plates, and a flatbed scanner. MagLev is a simple and useful technique with which to carry out density-based analysis and separation of a broad range of diamagnetic materials with different physical forms (e.g., liquids, solids, gels, pastes, gums, etc.); one major limitation, however, is the capacity to perform high-throughput density measurements. This work addresses this limitation by (i) re-engineering the shape of the magnetic fields so that the MagLev system is compatible with 96-well plates, and (ii) integrating a flatbed scanner (and simple optical components) to carry out imaging of the samples that levitate in the system. The resulting system is compatible with both biological samples (human erythrocytes) and nonbiological samples (simple liquids and solids, such as 3-chlorotoluene, cholesterol crystals, glass beads, copper powder, and polymer beads). The high-throughput capacity of this integrated MagLev system will enable new applications in chemistry (e.g., analysis and separation of materials) and biochemistry (e.g., cellular responses under environmental stresses) in a simple and label-free format on the basis of a universal property of all matter, i.e., density.
Scheperle, Rachel A; Abbas, Paul J
2015-01-01
The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.
ERIC Educational Resources Information Center
Allodi, Mara Westling
2013-01-01
The principles of new public management -- market mechanisms, accountability and standards -- have been applied in the education system. These methods are supposed to increase efficiency, but there is also a risk of negative consequences from the services provided if the measures of performance target a reduced range of goals, ignore relevant…
Sparks, Rachel; Salskov, Alex H; Chang, Anita S; Wentworth, Kelly L; Gupta, Pritha P; Staiger, Thomas O; Anawalt, Bradley D
2015-01-01
Complete documentation of patient comorbidities in the medical record is important for clinical care, hospital reimbursement, and quality performance measures. We designed a pocket card reminder and brief educational intervention aimed at hospitalists with the goal of improving documentation of 6 common comorbidities present on admission: coagulation abnormalities, metastatic cancer, anemia, fluid and electrolyte abnormalities, malnutrition, and obesity. Two internal medicine inpatient teams led by 10 hospitalist physicians at an academic medical center received the educational intervention and pocket card reminder (n = 520 admissions). Two internal medicine teams led by nonhospitalist physicians served as a control group (n = 590 admissions). Levels of documentation of 6 common comorbidities, expected length of stay, and expected mortality were measured at baseline and during the 9-month study period. The intervention was associated with increased documentation of anemia, fluid and electrolyte abnormalities, malnutrition, and obesity in the intervention group, both compared to baseline and compared to the control group during the study period. The expected length of stay increased in the intervention group during the study period. A simple educational intervention and pocket card reminder were associated with improved documentation and hospital quality measures at an academic medical center.
Direct thrust measurements and modelling of a radio-frequency expanding plasma thruster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lafleur, T.; Charles, C.; Boswell, R. W.
2011-08-15
It is shown analytically that the thrust from a simple plasma thruster (in the absence of a magnetic field) is given by the maximum upstream electron pressure, even if the plasma diverges downstream. Direct thrust measurements of a thruster are then performed using a pendulum thrust balance and a laser displacement sensor. A maximum thrust of about 2 mN is obtained at 700 W for a thruster length of 17.5 cm and a flow rate of 0.9 mg/s, while a larger thrust of 4 mN is obtained at a similar power for a length of 9.5 cm and a flow rate of 1.65 mg/s. The measured thrusts are in good agreement with the maximum upstream electron pressure found from measurements of the plasma parameters, and in fair agreement with a simple global approach used to model the thruster.
Assessment of cell concentration and viability of isolated hepatocytes using flow cytometry.
Wigg, Alan J; Phillips, John W; Wheatland, Loretta; Berry, Michael N
2003-06-01
The assessment of cell concentration and viability of freshly isolated hepatocyte preparations has traditionally been performed using manual counting with a Neubauer counting chamber and staining for trypan blue exclusion. Despite the simple and rapid nature of this assessment, concerns about the accuracy of these methods exist. Simple flow cytometry techniques that determine cell concentration and viability are available, yet surprisingly they have not been extensively used or validated with isolated hepatocyte preparations. We therefore investigated the use of flow cytometry with TRUCOUNT Tubes and propidium iodide staining to measure the cell concentration and viability of isolated rat hepatocytes in suspension. Analysis using TRUCOUNT Tubes provided more accurate and reproducible measurement of cell concentration than manual cell counting. Hepatocyte viability, assessed using propidium iodide, correlated more closely than did trypan blue exclusion with all indicators of hepatocyte integrity and function measured (lactate dehydrogenase leakage, cytochrome P450 content, cellular ATP concentration, ammonia and lactate removal, urea and albumin synthesis). We conclude that flow cytometry techniques can be used to measure the cell concentration and viability of isolated hepatocyte preparations. The techniques are simple, rapid, and more accurate than manual cell counting and trypan blue staining, and the results are not affected by protein-containing media.
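The bead-based count and the PI viability reduce to simple ratios; the sketch below shows the standard TRUCOUNT-style arithmetic (gating details and all numbers are illustrative, not taken from the paper):

```python
# Sketch of the bead-based absolute count and propidium-iodide viability
# calculations (standard TRUCOUNT-style arithmetic; values are illustrative).

def cells_per_ul(cell_events, bead_events, beads_per_tube, sample_volume_ul):
    """Absolute concentration from the ratio of cell events to bead events."""
    return (cell_events / bead_events) * (beads_per_tube / sample_volume_ul)

def viability(pi_negative_events, total_cell_events):
    """Fraction of cells excluding propidium iodide (membrane-intact)."""
    return pi_negative_events / total_cell_events

print(cells_per_ul(20000, 5000, 50000, 100.0))  # -> 2000 cells/uL
print(viability(18500, 20000))                  # -> 0.925
```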
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was derived from a basic postulate of a linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg(-1) friction loads. Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasized the importance of correcting power in studies of anaerobic power that do not include inertia, and also the interest of this simple post-hoc method.
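The relation underlying the correction is that instantaneous power on a friction-loaded flywheel is the friction term plus the flywheel kinetic-energy term; the paper estimates the inertial term post hoc from a few summary quantities, whereas the sketch below simply evaluates the full relation on an illustrative velocity trace (all values are placeholders):

```python
# Sketch of the relation the correction accounts for: on a friction-loaded
# ergometer, power = friction power + rate of change of flywheel kinetic
# energy. Inertia, radius, friction force, and the velocity trace are
# illustrative placeholders, not the study's values.
import numpy as np

I = 0.9      # flywheel moment of inertia (kg m^2)
r = 0.26     # flywheel radius (m)
F = 60.0     # friction force (N)

t = np.linspace(0.0, 6.0, 601)
v = 12.0 * (1.0 - np.exp(-t / 1.5))                  # flywheel rim speed (m/s)
power = F * v + (I / r**2) * v * np.gradient(v, t)   # friction + inertial term

print(power.max())    # peak power including flywheel inertia
print((F * v).max())  # peak power neglecting inertia (underestimates it)
```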
Determination of high temperature strains using a PC based vision system
NASA Astrophysics Data System (ADS)
McNeill, Stephen R.; Sutton, Michael A.; Russell, Samuel S.
1992-09-01
With the widespread availability of video digitizers and cheap personal computers, the use of computer vision as an experimental tool is becoming commonplace. These systems are being used to make a wide variety of measurements that range from simple surface characterization to velocity profiles. The Sub-Pixel Digital Image Correlation technique has been developed to measure the full-field displacement and gradients of the surface of an object subjected to a driving force. The technique has shown its utility by measuring the deformation and movement of objects in applications that range from simple translation to fluid velocity profiles to crack tip deformation of solid rocket fuel. This technique has recently been improved and used to measure the surface displacement field of an object at high temperature. The development of a PC based Sub-Pixel Digital Image Correlation system has yielded an accurate and easy-to-use system for measuring surface displacements and gradients. Experiments have been performed to show the system is viable for measuring thermal strain.
Wang, Hui; Liu, Tao; Qiu, Quan; Ding, Peng; He, Yan-Hui; Chen, Wei-Qing
2015-01-23
This study aimed to develop and validate a simple risk score for detecting individuals with impaired fasting glucose (IFG) among the Southern Chinese population. A sample of participants aged ≥20 years and without known diabetes from the 2006-2007 Guangzhou diabetes cross-sectional survey was used to develop separate risk scores for men and women. The participants completed a self-administered structured questionnaire and underwent simple clinical measurements. The risk scores were developed by multiple logistic regression analysis. External validation was performed based on three other studies: the 2007 Zhuhai rural population-based study, the 2008-2010 Guangzhou diabetes cross-sectional study and the 2007 Tibet population-based study. Performance of the scores was measured with the Hosmer-Lemeshow goodness-of-fit test and ROC c-statistic. Age, waist circumference, body mass index and family history of diabetes were included in the risk score for both men and women, with the additional factor of hypertension for men. The ROC c-statistic was 0.70 for both men and women in the derivation samples. Risk scores of ≥28 for men and ≥18 for women showed respective sensitivity, specificity, positive predictive value and negative predictive value of 56.6%, 71.7%, 13.0% and 96.0% for men and 68.7%, 60.2%, 11% and 96.0% for women in the derivation population. The scores performed comparably with the Zhuhai rural sample and the 2008-2010 Guangzhou urban samples but poorly in the Tibet sample. The performance of pre-existing USA, Shanghai, and Chengdu risk scores was poorer in our population than in their original study populations. The results suggest that the developed simple IFG risk scores can be generalized in Guangzhou city and nearby rural regions and may help primary health care workers to identify individuals with IFG in their practice.
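As a rough illustration of how such a score can be built and checked, the sketch below fits a logistic regression, rounds the coefficients into integer points, and reports the ROC c-statistic of the resulting simple score; the scaling convention and variable names are assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def build_simple_risk_score(X, y):
    """X: predictor array (e.g. age, waist circumference, BMI, family history);
    y: IFG status (0/1). Returns integer points per predictor, the summed score,
    and its ROC c-statistic. Illustrative scoring convention only."""
    betas = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    points = np.round(betas / np.abs(betas).min()).astype(int)  # smallest |beta| -> 1 point
    score = X @ points
    return points, score, roc_auc_score(y, score)
```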
Nowosielski, Robert J; Trick, Lana M; Toxopeus, Ryan
2018-02-01
Distracted driving (driving while performing a secondary task) causes many collisions. Most research on distracted driving has focused on operating a cell-phone, but distracted driving can include eating while driving, conversing with passengers or listening to music or audiobooks. Although the research has focused on the deleterious effects of distraction, there may be situations where distraction improves driving performance. Fatigue and boredom are also associated with collision risk and it is possible that secondary tasks can help alleviate the effects of fatigue and boredom. Furthermore, it has been found that individuals with high levels of executive functioning as measured by the OSPAN (Operation Span) task show better driving while multitasking. In this study, licensed drivers were tested in a driving simulator (a car body surrounded by screens) that simulated simple or complex roads. Road complexity was manipulated by increasing traffic, scenery, and the number of curves in the drive. Participants either drove, or drove while listening to an audiobook. Driving performance was measured in terms of braking response time to hazards (HRT): the time required to brake in response to pedestrians or vehicles that suddenly emerged from the periphery into the path of the vehicle, speed, standard deviation of speed, standard deviation of lateral position (SDLP). Overall, braking times to hazards were higher on the complex drive than the simple one, though the effects of secondary tasks such as audiobooks were especially deleterious on the complex drive. In contrast, on the simple drive, driving while listening to an audiobook lead to faster HRT. We found evidence that individuals with high OSPAN scores had faster HRTs when listening to an audiobook. These results suggest that there are environmental and individual factors behind difference in the allocation of attention while listening to audiobooks while driving. Copyright © 2017 Elsevier Ltd. All rights reserved.
Consistency of Lower-Body Dimensions Using Surface Landmarks and Simple Measurement Tools.
Caia, Johnpaul; Weiss, Lawrence W; Chiu, Loren Z F; Schilling, Brian K; Paquette, Max R
2016-09-01
Caia, J, Weiss, LW, Chiu, LZF, Schilling, BK, and Paquette, MR. Consistency of lower-body dimensions using surface landmarks and simple measurement tools. J Strength Cond Res 30(9): 2600-2608, 2016-Body dimensions may influence various types of physical performance. This study was designed to establish the reliability and precision of bilateral lower-body dimensions using surface anatomic landmarks and either sliding calipers or goniometry. Fifty university students (25 men and 25 women) were measured on 2 separate occasions separated by 48 or 72 hours. A small digital caliper was used to acquire longitudinal dimensions of the feet, whereas a larger broad-blade caliper was used to measure lower-limb, hip, and pelvic dimensions. Quadriceps angle (Q-angle) was determined through surface goniometry. Data for all foot and lower-limb dimensions were both reliable and precise (intraclass correlation coefficient (ICC) ≥0.72, SEM 0.1-0.5 cm). Measures of Q-angle were also reliable and precise (ICC ≥0.85, SEM 0.2-0.4°). Findings from this investigation demonstrate that lower-body dimensions may be reliably and precisely measured through simple practical tests, when surface anatomic landmarks and standardized procedures are used. Although intertester reliability remains to be established, meticulous adherence to specific measurement protocols is likely to yield viable output for lower-body dimensions when more sophisticated methods are unavailable or inappropriate.
Helicopter Pilot Performance for Discrete-maneuver Flight Tasks
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Bourne, S. M.; Hindson, W. S.
1984-01-01
This paper describes a current study of several basic helicopter flight maneuvers. The database consists of in-flight measurements from instrumented helicopters flown by experienced pilots. The analysis technique is simple enough to apply without automatic data processing, and the results can be used to build quantitative math models of the flight task and some aspects of the pilot's control strategy. In addition to describing the performance measurement technique, some results are presented which define the aggressiveness and amplitude of maneuvering for several lateral maneuvers, including turns and sidesteps.
Evaluating firms' R&D performance using best worst method.
Salimi, Negin; Rezaei, Jafar
2018-02-01
Since research and development (R&D) is the most critical determinant of the productivity, growth and competitive advantage of firms, measuring R&D performance has become the core of attention of R&D managers, and an extensive body of literature has examined and identified different R&D measurements and determinants of R&D performance. However, measuring R&D performance and assigning the same level of importance to different R&D measures, which is the common approach in existing studies, can oversimplify the R&D measuring process, which may result in misinterpretation of the performance and consequently fallacy R&D strategies. The aim of this study is to measure R&D performance taking into account the different levels of importance of R&D measures, using a multi-criteria decision-making method called Best Worst Method (BWM) to identify the weights (importance) of R&D measures and measure the R&D performance of 50 high-tech SMEs in the Netherlands using the data gathered in a survey among SMEs and from R&D experts. The results show how assigning different weights to different R&D measures (in contrast to simple mean) results in a different ranking of the firms and allow R&D managers to formulate more effective strategies to improve their firm's R&D performance by applying knowledge regarding the importance of different R&D measures. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
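The practical point here, that unequal weights can reorder firms relative to a simple mean, is easy to demonstrate; the toy score matrix and weights below are invented for illustration (the weights stand in for BWM output rather than being computed by BWM itself).

```python
import numpy as np

scores = np.array([[0.8, 0.2, 0.5],    # firm A on three hypothetical R&D measures
                   [0.4, 0.9, 0.6],    # firm B
                   [0.6, 0.6, 0.6]])   # firm C
weights = np.array([0.6, 0.1, 0.3])    # assumed importances (as BWM would provide)

rank_weighted = np.argsort(-(scores @ weights))   # A, C, B
rank_mean = np.argsort(-scores.mean(axis=1))      # B, C, A
print(rank_weighted, rank_mean)                   # same data, different rankings
```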
Herler, Jürgen; Dirnwöber, Markus
2011-10-31
Estimating the impacts of global and local threats on coral reefs requires monitoring reef health and measuring coral growth and calcification rates at different time scales. This has traditionally been mostly performed in short-term experimental studies in which coral fragments were grown in the laboratory or in the field but measured ex situ. Practical techniques in which growth and measurements are performed over the long term in situ are rare. Apart from photographic approaches, weight increment measurements have also been applied. Past buoyant weight measurements under water involved a complicated and little-used apparatus. We introduce a new method that combines previous field and laboratory techniques to measure the buoyant weight of entire, transplanted corals under water. This method uses an electronic balance fitted into an acrylic glass underwater housing and placed atop of an acrylic glass cube. Within this cube, corals transplanted onto artificial bases can be attached to the balance and weighed at predetermined intervals while they continue growth in the field. We also provide a set of simple equations for the volume and weight determinations required to calculate net growth rates. The new technique is highly accurate: low error of weight determinations due to variation of coral density (< 0.08%) and low standard error (< 0.01%) for repeated measurements of the same corals. We outline a transplantation technique for properly preparing corals for such long-term in situ experiments and measurements.
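The underlying physics is the standard Archimedes relation between underwater (buoyant) weight and dry skeletal weight; the densities below are typical literature values for seawater and aragonite, not the equation set provided in the paper.

```python
def dry_weight_from_buoyant(w_buoyant_kg, rho_water=1025.0, rho_aragonite=2930.0):
    """Convert buoyant (underwater) weight to dry skeletal weight via Archimedes'
    principle: W_dry = W_buoyant / (1 - rho_water / rho_skeleton). Illustrative only."""
    return w_buoyant_kg / (1.0 - rho_water / rho_aragonite)

print(dry_weight_from_buoyant(0.050))  # ~0.077 kg dry weight for a 50 g buoyant weight
```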
Siegfried, Konrad; Hahn-Tomer, Sonja; Koelsch, Andreas; Osterwalder, Eva; Mattusch, Juergen; Staerk, Hans-Joachim; Meichtry, Jorge M; De Seta, Graciela E; Reina, Fernando D; Panigatti, Cecilia; Litter, Marta I; Harms, Hauke
2015-05-21
Numerous articles have reported the occurrence of arsenic in drinking water in Argentina, and the resulting health effects in severely affected regions of the country. Arsenic in drinking water in Argentina is largely naturally occurring due to elevated background content of the metalloid in volcanic sediments, although, in some regions, mining can contribute. While the origin of arsenic release has been discussed extensively, the problem of drinking water contamination has not yet been solved. One key step in progress towards mitigation of problems related with the consumption of As-containing water is the availability of simple detection tools. A chemical test kit and the ARSOlux biosensor were evaluated as simple analytical tools for field measurements of arsenic in the groundwater of Rafaela (Santa Fe, Argentina), and the results were compared with ICP-MS and HPLC-ICP-MS measurements. A survey of the groundwater chemistry was performed to evaluate possible interferences with the field tests. The results showed that the ARSOlux biosensor performed better than the chemical field test, that the predominant species of arsenic in the study area was arsenate and that arsenic concentration in the studied samples had a positive correlation with fluoride and vanadium, and a negative one with calcium and iron.
Individual Differences in Dual Task Performance.
1981-06-10
…second experimental series, we compared the ability to detect visual and auditory targets in single…used performance on a simple secondary task executed during an easy primary task to predict performance on a more difficult version of the same…process concepts developed by cognitive psychologists. In this laboratory, we adapted a number of experimental tasks to yield individual measures of the…
Resting sympatho-vagal balance is related to 10 km running performance in master endurance athletes.
Cataldo, Angelo; Bianco, Antonino; Paoli, Antonio; Cerasola, Dario; Alagna, Saverio; Messina, Giuseppe; Zangla, Daniele; Traina, Marcello
2018-01-12
Relationships between heart rate recovery after exercise (HRR), baseline heart rate variability measures (HRV), and time to perform a 10 km running trial (t10Km) were evaluated in "master" endurance athletes to assess whether the measured indexes may be useful for monitoring the training status of the athletes. Ten "master" endurance athletes, aged 40-60 years, were recruited. After baseline measures of HRV, the athletes performed a graded maximal test on a treadmill, and HRR was measured at 1 and 2 minutes into recovery. Subsequently they performed a 10 km running trial, and t10Km was related to HRV and HRR indexes. The time to perform the 10 km running trial was significantly correlated with baseline HRV indexes. No correlation was found between t10Km and HRR. Baseline HRV measures, but not HRR, were significantly correlated with 10 km running performance in "master" athletes. Enhanced parasympathetic function at rest appears to be a precondition for better 10 km running performance. HRV can provide simple and useful measurements for monitoring the training status of athletes and their physical condition in the lead-up to a competition.
NASA Technical Reports Server (NTRS)
Carr, James L.; Madani, Houria
2007-01-01
Geostationary Operational Environmental Satellite (GOES) Image Navigation and Registration (INR) performance is specified at the 3-sigma level, meaning that 99.7% of a collection of individual measurements must comply with specification thresholds. Landmarks are measured by the Replacement Product Monitor (RPM), part of the operational GOES ground system, to assess INR performance and to close the INR loop. The RPM automatically discriminates between valid and invalid measurements, enabling it to run without human supervision. In general, this screening is reliable, but a small population of invalid measurements will be falsely identified as valid. Even a small population of invalid measurements can create problems when assessing performance at the 3-sigma level. This paper describes an additional layer of quality control whereby landmarks of the highest quality ("platinum") are identified by their self-consistency. The platinum screening criteria are not simple statistical outlier tests against sigma values in populations of INR errors. In-orbit INR performance metrics for GOES-12 and GOES-13 are presented using the platinum landmark methodology.
Wall proximity corrections for hot-wire readings in turbulent flows
NASA Technical Reports Server (NTRS)
Hebbar, K. S.
1980-01-01
This note describes some details of recent (successful) attempts of wall proximity corrections for hot-wire measurements performed in a three-dimensional incompressible turbulent boundary layer. A simple and quite satisfactory method of estimating wall proximity effects on hot-wire readings is suggested.
Metadata: Pure and Simple, or Is It?
ERIC Educational Resources Information Center
Chalmers, Marilyn
2002-01-01
Discusses issues concerning metadata in Web pages based on experiences in a vocational education center library in Queensland (Australia). Highlights include Dublin Core elements; search engines; controlled vocabulary; performance measurement to assess usage patterns and provide quality control over the vocabulary; and considerations given the…
Visual conspicuity: a new simple standard, its reliability, validity and applicability.
Wertheim, A H
2010-03-01
A general standard for quantifying conspicuity is described. It derives from a simple and easy method to quantitatively measure the visual conspicuity of an object. The method stems from the theoretical view that the conspicuity of an object is not a property of that object, but describes the degree to which the object is perceptually embedded in, i.e. laterally masked by, its visual environment. First, three variations of a simple method to measure the strength of such lateral masking are described and empirical evidence for its reliability and its validity is presented, as are several tests of predictions concerning the effects of viewing distance and ambient light. It is then shown how this method yields a conspicuity standard, expressed as a number, which can be made part of a rule of law, and which can be used to test whether or not, and to what extent, the conspicuity of a particular object, e.g. a traffic sign, meets a predetermined criterion. An additional feature is that, when used under different ambient light conditions, the method may also yield an index of the amount of visual clutter in the environment. Taken together the evidence illustrates the methods' applicability in both the laboratory and in real-life situations. STATEMENT OF RELEVANCE: This paper concerns a proposal for a new method to measure visual conspicuity, yielding a numerical index that can be used in a rule of law. It is of importance to ergonomists and human factor specialists who are asked to measure the conspicuity of an object, such as a traffic or rail-road sign, or any other object. The new method is simple and circumvents the need to perform elaborate (search) experiments and thus has great relevance as a simple tool for applied research.
The Sun lightens and enlightens: high noon shadow measurements
NASA Astrophysics Data System (ADS)
Babović, Vukota; Babović, Miloš
2014-11-01
Contemporary physicists and science experts include Eratosthenes’ measurement of the Earth's circumference as one of the most beautiful experiments ever performed in physics. Upon revisiting this famous event in the history of science, we find that some interesting generalizations are possible. On the basis of a rather simple model of the Earth's insolation, we have managed, using some advanced mathematics, to derive a new formula for determining the length of the year, generalized in such a way that it can be used for all planets with sufficiently small eccentricity of the orbit and for all locations with daily sunrises and sunsets. The practical technique that our formula offers is simple to perform, entirely Eratosthenian in spirit, and only requires the angle of the noonday sun to be found on successive days around an equinox. Our results show that this kind of approach to the problem of the Earth's insolation deserves to be included in university courses, especially those which cover astronomy and environmental physics.
The Critical Power Model as a Potential Tool for Anti-doping
Puchowicz, Michael J.; Mizelman, Eliran; Yogev, Assaf; Koehle, Michael S.; Townsend, Nathan E.; Clarke, David C.
2018-01-01
Existing doping detection strategies rely on direct and indirect biochemical measurement methods focused on detecting banned substances, their metabolites, or biomarkers related to their use. However, the goal of doping is to improve performance, and yet evidence from performance data is not considered by these strategies. The emergence of portable sensors for measuring exercise intensities and of player tracking technologies may enable the widespread collection of performance data. How these data should be used for doping detection is an open question. Herein, we review the basis by which performance models could be used for doping detection, followed by critically reviewing the potential of the critical power (CP) model as a prototypical performance model that could be used in this regard. Performance models are mathematical representations of performance data specific to the athlete. Some models feature parameters with physiological interpretations, changes to which may provide clues regarding the specific doping method. The CP model is a simple model of the power-duration curve and features two physiologically interpretable parameters, CP and W′. We argue that the CP model could be useful for doping detection mainly based on the predictable sensitivities of its parameters to ergogenic aids and other performance-enhancing interventions. However, our argument is counterbalanced by the existence of important limitations and unresolved questions that need to be addressed before the model is used for doping detection. We conclude by providing a simple worked example showing how it could be used and propose recommendations for its implementation. PMID:29928234
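A minimal sketch of the two-parameter model referred to here, P(t) = CP + W'/t, fitted by linear regression of power against 1/t; the durations and powers are invented for illustration and are not the worked example from the paper.

```python
import numpy as np

t = np.array([120.0, 300.0, 600.0, 1200.0])   # s, times to exhaustion (made up)
p = np.array([430.0, 360.0, 330.0, 315.0])    # W, corresponding sustained powers

A = np.column_stack([np.ones_like(t), 1.0 / t])          # P = CP + W' * (1/t)
(cp, w_prime), *_ = np.linalg.lstsq(A, p, rcond=None)
print(f"CP ~ {cp:.0f} W, W' ~ {w_prime / 1000:.1f} kJ")  # the two interpretable parameters
```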
NASA Astrophysics Data System (ADS)
Vilão, Rui C.; Melo, Santino L. S.
2014-12-01
We address the production of musical tones by a simple musical instrument of the Brazilian tradition: the berimbau-de-barriga. The vibration physics of the string and of the air mass inside the gourd are reviewed. Straightforward measurements of an actual berimbau, which illustrate the basic physical phenomena, are performed using a PC-based "soundcard oscilloscope." The inharmonicity of the string and the role of the gourd are discussed in the context of known results in the psychoacoustics of pitch definition.
Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J
2016-03-01
Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and the two orthopaedists and intermethod reliability between the two techniques were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation [SD] 4.87 mm) and 15.4 mm (SD 5.41 mm) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), the orthopaedists (ICC 0.978; CCC 0.976), and the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared with the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of evidence: II.
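For reference, Lin's concordance correlation coefficient used for the agreement statistics can be computed directly from paired measurements; this is the general formula, not code from the study.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two sets of paired
    measurements (e.g. two raters or two TT-TG measurement techniques)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```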
Farinati, F; Cardin, F; Di Mario, F; Sava, G A; Piccoli, A; Costa, F; Penon, G; Naccarato, R
1987-08-01
The endoscopic diagnosis of chronic atrophic gastritis is often underestimated, and most of the procedures adopted to increase diagnostic accuracy are time consuming and complex. In this study, we evaluated the usefulness of the determination of gastric juice pH by means of litmus paper. Values obtained by this method correlate well with gastric acid secretory capacity as measured by gastric acid analysis (r = -0.64, p less than 0.001) and are not affected by the presence of bile. Gastric juice pH determination increases sensitivity and other diagnostic parameters such as performance index (Youden J test), positive predictive value, and post-test probability difference by 50%. Furthermore, the negative predictive value is very high, the probability of missing a patient with chronic atrophic gastritis with this simple method being 2% for fundic and 15% for antral atrophic change. We conclude that gastric juice pH determination, which substantially increases diagnostic accuracy and is very simple to perform, should be routinely adopted.
Little bits of diamond: Optically detected magnetic resonance of nitrogen-vacancy centers
NASA Astrophysics Data System (ADS)
Zhang, Haimei; Belvin, Carina; Li, Wanyi; Wang, Jennifer; Wainwright, Julia; Berg, Robbie; Bridger, Joshua
2018-03-01
We give instructions for the construction and operation of a simple apparatus for performing optically detected magnetic resonance measurements on diamond samples containing high concentrations of nitrogen-vacancy (NV) centers. Each NV center has a spin degree of freedom that can be manipulated and monitored by a combination of visible and microwave radiation. We observe Zeeman shifts in the presence of small external magnetic fields and describe a simple method to optically measure magnetic field strengths with a spatial resolution of several microns. The activities described are suitable for use in an advanced undergraduate lab course, powerfully connecting core quantum concepts to cutting edge applications. An even simpler setup, appropriate for use in more introductory settings, is also presented.
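The field measurement rests on the linear Zeeman effect of the NV ground state: the two ODMR resonances split by 2·(28 GHz/T)·B along the NV axis. The snippet below inverts that relation; it is a simplified sketch that ignores strain and off-axis field components.

```python
GAMMA_E_HZ_PER_T = 28.0e9   # NV gyromagnetic ratio, ~28 GHz per tesla

def field_from_odmr_splitting(delta_f_hz):
    """Axial magnetic field from the splitting between the two ODMR dips,
    delta_f = 2 * gamma_e * B (simplified: strain and transverse fields ignored)."""
    return delta_f_hz / (2.0 * GAMMA_E_HZ_PER_T)

print(field_from_odmr_splitting(5.6e6) * 1e4)   # ~1 gauss for a 5.6 MHz splitting
```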
Evaluating the beam quality of double-cladding fiber lasers in applications.
Yan, Ping; Wang, Xuejiao; Gong, Mali; Xiao, Qirong
2016-08-10
We put forward a new βFL factor, which is used exclusively in fiber lasers and is suitable to assess beam quality and choose the LP01 mode as the new suitable ideal beam. We present a new simple measurement method and verify the reasonability of the βFL factor in experiment in a 20/400 μm fiber laser. Furthermore, we use the βFL factor to evaluate the beam quality of a 3-kW-level fiber laser. It can be concluded that βFL is a key factor not only for assessing the performance of the high-power fiber laser that is our main focus, but also for the simple measurement.
Proxy case mix measures for nursing homes.
Cyr, A B
1983-01-01
Nursing home case mix measures are needed for the same purposes that spurred the intensive development of case mix measures for hospitals: management and planning decisions, organizational performance research, and reimbursement policy analysis. This paper develops and validates a pair of complementary measures that are simple to compute, are easy to interpret, and use generally available data. They are not, however, definitive. A secondary purpose of this paper is thus to galvanize the development of data bases that will give rise to superior case mix measures for nursing homes.
Why the Long Face? The Mechanics of Mandibular Symphysis Proportions in Crocodiles
Walmsley, Christopher W.; Smits, Peter D.; Quayle, Michelle R.; McCurry, Matthew R.; Richards, Heather S.; Oldfield, Christopher C.; Wroe, Stephen; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Background Crocodilians exhibit a spectrum of rostral shape from long snouted (longirostrine), through to short snouted (brevirostrine) morphologies. The proportional length of the mandibular symphysis correlates consistently with rostral shape, forming as much as 50% of the mandible’s length in longirostrine forms, but 10% in brevirostrine crocodilians. Here we analyse the structural consequences of an elongate mandibular symphysis in relation to feeding behaviours. Methods/Principal Findings Simple beam and high resolution Finite Element (FE) models of seven species of crocodile were analysed under loads simulating biting, shaking and twisting. Using beam theory, we statistically compared multiple hypotheses of which morphological variables should control the biomechanical response. Brevi- and mesorostrine morphologies were found to consistently outperform longirostrine types when subject to equivalent biting, shaking and twisting loads. The best predictors of performance for biting and twisting loads in FE models were overall length and symphyseal length respectively; for shaking loads, symphyseal length and a multivariate measurement of shape (PC1, which is strongly but not exclusively correlated with symphyseal length) were equally good predictors. Linear measurements were better predictors than multivariate measurements of shape in biting and twisting loads. For both biting and shaking loads, but not for twisting, simple beam models agree with the best performance predictors in FE models. Conclusions/Significance Combining beam and FE modelling allows a priori hypotheses about the importance of morphological traits on biomechanics to be statistically tested. Short mandibular symphyses perform well under loads used for feeding upon large prey, but elongate symphyses incur high strains under equivalent loads, underlining the structural constraints to prey size in the longirostrine morphotype. The biomechanics of the crocodilian mandible are largely consistent with beam theory and can be predicted from simple morphological measurements, suggesting that crocodilians are a useful model for investigating the palaeobiomechanics of other aquatic tetrapods. PMID:23342027
NASA Astrophysics Data System (ADS)
Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.
2018-01-01
We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (z_sort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳ 4 h⁻¹ Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
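A compact sketch of the core SORT step for a single sub-volume, under the constraints stated above: draw recovered redshifts from the reference dN/dz and assign them so that the rank order of the original uncertain redshifts is preserved. Function and variable names are ours, not from the paper's code.

```python
import numpy as np

def sort_recover(z_uncertain, z_reference, rng=None):
    """One pencil-beam sub-volume: sample the reference dN/dz and reassign the
    draws to the uncertain measurements in rank order (stochastic ordering)."""
    rng = rng or np.random.default_rng(0)
    draws = np.sort(rng.choice(z_reference, size=len(z_uncertain), replace=True))
    order = np.argsort(z_uncertain)       # rank order of the noisy redshifts
    recovered = np.empty_like(draws)
    recovered[order] = draws              # lowest-ranked z gets the smallest draw, etc.
    return recovered
```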
EIT Noise Resonance Power Broadening: a probe for coherence dynamics
NASA Astrophysics Data System (ADS)
Crescimanno, Michael; O'Leary, Shannon; Snider, Charles
2012-06-01
EIT noise correlation spectroscopy holds promise as a simple, robust method for performing high resolution spectroscopy used in devices as diverse as magnetometers and clocks. One useful feature of these noise correlation resonances is that they do not power broaden with the EIT window. We report on measurements of the eventual power broadening (at higher optical powers) of these resonances and a simple, quantitative theoretical model that relates the observed power broadening slope with processes such as two-photon detuning gradients and coherence diffusion. These processes reduce the ground state coherence relative to that of a homogeneous system, and thus the power broadening slope of the EIT noise correlation resonance may be a simple, useful probe for coherence dynamics.
May, S L; May, W A; Bourdoux, P P; Pino, S; Sullivan, K M; Maberly, G F
1997-05-01
The measurement of urinary iodine in population-based surveys provides a biological indicator of the severity of iodine-deficiency disorders. We describe the steps performed to validate a simple, inexpensive, manual urinary iodine acid digestion method, and compare the results using this method with those of other urinary iodine methods. Initially, basic performance characteristics were evaluated: the average recovery of added iodine was 100.4 +/- 8.7% (mean +/- SD), within-assay precision (CV) over the assay range 0-0.95 μmol/L (0-12 μg/dL) was < 6%, between-assay precision over the same range was < 12%, and assay sensitivity was 0.05 μmol/L (0.6 μg/dL). There were no apparent effects on the method by thiocyanate, a known interfering substance. In a comparison with five other methods performed in four different laboratories, samples were collected to test the method performance over a wide range of urinary iodine values (0.04-3.7 μmol/L, or 0.5-47 μg/dL). There was a high correlation between all methods and the interpretation of the results was consistent. We conclude that the simple, manual acid digestion method is suitable for urinary iodine analysis.
Hands-on Scorecarding in the Higher Education Sector
ERIC Educational Resources Information Center
Scholey, Cam; Armitage, Howard
2006-01-01
The balanced scorecard, introduced by Robert Kaplan and David Norton, has evolved from an improved performance measurement system to an integrated strategic planning, implementation, and scorecarding system. Simple yet powerful second-generation balanced scorecards depict the organization's strategy through a series of strategy maps and scorecards…
Information Processing in Memory Tasks.
ERIC Educational Resources Information Center
Johnston, William A.
The intensity of information processing engendered in different phases of standard memory tasks was examined in six experiments. Processing intensity was conceptualized as system capacity consumed, and was measured via a divided-attention procedure in which subjects performed a memory task and a simple reaction-time (RT) task concurrently. The…
Food Cravings Consume Limited Cognitive Resources
ERIC Educational Resources Information Center
Kemps, Eva; Tiggemann, Marika; Grigg, Megan
2008-01-01
Using Tiffany's (1990) cognitive model of drug use and craving as a theoretical basis, the present experiments investigated whether cravings for food expend limited cognitive resources. Cognitive performance was assessed by simple reaction time (Experiment 1) and an established measure of working memory capacity, the operation span task…
Polymer optical fiber tapering using chemical solvent and polishing
NASA Astrophysics Data System (ADS)
Supian, L. S.; Syuhaimi Ab-Rahman, Mohd; Arsad, Norhana
2017-11-01
A method for developing a polymer optical fiber (POF) directional coupler is introduced in which the initial procedure uses a chemical solvent to remove the cladding and bare the core, so that the unclad centre of the fiber can be aligned with a similarly prepared fiber to form a coupler. The process is safe, simple, inexpensive and requires little operator skill. The etched fiber offers improved performance for various POF devices, i.e. couplers and sensors. Instead of relying only on silica or glass fiber, POF can now be used as an alternative to improve network performance in short-distance communication systems. The measurement parameters laid out offer promising outcomes. However, the intended coupler has yet to be realized; deeper research and further experiments are needed to develop a simple but optimally performing coupler that can be used for various applications.
Kraemer, D; Chen, G
2014-02-01
Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier Law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
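One way to reduce data from such a differential measurement, consistent with the description above: with the heater held at constant temperature the parasitic loss is constant, so the slope of heater power versus applied temperature difference gives the sample conductance, and k follows from the sample geometry. All numbers below are invented.

```python
import numpy as np

dT = np.array([2.0, 4.0, 6.0, 8.0])            # K, heater-to-sink temperature differences
Q  = np.array([0.008, 0.014, 0.020, 0.026])    # W, heater power at each steady state

G, Q_loss = np.polyfit(dT, Q, 1)               # slope = sample conductance, intercept = parasitic loss
L, A = 2e-3, 4e-6                              # m, m^2: hypothetical sample length and cross-section
k = G * L / A                                  # Fourier's law
print(f"k = {k:.2f} W/(m K), parasitic loss = {Q_loss * 1e3:.1f} mW")
```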
Evaluating the Recovery Curve for Clinically Assessed Reaction Time After Concussion.
Del Rossi, Gianluca
2017-08-01
A change in reaction time is one of various clinical measures of neurocognitive function that can be monitored after concussion and has been reported to be among the most sensitive indicators of cognitive impairment. Objective: To determine the timeline for clinically assessed simple reaction time to return to baseline after a concussion in high school athletes. Design: Observational study. Setting: Athletic training room. Participants: Twenty-one high school-aged volunteers. Participants completed 8 trials of the ruler-drop test during each session. Along with baseline measures, a total of 6 additional test sessions were completed over the course of 4 weeks after a concussion (days 3, 7, 10, 14, 21, and 28). The mean reaction times calculated for all participants from each of the 7 test sessions were analyzed to assess the change in reaction time over the 7 time intervals. Results: After a concussion and compared with baseline, simple reaction time was, on average, 26 milliseconds slower at 48 to 72 hours postinjury (P < .001), almost 18 milliseconds slower on day 7 (P < .001), and about 9 milliseconds slower on day 10 (P < .001). Simple reaction time did not return to baseline levels until day 14 postinjury. Conclusions: Clinically assessed simple reaction time appeared to return to baseline levels within a timeframe that mirrors other measures of cognitive performance (approximately 14 days).
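The ruler-drop test converts catch distance to reaction time through free fall, t = sqrt(2d/g); the snippet below shows that conversion with an illustrative catch distance (not data from the study).

```python
import math

def ruler_drop_reaction_time(distance_m, g=9.81):
    """Reaction time from the distance the ruler falls before being caught."""
    return math.sqrt(2.0 * distance_m / g)

print(ruler_drop_reaction_time(0.20))   # ~0.202 s for a 20 cm catch distance
```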
ERIC Educational Resources Information Center
Oh, Hyeon-Joo; Walker, Michael E.
2007-01-01
This study evaluated (1) whether essay placement (either at the beginning or at the end of the test battery) impacts test-takers' performance on the critical reading, mathematics, and writing multiple choice measures; and (2) whether essay prompt type (either a simple one-line prompt or a prompt including a short passage) affects test-takers'…
Methodological issues in measures of imitative reaction times.
Aicken, Michael D; Wilson, Andrew D; Williams, Justin H G; Mon-Williams, Mark
2007-04-01
Ideomotor (IM) theory suggests that observing someone else perform an action activates an internal motor representation of that behaviour within the observer. Evidence supporting the case for an ideomotor theory of imitation has come from studies that show imitative responses to be faster than the same behavioural measures performed in response to spatial cues. In an attempt to replicate these findings, we manipulated the salience of the visual cue and found that we could reverse the advantage of the imitative cue over the spatial cue. We suggest that participants utilised a simple visuomotor mechanism to perform all aspects of this task, with performance being driven by the relative visual salience of the stimuli. Imitation is a more complex motor skill that would constitute an inefficient strategy for rapid performance.
Scheperle, Rachel A.; Abbas, Paul J.
2014-01-01
Objectives The ability to perceive speech is related to the listener’s ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Design Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every-other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex (ACC) with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel-discrimination and the Bamford-Kowal-Bench Sentence-in-Noise (BKB-SIN) test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. Results All electrophysiological measures were significantly correlated with each other and with speech perception for the mixed-model analysis, which takes into account multiple measures per person (i.e. experimental MAPs). The ECAP measures were the best predictor of speech perception. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech; spectral ACC amplitude was the strongest predictor. Conclusions The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be the most useful for within-subject applications, when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered. PMID:25658746
Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor
NASA Astrophysics Data System (ADS)
Alexander, J. P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M. P.; Flanagan, J. W.; Fontes, E.; Heltsley, B. K.; Lyndaker, A.; Peterson, D. P.; Rider, N. T.; Rubin, D. L.; Seeley, R.; Shanks, J.
2014-12-01
We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e+ and e- beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10 - 100 μm on a turn-by-turn, bunch-by-bunch basis at e± beam energies of 2 - 5 GeV. x-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and thickness of masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances.
Detection of Cherenkov Photons with Multi-Anode Photomultipliers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, H.; Moreno, E.; Murrieta, T.
2006-09-25
The present paper describes the laboratory course given at the X Mexican Workshop on Particles and Fields. We describe the setup and procedure used to measure the Cherenkov circles produced by cosmic muons upon traversal of a simple glass radiator system. The main purpose of this exercise is to introduce the students to work with multi-anode photomultipliers such as the one used for this experiment (Hamamatsu R5900-M64), with which measurements requiring position-sensitive detection of single photons can be successfully performed. We present a short introduction to multi-anode photomultipliers (MAPMT) and describe the setup and the procedure used to measure the response of a MAPMT to a uniform source of light. Finally, we describe the setup and procedure used to measure the Cherenkov circles produced by cosmic muons upon traversal of a simple glass radiator system.
Pireau, Nathalie; De Gheldere, Antoine; Mainard-Simard, Laurence; Lascombes, Pierre; Docquier, Pierre-Louis
2011-04-01
The classical indication for treating a simple bone cyst is usually the risk of fracture, which can be predicted based on three parameters: the bone cyst index, the bone cyst diameter, and the minimal cortical thickness. A retrospective review was carried out based on imaging of 35 simple bone cysts (30 humeral and 5 femoral). The three parameters were measured on standard radiographs, and on T1-weighted and T2-weighted MRI. The measurements were performed by two independent reviewers, and twice by the same reviewer. Kappa values and binary logistic regression were used to assess the ability of the parameters to predict the fracture risk. Inter- and intra-observer agreement was measured. T1-weighted MRI was found to have the best inter- and intraobserver repeatability. The bone cyst index was found to be the best predictor for the risk of fracture.
Abrahamsen, B; Hansen, T B; Høgsberg, I M; Pedersen, F B; Beck-Nielsen, H
1996-01-01
Dual X-ray absorptiometry (DXA) performs noninvasive assessment of bone and soft tissue with high precision. However, soft tissue algorithms assume that 73.2% of the lean body mass is water, a potential source of error in fluid retention. We evaluated DXA (model QDR-2000; Hologic Inc, Waltham, MA), bioelectrical impedance analysis (BIA), and simple anthropometry in 19 patients (9 women and 10 men, mean age 46 y) before and after hemodialysis, removing 0.9-4.3 L (x: 2.8L) of ultrafiltrate. The reduction in fat-free mass (FFM) measured by DXA was highly correlated with the ultrafiltrate, as determined by the reduction in gravimetric weight (r = 0.975, P < 0.0001; SEE: 233 g), whereas BIA was considerably less accurate in assessing FFM reductions (r = 0.66, P < 0.01; SEE: 757 g). Lumbar bone mineral density (BMD) was unaffected by dialysis, as were whole-body fat and BMD. Whole-body bone mineral content, however, was estimated to be 0.6% lower after dialysis. None of the simple anthropometric measurements correlated significantly with the reduction in FFM. In an unmodified clinical setting, DXA appears to be superior to other simple noninvasive methods for determining body composition, particularly when the emphasis is on repeated measurements.
Non-invasive absolute measurement of leaf water content using terahertz quantum cascade lasers.
Baldacci, Lorenzo; Pagano, Mario; Masini, Luca; Toncelli, Alessandra; Carelli, Giorgio; Storchi, Paolo; Tredicucci, Alessandro
2017-01-01
Plant water resource management is one of the main future challenges to fight recent climatic changes. The knowledge of the plant water content could be indispensable for water saving strategies. Terahertz spectroscopic techniques are particularly promising as a non-invasive tool for measuring leaf water content, thanks to the high predominance of the water contribution to the total leaf absorption. Terahertz quantum cascade lasers (THz QCL) are one of the most successful sources of THz radiation. Here we present a new method which improves the precision of THz techniques by combining a transmission measurement performed using a THz QCL source, with simple pictures of leaves taken by an optical camera. As a proof of principle, we performed transmission measurements on six plants of Vitis vinifera L. (cv "Colorino"). We found a linear law which relates the leaf water mass to the product between the leaf optical depth in the THz and the projected area. Results are in optimal agreement with the proposed law, which reproduces the experimental data with 95% accuracy. This method may overcome the issues related to intra-variety heterogeneities and retrieve the leaf water mass in a fast, simple, and non-invasive way. In the future this technique could highlight different behaviours in preserving the water status during drought stress.
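The reported linear law can be fitted with a one-parameter least-squares slope through the origin, relating water mass to the product of THz optical depth and projected area; the data points below are invented placeholders, not measurements from the study.

```python
import numpy as np

od    = np.array([1.2, 0.9, 1.6, 1.1])          # THz optical depth per leaf (hypothetical)
area  = np.array([35.0, 28.0, 42.0, 31.0])      # cm^2, projected area from the camera image
m_h2o = np.array([0.84, 0.50, 1.35, 0.68])      # g, gravimetric reference water mass

x = od * area
k = np.sum(x * m_h2o) / np.sum(x * x)           # slope of m_h2o = k * (optical depth * area)
print(k, k * x)                                 # fitted slope and per-leaf predictions
```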
Rhudy, Matthew B; Mahoney, Joseph M
2018-04-01
The goal of this work is to compare the differences between various step counting algorithms using both accelerometer and gyroscope measurements from wrist and ankle-mounted sensors. Participants completed four different conditions on a treadmill while wearing an accelerometer and gyroscope on the wrist and the ankle. Three different step counting techniques were applied to the data from each sensor type and mounting location. It was determined that using gyroscope measurements allowed for better performance than the typically used accelerometers, and that ankle-mounted sensors provided better performance than those mounted on the wrist.
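As an example of the kind of algorithm being compared, a basic peak-detection step counter on the gyroscope magnitude is sketched below; the threshold, minimum step interval and 100 Hz sampling rate are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(gyro_xyz, fs=100.0, height=1.0, min_step_s=0.3):
    """Count steps as peaks in |gyro| (gyro_xyz has shape (N, 3), rad/s) that
    exceed `height` and are separated by at least `min_step_s` seconds.
    One of many possible step-count algorithms."""
    magnitude = np.linalg.norm(gyro_xyz, axis=1)
    peaks, _ = find_peaks(magnitude, height=height, distance=int(min_step_s * fs))
    return len(peaks)
```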
Determination of the heat capacities of Lithium/BCX (bromine chloride in thionyl chloride) batteries
NASA Technical Reports Server (NTRS)
Kubow, Stephen A.; Takeuchi, Kenneth J.; Takeuchi, Esther S.
1989-01-01
Heat capacities of twelve different Lithium/BCX (BrCl in thionyl chloride) batteries in sizes AA, C, D, and DD were determined. Procedures and measurement results are reported. The procedure allowed simple, reproducible, and precise determinations of heat capacities of industrially important Lithium/BCX cells, without interfering with performance of the cells. Use of aluminum standards allowed the accuracy of the measurements to be maintained. The measured heat capacities were within 5 percent of calculated heat capacity values.
SU-F-T-267: A Clarkson-Based Independent Dose Verification for the Helical Tomotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagata, H; Juntendo University, Hongo, Tokyo; Hongo, H
2016-06-15
Purpose: There have been few reports on independent dose verification for Tomotherapy. We evaluated the accuracy and effectiveness of an independent dose verification system for Tomotherapy. Methods: Simple MU Analysis (SMU, Triangle Product, Ishikawa, Japan) was used as the independent verification system; it implements a Clarkson-based dose calculation algorithm using the CT image dataset. For dose calculation in the SMU, the Tomotherapy machine-specific dosimetric parameters (TMR, Scp, OAR and MLC transmission factor) were registered as the machine beam data. Dose calculation was performed after the Tomotherapy sinogram from the DICOM-RT plan information was converted into MU and MLC-position information at more finely segmented control points. The performance of the SMU was assessed by point dose measurements in non-IMRT and IMRT plans (simple target and mock prostate plans). Subsequently, 30 patients' treatment plans for prostate were compared. Results: Dose differences between the SMU and the measurement were within 3% for all cases in non-IMRT plans. In the IMRT plan for the simple target, the differences (average ± 1 SD) were −0.70±1.10% (SMU vs. TPS), −0.40±0.10% (measurement vs. TPS) and −1.20±1.00% (measurement vs. SMU), respectively. For the mock prostate, the differences were −0.40±0.60% (SMU vs. TPS), −0.50±0.90% (measurement vs. TPS) and −0.90±0.60% (measurement vs. SMU), respectively. For patients' plans, the difference was −0.50±2.10% (SMU vs. TPS). Conclusion: A Clarkson-based independent dose verification for Tomotherapy is clinically usable as a secondary check, with a tolerance level similar to that of AAPM Task Group 114. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
ERIC Educational Resources Information Center
Barac, Raluca; Moreno, Sylvain; Bialystok, Ellen
2016-01-01
This study examined executive control in sixty-two 5-year-old children who were monolingual or bilingual using behavioral and event-related potentials (ERPs) measures. All children performed equivalently on simple response inhibition (gift delay), but bilingual children outperformed monolinguals on interference suppression and complex response…
Detection of Convexity and Concavity in Context
ERIC Educational Resources Information Center
Bertamini, Marco
2008-01-01
Sensitivity to shape changes was measured, in particular detection of convexity and concavity changes. The available data are contradictory. The author used a change detection task and simple polygons to systematically manipulate convexity/concavity. Performance was high for detecting a change of sign (a new concave vertex along a convex contour…
A Simple Close Range Photogrammetry Technique to Assess Soil Erosion in the Field
USDA-ARS?s Scientific Manuscript database
Evaluating the performance of a soil erosion prediction model depends on the ability to accurately measure the gain or loss of sediment in an area. Recent development in acquiring detailed surface elevation data (DEM) makes it feasible to assess soil erosion and deposition spatially. Digital photogr...
Diane L. Haase
2011-01-01
Roots are critical to seedling performance after outplanting. Although root quality is not as quick and simple to measure as shoot quality, target root characteristics should be included in any seedling quality assessment program. This paper provides a brief review of root characteristics most commonly targeted for operational seedling production. These are: root mass...
Applications of luminescent systems to infectious disease methodology
NASA Technical Reports Server (NTRS)
Picciolo, G. L.; Chappelle, E. W.; Deming, J. W.; Mcgarry, M. A.; Nibley, D. A.; Okrend, H.; Thomas, R. R.
1976-01-01
The characterization of a clinical sample by a simple, fast, accurate, automatable analytical measurement is important in the management of infectious disease. Luminescence assays offer methods rich with options for these measurements. The instrumentation is common to each assay, and the investment is reasonable. Three general procedures were developed to varying degrees of completeness which measure bacterial levels by measuring their ATP, FMN and iron porphyrins. Bacteriuria detection and antibiograms can be determined within half a day. The characterization of the sample for its soluble ATP, FMN or porphyrins was also performed.
Modelling Nitrogen Oxides in Los Angeles Using a Hybrid Dispersion/Land Use Regression Model
NASA Astrophysics Data System (ADS)
Wilton, Darren C.
The goal of this dissertation is to develop models capable of predicting long term annual average NOx concentrations in urban areas. Predictions from simple meteorological dispersion models and seasonal proxies for NO2 oxidation were included as covariates in a land use regression (LUR) model for NOx in Los Angeles, CA. The NOx measurements were obtained from a comprehensive measurement campaign that is part of the Multi-Ethnic Study of Atherosclerosis Air Pollution Study (MESA Air). Simple land use regression models were initially developed using a suite of GIS-derived land use variables developed from various buffer sizes (R²=0.15). Caline3, a simple steady-state Gaussian line source model, was initially incorporated into the land-use regression framework. The addition of this spatio-temporally varying Caline3 covariate improved the simple LUR model predictions. The extent of improvement was much more pronounced for models based solely on the summer measurements (simple LUR: R²=0.45; Caline3/LUR: R²=0.70) than it was for models based on all seasons (R²=0.20). We then used a Lagrangian dispersion model to convert static land use covariates for population density and commercial/industrial area into spatially and temporally varying covariates. The inclusion of these covariates resulted in significant improvement in model prediction (R²=0.57). In addition to the dispersion model covariates described above, a two-week average value of daily peak-hour ozone was included as a surrogate of the oxidation of NO2 during the different sampling periods. This additional covariate further improved overall model performance for all models. The best model by 10-fold cross validation (R²=0.73) contained the Caline3 prediction, a static covariate for length of A3 roads within 50 meters, the Calpuff-adjusted covariates derived from both population density and industrial/commercial land area, and the ozone covariate. This model was tested against annual average NOx concentrations from an independent data set from the EPA's Air Quality System (AQS) and MESA Air fixed site monitors, and performed very well (R²=0.82).
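A minimal sketch of the modeling step described above: ordinary least-squares land use regression with a dispersion-model prediction among the covariates, scored by 10-fold cross-validated R². The covariate names and synthetic data are illustrative; the actual MESA Air model structure is more involved.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Columns (illustrative): Caline3 prediction, A3 road length within 50 m,
# Calpuff-adjusted population density, Calpuff-adjusted commercial/industrial area, ozone
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.8, size=100)   # synthetic NOx

cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2 = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
print(f"10-fold CV R^2 = {r2.mean():.2f}")
```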
Barringer, J.L.; Johnsson, P.A.
1996-01-01
Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
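A minimal numerical sketch of a Gran plot for an alkalinity titration with strong acid: beyond the equivalence point the Gran function F = (V0 + V)·10^(−pH) is linear in titrant volume V, and extrapolating the fitted line back to F = 0 gives the equivalence volume. The titration data below are synthetic, generated under idealized assumptions.

```python
import numpy as np

# Synthetic data: 50 mL sample titrated with 0.02 mol/L strong acid;
# a "true" equivalence volume of 2.5 mL is used only to generate fake pH readings.
V0, C_acid, Ve_true = 50.0, 0.02, 2.5
V = np.linspace(3.0, 5.0, 9)                              # titrant volumes past equivalence (mL)
pH = -np.log10(C_acid * (V - Ve_true) / (V0 + V))         # ideal strong-acid behavior

F = (V0 + V) * 10.0 ** (-pH)                              # Gran function, linear beyond equivalence
slope, intercept = np.polyfit(V, F, 1)
Ve = -intercept / slope                                   # extrapolate to F = 0
alkalinity_meq_L = Ve * C_acid / V0 * 1000.0
print(f"Ve = {Ve:.2f} mL, alkalinity = {alkalinity_meq_L:.2f} meq/L")
```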
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo
We analyze quantum algorithms for cloning of a quantum measurement. Our aim is to mimic two uses of a device performing an unknown von Neumann measurement with a single use of the device. When the unknown device has to be used before the bipartite state to be measured is available we talk about 1→2 learning of the measurement, otherwise the task is called 1→2 cloning of a measurement. We perform the optimization for both learning and cloning for arbitrary dimension d of the Hilbert space. For 1→2 cloning we also propose a simple quantum network that achieves the optimal fidelity. The optimal fidelity for 1→2 learning just slightly outperforms the estimate-and-prepare strategy in which one first estimates the unknown measurement and depending on the result suitably prepares the duplicate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Iain S.; Wray, Craig P.; Guillot, Cyril
2003-08-01
In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.
Spedding, G R; Hedenström, A H; McArthur, J; Rosén, M
2008-01-01
Bird flight occurs over a range of Reynolds numbers (Re; 10^4 ≤ Re ≤ 10^5, where Re is a measure of the relative importance of inertia and viscosity) that includes regimes where standard aerofoil performance is difficult to predict, compute or measure, with large performance jumps in response to small changes in geometry or environmental conditions. A comparison of measurements of fixed wing performance as a function of Re, combined with quantitative flow visualisation techniques, shows that, surprisingly, wakes of flapping bird wings at moderate flight speeds admit to certain simplifications where their basic properties can be understood through quasi-steady analysis. Indeed, a commonly cited measure of the relative flapping frequency, or wake unsteadiness, the Strouhal number, is seen to be approximately constant in accordance with a simple requirement for maintaining a moderate local angle of attack on the wing. Together, the measurements imply a fine control of boundary layer separation on the wings, with implications for control strategies and wing shape selection by natural and artificial fliers.
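For reference, the Strouhal number cited above as approximately constant is the standard non-dimensional flapping frequency:

```latex
St = \frac{f A}{U}
```

where f is the flapping frequency, A the peak-to-peak wingtip (or wake) amplitude, and U the forward flight speed; cruising animal fliers are commonly reported to fall roughly in the range 0.2 < St < 0.4.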
Life extending control: An interdisciplinary engineering thrust
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.
Kaban, Leonard B; Cappetta, Alyssa; George, Brian C; Lahey, Edward T; Bohnen, Jordan D; Troulis, Maria J
2017-10-01
There are no universally accepted tools to evaluate operative skills of surgical residents in a timely fashion. The purpose of this study was to determine the feasibility of using a smartphone application, SIMPL (System for Improving and Measuring Procedural Learning), developed by a multi-institutional research collaborative, to achieve a high rate of timely operative evaluations and resident communication and to collect performance data. The authors hypothesized that these goals would be achieved because the process is convenient and efficient. This was a prospective feasibility and engagement study using SIMPL to evaluate residents' operative skills. SIMPL requires the attending surgeon to answer 3 multiple-choice questions: 1) What level of help (Zwisch Scale) was required by the trainee? 2) What was the level of performance? 3) How complex was the case? The evaluator also can dictate a narrative. The sample was composed of 3 faculty members and 3 volunteer senior residents. Predictor variables were the surgeons, trainees, and procedures performed. Outcome variables included number and percentage of procedures performed by faculty-and-resident pairs assessed, time required to complete assessments, time lapsed to submission, percentage of assessments with narratives, and residents' response rates. From March through June 2016, 151 procedures were performed in the operating room by the faculty-and-resident teams. There were 107 assessments submitted (71%). Resident response (self-assessment) to faculty evaluations was 81%. Recorded time to complete assessments (n = 75 of 107) was shorter than 2 minutes. The time lapsed to submission was shorter than 72 hours (100%). Dictations were submitted for 35 evaluations (33%). Data for the type of help, performance, and complexity of cases were collected for each resident. SIMPL facilitates timely intraoperative evaluations of surgical skills, engagement by faculty and residents, and collection of detailed procedural data. Additional prospective trials to assess this tool further are planned. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Wang, Yuanye; Luo, Huan
2017-01-01
In order to deal with the external world efficiently, the brain constantly generates predictions about incoming sensory inputs, a process known as "predictive coding." Our recent studies, by employing visual priming paradigms in combination with a time-resolved behavioral measurement, reveal that perceptual predictions about simple features (e.g., left or right orientation) return to low sensory areas not continuously but recurrently in a theta-band (3-4 Hz) rhythm. However, it remains unknown whether high-level object processing is also mediated by this oscillatory mechanism and, if so, at which rhythm the mechanism works. In the present study, we employed a morph-face priming paradigm and time-resolved behavioral measurements to examine the fine temporal dynamics of face identity priming performance. First, we reveal classical priming effects and a rhythmic trend within the prime-to-probe SOA of 600 ms (Experiment 1). Next, we densely sampled the face priming behavioral performances within this SOA range (Experiment 2). Our results demonstrate a significant ~5 Hz oscillatory component in the face priming behavioral performances, suggesting that a rhythmic process also coordinates the object-level prediction (i.e., face identity here). In comparison to our previous studies, the results suggest that the rhythm for the high-level object is faster than that for simple features. We propose that the seemingly distinctive priming rhythms might be attributable to the object-level and simple feature-level predictions returning to different stages along the visual pathway (e.g., the FFA for face priming and V1 for simple feature priming). In summary, the findings support a general theta-band (3-6 Hz) temporal organization mechanism in predictive coding, and such a waxing-and-waning pattern in predictive coding may help the brain remain readily updated for new inputs. © 2017 Elsevier B.V. All rights reserved.
High current proton beams production at Simple Mirror Ion Source 37.
Skalyga, V; Izotov, I; Razin, S; Sidorov, A; Golubev, S; Kalvas, T; Koivisto, H; Tarvainen, O
2014-02-01
This paper presents the latest results of high current proton beam production at the Simple Mirror Ion Source (SMIS) 37 facility at the Institute of Applied Physics (IAP RAS). In this experimental setup, the plasma is created and the electrons are heated by 37.5 GHz gyrotron radiation with power up to 100 kW in a simple mirror trap fulfilling the ECR condition. The latest experiments at SMIS 37 were performed using a single-aperture two-electrode extraction system. Proton beams with currents up to 450 mA at extraction voltages below 45 kV were obtained. The maximum beam current density was measured to be 600 mA/cm². A possibility of further improvement through the development of an advanced extraction system is discussed.
TG (Tri-Goniometry) technique: Obtaining perfect angles in Z-plasty planning with a simple ruler.
Görgülü, Tahsin; Olgun, Abdulkerim
2016-03-01
The Z-plasty is used frequently in hand surgery to release post-burn scar contractures. Correct angles and equalization of each limb are the most important parts of the Z-plasty technique. A simple ruler is enough to equalize the limbs, but a goniometer is needed for accurate and equal angles. Classically, angles of 30°, 45°, 60°, 75°, and 90° are used. These angles are important when elongating a contracture line or decreasing tension. Our method uses only trigonometry coefficients and a simple ruler, which is easily obtained and sterilized, enabling surgeons to perform all types of Z-plasty accurately without measuring angles with a goniometer. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
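A minimal geometric sketch of the idea (not necessarily the authors' exact protocol): to lay out a lateral limb of length L at a desired angle θ from the central limb using only a ruler, the endpoint can be marked at measured distances along and perpendicular to the central axis, so the angle is fixed by lengths rather than by a goniometer:

```latex
x = L\cos\theta, \qquad y = L\sin\theta, \qquad
\text{e.g. } \theta = 60^{\circ}: \; x = 0.5\,L, \; y \approx 0.87\,L
```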
Language deficits in poor comprehenders: a case for the simple view of reading.
Catts, Hugh W; Adlof, Suzanne M; Ellis Weismer, Susan
2006-04-01
To examine concurrently and retrospectively the language abilities of children with specific reading comprehension deficits ("poor comprehenders") and compare them to typical readers and children with specific decoding deficits ("poor decoders"). In Study 1, the authors identified 57 poor comprehenders, 27 poor decoders, and 98 typical readers on the basis of 8th-grade reading achievement. These subgroups' performances on 8th-grade measures of language comprehension and phonological processing were investigated. In Study 2, the authors examined retrospectively subgroups' performances on measures of language comprehension and phonological processing in kindergarten, 2nd, and 4th grades. Word recognition and reading comprehension in 2nd and 4th grades were also considered. Study 1 showed that poor comprehenders had concurrent deficits in language comprehension but normal abilities in phonological processing. Poor decoders were characterized by the opposite pattern of language abilities. Study 2 results showed that subgroups had language (and word recognition) profiles in the earlier grades that were consistent with those observed in 8th grade. Subgroup differences in reading comprehension were inconsistent across grades but reflective of the changes in the components of reading comprehension over time. The results support the simple view of reading and the phonological deficit hypothesis. Furthermore, the findings indicate that a classification system that is based on the simple view has advantages over standard systems that focus only on word recognition and/or reading comprehension.
Time-frequency analysis of functional optical mammographic images
NASA Astrophysics Data System (ADS)
Barbour, Randall L.; Graber, Harry L.; Schmitz, Christoph H.; Tarantini, Frank; Khoury, Georges; Naar, David J.; Panetta, Thomas F.; Lewis, Theophilus; Pei, Yaling
2003-07-01
We have introduced working technology that provides for time-series imaging of the hemoglobin signal in large tissue structures. In this study we have explored our ability to detect aberrant time-frequency responses of breast vasculature for subjects with Stage II breast cancer at rest and in response to simple provocations. The hypothesis being explored is that time-series imaging will be sensitive to the known structural and functional malformations of the tumor vasculature. Mammographic studies were conducted using an adjustable hemispheric measuring head containing 21 source and 21 detector locations (441 source-detector pairs). Simultaneous dual-wavelength studies were performed at 760 and 830 nm at a framing rate of ~2.7 Hz. Optical measures were performed on women lying prone with the breast hanging in a pendant position. Two classes of measures were performed: (1) a 20-minute baseline measure wherein the subject was at rest; (2) provocation studies wherein the subject was asked to perform some simple breathing maneuvers. Collected data were analyzed to identify the time-frequency structure and central tendencies of the detector responses and those of the image time series. Imaging data were generated using the Normalized Difference Method (Pei et al., Appl. Opt. 40, 5755-5769, 2001). The results obtained clearly document three classes of anomalies when compared to the normal contralateral breast. 1) Breast tumors exhibit an altered oxygen supply/demand balance in response to an oxidative challenge (breath hold). 2) The vasomotor response of the tumor vasculature is mainly depressed and exhibits an altered modulation. 3) The affected area of the breast wherein the altered vasomotor signature is seen extends well beyond the limits of the tumor itself.
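A minimal sketch of one way to examine the time-frequency structure of a detector (or image-pixel) time series of this kind, using a short-time Fourier transform. The ~2.7 Hz framing rate is taken from the abstract; the signal content and window settings are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2.7                                    # framing rate (Hz), per the abstract
t = np.arange(0, 20 * 60, 1 / fs)           # 20-minute baseline record
# Synthetic hemoglobin signal: slow vasomotor oscillation (~0.1 Hz) plus a ~0.3 Hz component
x = (np.sin(2 * np.pi * 0.1 * t)
     + 0.4 * np.sin(2 * np.pi * 0.3 * t)
     + 0.2 * np.random.randn(len(t)))

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
dominant = f[Sxx.mean(axis=1).argmax()]
print(f"dominant frequency ~ {dominant:.2f} Hz")
```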
On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images
NASA Astrophysics Data System (ADS)
Eid, Ahmed; Farag, Aly
2005-12-01
The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.
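A minimal sketch of the image-reprojection style of evaluation mentioned above: project the reconstructed 3-D points into a held-out camera view and score them against the observed pixel locations. The pinhole projection and RMS metric below are simple placeholders, not the paper's specific framework.

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """Pinhole reprojection of 3-D points into a camera with intrinsics K and pose (R, t)."""
    cam = R @ points_3d.T + t.reshape(3, 1)        # world -> camera coordinates
    uvw = K @ cam
    return (uvw[:2] / uvw[2]).T                    # perspective divide -> pixel coordinates

def rms_reprojection_error(points_3d, observed_px, K, R, t):
    pred = reproject(points_3d, K, R, t)
    return np.sqrt(np.mean(np.sum((pred - observed_px) ** 2, axis=1)))

# Toy example: identity pose, simple intrinsics, noisy observations
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.random.rand(50, 3) + np.array([0.0, 0.0, 4.0])     # points in front of the camera
obs = reproject(pts, K, np.eye(3), np.zeros(3)) + 0.5 * np.random.randn(50, 2)
print(rms_reprojection_error(pts, obs, K, np.eye(3), np.zeros(3)))
```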
Food Application of Newly Developed Handy-type Glutamate Sensor.
Mukai, Yuuka; Oikawa, Tsutomu
2016-01-01
Tests on physiological functions of umami have been actively conducted and a need recognized for a high-performance quantification device that is simple and cost-effective, and whose use is not limited to a particular location or user. To address this need, Ajinomoto Co. and Tanita Corp. have jointly been researching and developing a simple device for glutamate measurement. The device uses L-glutamate oxidase immobilized on a hydrogen peroxide electrode. L-glutamate in the sample is converted to α-ketoglutaric acid, which produces hydrogen peroxide. Subsequently, the electrical current from the electrochemical reaction of hydrogen peroxide is measured to determine the L-glutamate concentration. In order to evaluate its basic performance, we used this device to measure the concentration of L-glutamate standard solutions. In a concentration range of 0-1.0%, the difference from the theoretical value was minimal. The coefficient of variation (CV) value of 3 measurements was 4% or less. This shows that the device has a reasonable level of precision and accuracy. The device was also used in trial measurements of L-glutamate concentrations in food. There was a good correlation between the results obtained using the developed device and those obtained with an amino acid analyzer; the correlation coefficient was R=0.997 (n=24). In this review, we demonstrate the use of our device to measure the glutamate concentration in miso soup served daily at a home for elderly people, and other foods and ingredients.
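A minimal sketch of the two performance checks described: a linear calibration of sensor current against L-glutamate standards, and the coefficient of variation (CV) of repeated readings. The currents and concentrations below are invented for illustration.

```python
import numpy as np

# Calibration: sensor current (uA, invented) vs. L-glutamate standard concentration (%)
conc = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
current = np.array([0.02, 0.55, 1.08, 1.61, 2.12])
slope, intercept = np.polyfit(conc, current, 1)

def to_concentration(i_ua):
    """Invert the calibration line to estimate concentration from a current reading."""
    return (i_ua - intercept) / slope

# Precision: coefficient of variation of three repeated measurements
reps = np.array([0.49, 0.51, 0.50])
cv_percent = 100.0 * reps.std(ddof=1) / reps.mean()
print(f"estimated conc for 1.05 uA: {to_concentration(1.05):.2f} %, CV = {cv_percent:.1f} %")
```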
Hamed, Saja H; Altrabsheh, Bilal; Assa'd, Tareq; Jaradat, Said; Alshra'ah, Mohammad; Aljamal, Abdulfattah; Alkhatib, Hatim S; Almalty, Abdul-Majeed
2012-12-01
Different probes are used in dermato-cosmetic research to measure the electrical properties of the skin. The principles governing the choice of the geometry and material of the measuring probe are not well defined in the literature, and some devices' measuring principles are not accessible to the scientific community. The purpose of this work was to develop a simple, inexpensive conductance meter for the objective in vivo evaluation of skin hydration. The conductance meter probe was designed using the basic equation governing wave propagation along transverse electromagnetic transmission lines. It consisted of two concentric copper circular electrodes printed on FR4 dielectric material. The performance of the probe was validated by evaluating its measurement depth, its ability to monitor in vitro water sorption-desorption, and the in vivo skin hydration effect in comparison to the Corneometer CM 825. The measurement depth of the probe, 15 μm, was comparable to that of the CM 825. The in vitro readings of the probe correlated strongly with the amount of water adsorbed on filter paper. Skin hydration after application of a moisturizer was monitored effectively by the new probe, with good correlation to the results of the CM 825. In conclusion, a simple probe for evaluating skin hydration was made from off-the-shelf materials and its performance was validated against a commercially available probe. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Vartiainen, Matti V; Holm, Anu; Lukander, Jani; Lukander, Kristian; Koskinen, Sanna; Bornstein, Robert; Hokkanen, Laura
2016-01-01
Mild traumatic brain injuries (MTBI) or concussions often result in problems with attention, executive functions, and motor control. For better identification of these diverse problems, novel approaches integrating tests of cognitive and motor functioning are needed. The aim was to characterize minor changes in motor and cognitive performance after sports-related concussions with a novel test battery, including balance tests and a computerized multilimb reaction time test. The cognitive demands of the battery gradually increase from a simple stimulus response to a complex task requiring executive attention. A total of 113 male ice hockey players (mean age = 24.6 years, SD = 5.7) were assessed before a season. During the season, nine concussed players were retested within 36 hours, four to six days after the concussion, and after the season. A control group of seven nonconcussed players from the same pool of players with comparable demographics were retested after the season. Performance was measured using a balance test and the Motor Cognitive Test battery (MotCoTe) with multilimb responses in simple reaction, choice reaction, inhibition, and conflict resolution conditions. The performance of the concussed group declined at the postconcussion assessment compared to both the baseline measurement and the nonconcussed controls. Significant changes were observed in the concussed group for the multilimb choice reaction and inhibition tests. Tapping and balance showed a similar trend, but no statistically significant difference in performance. In sports-related concussions, complex motor tests can be valuable additions in assessing the outcome and recovery. In the current study, using subtasks with varying cognitive demands, it was shown that while simple motor performance was largely unaffected, the more complex tasks induced impaired reaction times for the concussed subjects. The increased reaction times may reflect the disruption of complex and integrative cognitive function in concussions.
NASA Astrophysics Data System (ADS)
Duta, L.; Mihailescu, N.; Popescu, A. C.; Luculescu, C. R.; Mihailescu, I. N.; Çetin, G.; Gunduz, O.; Oktar, F. N.; Popa, A. C.; Kuncser, A.; Besleaga, C.; Stan, G. E.
2017-08-01
We report on the synthesis by Pulsed Laser Deposition of simple and Ti-doped hydroxyapatite thin films of biological (ovine dentine) origin. Detailed physical, chemical, mechanical and biological investigations were performed. Morphological examination of the films showed a surface composed of spheroidal particulates of micronic size. Compositional analyses pointed to the presence of typical natural doping elements of bone, along with a slight non-stoichiometry of the deposited films. Structural investigations proved the monophasic hydroxyapatite nature of both simple and Ti-doped films. Ti doping of biological hydroxyapatite induced an overall decrease of the films' crystallinity together with an increase of the films' roughness. It should be emphasized that the bonding strength values measured at the film/Ti substrate interface were superior to the minimum value imposed by the international standards regulating load-bearing implant coatings. In vitro tests on Ti-doped structures, compared to simple ones, revealed excellent biocompatibility in human mesenchymal stem cell cultures, a higher proliferation rate and good cytocompatibility. The obtained results elucidate the overall positive role of Ti doping in the performance of the hydroxyapatite films and demonstrate the possibility of using this novel type of coating as a feasible material for future implantology applications.
A Simple Approach for Demonstrating Soil Water Retention and Field Capacity
ERIC Educational Resources Information Center
Howard, A.; Heitman, J. L.; Bowman, D.
2010-01-01
It is difficult to demonstrate the soil water retention relationship and related concepts because the specialized equipment required for performing these measurements is unavailable in most classrooms. This article outlines a low-cost, easily visualized method by which these concepts can be demonstrated in most any classroom. Columns (62.5 cm…
ERIC Educational Resources Information Center
Isaac, Lisa M.; And Others
1993-01-01
Assessed multiple aspects of cognitive performance, medication planning ability, and medication compliance in 20 elderly outpatients. Findings suggest that aspects of attention/concentration, visual and verbal memory, and motor function which are untapped by simple mental status assessment are related to medication access, planning, and compliance…
It Takes Two: Contrasting Tasks and Contrasting Structures.
ERIC Educational Resources Information Center
Eisenstein, Miriam; And Others
1982-01-01
Examines and compares two measures of adult second language learner performance: cued production and elicited imitation. Discusses the utility of each in terms of the contrasting results of the tasks on a carefully delineated area of grammar, namely the related structure of third person simple present and present progressive in WH-questions. (EKN)
Summarizing the Effect of a Wide Array of Amenity Measures into Simple Components
ERIC Educational Resources Information Center
Gunderson, Ronald J.; Ng, Pin T.
2006-01-01
A significant issue existing within the rural economic development literature revolves around the difficulty with sorting out the controversy of the effects of amenity activities on rural economic growth. This problem is due to the different ways amenity attributes are linked to regional economic performance. Numerous researchers utilize principal…
Bakeout Chamber Within Vacuum Chamber
NASA Technical Reports Server (NTRS)
Taylor, Daniel M.; Soules, David M.; Barengoltz, Jack B.
1995-01-01
Vacuum-bakeout apparatus for decontaminating and measuring outgassing from pieces of equipment constructed by mounting bakeout chamber within conventional vacuum chamber. Upgrade cost effective: fabrication and installation of bakeout chamber simple, installation performed quickly and without major changes in older vacuum chamber, and provides quantitative data on outgassing from pieces of equipment placed in bakeout chamber.
A Confirmatory Factor Analysis of the Professional Opinion Scale
ERIC Educational Resources Information Center
Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.
2007-01-01
The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…
"Met" Made Simple: Building Research-Based Teacher Evaluations. Issue Analysis Report
ERIC Educational Resources Information Center
New Teacher Project, 2012
2012-01-01
Groundbreaking new findings from the Bill and Melinda Gates Foundation's Measures of Effective Teaching (MET) project hold the potential to answer crucial questions about how to assess teachers' performance. For the past two years, MET researchers have conducted a research project of unprecedented scope, involving 3,000 teachers in six school…
Smartphone-aided measurements of the speed of sound in different gaseous mixtures
NASA Astrophysics Data System (ADS)
Parolin, Sara Orsola; Pezzi, Giovanni
2013-11-01
Here we describe classroom-based procedures aiming at the estimation of the speed of sound in different gas mixtures with the help of a plastic drain pipe and two iPhones or iPod touches. The procedures were conceived to be performed with simple and readily available tools.
Smartphone-Aided Measurements of the Speed of Sound in Different Gaseous Mixtures
ERIC Educational Resources Information Center
Parolin, Sara Orsola; Pezzi, Giovanni
2013-01-01
Here we describe classroom-based procedures aiming at the estimation of the speed of sound in different gas mixtures with the help of a plastic drain pipe and two iPhones or iPod touches. The procedures were conceived to be performed with simple and readily available tools.
ERIC Educational Resources Information Center
Ray, Keith W.; Goppelt, Joan
2011-01-01
Many leadership development programs are intended to improve individual leaders' skills and abilities to perform. Methods for measuring the effect of such programs range from simple metacognitive self-report surveys to 360-degree feedback, to instrumentation of psychological phenomena. However, the outcomes of some leadership development programs…
USDA-ARS?s Scientific Manuscript database
The promise of genomic selection is that genetic potential can be accurately predicted from genotypes. Simple deoxyribonucleic acid (DNA) tests might replace low accuracy predictions based on performance and pedigree for expensive or lowly heritable measures of puberty and fertility. The promise i...
USDA-ARS?s Scientific Manuscript database
The promise of genomic selection is accurate prediction of animals' genetic potential from their genotypes. Simple DNA tests might replace low accuracy predictions for expensive or lowly heritable measures of puberty and fertility based on performance and pedigree. Knowing which DNA variants affec...
Hemispheric Visual Attentional Imbalance in Patients with Traumatic Brain Injury
ERIC Educational Resources Information Center
Pavlovskaya, Marina; Groswasser, Zeev; Keren, Ofer; Mordvinov, Eugene; Hochstein, Shaul
2007-01-01
We find a spatially asymmetric allocation of attention in patients with traumatic brain injury (TBI) despite the lack of obvious asymmetry in neurological indicators. Identification performance was measured for simple spatial patterns presented briefly to a locus 5 degrees into the left or right hemifield, after precuing attention to the same…
Kim, Hong; Heverling, Harry; Cordeiro, Michael; Vasquez, Vanessa; Stolbach, Andrew
2016-09-01
Opioid overdose is a leading cause of death in the USA. Internet-based teaching can improve medical knowledge among trainees, but there are limited data to show the effect of Internet-based teaching on clinical competence in medical training, including management of opioid poisoning. We used an ecological design to assess the effect of an Internet-based teaching module on the management of a simulated opioid-poisoned patient. We enrolled two consecutive classes of post-graduate year-1 residents from a single emergency medicine program. The first group (RA) was instructed to read a toxicology textbook chapter and the second group (IT) took a brief Internet training module. All participants subsequently managed a simulated opioid-poisoned patient. The participants' performance was evaluated with two types of checklist (simple and time-weighted), along with global assessment scores. Internet-trained participants performed better on both checklist scales. The difference between mean simple checklist scores by the IT and RA groups was 0.23 (95 % CI, 0.016-0.44). The difference between mean time-weighted checklist scores was 0.27 (95 % CI, 0.048-0.49). When measured by global assessment, there was no statistically significant difference between RA and IT participants. These data suggest that the Internet module taught basic principles of management of the opioid-poisoned patient. In this scenario, global assessment and checklist assessment may not measure the same proficiencies. These encouraging results are not sufficient to show that this Internet tool improves clinical performance. We should assess the impact of the Internet module on performance in a true clinical environment.
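A minimal sketch of how a between-group mean difference and its 95% confidence interval (as reported for the checklist scores) can be computed with a pooled-variance t interval. The scores are synthetic, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores_it = rng.normal(0.75, 0.25, 40)     # Internet-trained group (synthetic checklist scores)
scores_ra = rng.normal(0.52, 0.25, 40)     # reading-assignment group (synthetic)

n1, n2 = len(scores_it), len(scores_ra)
diff = scores_it.mean() - scores_ra.mean()
sp2 = ((n1 - 1) * scores_it.var(ddof=1) + (n2 - 1) * scores_ra.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
half = stats.t.ppf(0.975, n1 + n2 - 2) * se
print(f"mean difference = {diff:.2f} (95% CI {diff - half:.2f} to {diff + half:.2f})")
```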
Phase noise measurements of the 400-kW, 2.115-GHz (S-band) transmitter
NASA Technical Reports Server (NTRS)
Boss, P.; Hoppe, D.; Bhanji, A.
1987-01-01
The measurement theory is described and a test method to perform phase noise verification using off-the-shelf components and instruments is presented. The measurement technique described consists of a double-balanced mixer used as phase detector, followed by a low noise amplifier. An FFT spectrum analyzer is then used to view the modulation components. A simple calibration procedure is outlined that ensures accurate measurements. A block diagram of the configuration is presented as well as actual phase noise data from the 400 kW, 2.115 GHz (S-band) klystron transmitter.
Tinnangwattana, Dangcheewan; Vichak-Ururote, Linlada; Tontivuthikul, Paponrad; Charoenratana, Cholaros; Lerthiranwong, Thitikarn; Tongsong, Theera
2015-01-01
To evaluate the diagnostic performance of IOTA simple rules in predicting malignant adnexal tumors by non-expert examiners. Five obstetric/gynecologic residents, who had never performed gynecologic ultrasound examination by themselves before, were trained in the IOTA simple rules by an experienced examiner. One trained resident performed ultrasound examinations including IOTA simple rules on 100 women, who were scheduled for surgery due to ovarian masses, within 24 hours of surgery. The gold standard diagnosis was based on pathological or operative findings. The five trained residents performed IOTA simple rules on 30 patients for evaluation of inter-observer variability. A total of 100 patients underwent ultrasound examination for the IOTA simple rules. Of them, IOTA simple rules could be applied in 94 (94%) masses, including 71 (71.0%) benign masses and 29 (29.0%) malignant masses. The diagnostic performance of IOTA simple rules showed sensitivity of 89.3% (95%CI, 77.8%; 100.7%) and specificity of 83.3% (95%CI, 74.3%; 92.3%). Inter-observer variability was analyzed using Cohen's kappa coefficient. Kappa indices of the four pairs of raters were 0.713-0.884 (0.722, 0.827, 0.713, and 0.884). IOTA simple rules have high diagnostic performance in discriminating adnexal masses even when applied by non-expert sonographers, though a training course may be required. Nevertheless, they should be further tested by a greater number of general practitioners before widespread use.
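A minimal sketch of the summary statistics used in this kind of validation study: sensitivity and specificity against the gold-standard diagnosis, and Cohen's kappa for agreement between a pair of raters. The labels are toy data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# 1 = malignant, 0 = benign (toy labels)
truth = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
rule  = np.array([1, 1, 0, 0, 0, 0, 1, 1, 0, 0])     # simple-rules classification

tn, fp, fn, tp = confusion_matrix(truth, rule).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")

# Inter-observer agreement between two raters classifying the same masses
rater_a = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
rater_b = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])
print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```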
De Bartolo, Samuele; Fallico, Carmine; Veltri, Massimo
2013-01-01
Hydraulic conductivity and effective porosity values for the confined sandy loam aquifer of the Montalto Uffugo (Italy) test field were obtained by laboratory and field measurements; the first were carried out on undisturbed soil samples and the others by slug and aquifer tests. A direct simple-scaling analysis was performed for the whole measurement range and a comparison among the different types of fractal models describing the scale behavior was made. Some indications about the largest pore size to utilize in the fractal models were given. The results obtained for a sandy loam soil show that it is possible to obtain global indications on the behavior of the hydraulic conductivity versus the porosity by utilizing a simple scaling relation and a fractal model in a coupled manner.
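A minimal sketch of the simple-scaling step described: fitting a power law K = c·n^m (a straight line in log-log space) to paired hydraulic conductivity and effective porosity measurements. The data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
porosity = rng.uniform(0.25, 0.45, 30)                      # effective porosity (-)
k_true = 1e-5 * porosity ** 3.0                             # assumed underlying scaling law
conductivity = k_true * rng.lognormal(0.0, 0.2, 30)         # "measured" K (m/s) with scatter

m, log_c = np.polyfit(np.log(porosity), np.log(conductivity), 1)
print(f"K ~ {np.exp(log_c):.2e} * n^{m:.2f}")
```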
Jackson, Brian A; Faith, Kay Sullivan
2013-02-01
Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems-failure mode and effects analysis-to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears an attractive way to integrate information from the substantial investment in detailed assessments for stockpile delivery and dispensing to provide a view of likely future response performance.
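A minimal sketch of the failure-mode style of reliability estimate described: model the response as a series of components, assign each a failure probability informed by assessment data, and take overall reliability as the probability that no component fails; a small Monte Carlo check is included. The component names and probabilities are illustrative only.

```python
import numpy as np

# Illustrative failure probabilities for steps in a stockpile delivery/dispensing chain
failure_prob = {
    "request_and_activation": 0.02,
    "transport_to_state":     0.05,
    "warehouse_operations":   0.08,
    "dispensing_site_ops":    0.10,
}

# Series system: the response succeeds only if every step succeeds
analytic = np.prod([1.0 - p for p in failure_prob.values()])

rng = np.random.default_rng(3)
fails = rng.random((100_000, len(failure_prob))) < np.array(list(failure_prob.values()))
simulated = np.mean(~fails.any(axis=1))
print(f"system reliability: analytic {analytic:.3f}, simulated {simulated:.3f}")
```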
Friction coefficient determination by electrical resistance measurements
NASA Astrophysics Data System (ADS)
Tunyagi, A.; Kandrai, K.; Fülöp, Z.; Kapusi, Z.; Simon, A.
2018-05-01
A simple and low-cost, DIY-type, Arduino-driven experiment is presented for the study of friction and measurement of the friction coefficient, using a conductive rubber cord as a force sensor. It is proposed for high-school or college/university-level students. We strongly believe that it is worthwhile planning, designing and performing Arduino and compatible sensor-based experiments in physics class in order to ensure a better understanding of phenomena, develop theoretical knowledge and multiple experimental skills.
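A minimal sketch of the data reduction such a setup implies: convert the conductive-rubber sensor reading to friction force through a calibration, then take the kinetic friction coefficient as the ratio of friction force to normal force for a block of known weight. The calibration constants and ADC readings are invented.

```python
import numpy as np

def sensor_to_force(adc_counts, a=0.0025, b=-0.2):
    """Invented linear calibration from ADC counts to force in newtons."""
    return a * adc_counts + b

mass_kg, g = 0.250, 9.81
normal_force = mass_kg * g                           # block dragged on a horizontal surface

adc_readings = np.array([540, 552, 547, 560, 544])   # steady-sliding readings (invented)
friction_force = sensor_to_force(adc_readings).mean()
mu_kinetic = friction_force / normal_force
print(f"mu_k ~ {mu_kinetic:.2f}")
```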
Shimizu, Miyuki; Kinoshita, Kensuke; Maeno, Takami; Kobayashi, Hiroyuki; Maeno, Tetsuhiro
2017-11-01
Dehydration in older patients has long been considered a significant health problem because it implies increased morbidity and mortality. However, dehydration is detected by a combination of physical signs and blood tests. For older people dwelling at home and in nursing homes, a simple and non-invasive method for detecting dehydration by caregivers is needed. The total body resistance is measured using bioelectrical impedance analysis and is known as an indicator of dehydration. There are no data from older Japanese patients on this issue. We performed this study to examine the relationship between dehydration and total body resistance in Japan. We performed blood tests and measured bioelectrical impedance in older outpatients aged ≥ 65 years from the Internal Medicine Department at Mito Kyodo General Hospital. Patients were classified as dehydrated and non-dehydrated using the dehydration index with a blood urea nitrogen/creatinine ratio > 20, and the mean total body resistance was compared between the two groups. Eighty-one patients were recruited in the study. In the dehydrated group, the mean total body resistance was 439 Ω at 50 kHz, which was significantly higher than that in the non-dehydrated group (408 Ω, P = 0.038). The total body resistance measurements can be used for simple assessment of dehydration among older Japanese patients.
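A minimal sketch of the comparison described: classify patients as dehydrated when the blood urea nitrogen/creatinine ratio exceeds 20, then compare mean total body resistance between the groups with a two-sample t-test. The values are synthetic, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
bun = rng.normal(22.0, 6.0, 81)
creatinine = rng.normal(0.9, 0.2, 81)
resistance = rng.normal(410.0, 55.0, 81)              # total body resistance at 50 kHz (ohm)

dehydrated = bun / creatinine > 20
resistance[dehydrated] += 30.0                        # synthetic shift for illustration

t_stat, p_value = stats.ttest_ind(resistance[dehydrated], resistance[~dehydrated])
print(f"mean R: {resistance[dehydrated].mean():.0f} vs "
      f"{resistance[~dehydrated].mean():.0f} ohm, p = {p_value:.3f}")
```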
Influence of gravity on cardiac performance.
Pantalos, G M; Sharp, M K; Woodruff, S J; O'Leary, D S; Lorange, R; Everett, S D; Bennett, T E; Shurfranz, T
1998-01-01
Results obtained by the investigators in ground-based experiments and in two parabolic flight series of tests aboard the NASA KC-135 aircraft with a hydraulic simulator of the human systemic circulation have confirmed that a simple lack of hydrostatic pressure within an artificial ventricle causes a decrease in stroke volume of 20%-50%. A corresponding drop in stroke volume (SV) and cardiac output (CO) was observed over a range of atrial pressures (AP), representing a rightward shift of the classic CO versus AP cardiac function curve. These results are in agreement with echocardiographic experiments performed on space shuttle flights, where an average decrease in SV of 15% was measured following a three-day period of adaptation to weightlessness. The similarity of behavior of the hydraulic model to the human system suggests that the simple physical effects of the lack of hydrostatic pressure may be an important mechanism for the observed changes in cardiac performance in astronauts during the weightlessness of space flight.
Suo, Guoquan; Yu, Yanhao; Zhang, Zhiyi; Wang, Shifa; Zhao, Ping; Li, Jianye; Wang, Xudong
2016-12-21
Piezoelectric and triboelectric nanogenerators have been developed as rising energy-harvesting devices in the past few years to effectively convert mechanical energy into electricity. Here, a novel hybrid piezo/triboelectric nanogenerator based on a BaTiO3 NP/PDMS composite film was developed in a simple and low-cost way. The effects of the BTO content and polarization degree on the output performance were systematically studied. The device with 20 wt % BTO in PDMS and a 100-μm-thick film showed the highest output power. We also designed three measurement modes to record hybrid, triboelectric, and piezoelectric outputs separately with a simple structure that has only two electrodes. The hybrid output performance is higher than the tribo- and piezoelectric performances. This work will provide not only a new way to enhance the output power of nanogenerators, but also new opportunities for developing built-in power sources in self-powered electronics.
Simple Levelized Cost of Energy (LCOE) Calculator Documentation
NREL Energy Analysis
This is a simple levelized cost of energy calculator. 1) Cost and Performance: adjust the sliders to suitable values for each of the cost and performance inputs.
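The simple LCOE form typically used in calculators of this kind combines an annualized capital charge with operating costs per unit of generated energy; a sketch of that relationship (not necessarily NREL's exact implementation) is:

```latex
\text{sLCOE} = \frac{\text{CapEx}\times\text{CRF} + \text{FOM}}{8760\times\text{CF}}
  + \text{VOM} + \frac{\text{fuel price}\times\text{heat rate}}{10^{6}},
\qquad
\text{CRF} = \frac{i(1+i)^{n}}{(1+i)^{n}-1}
```

with CapEx and fixed O&M (FOM) expressed per kW, variable O&M (VOM) per kWh, CF the capacity factor, and CRF the capital recovery factor for discount rate i over n years.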
Schwibbe, Anja; Kothe, Christian; Hampe, Wolfgang; Konradt, Udo
2016-10-01
Sixty years of research have not added up to a concordant evaluation of the influence of spatial and manual abilities on dental skill acquisition. We used Ackerman's theory of ability determinants of skill acquisition to explain the influence of spatial visualization and manual dexterity on the task performance of dental students in two consecutive preclinical technique courses. We measured spatial and manual abilities of applicants to Hamburg Dental School by means of a multiple choice test on Technical Aptitude and a wire-bending test, respectively. Preclinical dental technique tasks were categorized as consistent-simple and inconsistent-complex based on their contents. For analysis, we used robust regression to circumvent typical limitations in dental studies like small sample size and non-normal residual distributions. We found that manual, but not spatial ability exhibited a moderate influence on the performance in consistent-simple tasks during dental skill acquisition in preclinical dentistry. Both abilities revealed a moderate relation with the performance in inconsistent-complex tasks. These findings support the hypotheses which we had postulated on the basis of Ackerman's work. Therefore, spatial as well as manual ability are required for the acquisition of dental skills in preclinical technique courses. These results support the view that both abilities should be addressed in dental admission procedures in addition to cognitive measures.
IOTA simple rules in differentiating between benign and malignant ovarian tumors.
Tantipalakorn, Charuwan; Wanapirak, Chanane; Khunamornpong, Surapan; Sukpan, Kornkanok; Tongsong, Theera
2014-01-01
To evaluate the diagnostic performance of IOTA simple rules in differentiating between benign and malignant ovarian tumors. A study of diagnostic performance was conducted on women scheduled for elective surgery due to ovarian masses between March 2007 and March 2012. All patients underwent ultrasound examination for IOTA simple rules within 24 hours of surgery. All examinations were performed by the authors, who had no clinical information about the patients, to differentiate between benign and malignant adnexal masses using IOTA simple rules. Gold standard diagnosis was based on pathological or operative findings. A total of 398 adnexal masses, in 376 women, were available for analysis. Of them, the IOTA simple rules could be applied in 319 (80.1%), including 212 (66.5%) benign tumors and 107 (33.6%) malignant tumors. The simple rules yielded inconclusive results in 79 (19.9%) masses. In the 319 masses for which the IOTA simple rules could be applied, sensitivity was 82.9% and specificity 95.3%. The IOTA simple rules have high diagnostic performance in differentiating between benign and malignant adnexal masses. Nevertheless, inconclusive results are relatively common.
Jaśkowski, P; Włodarczyk, D
1997-04-01
Some recent findings suggested that response force measured during reaction time experiments might reflect changes in activation. We performed an experiment in which the effect of sleep deprivation, knowledge of results, and stimulus quality on response force was studied in simple and choice reaction tasks. As expected, both simple and choice reaction times increased with sleep deficit. Further, simple and choice reactions were faster with knowledge of results and slowed down when stimulus quality was degraded. As sleep deprivation affects both arousal and activation, we expected a detrimental effect of sleep on force amplitude. On the other hand, knowledge of results was expected to increase force by its compensatory effect on arousal and activation. No effect of sleep deprivation on response force was found. Knowledge of results increased response force independently of sleep deprivation.
A study on gaseous extinguishing agent sensing with a simple measurement method
NASA Astrophysics Data System (ADS)
Guan, Yu; Lu, Song; Yuan, Wei; Qian, Hanjie
2018-03-01
Because the concentration distribution of the agent is important for evaluating the effectiveness of a gaseous fire-extinguishing system, an appropriate sensing technology is necessary. Here, a simple method for measuring the agent concentration is introduced, and the manufacture of the sensing part is described in detail. The sensing unit is composed of a pressure-reducing structure and a pressure sensor element. Detection is achieved by sensing the change in pressure difference caused by gas flow. In order to verify the theory and characterize the sensing performance, two types of fire-extinguishing agents, bromotrifluoromethane (CBrF3) and heptafluoropropane (C3HF7), were used in the experiments. The results showed high sensitivity from 0 to 100%, good repeatability and a fast response/recovery time. Furthermore, the effects of operating temperature, humidity and geometric structure on the response were investigated. Measurements showed, for CBrF3, that temperature had a linear impact on the response and that the influence of humidity on the sensor was negligible. Through the analysis of the geometry parameters, it was found that the sensing performance could be greatly improved by adjusting the geometric structure. This technique provides a low-cost, highly reliable and easily fabricated sensor for the detection of gaseous extinguishing agents.
Li, Xinan; Xu, Hongyuan; Cheung, Jeffrey T
2016-12-01
This work describes a new approach for gait analysis and balance measurement. It uses an inertial measurement unit (IMU) that can either be embedded inside a dynamically unstable platform for balance measurement or mounted on the lower back of a human participant for gait analysis. The acceleration data along three Cartesian coordinates is analyzed by the gait-force model to extract bio-mechanics information in both the dynamic state as in the gait analyzer and the steady state as in the balance scale. For the gait analyzer, the simple, noninvasive and versatile approach makes it appealing to a broad range of applications in clinical diagnosis, rehabilitation monitoring, athletic training, sport-apparel design, and many other areas. For the balance scale, it provides a portable platform to measure the postural deviation and the balance index under visual or vestibular sensory input conditions. Despite its simple construction and operation, excellent agreement has been demonstrated between its performance and the high-cost commercial balance unit over a wide dynamic range. The portable balance scale is an ideal tool for routine monitoring of balance index, fall-risk assessment, and other balance-related health issues for both clinical and household use.
Evaluation of Cohen's cross-section trichometer for measuring hair quantity.
Hendriks, Maria A E; Geerts, Paulus A F; Dercksen, Marcus W; van den Hurk, Corina J G; Breed, Wim P M
2012-04-01
Until now, there has been no reliable, simple method available for measuring hair quantity that is suitable for clinical practice. Recently, the cross-section trichometer by Cohen has been introduced. This study was designed to test its clinical utility. The hair mass index (HMI) is the ratio of the cross-sectional area of an isolated bundle of hair to the premeasured area of skin from which it was taken, measured using the trichometer device. The intra- and interobserver reproducibility of measurements at the same location and after relocation were evaluated. For intraobserver reproducibility, the HMI ranged from 3 to 120 (mean difference 0.2, 95% confidence interval [CI] = -4.7 to 5.1, correlation coefficient [r] = .99). For interobserver reproducibility, the HMI ranged from 18 to 119 (mean difference -0.4, 95% CI = -8.0 to 7.2, r = .98). With relocation, the HMI ranged from 2 to 113 (mean difference -1.0, 95% CI = -10.1 to 8.1, r = .97). Measurements took 5-10 minutes per area. Measurements were simple to perform, and the data showed high reproducibility. The trichometer is a promising technology for hair quantity measurements and has multiple clinical and research applications. © 2012 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.
Nonintrusive Temperature and Velocity Measurements in a Hypersonic Nozzle Flow
NASA Technical Reports Server (NTRS)
OByrne, S.; Danehy, P. M.; Houwing, A. F. P.
2002-01-01
Distributions of nitric oxide vibrational temperature, rotational temperature and velocity have been measured in the hypersonic freestream at the exit of a conical nozzle, using planar laser-induced fluorescence. Particular attention has been devoted to reducing the major sources of systematic error that can affect fluorescence temperature measurements, including beam attenuation, transition saturation effects, laser mode fluctuations and transition choice. Visualization experiments have been performed to improve the uniformity of the nozzle flow. Comparisons of measured quantities with a simple one-dimensional computation are made, showing good agreement between measurements and theory given the uncertainty of the nozzle reservoir conditions and the vibrational relaxation rate.
Experimental Investigation of the Flow on a Simple Frigate Shape (SFS)
Mora, Rafael Bardera
2014-01-01
Helicopter operations on board ships require special procedures that introduce additional limitations, known as ship helicopter operational limitations (SHOLs), which are a priority for all navies. This paper presents the main results obtained from the experimental investigation of a simple frigate shape (SFS), which is a typical study case in experimental and computational aerodynamics. The results obtained in this investigation are used to assess the flow predicted by the SFS geometry in comparison with experimental data obtained by testing a reduced-scale ship model in the wind tunnel and with full-scale measurements performed on board a real frigate-type ship.
Deterministic quantum teleportation with atoms.
Riebe, M; Häffner, H; Roos, C F; Hänsel, W; Benhelm, J; Lancaster, G P T; Körber, T W; Becher, C; Schmidt-Kaler, F; James, D F V; Blatt, R
2004-06-17
Teleportation of a quantum state encompasses the complete transfer of information from one particle to another. The complete specification of the quantum state of a system generally requires an infinite amount of information, even for simple two-level systems (qubits). Moreover, the principles of quantum mechanics dictate that any measurement on a system immediately alters its state, while yielding at most one bit of information. The transfer of a state from one system to another (by performing measurements on the first and operations on the second) might therefore appear impossible. However, it has been shown that the entangling properties of quantum mechanics, in combination with classical communication, allow quantum-state teleportation to be performed. Teleportation using pairs of entangled photons has been demonstrated, but such techniques are probabilistic, requiring post-selection of measured photons. Here, we report deterministic quantum-state teleportation between a pair of trapped calcium ions. Following closely the original proposal, we create a highly entangled pair of ions and perform a complete Bell-state measurement involving one ion from this pair and a third source ion. State reconstruction conditioned on this measurement is then performed on the other half of the entangled pair. The measured fidelity is 75%, demonstrating unequivocally the quantum nature of the process.
A self-tuning automatic voltage regulator designed for an industrial environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flynn, D.; Hogg, B.W.; Swidenbank, E.
Examination of the performance of fixed parameter controllers has resulted in the development of self-tuning strategies for excitation control of turbogenerator systems. In conjunction with the advanced control algorithms, sophisticated measurement techniques have previously been adopted on micromachine systems to provide generator terminal quantities. In power stations, however, a minimalist hardware arrangement would be selected, leading to relatively simple measurement techniques. The performance of a range of self-tuning schemes is investigated on an industrial test-bed, employing a typical industrial hardware measurement system. Individual controllers are implemented on a standard digital automatic voltage regulator, as installed in power stations. This employs a VME platform, and the self-tuning algorithms are introduced by linking to a transputer network. The AVR includes all normal features, such as field forcing, VAR limiting and overflux protection. Self-tuning controller performance is compared with that of a fixed gain digital AVR.
INFERRING THE ECCENTRICITY DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogg, David W.; Bovy, Jo; Myers, Adam D., E-mail: david.hogg@nyu.ed
2010-12-20
Standard maximum-likelihood estimators for binary-star and exoplanet eccentricities are biased high, in the sense that the estimated eccentricity tends to be larger than the true eccentricity. As with most non-trivial observables, a simple histogram of estimated eccentricities is not a good estimate of the true eccentricity distribution. Here, we develop and test a hierarchical probabilistic method for performing the relevant meta-analysis, that is, inferring the true eccentricity distribution, taking as input the likelihood functions for the individual star eccentricities, or samplings of the posterior probability distributions for the eccentricities (under a given, uninformative prior). The method is a simple implementation of a hierarchical Bayesian model; it can also be seen as a kind of heteroscedastic deconvolution. It can be applied to any quantity measured with finite precision (other orbital parameters, or indeed any astronomical measurements of any kind, including magnitudes, distances, or photometric redshifts) so long as the measurements have been communicated as a likelihood function or a posterior sampling.
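The hierarchical scheme described above can be illustrated in a few lines of Python: given posterior samplings of each star's eccentricity obtained under a flat interim prior, a parametric population model is fitted by averaging its density over each star's samples. The Beta-distribution family and the synthetic samples below are assumptions for illustration only, not the paper's actual setup or data.

    # Minimal sketch of hierarchical inference from per-star posterior samplings.
    # Assumes each posterior was obtained under a flat prior on [0, 1], so the
    # interim prior cancels up to a constant. Everything here is synthetic.
    import numpy as np
    from scipy.stats import beta
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Fake input: K posterior samples of eccentricity for each of N stars.
    N, K = 40, 200
    true_e = rng.beta(1.5, 4.0, size=N)                  # hidden "true" eccentricities
    post_samples = np.clip(true_e[:, None] + 0.05 * rng.standard_normal((N, K)),
                           1e-4, 1 - 1e-4)

    def neg_log_likelihood(params):
        a, b = np.exp(params)                            # enforce a, b > 0
        # Marginal likelihood per star: average of the population pdf over its samples.
        per_star = beta.pdf(post_samples, a, b).mean(axis=1)
        return -np.sum(np.log(per_star + 1e-300))

    res = minimize(neg_log_likelihood, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    a_hat, b_hat = np.exp(res.x)
    print(f"inferred Beta population parameters: a={a_hat:.2f}, b={b_hat:.2f}")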
ERIC Educational Resources Information Center
Grace, Nicci; Enticott, Peter Gregory; Johnson, Beth Patricia; Rinehart, Nicole Joan
2017-01-01
Handwriting is commonly identified as an area of weakness in children with autism spectrum disorder (ASD), but precise deficits have not been fully characterised. Boys with ASD (n = 23) and matched controls (n = 20) aged 8-12 years completed a simple, digitised task to objectively assess handwriting performance using advanced descriptive measures.…
Factors Affecting Student Progression and Achievement: Prediction and Intervention. A Two-Year Study
ERIC Educational Resources Information Center
Lowis, Mike; Castley, Andrew
2008-01-01
First-year student dropout in the university sector can reach 20% or higher. Over a two-year period, a simple instrument was developed to identify potential student low performance and withdrawal. It was based on a measure of students' early expectation of higher education, matched subsequently with their actual experience. The instrument design…
ERIC Educational Resources Information Center
Otto, William H.; Larive, Cynthia K.; Mason, Susan L.; Robinson, Janet B.; Heppert, Joseph A.; Ellis, James D.
2005-01-01
An experiment to perform a simple initial investigation that illustrates concepts of speciation and equilibrium, using the instrument and chemical resources in the laboratory is presented. The investigation showed that the presence of multiple chemical species in a reaction mixture (phenol red solution) reflects the acid and base conditions…
Development and experimental characterization of a new non contact sensor for blade tip timing
NASA Astrophysics Data System (ADS)
Brouckaert, Jean-Francois; Marsili, Roberto; Rossi, Gianluca; Tomassini, Roberto
2012-06-01
The performance of blade tip timing (BTT) measurement systems, recently used for non-contact turbine blade vibration measurements, in terms of uncertainty and resolution is strongly affected by sensor characteristics. The sensors used for BTT generate pulses that are also used for precise measurement of the turbine blades' time of arrival. The literature on this measurement technique does not clearly address this problem by defining the relevant dynamic and static sensor characteristics, which are fundamental for this application. To date, the proximity sensors used have been based on optical, capacitive, eddy-current and microwave measuring principles; pressure sensors have also been used. In this paper a new sensing principle is proposed. A proximity sensor based on a magnetoresistive sensing element has been assembled and tested. A simple and portable test bench with variable speed, blade tip width and clearance was built and used in order to characterize the main sensor performances.
A Highly Sensitive Fiber Optic Sensor Based on Two-Core Fiber for Refractive Index Measurement
Guzmán-Sepúlveda, José Rafael; Guzmán-Cabrera, Rafael; Torres-Cisneros, Miguel; Sánchez-Mondragón, José Javier; May-Arrioja, Daniel Alberto
2013-01-01
A simple and compact fiber optic sensor based on a two-core fiber is demonstrated for high-performance measurements of the refractive indices (RI) of liquids. In order to demonstrate the suitability of the proposed sensor for high-sensitivity sensing in a variety of applications, the sensor has been used to measure the RI of binary liquid mixtures. Such measurements can accurately determine the salinity of salt water solutions and detect the water content of adulterated alcoholic beverages. The largest sensitivity of the RI sensor that has been experimentally demonstrated is 3,119 nm per refractive index unit (RIU) for the RI range from 1.3160 to 1.3943. On the other hand, our results suggest that the sensitivity can be enhanced to approximately 3,485.67 nm/RIU for the same RI range.
Merrill, Anne M; Karcher, Nicole R; Cicero, David C; Becker, Theresa M; Docherty, Anna R; Kerns, John G
2017-03-01
People with schizophrenia exhibit wide-ranging cognitive deficits, including slower processing speed and decreased cognitive control. Disorganized speech symptoms, such as communication impairment, have been associated with poor cognitive control task performance (e.g., goal maintenance and working memory). Whether communication impairment is associated with poorer performance on a broader range of non-cognitive control measures is unclear. In the current study, people with schizophrenia (n = 51) and non-psychiatric controls (n = 26) completed speech interviews allowing for reliable quantitative assessment of communication impairment. Participants also completed multiple goal maintenance and working memory tasks. In addition, we examined (a) simple measures of processing speed involving highly automatic prepotent responses and (b) a non-cognitive control measure of general task performance. Schizophrenia communication impairment was significantly associated with poor performance in all cognitive domains, with the largest association found with processing speed (rs = -0.52). Further, communication impairment was also associated with the non-cognitive control measure of poor general task performance (rs = -0.43). In contrast, alogia, a negative speech symptom, and positive symptoms were only weakly related, if at all, to cognitive task performance. Overall, this study suggests that communication impairment in schizophrenia may be associated with relatively generalized poor cognitive task performance.
2009-12-30
FA9550-06-1-0107 for "A Study of the 3-D Reconstruction of Heliospheric Vector Magnetic Fields from Faraday-Rotation Inversion" for work performed ... from 2005-2009 by the University of California at San Diego. There are three aspects to this research: 1) The inversion of simple synthetic Faraday-rotation measurements that can be used to demonstrate the feasibility of performing this inversion when and if Faraday-rotation observations become
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
1987-10-30
simple relationship for the required refrigerant mass flow rate, m, for a given cooling load, q_l: m = q_l / Δh, where Δh is the enthalpy difference between the cool ... compressor concepts were tested to determine their performance. No measurable difference in performance was found and the first, more compact, concept was ... resulting change in orifice size adjusts the mass flow rate through the valve. By reducing excursions in the pressure difference across the J-T valve, the
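Assuming the reconstructed relation above (mass flow rate equals cooling load divided by enthalpy difference), a one-line arithmetic check with hypothetical numbers, not values from the report:

    # Illustrative arithmetic only: m = q_l / dh with invented values.
    q_l = 5.0        # cooling load, W
    dh = 20_000.0    # enthalpy difference between cold and warm streams, J/kg
    m_dot = q_l / dh
    print(f"m_dot = {m_dot * 1e6:.0f} mg/s")   # 250 mg/s for these example numbers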
Computer modeling of pulsed CO2 lasers for lidar applications
NASA Technical Reports Server (NTRS)
Spiers, Gary D.; Smithers, Martin E.; Murty, Rom
1991-01-01
The experimental results will enable a comparison of the numerical code output with experimental data. This will ensure verification of the validity of the code. The measurements were made on a modified commercial CO2 laser. The results are as follows. (1) The pulse shape and energy dependence on gas pressure were measured. (2) The intrapulse frequency chirp due to plasma and laser-induced medium perturbation effects was determined. A simple numerical model showed quantitative agreement with these measurements. The pulse-to-pulse frequency stability was also determined. (3) The dependence of the laser transverse mode stability on cavity length was measured. A simple analysis of this dependence in terms of changes to the equivalent Fresnel number and the cavity magnification was performed. (4) An analysis was made of the discharge pulse shape, which enabled the low efficiency of the laser to be explained in terms of poor coupling of the electrical energy into the vibrational levels. (5) The existing laser resonator code was changed to allow it to run on the Cray XMP under the new operating system.
Jones, Anne; Sealey, Rebecca; Crowe, Michael; Gordon, Susan
2014-10-01
The aim of this study was to assess the concurrent validity and reliability of the Simple Goniometer (SG) iPhone® app compared to the Universal Goniometer (UG). Within subject comparison design comparing the UG with the SG app. James Cook University, Townsville, Queensland, Australia. Thirty-six volunteer participants, with a mean age of 60.6 years (SD 6.2). Not applicable. Thirty-six participants performed three standing lunges during which the knee joint angle was measured with the SG app and the UG. There were no significant differences in the measures of individual knee joint angles between the UG and the SG app. Pearson correlations of 0.96-0.98 and intraclass correlation coefficients of 0.97-0.99 (95% confidence interval: 0.95-1.00) were recorded for all measures. Using the Bland-Altman method, the standard error of the mean of the differences and the standard deviation of the mean of the differences were low. The measurements from the SG iPhone® app were reliable and possessed concurrent validity for this sample and protocol when compared to the UG.
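For readers unfamiliar with the agreement statistics named in this abstract, a minimal Python sketch (with invented angle data, not the study's measurements) of the Pearson correlation and the Bland-Altman quantities:

    # Illustrative only: Pearson r plus Bland-Altman mean difference, SD of
    # differences and approximate 95% limits of agreement between two devices.
    import numpy as np
    from scipy.stats import pearsonr

    ug = np.array([52.0, 61.5, 48.0, 70.5, 64.0, 55.5])   # knee angles, degrees (hypothetical)
    sg = np.array([51.5, 62.0, 47.0, 71.0, 63.0, 56.5])

    r, _ = pearsonr(ug, sg)
    diffs = sg - ug
    mean_d, sd_d = diffs.mean(), diffs.std(ddof=1)
    print(f"Pearson r = {r:.3f}")
    print(f"Bland-Altman: mean difference = {mean_d:.2f} deg, SD = {sd_d:.2f} deg")
    print(f"95% limits of agreement ~ {mean_d - 1.96*sd_d:.2f} to {mean_d + 1.96*sd_d:.2f} deg")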
NASA Astrophysics Data System (ADS)
Heide, L.; Nürnberger, E.; Bögl, K. W.
Studies on the viscosity behavior were performed with 20 different spices or dried vegetables. In nine spices (cinnamon, ginger, mustard seed, celery, onions, shallots, lemon peel, black and white pepper) differences between unirradiated and irradiated samples were observed. Further lots were investigated to estimate the variations of viscosity depending on the origin of the samples. Additional storage experiments showed that measuring the viscosity may be a simple method to identify some radiation treated spices even after years.
Detailed analysis of the self-discharge of supercapacitors
NASA Astrophysics Data System (ADS)
Kowal, Julia; Avaroglu, Esin; Chamekh, Fahmi; Šenfelds, Armands; Thien, Tjark; Wijaya, Dhanny; Sauer, Dirk Uwe
Self-discharge is an important performance factor when using supercapacitors. Voltage losses in the range of 5-60% occur over two weeks. Experiments show a dependency of the self-discharge rate on various parameters such as temperature, charge duration and short-term history. In this paper, self-discharge of three commercially available supercapacitors was measured under various conditions. Based on different measurements, the impact of the influence factors is identified. A simple model to explain parts of the voltage decay is presented.
Pyroelectric effect in triglycine sulphate single crystals - Differential measurement method
NASA Astrophysics Data System (ADS)
Trybus, M.
2018-06-01
A simple mathematical model of the pyroelectric phenomenon was used to explain the electric response of TGS (triglycine sulphate) samples in the linear heating process in the ferroelectric and paraelectric phases. Experimental verification of the mathematical model was carried out. TGS single crystals were grown and four electrode samples were fabricated. Differential measurements of the pyroelectric response of two different regions of the samples were performed and the results were compared with data obtained from the model. The experimental results are in good agreement with the model calculations.
Surface tension profiles in vertical soap films
NASA Astrophysics Data System (ADS)
Adami, N.; Caps, H.
2015-01-01
Surface tension profiles in vertical soap films are experimentally investigated. Measurements are performed by introducing deformable elastic objects in the films. The shape adopted by those objects once set in the film is related to the surface tension value at a given vertical position by numerically solving the adapted elasticity equations. We show that the observed dependence of the surface tension on the vertical position is predicted by simple modeling that takes into account the mechanical equilibrium of the films coupled to previous thickness measurements.
Surface tension of flowing soap films
NASA Astrophysics Data System (ADS)
Sane, Aakash; Mandre, Shreyas; Kim, Ildoo
2018-04-01
The surface tension of flowing soap films is measured with respect to the film thickness and the concentration of the soap solution. We perform this measurement by measuring the curvature of the nylon wires that bound the soap film channel and use the measured curvature to parametrize the relation between the surface tension and the tension of the wire. We find that the surface tension of our soap films increases when the film is relatively thin or made of soap solution of low concentration; otherwise it approaches an asymptotic value of 30 mN/m. A simple adsorption model with only two parameters describes our observations reasonably well. With our measurements, we are also able to measure the Gibbs elasticity of our soap films.
Cultural influences on neural substrates of attentional control.
Hedden, Trey; Ketay, Sarah; Aron, Arthur; Markus, Hazel Rose; Gabrieli, John D E
2008-01-01
Behavioral research has shown that people from Western cultural contexts perform better on tasks emphasizing independent (absolute) dimensions than on tasks emphasizing interdependent (relative) dimensions, whereas the reverse is true for people from East Asian contexts. We assessed functional magnetic resonance imaging responses during performance of simple visuospatial tasks in which participants made absolute judgments (ignoring visual context) or relative judgments (taking visual context into account). In each group, activation in frontal and parietal brain regions known to be associated with attentional control was greater during culturally nonpreferred judgments than during culturally preferred judgments. Also, within each group, activation differences in these regions correlated strongly with scores on questionnaires measuring individual differences in culture-typical identity. Thus, the cultural background of an individual and the degree to which the individual endorses cultural values moderate activation in brain networks engaged during even simple visual and attentional tasks.
An Effective Electrical Resonance-Based Method to Detect Delamination in Thermal Barrier Coating
NASA Astrophysics Data System (ADS)
Kim, Jong Min; Park, Jae-Ha; Lee, Ho Girl; Kim, Hak-Joon; Song, Sung-Jin; Seok, Chang-Sung; Lee, Young-Ze
2017-12-01
This research proposes a simple yet highly sensitive method based on electrical resonance of an eddy-current probe to detect delamination of thermal barrier coating (TBC). This method can directly measure the mechanical characteristics of TBC compared to conventional ultrasonic testing and infrared thermography methods. The electrical resonance-based method can detect the delamination of TBC from the metallic bond coat by shifting the electrical impedance of eddy current testing (ECT) probe coupling with degraded TBC, and, due to this shift, the resonant frequencies near the peak impedance of ECT probe revealed high sensitivity to the delamination. In order to verify the performance of the proposed method, a simple experiment is performed with degraded TBC specimens by thermal cyclic exposure. Consequently, the delamination with growth of thermally grown oxide in a TBC system is experimentally identified. Additionally, the results are in good agreement with the results obtained from ultrasonic C-scanning.
Performability modeling based on real data: A case study
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1988-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Performability modeling based on real data: A casestudy
NASA Technical Reports Server (NTRS)
Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.
1987-01-01
Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of different types of errors.
Comparison of 2- and 10-micron coherent Doppler lidar performance
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1995-01-01
The performance of 2- and 10-micron coherent Doppler lidar is presented in terms of the statistical distribution of the maximum-likelihood velocity estimator from simulations for fixed range resolution and fixed velocity search space as a function of the number of coherent photoelectrons per estimate. The wavelength dependence of the aerosol backscatter coefficient, the detector quantum efficiency, and the atmospheric extinction produce a simple shift of the performance curves. Results are presented for a typical boundary layer measurement and a space-based measurement for two regimes: the pulse-dominated regime where the signal statistics are determined by the transmitted pulse, and the atmospheric-dominated regime where the signal statistics are determined by the velocity fluctuations over the range gate. The optimal choice of wavelength depends on the problem under consideration.
Does linear separability really matter? Complex visual search is explained by simple search
Vighneshvel, T.; Arun, S. P.
2013-01-01
Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search.
Thrive or overload? The effect of task complexity on novices' simulation-based learning.
Haji, Faizal A; Cheung, Jeffrey J H; Woods, Nicole; Regehr, Glenn; de Ribaupierre, Sandrine; Dubrowski, Adam
2016-09-01
Fidelity is widely viewed as an important element of simulation instructional design based on its purported relationship with transfer of learning. However, higher levels of fidelity may increase task complexity to a point at which novices' cognitive resources become overloaded. In this experiment, we investigate the effects of variations in task complexity on novices' cognitive load and learning during simulation-based procedural skills training. Thirty-eight medical students were randomly assigned to simulation training on a simple or complex lumbar puncture (LP) task. Participants completed four practice trials on this task (skill acquisition). After 10 days of rest, all participants completed one additional trial on their assigned task (retention) and one trial on a 'very complex' simulation designed to be similar to the complex task (transfer). We assessed LP performance and cognitive load on each trial using multiple measures. In both groups, LP performance improved significantly during skill acquisition (p ≤ 0.047, f = 0.29-0.96) and was maintained at retention. The simple task group demonstrated superior performance compared with the complex task group throughout these phases (p ≤ 0.002, d = 1.13-2.31). Cognitive load declined significantly in the simple task group (p < 0.009, f = 0.48-0.76), but not in the complex task group during skill acquisition, and remained lower at retention (p ≤ 0.024, d = 0.78-1.39). Between retention and transfer, LP performance declined and cognitive load increased in the simple task group, whereas both remained stable in the complex task group. At transfer, no group differences were observed in LP performance and cognitive load, except that the simple task group made significantly fewer breaches of sterility (p = 0.023, d = 0.80). Reduced task complexity was associated with superior LP performance and lower cognitive load during skill acquisition and retention, but mixed results on transfer to a more complex task. These results indicate that task complexity is an important factor that may mediate (via cognitive overload) the relationship between instructional design elements (e.g. fidelity) and simulation-based learning outcomes.
Quantification of Soluble Sugars and Sugar Alcohols by LC-MS/MS.
Feil, Regina; Lunn, John Edward
2018-01-01
Sugars are simple carbohydrates composed primarily of carbon, hydrogen, and oxygen. They play a central role in metabolism as sources of energy and as building blocks for synthesis of structural and nonstructural polymers. Many different techniques have been used to measure sugars, including refractometry, colorimetric and enzymatic assays, gas chromatography, high-performance liquid chromatography, and nuclear magnetic resonance spectroscopy. In this chapter we describe a method that combines an initial separation of sugars by high-performance anion-exchange chromatography (HPAEC) with detection and quantification by tandem mass spectrometry (MS/MS). This combination of techniques provides exquisite specificity, allowing measurement of a diverse range of high- and low-abundance sugars in biological samples. This method can also be used for isotopomer analysis in stable-isotope labeling experiments to measure metabolic fluxes.
Junka, Adam F; Żywicka, Anna; Szymczyk, Patrycja; Dziadas, Mariusz; Bartoszewicz, Marzena; Fijałkowski, Karol
2017-12-01
In the present article, we propose a simple Antibiofilm Dressing's Activity Measurement (A.D.A.M.) test that makes it possible to check in vitro a dressing's suitability against biofilm-related wound infections. To perform the test, three agar discs are covered with biofilm formed by the tested pathogen, after which they are assembled one over another in the form of an agar plug and placed in the well of a 24-well plate. The top disc is covered with the analyzed dressing and the entire set is incubated for 24 h. During this time, the investigated antimicrobial substance is released from the dressing and penetrates the subsequent biofilm-covered agar discs. Biofilm reduction is measured using a 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) spectrometric assay and the results are compared to untreated control samples (an agar plug covered with biofilm and without the dressing, or with a passive dressing placed on the top disc). Furthermore, in order to standardize the differences in penetrability of the drugs released from active dressings, the results can be expressed as a dimensionless value referred to as the Penetrability Index. In summary, the A.D.A.M. test is simple and cheap, can be performed in practically any clinical laboratory, and takes no more time than routine microbiological diagnostics. Apart from measuring the released drug's activity, the A.D.A.M. test makes it possible to assess drug penetrability (across the three agar discs), reflecting real wound conditions, where microbes are frequently hidden under necrotic tissue or cloth. In conclusion, the A.D.A.M. test produces a high volume of data that, when analyzed, can provide a researcher with a valuable hint concerning the applicability of active dressings against specific biofilm pathogens in a particular setting.
Using the Climbing Drum Peel (CDP) Test to Obtain a G(sub IC) value for Core/Facesheet Bonds
NASA Technical Reports Server (NTRS)
Nettles, A. T.; Gregory, Elizabeth D.; Jackson, Justin R.
2006-01-01
A method of measuring the Mode I fracture toughness of core/facesheet bonds in sandwich structures is desired, particularly with the widespread use of models that need this data as input. This study examined whether a critical strain energy release rate, G(sub IC), can be obtained from the climbing drum peel (CDP) test. The CDP test is relatively simple to perform and does not rely on measuring small crack lengths such as required by the double cantilever beam (DCB) test. Simple energy methods were used to calculate G(sub IC) from CDP test data on composite facesheets bonded to a honeycomb core. Facesheet thicknesses from 2 to 5 plies were tested to examine the upper and lower bounds on facesheet thickness requirements. Results from the study suggest that the CDP test, with certain provisions, can be used to find the G(sub IC) value of a core/facesheet bond.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-02-01
Metrology requirements for optical components for third-generation synchrotron sources are taxing the state of the art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long Trace Profiler II (LTP II), and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheinker, Alexander
Here, we study control of the angular-velocity-actuated nonholonomic unicycle via a simple, bounded extremum-seeking controller that is robust to external disturbances and measurement noise. The vehicle performs source seeking despite not having any position information about itself or the source, being able only to sense a noise-corrupted scalar value whose extremum coincides with the unknown source location. In order to control the angular velocity, rather than the angular heading directly, a controller is developed such that the closed-loop system exhibits multiple time scales and requires an analysis approach expanding the previous work of Kurzweil, Jarnik, Sussmann, and Liu, utilizing weak limits. We provide an analytic proof of stability and demonstrate how this simple scheme can be extended to include position-independent source seeking, tracking, and collision avoidance of groups of autonomous vehicles in GPS-denied environments, based only on a measure of distance to an obstacle, which is an especially important feature for an autonomous agent.
Neural-network quantum state tomography
NASA Astrophysics Data System (ADS)
Torlai, Giacomo; Mazzola, Guglielmo; Carrasquilla, Juan; Troyer, Matthias; Melko, Roger; Carleo, Giuseppe
2018-05-01
The experimental realization of increasingly complex synthetic quantum systems calls for the development of general theoretical methods to validate and fully exploit quantum resources. Quantum state tomography (QST) aims to reconstruct the full quantum state from simple measurements, and therefore provides a key tool to obtain reliable analytics [1-3]. However, exact brute-force approaches to QST place a high demand on computational resources, making them unfeasible for anything except small systems [4,5]. Here we show how machine learning techniques can be used to perform QST of highly entangled states with more than a hundred qubits, to a high degree of accuracy. We demonstrate that machine learning allows one to reconstruct traditionally challenging many-body quantities—such as the entanglement entropy—from simple, experimentally accessible measurements. This approach can benefit existing and future generations of devices ranging from quantum computers to ultracold-atom quantum simulators [6-8].
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, D.L.
1995-11-01
The objective of this work was to develop an improved performance model for modules and systems, valid for all operating conditions, for use in module specifications, system and BOS component design, and system rating or monitoring. The approach taken was to identify and quantify the influence of the dominant factors of solar irradiance, cell temperature, angle of incidence, and solar spectrum; use outdoor test procedures to separate the effects of electrical, thermal, and optical performance; use fundamental cell characteristics to improve the analysis; and combine the factors in a simple model using the common variables.
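A generic example of the kind of "simple model using the common variables" described: module power expressed as a product of irradiance, cell-temperature and incidence-angle corrections. The functional forms and coefficients below (including the ASHRAE-style incidence-angle modifier) are illustrative assumptions, not the model developed in this work.

    # Illustrative sketch only: not the model developed in this work.
    import math

    def module_power(g_poa, t_cell, aoi_deg,
                     p_ref=300.0,    # nameplate power at reference conditions, W (assumed)
                     g_ref=1000.0,   # reference irradiance, W/m^2
                     gamma=-0.004,   # power temperature coefficient, 1/degC (assumed)
                     t_ref=25.0,     # reference cell temperature, degC
                     b0=0.05):       # incidence-angle modifier coefficient (assumed)
        irradiance_factor = g_poa / g_ref
        temperature_factor = 1.0 + gamma * (t_cell - t_ref)
        if aoi_deg >= 90.0:
            iam = 0.0               # no direct beam reaches the cells
        else:
            iam = max(0.0, 1.0 - b0 * (1.0 / math.cos(math.radians(aoi_deg)) - 1.0))
        return p_ref * irradiance_factor * temperature_factor * iam

    print(f"{module_power(g_poa=800.0, t_cell=45.0, aoi_deg=30.0):.1f} W")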
Accounting for results: how conservation organizations report performance information.
Rissman, Adena R; Smail, Robert
2015-04-01
Environmental program performance information is in high demand, but little research suggests why conservation organizations differ in reporting performance information. We compared performance measurement and reporting by four private-land conservation organizations: Partners for Fish and Wildlife in the US Fish and Wildlife Service (national government), Forest Stewardship Council-US (national nonprofit organization), Land and Water Conservation Departments (local government), and land trusts (local nonprofit organization). We asked: (1) How did the pattern of performance reporting relationships vary across organizations? (2) Was political conflict among organizations' principals associated with greater performance information? and (3) Did performance information provide evidence of program effectiveness? Based on our typology of performance information, we found that most organizations reported output measures such as land area or number of contracts, some reported outcome indicators such as adherence to performance standards, but few modeled or measured environmental effects. Local government Land and Water Conservation Departments reported the most types of performance information, while local land trusts reported the fewest. The case studies suggest that governance networks influence the pattern and type of performance reporting, that goal conflict among principals is associated with greater performance information, and that performance information provides unreliable causal evidence of program effectiveness. Challenging simple prescriptions to generate more data as evidence, this analysis suggests (1) complex institutional and political contexts for environmental program performance and (2) the need to supplement performance measures with in-depth evaluations that can provide causal inferences about program effectiveness.
Using a Modified Simple Pendulum to Find the Variations in the Value of “g”
NASA Astrophysics Data System (ADS)
Arnold, Jonathan P.; Efthimiou, C.
2007-05-01
The simple pendulum is one of the best-known and most-studied systems in Newtonian mechanics. It also provides one of the most elegant and simple devices for measuring the acceleration of gravity at any location. In this presentation we will revisit the problem of measuring the acceleration of gravity using a simple pendulum and will present a modification to the standard technique that increases the accuracy of the measurement.
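The standard technique rests on the small-angle period relation T = 2*pi*sqrt(L/g), rearranged to g = 4*pi^2*L/T^2; a short numerical example with made-up length and timing data (not values from the presentation):

    # Worked example of g = 4*pi^2 * L / T^2 with hypothetical measurements.
    import math

    L = 1.000                                 # pendulum length, m
    period_samples = [2.007, 2.011, 2.004]    # measured periods, s
    T = sum(period_samples) / len(period_samples)
    g = 4 * math.pi**2 * L / T**2
    print(f"T = {T:.3f} s  ->  g = {g:.3f} m/s^2")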
Davy, Jonathan; Göbel, Matthias
2018-02-01
Extended nap opportunities have been effective in maintaining alertness in the context of extended (12+ h) night shifts. However, there is limited evidence of their efficacy during 8-h shifts. Thus, this study explored the effects of extended naps on cognitive, physiological and perceptual responses during four simulated, 8-h night shifts. In a laboratory setting, 32 participants were allocated to one of three conditions. All participants completed four consecutive, 8-h night shifts, with the arrangements differing by condition. The fixed night condition worked from 22h00 to 06h00, while the nap early group worked from 20h00 to 08h00 and napped between 00h00 and 03h20. The nap late group worked from 00h00 to 12h00 and napped between 04h00 and 07h20. Nap length was limited to 3 hours and 20 minutes. Participants performed a simple beading task during each shift, while also completing six to eight test batteries, roughly every 2 h, in which the following measures were taken. Performance indicators included beading output, eye accommodation time, choice reaction time, visual vigilance, simple reaction time, processing speed and object recognition, working memory, motor response time and tracking performance. Physiological measures included heart rate and tympanic temperature, whereas subjective sleepiness and reported sleep length and quality while outside the laboratory constituted the self-reported measures. Both naps reduced subjective sleepiness but did not alter the circadian and homeostatic-related changes in cognitive and physiological measures, relative to the fixed night condition. Additionally, there was evidence of sleep inertia following each nap, which resulted in transient reductions in certain perceptual and cognitive performance measures. The present study suggested that there were some benefits associated with including an extended nap during 8-h night shifts. However, the effects of sleep inertia need to be effectively managed to ensure that post-nap alertness and performance are maintained.
NASA Technical Reports Server (NTRS)
Hippensteele, S. A.; Russell, L. M.; Stepka, F. S.
1981-01-01
Commercially available elements of a composite consisting of a plastic sheet coated with liquid crystal, another sheet with a thin layer of a conducting material (gold or carbon), and copper bus bar strips were evaluated and found to provide a simple, convenient, accurate, and low-cost measuring device for use in heat transfer research. The particular feature of the composite is its ability to obtain local heat transfer coefficients and isotherm patterns that provide visual evaluation of the thermal performances of turbine blade cooling configurations. Examples of the use of the composite are presented.
Viguet-Carrin, S; Gineyts, E; Bertholon, C; Delmas, P D
2009-01-01
A rapid high-performance liquid chromatography method was developed, including an internal standard, for the measurement of mature and senescent crosslink concentrations in non-demineralized bone hydrolysates. To avoid demineralization, which is a tedious step, we developed a method based on a solid-phase extraction procedure to clean up the samples. This resulted in sensitive and accurate measurements: the detection limits were as low as 0.2 pmol for the pyridinium crosslinks and 0.02 pmol for pentosidine. The inter- and intra-assay coefficients of variation were as low as 5% and 2%, respectively, for all crosslinks.
The relationship between psychological distress and baseline sports-related concussion testing.
Bailey, Christopher M; Samples, Hillary L; Broshek, Donna K; Freeman, Jason R; Barth, Jeffrey T
2010-07-01
This study examined the effect of psychological distress on neurocognitive performance measured during baseline concussion testing. Archival data were utilized to examine correlations between personality testing and computerized baseline concussion testing. Significantly correlated personality measures were entered into linear regression analyses, predicting baseline concussion testing performance. Suicidal ideation was examined categorically. Athletes underwent testing and screening at a university athletic training facility. Participants included 47 collegiate football players 17 to 19 years old, the majority of whom were in their first year of college. Participants were administered the Concussion Resolution Index (CRI), an internet-based neurocognitive test designed to monitor and manage both at-risk and concussed athletes. Participants took the Personality Assessment Inventory (PAI), a self-administered inventory designed to measure clinical syndromes, treatment considerations, and interpersonal style. Scales and subscales from the PAI were utilized to determine the influence psychological distress had on the CRI indices: simple reaction time, complex reaction time, and processing speed. Analyses revealed several significant correlations among aspects of somatic concern, depression, anxiety, substance abuse, and suicidal ideation and CRI performance, each with at least a moderate effect. When entered into a linear regression, the block of combined psychological symptoms accounted for a significant amount of baseline CRI performance, with moderate to large effects (r = 0.23-0.30). When examined categorically, participants with suicidal ideation showed significantly slower simple reaction time and complex reaction time, with a similar trend on processing speed. Given the possibility of obscured concussion deficits after injury, implications for premature return to play, and the need to target psychological distress outright, these findings heighten the clinical importance of screening for psychological distress during baseline and post-injury concussion evaluations.
Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles †
Hughes, Dana; Profita, Halley; Radzihovsky, Sarah; Correll, Nikolaus
2017-01-01
We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures.
Improved Targeting Through Collaborative Decision-Making and Brain Computer Interfaces
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Barrero, David F.; McDonald-Maier, Klaus
2013-01-01
This paper reports a first step toward a brain-computer interface (BCI) for collaborative targeting. Specifically, we explore, from a broad perspective, how the collaboration of a group of people can increase performance on a simple target identification task. To this end, we asked a group of people to identify the location and color of a sequence of targets appearing on the screen and measured the time and accuracy of the responses. The individual results are compared to a collective identification result determined by simple majority voting, with a random choice in case of a draw. The results are promising, as the identification becomes significantly more reliable even with this simple voting and a small number of people (whether odd or even) involved in the decision. In addition, the paper briefly analyzes the role of brain-computer interfaces in collaborative targeting, extending the targeting task by using a BCI instead of a mechanical response.
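The collective decision rule is easy to make concrete; a minimal sketch (with hypothetical votes, not the study's data) of majority voting with a random tie-break:

    # Simple majority voting over individual target identifications,
    # with a random choice when the top votes are tied.
    import random
    from collections import Counter

    def majority_vote(responses, rng=random):
        """responses: list of individual answers, e.g. (location, colour) tuples."""
        counts = Counter(responses)
        top = max(counts.values())
        winners = [answer for answer, c in counts.items() if c == top]
        return rng.choice(winners)          # random tie-break

    # Hypothetical trial: five observers report the target's quadrant and colour.
    votes = [("upper-left", "red"), ("upper-left", "red"), ("lower-right", "red"),
             ("upper-left", "blue"), ("upper-left", "red")]
    print(majority_vote(votes))             # -> ('upper-left', 'red')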
The use of subjective rating of exertion in Ergonomics.
Capodaglio, P
2002-01-01
In Ergonomics, the use of psychophysical methods for subjectively evaluating work tasks and determining acceptable loads has become more common. Daily activities at the work site are studied not only with physiological methods but also with perceptual estimation and production methods. The psychophysical methods are of special interest in field studies of short-term work tasks for which valid physiological measurements are difficult to obtain. The perceived exertion, difficulty and fatigue that a person experiences in a certain work situation is an important sign of a real or objective load. Measurement of the physical load with physiological parameters is not sufficient, since it does not take into consideration the particular difficulty of the performance or the capacity of the individual. It is often difficult, from technical and biomechanical analyses, to understand the seriousness of a difficulty that a person experiences. Physiological determinations give important information, but they may be insufficient due to the technical problems in obtaining relevant but simple measurements for short-term activities or activities involving special movement patterns. Perceptual estimations using Borg's scales give important information because the severity of a task's difficulty depends on the individual doing the work. Observation is the simplest and most widely used means of assessing job demands. Other evaluations that complement observation include the following: indirect estimation of energy expenditure based on prediction equations or direct measurement of oxygen consumption; measurements of forces, angles and biomechanical parameters; and measurements of physiological and neurophysiological parameters during tasks. It is recommended that assessments of occupational activity performance include ratings of perceived exertion and integrate these measurements of intensity with measures of the activity's type, duration and frequency. A better estimate of the degree of physical activity of individuals can thus be obtained.
Rapid measurement and prediction of bacterial contamination in milk using an oxygen electrode.
Numthuam, Sonthaya; Suzuki, Hiroaki; Fukuda, Junji; Phunsiri, Suthiluk; Rungchang, Saowaluk; Satake, Takaaki
2009-03-01
An oxygen electrode was used to measure oxygen consumption to determine bacterial contamination in milk. Dissolved oxygen (DO) measured at 10-35 degrees C for 2 hours provided a reasonable prediction efficiency (r ≥ 0.90) of the amount of bacteria between 1.9 and 7.3 log (CFU/mL). A temperature-dependent predictive model was developed that has the same prediction accuracy as the standard predictive model. The analysis performed with and without stirring provided the same prediction efficiency, with a correlation coefficient of 0.90. The measurement of DO is a simple and rapid method for the determination of bacteria in milk.
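The calibration implied here can be pictured as a simple regression of bacterial load on the oxygen signal; the sketch below uses invented data points (the study's actual model is temperature-dependent and fitted to its own measurements):

    # Illustrative linear fit of log10(CFU/mL) against the drop in dissolved oxygen.
    import numpy as np

    do_drop_mg_per_L = np.array([0.2, 0.6, 1.1, 1.9, 2.6, 3.4])
    log_cfu          = np.array([2.1, 3.0, 4.2, 5.1, 6.2, 7.1])

    slope, intercept = np.polyfit(do_drop_mg_per_L, log_cfu, deg=1)
    r = np.corrcoef(do_drop_mg_per_L, log_cfu)[0, 1]
    print(f"log10(CFU/mL) ~ {slope:.2f} * dDO + {intercept:.2f}  (r = {r:.2f})")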
In-group modulation of perceptual matching.
Moradi, Zargol; Sui, Jie; Hewstone, Miles; Humphreys, Glyn W
2015-10-01
We report a novel effect of in-group bias on a task requiring simple perceptual matching of stimuli. Football fans were instructed to associate the badges of their favorite football team (in-group), a rival team (out-group), and neutral teams with simple geometric shapes. Responses to matching in-group stimuli were more efficient, and discriminability was enhanced, as compared to out-group stimuli (rival and neutral), a result that occurred even when participants responded only to the (equally familiar) geometric shapes. Across individuals, the in-group bias on shape matching was correlated with measures of group satisfaction, and similar results were found when football fans performed the task in the context of both the football ground and a laboratory setting. We also observed effects of in-group bias on the response criteria in some but not all of the experiments. In control studies, the advantage for in-group stimuli was not found in an independent sample of participants who were not football fans. This indicates that there was not an intrinsic advantage for the stimuli that were "in-group" for football fans. Also, performance did not differ for familiar versus unfamiliar stimuli without in-group associations. These findings indicate that group identification can affect simple shape matching.
Continuous performance task in ADHD: Is reaction time variability a key measure?
Levy, Florence; Pipingas, Andrew; Harris, Elizabeth V; Farrow, Maree; Silberstein, Richard B
2018-01-01
To compare the use of Continuous Performance Task (CPT) reaction time variability (intraindividual variability, or the standard deviation of reaction time) as a measure of vigilance in attention-deficit hyperactivity disorder (ADHD) and of stimulant medication response, utilizing a simple CPT X-task vs an A-X-task. Comparative analyses of two separate X-task vs A-X-task data sets, and subgroup analyses of performance on and off medication, were conducted. The CPT X-task reaction time variability had a direct relationship to ADHD clinician severity ratings, unlike the CPT A-X-task. Variability in X-task performance was reduced by medication compared with the children's unmedicated performance, but this effect did not reach significance. When the coefficient of variation was applied, severity measures and medication response were significant for the X-task, but not for the A-X-task. The CPT X-task is a useful clinical screening test for ADHD and medication response. In particular, reaction time variability is related to default mode interference. The A-X-task is less useful in this regard.
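The two dispersion measures compared in this study are simple to compute from a participant's response times; a short sketch with hypothetical values:

    # Standard deviation of reaction time (intraindividual variability) and the
    # coefficient of variation (SD / mean) for one participant's CPT X-task trials.
    import statistics

    rts_ms = [412, 389, 455, 502, 398, 610, 421, 377, 489, 530]   # hypothetical RTs

    mean_rt = statistics.mean(rts_ms)
    sd_rt = statistics.stdev(rts_ms)
    cv = sd_rt / mean_rt
    print(f"mean RT = {mean_rt:.0f} ms, SD = {sd_rt:.0f} ms, CV = {cv:.2f}")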
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Chang, Man-Ling
2012-01-01
This study extended Battery-free wireless mouse functionality to assess whether two people with developmental disabilities would be able to actively perform designated simple occupational activities according to simple instructions by controlling their favorite environmental stimulation using Battery-free wireless mice with a newly developed…
Transient Infrared Measurement of Laser Absorption Properties of Porous Materials
NASA Astrophysics Data System (ADS)
Marynowicz, Andrzej
2016-06-01
Infrared thermography measurements of porous building materials have become more frequent in recent years. Many accompanying techniques for thermal field generation have been developed, including one based on laser radiation. This work presents a simple optimization technique for estimating the laser beam absorption of selected porous building materials, namely clinker brick and cement mortar. The transient temperature measurements were performed with an infrared camera during laser-induced heating of the sample surfaces. As a result, the absorbed fractions of the incident laser beam, together with its shape parameter, are reported.
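One way to picture the optimization described: fit the absorbed fraction of the laser flux to the measured surface-temperature rise using a forward heating model. The one-dimensional semi-infinite-solid model, the material properties and the "measured" data below are all assumptions for illustration, not the paper's method or data.

    # Hedged sketch: estimate the absorbed fraction of a constant laser flux from the
    # transient surface-temperature rise, dT(t) = (2*q_abs/k) * sqrt(alpha*t/pi)
    # (semi-infinite solid, constant surface flux). All values are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    k, rho, cp = 1.0, 1800.0, 900.0           # conductivity, density, heat capacity (assumed)
    alpha = k / (rho * cp)                    # thermal diffusivity, m^2/s
    q_incident = 5.0e4                        # incident laser power density, W/m^2 (assumed)

    def surface_rise(t, absorbed_fraction):
        q_abs = absorbed_fraction * q_incident
        return (2.0 * q_abs / k) * np.sqrt(alpha * t / np.pi)

    t = np.linspace(1.0, 30.0, 30)                       # s
    measured = surface_rise(t, 0.62) + 0.3 * np.random.default_rng(1).standard_normal(t.size)

    (frac_hat,), _ = curve_fit(surface_rise, t, measured, p0=[0.5])
    print(f"estimated absorbed fraction ~ {frac_hat:.2f}")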
Analyzing Oscillations of a Rolling Cart Using Smartphones and Tablets
NASA Astrophysics Data System (ADS)
Egri, Sándor; Szabó, Lóránt
2015-03-01
It is well known that "interactive engagement" helps students to understand basic concepts in physics [1]. Performing experiments and analyzing measured data are effective ways to realize interactive engagement, in our view. Some experiments need special equipment, measuring instruments, or laboratories, but in this activity we advocate student use of mobile phones or tablets to take experimental data. Applying their own devices and measuring simple phenomena from everyday life can improve student interest, while still allowing precise analysis of data, which can give deeper insight into scientific thinking and provide a good opportunity for inquiry-based learning [2].
Student Performance in Measuring Distance with Wavelengths in Various Settings
NASA Astrophysics Data System (ADS)
White, Gary
2015-04-01
When physics students are asked to measure the distance between two fixed locations using a pre-defined wavelength as a ruler, there is a surprising failure rate, at least partially due to the fact that the "ruler" to be used is not fixed in length (see "Is a Simple Measurement Task a Roadblock to Student Understanding of Wave Phenomena?" and references therein). I will show some data from introductory classes (algebra- and calculus-based) that replicate this result, and also show some interesting features when comparing a setting involving slinkies with a setting involving surface waves on water.
Methods to characterize charging effects
NASA Astrophysics Data System (ADS)
Slots, H.
1984-08-01
Methods to characterize charging in insulating material under high voltage dc stress, leading to electrical breakdown, are reviewed. The behavior of the charges can be studied by ac loss angle measurements after application or removal of dc bias. Measurements were performed on oil-paper and oil-Mylar systems. The poor reproducibility of the measurements makes it impossible to draw more than qualitative conclusions about the charging effects. With an ultrasound pressure wave the electric field distribution in a material can be determined. An alternative derivation for the transient response of a system which elucidates the influence of several parameters in a simple way is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickson, Peter; Novak, Alan M.; Foley, Timothy J.
A small number of simple air-gap tests were performed on 1-inch diameter PBX 9502 cylinders to determine an approximate threshold for detonation failure. The primary diagnostics were streak imaging and dent measurements in a steel witness plate. Relight was found to occur, with negligible excess transit time, for air gaps up to 1 mm. Relight did not occur with a 3-mm air gap.
ERIC Educational Resources Information Center
Rule, Audrey C.; Baldwin, Samantha; Schell, Robert
2009-01-01
This repeated measures study examined second graders' (n = 21) performance in creating inventions related to animal adaptations for simple products under two conditions that alternated each week for a six-week period. In the analogy condition, students used form and function analogy object boxes to learn about animal adaptations, applying these…
High-fidelity, low-cost, automated method to assess laparoscopic skills objectively.
Gray, Richard J; Kahol, Kanav; Islam, Gazi; Smith, Marshall; Chapital, Alyssa; Ferrara, John
2012-01-01
We sought to define the extent to which a motion analysis-based assessment system constructed with simple equipment could measure technical skill objectively and quantitatively. An "off-the-shelf" digital video system was used to capture the hand and instrument movement of surgical trainees (beginner level = PGY-1, intermediate level = PGY-3, and advanced level = PGY-5/fellows) while they performed a peg transfer exercise. The video data were passed through a custom computer vision algorithm that analyzed incoming pixels to measure movement smoothness objectively. The beginner-level group had the poorest performance, whereas those in the advanced group generated the highest scores. Intermediate-level trainees scored significantly (p < 0.04) better than beginner trainees. Advanced-level trainees scored significantly better than intermediate-level trainees and beginner-level trainees (p < 0.04 and p < 0.03, respectively). A computer vision-based analysis of surgical movements provides an objective basis for technical expertise-level analysis with construct validity. The technology to capture the data is simple, low cost, and readily available, and it obviates the need for expert human assessment in this setting. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Sustained attention in language production: an individual differences investigation.
Jongman, Suzanne R; Roelofs, Ardi; Meyer, Antje S
2015-01-01
Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources.
Rapid online language mapping with electrocorticography.
Miller, Kai J; Abel, Taylor J; Hebb, Adam O; Ojemann, Jeffrey G
2011-05-01
Emerging research in evoked broadband electrocorticographic (ECoG) measurement from the cortical surface suggests that it might cleanly delineate the functional organization of cortex. The authors sought to demonstrate whether this could be done in a same-session, online manner to identify receptive and expressive language areas. The authors assessed the efficacy of simple integration of "χ-band" (76-200 Hz) change in the ECoG signal by implementing a simple band-pass filter to estimate broadband spectral change. Following a brief (less than 10-second) period to characterize baseline activity, χ-band activity was integrated while 7 epileptic patients with implanted ECoG electrodes performed a verb-generation task. While the patients were performing verb-generation or noun-reading tasks, cortical activation was consistently identified in primary mouth motor area, superior temporal gyrus, and Broca and Wernicke association areas. Maps were robust after a mean time of 47 seconds (using an "activation overlap" measure). Correlation with electrocortical stimulation was not complete and was stronger for noun reading than verb generation. Broadband ECoG changes can be captured online to identify eloquent cortex. This demonstrates the existence of a powerful new tool for functional mapping in the operative and chronic implant setting.
Mizukoshi, Koji; Akamatsu, Hisashi
2013-05-01
Various studies have examined the properties of male skin. However, because these studies mostly involved simple measurement with non-invasive devices, understanding of the properties of male skin remains limited. In this study, we focused not only on simple instrumental measurements but also on gender differences, men's subjective perceptions of their skin, and daily skin care habits. Barrier function depends on corneocyte maturation level as well as sebum amount. Irrespective of skin type, a high percentage of male subjects perceived a 'tacky feeling'. However, the percentage of men perceiving a 'shiny feeling' differed by skin type. Furthermore, there was a relationship between skin care habits and skin function. Men who did not perform a daily skincare regimen demonstrated a significantly higher sebum amount and transepidermal water loss value than those who did. The results of this study indicate that male skin has two specific characteristics: impaired barrier function because of the excess amount of sebum, and a lack of an appropriate skin care regimen because of the 'tacky feeling' caused by excess sebum. © 2012 John Wiley & Sons A/S. Published by Blackwell Publishing Ltd.
Physiological assessment of task underload
NASA Technical Reports Server (NTRS)
Comstock, J. Raymond, Jr.; Harris, Randall L., Sr.; Pope, Alan T.
1988-01-01
The ultimate goal of research efforts directed at underload, boredom, or complacency in high-technology work environments is to detect conditions or states of the operator that can be demonstrated to lead to performance degradation, and then to intervene in the environment to restore acceptable system performance. Physiological measures may provide indices of changes in condition or state of the operator that may be of value in high-technology work environments. The focus of the present study was on the use of physiological measures in the assessment of operator condition or state in a task underload scenario. A fault acknowledgement task characterized by simple repetitive responses with minimal novelty, complexity, and uncertainty was employed to place subjects in a task underload situation. Physiological measures (electrocardiogram (ECG), electroencephalogram (EEG), and pupil diameter) were monitored during task performance over a one-hour test session for 12 subjects. Each of the physiological measures exhibited changes over the test session indicative of decrements in subject arousal level. While high correlations between physiological measures were found across subjects, individual differences between subjects support the use of profiling techniques to establish baselines unique to each subject.
Daily Fluctuations in Everyday Cognition: Is It Meaningful?
Gamaldo, Alyssa A; Allaire, Jason C
2016-08-01
This study examined whether there are daily fluctuations in everyday cognition that are consistent with the daily fluctuations often observed in traditional measures of basic cognitive abilities. Two hundred six independently living older adults (age range = 60-91 years) were asked to complete a computerized cognitive battery over eight occasions within a 2- to 3-week period. Using multilevel modeling, significant within-person variability was observed across the Daily Everyday Cognition Assessment (DECA; 46%), with 54% between-person variability. At each occasion, better performance on the DECA was significantly associated with better performance on simple reaction time (p < .01) and memory (Auditory Verbal Learning Task, p < .01), even after accounting for time, age, education, and performance on other cognitive measures. These findings demonstrate that within-person performance fluctuations can be observed for everyday cognition tasks, and these fluctuations are consistent with daily changes in basic cognitive abilities. © The Author(s) 2015.
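The within- and between-person variance split reported above is the kind of quantity a random-intercept multilevel model yields directly. The sketch below shows one way to obtain it with statsmodels; the file name and column names (person, occasion, deca) are assumptions for illustration, not the study's actual data layout.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per person per testing occasion.
df = pd.read_csv("deca_daily.csv")   # assumed file and column layout

# Random-intercept multilevel model of the DECA score
model = smf.mixedlm("deca ~ occasion", data=df, groups=df["person"])
fit = model.fit()

between_var = fit.cov_re.iloc[0, 0]   # between-person (random intercept) variance
within_var = fit.scale                # residual within-person variance
icc = between_var / (between_var + within_var)
print(f"between-person share: {icc:.0%}, within-person share: {1 - icc:.0%}")
```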
Williamson, A M; Feyer, A M; Mattick, R P; Friswell, R; Finlay-Brown, S
2001-05-01
The effects of 28 h of sleep deprivation were compared with varying doses of alcohol up to 0.1% blood alcohol concentration (BAC) in the same subjects. The study was conducted in the laboratory. Twenty long-haul truck drivers and 19 people not employed as professional drivers acted as subjects. Tests were selected that were likely to be affected by fatigue, including simple reaction time, unstable tracking, dual task, Mackworth clock vigilance test, symbol digit coding, visual search, sequential spatial memory and logical reasoning. While performance effects were seen due to alcohol for all tests, sleep deprivation affected performance on most tests, but had no effect on performance on the visual search and logical reasoning tests. Some tests showed evidence of a circadian rhythm effect on performance, in particular, simple reaction time, dual task, Mackworth clock vigilance, and symbol digit coding, but only for response speed and not response accuracy. Drivers were slower but more accurate than controls on the symbol digit test, suggesting that they took a more conservative approach to performance of this test. This study demonstrated which tests are most sensitive to sleep deprivation and fatigue. The study therefore has established a set of tests that can be used in evaluations of fatigue and fatigue countermeasures.
The time course of corticospinal excitability during a simple reaction time task.
Kennefick, Michael; Maslovat, Dana; Carlsen, Anthony N
2014-01-01
The production of movement in a simple reaction time task can be separated into two time periods: the foreperiod, which is thought to include preparatory processes, and the reaction time interval, which includes initiation processes. To better understand these processes, transcranial magnetic stimulation has been used to probe corticospinal excitability at various time points during response preparation and initiation. Previous research has shown that excitability decreases prior to the "go" stimulus and increases following the "go"; however, these two time frames have been examined independently. The purpose of this study was to measure changes in corticospinal excitability during both the foreperiod and the reaction time interval in a single experiment, relative to a resting baseline level. Participants performed a button press movement in a simple reaction time task, and excitability was measured during rest, the foreperiod, and the reaction time interval. Results indicated that during the foreperiod, excitability levels quickly increased from baseline with the presentation of the warning signal, followed by a period of stable excitability leading up to the "go" signal, and finally a rapid increase in excitability during the reaction time interval. This excitability time course is consistent with neural activation models that describe movement preparation and response initiation.
Correcting For Seed-Particle Lag In LV Measurements
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.
1994-01-01
Two experiments were conducted to evaluate the effects of seed-particle size on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods were used to evaluate the errors. The first experiment focused on measurement of the decelerating stagnation streamline of low-speed flow around a circular cylinder with a two-dimensional afterbody. The second was performed in transonic flow and involved measurement of the decelerating stagnation streamline of a hemisphere with a cylindrical afterbody. It was concluded that mean-quantity LV measurements are subject to large errors directly attributable to particle size. Predictions of particle-response theory showed good agreement with the experimental results, indicating that the velocity-error-correction technique used in the study is viable for increasing the accuracy of laser velocimetry measurements. The technique is simple and useful in any research facility in which flow velocities are measured.
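The particle-response behavior mentioned above can be illustrated with a first-order Stokes-drag model, in which a seed particle relaxes toward the local fluid velocity with time constant tau_p = rho_p d_p^2 / (18 mu). The sketch below integrates that model along an assumed, linearly decelerating stagnation-line velocity profile; the particle properties and the flow profile are illustrative assumptions, not values from either experiment.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stokes-drag response time for a small seed particle (illustrative values)
rho_p = 1000.0       # particle density, kg/m^3
d_p = 1.0e-6         # particle diameter, m
mu = 1.8e-5          # dynamic viscosity of air, Pa*s
tau_p = rho_p * d_p**2 / (18.0 * mu)

U_INF, L = 30.0, 0.1   # assumed free-stream speed (m/s) and deceleration length (m)

def fluid_speed(s):
    """Assumed fluid speed along the stagnation streamline, reaching zero at s = L."""
    return max(0.0, U_INF * (1.0 - s / L))

def rhs(t, state):
    s, v = state
    # Position advances with the particle speed; the particle speed relaxes
    # toward the local fluid speed with time constant tau_p.
    return [v, (fluid_speed(s) - v) / tau_p]

sol = solve_ivp(rhs, [0.0, 0.006], [0.0, U_INF], max_step=1.0e-5)
lag = sol.y[1] - np.array([fluid_speed(s) for s in sol.y[0]])
print(f"particle response time: {tau_p:.2e} s, max velocity lag: {lag.max():.3f} m/s")
```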
Design and measure of a tunable double-band metamaterial absorber in the THz spectrum
NASA Astrophysics Data System (ADS)
Guiming, Han
2018-04-01
We demonstrate and measure a hybrid double-band tunable metamaterial absorber in the terahertz region. The measured metamaterial absorber consists of a hybrid dielectric layer structure: a SU-8 layer and a VO2 layer. Near-perfect double-band absorption is achieved by optimizing the SU-8 layer thickness at a room temperature of 25 °C. Measured results show that the phase transition can be observed when the measured temperature reaches 68 °C. Further measured results indicate that the resonance frequency and absorption amplitude of the proposed metamaterial absorber are tunable by increasing the measured temperature while the structural parameters remain unchanged. The proposed hybrid metamaterial absorber shows many advantages, such as frequency agility, tunable absorption amplitude, and simple fabrication.
Phase sensitive diffraction sensor for high sensitivity refractive index measurement
NASA Astrophysics Data System (ADS)
Kumawat, Nityanand; Varma, Manoj; Kumar, Sunil
2018-02-01
In this study, a diffraction-based sensor has been developed for biomolecular sensing applications and for performing assays in real time. A diffraction grating fabricated on a glass substrate produced diffraction patterns both in transmission and reflection when illuminated by a laser diode. We used the zeroth order I(0,0) as reference and the first order I(0,1) as signal channel and conducted ratiometric measurements that reduced noise by more than 50 times. The ratiometric approach resulted in very simple instrumentation with very high sensitivity. In the past, we have shown refractive index measurements both for bulk and surface adsorption using the diffractive self-referencing approach. In the current work we extend the same concept to higher diffraction orders. We have considered orders I(0,1) and I(1,1) and performed ratiometric measurements I(0,1)/I(1,1) to eliminate common-mode fluctuations. Since orders I(0,1) and I(1,1) behaved oppositely to each other, the resulting ratio signal amplitude increased more than twofold compared to our previous results. As a proof of concept we used different salt concentrations in DI water. Increased signal amplitude and an improved fluid injection system resulted in a more than fourfold improvement in detection limit, giving a limit of detection of 1.3×10⁻⁷ refractive index units (RIU) compared to our previous results. The improved refractive index sensitivity will benefit high-sensitivity, label-free biosensing applications in a very cost-effective and simple experimental set-up.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
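A simple way to see why the non-exponential holding times reported above call for a semi-Markov treatment is to compare the tail of a heavy-tailed holding-time distribution with the exponential distribution that has the same mean. The Python sketch below does this with an assumed Weibull distribution standing in for measured holding times; the shape, scale, and threshold are illustrative, not values from the IBM 3081 data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed heavy-tailed holding times in an operational state (hours).
holding = 10.0 * rng.weibull(0.6, size=100_000)

mean_holding = holding.mean()
t = 50.0   # how often does a holding time exceed 50 hours?

empirical_tail = (holding > t).mean()
markov_tail = np.exp(-t / mean_holding)   # exponential model with the same mean

print(f"empirical P(T > {t:.0f} h):            {empirical_tail:.4f}")
print(f"exponential (Markov) P(T > {t:.0f} h): {markov_tail:.4f}")
```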
Optimal MEMS device for mobility and zeta potential measurements using DC electrophoresis.
Karam, Pascal R; Dukhin, Andrei; Pennathur, Sumita
2017-05-01
We have developed a novel microchannel geometry that allows us to perform simple DC electrophoresis to measure the electrophoretic mobility and zeta potential of analytes and particles. In standard capillary geometries, mobility measurements using DC fields are difficult to perform. Specifically, measurements in open capillaries require knowledge of the hard-to-measure and often dynamic wall surface potential. Although measurements in closed capillaries eliminate this requirement, the measurements must be performed at infinitesimally small regions of zero flow where the pressure-driven flow completely cancels the electroosmotic flow (Komagata planes). Furthermore, applied DC fields lead to electrode polarization, further questioning the reliability and accuracy of the measurement. In contrast, our geometry expands and moves the Komagata planes to where velocity gradients are at a minimum, and thus knowledge of the precise location of a Komagata plane is not necessary. Additionally, our microfluidic device prevents electrode polarization because of fluid recirculation around the electrodes. We fabricated our device using standard MEMS fabrication techniques and performed electrophoretic mobility measurements on 500 nm fluorescently tagged polystyrene particles at various buffer concentrations. Results are comparable to two different commercial dynamic light scattering based particle sizing instruments. We conclude with guidelines to further develop this robust electrophoretic tool that allows for facile and efficient particle characterization. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A study of intelligence and personality in children with simple obesity.
Li, X
1995-05-01
The objective of this study was to investigate differences in measures of intelligence and personality between obese and normal-weight children. The Wechsler Intelligence Scale (IQ) for Children (revised) and the Eysenck Personality Questionnaire (EPQ) were administered to 102 children with simple obesity and their controls in a case-controlled design. The mean age of the children was 9.8 years and they all attended primary school in Nanjing, PRC. It was found that children in the severe obesity category (> 50% overweight) had a significantly lower performance IQ score than the controls, and a significantly higher EPQ psychoticism score. These differences were not observed in children with milder degrees of obesity.
A Simple Model for Nonlinear Confocal Ultrasonic Beams
NASA Astrophysics Data System (ADS)
Zhang, Dong; Zhou, Lin; Si, Li-Sheng; Gong, Xiu-Fen
2007-01-01
A confocally and coaxially arranged pair of focused transmitter and receiver represents one of the best geometries for medical ultrasonic imaging and non-invasive detection. We develop a simple theoretical model for describing the nonlinear propagation of a confocal ultrasonic beam in biological tissues. On the basis of the parabolic approximation and the quasi-linear approximation, the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation is solved by using the angular spectrum approach. A Gaussian superposition technique is applied to simplify the solution, and an analytical solution for the second harmonic in the confocal ultrasonic beam is presented. Measurements are performed to examine the validity of the theoretical model. This model provides a preliminary model for acoustic nonlinear microscopy.
Phased Retrofits in Existing Homes in Florida Phase II. Shallow Plus Retrofits
DOE Office of Scientific and Technical Information (OSTI.GOV)
K. Sutherland; Parker, D.; Martin, E.
2016-02-01
The BAPIRC team and Florida Power and Light (FPL) electric utility pursued a pilot phased energy-efficiency retrofit program in Florida by creating detailed data on the energy and economic performance of two levels of retrofit - simple and deep. For this Phased Deep Retrofit (PDR) project, a total of 56 homes spread across the utility partner's territory in east central Florida, southeast Florida, and southwest Florida were instrumented between August 2012 and January 2013, and received simple pass-through retrofit measures during the period of March 2013 - June 2013. Ten of these homes received a deeper package of retrofits during August 2013 - December 2013.
A simple method for assessing occupational exposure via the one-way random effects model.
Krishnamoorthy, K; Mathew, Thomas; Peng, Jie
2016-11-01
A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
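The MOVER idea referenced above combines separately computed confidence limits for two parameters into limits for a function of them, most simply their sum. The sketch below implements that generic combination rule (method of variance estimates recovery); it is not the paper's specific estimator for the one-way random-effects model, and the example numbers are made up.

```python
import math

def mover_sum_ci(est1, ci1, est2, ci2):
    """Generic MOVER confidence limits for the sum of two parameters,
    given each parameter's point estimate and its own (lower, upper) CI."""
    l1, u1 = ci1
    l2, u2 = ci2
    point = est1 + est2
    lower = point - math.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
    upper = point + math.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
    return lower, upper

# Illustrative use: a CI for the log-scale mean plus a CI for half the total
# variance (as arises for a log-normal mean exposure); numbers are made up.
print(mover_sum_ci(1.2, (0.9, 1.5), 0.4, (0.2, 0.8)))
```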
NASA Technical Reports Server (NTRS)
Hayati, Samad; Tso, Kam; Roston, Gerald
1988-01-01
Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained with a PUMA 560 and simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into a robot language, RCCL, and used in the NASA Telerobot Testbed.
Simple method for experimentally testing any form of quantum contextuality
NASA Astrophysics Data System (ADS)
Cabello, Adán
2016-03-01
Contextuality provides a unifying paradigm for nonclassical aspects of quantum probabilities and resources of quantum information. Unfortunately, most forms of quantum contextuality remain experimentally unexplored due to the difficulty of performing sequences of projective measurements on individual quantum systems. Here we show that two-point correlations between binary compatible observables are sufficient to reveal any form of contextuality. This allows us to design simple experiments that are more robust against imperfections and easier to analyze, thus opening the door for observing interesting forms of contextuality, including those requiring quantum systems of high dimensions. In addition, it allows us to connect contextuality to communication complexity scenarios and reformulate a recent result relating contextuality and quantum computation.
Trending in Probability of Collision Measurements
NASA Technical Reports Server (NTRS)
Vallejo, J. J.; Hejduk, M. D.; Stamey, J. D.
2015-01-01
A simple model is proposed to predict the behavior of Probabilities of Collision (P(sub c)) for conjunction events. The model attempts to predict the location and magnitude of the peak P(sub c) value for an event by assuming the progression of P(sub c) values can be modeled to first order by a downward-opening parabola. To incorporate prior information from a large database of past conjunctions, the Bayes paradigm is utilized; and the operating characteristics of the model are established through a large simulation study. Though the model is simple, it performs well in predicting the temporal location of the peak (P(sub c)) and thus shows promise as a decision aid in operational conjunction assessment risk analysis.
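As a rough, non-Bayesian illustration of the first-order model described above, the sketch below fits a quadratic to a series of Pc values reported at successive times before closest approach and, when the fit is concave (downward opening), reads off the predicted peak location and magnitude. The times and Pc values are made up for the example, and the paper's Bayesian prior over past conjunctions is omitted.

```python
import numpy as np

# Illustrative conjunction event: days to closest approach and reported Pc values
t = np.array([-5.0, -4.0, -3.0, -2.5, -2.0, -1.5, -1.0])
pc = np.array([2e-6, 8e-6, 3e-5, 5e-5, 6e-5, 5.5e-5, 4e-5])

# Least-squares quadratic fit (first-order parabola model, no prior)
a, b, c = np.polyfit(t, pc, deg=2)

if a < 0:   # downward-opening parabola
    t_peak = -b / (2.0 * a)
    pc_peak = np.polyval([a, b, c], t_peak)
    print(f"predicted peak Pc ~ {pc_peak:.2e} at t = {t_peak:.2f} days")
else:
    print("fit is not concave; the simple parabola model does not apply here")
```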
Rapid measurement of 89,90Sr radioactivity in rinse water.
Masashi, Takada; Hiroko, Enomoto; Toshikazu, Suzuki
2013-03-01
Rapid measurement of radioactivity from Sr in aqueous solutions is performed using a technique combining a strontium rad disk and a picobeta spectrometer. Identification of Sr radionuclides is accomplished in as little as 90 min in a radiation-tainted solution that contains more highly radioactive cesium. It is possible to perform triage by assessing skin exposure doses in this short time. This simple technique could be used in mobile laboratories. Sr radioactivities of 1 Bq were measured in the presence of 10 kBq of Cs in aqueous solution. The radioactivity contained in rinse water used to decontaminate the feet of workers who stepped into highly contaminated water in the basement of the turbine building of Unit 3 at the Fukushima Daiichi nuclear power station was measured. The Sr radioactivity in rinse water obtained with the authors' rapid measurement technique (0.29 Bq/mL) and with a traditional method agree well, with a 3.6% difference. Based on this agreement, this technique is confirmed to be useful for rapid measurement of Sr radioactivities.
Further Simplification of the Simple Erosion Narrowing Score With Item Response Theory Methodology.
Oude Voshaar, Martijn A H; Schenk, Olga; Ten Klooster, Peter M; Vonkeman, Harald E; Bernelot Moens, Hein J; Boers, Maarten; van de Laar, Mart A F J
2016-08-01
To further simplify the simple erosion narrowing score (SENS) by removing scored areas that contribute the least to its measurement precision according to analysis based on item response theory (IRT) and to compare the measurement performance of the simplified version to the original. Baseline and 18-month data of the Combinatietherapie Bij Reumatoide Artritis (COBRA) trial were modeled using longitudinal IRT methodology. Measurement precision was evaluated across different levels of structural damage. SENS was further simplified by omitting the least reliably scored areas. Discriminant validity of SENS and its simplification were studied by comparing their ability to differentiate between the COBRA and sulfasalazine arms. Responsiveness was studied by comparing standardized change scores between versions. SENS data showed good fit to the IRT model. Carpal and feet joints contributed the least statistical information to both erosion and joint space narrowing scores. Omitting the joints of the foot reduced measurement precision for the erosion score in cases with below-average levels of structural damage (relative efficiency compared with the original version ranged 35-59%). Omitting the carpal joints had minimal effect on precision (relative efficiency range 77-88%). Responsiveness of a simplified SENS without carpal joints closely approximated the original version (i.e., all Δ standardized change scores were ≤0.06). Discriminant validity was also similar between versions for both the erosion score (relative efficiency = 97%) and the SENS total score (relative efficiency = 84%). Our results show that the carpal joints may be omitted from the SENS without notable repercussion for its measurement performance. © 2016, American College of Rheumatology.
A directional cylindrical anemometer with four sets of differential pressure sensors
NASA Astrophysics Data System (ADS)
Liu, C.; Du, L.; Zhao, Z.
2016-03-01
This paper presents a solid-state directional anemometer for simultaneously measuring the speed and direction of wind in a plane over a speed range of 1-40 m/s. The instrument has a cylindrical shape and works by detecting the pressure differences across diameters of the cylinder when exposed to wind. By analyzing our experimental data in a Reynolds number regime of 1.7 × 10³ to 7 × 10⁴, we determine the relationship between the pressure difference distribution and the wind velocity. We propose a novel and simple solution based on this relationship and design an anemometer composed of a circular cylinder with four sets of differential pressure sensors, tubes connecting these sensors to the cylinder's surface, and corresponding circuits. In the absence of moving parts, the instrument is small and immune to friction. It has a simple internal structure, and the fragile sensing elements are well protected. Prototypes have been fabricated to evaluate the performance of the proposed approach. The power consumption of the prototype is less than 0.5 W, and the sample rate is up to 31 Hz. Test results in a wind tunnel indicate that the maximum relative speed measurement error is 5% and the direction error is no more than 5° over a speed range of 2-40 m/s. In theory, the device is capable of measuring wind up to 60 m/s. When the air stream is slower than 2 m/s, the direction errors are slightly greater and the speed measurement performance degrades, but it remains within an acceptable range of ±0.2 m/s.
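The mapping from differential pressures to wind speed and direction is established empirically in the paper; the sketch below only illustrates the generic geometric idea, projecting the signed pressure differences onto the orientations of the instrumented diameters to estimate a direction, and applying an assumed dynamic-pressure scaling (with a made-up calibration factor K_CAL) for the speed. All names and numbers here are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

RHO_AIR = 1.2    # kg/m^3
K_CAL = 1.5      # dimensionless calibration factor (assumed, device-specific)

def wind_from_pressure_pairs(dp, angles_deg):
    """Estimate wind speed and direction from differential pressures measured
    across several diameters of a cylinder (generic heuristic, not the paper's
    wind-tunnel calibration).

    dp         : pressure differences (Pa) across each instrumented diameter
    angles_deg : orientation of each diameter in the horizontal plane
    """
    angles = np.deg2rad(angles_deg)
    x = np.sum(dp * np.cos(angles))          # project differences onto axes
    y = np.sum(dp * np.sin(angles))
    direction = np.rad2deg(np.arctan2(y, x)) % 360.0
    magnitude = np.hypot(x, y) / len(dp)     # representative pressure difference
    speed = np.sqrt(2.0 * magnitude / (RHO_AIR * K_CAL))
    return speed, direction

# Four diameters spaced 45 degrees apart (illustrative readings in Pa)
print(wind_from_pressure_pairs(np.array([120.0, 85.0, 0.0, -85.0]),
                               [0.0, 45.0, 90.0, 135.0]))
```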
An Anatomic and Biomechanical Comparison of Bankart Repair Configurations.
Judson, Christopher H; Voss, Andreas; Obopilwe, Elifho; Dyrna, Felix; Arciero, Robert A; Shea, Kevin P
2017-11-01
Suture anchor repair for anterior shoulder instability can be performed using a number of different repair techniques, but none has been proven superior in terms of anatomic and biomechanical properties. Purpose/Hypothesis: The purpose was to compare the anatomic footprint coverage and biomechanical characteristics of 4 different Bankart repair techniques: (1) single row with simple sutures, (2) single row with horizontal mattress sutures, (3) double row with sutures, and (4) double row with labral tape. The hypotheses were as follows: (1) double-row techniques would improve the footprint coverage and biomechanical properties compared with single-row techniques, (2) horizontal mattress sutures would increase the footprint coverage compared with simple sutures, and (3) repair techniques with labral tape and sutures would not show different biomechanical properties. Controlled laboratory study. Twenty-four fresh-frozen cadaveric specimens were dissected. The native labrum was removed and the footprint marked and measured. Repair for each of the 4 groups was performed, and the uncovered footprint was measured using a 3-dimensional digitizer. The strength of the repair sites was assessed using a servohydraulic testing machine and a digital video system to record load to failure, cyclic displacement, and stiffness. The double-row repair techniques with sutures and labral tape covered 73.4% and 77.0% of the footprint, respectively. These percentages were significantly higher than the footprint coverage achieved by single-row repair techniques using simple sutures (38.1%) and horizontal mattress sutures (32.8%) ( P < .001). The footprint coverage of the simple suture and horizontal mattress suture groups was not significantly different ( P = .44). There were no significant differences in load to failure, cyclic displacement, or stiffness between the single-row and double-row groups or between the simple suture and horizontal mattress suture techniques. Likewise, there was no difference in the biomechanical properties of the double-row repair techniques with sutures versus labral tape. Double-row repair techniques provided better coverage of the native footprint of the labrum but did not provide superior biomechanical properties compared with single-row repair techniques. There was no difference in footprint coverage or biomechanical strength between the simple suture and horizontal mattress suture repair techniques. Although the double-row repair techniques had no difference in initial strength, they may improve healing in high-risk patients by improving the footprint coverage.
Simple prognostic model for patients with advanced cancer based on performance status.
Jang, Raymond W; Caraiscos, Valerie B; Swami, Nadia; Banerjee, Subrata; Mak, Ernie; Kaya, Ebru; Rodin, Gary; Bryson, John; Ridley, Julia Z; Le, Lisa W; Zimmermann, Camilla
2014-09-01
Providing survival estimates is important for decision making in oncology care. The purpose of this study was to provide survival estimates for outpatients with advanced cancer, using the Eastern Cooperative Oncology Group (ECOG), Palliative Performance Scale (PPS), and Karnofsky Performance Status (KPS) scales, and to compare their ability to predict survival. ECOG, PPS, and KPS were completed by physicians for each new patient attending the Princess Margaret Cancer Centre outpatient Oncology Palliative Care Clinic (OPCC) from April 2007 to February 2010. Survival analysis was performed using the Kaplan-Meier method. The log-rank test for trend was employed to test for differences in survival curves for each level of performance status (PS), and the concordance index (C-statistic) was used to test the predictive discriminatory ability of each PS measure. Measures were completed for 1,655 patients. PS delineated survival well for all three scales according to the log-rank test for trend (P < .001). Survival was approximately halved for each worsening performance level. Median survival times, in days, for each ECOG level were: ECOG 0, 293; ECOG 1, 197; ECOG 2, 104; ECOG 3, 55; and ECOG 4, 25.5. Median survival times, in days, for PPS (and KPS) were: PPS/KPS 80-100, 221 (215); PPS/KPS 60-70, 115 (119); PPS/KPS 40-50, 51 (49); PPS/KPS 10-30, 22 (29). The C-statistic was similar for all three scales and ranged from 0.63 to 0.64. We present a simple tool that uses PS alone to prognosticate in advanced cancer and has similar discriminatory ability to more complex models. Copyright © 2014 by American Society of Clinical Oncology.
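The analysis pipeline described above (Kaplan-Meier curves per performance-status level, a log-rank comparison, and a concordance index) can be sketched with the lifelines package as below. The file and column names are assumptions for illustration, and an ordinary multivariate log-rank test stands in for the paper's log-rank test for trend.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test
from lifelines.utils import concordance_index

# Hypothetical layout: one row per patient with survival time (days),
# a death indicator, and ECOG performance status at the first clinic visit.
df = pd.read_csv("opcc_cohort.csv")   # assumed file, not the study data

# Kaplan-Meier estimate and median survival per ECOG level
for level, grp in df.groupby("ecog"):
    km = KaplanMeierFitter().fit(grp["days"], grp["died"], label=f"ECOG {level}")
    print(f"ECOG {level}: median survival {km.median_survival_time_:.0f} days")

# Log-rank test across performance-status levels
print(multivariate_logrank_test(df["days"], df["ecog"], df["died"]).p_value)

# Discriminatory ability of the single-variable predictor (C-statistic);
# higher ECOG should predict shorter survival, hence the minus sign.
print(concordance_index(df["days"], -df["ecog"], df["died"]))
```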
Effectiveness of autogenic training in improving motor performances in Parkinson's disease.
Ajimsha, M S; Majeed, Nisar A; Chinnavan, Elanchezhian; Thulasyammal, Ramiah Pillai
2014-06-01
Relaxation training can be an important adjunct in reducing symptoms associated with Parkinson's disease (PD). Autogenic Training (AT) is a simple, easily administered and inexpensive technique for retraining the mind and the body to be able to relax. AT uses visual imagery and body awareness to promote a state of deep relaxation. To investigate whether AT when used as an adjunct to Physiotherapy (PT) improves motor performances in PD in comparison with a control group receiving PT alone. Randomized, controlled, single blinded trial. Movement Disorder Clinic and Department of Physiotherapy, Sree Chithira Thirunal Institute of Medical Sciences and Technology in Trivandrum, Kerala, India. Patients with PD of grade 2 or 3 of Hoehn & Yahr (H&Y) scale (N = 66). AT group or control group. The techniques were administered by Physiotherapists trained in AT and consisted of 40 sessions per patient over 8 weeks. Motor score subscale of Unified Parkinson's Disease Rating Scale (UPDRS) was used to measure the motor performances. The primary outcome measure was the difference in Motor score subscale of UPDRS scores between Week 1 (pretest score), Week 8 (posttest score), and follow-up at Week 12 after randomization. The simple main effects analysis showed that the AT group performed better than the control group in weeks 8 and 12 (P < .005). Patients in the AT and control groups reported a 51.78% and 35.24% improvement, respectively, in their motor performances in Week 8 compared with that in Week 1, which persisted, in the follow-up (Week 12) as 30.82% in the AT group and 21.42% in the control group. This study provides evidence that AT when used as an adjunct to PT is more effective than PT alone in improving motor performances in PD patients. Copyright © 2014 Elsevier Ltd. All rights reserved.
Simple real-time computerized tasks for detection of malingering among murderers with schizophrenia.
Kertzman, Semion; Grinspan, Haim; Birger, Moshe; Shliapnikov, Nina; Alish, Yakov; Ben Nahum, Zeev; Mester, Roberto; Kotler, Moshe
2006-01-01
It is our contention that computer-based two-alternative forced choice techniques can be useful tools for the detection of patients with schizophrenia who feign acute psychotic symptoms and cognitive impairment as opposed to patients with schizophrenia with a true active psychosis. In our experiment, Visual Simple and Choice Reaction Time tasks were used. Reaction time in milliseconds was recorded and accuracy rate was obtained for all subjects' responses. Both types of task were administered to 27 patients with schizophrenia suspected of having committed murder. Patients with schizophrenia who were clinically assessed as malingerers achieved significantly fewer correct results, were significantly slower and less consistent in their reaction time. Congruence of performance between the Simple and Choice tasks was an additional parameter for the accurate diagnosis of malingering. The four parameters of both tests (accuracy of response, reaction time, standard deviation of reaction time and task congruency) are simple and constitute a user-friendly means for the detection of malingering in forensic practice. Another advantage of this procedure is that the software automatically measures and evaluates all the parameters.
Calculation of tip clearance effects in a transonic compressor rotor
NASA Technical Reports Server (NTRS)
Chima, R. V.
1996-01-01
The flow through the tip clearance region of a transonic compressor rotor (NASA rotor 37) was computed and compared to aerodynamic probe and laser anemometer data. Tip clearance effects were modeled both by gridding the clearance gap and by using a simple periodicity model across the ungridded gap. The simple model was run with both the full gap height and with half the gap height to simulate a vena-contracta effect. Comparisons between computed and measured performance maps and downstream profiles were used to validate the models and to assess the effects of gap height on the simple clearance model. Recommendations were made concerning the use of the simple clearance model. Detailed comparisons were made between the gridded clearance gap solution and the laser anemometer data near the tip at two operating points. The computed results agreed fairly well with the data but overpredicted the extent of the casing separation and underpredicted the wake decay rate. The computations were then used to describe the interaction of the tip vortex, the passage shock, and the casing boundary layer.
Impedance microflow cytometry for viability studies of microorganisms
NASA Astrophysics Data System (ADS)
Di Berardino, Marco; Hebeisen, Monika; Hessler, Thomas; Ziswiler, Adrian; Largiadèr, Stephanie; Schade, Grit
2011-02-01
Impedance-based Coulter counters and their derivatives are widely used cell-analysis tools in many laboratories and normally use DC or low-frequency AC to perform these electrical analyses. The emergence of micro-fabrication technologies in the last decade, however, provides a new means of measuring the electrical properties of cells. Microfluidic approaches combined with impedance spectroscopy measurements in the radio frequency (RF) range increase sensitivity and information content and thus push single-cell analyses beyond simple cell counting and sizing applications towards multiparametric cell characterization. Promising results have already been shown in the fields of cell differentiation and blood analysis. Here we emphasize the potential of this technology by presenting new data obtained from viability studies on microorganisms. Impedance measurements of several yeast and bacteria strains performed at frequencies around 10 MHz enable an easy discrimination between dead and viable cells. Moreover, cytotoxic effects of antibiotics and other reagents, as well as cell starvation, can also be monitored easily. Control analyses performed with conventional flow cytometers using various fluorescent dyes (propidium iodide, oxonol) indicate a good correlation and further highlight the capability of this device. The label-free approach on the one hand makes the use of usually expensive fluorochromes obsolete and on the other hand practically eliminates laborious sample preparation procedures. Until now, online cell monitoring was limited to the determination of viable biomass, which provides rather poor information about a cell culture. Impedance microflow cytometry, among other aspects, offers a simple solution to these limitations and might become an important tool for bioprocess monitoring applications in the biotech industry.
Characterization of Louisiana asphalt mixtures using simple performance tests and MEPDG.
DOT National Transportation Integrated Search
2014-04-01
The National Cooperative Highway Research Program (NCHRP) Project 9-19, Superpave Support and Performance Models Management, recommended three Simple Performance Tests (SPTs) to complement the Superpave volumetric mixture design method. These are...
Measurement of luminescence decays: High performance at low cost
NASA Astrophysics Data System (ADS)
Sulkes, Mark; Sulkes, Zoe
2011-11-01
The availability of inexpensive ultrabright LEDs spanning the visible and near-ultraviolet, combined with inexpensive electronics equipment, makes it possible to construct a high-performance luminescence lifetime apparatus (~5 ns instrumental response or better) at low cost. A central need for time-domain measurement systems is the ability to obtain short (~1 ns or less) excitation light pulses from the LEDs. It is possible to build the necessary LED driver using a simple avalanche transistor circuit. We first describe a circuit to test for small-signal NPN transistors that can avalanche. We then describe a final optimized avalanche-mode circuit that we developed on a prototyping board by measuring the driven light pulse duration as a function of the circuit on the board and the passive component values. We demonstrate that the combination of the LED pulser and a 1P28 photomultiplier tube used in decay waveform acquisition has a time response that allows for detection and lifetime determination of luminescence decays down to ~5 ns. The time response and data quality afforded with the same components in time-correlated single photon counting are even better. For time-correlated single photon counting, an even simpler NAND-gate based LED driver circuit is also applicable. We also demonstrate the possible utility of a simple frequency-domain method for luminescence lifetime determinations.
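Once a decay waveform has been acquired, the lifetime is typically extracted by fitting an exponential model. The sketch below fits a mono-exponential decay with scipy to a synthetic waveform; it ignores convolution with the instrument response function, and the lifetime, sampling interval, and noise level are made-up illustration values.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, baseline):
    """Mono-exponential luminescence decay model (IRF deconvolution omitted)."""
    return amplitude * np.exp(-t / tau) + baseline

# Synthetic waveform: a 20 ns decay sampled every 0.5 ns with added noise
rng = np.random.default_rng(1)
t_ns = np.arange(0.0, 200.0, 0.5)
counts = decay(t_ns, 1000.0, 20.0, 5.0) + rng.normal(0.0, 5.0, t_ns.size)

popt, pcov = curve_fit(decay, t_ns, counts, p0=[800.0, 10.0, 0.0])
tau_fit = popt[1]
tau_err = np.sqrt(np.diag(pcov))[1]
print(f"fitted lifetime: {tau_fit:.1f} +/- {tau_err:.1f} ns")
```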
Development and validation of a predictive equation for lean body mass in children and adolescents.
Foster, Bethany J; Platt, Robert W; Zemel, Babette S
2012-05-01
Lean body mass (LBM) is not easy to measure directly in the field or clinical setting. Equations to predict LBM from simple anthropometric measures, which account for the differing contributions of fat and lean to body weight at different ages and levels of adiposity, would be useful to both human biologists and clinicians. To develop and validate equations to predict LBM in children and adolescents across the entire range of the adiposity spectrum. Dual energy X-ray absorptiometry was used to measure LBM in 836 healthy children (437 females) and linear regression was used to develop sex-specific equations to estimate LBM from height, weight, age, body mass index (BMI) for age z-score and population ancestry. Equations were validated using bootstrapping methods and in a local independent sample of 332 children and in national data collected by NHANES. The mean difference between measured and predicted LBM was −0.12% (95% limits of agreement −11.3% to 8.5%) for males and −0.14% (−11.9% to 10.9%) for females. Equations performed equally well across the entire adiposity spectrum, as estimated by BMI z-score. Validation indicated no over-fitting. LBM was predicted within 5% of measured LBM in the validation sample. The equations estimate LBM accurately from simple anthropometric measures.
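A sex-specific linear regression of the kind described above can be sketched as follows. The file names and column names are assumptions for illustration, and the code does not reproduce the published coefficients; it simply shows fitting on a training sample and checking the percent difference on a validation sample.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical training data: DXA-measured lean body mass plus the simple
# anthropometric predictors named in the abstract (column names assumed).
df = pd.read_csv("lbm_training.csv")

models = {}
for sex, grp in df.groupby("sex"):
    # Sex-specific ordinary least squares, mirroring the abstract's predictor set
    models[sex] = smf.ols(
        "lbm_kg ~ height_cm + weight_kg + age_y + bmi_z + C(ancestry)", data=grp
    ).fit()

# Percent difference between measured and predicted LBM in a validation sample
valid = pd.read_csv("lbm_validation.csv")
for sex, grp in valid.groupby("sex"):
    pred = models[sex].predict(grp)
    pct_diff = 100.0 * (grp["lbm_kg"] - pred) / grp["lbm_kg"]
    print(sex, f"mean difference {pct_diff.mean():.2f}%")
```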
Dettmer, Marius; Pourmoghaddam, Amir; Lee, Beom-Chan; Layne, Charles S.
2016-01-01
Specific activities that require concurrent processing of postural and cognitive tasks may increase the risk for falls in older adults. We investigated whether peripheral receptor sensitivity was associated with postural performance in a dual-task and whether an intervention in form of subthreshold vibration could affect performance. Ten younger (age: 20–35 years) and ten older adults (70–85 years) performed repeated auditory-verbal 1-back tasks while standing quietly on a force platform. Foot sole vibration was randomly added during several trials. Several postural control and performance measures were assessed and statistically analyzed (significance set to α-levels of .05). There were moderate correlations between peripheral sensitivity and several postural performance and control measures (r = .45 to .59). Several postural performance measures differed significantly between older and younger adults (p < 0.05); addition of vibration did not affect outcome measures. Aging affects healthy older adults' performance in dual-tasks, and peripheral sensitivity may be a contributor to the observed differences. A vibration intervention may only be useful when there are more severe impairments of the sensorimotor system. Hence, future research regarding the efficacy of sensorimotor interventions in the form of vibrotactile stimulation should focus on older adults whose balance is significantly affected. PMID:27143967
2013-01-01
Background: Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2–3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. Methods: OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and a simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Results: Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as “general level” (FPC1), “time to peak” (FPC2) and “oscillations” (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (−0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (−0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. Conclusions: FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy. PMID:23327294
Frøslie, Kathrine Frey; Røislien, Jo; Qvigstad, Elisabeth; Godang, Kristin; Bollerslev, Jens; Voldner, Nanna; Henriksen, Tore; Veierød, Marit B
2013-01-17
Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2-3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and a simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as "general level" (FPC1), "time to peak" (FPC2) and "oscillations" (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (-0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (-0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy.
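Functional principal component analysis of the kind used above can be approximated by smoothing each woman's five glucose values onto a fine grid and running an ordinary PCA on the discretized curves. The sketch below uses cubic-spline interpolation in place of the paper's basis-function smoothing; the glucose values are made up for the example.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical data: rows = women, columns = glucose (mmol/L) at 0, 30, 60,
# 90 and 120 minutes of the OGTT (values are made up for the sketch).
times = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
glucose = np.array([
    [4.8, 7.9, 8.3, 6.9, 5.6],
    [4.5, 6.5, 7.8, 7.5, 6.8],
    [5.1, 9.2, 7.1, 6.0, 5.2],
])

# Represent each OGTT as a smooth curve on a fine grid
grid = np.linspace(0.0, 120.0, 121)
curves = np.vstack([CubicSpline(times, row)(grid) for row in glucose])

# Functional PCA approximated by ordinary PCA on the discretized curves
mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
u, s, vt = np.linalg.svd(centered, full_matrices=False)
fpc_scores = u * s                       # one row of FPC scores per woman
explained = s**2 / np.sum(s**2)
print("variance explained by the leading FPCs:", explained[:3].round(3))
```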
Edwards, Elizabeth J; Edwards, Mark S; Lyvers, Michael
2016-08-01
Attentional control theory (ACT) describes the mechanisms associated with the relationship between anxiety and cognitive performance. We investigated the relationship between cognitive trait anxiety, situational stress and mental effort on phonological performance using a simple (forward-) and complex (backward-) word span task. Ninety undergraduate students participated in the study. Predictor variables were cognitive trait anxiety, indexed using questionnaire scores; situational stress, manipulated using ego threat instructions; and perceived level of mental effort, measured using a visual analogue scale. Criterion variables (a) performance effectiveness (accuracy) and (b) processing efficiency (accuracy divided by response time) were analyzed in separate multiple moderated-regression analyses. The results revealed (a) no relationship between the predictors and performance effectiveness, and (b) a significant 3-way interaction on processing efficiency for both the simple and complex tasks, such that at higher effort, trait anxiety and situational stress did not predict processing efficiency, whereas at lower effort, higher trait anxiety was associated with lower efficiency at high situational stress, but not at low situational stress. Our results were in full support of the assumptions of ACT and implications for future research are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
McMillan, Norman; O'Neill, Martina; Smith, Stephen; Hammond, John; Riedel, Sven; Arthure, Kevin; Smith, S.
2009-05-01
A TLDA-microvolume (transmitted light drop analyser) accessory for use with a standard UV-visible fibre spectrophotometer is described. The physics of the elegantly simple optical design is described along with the experimental testing of this accessory. The modelling of the arrangement is fully explored to investigate the performance of the drop spectrophotometer. The design optimizes the focusing to deliver the highest quality spectra, rapid and simple sample handling and, importantly, no detectable carryover on the single quartz drophead. Results of spectral measurements in a laboratory providing NIST standards show the closest correlation between modelled pathlength and experimental measurement for different drop volumes in the range 0.7-3 µl. This instrument accessory delivers remarkably accurate and reproducible results that are good enough to allow the accessory to be used for rapid pipette calibration to avoid the laborious weighing methods currently employed. Measurements on DNA standards and proteins are given to illustrate the main application area of biochemistry for this accessory. The accessory has a measurement range of at least 0-60 A units without sample dilution and, since there exists an accurate volume-pathlength relationship, the drop volume used in any specific measurement or assay should be optimized to minimize the photometric error. Studies demonstrate that the cleaning of the drophead with lab wipes results in no measurable carryover. This important practical result is confirmed from direct reading of the accessory and an analytical balance which was used to perform carryover studies. For further information on the TLDA please contact: Drop Technology, Unit 2, Tallaght Business Park, Whitestown, Dublin 24, Republic of Ireland. email: info@droptechnology.com.
A simple high-performance matrix-free biomass molten carbonate fuel cell without CO2 recirculation
Lan, Rong; Tao, Shanwen
2016-01-01
In previous reports, flowing CO2 at the cathode is essential for either conventional molten carbonate fuel cells (MCFCs) based on molten carbonate/LiAlO2 electrolytes or matrix-free MCFCs. For the first time, we demonstrate a high-performance matrix-free MCFC without CO2 recirculation. At 800°C, power densities of 430 and 410 mW/cm2 are achieved when biomass (bamboo charcoal and wood, respectively) is used as fuel. At 600°C, a stable performance is observed during the measured 90 hours after the initial degradation. In this MCFC, CO2 is produced at the anode when carbon-containing fuels are used. The produced CO2 then dissolves and diffuses to the cathode to react with oxygen in open air, forming the required CO32− or CO42− ions for continuous operation. The dissolved O2− ions may also take part in the cell reactions. This provides a simple new fuel cell technology to directly convert carbon-containing fuels such as carbon and biomass into electricity with high efficiency. PMID:27540588
A simple high-performance matrix-free biomass molten carbonate fuel cell without CO2 recirculation.
Lan, Rong; Tao, Shanwen
2016-08-01
In previous reports, flowing CO2 at the cathode is essential for either conventional molten carbonate fuel cells (MCFCs) based on molten carbonate/LiAlO2 electrolytes or matrix-free MCFCs. For the first time, we demonstrate a high-performance matrix-free MCFC without CO2 recirculation. At 800°C, power densities of 430 and 410 mW/cm2 are achieved when biomass (bamboo charcoal and wood, respectively) is used as fuel. At 600°C, a stable performance is observed during the measured 90 hours after the initial degradation. In this MCFC, CO2 is produced at the anode when carbon-containing fuels are used. The produced CO2 then dissolves and diffuses to the cathode to react with oxygen in open air, forming the required CO32− or CO42− ions for continuous operation. The dissolved O2− ions may also take part in the cell reactions. This provides a simple new fuel cell technology to directly convert carbon-containing fuels such as carbon and biomass into electricity with high efficiency.
Rate of muscle contraction is associated with cognition in women, not in men.
Tian, Qu; Osawa, Yusuke; Resnick, Susan M; Ferrucci, Luigi; Studenski, Stephanie A
2018-05-08
In older persons, lower hand grip strength is associated with poorer cognition. Little is known about how the rate of muscle contraction relates to cognition and upper extremity motor function, and sex differences are understudied. Linear regression, adjusting for age, race, education, body mass index, appendicular lean mass, and knee pain, was used to assess sex-specific cross-sectional associations of peak torque, rate of torque development (RTD), and rate of velocity development (RVD) with cognition and upper extremity motor function. In men (n=447), higher rate-adjusted peak torque and a greater RVD were associated with faster simple finger tapping speed, and a greater RVD was associated with higher nondominant pegboard performance. In women (n=447), higher peak torque was not associated with any measures, but a greater RTD was associated with faster simple tapping speed and higher language performance, and a greater RVD was associated with higher executive function, attention, memory, and nondominant pegboard performance. In women with low isokinetic peak torque, RVD was associated with attention and memory. RVD capacity may reflect neural health, especially in women with low muscle strength.
A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for the exploration of various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
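The idea that a code block's cache behaviour can be predicted from its base array addresses and loop bounds can be illustrated with a toy trace-driven simulation of the matrix-matrix multiplication example. The sketch below assumes a direct-mapped cache and a naive i-j-k loop over row-major double-precision arrays; the cache geometry and address layout are illustrative choices, not the paper's actual model.

    # Toy trace-driven cache model for a naive i-j-k matrix multiply (row-major doubles).
    # Direct-mapped cache; geometry and base addresses are illustrative, not measured.

    def simulate(n=64, cache_lines=256, line_bytes=64, elem_bytes=8):
        base_a, base_b, base_c = 0, n * n * elem_bytes, 2 * n * n * elem_bytes
        tags = [None] * cache_lines            # one tag per direct-mapped line
        hits = misses = 0

        def access(addr):
            nonlocal hits, misses
            line = addr // line_bytes
            idx = line % cache_lines
            if tags[idx] == line:
                hits += 1
            else:
                misses += 1
                tags[idx] = line

        for i in range(n):
            for j in range(n):
                for k in range(n):
                    access(base_a + (i * n + k) * elem_bytes)   # A[i][k]
                    access(base_b + (k * n + j) * elem_bytes)   # B[k][j]
                    access(base_c + (i * n + j) * elem_bytes)   # C[i][j]
        return hits, misses

    if __name__ == "__main__":
        h, m = simulate()
        print("hit rate = %.3f" % (h / (h + m)))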
Vogt, Emelie; MacQuarrie, David; Neary, John Patrick
2012-11-01
Ballistocardiography (BCG) is a non-invasive technology that has been used to record ultra-low-frequency vibrations of the heart, allowing for the measurement of cardiac cycle events including timing and amplitudes of contraction. Recent developments in BCG have made this technology simple to use, as well as time- and cost-efficient in comparison with other more complicated and invasive techniques used to evaluate cardiac performance. Technological advances have been considerably greater since the advent of microprocessors and laptop computers. Along with the history of BCG, this paper reviews the present and future potential benefits of using BCG to measure cardiac cycle events and its application to clinical and applied research. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
EZ and GOSSIP, two new VO compliant tools for spectral analysis
NASA Astrophysics Data System (ADS)
Franzetti, P.; Garill, B.; Fumana, M.; Paioro, L.; Scodeggio, M.; Paltani, S.; Scaramella, R.
2008-10-01
We present EZ and GOSSIP, two new VO compliant tools dedicated to spectral analysis. EZ is a tool to perform automatic redshift measurement; GOSSIP is a tool created to perform the SED fitting procedure in a simple, user friendly and efficient way. These two tools have been developed by the PANDORA Group at INAF-IASF (Milano); EZ has been developed in collaboration with Osservatorio Monte Porzio (Roma) and Integral Science Data Center (Geneve). EZ is released to the astronomical community; GOSSIP is currently in beta-testing.
Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.
Peteranderl, Sonja; Oberauer, Klaus
2018-01-01
This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of extended time for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and on a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (R.T.) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search situation than in the simple background. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search situation than in the complex background. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
NASA Astrophysics Data System (ADS)
Lodwick, Camille J.
This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescence (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer, and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed that increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations by an average of 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone was shown to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate the lead content of a human leg by up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of the lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.
Oosterman, Joukje M; Heringa, Sophie M; Kessels, Roy P C; Biessels, Geert Jan; Koek, Huiberdina L; Maes, Joseph H R; van den Berg, Esther
2017-04-01
Rule induction tests such as the Wisconsin Card Sorting Test require executive control processes, but also the learning and memorization of simple stimulus-response rules. In this study, we examined the contribution of diminished learning and memorization of simple rules to complex rule induction test performance in patients with amnestic mild cognitive impairment (aMCI) or Alzheimer's dementia (AD). Twenty-six aMCI patients, 39 AD patients, and 32 control participants were included. A task was used in which the memory load and the complexity of the rules were independently manipulated. This task consisted of three conditions: a simple two-rule learning condition (Condition 1), a simple four-rule learning condition (inducing an increase in memory load, Condition 2), and a complex biconditional four-rule learning condition (inducing an increase in complexity and, hence, executive control load, Condition 3). Performance of AD patients declined disproportionately when the number of simple rules that had to be memorized increased (from Condition 1 to 2). An additional increment in complexity (from Condition 2 to 3) did not, however, disproportionately affect performance of the patients. Performance of the aMCI patients did not differ from that of the control participants. In the patient group, correlation analysis showed that memory performance correlated with Condition 1 performance, whereas executive task performance correlated with Condition 2 performance. These results indicate that the reduced learning and memorization of underlying task rules explains a significant part of the diminished complex rule induction performance commonly reported in AD, although results from the correlation analysis suggest involvement of executive control functions as well. Taken together, these findings suggest that care is needed when interpreting rule induction task performance in terms of executive function deficits in these patients.
Design and Implementation of a Distributed Version of the NASA Engine Performance Program
NASA Technical Reports Server (NTRS)
Cours, Jeffrey T.
1994-01-01
Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.
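A minimal sketch of the kind of analytical speed-up model described, with N independent engine cases distributed over P workstations and a serial per-case dispatch overhead, is shown below; the formula and parameter values are generic assumptions for illustration, not the report's actual model.

    import math

    # Generic speed-up model for distributing N independent engine cases over P workers.
    # t_case: compute time per case; t_dispatch: serial per-case distribution overhead.
    # These assumptions stand in for the report's own analytical model.

    def predicted_speedup(n_cases, n_workers, t_case, t_dispatch):
        t_serial = n_cases * t_case
        t_parallel = n_cases * t_dispatch + math.ceil(n_cases / n_workers) * t_case
        return t_serial / t_parallel

    if __name__ == "__main__":
        for p in (2, 4, 8, 16):
            print(p, "workers ->", round(predicted_speedup(256, p, t_case=2.0, t_dispatch=0.05), 2))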
Motor abilities and anthropometrics in youth cross-country skiing.
Stöggl, R; Müller, E; Stöggl, T
2015-02-01
The purposes were to validate whether general motor abilities and anthropometrics are determinants of youth cross-country (XC) skiing performance, to evaluate gender-specific differences, and to establish noninvasive diagnostics. Fifty-one youth XC skiers (34 boys, 13.8 ± 0.6 years, and 17 girls, 13.4 ± 0.9 years) performed motor skill and laboratory tests, and anthropometric data were collected and correlated with XC skiing performance. Anthropometrics and maturity status were related to boys' but not to girls' XC skiing performance. Push-ups and 20-m sprint were correlated with XC skiing performance in both boys and girls. XC skiing performance of boys was predominantly influenced by upper body and trunk strength capacities (medicine ball throw, push-ups, and pull-ups) and jumping power (standing long and triple jump), whereas XC skiing of girls was mainly influenced by aerobic capacities (3000-m run). Laboratory measures did not reveal greater correlations with XC skiing performance than simple test concepts of speed, strength, and endurance. Maturity was a major confounding variable in boys but not girls. Use of noninvasive simple test concepts for determination of upper body strength, speed, and endurance represents practicable support for ski clubs, schools, or skiing federations in the guidance and evaluation of young talent, being aware of the effect of maturity, especially in boys. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Area 18 of the cat: the first step in processing visual movement information.
Orban, G A
1977-01-01
In cats, responses of area 18 neurons to different moving patterns were measured. The influence of three movement parameters (direction, angular velocity, and amplitude of movement) was tested. The results indicate that in area 18 no ideal movement detector exists, but that simple and complex cells each perform complementary operations of the primary visual areas, i.e. analysis and detection of movement.
ATTENUATION OF COBALT-60 RADIATION FROM A SOURCE DISTRIBUTED AROUND A CONCRETE BLOCKHOUSE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batter, J.F.; Starbird, A.W.
1961-06-15
Two radiation-shielding experiments were performed upon a simple blockhouse structure. The blockhouse was exposed to a simulated fallout field, and the radiation penetrating the structure was measured. The radiation field was produced by circulating a sealed cobalt-60 source through polyethylene tubing predistributed over an octant centered on the test building. Experimental details are described and results tabulated. (auth)
Errors in retarding potential analyzers caused by nonuniformity of the grid-plane potential.
NASA Technical Reports Server (NTRS)
Hanson, W. B.; Frame, D. R.; Midgley, J. E.
1972-01-01
One aspect of the degradation in performance of retarding potential analyzers caused by potential depressions in the retarding grid is quantitatively estimated from laboratory measurements and theoretical calculations. A simple expression is obtained that permits the use of laboratory measurements of grid properties to make first-order corrections to flight data. Systematic positive errors in ion temperature of approximately 16% for the Ogo 4 instrument and 3% for the Ogo 6 instrument are deduced. The effects of the transverse electric fields arising from the grid potential depressions are not treated.
Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors
NASA Astrophysics Data System (ADS)
Liu, Y.; Strum, R.; Stiles, D.; Long, C.; Rakhman, A.; Blokland, W.; Winder, D.; Riemer, B.; Wendel, M.
2018-03-01
We describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth that is limited only by the data acquisition system. The proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry-Perot sensors for measurement of strains and vibrations.
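One common way to recover phase from a pair of phase-shifted interferometer outputs is quadrature (I/Q) demodulation followed by unwrapping; the sketch below applies this to synthetic signals and is a generic illustration rather than the specific demodulation scheme derived in the paper.

    import numpy as np

    # Generic quadrature phase demodulation: recover a phase signal from two
    # interferometer outputs shifted by 90 degrees. Synthetic data for illustration.

    fs = 100_000.0                          # sample rate (Hz), illustrative
    t = np.arange(0, 0.01, 1 / fs)
    true_phase = 2.5 * np.sin(2 * np.pi * 500 * t)   # simulated strain-induced phase (rad)

    i_signal = np.cos(true_phase)           # in-phase output
    q_signal = np.sin(true_phase)           # quadrature (90-degree shifted) output

    recovered = np.unwrap(np.arctan2(q_signal, i_signal))
    print("max phase error (rad):", np.max(np.abs(recovered - true_phase)))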
Lin, Zhichao; Wu, Zhongyu
2009-05-01
A rapid and reliable radiochemical method coupled with a simple and compact plating apparatus was developed, validated, and applied for the analysis of 210Po in a variety of food products and bioassay samples. The method performance characteristics, including accuracy, precision, robustness, and specificity, were evaluated along with a detailed measurement uncertainty analysis. With high Po recovery, improved energy resolution, and effective removal of interfering elements by chromatographic extraction, the overall method accuracy was determined to be better than 5%, with measurement precision of 10%, at the 95% confidence level.
NASA Astrophysics Data System (ADS)
Nakanishi, Y.; Taniguchi, M.; Nakamura, M. M.; Hasegawa, J.; Ohyama, R.; Nakamura, M.; Yoshizawa, M.; Tsujimoto, M.; Nakatsuji, S.
2018-05-01
We have performed ultrasound measurements on the non-Kramers doublet system PrV2Al20 in order to investigate the multi-quadrupolar phase appearing at low temperatures. Elastic anomalies and their systematic magnetic field evolution were clearly observed in the temperature dependence of the elastic constant C44(T). We discuss the possible origin and implications of the rich variety of phases emerging from the simple ground state: the well-isolated non-Kramers doublet Γ3 subspace.
Simon, E G; Arthuis, C J; Haddad, G; Bertrand, P; Perrotin, F
2015-03-01
In the first trimester of pregnancy, a biparietal diameter (BPD) below the 5th percentile is a simple marker that enables the prenatal detection of half of all cases of open spina bifida. We hypothesized that relating the BPD measurement to the transverse abdominal diameter (TAD) might be another simple and effective screening method. In this study we assessed the performance of using the BPD/TAD ratio during the first trimester of pregnancy in screening for open spina bifida. A total of 20,551 first-trimester ultrasound scans (11-13 weeks' gestation), performed between 2000 and 2013, were analyzed retrospectively; there were 26 cases of open spina bifida and 17,665 unaffected pregnancies with a crown-rump length of 45-84 mm and a record of both BPD and TAD measurements. The mean (± SD) BPD/TAD ratio was 1.00 ± 0.06 for fetuses with spina bifida and 1.13 ± 0.06 for those without (P < 0.0001). A BPD ≤ 5th percentile enabled the prenatal detection of 46.2% of spina bifida cases, while a BPD/TAD ratio of ≤ 1.00 detected 69.2%. If we considered cases in which either BPD was ≤ 5th percentile or BPD/TAD ratio was ≤ 1, we identified 76.9% of cases. In the latter case, the false-positive rate was 5.1%, while that for using a combination of both BPD ≤ 5th percentile and BPD/TAD ratio ≤ 1 was 0.6%, with a sensitivity of 38.5%. The positive predictive value of using a combination of BPD ≤ 5th percentile and BPD/TAD ratio ≤ 1 for detecting spina bifida was 8.5%. Between 11 and 13 weeks' gestation, relating BPD to TAD improves considerably the diagnostic performance of using BPD measurement alone in screening for open spina bifida. Screening using this marker is simple and applicable to a large population. Copyright © 2014 ISUOG. Published by John Wiley & Sons Ltd.
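The combined decision rule reported (flag a fetus when BPD is at or below the 5th percentile or when BPD/TAD ≤ 1) can be written as a simple function; the 5th-percentile lookup below is a hypothetical placeholder, since the reference curve itself is not given in the abstract.

    # Sketch of the screening rule: flag if BPD <= 5th percentile OR BPD/TAD <= 1.0.
    # bpd_5th_percentile() is a hypothetical stand-in for a CRL-specific reference curve.

    def bpd_5th_percentile(crl_mm):
        """Placeholder reference: 5th percentile of BPD (mm) for a given crown-rump length."""
        return 0.25 * crl_mm + 2.0          # illustrative linear stand-in, not a published curve

    def flag_for_spina_bifida(bpd_mm, tad_mm, crl_mm):
        low_bpd = bpd_mm <= bpd_5th_percentile(crl_mm)
        low_ratio = (bpd_mm / tad_mm) <= 1.0
        return low_bpd or low_ratio

    if __name__ == "__main__":
        print(flag_for_spina_bifida(bpd_mm=17.0, tad_mm=17.5, crl_mm=60.0))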
NASA Astrophysics Data System (ADS)
Deufel, Christopher L.; Furutani, Keith M.
2014-02-01
As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions.
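Because the abstract does not reproduce the algebraic formulation, the sketch below only illustrates the general idea of an independent simple optimization used as a cross-check: a least-squares solve of dwell times against a dose matrix at sampled target points, with crude non-negativity clamping. It is not the variance-based algorithm of the paper.

    import numpy as np

    # Generic illustration of a simple independent optimization for QA purposes:
    # solve for dwell times t >= 0 so that dose_matrix @ t approximates the prescription
    # at sampled target-surface points. Synthetic data; NOT the paper's algorithm.

    rng = np.random.default_rng(0)
    n_points, n_dwells = 40, 10
    dose_matrix = rng.uniform(0.5, 2.0, size=(n_points, n_dwells))   # cGy per second, synthetic
    prescription = np.full(n_points, 600.0)                          # cGy, synthetic

    dwell_times, *_ = np.linalg.lstsq(dose_matrix, prescription, rcond=None)
    dwell_times = np.clip(dwell_times, 0.0, None)                    # crude non-negativity

    achieved = dose_matrix @ dwell_times
    print("mean target dose: %.1f cGy, coefficient of variation: %.3f"
          % (achieved.mean(), achieved.std() / achieved.mean()))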
Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning
2016-12-09
Strong demands for accurate non-cooperative target measurement have been arising recently for the tasks of assembling and capturing. Spherical objects are one of the most common targets in these applications. However, the performance of traditional vision-based reconstruction methods is limited for practical use when handling poorly textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework for estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D position of the laser spots on the target surface and achieve refined results via an optimized scheme. The experimental results show that our proposed calibration method obtains a fine calibration result, comparable to state-of-the-art LRF-based methods, and that our calibrated system can estimate the geometric parameters with high accuracy in real time.
Wong, Elaine M; Ormiston, Margaret E; Haselhuhn, Michael P
2011-12-01
Researchers have theorized that innate personal traits are related to leadership success. Although links between psychological characteristics and leadership success have been well established, research has yet to identify any objective physical traits of leaders that predict organizational performance. In the research reported here, we identified leaders' facial structure as a specific physical trait that correlates with organizational performance. Specifically, we found that firms whose male CEOs have wider faces (relative to facial height) achieve superior financial performance. Decision-making dynamics within a firm's leadership team moderate this effect, such that the relationship between a given CEO's facial measurements and his firm's financial performance is stronger in firms with cognitively simple leadership teams.
DC and AC conductivity properties of bovine dentine hydroxyapatite (BDHA)
NASA Astrophysics Data System (ADS)
Dumludag, F.; Gunduz, O.; Kılıc, O.; Ekren, N.; Kalkandelen, C.; Ozbek, B.; Oktar, F. N.
2017-12-01
Bovine dentine bio-waste is a potential natural source of hydroxyapatite (BDHA); extracting BDHA from this bio-waste in a simple, economical, and environmentally preferable way is therefore important. The DC and AC conductivity properties of BDHA were investigated at room temperature as a function of sintering temperature (1000°C-1300°C) in air and vacuum (<10−2 mbar) ambients. DC conductivity measurements were performed between -1 and 1 V. AC conductivity measurements were performed in the frequency range 40 Hz-100 kHz. The DC conductivity of the BDHA decreases with increasing sintering temperature for samples sintered in air. No remarkable or systematic dependence of the AC conductivity on sintering temperature was observed.
Difficulty of distinguishing product states locally
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.
2017-01-01
Nonlocality without entanglement is a rather counterintuitive phenomenon in which information may be encoded entirely in product (unentangled) states of composite quantum systems in such a way that local measurement of the subsystems is not enough for optimal decoding. For simple examples of pure product states, the gap in performance is known to be rather small when arbitrary local strategies are allowed. Here we restrict to local strategies readily achievable with current technology: those requiring neither a quantum memory nor joint operations. We show that even for measurements on pure product states, there can be a large gap between such strategies and theoretically optimal performance. Thus, even in the absence of entanglement, physically realizable local strategies can be far from optimal for extracting quantum information.
Promotion of Preventive Measures in Public Nursery Schools: Lessons From the H1N1 Pandemic.
Michail, Koralia A; Ioannidou, Christina; Galanis, Petros; Tsoumakas, Kostantinos; Pavlopoulou, Ioanna D
2017-09-01
Nursery schools serve as reservoirs for the transmission of infectious diseases, and teachers should be able to implement and monitor hygiene measures to prevent them. The aim of the present study was to assess the compliance of nursery school teachers in promoting preventive interventions and to identify associated factors during the novel H1N1 influenza pandemic. A secondary objective was to evaluate their knowledge and vaccination status regarding the novel virus. A cross-sectional study was performed using a predesigned anonymous questionnaire distributed to all public nursery teachers in Athens, Greece. General etiquette practices were highly acceptable to over 92% of teachers. Those with longer teaching experience promoted simple preventive measures, such as hand washing and use of hand sanitizer, more often, while older children were more likely to be familiar with them. However, teachers presented inadequate knowledge concerning the novel virus, and their vaccination rates with the pandemic vaccine were unacceptably low (1.1%). Our study showed that promotion of simple preventive measures is feasible and may contribute to the prevention of outbreaks in nursery schools, although knowledge gaps and fear concerning the pandemic vaccine highlight communication issues.
Lens, Frederic; Vos, Rutger A.; Charrier, Guillaume; van der Niet, Timo; Merckx, Vincent; Baas, Pieter; Aguirre Gutierrez, Jesus; Jacobs, Bart; Chacon Dória, Larissa; Smets, Erik; Delzon, Sylvain; Janssens, Steven B.
2016-01-01
Background and Aims: Angiosperms with simple vessel perforations have evolved many times independently of species having scalariform perforations, but detailed studies to understand why these transitions in wood evolution have happened are lacking. We focus on the striking difference in wood anatomy between two closely related genera of Adoxaceae, Viburnum and Sambucus, and link the anatomical divergence with climatic and physiological insights. Methods: After performing wood anatomical observations, we used a molecular phylogenetic framework to estimate divergence times for 127 Adoxaceae species. The conditions under which the genera diversified were estimated using ancestral area reconstruction and optimization of ancestral climates, and xylem-specific conductivity measurements were performed. Key Results: Viburnum, characterized by scalariform vessel perforations (ancestral), diversified earlier than Sambucus, having simple perforations (derived). Ancestral climate reconstruction analyses point to a cold temperate preference for Viburnum and a warm temperate preference for Sambucus. This is reflected in the xylem-specific conductivity rates of the co-occurring species investigated, showing that Viburnum lantana has rates much lower than Sambucus nigra. Conclusions: The lack of selective pressure for high conductive efficiency during early diversification of Viburnum and the potentially adaptive value of scalariform perforations in frost-prone cold temperate climates have led to retention of the ancestral vessel perforation type, while higher temperatures during early diversification of Sambucus have triggered the evolution of simple vessel perforations, allowing more efficient long-distance water transport. PMID:27498812
Evidence for unlimited capacity processing of simple features in visual cortex
White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.
2017-01-01
Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964
Rolling, slip and traction measurements on low modulus materials
NASA Technical Reports Server (NTRS)
Tevaarwerk, J. L.
1985-01-01
Traction and wear tests were performed on six low modulus materials (LMM). Three different traction tests were performed to determine the suitability of the materials for use as traction rollers: rolling, slip, and endurance traction tests. For each material, the combinations LMM on LMM and LMM on steel were evaluated. Rolling traction tests were conducted to determine the load-velocity limits and the rolling traction coefficient of the materials, and to establish the type of failures that would result when loading beyond these limits. It was found that, in general, a simple constant rolling traction coefficient was enough to describe the results of all the tests. The slip traction tests revealed that the peak traction coefficients were considerably higher than for lubricated traction contacts. The endurance traction tests were performed to establish the durability of the LMM under conditions of prolonged traction. Wear measurements were performed during and after the tests. Energetic wear rates were determined from the wear measurements conducted in the endurance traction tests. These values show that the roller wear is not severe when reasonable levels of traction are transmitted.
Measuring complete quantum states with a single observable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng Xinhua; Suter, Dieter; Du Jiangfeng
2007-10-15
Experimental determination of an unknown quantum state usually requires several incompatible measurements. However, it is also possible to determine the full quantum state from a single, repeated measurement. For this purpose, the quantum system whose state is to be determined is first coupled to a second quantum system (the 'assistant') in such a way that part of the information in the quantum state is transferred to the assistant. The actual measurement is then performed on the enlarged system including the original system and the assistant. We discuss in detail the requirements of this procedure and experimentally implement it on a simple quantum system consisting of nuclear spins.
In situ optical measurements of bacterial endospore breakdown in a shock tube
NASA Astrophysics Data System (ADS)
McCartt, A. D.; Gates, S.; Lappas, P.; Jeffries, J. B.; Hanson, R. K.
2012-03-01
The interaction of endospore-laden bioaerosols and shock waves is monitored with a combination of laser absorption and scattering. Tests are performed in the Stanford aerosol shock tube for post-shock temperatures ranging from 400-1100 K. In situ laser measurements at 266 and 665 nm provide a real-time monitor of endospore morphology. Scatter of visible light measures the integrity of endospore structure, while absorption of UV light provides a monitor of biochemicals released by endospore rupture. For post-shock temperatures greater than 750 K endospore morphological breakdown is observed. A simple theoretical model is employed to quantify the optical measurements, and mechanisms leading to the observed data are discussed.
Dosimetry of secondary cosmic radiation up to an altitude of 30 km.
Wissmann, F; Burda, O; Khurana, S; Klages, T; Langner, F
2014-10-01
Dosimetric measurements in the field of secondary cosmic radiation have been made extensively in recent years. Since the majority of these measurements were performed on board passenger aircraft at altitudes between 10 and 12 km, measurements at higher altitudes are desirable for the verification of the legal dose assessment procedures for aircrew. A simple solution is to use a high-altitude balloon that reaches altitudes as high as 30 km. In this work, it is shown that the dose rate profile up to 30 km can be measured with acceptable uncertainties using a Si detector. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Clarkson, Sean; Wheat, Jon; Heller, Ben; Choppin, Simon
2016-01-01
Use of anthropometric data to infer sporting performance is increasing in popularity, particularly within elite sport programmes. Measurement typically follows standards set by the International Society for the Advancement of Kinanthropometry (ISAK). However, such techniques are time consuming, which reduces their practicality. Schranz et al. recently suggested 3D body scanners could replace current measurement techniques; however, current systems are costly. Recent interest in natural user interaction has led to a range of low-cost depth cameras capable of producing 3D body scans, from which anthropometrics can be calculated. A scanning system comprising 4 depth cameras was used to scan 4 cylinders, representative of the body segments. Girth measurements were calculated from the 3D scans and compared to gold standard measurements. Requirements of a Level 1 ISAK practitioner were met in all 4 cylinders, and ISO standards for scan-derived girth measurements were met in the 2 larger cylinders only. A fixed measurement bias was identified that could be corrected with a simple offset factor. Further work is required to determine comparable performance across a wider range of measurements performed upon living participants. Nevertheless, findings of the study suggest such a system offers many advantages over current techniques, having a range of potential applications.
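The fixed bias mentioned can be estimated and removed in the usual Bland-Altman fashion, by subtracting the mean difference between scan-derived and reference girths; the values below are illustrative, not study data.

    # Estimate the fixed bias between scan-derived and reference girths and apply
    # a simple offset correction. The numbers below are illustrative only.

    scan_girths = [31.2, 45.8, 61.1, 76.4]        # cm, from the depth-camera system
    reference_girths = [30.5, 45.0, 60.3, 75.7]   # cm, gold-standard ISAK/tape measures

    bias = sum(s - r for s, r in zip(scan_girths, reference_girths)) / len(scan_girths)
    corrected = [s - bias for s in scan_girths]

    print("fixed bias estimate: %.2f cm" % bias)
    print("corrected girths:", [round(c, 2) for c in corrected])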
Performance and Flowfield Measurements on a 10-inch Ducted Rotor VTOL UAV
NASA Technical Reports Server (NTRS)
Martin, Preston; Tung, Chee
2004-01-01
A ducted fan VTOL UAV with a 10-inch diameter rotor was tested in the US Army 7- by 10-Foot Wind Tunnel. The test conditions covered a range of angle of attack from 0 to 110 degrees to the freestream. The tunnel velocity was varied from 0 (simulating a hover condition) to 128 ft/sec in propeller mode. A six-component internal balance measured the aerodynamic loads for a range of model configurations, including the isolated rotor, the isolated duct, and the full configuration of the duct and rotor. For some conditions, hotwire velocity surveys were conducted along the inner and outer surface of the duct and across the downstream wake. In addition, fluorescent oil flow visualization allowed the flow separation patterns inside and outside of the duct to be mapped for a few test conditions. Two different duct shapes were tested to determine the performance effects of leading edge radius. For each duct, a range of rotor tip gap from 1%R to 4.5%R was tested to determine the performance penalty in hover and axial flight. Measured results are presented in terms of hover performance, hover performance in a crosswind, and high angle of attack performance in propeller mode. In each case, the effects of both tip gap and duct leading edge radius are illustrated using measurements. Some of the hover performance issues were also studied using a simple analytical method, and the results agreed with the measurements.
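A common simple analytical method for ducted-rotor hover performance is momentum theory with an exit-to-rotor area (duct expansion) ratio; the sketch below implements that textbook relation as one plausible reading of the abstract's "simple analytical method", not necessarily the method actually used.

    import math

    # Ducted-rotor momentum theory (textbook form): ideal hover power for thrust T with
    # rotor disc area A and duct expansion ratio sigma (sigma = 0.5 recovers the open rotor).
    # Used here only as an illustrative simple analytical method.

    def ideal_hover_power(thrust_lb, radius_ft, sigma=1.0, rho=0.002377):
        """thrust in lb, radius in ft, rho in slug/ft^3; returns power in ft-lb/s."""
        area = math.pi * radius_ft ** 2
        return thrust_lb ** 1.5 / math.sqrt(4.0 * sigma * rho * area)

    if __name__ == "__main__":
        p_ducted = ideal_hover_power(thrust_lb=10.0, radius_ft=5.0 / 12.0, sigma=1.0)
        p_open = ideal_hover_power(thrust_lb=10.0, radius_ft=5.0 / 12.0, sigma=0.5)
        print("ideal power, ducted vs open rotor: %.1f vs %.1f ft-lb/s" % (p_ducted, p_open))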
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
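The reported perspectives (scalability, parallelization overhead, comparison with hand-parallelized code) reduce to a few ratios once sequential and parallel wall-clock times are measured; the sketch below uses illustrative timings, not NAS benchmark data.

    # Compute speedup, parallel efficiency, and total parallelization overhead from
    # measured wall-clock times. Timings below are illustrative only.

    def parallel_metrics(t_sequential, t_parallel, n_processors):
        speedup = t_sequential / t_parallel
        efficiency = speedup / n_processors
        overhead = n_processors * t_parallel - t_sequential   # total extra CPU-seconds
        return speedup, efficiency, overhead

    if __name__ == "__main__":
        for p, t_p in [(1, 100.0), (4, 28.0), (16, 9.5)]:
            s, e, o = parallel_metrics(100.0, t_p, p)
            print("p=%2d  speedup=%.2f  efficiency=%.2f  overhead=%.1f s" % (p, s, e, o))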
Dehydration and Performance on Clinical Concussion Measures in Collegiate Wrestlers
Weber, Amanda Friedline; Mihalik, Jason P.; Register-Mihalik, Johna K.; Mays, Sally; Prentice, William E.; Guskiewicz, Kevin M.
2013-01-01
Context: The effects of dehydration induced by wrestling-related weight-cutting tactics on clinical concussion outcomes, such as neurocognitive function, balance performance, and symptoms, have not been adequately studied. Objective: To evaluate the effects of dehydration on the outcome of clinical concussion measures in National Collegiate Athletic Association Division I collegiate wrestlers. Design: Repeated-measures design. Setting: Clinical research laboratory. Patients or Other Participants: Thirty-two healthy Division I collegiate male wrestlers (age = 20.0 ± 1.4 years; height = 175.0 ± 7.5 cm; baseline mass = 79.2 ± 12.6 kg). Intervention(s): Participants completed preseason concussion baseline testing in early September. Weight and urine samples were also collected at this time. All participants reported to prewrestling practice and postwrestling practice for the same test battery and protocol in mid-October. They had begun practicing weight-cutting tactics a day before prepractice and postpractice testing. Differences between these measures permitted us to evaluate how dehydration and weight-cutting tactics affected concussion measures. Main Outcome Measures: Sport Concussion Assessment Tool 2 (SCAT2), Balance Error Scoring System (BESS), Graded Symptom Checklist (GSC), and Simple Reaction Time scores. The Simple Reaction Time was measured using the Automated Neuropsychological Assessment Metrics. Results: The SCAT2 measurements were lower at prepractice (P = .002) and postpractice (P < .001) when compared with baseline. The BESS error scores were higher at postpractice when compared with baseline (P = .015). The GSC severity scores were higher at prepractice (P = .011) and postpractice (P < .001) than at baseline, and higher at postpractice than at prepractice (P = .003). The number of GSC symptoms reported was also higher at prepractice (P = .036) and postpractice (P < .001) when compared with baseline, and at postpractice when compared with prepractice (P = .003). Conclusions: Our results suggest that it is important for wrestlers to be evaluated in a euhydrated state to ensure that dehydration is not influencing the outcome of the clinical measures. PMID:23672379
Weafer, Jessica; Baggott, Matthew J; de Wit, Harriet
2013-12-01
Behavioral measures of impulsivity are widely used in substance abuse research, yet relatively little attention has been devoted to establishing their psychometric properties, especially their reliability over repeated administration. The current study examined the test-retest reliability of a battery of standardized behavioral impulsivity tasks, including measures of impulsive choice (i.e., delay discounting, probability discounting, and the Balloon Analogue Risk Task), impulsive action (i.e., the stop signal task, the go/no-go task, and commission errors on the continuous performance task), and inattention (i.e., attention lapses on a simple reaction time task and omission errors on the continuous performance task). Healthy adults (n = 128) performed the battery on two separate occasions. Reliability estimates for the individual tasks ranged from moderate to high, with Pearson correlations within the specific impulsivity domains as follows: impulsive choice (r range: .76-.89, ps < .001); impulsive action (r range: .65-.73, ps < .001); and inattention (r range: .38-.42, ps < .001). Additionally, the influence of day-to-day fluctuations in mood, as measured by the Profile of Mood States, was assessed in relation to variability in performance on each of the behavioral tasks. Change in performance on the delay discounting task was significantly associated with change in positive mood and arousal. No other behavioral measures were significantly associated with mood. In sum, the current analysis demonstrates that behavioral measures of impulsivity are reliable measures and thus can be confidently used to assess various facets of impulsivity as intermediate phenotypes for drug abuse.
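Test-retest reliability of this kind can be computed as a Pearson correlation between sessions, optionally alongside an intraclass correlation; the sketch below uses synthetic scores and a consistency-type ICC(3,1), which may differ from the exact ICC model used in the study.

    import numpy as np

    # Test-retest reliability: Pearson r between sessions plus a consistency-type ICC(3,1).
    # Scores below are synthetic; the study's exact reliability model may differ.

    def pearson_r(x, y):
        return float(np.corrcoef(x, y)[0, 1])

    def icc_3_1(data):
        """data: (n_subjects, k_sessions) array; two-way mixed, single measure, consistency."""
        n, k = data.shape
        grand = data.mean()
        ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
        ss_err = np.sum((data - grand) ** 2) - (n - 1) * ms_rows - (k - 1) * ms_cols
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        session1 = rng.normal(500, 50, 128)            # e.g., stop-signal reaction times (ms)
        session2 = session1 + rng.normal(0, 25, 128)   # retest with added noise
        scores = np.column_stack([session1, session2])
        print("Pearson r = %.2f, ICC(3,1) = %.2f" % (pearson_r(session1, session2), icc_3_1(scores)))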
A Simple Method for Assessing Upper-Limb Force-Velocity Profile in Bench Press.
Rahmani, Abderrahmane; Samozino, Pierre; Morin, Jean-Benoit; Morel, Baptiste
2018-02-01
To analyze the reliability and validity of a field computation method based on easy-to-measure data to assess the mean force (F) and velocity (v) produced during a ballistic bench-press movement, and to verify that the force-velocity (F-v) profile obtained with multiple loaded trials is accurately described. Twelve participants performed ballistic bench presses against various lifted masses from 30% to 70% of their body mass. For each trial, F and v were determined from an accelerometer (sampling rate 500 Hz; reference method) and from a simple computation method based on upper-limb mass, barbell flight height, and push-off distance. These F and v data were used to establish the F-v relationship for each individual and method. Strong to almost perfect reliability was observed between the 2 trials (ICC > .90 for F and .80 for v, CV% < 10%), whichever method was considered. The mechanical variables (F, v) measured with the 2 methods and all the variables extrapolated from the F-v relationships were strongly correlated (r2 > .80, P < .001). The practical differences between the methods for the extrapolated mechanical parameters were all <5%, indicating very probably no differences. The findings suggest that the simple computation method used here provides valid and reliable information on the force and velocity produced during ballistic bench presses, in line with that observed in laboratory conditions. This simple method is thus a practical tool, requiring only 3 simple parameters (upper-limb mass, barbell flight height, and push-off distance).
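The simple computation follows the work-energy logic of Samozino-type methods: flight height gives take-off velocity, push-off distance gives mean acceleration, and the moving mass then gives mean force. The sketch below is written in that generic form; the exact handling of upper-limb mass in the published equations may differ.

    import math

    # Simple-method estimates of mean force, velocity, and power for a ballistic bench press,
    # written in the generic Samozino-style form; the published equations' treatment of
    # upper-limb mass may differ from this sketch.

    G = 9.81  # m/s^2

    def simple_bench_press(moving_mass_kg, flight_height_m, pushoff_distance_m):
        """moving_mass = barbell mass plus the upper-limb mass assumed to move with it."""
        mean_force = moving_mass_kg * G * (flight_height_m / pushoff_distance_m + 1.0)
        mean_velocity = math.sqrt(G * flight_height_m / 2.0)
        mean_power = mean_force * mean_velocity
        return mean_force, mean_velocity, mean_power

    if __name__ == "__main__":
        f, v, p = simple_bench_press(moving_mass_kg=40.0, flight_height_m=0.15, pushoff_distance_m=0.40)
        print("F = %.0f N, v = %.2f m/s, P = %.0f W" % (f, v, p))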
Validity of a Simple Method for Measuring Force-Velocity-Power Profile in Countermovement Jump.
Jiménez-Reyes, Pedro; Samozino, Pierre; Pareja-Blanco, Fernando; Conceição, Filipe; Cuadrado-Peñafiel, Víctor; González-Badillo, Juan José; Morin, Jean-Benoît
2017-01-01
To analyze the reliability and validity of a simple computation method to evaluate force (F), velocity (v), and power (P) output during a countermovement jump (CMJ) suitable for use in field conditions and to verify the validity of this computation method to compute the CMJ force-velocity (F-v) profile (including unloaded and loaded jumps) in trained athletes. Sixteen high-level male sprinters and jumpers performed maximal CMJs under 6 different load conditions (0-87 kg). A force plate sampling at 1000 Hz was used to record vertical ground-reaction force and derive vertical-displacement data during CMJ trials. For each condition, mean F, v, and P of the push-off phase were determined from both force-plate data (reference method) and simple computation measures based on body mass, jump height (from flight time), and push-off distance and used to establish the linear F-v relationship for each individual. Mean absolute bias values were 0.9% (± 1.6%), 4.7% (± 6.2%), 3.7% (± 4.8%), and 5% (± 6.8%) for F, v, P, and slope of the F-v relationship (SFv), respectively. Both methods showed high correlations for F-v-profile-related variables (r = .985-.991). Finally, all variables computed from the simple method showed high reliability, with ICC >.980 and CV <1.0%. These results suggest that the simple method presented here is valid and reliable for computing CMJ force, velocity, power, and F-v profiles in athletes and could be used in practice under field conditions when body mass, push-off distance, and jump height are known.
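Once mean force and velocity are available for each load, the F-v profile is summarized by a linear fit of F against v, from which F0, v0, the slope SFv, and the theoretical maximal power Pmax = F0*v0/4 follow for a linear profile; the per-load values below are synthetic.

    import numpy as np

    # Build a force-velocity profile from per-load mean force and velocity values and
    # extract F0, v0, SFv, and Pmax assuming a linear F-v relationship. Data are synthetic.

    mean_velocity = np.array([2.6, 2.2, 1.8, 1.4, 1.0, 0.7])    # m/s, light to heavy loads
    mean_force = np.array([900, 1150, 1400, 1650, 1900, 2100])  # N for the same jumps

    slope, intercept = np.polyfit(mean_velocity, mean_force, 1)  # F = slope*v + intercept
    f0 = intercept                   # theoretical maximal force at zero velocity (N)
    v0 = -intercept / slope          # theoretical maximal velocity at zero force (m/s)
    s_fv = slope                     # slope of the F-v relationship (N.s/m), negative
    p_max = f0 * v0 / 4.0            # maximal power for a linear F-v profile (W)

    print("F0=%.0f N, v0=%.2f m/s, SFv=%.0f N.s/m, Pmax=%.0f W" % (f0, v0, s_fv, p_max))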
NASA Technical Reports Server (NTRS)
Gratz, Andrew J.; Bird, Peter
1993-01-01
The range of the measured quartz dissolution rates, as a function of temperature and pOH, extent of saturation, and ionic strength, is extended to cover a wider range of solution chemistries, using the negative crystal methodology of Gratz et al. (1990) to measure the dissolution rate. A simple rate law describing the quartz dissolution kinetics above the point of zero charge of quartz is derived for ionic strengths above 0.003 m. Measurements were performed on some defective crystals, and the mathematics of step motion was developed for quartz dissolution and was compared with rough-face behavior using two different models.
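The abstract does not reproduce the fitted rate law, so the sketch below only shows the generic Arrhenius and hydroxyl-activity power-law form often used for quartz dissolution in alkaline solutions; the pre-exponential factor, activation energy, and exponent are placeholders, not the constants derived in this study.

    import math

    # Generic quartz dissolution rate law of the Arrhenius / hydroxyl-activity form
    #     r = A0 * exp(-Ea / (R*T)) * a_OH**n
    # All parameter values are illustrative placeholders, NOT the fitted constants.

    R = 8.314  # J/(mol K)

    def dissolution_rate(temp_k, a_oh, a0=1.0e6, ea=90e3, n=0.5):
        """Rate in arbitrary units (the absolute scale depends on A0)."""
        return a0 * math.exp(-ea / (R * temp_k)) * a_oh ** n

    if __name__ == "__main__":
        for t in (423.0, 473.0, 523.0):                 # roughly 150-250 C
            print("T=%.0f K, rate=%.3e" % (t, dissolution_rate(t, a_oh=1e-3)))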
Tests of a robust eddy correlation system for sensible heat flux
NASA Astrophysics Data System (ADS)
Blanford, J. H.; Gay, L. W.
1992-03-01
Sensible heat flux estimates from a simple, one-propeller eddy correlation system (OPEC) were compared with those from a sonic anemometer eddy correlation system (SEC). In accordance with similarity theory, the performance of the OPEC system improved with increasing height of the sensor above the surface. Flux totals from the two systems at sites with adequate fetch were in excellent agreement after frequency response corrections were applied. The propeller system appears suitable for long periods of unattended measurement. The sensible heat flux measurements can be combined with net radiation and soil heat flux measurements to estimate latent heat as a residual in the surface energy balance.
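An eddy correlation estimate of sensible heat flux is the covariance of vertical wind and air temperature fluctuations scaled by air density and specific heat, H = rho * cp * cov(w', T'); the sketch below computes this from synthetic high-frequency samples and omits the frequency-response corrections applied in the study.

    import numpy as np

    # Eddy correlation sensible heat flux: H = rho * cp * cov(w', T').
    # Synthetic 10 Hz samples; frequency-response corrections are omitted.

    rho_air = 1.12    # kg/m^3 (warm, dry site)
    cp_air = 1005.0   # J/(kg K)

    rng = np.random.default_rng(0)
    n = 10 * 60 * 30                          # 30 min of 10 Hz data
    w = rng.normal(0.0, 0.4, n)               # vertical wind fluctuations (m/s)
    t_air = 30.0 + 0.5 * w + rng.normal(0.0, 0.3, n)   # temperature correlated with w (deg C)

    w_prime = w - w.mean()
    t_prime = t_air - t_air.mean()
    sensible_heat_flux = rho_air * cp_air * np.mean(w_prime * t_prime)
    print("H = %.0f W/m^2" % sensible_heat_flux)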
Goto, Norio; Morita, Yutaka; Terada, Katsuhide
2016-01-01
The transfer of urea from a urea formulation to the stratum corneum varies with the formulation base and form, and impacts the formulation's therapeutic effect. Consequently, determining the amount of urea transferred is essential for developing efficient formulations. This study assessed a simple method for measuring the amount of urea accumulated in the stratum corneum. Conventional methods rely on labeling urea used in the formulation with radiocarbon (14C) or other radioactive isotopes (RIs), retrieving the transferred urea from the stratum corneum by tape stripping, then quantitating the urea. The handling and use of RIs, however, are subject to legal regulation and can only be performed in sanctioned facilities, so methods employing RIs are neither simple nor convenient. We therefore developed a non-radiolabel method, "tape stripping-colorimetry (T-C)", that combines tape stripping with colorimetry (urease-glutamate dehydrogenase (GLDH)) for the quantitative measurement of urea. Urea in the stratum corneum is collected by tape stripping and measured using urease-GLDH, which is commonly used to measure urea nitrogen in blood tests. The results indicate that accurate urea measurement by the T-C method requires the application of 1400 mg (on hairless rats) of a 20% urea solution on a 50 cm2 (5×10 cm) area. Further, we determined the amount of urea accumulated in the stratum corneum using formulations with different urea concentrations, and the time course of urea accumulation from formulations differing in the rate of urea crystallization. We demonstrate that the T-C method is simple and convenient, with no need for 14C or other RIs.
Gabbay, Itay E; Gabbay, Uri
2013-01-01
Excess adverse events may be attributable to poor surgical performance but also to case-mix, which is controlled for through the Standardized Incidence Ratio (SIR). SIR calculations can be complicated, resource consuming, and unfeasible in some settings. This article suggests a novel method for SIR approximation. In order to evaluate a potential SIR surrogate measure, we predefined acceptance criteria. We developed a new measure, the Approximate Risk Index (ARI). The "Number Needed for Event" (NNE) is the theoretical number of patients needed "to produce" one adverse event. ARI is defined as the quotient of Ge, the number of patients needed for no observed events, by Ga, the total number of patients treated. Our evaluation compared 2500 surgical units and over 3 million heterogeneous-risk surgical patients generated through a computerized simulation. Each surgical unit's data were used to compute SIR and ARI in order to evaluate compliance with the predefined criteria. Approximation was evaluated by correlation analysis and performance-prediction capability by Receiver Operating Characteristic (ROC) analysis. ARI strongly correlates with SIR (r2 = 0.87, p < 0.05). ARI prediction of excessive risk revealed excellent ROC performance (area under the curve > 0.9), with 87% sensitivity and 91% specificity. ARI provides a good approximation of SIR and excellent prediction capability. ARI is simple and cost-effective, as it requires thorough risk evaluation of only the patients with adverse events. ARI can provide a crucial screening and performance-evaluation quality control tool. The ARI method may suit other clinical and epidemiological settings where a relatively small fraction of the entire population is affected. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Carbon Nanotube-Based Chemiresistive Sensors
Tang, Ruixian; Shi, Yongji; Hou, Zhongyu; Wei, Liangming
2017-01-01
The development of simple and low-cost chemical sensors is critically important for improving human life. Many types of chemical sensors have been developed. Among them, chemiresistive sensors receive particular attention because of their simple structure, the ease of highly precise measurement, and their low cost. This review mainly focuses on carbon nanotube (CNT)-based chemiresistive sensors. We first describe the properties of CNTs and the structure of CNT chemiresistive sensors. Next, the sensing mechanism and the performance parameters of the sensors are discussed. Then, we detail the status of CNT chemiresistive sensors for the detection of different analytes. Lastly, we put forward the remaining challenges for CNT chemiresistive sensors and outline possible future opportunities for them. PMID:28420195
Cholesterol in preteen children of parents with premature coronary disease.
Gross, H; Caplan, C
1978-03-01
A pediatric population at high risk for the development of coronary artery disease has been identified. Using a simple and inexpensive protocol, serum cholesterol determinations were performed on 50 children 12 years old and younger. These children were taken from 28 families in which one parent had suffered a myocardial infarction before the age of 50. Eight of the 50 children were found to have significant elevation of serum cholesterol. This was an incidence of 16%--twice that of the general pediatric population. Subjects with both adverse genetic and metabolic backgrounds need to be identified in this simple way. Preventive and therapeutic measures in such children may alter in the future the serious morbidity and mortality of coronary artery disease.
Simple go/no-go test for subcritical damage in body armor panels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, Jason; Chimenti, D. E.
2011-06-23
A simple test for subcritical damage in body armor panels using pressure-sensitive dye-indicator film has been developed and demonstrated to be effective. Measurements have shown that static indicator levels are accurately reproduced in dynamic loading events. Impacts from hard blunt impactors instrumented with an accelerometer and embedded force transducer were studied. Reliable correlations between the indicator film and the instrumented impact force are shown for a range of impact energies. Force and acceleration waveforms with corresponding indicator film results are presented for impact events onto damaged and undamaged panels. We find that panel damage can occur at impact levels far below the National Institute of Justice acceptance test standard.
Azlan, C A; Ng, K H; Anandan, S; Nizam, M S
2006-09-01
The illuminance level in a softcopy image viewing room is an important factor in optimizing productivity in radiological diagnosis. In today's radiological environment, illuminance measurements are normally done during the quality control procedure and performed annually. Although viewing rooms are equipped with dimmer switches, radiologists have no simple way to set the illuminance level according to the standards. The aim of this study is to develop a simple real-time illuminance detector system to assist radiologists in setting an adequate illuminance level during radiological image viewing. The system indicates illuminance in a very simple visual form using light-emitting diodes. By employing the device in the viewing room, the illuminance level can be monitored and adjusted effectively.
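The indicator logic amounts to mapping a lux reading onto a small number of LED states; a minimal sketch follows. The target range used here is an assumption for illustration, not the standard cited in the study.

```python
# Minimal sketch of the LED indicator logic. The recommended range below is a
# hypothetical placeholder, not the standard referenced by the authors.

LOW_LUX, HIGH_LUX = 25, 50   # assumed acceptable range for a softcopy reading room

def led_state(lux):
    if lux < LOW_LUX:
        return "blue LED: too dark"
    if lux > HIGH_LUX:
        return "red LED: too bright"
    return "green LED: within range"

for reading in (10, 37, 80):
    print(reading, "lx ->", led_state(reading))
```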
Minimalist design of a robust real-time quantum random number generator
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Radchenko, I. V.; Kulik, S. P.; Molotkov, S. N.
2015-08-01
We present a simple and robust construction of a real-time quantum random number generator (QRNG). Our minimalist approach ensures stable operation of the device as well as its simple and straightforward hardware implementation as a stand-alone module. As a source of randomness the device uses measurements of time intervals between clicks of a single-photon detector. The obtained raw sequence is then filtered and processed by a deterministic randomness extractor, which is realized as a look-up table. This enables high speed on-the-fly processing without the need of extensive computations. The overall performance of the device is around 1 random bit per detector click, resulting in 1.2 Mbit/s generation rate in our implementation.
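As a rough illustration of the click-interval approach, the sketch below derives one raw bit per detector click by comparing successive inter-click intervals and then whitens the raw stream with a small precomputed table. The comparison rule and the toy table (which compresses 8 raw bits to 1, unlike the device's near-lossless extractor) are assumptions for illustration, not the authors' design.

```python
# Rough sketch (not the published design): raw bits from comparisons of successive
# inter-click intervals, then a toy table-driven extractor applied to 8-bit blocks.
import random

def raw_bits(intervals):
    """One raw bit per click: compare each interval with the previous one."""
    return [1 if b > a else 0 for a, b in zip(intervals, intervals[1:])]

# Toy "look-up table" extractor: XOR-fold each byte to a single output bit.
EXTRACT = {byte: bin(byte).count("1") % 2 for byte in range(256)}

def extract(bits):
    out = []
    for i in range(0, len(bits) - 7, 8):
        byte = int("".join(map(str, bits[i:i + 8])), 2)
        out.append(EXTRACT[byte])
    return out

# Simulated exponential inter-click intervals (hypothetical detector).
intervals = [random.expovariate(1.0) for _ in range(1000)]
print(extract(raw_bits(intervals))[:16])
```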
Egerton, Thorlene; Riphagen, Ingrid I; Nygård, Arnhild J; Thingstad, Pernille; Helbostad, Jorunn L
2015-09-01
The assessment of fatigue in older people requires simple and user-friendly questionnaires that capture the phenomenon, yet are free from items indistinguishable from other disorders and experiences. This study aimed to evaluate the content, and systematically review and rate the measurement properties of self-report questionnaires for measuring fatigue, in order to identify the most suitable questionnaires for older people. This study firstly involved identification of questionnaires that purport to measure self-reported fatigue, and evaluation of the content using a rating scale developed for the purpose from contemporary understanding of the construct. Secondly, for the questionnaires that had acceptable content, we identified studies reporting measurement properties and rated the methodological quality of those studies according to the COSMIN system. Finally, we extracted and synthesised the results of the studies to give an overall rating for each questionnaire for each measurement property. The protocol was registered with PROSPERO (CRD42013005589). Of the 77 identified questionnaires, twelve were selected for review after content evaluation. Methodological quality varied, and there was a lack of information on measurement error and responsiveness. The PROMIS-Fatigue item bank and short forms perform the best. The FACIT-Fatigue scale, Parkinsons Fatigue Scale, Perform Questionnaire, and Uni-dimensional Fatigue Impact Scale also perform well and can be recommended. Minor modifications to improve performance are suggested. Further evaluation of unresolved measurement properties, particularly with samples including older people, is needed for all the recommended questionnaires.
Yasukawa, Keiko; Shimosawa, Tatsuo; Okubo, Shigeo; Yatomi, Yutaka
2018-01-01
Background: Human mercaptalbumin and human non-mercaptalbumin have been reported as markers for various pathological conditions, such as kidney and liver diseases. These markers play important roles in redox regulation throughout the body. Despite the recognition of these markers in various pathophysiologic conditions, the measurement of human mercaptalbumin and non-mercaptalbumin has not been popular because of the technical complexity and long measurement time of conventional methods. Methods: Based on previous reports, we explored the optimal analytical conditions for a high-performance liquid chromatography method using an anion-exchange column packed with a hydrophilic polyvinyl alcohol gel. The method was then validated using performance tests as well as measurements of various patients' serum samples. Results: We successfully established a reliable high-performance liquid chromatography method with an analytical time of only 12 min per test. The repeatability (within-day variability) and reproducibility (day-to-day variability) were 0.30% and 0.27% (CV), respectively. A very good correlation was obtained with the results of the conventional method. Conclusions: A practical method for the clinical measurement of human mercaptalbumin and non-mercaptalbumin was established. This high-performance liquid chromatography method is expected to be a powerful tool enabling the expansion of clinical usefulness and ensuring the elucidation of the roles of albumin in redox reactions throughout the human body.
Birdcage volume coils and magnetic resonance imaging: a simple experiment for students.
Vincent, Dwight E; Wang, Tianhao; Magyar, Thalia A K; Jacob, Peni I; Buist, Richard; Martin, Melanie
2017-01-01
This article explains some simple experiments that can be used in undergraduate or graduate physics or biomedical engineering laboratory classes to learn how birdcage volume radiofrequency (RF) coils and magnetic resonance imaging (MRI) work. For a clear picture, and to do any quantitative MRI analysis, acquiring images with a high signal-to-noise ratio (SNR) is required. With a given MRI system at a given field strength, the only means to change the SNR using hardware is to change the RF coil used to collect the image. RF coils can be designed in many different ways including birdcage volume RF coil designs. The choice of RF coil to give the best SNR for any MRI study is based on the sample being imaged. The data collected in the simple experiments show that the SNR varies as inverse diameter for the birdcage volume RF coils used in these experiments. The experiments were easily performed by a high school student, an undergraduate student, and a graduate student, in less than 3 h, the time typically allotted for a university laboratory course. The article describes experiments that students in undergraduate or graduate laboratories can perform to observe how birdcage volume RF coils influence MRI measurements. It is designed for students interested in pursuing careers in the imaging field.
Cummins, Dustin R.; Martinez, Ulises; Sherehiy, Andriy; Kappera, Rajesh; Martinez-Garcia, Alejandro; Schulze, Roland K.; Jasinski, Jacek; Zhang, Jing; Gupta, Ram K.; Lou, Jun; Chhowalla, Manish; Sumanasekera, Gamini; Mohite, Aditya D.; Sunkara, Mahendra K.; Gupta, Gautam
2016-01-01
Hydrogen evolution reaction is catalysed efficiently with precious metals, such as platinum; however, transition metal dichalcogenides have recently emerged as a promising class of materials for electrocatalysis, but these materials still have low activity and durability when compared with precious metals. Here we report a simple one-step scalable approach, where MoOx/MoS2 core-shell nanowires and molybdenum disulfide sheets are exposed to dilute aqueous hydrazine at room temperature, which results in marked improvement in electrocatalytic performance. The nanowires exhibit ∼100 mV improvement in overpotential following exposure to dilute hydrazine, while also showing a 10-fold increase in current density and a significant change in Tafel slope. In situ electrical, gate-dependent measurements and spectroscopic investigations reveal that hydrazine acts as an electron dopant in molybdenum disulfide, increasing its conductivity, while also reducing the MoOx core in the core-shell nanowires, which leads to improved electrocatalytic performance. PMID:27282871
Mapping a battlefield simulation onto message-passing parallel architectures
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
Perhaps the most critical problem in distributed simulation is that of mapping: without an effective mapping of workload to processors the speedup potential of parallel processing cannot be realized. Mapping a simulation onto a message-passing architecture is especially difficult when the computational workload dynamically changes as a function of time and space; this is exactly the situation faced by battlefield simulations. This paper studies an approach where the simulated battlefield domain is first partitioned into many regions of equal size; typically there are more regions than processors. The regions are then assigned to processors; a processor is responsible for performing all simulation activity associated with the regions. The assignment algorithm is quite simple and attempts to balance load by exploiting locality of workload intensity. The performance of this technique is studied on a simple battlefield simulation implemented on the Flex/32 multiprocessor. Measurements show that the proposed method achieves reasonable processor efficiencies. Furthermore, the method shows promise for use in dynamic remapping of the simulation.
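A minimal sketch of the partition-and-assign idea follows: the domain is split into more regions than processors, and contiguous runs of regions are grouped so that the estimated workload is roughly balanced, preserving locality. The greedy quota rule and workload numbers are illustrative stand-ins, not the paper's algorithm.

```python
# Illustrative sketch of mapping many equal-size regions onto fewer processors.
# Contiguous regions are grouped until each processor holds roughly an equal
# share of the estimated workload. Not a reproduction of the paper's scheme.

def assign_regions(workloads, n_procs):
    """Greedy contiguous partition: advance to the next processor once its quota is met."""
    total = sum(workloads)
    assignment = [[] for _ in range(n_procs)]
    acc, proc = 0.0, 0
    for region, w in enumerate(workloads):
        if proc < n_procs - 1 and acc >= total * (proc + 1) / n_procs:
            proc += 1
        assignment[proc].append(region)
        acc += w
    return assignment

loads = [2, 3, 5, 6, 4, 4, 6, 5, 3, 2, 5, 5]   # hypothetical per-region battle intensity
for proc, regions in enumerate(assign_regions(loads, 4)):
    print(f"processor {proc}: regions {regions}, load {sum(loads[r] for r in regions)}")
```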
Rapid determination of minoxidil in human plasma using ion-pair HPLC.
Zarghi, A; Shafaati, A; Foroutan, S M; Khoddam, A
2004-10-29
A rapid, simple and sensitive ion-pair high-performance liquid chromatography (HPLC) method has been developed for quantification of minoxidil in plasma. The assay enables the measurement of minoxidil for therapeutic drug monitoring with a minimum detectable limit of 0.5 ng ml(-1). The method involves a simple, one-step extraction procedure, and analytical recovery was complete. The separation was performed on an analytical 150 x 4.6 mm i.d. microbondapak C18 column. The detection wavelength was set at 281 nm. The mobile phase was a mixture of 0.01 M sodium dihydrogen phosphate buffer and acetonitrile (60:40, v/v) containing 2.5 mM sodium dodecyl sulphate adjusted to pH 3.5 at a flow rate of 1 ml/min. The column temperature was set at 50 degrees C. The calibration curve was linear over the concentration range 2-100 ng ml(-1). The coefficients of variation for inter-day and intra-day assay were found to be less than 8%.
Continuous quantum measurements and the action uncertainty principle
NASA Astrophysics Data System (ADS)
Mensky, Michael B.
1992-09-01
The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach, the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weakest) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of the gravitational field. A stronger form of the AUP (with wider application, for ideal measurements performed in the quantum regime) is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand, respectively, for the measurement output and for the measurement error. It can also be presented in symbolic form as Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). A consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
In the first part of this study, Monte Carlo simulations were applied to investigate how uncertainty in both input variables and response measurements propagates into the model predictions of nasal spray product performance design-of-experiments (DOE) models, under the initial assumption that the models perfectly represent the relationship between input variables and measured responses. In this article, we discard that assumption and extend the Monte Carlo simulation study to examine the influence of both input-variable variation and product-performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that error estimates based on Monte Carlo simulation yield smaller model coefficient standard deviations than those from regression methods. This suggests that the standard deviations estimated from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution for understanding the propagation of uncertainty in complex DOE models so that the design space can be specified with statistically meaningful confidence levels.
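A minimal sketch of this kind of error propagation is given below: synthetic noise is added to both the DOE factor settings and the measured responses, a linear model is refit for each Monte Carlo draw, and the spread of the fitted coefficients is reported. The two-factor model form and the noise levels are assumptions for illustration, not the nasal-spray models from the study.

```python
# Minimal sketch of Monte Carlo propagation of input and response uncertainty into
# DOE model coefficients. Model form and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Nominal 2^2 factorial design (coded units) and "true" responses from an assumed model.
X_nominal = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
beta_true = np.array([10.0, 2.0, -1.5])            # intercept, factor A, factor B
y_nominal = beta_true[0] + X_nominal @ beta_true[1:]

def fit(X, y):
    A = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept
    return np.linalg.lstsq(A, y, rcond=None)[0]

coefs = []
for _ in range(5000):
    X = X_nominal + rng.normal(0, 0.05, X_nominal.shape)   # input-variable variation
    y = y_nominal + rng.normal(0, 0.20, y_nominal.shape)   # response measurement variation
    coefs.append(fit(X, y))

print("coefficient std devs:", np.std(coefs, axis=0))
```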
Measuring listening effort: driving simulator vs. simple dual-task paradigm
Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth
2014-01-01
Objectives: The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. Design: The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in the driving task or the visual reaction-time task performance across the conditions quantified the change in listening effort. Results: Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant. The finding that better speech recognition performance in our older (56 to 85 years old) participants did not result in reduced listening effort was not consistent with the literature on younger (approximately 20 years old), normal-hearing adults. Because of this, a follow-up study was conducted. In the follow-up study, the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated on younger adults with normal hearing. Contrary to findings with older participants, the results indicated that the directional technology significantly improved performance in both speech recognition and visual reaction-time tasks. Conclusions: Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured from younger adults with normal hearing may not be fully translated to older listeners with hearing impairment. PMID:25083599
Similarity, not complexity, determines visual working memory performance.
Jackson, Margaret C; Linden, David E J; Roberts, Mark V; Kriegeskorte, Nikolaus; Haenschel, Corinna
2015-11-01
A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased perceptual similarity between complex items as a result of a large amount of overlapping information. Increased similarity is thought to lead to greater comparison errors between items encoded into WM and the test item(s) presented at retrieval. However, previous studies have used different object categories to manipulate complexity and similarity, raising questions as to whether these effects are simply due to cross-category differences. For the first time, the relationship between complexity and similarity in WM is investigated here using the same stimulus category (abstract polygons). The authors used a delayed discrimination task to measure WM for 1-4 simultaneously presented complex versus simple items and manipulated the similarity between the single test item at retrieval and the sample items at encoding. WM was poorer for complex than for simple items only when the test item was similar to one of the encoding items, and not when it was dissimilar or identical. The results provide clear support for reinterpretation of the complexity effect in WM as a similarity effect and highlight the importance of the retrieval stage in governing WM performance. The authors discuss how these findings can be reconciled with current models of WM capacity limits.
Marson, D C; Chatterjee, A; Ingram, K K; Harrell, L E
1996-03-01
To identify cognitive predictors of competency performance and status in Alzheimer's disease (AD) using three differentially stringent legal standards for capacity to consent. Univariate and multivariate analyses of independent neuropsychological test measures with three dependent measures of competency to consent to treatment. University medical center. 15 normal older controls and 29 patients with probable AD (15 mild and 14 moderate). Subjects were administered a battery of neuropsychological measures theoretically linked to competency function, as well as two clinical vignettes testing capacity to consent to medical treatment under five legal standards (LSs). The present study focused on three differentially stringent LSs: the capacity simply to "evidence a treatment of choice" (LS1), which is a minimal standard; the capacity to "appreciate the consequences" of a treatment of choice (LS3), a moderately stringent standard; and the capacity to "understand the treatment situation and choices" (LS5), the most stringent standard. Control subject and AD patient neuropsychological test scores were correlated with scores on the three LSs. The resulting univariate correlates were then analyzed using stepwise regression and discriminant function analysis to identify key multivariate predictors of competency performance and status under each LS. No neuropsychological measures predicted control group performance on the LSs. For the AD group, a measure of simple auditory comprehension predicted LS1 performance (r(2)=0.44, p < 0.0001), a word fluency measure predicted LS3 performance (r(2)=0.58, p < 0.0001), and measures of conceptualization and confrontation naming together predicted LS5 performance (r(2)=0.81, p < 0.0001). Under discriminant function analysis, confrontation naming was the best single predictor of LS1 competency status for all subjects, correctly classifying 96% of cases (42/44). Measures of visuomotor tracking and confrontation naming were the best single predictors, respectively, of competency status under LS3 (91% [39/43]) and LS5 (98% [43/44]). Multiple cognitive functions are associated with loss of competency in AD. Deficits in conceptualization, semantic memory, and probably verbal recall are associated with the declining capacity of mild AD patients to understand a treatment situation and choices (LS5); executive dysfunction with the declining capacity of mild to moderate AD patients to identify the consequences of a treatment choice (LS3); and receptive aphasia and severe dysnomia with the declining capacity of advanced AD patients to evidence a simple treatment choice (LS1). The results offer insight into the relationship between different legal thresholds of competency and the progressive cognitive changes characteristic of AD, and represent an initial step toward a neurologic model of competency.
Creating Simple Windchill Admin Tools Using Info*Engine
NASA Technical Reports Server (NTRS)
Jones, Corey; Kapatos, Dennis; Skradski, Cory
2012-01-01
Being a Windchill administrator often requires performing simple yet repetitive tasks on large sets of objects. These can include renaming, deleting, checking in, undoing checkout, and much more. This is especially true during a migration. Fortunately, PTC has provided a simple way to dynamically interact with Windchill using Info*Engine. This presentation will describe how to create simple Info*Engine tasks capable of saving Windchill 10.0 administrators hours of tedious work. It will also show how these tasks can be combined and displayed on a simple JSP page that acts as a "Windchill Administrator Dashboard/Toolbox". The attendee will learn some valuable tasks Info*Engine is capable of performing. The attendee will gain a basic understanding of how to implement and run Info*Engine tasks. The attendee will learn what is involved in creating a JSP page that displays Info*Engine tasks.
Ameye, Lieveke; Fischerova, Daniela; Epstein, Elisabeth; Melis, Gian Benedetto; Guerriero, Stefano; Van Holsbeke, Caroline; Savelli, Luca; Fruscio, Robert; Lissoni, Andrea Alberto; Testa, Antonia Carla; Veldman, Joan; Vergote, Ignace; Van Huffel, Sabine; Bourne, Tom; Valentin, Lil
2010-01-01
Objectives: To prospectively assess the diagnostic performance of simple ultrasound rules to predict benignity/malignancy in an adnexal mass and to test the performance of the risk of malignancy index, two logistic regression models, and subjective assessment of ultrasonic findings by an experienced ultrasound examiner in adnexal masses for which the simple rules yield an inconclusive result. Design: Prospective temporal and external validation of simple ultrasound rules to distinguish benign from malignant adnexal masses. The rules comprised five ultrasonic features (including shape, size, solidity, and results of colour Doppler examination) to predict a malignant tumour (M features) and five to predict a benign tumour (B features). If one or more M features were present in the absence of a B feature, the mass was classified as malignant. If one or more B features were present in the absence of an M feature, it was classified as benign. If both M features and B features were present, or if none of the features was present, the simple rules were inconclusive. Setting: 19 ultrasound centres in eight countries. Participants: 1938 women with an adnexal mass examined with ultrasound by the principal investigator at each centre with a standardised research protocol. Reference standard: Histological classification of the excised adnexal mass as benign or malignant. Main outcome measures: Diagnostic sensitivity and specificity. Results: Of the 1938 patients with an adnexal mass, 1396 (72%) had benign tumours, 373 (19.2%) had primary invasive tumours, 111 (5.7%) had borderline malignant tumours, and 58 (3%) had metastatic tumours in the ovary. The simple rules yielded a conclusive result in 1501 (77%) masses, for which they resulted in a sensitivity of 92% (95% confidence interval 89% to 94%) and a specificity of 96% (94% to 97%). The corresponding sensitivity and specificity of subjective assessment were 91% (88% to 94%) and 96% (94% to 97%). In the 357 masses for which the simple rules yielded an inconclusive result and with available results of CA-125 measurements, the sensitivities were 89% (83% to 93%) for subjective assessment, 50% (42% to 58%) for the risk of malignancy index, 89% (83% to 93%) for logistic regression model 1, and 82% (75% to 87%) for logistic regression model 2; the corresponding specificities were 78% (72% to 83%), 84% (78% to 88%), 44% (38% to 51%), and 48% (42% to 55%). Use of the simple rules as a triage test and subjective assessment for those masses for which the simple rules yielded an inconclusive result gave a sensitivity of 91% (88% to 93%) and a specificity of 93% (91% to 94%), compared with a sensitivity of 90% (88% to 93%) and a specificity of 93% (91% to 94%) when subjective assessment was used in all masses. Conclusions: The use of the simple rules has the potential to improve the management of women with adnexal masses. In adnexal masses for which the rules yielded an inconclusive result, subjective assessment of ultrasonic findings by an experienced ultrasound examiner was the most accurate diagnostic test; the risk of malignancy index and the two regression models were not useful. PMID:21156740
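The triage step described above can be written compactly; the sketch below encodes the decision logic (malignant if only M-features are present, benign if only B-features, otherwise inconclusive) and computes sensitivity and specificity against histology over the conclusive cases. The feature counts and histology labels are invented for illustration; feature extraction itself is the sonographer's task and is not modeled here.

```python
# Sketch of the simple-rules triage logic. Inputs are counts of M-features and
# B-features for each mass; histology is the reference standard. Example data are invented.

def simple_rules(n_m_features, n_b_features):
    if n_m_features > 0 and n_b_features == 0:
        return "malignant"
    if n_b_features > 0 and n_m_features == 0:
        return "benign"
    return "inconclusive"

def sensitivity_specificity(predictions, histology):
    """Sensitivity and specificity over conclusive cases only."""
    tp = sum(p == "malignant" and h == "malignant" for p, h in zip(predictions, histology))
    fn = sum(p == "benign" and h == "malignant" for p, h in zip(predictions, histology))
    tn = sum(p == "benign" and h == "benign" for p, h in zip(predictions, histology))
    fp = sum(p == "malignant" and h == "benign" for p, h in zip(predictions, histology))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical masses: (M-feature count, B-feature count, histology)
cases = [(2, 0, "malignant"), (0, 1, "benign"), (1, 1, "malignant"), (0, 0, "benign"), (0, 2, "benign")]
preds = [simple_rules(m, b) for m, b, _ in cases]
conclusive = [(p, h) for p, (_, _, h) in zip(preds, cases) if p != "inconclusive"]
print(sensitivity_specificity(*zip(*conclusive)))
```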
NASA Astrophysics Data System (ADS)
Kadem, L.; Knapp, Y.; Pibarot, P.; Bertrand, E.; Garcia, D.; Durand, L. G.; Rieu, R.
2005-12-01
The effective orifice area (EOA) is the most commonly used parameter to assess the severity of aortic valve stenosis as well as the performance of valve substitutes. Particle image velocimetry (PIV) may be used for in vitro estimation of valve EOA. In the present study, we propose a new and simple method based on Howe’s developments of Lighthill’s aero-acoustic theory. This method is based on an acoustical source term (AST) to estimate the EOA from the transvalvular flow velocity measurements obtained by PIV. The EOAs measured by the AST method downstream of three sharp-edged orifices were in excellent agreement with the EOAs predicted from the potential flow theory used as the reference method in this study. Moreover, the AST method was more accurate than other conventional PIV methods based on streamlines, inflexion point or vorticity to predict the theoretical EOAs. The superiority of the AST method is likely due to the nonlinear form of the AST. There was also an excellent agreement between the EOAs measured by the AST method downstream of the three sharp-edged orifices as well as downstream of a bioprosthetic valve with those obtained by the conventional clinical method based on Doppler-echocardiographic measurements of transvalvular velocity. The results of this study suggest that this new simple PIV method provides an accurate estimation of the aortic valve flow EOA. This new method may thus be used as a reference method to estimate the EOA in experimental investigation of the performance of valve substitutes and to validate Doppler-echocardiographic measurements under various physiologic and pathologic flow conditions.
Attitude control system of the Delfi-n3Xt satellite
NASA Astrophysics Data System (ADS)
Reijneveld, J.; Choukroun, D.
2013-12-01
This work is concerned with the development of the attitude control algorithms that will be implemented on board the Delfi-n3xt nanosatellite, which is to be launched in 2013. One of the mission objectives is to demonstrate Sun pointing and three-axis stabilization. The attitude control modes and the associated algorithms are described. The control authority is shared between three body-mounted magnetorquers (MTQ) and three orthogonal reaction wheels. The attitude information is retrieved from Sun vector measurements, Earth magnetic field measurements, and gyro measurements. The control design is a trade-off between simplicity and performance. Stabilization and Sun pointing are achieved via the successive application of the classical Bdot control law and a quaternion feedback control. For the purpose of Sun pointing, a simple quaternion estimation scheme is implemented based on geometric arguments, where the need for a costly optimal filtering algorithm is alleviated, and a single line of sight (LoS) measurement is required - here the Sun vector. Beyond the three-axis Sun pointing mode, spinning Sun pointing modes are also described and used as demonstration modes. The three-axis Sun pointing mode requires reaction wheels and magnetic control, while the spinning control modes are implemented with magnetic control only. In addition, a simple scheme for angular rate estimation using Sun vector and Earth magnetic field measurements is tested for the case of gyro failures. The performance of the various control modes is illustrated via extensive simulations over time spans of several orbits. The simulated models of the dynamical space environment, the attitude hardware, and the onboard controller logic use realistic assumptions. All control modes satisfy the minimal Sun pointing requirements for power generation.
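The two control laws named above are standard and can be sketched as follows: the Bdot law commands magnetic dipole moments opposing the measured rate of change of the body-frame field, and the quaternion feedback law commands reaction-wheel torque from the vector part of the attitude-error quaternion and the body rates. The gains and the finite-difference derivative are illustrative; this is a textbook-style sketch, not the Delfi-n3xt flight code.

```python
# Sketch of the classical Bdot detumbling law and a quaternion feedback law,
# with illustrative gains (not the Delfi-n3xt flight parameters).
import numpy as np

def bdot_dipole(b_now, b_prev, dt, k=1e4):
    """Magnetorquer dipole command m = -k * dB/dt (body frame, finite difference)."""
    return -k * (np.asarray(b_now) - np.asarray(b_prev)) / dt

def quaternion_feedback_torque(q_err_vec, omega, kp=0.01, kd=0.05):
    """Reaction-wheel torque from the error-quaternion vector part and body rates."""
    return -kp * np.asarray(q_err_vec) - kd * np.asarray(omega)

# Example: slowly varying magnetic field (tesla) and a small attitude error.
m_cmd = bdot_dipole([2.1e-5, -1.0e-5, 3.0e-5], [2.0e-5, -1.2e-5, 3.1e-5], dt=1.0)
tau_cmd = quaternion_feedback_torque([0.02, -0.01, 0.00], [0.001, 0.002, -0.001])
print(m_cmd, tau_cmd)
```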
Fabrication of PVDF-TrFE based bilayered PbTiO3/PVDF-TrFE films capacitor
NASA Astrophysics Data System (ADS)
Nurbaya, Z.; Wahid, M. H.; Rozana, M. D.; Annuar, I.; Alrokayan, S. A. H.; Khan, H. A.; Rusop, M.
2016-07-01
The development of high-performance capacitors is moving toward a new generation in which ferroelectric materials serve as the active dielectric layer. The motivation of this study is to produce a high-capacitance device with a long life cycle. This was achieved by preparing bilayered films in which lead titanate acts as the active dielectric layer, stacked with a top dielectric layer of poly(vinylidene fluoride-trifluoroethylene). Both materials have one property in common: ferroelectric behavior. The combination of ceramic and polymer ferroelectric materials could therefore provide optimal dielectric characteristics for capacitor applications. Fabrication was carried out by a simple sol-gel spin-coating method in which the spinning speed for the polymer layer was varied while the ceramic layer was kept constant. Characterization of the PVDF-TrFE/PbTiO3 stack was performed in a metal-insulator-metal capacitor configuration and included structural, dielectric, and ferroelectric measurements.
Nonlinear elastic behavior of sub-critically damaged body armor panel
NASA Astrophysics Data System (ADS)
Fisher, Jason T.; Chimenti, D. E.
2012-05-01
A simple go/no-go test for body armor panels using pressure-sensitive, dye-indicator film (PSF) has been shown to be statistically effective in revealing subcritical damage to body armor panels. Previous measurements have shown that static indicator levels are accurately reproduced in dynamic loading events. Further impact tests on armor worn by a human resuscitation dummy using instrumented masses with an attached accelerometer and embedded force transducer have been performed and analyzed. New impact tests have shown a reliable correlation between PSF indication (as digitized images) and impact force for a wide range of impactor energies and masses. Numerical evaluation of digital PSF images is presented and correlated with impact parameters. Relationships between impactor mass and energy, and corresponding measured force are shown. We will also report on comparisons between ballistic testing performed on panels damaged under various impact conditions and tests performed on undamaged panels.
Tomassetti, Mauro; Merola, Giovanni; Martini, Elisabetta; Campanella, Luigi; Sanzò, Gabriella; Favero, Gabriele; Mazzei, Franco
2017-01-01
In this research, we developed a direct-flow surface plasmon resonance (SPR) immunosensor for ampicillin to perform direct, simple, and fast measurements of this important antibiotic. In order to better evaluate its performance, it was compared with a conventional amperometric immunosensor working in a competitive format, with the aim of identifying the real experimental advantages and disadvantages of the two methods. Results showed that certain analytical features of the new SPR immunodevice, such as the lower limit of detection (LOD) value and the width of the linear range, are poorer than those of a conventional amperometric immunosensor, which adversely affects the application to samples such as natural waters. On the other hand, the SPR immunosensor was more selective for ampicillin, and measurements were more easily and quickly attained compared to those performed with the conventional competitive immunosensor. PMID:28394296
Dehydration and performance on clinical concussion measures in collegiate wrestlers.
Weber, Amanda Friedline; Mihalik, Jason P; Register-Mihalik, Johna K; Mays, Sally; Prentice, William E; Guskiewicz, Kevin M
2013-01-01
The effects of dehydration induced by wrestling-related weight-cutting tactics on clinical concussion outcomes, such as neurocognitive function, balance performance, and symptoms, have not been adequately studied. To evaluate the effects of dehydration on the outcome of clinical concussion measures in National Collegiate Athletic Association Division I collegiate wrestlers. Repeated-measures design. Clinical research laboratory. Thirty-two Division I healthy collegiate male wrestlers (age = 20.0 ± 1.4 years; height = 175.0 ± 7.5 cm; baseline mass = 79.2 ± 12.6 kg). Participants completed preseason concussion baseline testing in early September. Weight and urine samples were also collected at this time. All participants reported to prewrestling practice and postwrestling practice for the same test battery and protocol in mid-October. They had begun practicing weight-cutting tactics a day before prepractice and postpractice testing. Differences between these measures permitted us to evaluate how dehydration and weight-cutting tactics affected concussion measures. Sport Concussion Assessment Tool 2 (SCAT2), Balance Error Scoring System (BESS), Graded Symptom Checklist (GSC), and Simple Reaction Time scores. The Simple Reaction Time was measured using the Automated Neuropsychological Assessment Metrics. The SCAT2 measurements were lower at prepractice (P = .002) and postpractice (P < .001) when compared with baseline. The BESS error scores were higher at postpractice when compared with baseline (P = .015). The GSC severity scores were higher at prepractice (P = .011) and postpractice (P < .001) than at baseline, and higher at postpractice than at prepractice (P = .003). The number of GSC symptoms reported was also higher at prepractice (P = .036) and postpractice (P < .001) when compared with baseline, and at postpractice when compared with prepractice (P = .003). Our results suggest that it is important for wrestlers to be evaluated in a euhydrated state to ensure that dehydration is not influencing the outcome of the clinical measures.
Simmonds, Mark; Burch, Jane; Llewellyn, Alexis; Griffiths, Claire; Yang, Huiqin; Owen, Christopher; Duffy, Steven; Woolacott, Nerys
2015-06-01
It is uncertain which simple measures of childhood obesity are best for predicting future obesity-related health problems and the persistence of obesity into adolescence and adulthood. To investigate the ability of simple measures, such as body mass index (BMI), to predict the persistence of obesity from childhood into adulthood and to predict obesity-related adult morbidities. To investigate how accurately simple measures diagnose obesity in children, and how acceptable these measures are to children, carers and health professionals. Multiple sources including MEDLINE, EMBASE and The Cochrane Library were searched from 2008 to 2013. Systematic reviews and a meta-analysis were carried out of large cohort studies on the association between childhood obesity and adult obesity; the association between childhood obesity and obesity-related morbidities in adulthood; and the diagnostic accuracy of simple childhood obesity measures. Study quality was assessed using Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) and a modified version of the Quality in Prognosis Studies (QUIPS) tool. A systematic review and an elicitation exercise were conducted on the acceptability of the simple measures. Thirty-seven studies (22 cohorts) were included in the review of prediction of adult morbidities. Twenty-three studies (16 cohorts) were included in the tracking review. All studies included BMI. There were very few studies of other measures. There was a strong positive association between high childhood BMI and adult obesity [odds ratio 5.21, 95% confidence interval (CI) 4.50 to 6.02]. A positive association was found between high childhood BMI and adult coronary heart disease, diabetes and a range of cancers, but not stroke or breast cancer. The predictive accuracy of childhood BMI to predict any adult morbidity was very low, with most morbidities occurring in adults who were of healthy weight in childhood. Predictive accuracy of childhood obesity was moderate for predicting adult obesity, with a sensitivity of 30% and a specificity of 98%. Persistence of obesity from adolescence to adulthood was high. Thirty-four studies were included in the diagnostic accuracy review. Most of the studies used the least reliable reference standard (dual-energy X-ray absorptiometry); only 24% of studies were of high quality. The sensitivity of BMI for diagnosing obesity and overweight varied considerably; specificity was less variable. Pooled sensitivity of BMI was 74% (95% CI 64.2% to 81.8%) and pooled specificity was 95% (95% CI 92.2% to 96.4%). The acceptability to children and their carers of BMI or other common simple measures was generally good. Little evidence was available regarding childhood measures other than BMI. No individual-level analysis could be performed. Childhood BMI is not a good predictor of adult obesity or adult disease; the majority of obese adults were not obese as children and most obesity-related adult morbidity occurs in adults who had a healthy childhood weight. However, obesity (as measured using BMI) was found to persist from childhood to adulthood, with most obese adolescents also being obese in adulthood. BMI was found to be reasonably good for diagnosing obesity during childhood. There is no convincing evidence suggesting that any simple measure is better than BMI for diagnosing obesity in childhood or predicting adult obesity and morbidity. 
Further research on obesity measures other than BMI is needed to determine which is the best tool for diagnosing childhood obesity, and new cohort studies are needed to investigate the impact of contemporary childhood obesity on adult obesity and obesity-related morbidities. This study is registered as PROSPERO CRD42013005711. The National Institute for Health Research Health Technology Assessment programme.
Further statistics in dentistry, Part 5: Diagnostic tests for oral conditions.
Petrie, A; Bulman, J S; Osborn, J F
2002-12-07
A diagnostic test is a simple test, sometimes based on a clinical measurement, which is used when the gold-standard test providing a definitive diagnosis of a given condition is too expensive, invasive or time-consuming to perform. The diagnostic test can be used to diagnose a dental condition in an individual patient or as a screening device in a population of apparently healthy individuals.
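For readers unfamiliar with how such tests are evaluated, the short sketch below computes sensitivity, specificity, and predictive values from a 2x2 table of test results against the gold standard; the counts are invented for illustration.

```python
# Evaluating a diagnostic test against the gold standard with a 2x2 table.
# The counts below are invented for illustration.

def diagnostic_summary(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # diseased correctly detected
        "specificity": tn / (tn + fp),   # healthy correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

print(diagnostic_summary(tp=45, fp=20, fn=5, tn=180))
```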
Rubino, Stefano; Akhtar, Sultan; Leifer, Klaus
2016-02-01
We present a simple, fast method for thickness characterization of suspended graphene/graphite flakes that is based on transmission electron microscopy (TEM). We derive an analytical expression for the intensity of the transmitted electron beam I0(t) as a function of the specimen thickness t (t < λ, where λ is the absorption constant for graphite). We show that in thin graphite crystals the transmitted intensity is a linear function of t. Furthermore, high-resolution (HR) TEM simulations are performed to obtain λ for a 001 zone-axis orientation, in a two-beam case and in a low-symmetry orientation. Subsequently, HR (used to determine t) and bright-field (to measure I0(0) and I0(t)) images were acquired to experimentally determine λ. The experimental value measured in the low-symmetry orientation matches the calculated value (i.e., λ=225±9 nm). The simulations also show that the linear approximation is valid up to a sample thickness of 3-4 nm regardless of the orientation and up to several tens of nanometers for a low-symmetry orientation. When compared with standard techniques for thickness determination of graphene/graphite, the method we propose has the advantage of being simple and fast, requiring only the acquisition of bright-field images.
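A short numerical sketch of the linear relation follows: if the transmitted intensity attenuates roughly exponentially, then for t << λ it falls off linearly and the thickness can be read off as t ≈ λ(1 - I0(t)/I0(0)), with λ = 225 nm from the paper. The exponential form and the example intensity ratios are assumptions used only for illustration.

```python
# Sketch of thickness estimation from bright-field intensities, assuming
# I0(t) ≈ I0(0) * exp(-t / lam) ≈ I0(0) * (1 - t / lam) for thin flakes.
import math

LAM = 225.0  # nm, absorption constant for graphite reported in the paper

def thickness_linear(i_t, i_0, lam=LAM):
    """Linear approximation, valid for t << lam."""
    return lam * (1.0 - i_t / i_0)

def thickness_exponential(i_t, i_0, lam=LAM):
    """Full exponential form, for comparison."""
    return -lam * math.log(i_t / i_0)

for ratio in (0.99, 0.95, 0.90):   # hypothetical I0(t)/I0(0) ratios
    print(ratio, thickness_linear(ratio, 1.0), thickness_exponential(ratio, 1.0))
```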
3D Laser Triangulation for Plant Phenotyping in Challenging Environments
Kjaer, Katrine Heinsvig; Ottosen, Carl-Otto
2015-01-01
To increase the understanding of how the plant phenotype is formed by genotype and environmental interactions, simple and robust high-throughput plant phenotyping methods should be developed and considered. This would not only broaden the application range of phenotyping in the plant research community, but also increase the ability for researchers to study plants in their natural environments. By studying plants in their natural environment in high temporal resolution, more knowledge on how multiple stresses interact in defining the plant phenotype could lead to a better understanding of the interaction between plant responses and epigenetic regulation. In the present paper, we evaluate a commercial 3D NIR-laser scanner (PlantEye, Phenospex B.V., Herleen, The Netherlands) to track daily changes in plant growth with high precision in challenging environments. Firstly, we demonstrate that the NIR laser beam of the scanner does not affect plant photosynthetic performance. Secondly, we demonstrate that it is possible to estimate phenotypic variation amongst the growth pattern of ten genotypes of Brassica napus L. (rapeseed), using a simple linear correlation between scanned parameters and destructive growth measurements. Our results demonstrate the high potential of 3D laser triangulation for simple measurements of phenotypic variation in challenging environments and in a high temporal resolution. PMID:26066990
Prediction during statistical learning, and implications for the implicit/explicit divide
Dale, Rick; Duran, Nicholas D.; Morehead, J. Ryan
2012-01-01
Accounts of statistical learning, both implicit and explicit, often invoke predictive processes as central to learning, yet practically all experiments employ non-predictive measures during training. We argue that the common theoretical assumption of anticipation and prediction needs clearer, more direct evidence during learning. We offer a novel experimental context to explore prediction, and report results from a simple sequential learning task designed to promote predictive behaviors in participants as they responded to a short sequence of simple stimulus events. Predictive tendencies in participants were measured using their computer mouse, the trajectories of which served as a means of tapping into predictive behavior while participants were exposed to very short and simple sequences of events. A total of 143 participants were randomly assigned to stimulus sequences along a continuum of regularity. Analysis of computer-mouse trajectories revealed that (a) participants almost always anticipate events in some manner, (b) participants exhibit two stable patterns of behavior, either reacting to or predicting future events, (c) the extent to which participants predict relates to performance on a recall test, and (d) explicit reports of perceiving patterns in the brief sequence correlate with the extent of prediction. We end with a discussion of implicit and explicit statistical learning and of the role prediction may play in both kinds of learning. PMID:22723817
A simple and versatile phase detector for heterodyne interferometers
NASA Astrophysics Data System (ADS)
Mlynek, A.; Faugel, H.; Eixenberger, H.; Pautasso, G.; Sellmair, G.
2017-02-01
The measurement of the relative phase of two sinusoidal electrical signals is a frequently encountered task in heterodyne interferometry, but also occurs in many other applications. Especially in interferometry, multi-radian detectors are often required, which track the temporal evolution of the phase difference and are able to register phase changes that exceed 2π. While a large variety of solutions to this problem is already known, we present an alternative approach, which pre-processes the signals with simple analog circuitry and digitizes two resulting voltages with an analog-to-digital converter (ADC), whose sampling frequency can be far below the frequency of the sinusoidal signals. Phase reconstruction is finally carried out in software. The main advantage of this approach is its simplicity, using only a few low-cost hardware components and a standard 2-channel ADC with low performance requirements. We present an application on the two-color interferometer of the ASDEX Upgrade tokamak, where the relative phase of 40 MHz sinusoids is measured.
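A minimal software-side sketch of multi-radian phase reconstruction is given below: two slowly sampled voltages, here assumed to be proportional to in-phase and quadrature components, are converted to a wrapped phase with atan2 and then unwrapped so that excursions beyond 2π are tracked. The I/Q interpretation of the two digitized voltages is an assumption made for illustration, not a description of the authors' analog pre-processing.

```python
# Sketch: reconstruct a multi-radian phase history from two digitized voltages,
# assumed proportional to the in-phase (I) and quadrature (Q) components.
import math

def unwrapped_phase(i_samples, q_samples):
    phases, offset, prev = [], 0.0, None
    for i, q in zip(i_samples, q_samples):
        p = math.atan2(q, i)                  # wrapped phase in (-pi, pi]
        if prev is not None:
            if p - prev > math.pi:            # jumped backwards through -pi
                offset -= 2 * math.pi
            elif prev - p > math.pi:          # jumped forwards through +pi
                offset += 2 * math.pi
        phases.append(p + offset)
        prev = p
    return phases

# Synthetic test: a phase ramp of about 3.5 turns.
true = [0.05 * n for n in range(440)]
i_s = [math.cos(t) for t in true]
q_s = [math.sin(t) for t in true]
rec = unwrapped_phase(i_s, q_s)
print(max(abs(a - b) for a, b in zip(rec, true)))   # ~0, phase tracked beyond 2*pi
```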
Liquid-vapor rectilinear diameter revisited
NASA Astrophysics Data System (ADS)
Garrabos, Y.; Lecoutre, C.; Marre, S.; Beysens, D.; Hahn, I.
2018-02-01
In the modern theory of critical phenomena, the liquid-vapor density diameter in simple fluids is generally expected to deviate from a rectilinear law approaching the critical point. However, by performing precise scannerlike optical measurements of the position of the SF6 liquid-vapor meniscus, in an approach much closer to criticality in temperature and density than earlier measurements, no deviation from a rectilinear diameter can be detected. The observed meniscus position from far (10 K ) to extremely close (1 mK ) to the critical temperature is analyzed using recent theoretical models to predict the complete scaling consequences of a fluid asymmetry. The temperature dependence of the meniscus position appears consistent with the law of rectilinear diameter. The apparent absence of the critical hook in SF6 therefore seemingly rules out the need for the pressure scaling field contribution in the complete scaling theoretical framework in this SF6 analysis. More generally, this work suggests a way to clarify the experimental ambiguities in the simple fluids for the near-critical singularities in the density diameter.
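For orientation, the competing functional forms can be written out explicitly. The sketch below states the rectilinear law alongside the singular diameter terms generally predicted by complete scaling, where the |t|^{2β} contribution is the one usually tied to the pressure scaling field and hence to the "critical hook". The amplitudes D_i are non-universal fitting parameters, and these expressions are quoted from the general complete-scaling literature rather than from this paper.

```latex
% Rectilinear diameter versus the complete-scaling form, with t = (T - T_c)/T_c and
% non-universal amplitudes D_1, D_{2\beta}, D_{1-\alpha} (general theory, not this paper's fit).
\rho_d \equiv \frac{\rho_{\mathrm{liq}} + \rho_{\mathrm{vap}}}{2\,\rho_c}
  = 1 + D_1\,|t| \qquad \text{(rectilinear law)}
\qquad
\rho_d = 1 + D_{2\beta}\,|t|^{2\beta} + D_{1-\alpha}\,|t|^{1-\alpha} + D_1\,|t|
  \qquad \text{(complete scaling)}
```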
The penta-prism LTP: A long-trace-profiler with stationary optical head and moving penta prism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-03-01
Metrology requirements for optical components for third-generation synchrotron sources are taxing the state of the art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long-Trace-Profiler II, and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, as in the original pencil-beam interferometer design of von Bieren, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
Bounded extremum seeking for angular velocity actuated control of nonholonomic unicycle
Scheinker, Alexander
2016-08-17
Here, we study control of the angular-velocity actuated nonholonomic unicycle, via a simple, bounded extremum seeking controller which is robust to external disturbances and measurement noise. The vehicle performs source seeking despite not having any position information about itself or the source, able only to sense a noise-corrupted scalar value whose extremum coincides with the unknown source location. In order to control the angular velocity, rather than the angular heading directly, a controller is developed such that the closed loop system exhibits multiple time scales and requires an analysis approach expanding the previous work of Kurzweil, Jarnik, Sussmann, and Liu, utilizing weak limits. We provide analytic proof of stability and demonstrate how this simple scheme can be extended to include position-independent source seeking, tracking, and collision avoidance for groups of autonomous vehicles in GPS-denied environments, based only on a measure of distance to an obstacle, which is an especially important feature for an autonomous agent.
Flexible Modes Control Using Sliding Mode Observers: Application to Ares I
NASA Technical Reports Server (NTRS)
Shtessel, Yuri B.; Hall, Charles E.; Baev, Simon; Orr, Jeb S.
2010-01-01
The launch vehicle dynamics affected by bending and sloshing modes are considered. Attitude measurement data that are corrupted by flexible modes could yield instability of the vehicle dynamics. Flexible body and sloshing modes are reconstructed by sliding mode observers. The resultant estimates are used to remove the undesirable dynamics from the measurements, and the direct effects of sloshing and bending modes on the launch vehicle are compensated by means of a controller that is designed without taking the bending and sloshing modes into account. A linearized mathematical model of Ares I launch vehicle was derived based on FRACTAL, a linear model developed by NASA/MSFC. The compensated vehicle dynamics with a simple PID controller were studied for the launch vehicle model that included two bending modes, two slosh modes and actuator dynamics. A simulation study demonstrated stable and accurate performance of the flight control system with the augmented simple PID controller without the use of traditional linear bending filters.
NASA Astrophysics Data System (ADS)
Ani, M. H.; Helmi, F.; Herman, S. H.; Noh, S.
2018-01-01
Recently, extensive research has been done on memristors to replace current memory storage technologies. Studies of the memristor active layer have mostly involved n-type semiconductor oxides such as TiO2 and ZnO. This paper highlights a simple water vapour oxidation method at 423 K to form a Cu/Cu2O electronic junction as a new type of memristor. Cu2O, a p-type semiconductor oxide, was used as the active layer of the memristor. The Cu/Cu2O/Au memristor was fabricated by thermal oxidation of copper foil, followed by sputtering of gold. Structural, morphological and memristive properties were characterized using XRD, FESEM, and current-voltage (I-V) measurements, respectively. Its memristive behaviour was identified by the pinched hysteresis loop and by measurement of the high resistance state (HRS) and low resistance state (LRS) of the sample. The Cu/Cu2O/Au memristor demonstrates performance comparable to previous studies using other methods.
Hassan, Ahmed Sheikh; Sapin, Anne; Ubrich, Nathalie; Maincent, Philippe; Bolzan, Claire; Leroy, Pierre
2008-10-01
A simple and sensitive high-performance liquid chromatography (HPLC) assay applied to the measurement of ibuprofen in rat plasma has been developed. Two parameters were investigated to improve ibuprofen detectability using fluorescence detection: variation of mobile phase pH and the use of beta-cyclodextrin (beta-CD). Increasing the pH value from 2.5 to 6.5 and adding 5 mM beta-CD enhanced the fluorescence signal (lambda(exc) = 224 nm; lambda(em) = 290 nm) by 2.5- and 1.3-fold, respectively, when using standards. In the case of plasma samples, only the pH variation significantly lowered the detection and quantification limits, down to 10 and 35 ng/mL, respectively. Full selectivity was obtained with a single plasma treatment step, namely protein precipitation with acidified acetonitrile. The validated method was applied to a pharmacokinetic study of ibuprofen encapsulated in microspheres and subcutaneously administered to rats.
Determination of simple thresholds for accelerometry-based parameters for fall detection.
Kangas, Maarit; Konttila, Antti; Winblad, Ilkka; Jämsä, Timo
2007-01-01
The increasing population of elderly people is mainly living in a home-dwelling environment and needs applications to support their independence and safety. Falls are one of the major health risks that affect the quality of life among older adults. Body-attached accelerometers have been used to detect falls. The placement of the accelerometric sensor as well as the fall detection algorithms are still under investigation. The aim of the present pilot study was to determine acceleration thresholds for fall detection, using triaxial accelerometric measurements at the waist, wrist, and head. Intentional falls (forward, backward, and lateral) and activities of daily living (ADL) were performed by two volunteer subjects. The results showed that measurements from the waist and head have potential to distinguish between falls and ADL. In particular, when the simple threshold-based detection was combined with posture detection after the fall, the sensitivity and specificity of fall detection were up to 100%. In contrast, the wrist did not appear to be an optimal site for fall detection.
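As a rough illustration of the threshold-plus-posture approach described in the abstract above, the Python sketch below flags a fall when the total acceleration magnitude exceeds an impact threshold and the posture estimated shortly afterwards is near horizontal. The function name, thresholds and window lengths are hypothetical placeholders, not the values determined in the study.

    import numpy as np

    def detect_fall(acc, fs=100.0, impact_thresh_g=3.0, lying_thresh_g=0.5):
        """Toy threshold-based fall detector for a triaxial accelerometer signal.

        acc: (N, 3) array of accelerations in units of g (sensor on the waist).
        Returns True if an impact above impact_thresh_g is followed, about 2 s
        later, by a small vertical acceleration component (subject lying down).
        """
        mag = np.linalg.norm(acc, axis=1)              # total acceleration magnitude
        for i in np.where(mag > impact_thresh_g)[0]:   # candidate impact samples
            start = i + int(2.0 * fs)                  # wait ~2 s after the impact
            stop = start + int(1.0 * fs)               # 1 s posture window
            if stop > len(acc):
                break
            if abs(np.mean(acc[start:stop, 2])) < lying_thresh_g:
                return True                            # impact followed by lying posture
        return False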
Further shock tunnel studies of scramjet phenomena
NASA Technical Reports Server (NTRS)
Morgan, R. G.; Paull, A.; Morris, N. A.; Stalker, R. J.
1986-01-01
Scramjet phenomena were studied using the shock tunnel T3 at the Australian National University. Simple two-dimensional models were used with a combination of wall and central injectors. Silane as an additive to hydrogen fuel was studied over a range of temperatures and pressures to evaluate its effect as an ignition aid. The film cooling effect of surface-injected hydrogen was measured over a wide range of equivalence ratios. Heat transfer measurements without injection were repeated to confirm previous indications of heating rates lower than simple flat plate predictions for laminar boundary layers in equilibrium flow. The previous results were reproduced and the discrepancies are discussed in terms of the model geometry and departures of the flow from equilibrium. In the thrust producing mode, attempts were made to increase specific impulse with wall injection. Some preliminary tests were also performed on shock-induced ignition, to investigate the possibility in flight of injecting fuel upstream of the combustion chamber, where it could mix but not burn.
Ohhara, Yoshihito; Oshima, Marie; Iwai, Toshinori; Kitajima, Hiroaki; Yajima, Yasuharu; Mitsudo, Kenji; Krdy, Absy; Tohnai, Iwai
2016-02-04
Patient-specific modelling in clinical studies requires a realistic simulation to be performed within a reasonable computational time. The aim of this study was to develop simple but realistic outflow boundary conditions for patient-specific blood flow simulation which can be used to clarify the distribution of the anticancer agent in intra-arterial chemotherapy for oral cancer. In this study, the boundary conditions are expressed as a zero dimension (0D) resistance model of the peripheral vessel network based on the fractal characteristics of branching arteries combined with knowledge of the circulatory system and the energy minimization principle. This resistance model was applied to four patient-specific blood flow simulations at the region where the common carotid artery bifurcates into the internal and external carotid arteries. Results of these simulations with the proposed boundary conditions were compared with the results of ultrasound measurements for the same patients. The pressure was found to be within the physiological range. The difference in velocity in the superficial temporal artery results in an error of 5.21 ± 0.78 % between the numerical results and the measurement data. The proposed outflow boundary conditions, therefore, constitute a simple resistance-based model and can be used for performing accurate simulations with commercial fluid dynamics software.
NASA Astrophysics Data System (ADS)
Shanmugavadivu, P.; Eliahim Jeevaraj, P. S.
2014-06-01
The Adaptive Iterated Function Systems (AIFS) filter presented in this paper has an outstanding potential to attenuate fixed-value impulse noise in images. This filter has two distinct phases, namely noise detection and noise correction, which use a Measure of Statistics and Iterated Function Systems (IFS), respectively. The performance of the AIFS filter is assessed by three metrics, namely Peak Signal-to-Noise Ratio (PSNR), Mean Structural Similarity Index Matrix (MSSIM) and Human Visual Perception (HVP). The quantitative measures PSNR and MSSIM endorse the merit of this filter in terms of degree of noise suppression and details/edge preservation respectively, in comparison with the high-performing filters reported in the recent literature. The qualitative measure HVP confirms the noise suppression ability of the devised filter. This computationally simple noise filter broadly finds application wherever images are highly degraded by fixed-value impulse noise.
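For reference, the PSNR metric used above can be computed from the mean squared error between the filtered and reference images as in the short sketch below (assuming 8-bit images; the MSSIM and HVP evaluations are not reproduced here).

    import numpy as np

    def psnr(reference, restored, peak=255.0):
        """Peak Signal-to-Noise Ratio (dB) between two images of equal shape."""
        mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")                # identical images
        return 10.0 * np.log10(peak ** 2 / mse)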
King, Marika R.; Binger, Cathy; Kent-Walsh, Jennifer
2015-01-01
The developmental readiness of four 5-year-old children to produce basic sentences using graphic symbols on an augmentative and alternative communication (AAC) device during a dynamic assessment (DA) task was examined. Additionally, the ability of the DA task to predict performance on a subsequent experimental task was evaluated. A graduated prompting framework was used during DA. Measures included amount of support required to produce the targets, modifiability (change in participant performance) within a DA session, and predictive validity of DA. Participants accurately produced target structures with varying amounts of support. Modifiability within DA sessions was evident for some participants, and partial support was provided for the measures of predictive validity. These initial results indicate that DA may be a viable way to measure young children’s developmental readiness to learn how to sequence simple, rule-based messages via aided AAC. PMID:25621928
Jatana, Gurneesh S; Magee, Mark; Fain, David; Naik, Sameer V; Shaver, Gregory M; Lucht, Robert P
2015-02-10
A diode-laser-absorption-spectroscopy-based sensor system was used to perform high-speed (100 Hz to 5 kHz) measurements of gas properties (temperature, pressure, and H2O vapor concentration) at the turbocharger inlet and at the exhaust gas recirculation (EGR) cooler exit of a diesel engine. An earlier version of this system was previously used for high-speed measurements of gas temperature and H2O vapor concentration in the intake manifold of the diesel engine. A 1387.2 nm tunable distributed feedback diode laser was used to scan across multiple H2O absorption transitions, and the direct absorption signal was recorded using a high-speed data acquisition system. Compact optical connectors were designed to conduct simultaneous measurements in the intake manifold, the EGR cooler exit, and the turbocharger inlet of the engine. For measurements at the turbocharger inlet, these custom optical connectors survived gas temperatures as high as 800 K using a simple and passive arrangement in which the temperature-sensitive components were protected from high temperatures using ceramic insulators. This arrangement reduced system cost and complexity by eliminating the need for any active water or oil cooling. Diode-laser measurements performed during steady-state engine operation were within 5% of the thermocouple and pressure sensor measurements, and within 10% of the H2O concentration values derived from the CO2 gas analyzer measurements. Measurements were also performed in the engine during transient events. In one such transient event, where a step change in fueling was introduced, the diode-laser sensor was able to capture the 30 ms change in the gas properties; the thermocouple, on the other hand, required 7.4 s to accurately reflect the change in gas conditions, while the gas analyzer required nearly 600 ms. To the best of our knowledge, this is the first implementation of such a simple and passive arrangement of high-temperature optical connectors as well as the first documented application of diode-laser absorption for high-speed gas dynamics measurements in the turbocharger inlet and EGR cooler exit of a diesel engine.
Halsteinli, Vidar; Kittelsen, Sverre A; Magnussen, Jon
2010-02-01
The performance of health service providers may be monitored by measuring productivity. However, the policy value of such measures may depend crucially on the accuracy of input and output measures. In particular, an important question is how to adjust adequately for case-mix in the production of health care. In this study, we assess productivity growth in Norwegian outpatient child and adolescent mental health service units (CAMHS) over a period characterized by governmental utilization of simple productivity indices, a substantial increase in capacity and a concurrent change in case-mix. We analyze the sensitivity of the productivity growth estimates using different specifications of output to adjust for case-mix differences. Case-mix adjustment is achieved by distributing patients into eight groups depending on reason for referral, age and gender, as well as correcting for the number of consultations. We utilize the nonparametric Data Envelopment Analysis (DEA) method to implicitly calculate weights that maximize each unit's efficiency. Malmquist indices of technical productivity growth are estimated and bootstrap procedures are performed to calculate confidence intervals and to test alternative specifications of outputs. The dataset consists of an unbalanced panel of 48-60 CAMHS in the period 1998-2006. The mean productivity growth estimate from a simple unadjusted patient model (one single output) is 35%; adjusting for case-mix (eight outputs) reduces the growth estimate to 15%. Adding consultations increases the estimate to 28%. The latter reflects an increase in the number of consultations per patient. We find that the governmental productivity indices strongly tend to overestimate productivity growth. Case-mix adjustment is of major importance and governmental utilization of performance indicators necessitates careful consideration of output specifications. Copyright 2009 Elsevier Ltd. All rights reserved.
Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane
2016-09-20
The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site condition. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which are the largest error source of traditional calibration methods. In addition, this new transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.
NASA Astrophysics Data System (ADS)
Butler, S. L.
2017-12-01
The electrical resistivity method is now highly developed with 2D and even 3D surveys routinely performed and with available fast inversion software. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available and would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function for the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function which must be calculated numerically for a general four electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1, 2 and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of the resolution. These also have simple mathematical formulas.
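For a sensitivity function that is available only as samples over depth, the two moments discussed above can be computed numerically as in the sketch below (assuming a positive sensitivity sampled on a uniform depth grid); the closed-form expressions for specific electrode arrays are derived in the paper itself and are not reproduced here.

    import numpy as np

    def sensitivity_moments(z, s):
        """First and second moments of a sampled (positive) depth-sensitivity
        function s at depths z. The first moment is a depth-of-investigation
        estimate; the square root of the centred second moment is a measure
        of vertical resolution."""
        z = np.asarray(z, dtype=float)
        w = np.asarray(s, dtype=float)
        w = w / w.sum()                          # normalize the sampled sensitivity
        z_mean = np.sum(w * z)                   # sensitivity-weighted mean depth
        z_std = np.sqrt(np.sum(w * (z - z_mean) ** 2))
        return z_mean, z_std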
The five traps of performance measurement.
Likierman, Andrew
2009-10-01
Evaluating a company's performance often entails wading through a thicket of numbers produced by a few simple metrics, writes the author, and senior executives leave measurement to those whose specialty is spreadsheets. To take ownership of performance assessment, those executives should find qualitative, forward-looking measures that will help them avoid five common traps: Measuring against yourself. Find data from outside the company, and reward relative, rather than absolute, performance. Enterprise Rent-A-Car uses a service quality index to measure customers' repeat purchase intentions. Looking backward. Use measures that lead rather than lag the profits in your business. Humana, a health insurer, found that the sickest 10% of its patients account for 80% of its costs; now it offers customers incentives for early screening. Putting your faith in numbers. The soft drinks company Britvic evaluates its executive coaching program not by trying to assign it an ROI number but by tracking participants' careers for a year. Gaming your metrics. The law firm Clifford Chance replaced its single, easy-to-game metric of billable hours with seven criteria on which to base bonuses. Sticking to your numbers too long. Be precise about what you want to assess and explicit about what metrics are assessing it. Such clarity would have helped investors interpret the AAA ratings involved in the financial meltdown. Really good assessment will combine finance managers' relative independence with line managers' expertise.
Implementing self sustained quality control procedures in a clinical laboratory.
Khatri, Roshan; K C, Sanjay; Shrestha, Prabodh; Sinha, J N
2013-01-01
Quality control is an essential component of every clinical laboratory, maintaining the excellence of laboratory standards, supporting proper disease diagnosis and patient care, and resulting in overall strengthening of the health care system. Numerous quality control schemes are available, with combinations of procedures, most of which are tedious, time consuming and can be "too technical", whereas commercially available quality control materials can be expensive, especially for laboratories in developing nations like Nepal. Here, we present a procedure performed at our centre with self-prepared control serum and the use of simple statistical tools for quality assurance. The pooled serum was prepared as per guidelines for the preparation of stabilized liquid quality control serum from human sera. Internal Quality Assessment was performed on this sample on a daily basis and included measurement of 12 routine biochemical parameters. The results were plotted on Levey-Jennings charts and analysed with quality control rules over a period of one month. The mean levels of biochemical analytes in the self-prepared control serum were within the normal physiological range. This serum was evaluated every day along with patients' samples. The results obtained were plotted on control charts and analysed using common quality control rules to identify possible systematic and random errors. Immediate mitigation measures were taken and the dispatch of erroneous reports was avoided. In this study we highlight a simple internal quality control procedure which can be performed by laboratories with minimal technology, expenditure, and expertise, improving the reliability and validity of test reports.
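A minimal sketch of the kind of daily check described above: control results for one analyte are converted to z-scores against the pooled-serum mean and standard deviation, and two widely used Westgard rules (1-3s and 2-2s) are flagged. The rule set and the helper name are illustrative and are not the exact procedure used at the authors' centre.

    import numpy as np

    def qc_flags(values, mean, sd):
        """Flag 1-3s and 2-2s rule violations in a sequence of daily control results."""
        z = (np.asarray(values, dtype=float) - mean) / sd
        flags = []
        for i, zi in enumerate(z):
            if abs(zi) > 3:
                flags.append((i, "1-3s"))                  # single point beyond 3 SD
            if i > 0 and z[i - 1] > 2 and zi > 2:
                flags.append((i, "2-2s high"))             # two consecutive points above +2 SD
            if i > 0 and z[i - 1] < -2 and zi < -2:
                flags.append((i, "2-2s low"))              # two consecutive points below -2 SD
        return flags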
Bioreactor design studies for a hydrogen-producing bacterium.
Wolfrum, Edward J; Watt, Andrew S
2002-01-01
Carbon monoxide (CO) can be metabolized by a number of microorganisms along with water to produce hydrogen (H2) and carbon dioxide. National Renewable Energy Laboratory researchers have isolated a number of bacteria that perform this so-called water-gas shift reaction at ambient temperatures. We performed experiments to measure the rate of CO conversion and H2 production in a trickle-bed reactor (TBR). The liquid recirculation rate and the reactor support material both affected the mass transfer coefficient, which controls the overall performance of the reactor. A simple reactor model taken from the literature was used to quantitatively compare the performance of the TBR geometry at two different size scales. Good agreement between the two reactor scales was obtained.
Effects of stress upon psychophysiological responses and performance following sleep deprivation
NASA Technical Reports Server (NTRS)
Roessler, R.; Lester, J. W.
1972-01-01
The usefulness of psychological and physiological variables in predicting performance under the stress of 48 hours of sleep deprivation was investigated. Performance tests, with subjects of different ego strength personalities, in concept acquisition, reading comprehension, word association, word memory, and anagrams were conducted, and physiological measurements of (1) the phasic and tonic electrodermal, (2) galvanic skin response, (3) thermal skin resistance, (4) heart rate, (5) respiration, and (6) plethysmographic finger pulse volume were recorded. It was found that the changes in the pattern of performance were the result of testing subjects at times when they would normally be sleeping, and that sleep deprivation longer than 48 hours must be maintained to produce changes in simple or well-learned tasks.
Ullal-Gupta, Sangeeta; Hannon, Erin E.; Snyder, Joel S.
2014-01-01
Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from a simple-meter rhythm but not from a complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate; however, asynchronies increased after the switch for all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters. PMID:25075514
Simple Techniques for Microclimate Measurement.
ERIC Educational Resources Information Center
Unwin, D. M.
1978-01-01
Describes simple ways of measuring the very local climate near the ground, and explains what these measurements mean. Equipment included a solar radiometer, a dew point instrument, and a thermocouple psychrometer. Examples are given of field measurements taken with some of the equipment and the results and their interpretation are discussed.…
NASA Astrophysics Data System (ADS)
Holway, Kevin; Thaxton, Christopher S.; Calantoni, Joseph
2012-11-01
Morphodynamic models of coastal evolution require relatively simple parameterizations of sediment transport for application over larger scales. Calantoni and Thaxton (2008) [6] presented a transport parameterization for bimodal distributions of coarse quartz grains derived from detailed boundary layer simulations for sheet flow and near sheet flow conditions. The simulation results, valid over a range of wave forcing conditions and large- to small-grain diameter ratios, were successfully parameterized with a simple power law that allows for the prediction of the transport rates of each size fraction. Here, we have applied the simple power law to a two-dimensional cellular automaton to simulate sheet flow transport. Model results are validated with experiments performed in the small oscillating flow tunnel (S-OFT) at the Naval Research Laboratory at Stennis Space Center, MS, in which sheet flow transport was generated with a bed composed of a bimodal distribution of non-cohesive grains. The work presented suggests that, under the conditions specified, algorithms that incorporate the power law may correctly reproduce laboratory bed surface measurements of bimodal sheet flow transport while inherently incorporating vertical mixing by size.
Kosmadopoulos, Anastasi; Sargent, Charli; Zhou, Xuan; Darwent, David; Matthews, Raymond W; Dawson, Drew; Roach, Gregory D
2017-02-01
Fatigue is a significant contributor to motor-vehicle accidents and fatalities. Shift workers are particularly susceptible to fatigue-related risks as they are often sleep-restricted and required to commute around the clock. Simple assays of performance could provide useful indications of risk in fatigue management, but their effectiveness may be influenced by changes in their sensitivity to sleep loss across the day. The aim of this study was to evaluate the sensitivity of several neurobehavioral and subjective tasks to sleep restriction (SR) at different circadian phases and their efficacy as predictors of performance during a simulated driving task. Thirty-two volunteers (M±SD; 22.8±2.9 years) were time-isolated for 13-days and participated in one of two 14-h forced desynchrony protocols with sleep opportunities equivalent to 8h/24h (control) or 4h/24h (SR). At regular intervals during wake periods, participants completed a simulated driving task, several neurobehavioral tasks, including the psychomotor vigilance task (PVT), and subjective ratings, including a self-assessment measure of ability to perform. Scores transformed into standardized units relative to baseline were folded into circadian phase bins based on core body temperature. Sleep dose and circadian phase effect sizes were derived via mixed models analyses. Predictors of driving were identified with regressions. Performance was most sensitive to sleep restriction around the circadian nadir. The effects of sleep restriction around the circadian nadir were larger for simulated driving and neurobehavioral tasks than for subjective ratings. Tasks did not significantly predict driving performance during the control condition or around the acrophase during the SR condition. The PVT and self-assessed ability were the best predictors of simulated driving across circadian phases during SR. These results show that simple performance measures and self-monitoring explain a large proportion of the variance in driving when fatigue-risk is high. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kang, Yan-Ru; Li, Ya-Li; Hou, Feng; Wen, Yang-Yang; Su, Dong
2012-05-01
An electrically conductive and electrochemically active composite paper of graphene nanosheet (GNS) coated cellulose fibres was fabricated via a simple paper-making process of dispersing chemically synthesized GNS into a cellulose pulp, followed by infiltration. The GNS nanosheet was deposited onto the cellulose fibers, forming a coating, during infiltration. It forms a continuous network through a bridge of interconnected cellulose fibres at small GNS loadings (3.2 wt%). The GNS/cellulose paper is as flexible and mechanically tough as the pure cellulose paper. The electrical measurements show the composite paper has a sheet resistance of 1063 Ω □-1 and a conductivity of 11.6 S m-1. The application of the composite paper as a flexible double layer supercapacitor in an organic electrolyte (LiPF6) displays a high capacity of 252 F g-1 at a current density of 1 A g-1 with respect to GNS. Moreover, the paper can be used as the anode in a lithium battery, showing distinct charge and discharge performances. The simple process for synthesising the GNS functionalized cellulose papers is attractive for the development of high performance papers for electrical, electrochemical and multifunctional applications.
Cortright, Ronald N; Lujan, Heidi L; Cox, Julie H; Cortright, Maria A; Langworthy, Brandon M; Petta, Lorene M; Tanner, Charles J; DiCarlo, Stephen E
2015-09-01
We hypothesized that the intellectual development of students, i.e., their beliefs about the nature of knowledge and learning, affects their intrinsic motivation and class performance. Specifically, we hypothesized that students with low intellectual development (i.e., the naive beliefs that knowledge is simple, absolute, and certain) have low intrinsic motivation and low class performance, whereas students with high intellectual development (i.e., more sophisticated beliefs that knowledge is complex, tentative, and evolving) have high intrinsic motivation and class performance. To test this hypothesis, we administered the Learning Context Questionnaire to measure intellectual development. In addition, we administered the Intrinsic Motivation Inventory to assess our students' intrinsic motivation. Furthermore, we performed regression analyses between intellectual development with both intrinsic motivation and class performance. The results document a positive relationship among intellectual development, intrinsic motivation, and class performance for female students only. In sharp contrast, there was a negative relationship between intellectual development, intrinsic motivation, and class performance for male students. The slope comparisons documented significant differences in the slopes relating intellectual development, intrinsic motivation, and class performance between female and male students. Thus, female students with more sophisticated beliefs that knowledge is personally constructed, complex, and evolving had higher intrinsic motivation and class performance. In contrast, male students with the naive beliefs that the structure of knowledge is simple, absolute, and certain had higher levels of intrinsic motivation and class performance. The results suggest that sex influences intellectual development, which has an effect on intrinsic motivation for learning a specific topic. Copyright © 2015 The American Physiological Society.
Courville, Xan F; Tomek, Ivan M; Kirkland, Kathryn B; Birhle, Marian; Kantor, Stephen R; Finlayson, Samuel R G
2012-02-01
To perform a cost-effectiveness analysis to evaluate preoperative use of mupirocin in patients with total joint arthroplasty (TJA). Simple decision tree model. Outpatient TJA clinical setting. Hypothetical cohort of patients with TJA. A simple decision tree model compared 3 strategies in a hypothetical cohort of patients with TJA: (1) obtaining preoperative screening cultures for all patients, followed by administration of mupirocin to patients with cultures positive for Staphylococcus aureus; (2) providing empirical preoperative treatment with mupirocin for all patients without screening; and (3) providing no preoperative treatment or screening. We assessed the costs and benefits over a 1-year period. Data inputs were obtained from a literature review and from our institution's internal data. Utilities were measured in quality-adjusted life-years, and costs were measured in 2005 US dollars. Incremental cost-effectiveness ratio. The treat-all and screen-and-treat strategies both had lower costs and greater benefits, compared with the no-treatment strategy. Sensitivity analysis revealed that this result is stable even if the cost of mupirocin was over $100 and the cost of SSI ranged between $26,000 and $250,000. Treating all patients remains the best strategy when the prevalence of S. aureus carriers and surgical site infection is varied across plausible values as well as when the prevalence of mupirocin-resistant strains is high. Empirical treatment with mupirocin ointment or use of a screen-and-treat strategy before TJA is performed is a simple, safe, and cost-effective intervention that can reduce the risk of SSI. S. aureus decolonization with nasal mupirocin for patients undergoing TJA should be considered. Level II, economic and decision analysis.
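The comparison rests on a simple decision tree; the sketch below computes the expected cost per patient of the three strategies from a set of input probabilities and costs. All numbers and variable names are hypothetical placeholders chosen for illustration, not the values used in the published model, and benefits (quality-adjusted life-years) are omitted for brevity.

    # Hypothetical inputs (NOT the published values).
    p_carrier        = 0.25      # prevalence of S. aureus nasal carriage
    p_ssi_carrier    = 0.04      # surgical site infection risk, untreated carriers
    p_ssi_noncarrier = 0.01      # surgical site infection risk, non-carriers
    rr_mupirocin     = 0.5       # relative SSI risk in decolonized carriers
    c_screen, c_mupirocin, c_ssi = 30.0, 50.0, 60000.0   # costs in US dollars

    def expected_cost(treat_all=False, screen=False):
        carriers_treated = treat_all or screen
        treated_fraction = 1.0 if treat_all else (p_carrier if screen else 0.0)
        p_ssi = (p_carrier * p_ssi_carrier * (rr_mupirocin if carriers_treated else 1.0)
                 + (1.0 - p_carrier) * p_ssi_noncarrier)
        return ((c_screen if screen else 0.0)
                + treated_fraction * c_mupirocin
                + p_ssi * c_ssi)

    for name, kwargs in [("no treatment",     dict()),
                         ("treat all",        dict(treat_all=True)),
                         ("screen and treat", dict(screen=True))]:
        print(f"{name:16s}: expected cost per patient ${expected_cost(**kwargs):,.2f}")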
Non-intrusive torque measurement for rotating shafts using optical sensing of zebra-tapes
NASA Astrophysics Data System (ADS)
Zappalá, D.; Bezziccheri, M.; Crabtree, C. J.; Paone, N.
2018-06-01
Non-intrusive, reliable and precise torque measurement is critical to dynamic performance monitoring, control and condition monitoring of rotating mechanical systems. This paper presents a novel, contactless torque measurement system consisting of two shaft-mounted zebra tapes and two optical sensors mounted on stationary rigid supports. Unlike conventional torque measurement methods, the proposed system does not require costly embedded sensors or shaft-mounted electronics. Moreover, its non-intrusive nature, adaptable design, simple installation and low cost make it suitable for a large variety of advanced engineering applications. Torque measurement is achieved by estimating the shaft twist angle through analysis of zebra tape pulse train time shifts. This paper presents and compares two signal processing methods for torque measurement: rising edge detection and cross-correlation. The performance of the proposed system has been proven experimentally under both static and variable conditions and both processing approaches show good agreement with reference measurements from an in-line, invasive torque transducer. Measurement uncertainty has been estimated according to the ISO GUM (Guide to the expression of uncertainty in measurement). Type A analysis of experimental data has provided an expanded uncertainty relative to the system full-scale torque of ±0.30% and ±0.86% for the rising edge and cross-correlation approaches, respectively. Statistical simulations performed by the Monte Carlo method have provided, in the worst case, an expanded uncertainty of ±1.19%.
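A sketch of the cross-correlation variant of the signal processing described above: the relative time shift between the two zebra-tape pulse trains is estimated from the peak of their cross-correlation, and the twist angle follows from the shaft speed. Sampling rate, shaft speed and the stiffness constants in the closing comment are placeholders that depend on the actual rig.

    import numpy as np

    def twist_angle(pulses_a, pulses_b, fs, omega):
        """Estimate the shaft twist angle (rad) between two zebra-tape stations.

        pulses_a, pulses_b: equally long records of the two optical sensor signals.
        fs: sampling rate (Hz); omega: shaft angular speed (rad/s).
        """
        a = pulses_a - np.mean(pulses_a)
        b = pulses_b - np.mean(pulses_b)
        xcorr = np.correlate(a, b, mode="full")
        lag = np.argmax(xcorr) - (len(b) - 1)      # lag (samples) of the correlation peak
        dt = lag / fs                              # time shift between the pulse trains
        return omega * dt                          # twist accumulated over dt

    # Torque then follows from the torsion formula T = G * J * twist / L, with
    # shear modulus G, polar moment of area J and gauge length L of the shaft.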
Hemangiopericytoma arising from the wall of the urinary bladder.
Kibar, Y; Uzar, A I; Erdemir, F; Ozcan, A; Coban, H; Seckin, B
2006-01-01
Hemangiopericytoma (HPC) arising from within the urinary bladder is exceptionally rare. A 45-year-old man presenting with left groin pain, vague suprapubic discomfort and urinary frequency was admitted to our clinic. Pelvic tomography revealed a tumor in the bladder wall measuring 4 x 3 cm that was not clearly distinct from the lower abdominal wall. Partial cystectomy was performed and the histopathological examination confirmed the hemangiopericytoma. Three thousand rad of external beam irradiation was delivered after the operation. Partial cystectomy and adjuvant radiotherapy may be a simple and effective alternative operation for patients with HPC.
Acoustic performance of a Herschel Quincke tube modified with an interconnecting pipe
NASA Astrophysics Data System (ADS)
Desantes, J. M.; Torregrosa, A. J.; Climent, H.; Moya, D.
2005-06-01
The classical two-duct Herschel-Quincke tube is modified by means of an additional pipe connecting both paths. A transfer matrix is obtained for a mesh system with five arbitrary branches and then particularized to the proposed scheme. Experimental attenuation measurements were performed on several prototypes, and the results compared favourably with predictions from the previous theoretical development. Finally, transmission loss contour plots were used to study the influence of the connecting pipe on the resonance frequencies. The results confirm the nontrivial character of the influence observed, and simple relationships are obtained for the general trends.
A quantitative theory of the Hounsfield unit and its application to dual energy scanning.
Brooks, R A
1977-10-01
A standard definition is proposed for the Hounsfield number. Any number in computed tomography can be converted to the Hounsfield scale after performing a simple calibration using air and water. The energy dependence of the Hounsfield number, H, is given by the expression H = (Hc + Hp Q)/(1 + Q), where Hc and Hp are the Compton and photoelectric coefficients of the material being measured, expressed in Hounsfield units, and Q is the "quality factor" of the scanner. Q can be measured by performing a scan of a single calibrating material, such as a potassium iodide solution. By applying this analysis to dual energy scans, the Compton and photoelectric coefficients of an unknown substance may easily be obtained. This can lead to a limited degree of chemical identification.
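The expression above is linear in the Compton and photoelectric coefficients, so two scans taken at different quality factors suffice to recover Hc and Hp for an unknown material. The sketch below solves the resulting pair of equations; the numbers in the example are arbitrary and purely illustrative.

    def compton_photoelectric(h1, q1, h2, q2):
        """Solve H = (Hc + Hp*Q) / (1 + Q) for Hc and Hp, given two Hounsfield
        readings h1, h2 of the same material at scanner quality factors q1, q2."""
        hp = (h1 * (1 + q1) - h2 * (1 + q2)) / (q1 - q2)
        hc = h1 * (1 + q1) - hp * q1
        return hc, hp

    # Arbitrary example: 100 HU at Q = 1.2 and 90 HU at Q = 0.8
    hc, hp = compton_photoelectric(100.0, 1.2, 90.0, 0.8)
    print(f"Hc = {hc:.1f} HU, Hp = {hp:.1f} HU")   # -> Hc = 46.0 HU, Hp = 145.0 HU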
Spacecraft Communications System Verification Using On-Axis Near Field Measurement Techniques
NASA Technical Reports Server (NTRS)
Keating, Thomas; Baugh, Mark; Gosselin, R. B.; Lecha, Maria C.; Krebs, Carolyn A. (Technical Monitor)
2000-01-01
Determination of the readiness of a spacecraft for launch is a critical requirement. The final assembly of all subsystems must be verified. Testing of a communications system can mostly be done using closed circuits (cabling to/from test ports), but the final connections to the antenna require radiation tests. The Tropical Rainfall Measuring Mission (TRMM) Project used a readily available 'near-field on-axis' equation to predict the values to be used for comparison with those obtained in a test program. Tests were performed in a 'clean room' environment at both Goddard Space Flight Center (GSFC) and in Japan at the Tanegashima Space Center (TnSC) launch facilities. Most of the measured values agreed with the predicted values to within 0.5 dB. This demonstrates that relatively simple techniques can sometimes be used to make antenna performance measurements when far-field ranges, anechoic chambers, or precision near-field ranges are neither available nor practical. Test data and photographs are provided.
Measuring the Resilience of Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Bell, Ann Maria; Dearden, Richard; Levri, Julie A.
2002-01-01
Despite the central importance of crew safety in designing and operating a life support system, the metric commonly used to evaluate alternative Advanced Life Support (ALS) technologies does not currently provide explicit techniques for measuring safety. The resilience of a system, or the system's ability to meet performance requirements and recover from component-level faults, is fundamentally a dynamic property. This paper motivates the use of computer models as a tool to understand and improve system resilience throughout the design process. Extensive simulation of a hybrid computational model of a water revitalization subsystem (WRS) with probabilistic, component-level faults provides data about off-nominal behavior of the system. The data can then be used to test alternative measures of resilience as predictors of the system's ability to recover from component-level faults. A novel approach to measuring system resilience using a Markov chain model of performance data is also developed. Results emphasize that resilience depends on the complex interaction of faults, controls, and system dynamics, rather than on simple fault probabilities.
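One way to read "a Markov chain model of performance data" is sketched below: a discretized sequence of system performance states is turned into an empirical transition matrix, from which recovery probabilities can be read off. The state definitions and the example sequence are hypothetical, since the abstract does not specify them.

    import numpy as np

    def transition_matrix(states, n_states):
        """Empirical Markov transition matrix from a sequence of discrete
        performance states (e.g. 0 = nominal, 1 = degraded, 2 = failed)."""
        counts = np.zeros((n_states, n_states))
        for s, s_next in zip(states[:-1], states[1:]):
            counts[s, s_next] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums,
                         out=np.zeros_like(counts), where=row_sums > 0)

    # Hypothetical run that degrades, recovers, then fails briefly.
    seq = [0, 0, 1, 1, 0, 0, 0, 2, 1, 0, 0]
    P = transition_matrix(seq, 3)
    print("P(degraded -> nominal) =", P[1, 0])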
Investigating the role of feedback and motivation in clinical reaction time assessment.
Eckner, James T; Chandran, Srikrishna; Richardson, James K
2011-12-01
To investigate the influence of performance feedback and motivation during 2 tests of simple visuomotor reaction time (RT). Cross-sectional, observational study. Outpatient academic physiatry clinic. Thirty-one healthy adults (mean [SD], 54 ± 15 years). Participants completed a clinical test of RT (RT(clin)) and a computerized test of RT with and without performance feedback (RT(compFB) and RT(compNoFB), respectively) in randomly assigned order. They then ranked their degree of motivation during each test. RT(clin) measured the time required to catch a suspended vertical shaft by hand closure after release of the shaft by the examiner. RT(compFB) and RT(compNoFB) both measured the time required to press a computer key in response to a visual cue displayed on a computer monitor. Performance feedback (visual display of the previous trial and summary results) was provided for RT(compFB), but not for RT(compNoFB). Means and standard deviations of RT(clin), RT(compFB), and RT(compNoFB) and participants' self-reported motivation on a 5-point Likert scale for each test. There were significant differences in both the means and standard deviations of RT(clin), RT(compFB), and RT(compNoFB) (F(2,60) = 81.66, P < .0001; F(2,60) = 32.46, P < .0001, respectively), with RT(clin) being both the fastest and least variable of the RT measurements. RT(clin) was more strongly correlated with RT(compFB) (r = 0.449, P = .0011) than with RT(compNoFB) (r = 0.314, P = .086). The participants reported similar levels of motivation between RT(clin) and RT(compFB), both of which were reported to be more motivating than RT(compNoFB). The stronger correlation between RT(clin) and RT(compFB) as well as the higher reported motivation during RT(clin) and RT(compFB) testing suggest that performance feedback is a positive motivating factor that is inherent to RT(clin) testing. RT(clin) is a simple, inexpensive technique for measuring RT and appears to be an intrinsically motivating task. This motivation may promote faster, more consistent RT performance compared with currently available computerized programs, which do not typically provide performance feedback. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
High-Performance Liquid Chromatography in the Undergraduate Chemical Engineering Laboratory
ERIC Educational Resources Information Center
Frey, Douglas D.; Guo, Hui; Karnik, Nikhila
2013-01-01
This article describes the assembly of a simple, low-cost, high-performance liquid chromatography (HPLC) system and its use in the undergraduate chemical engineering laboratory course to perform simple experiments. By interpreting the results from these experiments students are able to gain significant experience in the general method of…
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
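The simplest of the heuristics mentioned above, the local mean, can be written in a few lines: the missing pixel is estimated as the average of the surrounding patch. This is only the baseline against which the human observers and the statistically optimal observers were compared; the patch radius is an arbitrary choice.

    import numpy as np

    def local_mean_estimate(image, row, col, radius=2):
        """Estimate a missing pixel as the mean of the surrounding patch,
        excluding the missing pixel itself."""
        r0, r1 = max(row - radius, 0), min(row + radius + 1, image.shape[0])
        c0, c1 = max(col - radius, 0), min(col + radius + 1, image.shape[1])
        patch = image[r0:r1, c0:c1].astype(float)   # copy of the local neighbourhood
        patch[row - r0, col - c0] = np.nan          # drop the unknown centre pixel
        return np.nanmean(patch)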
A Total Quality-Control Plan with Right-Sized Statistical Quality-Control.
Westgard, James O
2017-03-01
A new Clinical Laboratory Improvement Amendments option for risk-based quality-control (QC) plans became effective in January, 2016. Called an Individualized QC Plan, this option requires the laboratory to perform a risk assessment, develop a QC plan, and implement a QC program to monitor ongoing performance of the QC plan. Difficulties in performing a risk assessment may limit validity of an Individualized QC Plan. A better alternative is to develop a Total QC Plan including a right-sized statistical QC procedure to detect medically important errors. Westgard Sigma Rules provides a simple way to select the right control rules and the right number of control measurements. Copyright © 2016 Elsevier Inc. All rights reserved.
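Right-sizing a statistical QC procedure is usually driven by the sigma-metric of the assay; a minimal sketch is shown below. The formula is the commonly used (TEa - |bias|)/CV, and the rule-of-thumb thresholds in the comments are conventional guidance rather than a quotation from this article.

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma-metric of an assay: (allowable total error - |bias|) / CV, in %."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Example: TEa = 10%, bias = 1%, CV = 1.5%  ->  sigma = 6.0
    # As a common rule of thumb, sigma >= 6 permits a single 1-3s rule with few
    # control measurements, while lower sigma values call for multirule QC and
    # more controls per run.
    print(sigma_metric(10.0, 1.0, 1.5))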
Filter methods to preserve local contrast and to avoid artifacts in gamut mapping
NASA Astrophysics Data System (ADS)
Meili, Marcel; Küpper, Dennis; Barańczuk, Zofia; Caluori, Ursina; Simon, Klaus
2010-01-01
Contrary to high dynamic range imaging, the preservation of details and the avoidance of artifacts are not explicitly considered in popular color management systems. An effective way to overcome these difficulties is image filtering. In this paper we investigate several image filter concepts for detail preservation as part of a practical gamut mapping strategy. In particular, we define four concepts including various image filters and check their performance with a psycho-visual test. Additionally, we compare our performance evaluation to two image quality measures with emphasis on local contrast. Surprisingly, the simplest filter concept performs highly efficiently and achieves an image quality which is comparable to the more established but slower methods.
Steady-state photoluminescent excitation characterization of semiconductor carrier recombination.
Bhosale, J S; Moore, J E; Wang, X; Bermel, P; Lundstrom, M S
2016-01-01
Photoluminescence excitation spectroscopy is a contactless characterization technique that can provide valuable information about the surface and bulk recombination parameters of a semiconductor device, distinct from other sorts of photoluminescent measurements. For this technique, a temperature-tuned light emitting diode (LED) has several advantages over other light sources. The large radiation density offered by LEDs from near-infrared to ultraviolet region at a low cost enables efficient and fast photoluminescence measurements. A simple and inexpensive LED-based setup facilitates measurement of surface recombination velocity and bulk Shockley-Read-Hall lifetime, which are key parameters to assess device performance. Under the right conditions, this technique can also provide a contactless way to measure the external quantum efficiency of a solar cell.
Plastic optical fiber level measurement sensor based on side holes
NASA Astrophysics Data System (ADS)
Park, Young June; Shin, Jong-Dug; Park, Jaehee
2014-10-01
A plastic optical fiber level measurement sensor based on in-line side holes is investigated theoretically and experimentally. The sensor consists of a plastic optical fiber with in-line side holes spaced about 5 cm apart. The 0.9 diameter in-line side holes were fabricated by micro-drilling. An analytical expression for the sensor transmittance was obtained using a simple ray optics approach. Measurements of the sensor transmittance were performed in a 55 cm-high measuring cylinder. Both results show that the sensor transmittance increases as the number of side holes filled with water increases. The research results indicate that the plastic optical fiber sensor based on in-line side holes can be used for water level measurement.
Active ultrasonic cross-correlation flowmeters for mixed-phase pipe flows
NASA Astrophysics Data System (ADS)
Sheen, S. H.; Raptis, A. C.
Two ultrasonic flowmeters which employ the active cross-correlation technique and use a simple clamp-on transducer arrangement are discussed. The flowmeter for solid/liquid flows was tested over a wide range of coal concentration in water and oil. The measured velocity based on the peak position of the cross-correlation function is consistently higher by about 15% than the average velocity measured by flow diversion. The origin of the difference results mainly from the flow velocity profiles and the transit-time probability distribution. The flowmeter that can measure particle velocity in a solid/gas flow requires acoustic decoupling arrangement between two sensing stations. The measured velocity is mainly associated with the particles near the wall. Performance of both flowmeters is presented.
Steady-state photoluminescent excitation characterization of semiconductor carrier recombination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhosale, J. S.; Department of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907; Moore, J. E.
2016-01-15
Photoluminescence excitation spectroscopy is a contactless characterization technique that can provide valuable information about the surface and bulk recombination parameters of a semiconductor device, distinct from other sorts of photoluminescent measurements. For this technique, a temperature-tuned light emitting diode (LED) has several advantages over other light sources. The large radiation density offered by LEDs from near-infrared to ultraviolet region at a low cost enables efficient and fast photoluminescence measurements. A simple and inexpensive LED-based setup facilitates measurement of surface recombination velocity and bulk Shockley-Read-Hall lifetime, which are key parameters to assess device performance. Under the right conditions, this technique can also provide a contactless way to measure the external quantum efficiency of a solar cell.
Neutron-Induced Fission Measurements at the Dance and Lsds Facilities at Lanl
NASA Astrophysics Data System (ADS)
Jandel, M.; Bredeweg, T. A.; Bond, E. M.; Chadwick, M. B.; Couture, A.; O'Donnell, J. M.; Fowler, M. M.; Haight, R. C.; Hayes-Sterbenz, A. C.; Rundberg, R. S.; Rusev, G. Y.; Ullmann, J. L.; Vieira, D. J.; Wilhelmy, J. B.; Wu, C. Y.; Becker, J. A.; Alexander, C. W.; Belier, G.
2014-09-01
New results from neutron-induced fission measurements performed at the Detector for Advanced Neutron Capture Experiments (DANCE) and Lead Slowing Down Spectrometer (LSDS) are presented. New correlated data on prompt fission γ-ray (PFG) distributions were measured using the DANCE array for resonant neutron-induced fission of 233U, 235U and 239Pu. The deduced properties of PFG emission are presented using a simple parametrization. An accurate knowledge of fission γ-ray spectra enables us to analyze the isomeric states of 236U created after neutron capture on 235U. We briefly discuss these new results. Finally, we review details and preliminary results of the challenging 237U(n,f) cross section measurement at the LSDS facility.
A High Spectral Resolution Lidar Based on Absorption Filter
NASA Technical Reports Server (NTRS)
Piironen, Paivi
1996-01-01
A High Spectral Resolution Lidar (HSRL) that uses an iodine absorption filter and a tunable, narrow-bandwidth Nd:YAG laser is demonstrated. The iodine absorption filter provides better performance than the Fabry-Perot etalon that it replaces. This study presents an instrument design that can be used as the basis for the design of a simple and robust lidar for the measurement of the optical properties of the atmosphere. The HSRL provides calibrated measurements of the optical properties of atmospheric aerosols. These observations include measurements of aerosol backscatter cross sections, optical depth, backscatter phase function, depolarization, and multiple scattering. The errors in the HSRL data are discussed and the effects of different errors on the measured optical parameters are shown.
A simple biota removal algorithm for 35 GHz cloud radar measurements
NASA Astrophysics Data System (ADS)
Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas
2018-03-01
Cloud radar reflectivity profiles can be an important measurement for the investigation of cloud vertical structure (CVS). However, extracting the intended meteorological cloud content from the measurement often demands an effective technique or algorithm that can reduce error and observational uncertainties in the recorded data. In this work, a technique is proposed to identify and separate cloud and non-hydrometeor echoes using the radar Doppler spectral moments profile measurements. The point and volume target-based theoretical radar sensitivity curves are used for removing the receiver noise floor, and identified radar echoes are scrutinized according to the signal decorrelation period. Here, it is hypothesized that cloud echoes are temporally more coherent and homogeneous and have a longer correlation period than biota. This can be checked statistically using a ˜4 s sliding mean and standard deviation of the reflectivity profiles, which helps to screen out clouds by filtering out the biota. The final step is the retrieval of cloud height. The proposed algorithm identifies cloud height solely through the systematic characterization of Z variability, using knowledge of the local atmospheric vertical structure in addition to theoretical, statistical and echo-tracing tools. Thus, high-resolution cloud radar reflectivity profile measurements are characterized with the theoretical echo sensitivity curves and observed echo statistics for true cloud height tracking (TEST). TEST showed superior performance in screening out clouds and filtering out isolated insects. TEST constrained with polarimetric measurements was found to be more promising under high-density biota, whereas TEST combined with linear depolarization ratio and spectral width performs well in filtering out biota within the highly turbulent shallow cumulus clouds in the convective boundary layer (CBL). The TEST technique is simple to implement but powerful in performance due to the flexibility in constraining, identifying and filtering out the biota and screening out the true cloud content, especially CBL clouds. Therefore, the TEST algorithm is well suited for screening out the low-level clouds that are strongly linked to the rain-making mechanism of the Indian summer monsoon region's CVS.
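A much-reduced sketch of the decorrelation idea at the heart of TEST: range gates are kept as cloud where a short sliding window of consecutive reflectivity profiles (roughly 4 s of data) shows persistent echo with low profile-to-profile variability, and rejected as biota otherwise. The window length and variability threshold below are placeholders, not the tuned values of the algorithm.

    import numpy as np

    def cloud_mask(z_dbz, window=4, std_thresh=3.0):
        """Classify radar range gates as cloud where reflectivity is temporally
        coherent over a sliding window of consecutive profiles.

        z_dbz: (n_profiles, n_gates) reflectivity in dBZ, NaN where no echo.
        """
        n_prof, _ = z_dbz.shape
        mask = np.zeros(z_dbz.shape, dtype=bool)
        for t in range(n_prof - window + 1):
            block = z_dbz[t:t + window]                     # ~4 s of profiles
            persistent = np.all(np.isfinite(block), axis=0) # echo in every profile
            steady = np.std(block, axis=0) < std_thresh     # low variability
            mask[t:t + window, persistent & steady] = True
        return mask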
Measurements of plasma loading in the presence of electrostatic waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riccardi, C.; Agostini, E.; Fontanesi, M.
1995-10-01
An experimental analysis of the plasma impedance with respect to the coupling of ES (electrostatic) waves is described in this paper. The waves are excited through a slow-wave antenna and the experiment performed in a toroidal device [C. Riccardi et al., Plasma Phys. 36, 1791 (1994)]. The measured impedance is compared with a simple theoretical model for magnetized homogeneous plasma, in order to establish the presence of bulk or surface waves and of some nonlinear effects when power is raised. © 1995 American Institute of Physics.
Photoeffect cross sections of some rare-earth elements at 145.4 keV
NASA Astrophysics Data System (ADS)
Umesh, T. K.; Ranganathaiah, C.; Sanjeevaiah, B.
1985-08-01
Total attenuation cross sections in the elements La, Ce, Pr, Nd, Sm, Gd, Dy, Ho, and Er were derived from the measured total cross sections of their simple oxide compounds, by employing the mixture rule at 145.4-keV photon energy. The compound cross sections have been measured by performing transmission experiments in a good geometry setup. From the derived total cross sections of elements, photoeffect cross sections have been obtained by subtracting the theoretical scattering cross sections. A good agreement is observed between the present data of photoeffect cross sections and Scofield's theoretical data.
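The mixture-rule step can be illustrated with a short calculation: the measured mass attenuation coefficient of the oxide is the mass-weighted sum of the elemental coefficients, so the rare-earth element's value follows once the oxygen contribution is subtracted. All numbers below are placeholders; the stoichiometry, atomic masses and the oxygen cross section must come from the actual measurement and standard tabulations.

    def element_mu_rho(mu_rho_oxide, a_element, n_element,
                       a_oxygen=16.0, n_oxygen=3, mu_rho_oxygen=0.13):
        """Invert the mixture rule mu/rho(compound) = w_R*mu/rho(R) + w_O*mu/rho(O)
        for the rare-earth element R in an oxide R_n O_m."""
        m_total = n_element * a_element + n_oxygen * a_oxygen
        w_el = n_element * a_element / m_total
        w_ox = n_oxygen * a_oxygen / m_total
        return (mu_rho_oxide - w_ox * mu_rho_oxygen) / w_el

    # Placeholder numbers for a sesquioxide R2O3 (not measured values):
    print(element_mu_rho(mu_rho_oxide=0.20, a_element=157.25, n_element=2))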
Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Strum, R.; Stiles, D.
In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. Finally, the proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for the measurement of strains and vibrations.
Digital phase demodulation for low-coherence interferometry-based fiber-optic sensors
Liu, Y.; Strum, R.; Stiles, D.; ...
2017-11-20
In this paper, we describe a digital phase demodulation scheme for low-coherence interferometry-based fiber-optic sensors that employs a simple generation of phase-shifted signals at the interrogation interferometer. The scheme allows a real-time calibration process and offers the capability of measuring large variations (up to the coherence length of the light source) at a bandwidth limited only by the data acquisition system. Finally, the proposed phase demodulation method is analytically derived, and its validity and performance are experimentally verified using fiber-optic Fabry–Perot sensors for the measurement of strains and vibrations.
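The abstract does not give the demodulation algebra, but a generic digital quadrature phase recovery, assumed here to start from two interferometer outputs shifted by 90 degrees, conveys the idea of extracting large, unwrapped phase excursions in software:

```python
# Generic sketch of digital phase recovery from two phase-shifted interferometer
# outputs assumed to be in quadrature; the scheme in the paper may form the
# shifted signals differently.
import numpy as np

def demodulate_phase(i_signal, q_signal):
    """Return the unwrapped interferometric phase from quadrature signals."""
    wrapped = np.arctan2(q_signal, i_signal)   # phase in (-pi, pi]
    return np.unwrap(wrapped)                  # remove 2*pi jumps for large excursions

# Synthetic check: a slowly varying strain-induced phase is recovered.
t = np.linspace(0.0, 1.0, 5000)
phi = 12.0 * np.sin(2 * np.pi * 3 * t)         # "large" phase excursion (rad)
recovered = demodulate_phase(np.cos(phi), np.sin(phi))
print(np.max(np.abs(recovered - phi)))         # ~0 up to numerical precision
```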
First Accelerator Test of the Kinematic Lightweight Energy Meter (KLEM) Prototype
NASA Technical Reports Server (NTRS)
Bashindzhagyan, G.; Adams, J. H.; Bashindzhagyan, P.; Chilingarian, A.; Donnelly, J.; Drury, L.; Egorov, N.; Golubkov, S.; Grebenyuk, V.; Kalinin, A.;
2002-01-01
The essence of the KLEM (Kinematic Lightweight Energy Meter) instrument is to directly measure the elemental energy spectra of high-energy cosmic rays by determining the angular distribution of secondary particles produced in a target. The first test of the simple KLEM prototype has been performed at the CERN SPS test-beam with 180 GeV pions during 2001. The results of the first test analysis confirm that, using the KLEM method, the energy of 180 GeV pions can be measured with a relative error of about 67%, which is very close to the results of the simulation (65%).
Scattering measurements on natural and model trees
NASA Technical Reports Server (NTRS)
Rogers, James C.; Lee, Sung M.
1990-01-01
The acoustical back scattering from a simple scale model of a tree has been experimentally measured. The model consisted of a trunk and six limbs, each with 4 branches; no foliage or twigs were included. The data from the anechoic chamber measurements were then mathematically combined to construct the effective back scattering from groups of trees. Also, initial measurements have been conducted out-of-doors on a single tree in an open field in order to characterize its acoustic scattering as a function of azimuth angle. These measurements were performed in the spring, prior to leaf development. The data support a statistical model of forest scattering; the scattered signal spectrum is highly irregular but with a remarkable general resemblance to the incident signal spectrum. Also, the scattered signal's spectra showed little dependence upon scattering angle.
Median of patient results as a tool for assessment of analytical stability.
Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft; Sölétormos, György
2015-06-15
In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor long-term analytical stability as a supplement to internal control procedures is often needed. Patient data from daily internal control schemes were used for monthly appraisal of analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. Seventy-five percent of the twenty analytes measured on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and with the desirable specifications for bias. Patient results applied in analytical quality performance control procedures are the most reliable source of material, as they represent the genuine substance of the measurements and therefore circumvent the problems associated with non-commutable materials in external assessment. Patient medians in the monthly monitoring of analytical stability in laboratory medicine are an inexpensive, simple and reliable tool to monitor the steadiness of analytical practice. Copyright © 2015 Elsevier B.V. All rights reserved.
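A minimal sketch of the monthly patient-median check, assuming a dictionary of patient results per month and an illustrative allowable-bias limit (in practice the limit is derived from biological variation data), could look like this:

```python
# Sketch of the patient-median monitoring idea: compare each monthly median with
# a baseline median and flag months whose relative deviation exceeds the
# allowable analytical bias. The 2% limit is an illustrative placeholder.
import numpy as np

def monthly_median_check(results_by_month, baseline_median, allowable_bias_pct=2.0):
    """results_by_month: dict like {'2015-01': [patient results...], ...}."""
    flags = {}
    for month, values in results_by_month.items():
        median = np.median(values)
        bias_pct = 100.0 * (median - baseline_median) / baseline_median
        flags[month] = (median, bias_pct, abs(bias_pct) <= allowable_bias_pct)
    return flags
```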
Improving the photovoltaic performance of perovskite solar cells with acetate
Zhao, Qian; Li, G. R.; Song, Jian; Zhao, Yulong; Qiang, Yinghuai; Gao, X. P.
2016-01-01
In an all-solid-state perovskite solar cell, the methylammonium lead halide film is responsible for generating photo-excited electrons, so its quality directly influences the final photovoltaic performance of the solar cell. This paper presents a very simple chemical approach to improving the quality of a perovskite film with a suitable amount of acetic acid. With the introduction of acetate ions, a homogeneous, continuous and hole-free perovskite film comprised of high-crystallinity grains is obtained. UV-visible spectra and steady-state and time-resolved photoluminescence (PL) spectra reveal that the perovskite film obtained under the optimized conditions shows higher light absorption, more efficient electron transport, and faster electron extraction to the adjoining electron transport layer. These features enable the optimized perovskite film to provide an improved short-circuit current. The corresponding solar cells with a planar configuration achieve an improved power conversion efficiency of 13.80%, and the highest power conversion efficiency in the photovoltaic measurements is up to 14.71%. The results not only provide a simple approach to optimizing perovskite films but also offer a new perspective on fabricating high-performance perovskite solar cells. PMID:27934924
Improving the photovoltaic performance of perovskite solar cells with acetate.
Zhao, Qian; Li, G R; Song, Jian; Zhao, Yulong; Qiang, Yinghuai; Gao, X P
2016-12-09
In an all-solid-state perovskite solar cell, the methylammonium lead halide film is responsible for generating photo-excited electrons, so its quality directly influences the final photovoltaic performance of the solar cell. This paper presents a very simple chemical approach to improving the quality of a perovskite film with a suitable amount of acetic acid. With the introduction of acetate ions, a homogeneous, continuous and hole-free perovskite film comprised of high-crystallinity grains is obtained. UV-visible spectra and steady-state and time-resolved photoluminescence (PL) spectra reveal that the perovskite film obtained under the optimized conditions shows higher light absorption, more efficient electron transport, and faster electron extraction to the adjoining electron transport layer. These features enable the optimized perovskite film to provide an improved short-circuit current. The corresponding solar cells with a planar configuration achieve an improved power conversion efficiency of 13.80%, and the highest power conversion efficiency in the photovoltaic measurements is up to 14.71%. The results not only provide a simple approach to optimizing perovskite films but also offer a new perspective on fabricating high-performance perovskite solar cells.
Tobaruela, Almudena; Rojo, Francisco Javier; García Paez, José María; Bourges, Jean Yves; Herrero, Eduardo Jorge; Millán, Isabel; Alvarez, Lourdes; Cordon, Ángeles; Guinea, Gustavo V
2016-08-01
The aim of this study was to evaluate the variation of hardness with fatigue in calf pericardium, a biomaterial commonly used in bioprosthetic heart valves, and its relationship with the energy dissipated during the first fatigue cycle that has been shown to be a predictor of fatigue-life (García Páez et al., 2006, 2007; Rojo et al., 2010). Fatigue tests were performed in vitro on 24 pericardium specimens cut in a root-to-apex direction. The specimens were subjected to a maximum stress of 1MPa in blocks of 10, 25, 50, 100, 250, 500, 1000 and 1500 cycles. By means of a modified Shore A hardness test procedure, the hardness of the specimen was measured before and after fatigue tests. Results showed a significant correlation of such hardness with fatigue performance and with the energy dissipated in the first cycle of fatigue, a predictor of pericardium durability. The study showed indentation hardness as a simple and reliable indicator of mechanical performance, one which could be easily implemented in improving tissue selection. Copyright © 2016 Elsevier Ltd. All rights reserved.
Two-dimensional simple proportional feedback control of a chaotic reaction system
NASA Astrophysics Data System (ADS)
Mukherjee, Ankur; Searson, Dominic P.; Willis, Mark J.; Scott, Stephen K.
2008-04-01
The simple proportional feedback (SPF) control algorithm may, in principle, be used to attain periodic oscillations in dynamic systems exhibiting low-dimensional chaos. However, if implemented within a discrete control framework with sampling frequency limitations, controller performance may deteriorate. This phenomenon is illustrated using simulations of a chaotic autocatalytic reaction system. A two-dimensional (2D) SPF controller that explicitly takes into account some of the problems caused by limited sampling rates is then derived by introducing suitable modifications to the original SPF method. Using simulations, the performance of the 2D-SPF controller is compared to that of a conventional SPF control law when implemented as a sampled data controller. Two versions of the 2D-SPF controller are described: linear (L2D-SPF) and quadratic (Q2D-SPF). The performance of both the L2D-SPF and Q2D-SPF controllers is shown to be superior to the SPF when controller sampling frequencies are decreased. Furthermore, it is demonstrated that the Q2D-SPF controller provides better fixed point stabilization compared to both the L2D-SPF and the conventional SPF when concentration measurements are corrupted by noise.
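For readers unfamiliar with SPF, the sketch below applies the same idea, perturbing a system parameter in proportion to the deviation of the state from an unstable fixed point, to a generic chaotic map (the logistic map) rather than to the authors' autocatalytic reaction model; the gain and control window are illustrative.

```python
# Simple proportional feedback on the logistic map x -> r*x*(1-x): the parameter
# is nudged by K*(x - x_fix) whenever the orbit wanders close to the unstable
# fixed point, which stabilises it. Gain and window values are illustrative.
import numpy as np

r0 = 3.8                      # nominal parameter (chaotic regime)
x_fix = 1.0 - 1.0 / r0        # unstable fixed point of the map
K = 9.3                       # feedback gain chosen to stabilise the fixed point
window = 0.05                 # apply control only inside this neighbourhood

x = 0.3
trajectory = []
for n in range(300):
    dx = x - x_fix
    r = r0 + K * dx if abs(dx) < window else r0   # SPF parameter perturbation
    x = r * x * (1.0 - x)
    trajectory.append(x)

print(np.round(trajectory[-5:], 4))   # settles near x_fix once control engages
```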
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couture, Aaron Joseph; Casten, Richard F.; Cakirli, R. B.
Here, neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
Couture, Aaron Joseph; Casten, Richard F.; Cakirli, R. B.
2017-12-20
Here, neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement.
Optical zero-differential pressure switch and its evaluation in a multiple pressure measuring system
NASA Technical Reports Server (NTRS)
Powell, J. A.
1977-01-01
The design of a clamped-diaphragm pressure switch is described in which diaphragm motion is detected by a simple fiber-optic displacement sensor. The switch was evaluated in a pressure measurement system where it detected the zero crossing of the differential pressure between a static test pressure and a tank pressure that was periodically ramped from near zero to fullscale gage pressure. With a ramping frequency of 1 hertz and a full-scale tank pressure of 69 N/sq cm gage (100 psig), the switch delay was as long as 2 milliseconds. Pressure measurement accuracies were 0.25 to 0.75 percent of full scale. Factors affecting switch performance are also discussed.
Picosecond pulse measurements using the active laser medium
NASA Technical Reports Server (NTRS)
Bernardin, James P.; Lawandy, N. M.
1990-01-01
A simple method for measuring the pulse lengths of synchronously pumped dye lasers which does not require the use of an external nonlinear medium, such as a doubling crystal or two-photon fluorescence cell, to autocorrelate the pulses is discussed. The technique involves feeding the laser pulses back into the dye jet, thus correlating the output pulses with the intracavity pulses to obtain pulse length signatures in the resulting time-averaged laser power. Experimental measurements were performed using a rhodamine 6G dye laser pumped by a mode-locked frequency-doubled Nd:YAG laser. The results agree well with numerical computations, and the method proves effective in determining lengths of picosecond laser pulses.
Smartphones as experimental tools to measure acoustical and mechanical properties of vibrating rods
NASA Astrophysics Data System (ADS)
González, Manuel Á.; González, Miguel Á.
2016-07-01
Modern smartphones have calculation and sensor capabilities that make them suitable for use as versatile and reliable measurement devices in simple teaching experiments. In this work a smartphone is used, together with low cost materials, in an experiment to measure the frequencies emitted by vibrating rods of different materials, shapes and lengths. The results obtained with the smartphone have been compared with theoretical calculations and the agreement is good. Alternatively, physics students can perform the experiment described here and use their results to determine the dependencies of the obtained frequencies on the rod characteristics. In this way they will also practice research methods that they will probably use in their professional life.
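A student reproducing this kind of measurement could, for example, estimate the dominant emitted frequency from a recording with a short FFT script; the WAV file name below is a placeholder and the audio is assumed to have been exported from the phone.

```python
# Sketch: read a recording of a struck rod, take the FFT, and report the
# dominant spectral peak. The file name is a hypothetical placeholder.
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("rod_recording.wav")   # hypothetical exported recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                    # mix stereo down to mono
audio = audio - audio.mean()                      # remove DC offset

spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)
peak = freqs[np.argmax(spectrum)]
print(f"dominant emitted frequency: {peak:.1f} Hz")
```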
Measuring the Indonesian provinces competitiveness by using PCA technique
NASA Astrophysics Data System (ADS)
Runita, Ditha; Fajriyah, Rohmatul
2017-12-01
Indonesia is a country with a vast territory, comprising 34 provinces. Building local competitiveness is critical to enhancing long-term national competitiveness, especially for a country as diverse as Indonesia. A competitive local government can attract and retain successful firms and increase living standards for its inhabitants, because investment and skilled workers gravitate from uncompetitive regions to more competitive ones. Although there are other methods for measuring competitiveness, here we demonstrate a simple method using principal component analysis (PCA), which can be applied directly to correlated, multivariate data. The analysis of the Indonesian provinces yields three clusters based on the competitiveness measurement: Bad-, Good- and Best-performing provinces.
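A minimal sketch of the PCA-plus-clustering workflow, using a random placeholder matrix in place of the Indonesian indicator data, might look as follows:

```python
# Sketch: project standardised provincial indicators onto principal components,
# then group the provinces into three clusters. The matrix X is a random
# placeholder, not the actual Indonesian data set.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 10))          # 34 provinces x 10 hypothetical indicators

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(labels)                          # cluster membership per province
```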
Development of specification for the superpave simple performance tests (SPT).
DOT National Transportation Integrated Search
2009-05-16
This report describes the development and establishment of a proposed Simple Performance Test (SPT) specification in order to contribute to the asphalt materials technology in the state of Michigan. The properties and characteristics of materials,...
Smart Swarms of Bacteria-Inspired Agents with Performance Adaptable Interactions
Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel
2011-01-01
Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment – by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots. PMID:21980274
Smart swarms of bacteria-inspired agents with performance adaptable interactions.
Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel
2011-09-01
Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment--by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots.
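A toy sketch of such performance-adaptable interactions is given below; the weight-update rule, the constants and the gradient-based "own cue" term are assumptions made for illustration, not the paper's exact model.

```python
# Toy performance-adaptable swarm: each agent blends its own local cue with the
# mean heading of nearby peers, and the peer weight shrinks when the agent's
# last step improved its local reading (and grows otherwise).
import numpy as np

def adapt_weights(peer_weight, improved, w_min=0.1, w_max=0.9, rate=0.2):
    """Performance-dependent interaction: rely less on peers when improving."""
    peer_weight = np.where(improved, peer_weight - rate, peer_weight + rate)
    return np.clip(peer_weight, w_min, w_max)

def own_cue(field, p, eps=1e-2):
    """Crude local gradient estimate standing in for the agent's own sensing."""
    g = np.array([field(p + [eps, 0.0]) - field(p - [eps, 0.0]),
                  field(p + [0.0, eps]) - field(p - [0.0, eps])]) / (2.0 * eps)
    return g / (np.linalg.norm(g) + 1e-12)

def swarm_step(pos, heading, peer_weight, field, dt=0.1, radius=1.5):
    prev = np.array([field(p) for p in pos])
    new_heading = np.empty_like(heading)
    for i, p in enumerate(pos):
        near = np.linalg.norm(pos - p, axis=1) < radius
        peers = heading[near].mean(axis=0)                      # alignment term
        h = (1.0 - peer_weight[i]) * own_cue(field, p) + peer_weight[i] * peers
        new_heading[i] = h / (np.linalg.norm(h) + 1e-12)
    new_pos = pos + dt * new_heading
    improved = np.array([field(p) for p in new_pos]) > prev
    return new_pos, new_heading, adapt_weights(peer_weight, improved)

# Example usage with a single hypothetical "food" peak at (5, 5).
rng = np.random.default_rng(0)
pos, heading = rng.normal(size=(20, 2)), rng.normal(size=(20, 2))
heading /= np.linalg.norm(heading, axis=1, keepdims=True)
weights = np.full(20, 0.5)
food = lambda p: -float(np.linalg.norm(np.asarray(p) - [5.0, 5.0]))
for _ in range(300):
    pos, heading, weights = swarm_step(pos, heading, weights, food)
print(np.round(pos.mean(axis=0), 2))   # the swarm centre drifts toward the peak
```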
A Simple Case Study of a Grid Performance System
NASA Technical Reports Server (NTRS)
Aydt, Ruth; Gunter, Dan; Quesnel, Darcy; Smith, Warren; Taylor, Valerie; Biegel, Bryan (Technical Monitor)
2001-01-01
This document presents a simple case study of a Grid performance system based on the Grid Monitoring Architecture (GMA) being developed by the Grid Forum Performance Working Group. It describes how the various system components would interact for a very basic monitoring scenario, and is intended to introduce people to the terminology and concepts presented in greater detail in other Working Group documents. We believe that by focusing on the simple case first, working group members can familiarize themselves with terminology and concepts, and productively join in the ongoing discussions of the group. In addition, prototype implementations of this basic scenario can be built to explore the feasibility of the proposed architecture and to expose possible shortcomings. Once the simple case is understood and agreed upon, complexities can be added incrementally as warranted by cases not addressed in the most basic implementation described here. Following the basic performance monitoring scenario discussion, unresolved issues are introduced for future discussion.
NASA Astrophysics Data System (ADS)
Santos, José; Janeiro, Fernando M.; Ramos, Pedro M.
2015-10-01
This paper presents an embedded liquid viscosity measurement system based on a vibrating wire sensor. Although multiple viscometers based on different working principles are commercially available, there is still a market demand for a dedicated measurement system capable of performing accurate, fast measurements and requiring little or no operator training for simple systems and solution monitoring. The developed embedded system is based on a vibrating wire sensor that works by measuring the impedance response of the sensor, which depends on the viscosity and density of the liquid in which the sensor is immersed. The core of the embedded system is a digital signal processor (DSP) which controls the waveform generation and acquisitions for the measurement of the impedance frequency response. The DSP also processes the acquired waveforms and estimates the liquid viscosity. The user can interact with the measurement system through a keypad and an LCD or through a computer with a USB connection for data logging and processing. The presented system is tested on a set of viscosity standards and the estimated values are compared with the standard manufacturer specified viscosity values. A stability study of the measurement system is also performed.
Finite difference time domain grid generation from AMC helicopter models
NASA Technical Reports Server (NTRS)
Cravey, Robin L.
1992-01-01
A simple technique is presented which forms a cubic grid model of a helicopter from an Aircraft Modeling Code (AMC) input file. The AMC input file defines the helicopter fuselage as a series of polygonal cross sections. The cubic grid model is used as an input to a Finite Difference Time Domain (FDTD) code to obtain predictions of antenna performance on a generic helicopter model. The predictions compare reasonably well with measured data.
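A sketch of this kind of cross-section voxelization, assuming each station is supplied as a 2D polygon in the (y, z) plane and using an illustrative cell size, is shown below; it marks every cubic cell whose centre falls inside the polygon.

```python
# Sketch of turning polygonal fuselage cross sections into a cubic grid: at each
# station along the axis, cells whose centres lie inside the station's polygon
# are flagged. Input format and cell size are assumptions for illustration.
import numpy as np
from matplotlib.path import Path

def voxelise(cross_sections, cell=0.1, ymax=5.0, zmax=5.0):
    """cross_sections: list of (x_station, [(y, z), ... polygon vertices ...])."""
    ny, nz = int(2 * ymax / cell), int(2 * zmax / cell)
    yc = -ymax + cell * (np.arange(ny) + 0.5)
    zc = -zmax + cell * (np.arange(nz) + 0.5)
    YY, ZZ = np.meshgrid(yc, zc, indexing="ij")
    centres = np.column_stack([YY.ravel(), ZZ.ravel()])
    grid = np.zeros((len(cross_sections), ny, nz), dtype=bool)
    for k, (_, polygon) in enumerate(cross_sections):
        inside = Path(polygon).contains_points(centres)
        grid[k] = inside.reshape(ny, nz)
    return grid   # True marks cells occupied by the fuselage at each station
```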
Optical testing using the transport-of-intensity equation.
Dorrer, C; Zuegel, J D
2007-06-11
The transport-of-intensity equation links the intensity and phase of an optical source to the longitudinal variation of its intensity in the presence of Fresnel diffraction. This equation can be used to provide a simple, accurate spatial-phase measurement for optical testing of flat surfaces. The properties of this approach are derived. The experimental demonstration is performed by quantifying the surface variations induced by the magnetorheological finishing process on laser rods.
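For reference, the standard form of the transport-of-intensity equation (with k the wavenumber, I the intensity, φ the phase and the transverse gradient taken in the plane perpendicular to propagation; this notation is standard but not spelled out in the abstract) is:

```latex
% Standard form of the transport-of-intensity equation:
% k = wavenumber, I = intensity, \varphi = phase, \nabla_\perp = transverse gradient
\nabla_\perp \cdot \bigl( I(x,y,z)\, \nabla_\perp \varphi(x,y,z) \bigr)
  = -\,k \, \frac{\partial I(x,y,z)}{\partial z}
```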
2007-05-29
International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), 15-20 April 2007, Honolulu, Hawaii. 1. E. Near Term... from the sensor measured in feet. The detection performance of the footstep in the presence of interfering speech was characterized in previously... investigation, we developed a simple piecewise linear approximation to the probability of detection curve with no interfering speech. This approximation was
Birkemose, M; Møller, A J; Madsen, M L; Brantlov, S; Sørensen, H; Overgaard, K; Johansen, P
2013-01-01
In order to maintain a homeostatic environment in human cells, the balance between absorption and separation of water must be retained. Imbalance will have consequences on both the cellular and organ levels. Studies performed on athletes have shown coherence between their hydration status and ability to perform. A dehydration of 2-7% of total body weight resulted in a marked decrease in performance. Measurement and monitoring of hydration status may be used to optimize athlete performance. Therefore, in this current study bioimpedance spectroscopy is used to determine the hydration status of athletes. Trials were made to investigate alternative ways of electrode placement when performing bioimpedance spectroscopy in order to measure relative dehydration. A total of 14 test subjects underwent measurements before, during, and after a cycle test of 3 × 25 min. Electrodes were placed to measure body impedance in three different ways: wrist-ankle (recommended method), wrist-wrist, and transthoracic. Furthermore, the relative loss in weight of the subjects during the trial was registered. The study showed no relation between relative weight loss and the wrist-wrist and transthoracic placement methods, using bioimpedance spectroscopy to measure relative dehydration. The inability of the method to detect such relative changes in hydration may be due to the bioimpedance spectroscopy technology being extremely sensitive to changes in skin temperature, movement artifacts, thoroughness in placing the electrodes, and the physiological impact on the human body when performing exercise. Therefore, further research into the area of bioimpedance spectroscopy is needed before this methodology can be applied in monitoring active athletes. Hence, a simple weight measurement still seems a more useful way of determining a relative change of hydration in an active setting.
NASA Astrophysics Data System (ADS)
Durry, Georges; Pouchet, Ivan; Amarouche, Nadir; Danguy, Théodore; Megie, Gerard
2000-10-01
A dual-beam detector is used to measure atmospheric trace species by differential absorption spectroscopy with commercial near-infrared InGaAs laser diodes. It is implemented on the Spectromètre à Diodes Laser Accordables, a balloonborne tunable diode laser spectrometer devoted to the in situ monitoring of CH4 and H2O. The dual-beam detector is made of simple analog subtractor circuits combined with InGaAs photodiodes. The detection strategy consists in taking the balanced analog difference between the reference and the sample signals detected at the input and the output of an open optical multipass cell, so as to apply the full dynamic range of the measurements (16 digits) to the weak molecular absorption information. The obtained sensitivity approaches the shot-noise limit. With a 56-m optical cell, the detection limit obtained when the spectrum is recorded within 8 ms is 10^-4 (expressed in absorbance units). The design and performance of both a simple subtractor and an upgraded feedback subtractor circuit are discussed with regard to atmospheric in situ CH4 absorption spectra measured in the 1.653-μm region. Mixing ratios are obtained from the absorption spectra by applying a nonlinear least-squares fit to the full molecular line shape in conjunction with in situ P and T measurements.
Pino, Maria Chiara; Mazza, Monica; Mariano, Melania; Peretti, Sara; Dimitriou, Dagmara; Masedu, Francesco; Valenti, Marco; Franco, Fabia
2017-09-01
Theory of mind (ToM) is impaired in individuals with autism spectrum disorders (ASD). The aims of this study were: (i) to examine the developmental trajectories of ToM abilities in two different mentalizing tasks in children with ASD compared to TD children; and (ii) to assess whether a simple ToM test known as the eyes-test could predict performance on the more advanced ToM task, i.e. the comic strip test. Based on a sample of 37 children with ASD and 55 TD children, our results revealed slower development at varying rates in all ToM measures in children with ASD, with delayed onset compared to TD children. These results could stimulate new treatments for social abilities, which would lessen the social deficit in ASD.
Cosmic velocity-gravity relation in redshift space
NASA Astrophysics Data System (ADS)
Colombi, Stéphane; Chodorowski, Michał J.; Teyssier, Romain
2007-02-01
We propose a simple way to estimate the parameter β ≃ Ω^0.6/b from 3D galaxy surveys, where Ω is the non-relativistic matter-density parameter of the Universe and b is the bias between the galaxy distribution and the total matter distribution. Our method consists in measuring the relation between the cosmological velocity and gravity fields, and thus requires peculiar velocity measurements. The relation is measured directly in redshift space, so there is no need to reconstruct the density field in real space. In linear theory, the radial components of the gravity and velocity fields in redshift space are expected to be tightly correlated, with a slope that, in the distant observer approximation, is a simple function of β. We test this relation extensively using controlled numerical experiments based on a cosmological N-body simulation. To perform the measurements, we propose a new and rather simple adaptive interpolation scheme to estimate the velocity and the gravity field on a grid. One of the most striking results is that non-linear effects, including `fingers of God', affect mainly the tails of the joint probability distribution function (PDF) of the velocity and gravity fields: the 1-1.5 σ region around the maximum of the PDF is dominated by the linear theory regime, both in real and redshift space. This is understood explicitly by using the spherical collapse model as a proxy of non-linear dynamics. Applications of the method to real galaxy catalogues are discussed, including a preliminary investigation on homogeneous (volume-limited) `galaxy' samples extracted from the simulation with simple prescriptions based on halo and substructure identification, to quantify the effects of the bias between the galaxy distribution and the total matter distribution, as well as the effects of shot noise.
ERIC Educational Resources Information Center
Endres, Frank L.
Symbolic Interactive Matrix Processing Language (SIMPLE) is a conversational matrix-oriented source language suited to a batch or a time-sharing environment. The two modes of operation of SIMPLE are conversational mode and programming mode. This program uses a TAURUS time-sharing system and cathode ray terminals or teletypes. SIMPLE performs all…
Colditz, Ian G.; Ferguson, Drewe M.; Collins, Teresa; Matthews, Lindsay; Hemsworth, Paul H.
2014-01-01
Simple Summary Benchmarking is a tool widely used in agricultural industries that harnesses the experience of farmers to generate knowledge of practices that lead to better on-farm productivity and performance. We propose, by analogy with production performance, a method for measuring the animal welfare performance of an enterprise and describe a tool for farmers to monitor and improve the animal welfare performance of their business. A general framework is outlined for assessing and monitoring risks to animal welfare based on measures of animals, the environment they are kept in and how they are managed. The tool would enable farmers to continually improve animal welfare. Abstract Schemes for the assessment of farm animal welfare and assurance of welfare standards have proliferated in recent years. An acknowledged shortcoming has been the lack of impact of these schemes on the welfare standards achieved on farm, due in part to sociological factors concerning their implementation. Here we propose the concept of welfare performance based on a broad set of performance attributes of an enterprise and describe a tool based on risk assessment and benchmarking methods for measuring and managing welfare performance. The tool, termed the Unified Field Index, is presented in a general form comprising three modules addressing animal, resource, and management factors. Domains within these modules accommodate the principal conceptual perspectives for welfare assessment: biological functioning; emotional states; and naturalness. Pan-enterprise analysis in any livestock sector could be used to benchmark welfare performance of individual enterprises and also provide statistics of welfare performance for the livestock sector. An advantage of this concept of welfare performance is its use of continuous scales of measurement rather than traditional pass/fail measures. Through the feedback provided via benchmarking, the tool should help farmers better engage in on-going improvement of farm practices that affect animal welfare. PMID:26480317
Zinellu, Angelo; Sotgia, Salvatore; Zinellu, Elisabetta; Chessa, Roberto; Deiana, Luca; Carru, Ciriaco
2006-03-01
Guanidinoacetic acid (GAA) measurement has recently become of great interest for the diagnosis of creatine (Cn) metabolism disorders, and research calls for rapid and inexpensive methods for its detection in plasma and urine in order to assess a large number of patients. We propose a new assay for the measurement of GAA by a simple CZE UV-detection without previous sample derivatization. Plasma samples were filtered by Microcon-10 microconcentrators and directly injected into the capillary, while for urine specimens a simple water dilution before injection was needed. A baseline separation was obtained in less than 8 min using a 60.2 cm x 75 microm uncoated silica capillary, 75 mmol/L Tris-phosphate buffer pH 2.25 at 15 degrees C. The performance of the developed method was assessed by measuring plasma creatinine and Cn in 32 normal subjects and comparing the data obtained by the new method with those found with the previous CE assay. Our new method seems to be an inexpensive, fast and specific tool to assess a large number of patients both in clinical and in research laboratories.
NASA Astrophysics Data System (ADS)
Czerny, J.; Schulz, K. G.; Ludwig, A.; Riebesell, U.
2013-03-01
Mesocosms as large experimental units provide the opportunity to perform elemental mass balance calculations, e.g. to derive net biological turnover rates. However, the system is in most cases not closed at the water surface, and gases exchange with the atmosphere. Previous attempts to budget carbon pools in mesocosms relied on educated guesses concerning the exchange of CO2 with the atmosphere. Here, we present a simple method for precise determination of air-sea gas exchange in mesocosms using N2O as a deliberate tracer. Besides the application for carbon budgeting, transfer velocities can be used to calculate exchange rates of any gas of known concentration, e.g. to calculate aquatic production rates of climate-relevant trace gases. Using an Arctic KOSMOS (Kiel Off Shore Mesocosms for future Ocean Simulation) experiment as an exemplary dataset, it is shown that the presented method improves the accuracy of carbon budget estimates substantially. The methodology of manipulation, measurement, data processing and conversion to CO2 fluxes is explained. A theoretical discussion of the prerequisites for precise gas exchange measurements provides a guideline for the applicability of the method under various experimental conditions.
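A minimal sketch of the tracer calculation, assuming a well-mixed water column of depth h, an exponential relaxation of the N2O excess toward air equilibrium, and a conventional Schmidt-number scaling to CO2 (exponent -1/2, an assumption), with placeholder numbers throughout, is:

```python
# Sketch: the decay of an N2O excess toward equilibrium over an interval gives
# the transfer velocity k, which is then scaled to CO2 and multiplied by the
# CO2 air-sea concentration difference. All numbers are placeholders.
import numpy as np

def transfer_velocity(c0, ct, c_eq, h_m, dt_days):
    """k in m/day from two N2O concentrations bracketing an interval dt_days."""
    return (h_m / dt_days) * np.log((c0 - c_eq) / (ct - c_eq))

def scale_to_gas(k_tracer, sc_tracer, sc_gas, exponent=-0.5):
    """Schmidt-number scaling between gases (exponent assumed here)."""
    return k_tracer * (sc_gas / sc_tracer) ** exponent

k_n2o = transfer_velocity(c0=30.0, ct=22.0, c_eq=10.0, h_m=15.0, dt_days=2.0)
k_co2 = scale_to_gas(k_n2o, sc_tracer=600.0, sc_gas=660.0)
co2_flux = k_co2 * (18.0 - 15.0)   # (water - equilibrium) concentration, placeholder units
print(k_n2o, k_co2, co2_flux)
```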
Does daily nurse staffing match ward workload variability? Three hospitals' experiences.
Gabbay, Uri; Bukchin, Michael
2009-01-01
Nurse shortage and rising healthcare resource burdens mean that appropriate workforce use is imperative. This paper aims to evaluate whether daily nursing staffing meets ward workload needs. Nurse attendance and daily nurses' workload capacity in three hospitals were evaluated. Statistical process control was used to evaluate intra-ward nurse workload capacity and day-to-day variations. Statistical process control is a statistics-based method for process monitoring that uses charts with predefined target measure and control limits. Standardization was performed for inter-ward analysis by converting ward-specific crude measures to ward-specific relative measures by dividing observed/expected. Two charts: acceptable and tolerable daily nurse workload intensity, were defined. Appropriate staffing indicators were defined as those exceeding predefined rates within acceptable and tolerable limits (50 percent and 80 percent respectively). A total of 42 percent of the overall days fell within acceptable control limits and 71 percent within tolerable control limits. Appropriate staffing indicators were met in only 33 percent of wards regarding acceptable nurse workload intensity and in only 45 percent of wards regarding tolerable workloads. The study work did not differentiate crude nurse attendance and it did not take into account patient severity since crude bed occupancy was used. Double statistical process control charts and certain staffing indicators were used, which is open to debate. Wards that met appropriate staffing indicators prove the method's feasibility. Wards that did not meet appropriate staffing indicators prove the importance and the need for process evaluations and monitoring. Methods presented for monitoring daily staffing appropriateness are simple to implement either for intra-ward day-to-day variation by using nurse workload capacity statistical process control charts or for inter-ward evaluation using standardized measure of nurse workload intensity. The real challenge will be to develop planning systems and implement corrective interventions such as dynamic and flexible daily staffing, which will face difficulties and barriers. The paper fulfils the need for workforce utilization evaluation. A simple method using available data for daily staffing appropriateness evaluation, which is easy to implement and operate, is presented. The statistical process control method enables intra-ward evaluation, while standardization by converting crude into relative measures enables inter-ward analysis. The staffing indicator definitions enable performance evaluation. This original study uses statistical process control to develop simple standardization methods and applies straightforward statistical tools. This method is not limited to crude measures, rather it uses weighted workload measures such as nursing acuity or weighted nurse level (i.e. grade/band).
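A minimal sketch of the standardized observed/expected measure and the two staffing indicators described above, with illustrative control limits and placeholder daily values, is:

```python
# Sketch: convert each day's nurse workload capacity to an observed/expected
# ratio, classify days against acceptable and tolerable control limits, and
# apply the 50% / 80% staffing indicators. Limits and data are illustrative.
import numpy as np

def staffing_indicators(observed, expected, acceptable=(0.9, 1.1), tolerable=(0.8, 1.2)):
    ratio = np.asarray(observed, dtype=float) / float(expected)
    within_acc = np.mean((ratio >= acceptable[0]) & (ratio <= acceptable[1]))
    within_tol = np.mean((ratio >= tolerable[0]) & (ratio <= tolerable[1]))
    # Indicator met if >=50% of days are acceptable and >=80% are tolerable.
    return within_acc, within_tol, (within_acc >= 0.5) and (within_tol >= 0.8)

daily_capacity = [95, 102, 88, 110, 97, 105, 78, 99, 101, 93]   # placeholder days
print(staffing_indicators(daily_capacity, expected=100))
```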
Oliveira, Jorge; Gamito, Pedro; Alghazzawi, Daniyal M; Fardoun, Habib M; Rosa, Pedro J; Sousa, Tatiana; Picareli, Luís Felipe; Morais, Diogo; Lopes, Paulo
2017-08-14
This investigation sought to understand whether performance in naturalistic virtual reality tasks for cognitive assessment relates to the cognitive domains that are supposed to be measured. The Shoe Closet Test (SCT) was developed based on a simple visual search task involving attention skills, in which participants have to match each pair of shoes with the colors of the compartments in a virtual shoe closet. The interaction within the virtual environment was made using the Microsoft Kinect. The measures consisted of concurrent paper-and-pencil neurocognitive tests for global cognitive functioning, executive functions, attention, psychomotor ability, and the outcomes of the SCT. The results showed that the SCT correlated with global cognitive performance as measured with the Montreal Cognitive Assessment (MoCA). The SCT explained one third of the total variance of this test and revealed good sensitivity and specificity in discriminating scores below one standard deviation in this screening tool. These findings suggest that performance of such functional tasks involves a broad range of cognitive processes that are associated with global cognitive functioning and that may be difficult to isolate through paper-and-pencil neurocognitive tests.
Generic comparison of protein inference engines.
Claassen, Manfred; Reiter, Lukas; Hengartner, Michael O; Buhmann, Joachim M; Aebersold, Ruedi
2012-04-01
Protein identifications, instead of peptide-spectrum matches, constitute the biologically relevant result of shotgun proteomics studies. How to appropriately infer and report protein identifications has triggered a still ongoing debate. This debate has so far suffered from the lack of appropriate performance measures that allow us to objectively assess protein inference approaches. This study describes an intuitive, generic and yet formal performance measure and demonstrates how it enables experimentalists to select an optimal protein inference strategy for a given collection of fragment ion spectra. We applied the performance measure to systematically explore the benefit of excluding possibly unreliable protein identifications, such as single-hit wonders. Therefore, we defined a family of protein inference engines by extending a simple inference engine by thousands of pruning variants, each excluding a different specified set of possibly unreliable identifications. We benchmarked these protein inference engines on several data sets representing different proteomes and mass spectrometry platforms. Optimally performing inference engines retained all high confidence spectral evidence, without posterior exclusion of any type of protein identifications. Despite the diversity of studied data sets consistently supporting this rule, other data sets might behave differently. In order to ensure maximal reliable proteome coverage for data sets arising in other studies we advocate abstaining from rigid protein inference rules, such as exclusion of single-hit wonders, and instead consider several protein inference approaches and assess these with respect to the presented performance measure in the specific application context.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
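The core residual-monitoring step can be sketched as below; the "model" output and the 3-sigma threshold are placeholders standing in for the piecewise-linear engine model and its nominal residual statistics.

```python
# Sketch: flag samples whose residual (sensed minus model-predicted output)
# exceeds a threshold derived from nominal data. Model and threshold are
# placeholders, not the engine model described above.
import numpy as np

def anomaly_flags(sensed, predicted, nominal_residual_std, n_sigma=3.0):
    residual = np.asarray(sensed) - np.asarray(predicted)
    return np.abs(residual) > n_sigma * nominal_residual_std

# Example with synthetic streams: a step fault appears halfway through.
t = np.arange(200)
predicted = 0.01 * t                     # placeholder model output
sensed = predicted + np.random.default_rng(1).normal(0, 0.02, t.size)
sensed[100:] += 0.15                     # seeded fault
print(np.where(anomaly_flags(sensed, predicted, nominal_residual_std=0.02))[0][:5])
```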
Gary, Rebecca
2012-01-01
Rapid growth in the numbers of older adults with cardiovascular disease (CVD) is raising awareness and concern of the impact that common geriatric syndromes such as frailty may have on clinical outcomes, health-related quality of life, and the rising economic burden associated with healthcare. Increasingly, frailty is recognized to be a highly prevalent and important risk factor that is associated with adverse cardiovascular outcomes. A limitation of previous studies in patients with CVD has been the lack of a consistent definition and measures to evaluate frailty. In this review, building upon the work of Fried and colleagues, a definition of frailty is provided that is applicable for evaluating frailty in older adults with CVD. Simple, well-established performance-based measures widely used in comprehensive geriatric assessment are recommended that can be readily implemented by nurses in most practice settings. The limited studies conducted in older adults with CVD have shown physical performance measures to be highly predictive of clinical outcomes. Implications for practice and areas for future research are described for the growing numbers of elderly cardiac patients who are frail and at risk for disability.
Experimental Evaluation of Adaptive Modulation and Coding in MIMO WiMAX with Limited Feedback
NASA Astrophysics Data System (ADS)
Mehlführer, Christian; Caban, Sebastian; Rupp, Markus
2007-12-01
We evaluate the throughput performance of an OFDM WiMAX (IEEE 802.16-2004, Section 8.3) transmission system with adaptive modulation and coding (AMC) by outdoor measurements. The standard compliant AMC utilizes a 3-bit feedback for SISO and Alamouti coded MIMO transmissions. By applying a 6-bit feedback and spatial multiplexing with individual AMC on the two transmit antennas, the data throughput can be increased significantly for large SNR values. Our measurements show that at small SNR values, a single antenna transmission often outperforms an Alamouti transmission. We found that this effect is caused by the asymmetric behavior of the wireless channel and by poor channel knowledge in the two-transmit-antenna case. Our performance evaluation is based on a measurement campaign employing the Vienna MIMO testbed. The measurement scenarios include typical outdoor-to-indoor NLOS, outdoor-to-outdoor NLOS, as well as outdoor-to-indoor LOS connections. We found that in all these scenarios, the measured throughput is far from its achievable maximum; the loss is mainly caused by a too simple convolutional coding.
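A minimal sketch of a 3-bit AMC lookup, with illustrative SNR thresholds and spectral efficiencies rather than the exact IEEE 802.16-2004 values, is:

```python
# Sketch of 3-bit adaptive modulation and coding: the feedback index selects one
# of eight modulation/coding schemes from the reported SNR, and throughput is
# estimated from the corresponding spectral efficiency. Thresholds and
# efficiencies below are illustrative placeholders.
import bisect

# (threshold_dB, post-coding bits per symbol) for MCS indices 0..7
THRESHOLDS_DB = [-100, 3, 6, 9, 12, 16, 20, 24]
EFFICIENCY = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 4.5, 5.0]

def select_mcs(snr_db):
    """Return the 3-bit MCS index fed back for a measured SNR."""
    return max(bisect.bisect_right(THRESHOLDS_DB, snr_db) - 1, 0)

def throughput(snr_db, symbol_rate=1.0e6):
    return EFFICIENCY[select_mcs(snr_db)] * symbol_rate   # bit/s per stream

for snr in (2, 8, 15, 25):
    print(snr, select_mcs(snr), throughput(snr))
```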
Psychosis-proneness and neural correlates of self-inhibition in theory of mind.
van der Meer, Lisette; Groenewold, Nynke A; Pijnenborg, Marieke; Aleman, André
2013-01-01
Impaired Theory of Mind (ToM) has been repeatedly reported as a feature of psychotic disorders. ToM is crucial in social interactions and for the development of social behavior. It has been suggested that reasoning about the belief of others, requires inhibition of the self-perspective. We investigated the neural correlates of self-inhibition in nineteen low psychosis prone (PP) and eighteen high PP subjects presenting with subclinical features. High PP subjects have a more than tenfold increased risk of developing a schizophrenia-spectrum disorder. Brain activation was measured with functional Magnetic Resonance Imaging during a ToM task differentiating between self-perspective inhibition and belief reasoning. Furthermore, to test underlying inhibitory mechanisms, we included a stop-signal task. We predicted worse behavioral performance for high compared to low PP subjects on both tasks. Moreover, based on previous neuroimaging results, different activation patterns were expected in the inferior frontal gyrus (IFG) in high versus low PP subjects in self-perspective inhibition and simple response inhibition. Results showed increased activation in left IFG during self-perspective inhibition, but not during simple response inhibition, for high PP subjects as compared to low PP subjects. High and low PP subjects showed equal behavioral performance. The results suggest that at a neural level, high PP subjects need more resources for inhibiting the self-perspective, but not for simple motor response inhibition, to equal the performance of low PP subjects. This may reflect a compensatory mechanism, which may no longer be available for patients with schizophrenia-spectrum disorders resulting in ToM impairments.
Psychosis-Proneness and Neural Correlates of Self-Inhibition in Theory of Mind
van der Meer, Lisette; Groenewold, Nynke A.; Pijnenborg, Marieke; Aleman, André
2013-01-01
Impaired Theory of Mind (ToM) has been repeatedly reported as a feature of psychotic disorders. ToM is crucial in social interactions and for the development of social behavior. It has been suggested that reasoning about the belief of others, requires inhibition of the self-perspective. We investigated the neural correlates of self-inhibition in nineteen low psychosis prone (PP) and eighteen high PP subjects presenting with subclinical features. High PP subjects have a more than tenfold increased risk of developing a schizophrenia-spectrum disorder. Brain activation was measured with functional Magnetic Resonance Imaging during a ToM task differentiating between self-perspective inhibition and belief reasoning. Furthermore, to test underlying inhibitory mechanisms, we included a stop-signal task. We predicted worse behavioral performance for high compared to low PP subjects on both tasks. Moreover, based on previous neuroimaging results, different activation patterns were expected in the inferior frontal gyrus (IFG) in high versus low PP subjects in self-perspective inhibition and simple response inhibition. Results showed increased activation in left IFG during self-perspective inhibition, but not during simple response inhibition, for high PP subjects as compared to low PP subjects. High and low PP subjects showed equal behavioral performance. The results suggest that at a neural level, high PP subjects need more resources for inhibiting the self-perspective, but not for simple motor response inhibition, to equal the performance of low PP subjects. This may reflect a compensatory mechanism, which may no longer be available for patients with schizophrenia-spectrum disorders resulting in ToM impairments. PMID:23874445
Direct measurement of clinical mammographic x-ray spectra using a CdTe spectrometer.
Santos, Josilene C; Tomal, Alessandra; Furquim, Tânia A; Fausto, Agnes M F; Nogueira, Maria S; Costa, Paulo R
2017-07-01
To introduce and evaluate a method developed for the direct measurement of mammographic x-ray spectra using a CdTe spectrometer. The assembly of a positioning system and the design of a simple and customized alignment device for this application are described. A positioning system was developed to easily and accurately locate the CdTe detector in the x-ray beam. Additionally, an alignment device to line up the detector with the central axis of the radiation beam was designed. Direct x-ray spectra measurements were performed in two different clinical mammography units and the measured x-ray spectra were compared with computer-generated spectra. In addition, the spectrometer misalignment effect was evaluated by comparing the measured spectra when this device is aligned relative to when it is misaligned. The positioning and alignment of the spectrometer have allowed the measurement of direct mammographic x-ray spectra in agreement with computer-generated spectra. The most accurate x-ray spectral shape, associated with the minimal HVL value, and the highest photon fluence for the measured spectra were found with the spectrometer aligned according to the proposed method. The HVL values derived from simulated and measured x-ray spectra differ by at most 1.3% and 4.5% for the two mammography devices evaluated in this study. The experimental method developed in this work allows simple positioning and alignment of a spectrometer for x-ray spectra measurements given the geometrical constraints and maintenance of the original configurations of mammography machines. © 2017 American Association of Physicists in Medicine.
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-21
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (gamma) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% +/- 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 x 10 cm(2) field at the first density interface from tissue to lung equivalent material. Small fields (2 x 2 cm(2)) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the heterogeneous phantom. For the clinical test cases, the average dose discrepancy was 0.5% +/- 1.1%. Relative dose investigations of the transverse plane for clinical beam arrangements were performed with a 2D gamma-evaluation procedure. For 3% dose difference and 3 mm DTA criteria, the average value for gamma(>1) was 4.7% +/- 3.7%, the average gamma(1%) value was 1.19 +/- 0.16 and the mean 2D gamma-value was 0.44 +/- 0.07 in the heterogeneous phantom. The iPlan MC algorithm leads to accurate dosimetric results under clinical test conditions.
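A 1D gamma evaluation of the kind used above for the simple fields can be sketched as follows; dose differences are normalized globally to the maximum of the reference profile (an assumption), and the profiles themselves are placeholders.

```python
# Sketch of a 1D gamma evaluation (2% / 2 mm): for each reference point, find the
# minimum combined dose-difference / distance-to-agreement metric over the
# evaluated profile. Normalisation choice and profiles are illustrative.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta_mm=2.0):
    """Return one gamma value per reference point (global normalisation)."""
    d_norm = dd * d_ref.max()
    gammas = np.empty_like(d_ref, dtype=float)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist = (x_eval - xr) / dta_mm
        diff = (d_eval - dr) / d_norm
        gammas[i] = np.sqrt(dist**2 + diff**2).min()
    return gammas

x = np.linspace(-50, 50, 201)                      # mm
reference = np.exp(-(x / 30.0) ** 2)               # placeholder profile
evaluated = np.exp(-((x - 0.5) / 30.0) ** 2) * 1.01
g = gamma_1d(x, reference, x, evaluated)
print((g <= 1).mean(), g.mean())                   # pass rate and mean gamma
```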
Abdominal fat thickness measurement using Focused Impedance Method (FIM) - phantom study
NASA Astrophysics Data System (ADS)
Haowlader, Salahuddin; Baig, Tanveer Noor; Siddique-e Rabbani, K.
2010-04-01
Abdominal fat thickness is a risk indicator of heart disease, diabetes, etc., and its measurement is therefore important from the point of view of preventive care. Tetrapolar electrical impedance measurement (TPIM) could offer a simple and low-cost alternative for such measurement compared to conventional techniques using CT scan and MRI, and has been tried by different groups. The Focused Impedance Method (FIM) appears attractive as it can give localised information. An intuitive physical model was developed and experimental work was performed on a phantom designed to simulate the abdominal subcutaneous fat layer in a body. TPIM measurements were performed with varying electrode separations. For small separations of current and potential electrodes, the measured impedance changed little, but started to decrease sharply beyond a certain separation, eventually diminishing gradually to negligible values. The finding could be explained using the intuitive physical model and gives important practical information: TPIM and FIM may be useful for measurement of subcutaneous fat layer thickness only if the electrode separations are within a certain specific range, and will fail to give reliable results beyond this range. Further work, both analytical and experimental, is needed to establish this technique on a sound footing.
Predicting Cost/Performance Trade-Offs for Whitney: A Commodity Computing Cluster
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Nitzberg, Bill; VanderWijngaart, Rob F.; Kutler, Paul (Technical Monitor)
1997-01-01
Recent advances in low-end processor and network technology have made it possible to build a "supercomputer" out of commodity components. We develop simple models of the NAS Parallel Benchmarks version 2 (NPB 2) to explore the cost/performance trade-offs involved in building a balanced parallel computer supporting a scientific workload. We develop closed form expressions detailing the number and size of messages sent by each benchmark. Coupling these with measured single processor performance, network latency, and network bandwidth, our models predict benchmark performance to within 30%. A comparison based on total system cost reveals that current commodity technology (200 MHz Pentium Pros with 100baseT Ethernet) is well balanced for the NPBs up to a total system cost of around $1,000,000.
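The flavour of such a closed-form model can be sketched as below; the message counts, latency, bandwidth and serial compute time are placeholders, not the NPB 2 expressions derived in the paper.

```python
# Sketch of a simple cost model for a commodity cluster: predicted time is the
# serial compute time divided by the processor count, plus a communication term
# built from message counts, latency and bandwidth. All numbers are placeholders.

def predicted_time(t_serial_s, n_procs, n_msgs, total_bytes,
                   latency_s=100e-6, bandwidth_Bps=12.5e6):
    compute = t_serial_s / n_procs
    comm = n_msgs * latency_s + total_bytes / bandwidth_Bps
    return compute + comm

# Example: a hypothetical benchmark on 16 nodes of a 100baseT-class cluster.
print(predicted_time(t_serial_s=1200.0, n_procs=16,
                     n_msgs=40_000, total_bytes=2.0e9))
```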
Effects of cusped field thruster on the performance of drag-free control system
NASA Astrophysics Data System (ADS)
Cui, K.; Liu, H.; Jiang, W. J.; Sun, Q. Q.; Hu, P.; Yu, D. R.
2018-03-01
As space science measurement tasks increase, more stringent requirements are placed on the spacecraft environment. Such tasks (e.g. measuring Earth's steady-state gravity field anomalies) motivate the development of drag-free control. The drag-free control system and the need for real-time compensation of non-conservative forces place higher demands on thruster performance. These requirements for the propulsion system include wide continuous throttling ability, high thrust resolution, rapid response and low noise. The cusped field thruster, with its high working stability, low erosion rate, long lifetime and simple structure, is a promising candidate and is the thruster discussed in this paper. Firstly, the performance of a new cusped field thruster is tested and analyzed. Then a drag-free control scheme based on the cusped field thruster is designed to evaluate the performance of this thruster. Subsequently, the effects of thrust resolution, transient response time and thrust uncertainty on the controller are calculated. Finally, the performance of the closed-loop system is analyzed, and the simulation results verify the feasibility of applying the cusped field thruster to drag-free flight in space science measurement tasks.
Thomas, Kelsey R; Marsiske, Michael
2017-06-01
We investigated how race and verbal prompting interacted with age to predict age trajectories on a performance-based measure of everyday cognition. African American (n = 727) and White (n = 2052) older adults from the ACTIVE clinical trial were given the Observed Tasks of Daily Living (OTDL; a performance-based measure of medication management/finances/telephone use) at baseline and 1-, 2-, 3-, 5-, and 10-year follow-ups. When participants said "I don't know" or did not respond, they received a standardised verbal prompt, which served only as a cue to initiate the first step. At each occasion, unprompted (sum of items correct without prompting) and prompted (sum of correct prompted and unprompted items) scores were derived for each participant. Mixed effects models for change were used to determine the age trajectories of OTDL performance by race. When not prompted, African Americans demonstrated more rapid decline in OTDL performance than Whites, especially after age 80. When prompted, both groups had improved performance and evinced shallower decline, although African Americans continued to demonstrate a slightly more rapid decline. Simple prompting attenuated age-related changes of African Americans and Whites on a measure of everyday cognition. Prompting may be especially helpful for older African Americans.
Simple device measures solar radiation
NASA Technical Reports Server (NTRS)
Humphries, W. R.
1977-01-01
Simple inexpensive thermometer, insulated from surroundings by transparent glass or plastic encasement, measures intensities of solar radiation, or radiation from other sources such as furnaces or ovens. Unit can be further modified to accomplish readings from remote locations.
Diffusing-wave spectroscopy in a standard dynamic light scattering setup
NASA Astrophysics Data System (ADS)
Fahimi, Zahra; Aangenendt, Frank J.; Voudouris, Panayiotis; Mattsson, Johan; Wyss, Hans M.
2017-12-01
Diffusing-wave spectroscopy (DWS) extends dynamic light scattering measurements to samples with strong multiple scattering. DWS treats the transport of photons through turbid samples as a diffusion process, thereby making it possible to extract the dynamics of scatterers from measured correlation functions. The analysis of DWS data requires knowledge of the path length distribution of photons traveling through the sample. While for flat sample cells this path length distribution can be readily calculated and expressed in analytical form, no such expression is available for cylindrical sample cells. DWS measurements have therefore typically relied on dedicated setups that use flat sample cells. Here we show how DWS measurements, in particular DWS-based microrheology measurements, can be performed in standard dynamic light scattering setups that use cylindrical sample cells. To do so, we perform simple random-walk simulations that yield numerical predictions of the path length distribution as a function of both the transport mean free path and the detection angle. This information is used in experiments to extract the mean-square displacement of tracer particles in the material, as well as the corresponding frequency-dependent viscoelastic response. An important advantage of our approach is that by performing measurements at different detection angles, the average path length through the sample can be varied. For measurements performed on a single sample cell, this gives access to a wider range of length and time scales than obtained in a conventional DWS setup. Such angle-dependent measurements also offer an important consistency check, as for all detection angles the DWS analysis should yield the same tracer dynamics, even though the respective path length distributions are very different. We validate our approach by performing measurements both on aqueous suspensions of tracer particles and on solidlike gelatin samples, for which we find our DWS-based microrheology data to be in good agreement with rheological measurements performed on the same samples.
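As a rough illustration of the random-walk idea (not the simulations used in the paper, which treat the cylindrical geometry and boundary conditions more carefully), the sketch below walks photons with a step length equal to the transport mean free path inside a cylinder and collects the path lengths of those that exit near a chosen detection angle. The cell radius, mean free path, angular window and photon count are all assumptions.

import numpy as np

rng = np.random.default_rng(0)

def path_lengths(R=3.0, lstar=0.3, theta_det=90.0, dtheta=5.0, n_photons=5000):
    # Photons enter at (R, 0, 0) on the wall of an infinitely tall cylinder and take
    # isotropic steps of length lstar; a photon is "detected" if it crosses the wall
    # within +/- dtheta degrees of theta_det (angle around the cylinder axis,
    # measured from the entry point).  Returns the path lengths of detected photons.
    detected = []
    for _ in range(n_photons):
        pos = np.array([R - 1e-9, 0.0, 0.0])
        travelled = 0.0
        for _ in range(5000):                        # safety cap on the number of steps
            step = rng.normal(size=3)
            step *= lstar / np.linalg.norm(step)     # isotropic direction, fixed length
            pos = pos + step
            travelled += lstar
            if pos[0] ** 2 + pos[1] ** 2 >= R ** 2:  # photon has left the cylinder
                angle = np.degrees(np.arctan2(pos[1], pos[0])) % 360.0
                if abs(angle - theta_det) <= dtheta:
                    detected.append(travelled)
                break
    return np.array(detected)

s = path_lengths()
if s.size:
    print(f"{s.size} photons detected, mean path length {s.mean():.1f} (same units as R and lstar)")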
Accuracy of simple urine tests for diagnosis of urinary tract infections in low-risk pregnant women.
Feitosa, Danielle Cristina Alves; da Silva, Márcia Guimarães; de Lima Parada, Cristina Maria Garcia
2009-01-01
Anatomic and physiological alterations during pregnancy predispose pregnant women to urinary tract infections (UTI). This study aimed to identify the accuracy of the simple urine test for UTI diagnosis in low-risk pregnant women. A diagnostic test performance study was conducted in Botucatu, SP, involving 230 pregnant women between 2006 and 2008. Results showed 10% UTI prevalence. Sensitivity, specificity and accuracy of the simple urine test were 95.6%, 63.3% and 66.5%, respectively, in relation to UTI diagnoses. The analysis of positive (PPV) and negative (NPV) predictive values showed that, when the simple urine test result was normal, the chance of UTI occurrence was small (NPV 99.2%). Given an altered result for such a test, the probability of UTI was still low (PPV 22.4%). It was concluded that the accuracy of the simple urine test as a diagnostic means for UTI was low, and that performing a urine culture is essential for appropriate diagnosis.
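The reported predictive values follow directly from the quoted prevalence, sensitivity and specificity via Bayes' rule; the short check below reproduces them (values taken from the abstract, rounding aside).

prev, sens, spec = 0.10, 0.956, 0.633   # prevalence, sensitivity, specificity from the abstract

ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
acc = sens * prev + spec * (1 - prev)

print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}, accuracy = {acc:.1%}")
# roughly PPV = 22.4%, NPV = 99.2%, accuracy = 66.5%, matching the reported values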
NASA Astrophysics Data System (ADS)
Jerolmack, D. J.; Durian, D. J.; Ferdowsi, B.; Houssais, M.; Ortiz, C. P.
2016-12-01
As in most of Earth science, there is a tension in the design of sediment transport experiments between simplicity and the ability to isolate variables, and realism so that results may be extrapolated to the field. This leads to tradeoffs in data acquisition, as "simple" experiments may be designed around the goal of maximizing observation of fundamental dynamics, while the dynamics of "realistic" experiments are typically more opaque. Here we present results from a series of "simple" sediment transport experiments involving a laminar shear flow over spherical plastic grains, where refractive-index matched scanning techniques are used to perform tomographic imaging of the sediment bed. This setup allows us to measure particle velocities over seven orders of magnitude - encompassing much of the range of natural flows from creeping soil to suspended load - and these measurements reveal new phenomena relevant for geomorphology and granular physics. We show that the onset of sediment transport is actually a continuous transition from creeping to bed load, and that sub-threshold creep in this laboratory "river" is similar to creep observed on hillslopes and in glassy materials. We also show that the transition from bed load to suspension can be modeled as a continuous transition from a dense to dilute granular flow, uniting sediment transport with granular rheology. We then perform experiments with bi-modal grains, which undergo granular segregation that delivers coarse grains from the subsurface to the surface. This results in armoring that is entirely consistent with observations of more realistic systems, but by a completely different mechanism from surface-transport based theories. Although these phenomena may well be quantitatively or even qualitatively different in natural settings, they cannot be dismissed out of hand because experiments are "too simple". Indeed, most of our findings can be mapped to observations from more complicated experiments and also field studies. By embracing the control and resolution afforded by "simple" experiments, we allow the possibility to both determine the mechanistic underpinnings of transport, and to reveal fundamentally new dynamics that may change our perspective on how landscapes work.
Fiber-integrated refractive index sensor based on a diced Fabry-Perot micro-resonator.
Suntsov, Sergiy; Rüter, Christian E; Schipkowski, Tom; Kip, Detlef
2017-11-20
We report on a fiber-integrated refractive index sensor based on a Fabry-Perot micro-resonator fabricated using simple diamond blade dicing of a single-mode step-index fiber. The performance of the device has been tested for refractive index measurements of sucrose solutions as well as in air. The device shows a sensitivity of 1160 nm/RIU (refractive index unit) at a wavelength of 1.55 μm and a temperature cross-sensitivity of less than 10^-7 RIU/°C. Based on evaluation of the broadband reflection spectra, refractive index steps of 10^-5 in the solutions were accurately measured. Coating the resonator sidewalls with layers of a high-index material, with real-time monitoring of the reflection spectrum, could help to significantly improve the sensor performance.
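As a quick plausibility check (assuming a linear response around 1.55 μm), the quoted sensitivity implies that the smallest reported index step corresponds to a spectral shift of roughly a hundredth of a nanometre:

sensitivity_nm_per_riu = 1160.0      # reported sensitivity
delta_n = 1e-5                       # smallest index step reported as resolved
delta_lambda_nm = sensitivity_nm_per_riu * delta_n
print(f"expected spectral shift: {delta_lambda_nm * 1e3:.1f} pm")   # about 11.6 pm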
Evaluation of Preduster in Cement Industry Based on Computational Fluid Dynamic
NASA Astrophysics Data System (ADS)
Septiani, E. L.; Widiyastuti, W.; Djafaar, A.; Ghozali, I.; Pribadi, H. M.
2017-10-01
Ash-laden hot air from clinker in the cement industry is used to reduce the water content of coal; however, it may still contain a large amount of ash even though it is treated by a preduster. This study investigated the performance of a preduster, acting as a cyclone separator in the cement industry, using the Computational Fluid Dynamics method. In general, the best-performing cyclone is one with relatively high collection efficiency and a low pressure drop. An accurate and simple turbulence model, Reynolds-averaged Navier-Stokes (RANS) with the standard k-ε closure, combined with a Lagrangian model for particle tracking, was used to solve the problem. The quantities extracted from the simulation are the flow pattern in the cyclone, the outlet pressure and the collection efficiency of the preduster. The applied model predicted well when compared with the most accurate empirical model and with the outlet pressure from experimental measurement.
Sakimoto, Yuya; Sakata, Shogo
2014-01-01
It has been shown that solving a simple discrimination task (A+, B−) and a simultaneous feature-negative (FN) task (A+, AB−) relies on a hippocampal-independent strategy. Recently, we showed that the number of sessions required for a rat to completely learn a task differed between the FN and simple discrimination tasks, and there was a difference in hippocampal theta activity between these tasks. These results suggested that solving the FN task relied on a different strategy than the simple discrimination task. In this study, we provided supportive evidence that solving the FN and simple discrimination tasks involved different strategies by examining changes in performance and hippocampal theta activity in the FN task after transfer from the simple discrimination task (A+, B− → A+, AB−). The results of this study showed that performance on the FN task was impaired and there was a difference in hippocampal theta activity between the simple discrimination task and FN task. Thus, we concluded that solving the FN task uses a different strategy than the simple discrimination task. PMID:24917797
Hotaling, James M; Leddy, Laura S; Haider, Mahum A; Mossanen, Matthew; Bailey, Michael R; MacConaghy, Brian; Olson, Francis; Krieger, John N
2014-01-01
Objective: To conduct a proof-of-concept study to determine the potential utility of a novel, adjustable, single-visit, disposable device to facilitate rapid adult circumcision. Design: Prospective pilot trial of a novel surgical device. Setting: Tertiary care Veterans Administration medical center. Patients: 5 adult males. Interventions: Circumcisions performed by junior trainees using an adjustable, single-size surgical-assist device constructed by the University of Washington Applied Physics Laboratory. Main Outcome Measure(s): The attending surgeon and trainees completed standardized forms after each procedure to assess technical problems and ease of use. Follow-up visits were scheduled to evaluate adverse events, post-operative pain, cosmetic outcomes and participant satisfaction at 3, 8, 30 and 90 days post-operatively. Results: The average operative time was 16.4 minutes. All cases were performed with local anesthesia and no case required electrocautery or conversion to standard surgery. At the post-operative day 3 visit all subjects were happy with their results and would recommend the procedure to another patient. One participant had a minor wound separation noted at the 30-day visit that resolved during follow-up. There were no wound infections, hematomas or other adverse events. Conclusions: This proof-of-concept study suggests that the Simple Circumcision Device (SCD) may facilitate delivery of safe adult male circumcision services. PMID:24613534
Rules vs. Statistics: Insights from a Highly Inflected Language
Mirković, Jelena; Seidenberg, Mark S.; Joanisse, Marc F.
2011-01-01
Inflectional morphology has been taken as a paradigmatic example of rule-governed grammatical knowledge (Pinker, 1999). The plausibility of this claim may be related to the fact that it is mainly based on studies of English, which has a very simple inflectional system. We examined the representation of inflectional morphology in Serbian, which encodes number, gender and case for nouns. Linguists standardly characterize this system as a complex set of rules, with disagreements about their exact form. We present analyses of a large corpus of nouns which showed that, as in English, Serbian inflectional morphology is quasiregular: it exhibits numerous partial regularities creating neighborhoods that vary in size and consistency. We then asked whether a simple connectionist network could encode this statistical information in a manner that also supported generalization. A network trained on 3,244 Serbian nouns learned to produce correctly inflected phonological forms from a specification of a word’s lemma, gender, number and case, and generalized to untrained cases. The model’s performance was sensitive to variables that also influence human performance, including surface and lemma frequency. It was also influenced by inflectional neighborhood size, a novel measure of the consistency of meaning to form mapping. A word naming experiment with native Serbian speakers showed that this measure also affects human performance. The results suggest that, as in English, generating correctly inflected forms involves satisfying a small number of simultaneous probabilistic constraints relating form and meaning. Thus, common computational mechanisms may govern the representation and use of inflectional information across typologically diverse languages. PMID:21564267
Kuang, Cuifang; Ali, M Yakut; Hao, Xiang; Wang, Tingting; Liu, Xu
2010-10-01
In order to achieve a higher axial resolution for displacement measurement, a novel method is proposed based on a total internal reflection filter and the confocal microscope principle. A theoretical analysis of the basic measurement principles is presented. The analysis reveals that the proposed confocal detection scheme greatly enhances the resolution obtained from the nonlinearity of the reflectance curve. In addition, a simple prototype system has been developed based on the theoretical analysis, and a series of experiments have been performed under laboratory conditions to verify the system's feasibility, accuracy, and stability. The experimental results demonstrate that the axial resolution in displacement measurements is better than 1 nm over a range of 200 nm, which is threefold better than can be achieved using a plane reflector.
Modeling and measuring limb fine-motor unsteadiness
NASA Technical Reports Server (NTRS)
Magdaleno, R. E.; Jex, H. R.; Allen, R. W.
1973-01-01
Fine-motor unsteadiness is examined in terms of its properties, conceptual and analytical models, and experimental measurements. Based on a data review, the tentative model derived includes neuromuscular system, grip interface, and control system dynamic elements. The properties of this model change with muscle tension and match a wide group of extant data. A simple experiment was performed to investigate the amplitude/force relationships of the tremor mode. As the finger-pull force increased from 5 to 20 Newtons, the tremor mode frequency for a given individual stayed within roughly + or - 1 Hz over a range from 9-12 Hz, while the average magnitude of the rms tremor acceleration increased tenfold. A standardized test for making such measurements is given, and applications in the fields of psychophysiological stress and strain measurements are mentioned.
Constant envelope OFDM scheme for 6PolSK-QPSK
NASA Astrophysics Data System (ADS)
Li, Yupeng; Ding, Ding
2018-03-01
A constant envelope OFDM scheme with a phase modulator (PM-CE-OFDM) for 6PolSK-QPSK modulation is demonstrated. Performance under large fiber launch power is measured to check its advantages in counteracting fiber nonlinear impairments. In our simulation, PM-CE-OFDM, RF-assisted constant envelope OFDM (RF-CE-OFDM) and conventional OFDM (Con-OFDM) are transmitted through 80 km of standard single mode fiber (SSMF) in both single-channel and WDM systems. Simulation results confirm that PM-CE-OFDM has the best performance in resisting fiber nonlinearity. In addition, benefiting from its simple system structure, the complexity and cost of a PM-CE-OFDM system could be reduced effectively.
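For orientation, the phase-modulator idea behind CE-OFDM is that a real-valued OFDM message signal drives the phase of a constant-amplitude carrier, so the transmitted waveform has a unit peak-to-average power ratio. The sketch below shows only that core step for a single polarization, with an assumed subcarrier count and modulation index; it does not model the 6PolSK-QPSK mapping, the fiber, or the receiver.

import numpy as np

rng = np.random.default_rng(1)

# QPSK data on N subcarriers, mapped with Hermitian symmetry so the IFFT is real
N = 64
qpsk = (rng.integers(0, 2, N // 2 - 1) * 2 - 1
        + 1j * (rng.integers(0, 2, N // 2 - 1) * 2 - 1)) / np.sqrt(2)
spectrum = np.zeros(N, dtype=complex)
spectrum[1:N // 2] = qpsk
spectrum[N // 2 + 1:] = np.conj(qpsk[::-1])
m = np.fft.ifft(spectrum).real                 # real-valued OFDM "message" signal

# Phase-modulator CE-OFDM: constant-amplitude carrier whose phase carries m(t)
h = 0.5                                        # modulation index (assumed)
m = m / np.max(np.abs(m))                      # normalise before the phase mapping
s = np.exp(1j * 2 * np.pi * h * m)

papr = np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2)
print("peak-to-average power ratio:", papr)    # exactly 1.0 by construction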
Note: cryogenic microstripline-on-Kapton microwave interconnects.
Harris, A I; Sieth, M; Lau, J M; Church, S E; Samoska, L A; Cleary, K
2012-08-01
Simple broadband microwave interconnects are needed for increasing the size of focal plane heterodyne radiometer arrays. We have measured loss and crosstalk for arrays of microstrip transmission lines in flex circuit technology at 297 and 77 K, finding good performance to at least 20 GHz. The dielectric constant of Kapton substrates changes very little from 297 to 77 K, and the electrical loss drops. The small cross-sectional area of metal in a printed circuit structure yields overall thermal conductivities similar to stainless steel coaxial cable. Operationally, the main performance tradeoffs are between crosstalk and thermal conductivity. We tested a patterned ground plane to reduce heat flux.
Divided attention and driving: a pilot study using virtual reality technology.
Lengenfelder, Jean; Schultheis, Maria T; Al-Shihabi, Talal; Mourant, Ronald; DeLuca, John
2002-02-01
Virtual reality (VR) was used to investigate the influence of divided attention (simple versus complex) on driving performance (speed control). Three individuals with traumatic brain injury (TBI) and three healthy controls (HC), matched for age, education, and gender, were examined. Preliminary results revealed no differences on driving speed between TBI and HC. In contrast, TBI subjects demonstrated a greater number of errors on a secondary task performed while driving. The findings suggest that VR may provide an innovative medium for direct evaluation of basic cognitive functions (ie, divided attention) and its impact on everyday tasks (ie, driving) not previously available through traditional neuropsychological measures.
Cryogenic Test Capability at Marshall Space Flight Center's X-ray Cryogenic Test Facility
NASA Technical Reports Server (NTRS)
Kegley, Jeffrey; Baker, Mark; Carpenter, Jay; Eng, Ron; Haight, Harlan; Hogue, William; McCracken, Jeff; Siler, Richard; Wright, Ernie
2006-01-01
Marshall Space Flight Center's X-ray & Cryogenic Test Facility (XRCF) has been performing sub-liquid nitrogen temperature testing since 1999. Optical wavefront measurement, thermal structural deformation, mechanism functional & calibration, and simple cryo-conditioning tests have been completed. Recent modifications have been made to the facility in support of the James Webb Space Telescope (JWST) program. The chamber's payload envelope and the facility's refrigeration capacity have both been increased. Modifications have also been made to the optical instrumentation area improving access for both the installation and operation of optical instrumentation outside the vacuum chamber. The facility's capabilities, configuration, and performance data will be presented.
A simplified satellite navigation system for an autonomous Mars roving vehicle.
NASA Technical Reports Server (NTRS)
Janosko, R. E.; Shen, C. N.
1972-01-01
The use of a retroreflecting satellite and a laser rangefinder to navigate a Martian roving vehicle is considered in this paper. It is shown that a simple system can be employed to perform this task. An error analysis is performed on the navigation equations and it is shown that the error inherent in the scheme proposed can be minimized by the proper choice of measurement geometry. A nonlinear programming approach is used to minimize the navigation error subject to constraints that are due to geometric and laser requirements. The problem is solved for a particular set of laser parameters and the optimal solution is presented.
Application of gain scheduling to the control of batch bioreactors
NASA Technical Reports Server (NTRS)
Cardello, Ralph; San, Ka-Yiu
1987-01-01
The implementation of control algorithms in batch bioreactors is often complicated by the inherent variations in process dynamics during the course of fermentation. Such a wide operating range may render the performance of fixed gain PID controllers unsatisfactory. In this work, a detailed study on the control of batch fermentation is performed. Furthermore, a simple batch controller design is proposed which incorporates the concept of gain-scheduling, a subclass of adaptive control, with oxygen uptake rate as an auxiliary variable. The control of oxygen tension in the bioreactor is used as a vehicle to convey the proposed idea, analysis and results. Simulation experiments indicate that significant improvement in controller performance can be achieved by the proposed approach even in the presence of measurement noise.
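To make the gain-scheduling idea concrete, the sketch below interpolates PI gains against an auxiliary variable standing in for the oxygen uptake rate (OUR). The schedule values, units and set-point are invented for illustration and are not the controller tuned in the paper.

import numpy as np

# Hypothetical gain schedule: PI gains tabulated against oxygen uptake rate (OUR).
our_grid = np.array([0.0, 10.0, 20.0, 40.0])      # mmol O2 / (L h), assumed
kp_grid  = np.array([2.0,  1.2,  0.8,  0.5])      # proportional gains, assumed
ki_grid  = np.array([0.20, 0.12, 0.08, 0.05])     # integral gains, assumed

def scheduled_pi(setpoint, measurement, our, integral, dt):
    # One step of a gain-scheduled PI law: the gains are linearly interpolated
    # from the schedule using the current OUR estimate.
    kp = np.interp(our, our_grid, kp_grid)
    ki = np.interp(our, our_grid, ki_grid)
    error = setpoint - measurement
    integral += error * dt
    u = kp * error + ki * integral
    return u, integral

# Example: one dissolved-oxygen control step at mid-fermentation conditions.
u, integ = scheduled_pi(setpoint=0.30, measurement=0.22, our=15.0, integral=0.0, dt=1.0)
print(u, integ)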
NASA Astrophysics Data System (ADS)
Dubuque, Shaun; Coffman, Thayne; McCarley, Paul; Bovik, A. C.; Thomas, C. William
2009-05-01
Foveated imaging has been explored for compression and tele-presence, but gaps exist in the study of foveated imaging applied to acquisition and tracking systems. Results are presented from two sets of experiments comparing simple foveated and uniform resolution targeting (acquisition and tracking) algorithms. The first experiments measure acquisition performance when locating Gabor wavelet targets in noise, with fovea placement driven by a mutual information measure. The foveated approach is shown to have lower detection delay than a notional uniform resolution approach when using video that consumes equivalent bandwidth. The second experiments compare the accuracy of target position estimates from foveated and uniform resolution tracking algorithms. A technique is developed to select foveation parameters that minimize error in Kalman filter state estimates. Foveated tracking is shown to consistently outperform uniform resolution tracking on an abstract multiple target task when using video that consumes equivalent bandwidth. Performance is also compared to uniform resolution processing without bandwidth limitations. In both experiments, superior performance is achieved at a given bandwidth by foveated processing because limited resources are allocated intelligently to maximize operational performance. These findings indicate the potential for operational performance improvements over uniform resolution systems in both acquisition and tracking tasks.
Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing.
Hayashi, Masahito; Morimae, Tomoyuki
2015-11-27
We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.
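The stabilizer test at the heart of the protocol rests on the fact that a graph state |G> is the joint +1 eigenstate of the operators K_v = X_v prod_{w in N(v)} Z_w, one per vertex. A small numerical check of that property for a 3-qubit path graph is sketched below; this is a brute-force statevector illustration, not the verification procedure of the protocol itself.

import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def kron_all(ops):
    return reduce(np.kron, ops)

def cz(n, a, b):
    # Controlled-Z between qubits a and b on n qubits (diagonal in the computational basis).
    diag = np.ones(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[a] == 1 and bits[b] == 1:
            diag[idx] = -1
    return np.diag(diag)

# 3-qubit path graph: edges (0,1) and (1,2); |G> = CZ edges applied to |+++>
n, edges = 3, [(0, 1), (1, 2)]
state = kron_all([plus] * n)
for a, b in edges:
    state = cz(n, a, b) @ state

def stabilizer(v):
    # X on vertex v, Z on each of its neighbours, identity elsewhere.
    ops = [I2] * n
    ops[v] = X
    for a, b in edges:
        if v == a:
            ops[b] = Z
        if v == b:
            ops[a] = Z
    return kron_all(ops)

for v in range(n):
    print(f"K_{v} |G> == |G> :", np.allclose(stabilizer(v) @ state, state))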
Gray, C; Cantagallo, A; Della Sala, S; Basaglia, N
1998-05-01
Twenty-four patients, showing a good clinical recovery from coma-inducing injury and coping well with the activities of everyday living, were tested, at least 1 year after trauma, on motor speed and reaction time, and given a neuropsychological examination. While the patients generally performed within the normal range on the neuropsychological tests, their motor speeds and reaction times--both simple (SRT) and complex (CRT)--were significantly slower than those of matched controls. This points to a subclinical bradykinesia. The patients' motor speed scores did not correlate significantly with any of the neuropsychological tests; nor did SRT or CRT. While the difference between simple and complex reaction time was significantly greater in the patient group, the percentage difference was not significantly different between the two groups. Collectively, these results suggest that bradykinesia and bradyphrenia do not necessarily overlap. Finally, there was no significant correlation between motor performance and severity of original injury, whether the latter was measured by number and size of lesions or by duration of post-traumatic amnesia.
A PDMS Device Coupled with Culture Dish for In Vitro Cell Migration Assay.
Lv, Xiaoqing; Geng, Zhaoxin; Fan, Zhiyuan; Wang, Shicai; Pei, WeiHua; Chen, Hongda
2018-04-30
Cell migration and invasion are important factors during tumor progression and metastasis. Wound-healing assay and the Boyden chamber assay are efficient tools to investigate tumor development because both of them could be applied to measure cell migration rate. Therefore, a simple and integrated polydimethylsiloxane (PDMS) device was developed for cell migration assay, which could perform quantitative evaluation of cell migration behaviors, especially for the wound-healing assay. The integrated device was composed of three units, which included cell culture dish, PDMS chamber, and wound generation mold. The PDMS chamber was integrated with cell culture chamber and could perform six experiments under different conditions of stimuli simultaneously. To verify the function of this device, it was utilized to explore the tumor cell migration behaviors under different concentrations of fetal bovine serum (FBS) and transforming growth factor (TGF-β) at different time points. This device has the unique capability to create the "wound" area in parallel during cell migration assay and provides a simple and efficient platform for investigating cell migration assay in biomedical application.
NASA Astrophysics Data System (ADS)
Ganeshraja, Ayyakannu Sundaram; Clara, Antoni Samy; Rajkumar, Kanniah; Wang, Yanjie; Wang, Yu; Wang, Junhu; Anbalagan, Krishnamoorthy
2015-10-01
The present article is focused on recent developments toward the preparation of room temperature ferromagnetic nanocomposites exhibiting improved photocatalytic performance. These nanocomposites were successfully prepared by a simple hydrothermal method and their molecular formulas were confirmed as Ti0.90Sn0.10O2 (S1), 0.2CuO-Ti0.73Sn0.06Cu0.21O2-δ (S2), and Ti0.82Sn0.09Fe0.09O2-δ (S3). ICP, XRD, DRS, FTIR, Raman, XAFS, XPS, EPR, SEM-EDX, HRSEM, HRTEM, photoluminescence and vibrating sample magnetometric measurements were employed to characterize the phase structures, morphologies, and optical and magnetic properties of the photocatalysts. The local structures of Sn4+ and Fe3+ were confirmed by 119Sn and 57Fe Mössbauer analysis. The photocatalytic activities of the samples were evaluated by the degradation of methyl orange in water under visible light irradiation. Among the samples, tin doped TiO2 (S1) showed the best photocatalytic performance and stability.
Kumar, Puspendra; Jha, Shivesh; Naved, Tanveer
2013-01-01
A validated, modified lycopodium spore method has been developed for the simple and rapid quantification of herbal powdered drugs. The lycopodium spore method was performed on the ingredients of Shatavaryadi churna, an ayurvedic formulation used as an immunomodulator, galactagogue, aphrodisiac and rejuvenator. Estimation of the diagnostic characters of each ingredient of Shatavaryadi churna was carried out individually. Microscopic determination, counting of the identifying number, and measurement of the area, length and breadth of identifying characters were performed using a Leica DMLS-2 microscope. The method was validated for intraday precision, linearity, specificity, repeatability, accuracy and system suitability. The method is simple, precise, sensitive and accurate, and can be used for routine standardisation of raw materials of herbal drugs. This method gives the ratio of individual ingredients in the powdered drug, so that any adulteration of the genuine drug with its adulterant can be detected. The method shows very good linearity, with values between 0.988 and 0.999 for the number and area of identifying characters. The percentage purity of a sample drug can be determined by using the linear equation of the standard genuine drug.
Reactive underwater object inspection based on artificial electric sense.
Lebastard, Vincent; Boyer, Frédéric; Lanneau, Sylvain
2016-07-26
Weakly electric fish can perform complex cognitive tasks based on extracting information from blurry electric images projected from their immediate environment onto their electro-sensitive skin. In particular they can be trained to recognize the intrinsic properties of objects such as their shape, size and electric nature. They do this by means of novel perceptual strategies that exploit the relations between the physics of a self-generated electric field, their body morphology and the ability to perform specific movement termed probing motor acts (PMAs). In this article we artificially reproduce and combine these PMAs to build an autonomous control strategy that allows an artificial electric sensor to find electrically contrasted objects, and to orbit around them based on a minimum set of measurements and simple reactive feedback control laws of the probe's motion. The approach does not require any simulation models and could be implemented on an autonomous underwater vehicle (AUV) equipped with artificial electric sense. The AUV has only to satisfy certain simple geometric properties, such as bi-laterally (left/right) symmetrical electrodes and possess a reasonably high aspect (length/width) ratio.
Tang, G Y; Wu, H J; Wu, L; Li, Z J; Yao, Y G
2001-05-01
The catechins, particularly in green tea, have been found to possess anti-mutagenic and anti-tumorigenic properties. As each catechin possesses distinct properties, a simple and rapid method that could be used for the analysis of individual catechins in a complex mixture is necessary. A relatively simple and rapid method for the simultaneous separation of five catechins and caffeine in tea polyphenols by isocratic-elution high performance liquid chromatography has been developed. The analysis can be finished within 30 min. The analytes were measured using a Resolve C18 column (at 43 degrees C) and a UV detector (at 280 nm), with water-85% phosphoric acid aqueous solution-acetonitrile-dimethylformamide (DMF) (859:1:120:20, V/V) as the mobile phase. There was a good linear relationship between the content of each component and its peak area for the catechins and caffeine, with correlation coefficients of 0.9992-0.9999. The average recoveries (n = 5) were 83.33%-104.42%, and the relative standard deviations (n = 6) were 0.74%-1.43%. The effect of the concentration of DMF in the mobile phase was also studied.
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods to complete the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria including accuracy, robustness, precision, and efficiency for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas multilayer perceptron type neural network and multiple imputation strategy adopted by Monte Carlo Markov Chain based on expectation-maximization (EM-MCMC) are computationally intensive ones. In addition, we propose a modification on the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis which takes spatio-temporal dependencies into account for evaluating imputation performances. Depending on the detailed graphical and quantitative analysis, it can be said that although computational methods, particularly EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation particularly with computational methods since it gives more precise results in meteorological time series.
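Of the simple methods listed, the normal ratio (NR) estimate is easy to state: each neighbouring station's value is rescaled by the ratio of long-term means before averaging. A minimal sketch with invented station values follows; the correlation-weighted variant would simply replace the equal weights with correlation-based ones.

import numpy as np

def normal_ratio(neighbor_values, neighbor_means, target_mean):
    # Normal-ratio estimate of a missing observation at a target station: each
    # neighbour's value is rescaled by the ratio of long-term means, then averaged.
    neighbor_values = np.asarray(neighbor_values, dtype=float)
    neighbor_means = np.asarray(neighbor_means, dtype=float)
    return np.mean(target_mean / neighbor_means * neighbor_values)

# Hypothetical monthly precipitation (mm): three neighbours, one missing target value
est = normal_ratio(neighbor_values=[42.0, 55.0, 38.0],
                   neighbor_means=[60.0, 75.0, 50.0],
                   target_mean=65.0)
print(f"NR-imputed value: {est:.1f} mm")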
MacAulay, Rebecca K; Wagner, Mark T; Szeles, Dana; Milano, Nicholas J
2017-07-01
Longitudinal research indicates that cognitive load dual-task gait assessment is predictive of cognitive decline and thus might provide a sensitive measure to screen for mild cognitive impairment (MCI). However, research among older adults being clinically evaluated for cognitive concerns, a defining feature of MCI, is lacking. The present study investigated the effect of performing a cognitive task on normal walking speed in patients presenting to a memory clinic with cognitive complaints. Sixty-one patients with a mean age of 68 years underwent comprehensive neuropsychological testing, clinical interview, and gait speed (simple- and dual-task conditions) assessments. Thirty-four of the 61 patients met criteria for MCI. Repeated measure analyses of covariance revealed that greater age and MCI both significantly associated with slower gait speed, ps<.05. Follow-up analysis indicated that the MCI group had significantly slower dual-task gait speed but did not differ in simple-gait speed. Multivariate linear regression across groups found that executive attention performance accounted for 27.4% of the variance in dual-task gait speed beyond relevant demographic and health risk factors. The present study increases the external validity of dual-task gait assessment of MCI. Differences in dual-task gait speed appears to be largely attributable to executive attention processes. These findings have clinical implications as they demonstrate expected patterns of gait-brain behavior relationships in response to a cognitive dual task within a clinically representative population. Cognitive load dual-task gait assessment may provide a cost efficient and sensitive measure to detect older adults at high risk of a dementia disorder. (JINS, 2017, 23, 493-501).
Medicaid nursing home pay for performance: where do we stand?
Arling, Greg; Job, Carol; Cooke, Valerie
2009-10-01
Nursing home pay-for-performance (P4P) programs are intended to maximize the value obtained from public and private expenditures by measuring and rewarding better nursing home performance. We describe key features of six Medicaid nursing home P4P systems and make recommendations for further development of nursing home P4P. We surveyed the six states with operational P4P systems in 2007. The range of performance measures employed by the states is quite broad: staffing level and satisfaction, findings from the regulatory system, clinical quality indicators, resident quality of life or satisfaction with care, family satisfaction, access to care for special populations, and efficiency. The main data sources for the measures are the Minimum Data Set (MDS), nursing home inspections, special surveys of nursing home residents, consumers or employees, and facility cost reports or other administrative systems. The most common financial incentive for better performance is a percentage bonus or an add-on to a facility's per diem rate. The bonus is generally proportional to a facility performance score, which consists of simple or weighted sums of scores on individual measures. States undertaking nursing home P4P programs should involve key stakeholders at all stages of P4P system design and implementation. Performance measures should be comprehensive, valid and reliable, risk adjusted where appropriate, and communicated clearly to providers and consumers. The P4P system should encourage provider investment in better care yet recognize state fiscal restraints. Consumer report cards, quality improvement initiatives, and the regulatory process should complement and reinforce P4P. Finally, the P4P system should be transparent and continuously evaluated.
Kaplan, Metin; Erol, Fatih Serhat; Bozgeyik, Zülküf; Koparan, Mehmet
2007-07-01
In the present study, the clinical effectiveness of a surgical procedure in which no draining tubes are installed following simple burr hole drainage and saline irrigation is investigated. Ten patients who had undergone operative intervention for unilateral chronic subdural hemorrhage, with a clinical grade of 2 and a hemorrhage thickness of 2 cm, were included in the study. The cerebral blood flow of the middle cerebral artery (MCA) was evaluated bilaterally with Doppler before and after the surgery. All cases underwent the operation using the simple burr hole drainage technique without a drain and with subsequent saline irrigation. Statistical analysis was performed with the Wilcoxon signed rank test (p<0.05). There was a pronounced decrease in the preoperative MCA blood flow in the hemisphere in which the hemorrhage had occurred (p=0.008). An increased pulsatility index (PI) value on the side of the hemorrhage drew our attention (p=0.005). Postoperative MCA blood flow measurements showed a statistically significant improvement (p=0.005). Furthermore, the PI value normalized (p<0.05). The paresis and the level of consciousness improved in all cases. The simple burr hole drainage technique is sufficient for the improvement of cerebral blood flow and clinical recovery in patients with chronic subdural hemorrhage.
A simple performance calculation method for LH2/LOX engines with different power cycles
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1973-01-01
A simple method for the calculation of the specific impulse of an engine with a gas generator cycle is presented. The solution is obtained by a power balance between turbine and pump. Approximate equations for the performance of the combustion products of LH2/LOX are derived. Performance results are compared with solutions of different engine types.
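The kind of power balance described above can be sketched in a few lines: the turbine-drive (gas generator) flow is sized so that turbine power equals pump power, and the delivered specific impulse is then a flow-weighted average of the main-chamber and turbine-exhaust contributions. Every number below is an assumed placeholder, not a value from the report, and a single effective pump is used instead of separate LH2 and LOX pumps.

# Minimal power-balance sketch for a gas-generator cycle (illustrative numbers only).

# Assumed pump requirements (combined LH2+LOX, treated as one effective pump here)
mdot_main = 200.0        # kg/s propellant through the main chamber
delta_p   = 12.0e6       # Pa pressure rise
rho_eff   = 360.0        # kg/m^3 effective propellant density (assumption)
eta_pump  = 0.70

# Assumed turbine drive-gas properties
cp_gas   = 3800.0        # J/(kg K) for fuel-rich H2/O2 combustion gas (assumption)
T_in     = 900.0         # K turbine inlet temperature
pr       = 15.0          # turbine pressure ratio
gamma    = 1.35
eta_turb = 0.60

pump_power = mdot_main * delta_p / (rho_eff * eta_pump)                     # W
work_per_kg = eta_turb * cp_gas * T_in * (1.0 - pr ** (-(gamma - 1.0) / gamma))
mdot_gg = pump_power / work_per_kg                                          # gas-generator flow needed

# Delivered Isp: mass-flow-weighted average of main-chamber and turbine-exhaust Isp
isp_main, isp_gg = 440.0, 180.0                                             # s (assumptions)
isp_delivered = (mdot_main * isp_main + mdot_gg * isp_gg) / (mdot_main + mdot_gg)
print(f"gas-generator flow: {mdot_gg:.1f} kg/s, delivered Isp: {isp_delivered:.1f} s")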
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, D. Mark
Here, three polymers are routinely used as binders for plastic bonded explosives by Lawrence Livermore National Laboratory: FK-800, Viton A 100, and Oxy 461. Attenuated total reflectance Fourier transform infrared measurements were performed on 10 different lots of FK-800, 5 different lots of Oxy 461, and 3 different lots of Viton A-100, one sample of Viton VTR 5883 and 2 Fluorel polymers of hexafluoropropene and vinylidene fluoride. The characteristic IR bands were measured. If possible, their vibrational modes were assigned based on literature data. Simple Mopac calculations were used to validate these vibrational mode assignments. Somewhat more sophisticated calculations were run using Gaussian on the same structures.
Materials Chemistry and Performance of Silicone-Based Replicating Compounds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brumbach, Michael T.; Mirabal, Alex James; Kalan, Michael
Replicating compounds are used to cast reproductions of surface features on a variety of materials. Replicas allow for quantitative measurements and recordkeeping on parts that may otherwise be difficult to measure or maintain. In this study, the chemistry and replicating capability of several replicating compounds was investigated. Additionally, the residue remaining on material surfaces upon removal of replicas was quantified. Cleaning practices were tested for several different replicating compounds. For all replicating compounds investigated, a thin silicone residue was left by the replica. For some compounds, additional inorganic species could be identified in the residue. Simple solvent cleaning could remove some residue.
Multi-hole pressure probes to air data system for subsonic small-scale air vehicles
NASA Astrophysics Data System (ADS)
Shevchenko, A. M.; Berezin, D. R.; Puzirev, L. N.; Tarasov, A. Z.; Kharitonov, A. M.; Shmakov, A. S.
2016-10-01
A brief review is given of research performed to develop multi-hole probes for measuring the aerodynamic angles, dynamic head, and static pressure of a flying vehicle. The basis of these works is the application of the well-known classical multi-hole pressure probe technique for measuring a 3D flow to the air data system. Two multi-hole pressure probes with spherical and hemispherical heads have been developed for the air data system of subsonic small-scale vehicles. A simple analytical probe model with separation of variables is proposed. The probes were calibrated in a wind tunnel, and one of them was flight tested.
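A common way to reduce multi-hole probe data, in the spirit of the classical technique mentioned above, is to form non-dimensional pressure coefficients from the hole pressures and map them to flow angles through a wind-tunnel calibration. The sketch below uses a five-hole layout and a toy linear calibration; the pressures and calibration slopes are invented and are not from the probes developed in this work.

def five_hole_coefficients(p_center, p_up, p_down, p_left, p_right):
    # Classic non-dimensional coefficients for a five-hole probe.  The denominator
    # uses the difference between the centre pressure and the mean of the four
    # peripheral holes; the calibration constants below are made up.
    p_mean = 0.25 * (p_up + p_down + p_left + p_right)
    denom = p_center - p_mean
    c_alpha = (p_down - p_up) / denom      # pitch coefficient
    c_beta = (p_right - p_left) / denom    # yaw coefficient
    return c_alpha, c_beta, denom

# Toy linear calibration (degrees per unit coefficient), assumed from a wind tunnel
K_ALPHA, K_BETA = 12.0, 12.0

c_a, c_b, q_like = five_hole_coefficients(101900.0, 101520.0, 101680.0,
                                           101560.0, 101640.0)
alpha = K_ALPHA * c_a                      # angle-of-attack estimate, deg
beta = K_BETA * c_b                        # sideslip estimate, deg
print(f"alpha ~ {alpha:.1f} deg, beta ~ {beta:.1f} deg")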
Optical technique to study the impact of heavy rain on aircraft performance
NASA Technical Reports Server (NTRS)
Hess, C. F.; Li, F.
1985-01-01
A laser based technique was investigated and shown to have the potential to obtain measurements of the size and velocity of water droplets used in a wind tunnel to simulate rain. A theoretical model was developed which included some simple effects due to droplet nonsphericity. Parametric studies included the variation of collection distance (up to 4 m), angle of collection, effect of beam interference by the spray, and droplet shape. Accurate measurements were obtained under extremely high liquid water content and spray interference. The technique finds applications in the characterization of two phase flows where the size and velocity of particles are needed.
Haranas, Ioannis; Gkigkitzis, Ioannis; Kotsireas, Ilias; Austerlitz, Carlos
2017-01-01
Understanding how the brain encodes information and performs computation requires statistical and functional analysis. Given the complexity of the human brain, simple methods that facilitate the interpretation of statistical correlations among different brain regions can be very useful. In this report we introduce a numerical correlation measure that may serve the interpretation of correlational neuronal data, and may assist in the evaluation of different brain states. The description of the dynamical brain system through a global numerical measure may indicate the presence of an action principle, which may facilitate the application of physics principles to the study of the human brain and cognition.
Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No
2015-11-01
One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
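For context, sparse representation based classification codes each test trial over a dictionary whose columns are training trials, then assigns the class whose atoms reconstruct the trial with the smallest residual; the adaptive variants discussed above grow or modify that dictionary as new trials arrive. The sketch below is a generic, self-contained illustration of this idea (greedy OMP coding, class-wise residuals, and a naive dictionary append), not the authors' algorithm or their incoherence-based dictionary modification.

import numpy as np

def omp(D, x, k):
    # Greedy orthogonal matching pursuit over the columns of D (assumed l2-normalised).
    residual, support = x.astype(float).copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

def src_classify(D, labels, x, k=5):
    # Code x over the whole dictionary, then compare class-wise reconstruction residuals.
    a = omp(D, x, k)
    residuals = {c: np.linalg.norm(x - D[:, labels == c] @ a[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get), residuals

def append_to_dictionary(D, labels, x, pred):
    # Naive unsupervised update: add the (normalised) trial as a new atom of the
    # predicted class.  A real system would gate this on a confidence measure.
    x = x / np.linalg.norm(x)
    return np.column_stack([D, x]), np.append(labels, pred)

# Synthetic demo: 40 atoms in R^64, two classes, test trial close to a class-0 atom.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 40))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1], 20)
x = D[:, 3] + 0.05 * rng.normal(size=64)
pred, res = src_classify(D, labels, x)
print("predicted class:", pred)
D, labels = append_to_dictionary(D, labels, x, pred)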
A review on simple assembly line balancing type-e problem
NASA Astrophysics Data System (ADS)
Jusop, M.; Rashid, M. F. F. Ab
2015-12-01
Simple assembly line balancing (SALB) is an attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the simple assembly line balancing problem of Type E (SALB-E), since it is a general and complex problem. The SALB-E problem is a SALB variant which considers the number of workstations and the cycle time simultaneously for the purpose of maximising line efficiency. This paper reviews previous work that has been done to optimise the SALB-E problem. Besides that, this paper also reviews the Genetic Algorithm approach that has been used to optimise SALB-E. From the review, it was found that none of the existing works considers resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.
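The line-efficiency objective that characterises SALB-E is simple to state: total task time divided by the capacity offered by the chosen number of workstations m and cycle time c, i.e. E = sum(t_i) / (m * c), maximised over both m and c subject to precedence and cycle-time feasibility. A toy numerical check with invented task times:

def line_efficiency(task_times, n_stations, cycle_time):
    # Line efficiency as typically used for SALB-E: total work content divided by
    # the capacity m * c offered by the chosen number of stations and cycle time.
    return sum(task_times) / (n_stations * cycle_time)

# Hypothetical 8-task example: total work content of 36 time units.
tasks = [6, 4, 5, 3, 7, 4, 2, 5]
print(line_efficiency(tasks, n_stations=4, cycle_time=10))   # -> 0.9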
Strategic predictors of performance in a divided attention task.
Rill, Róbert Adrian; Faragó, Kinga Bettina; Lőrincz, András
2018-01-01
In this study we investigate the strategies of subjects in a complex divided attention task. We conducted a series of experiments with ten participants and evaluated their performance. After an extensive analysis, we identified four strategic measures that justify the achievement of the participants, by highlighting the individual differences and predicting performance in a regression analysis using generalized estimating equations. Selecting the more urgent task and user action between multiple simultaneous possibilities form two of the strategic decisions, respectively. The third one refers to choosing a response within the same task when the opportunity is present. The fourth and most important measure of strategy involves thinking ahead and executing an action before a situation would become critical. This latter one has the effect of reducing later cognitive load or timing constraints and it is shown to explain almost as much variance in performance as the other three, more straightforward predictors together. In addition to determining these strategic predictors, we also show how manipulating task difficulty induces a shift in strategy, thus impairing human performance in the rehearsed task. The results of this study indicate that considerable differences in the divided attention ability of normal subjects can be identified early and with simple measurements. The importance of describing and analyzing strategies is also emphasized, which can substantially influence performance in complex tasks and may serve training needs.
Young, John Q; van Dijk, Savannah M; O'Sullivan, Patricia S; Custers, Eugene J; Irby, David M; Ten Cate, Olle
2016-09-01
The handover represents a high-risk event in which errors are common and lead to patient harm. A better understanding of the cognitive mechanisms of handover errors is essential to improving handover education and practice. This paper reports on an experiment conducted to study the effects of learner knowledge, case complexity (i.e. cases with or without a clear diagnosis) and their interaction on handover accuracy and cognitive load. Participants were 52 Dutch medical students in Years 2 and 6. The experiment employed a repeated-measures design with two explanatory variables: case complexity (simple or complex) as the within-subject variable, and learner knowledge (as indicated by illness script maturity) as the between-subject covariate. The dependent variables were handover accuracy and cognitive load. Each participant performed a total of four simulated handovers involving two simple cases and two complex cases. Higher illness script maturity predicted increased handover accuracy (p < 0.001) and lower cognitive load (p = 0.007). Case complexity did not independently affect either outcome. For handover accuracy, there was no interaction between case complexity and illness script maturity. For cognitive load, there was an interaction effect between illness script maturity and case complexity, indicating that more mature illness scripts reduced cognitive load less in complex cases than in simple cases. Students with more mature illness scripts performed more accurate handovers and experienced lower cognitive load. For cognitive load, these effects were more pronounced in simple than complex cases. If replicated, these findings suggest that handover curricula and protocols should provide support that varies according to the knowledge of the trainee. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Simple neck pain questions used in surveys, evaluated in relation to health outcomes: a cohort study
2012-01-01
Background: The high prevalence of pain reported in many epidemiological studies, and the degree to which this prevalence reflects severe pain, is under discussion in the literature. The aim of the present study was to evaluate the use of the simple neck pain questions commonly included in large epidemiological survey studies with respect to aspects of health. We investigated if and how an increase in the number of days with pain is associated with reductions in health outcomes. Methods: A cohort of university students (baseline age 19–25 years) was recruited in 2002 and followed annually for 4 years. The baseline response rate was 69%, which resulted in 1200 respondents (627 women, 573 men). Participants were asked about present and past pain and perceptions of their general health, sleep disturbance, stress and energy levels, and general performance. The data were analyzed using a mixed model for repeated measurements and a random intercept logistic model. Results: When reporting present pain, participants also reported a lower prevalence of very good health, higher stress and sleep disturbance scores and a lower energy score. Among those with current neck pain, additional questions characterizing the pain, such as duration (categorized), additional pain sites and decreased general performance, were associated with a lower probability of very good health and greater sleep disturbance. Knowing about the presence or absence of pain explains more of the variation in health between individuals than within individuals. Conclusion: This study of young university students has demonstrated that simple neck pain survey questions capture features of pain that affect aspects of health such as perceived general health, sleep disturbance, and mood in terms of stress and energy. Simple pain questions are more useful for group descriptions than for describing or following pain in an individual. PMID:23102060