Sample records for aspects estimated performance

  1. Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    2004-01-01

    The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…

  2. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using orthogonal frequency division multiplexing (OFDM). To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER).

  3. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using orthogonal frequency division multiplexing (OFDM). To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter that controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER). PMID:25089286
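
    The following Python sketch illustrates the general idea behind a sparse NLMS adaptive filter with a variable step size. The error-driven step-size rule and the zero-attracting (l1) penalty used here are generic, hypothetical choices for illustration only, not the specific VSS-NLMS algorithms proposed in the paper.

    ```python
    import numpy as np

    def sparse_vss_nlms(x, d, num_taps, rho=1e-4, eps=1e-6, mu_max=1.0):
        """Illustrative sparse NLMS filter with a simple variable step size.

        x: input samples, d: desired samples, num_taps: filter length.
        rho, mu_max, and the smoothing constant below are hypothetical choices.
        """
        w = np.zeros(num_taps)                # channel/tap estimate
        p = 0.0                               # smoothed error power driving the step size
        for n in range(num_taps, len(x)):
            u = x[n - num_taps:n][::-1]       # regressor (most recent sample first)
            e = d[n] - w @ u                  # a priori estimation error
            p = 0.97 * p + 0.03 * e * e       # low-pass filtered squared error
            mu = mu_max * p / (p + 1.0)       # variable step size in (0, mu_max)
            w += mu * e * u / (u @ u + eps)   # normalized LMS update
            w -= rho * np.sign(w)             # zero-attracting penalty promoting sparsity
        return w
    ```

    A larger recent error drives a larger step for faster tracking, while the sign term shrinks small taps toward zero; this is the intuition behind combining variable step sizes with sparse penalties.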

  4. Methods of albumin estimation in clinical biochemistry: Past, present, and future.

    PubMed

    Kumar, Deepak; Banerjee, Dibyajyoti

    2017-06-01

    Estimation of serum and urinary albumin is routinely performed in clinical biochemistry laboratories. In the past, precipitation-based methods were popular for estimating human serum albumin (HSA); currently, dye-binding and immunochemical methods are widely practiced. Each of these methods has its limitations, and research to overcome them is ongoing. The current methodological trends guiding the field have not been reviewed, so a review of several aspects of albumin estimation is timely. The present review focuses on modern research trends from a conceptual point of view and gives an overview of recent developments to offer readers a comprehensive understanding of the subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Professional Rescuers' experiences of motivation for cardiopulmonary resuscitation: A qualitative study.

    PubMed

    Assarroudi, Abdolghader; Heshmati Nabavi, Fatemeh; Ebadi, Abbas; Esmaily, Habibollah

    2017-06-01

    Rescuers' psychological competence, particularly their motivation, can improve cardiopulmonary resuscitation outcomes. Data were collected using semistructured interviews with 24 cardiopulmonary resuscitation team members and analyzed through deductive content analysis based on Vroom's expectancy theory. Nine generic categories were developed: (i) estimation of the chance of survival; (ii) estimation of self-efficacy; (iii) looking for a sign of effectiveness; (iv) supportive organizational structure; (v) revival; (vi) acquisition of external incentives; (vii) individual drives; (viii) commitment to personal values; and (ix) avoiding undesirable social outcomes. When professional rescuers were called to perform cardiopulmonary resuscitation, they subjectively evaluated the patient's chance of survival, the likelihood of achieving the desired outcome, and their ability to perform cardiopulmonary resuscitation interventions. If their evaluations were positive, and the consequences of cardiopulmonary resuscitation were considered favorable, they were strongly motivated to perform it. Beyond the scientific aspects, the motivation to perform cardiopulmonary resuscitation was influenced by intuitive, emotional, and spiritual aspects. © 2017 John Wiley & Sons Australia, Ltd.

  6. Does performance management affect nurses' well-being?

    PubMed

    Decramer, Adelien; Audenaert, Mieke; Van Waeyenberg, Thomas; Claeys, Tine; Claes, Claudia; Vandevelde, Stijn; van Loon, Jos; Crucke, Saskia

    2015-04-01

    This article focuses on employee performance-management practices in the healthcare sector. We specifically aim to contribute to a better understanding of the impact of employee performance-management practices on the affective well-being of nurses in hospitals. Theory suggests that the features of employee performance management (planning and evaluation of individual performance) predict affective well-being (in this study: job satisfaction and affective commitment). Data on performance-management planning and evaluation and on affective well-being were drawn from a survey of nurses at a Flemish hospital. Separate estimations were performed for different aspects of affective well-being. Performance planning has a negative effect on nurses' job satisfaction. Both vertical alignment and satisfaction with the employee performance-management system increase the affective well-being of nurses; however, the impact of vertical alignment differs across aspects of affective well-being (i.e. job satisfaction and affective commitment). Performance-management planning and evaluation of nurses are associated with attitudinal outcomes. The results indicate that employee performance-management features have different impacts on different aspects of well-being. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. The specificity of the effects of stimulant medication on classroom learning-related measures of cognitive processing for attention deficit disorder children.

    PubMed

    Balthazor, M J; Wagner, R K; Pelham, W E

    1991-02-01

    There appear to be beneficial effects of stimulant medication on daily classroom measures of cognitive functioning for Attention Deficit Disorder (ADD) children, but the specificity and origin of such effects are unclear. Consistent with previous results, 0.3 mg/kg methylphenidate improved ADD children's performance on a classroom reading comprehension measure. Using the Posner letter-matching task and four additional measures of phonological processing, we attempted to isolate the effects of methylphenidate to parameter estimates of (a) selective attention, (b) the basic cognitive process of retrieving name codes from permanent memory, and (c) a constant term that represented nonspecific aspects of information processing. Responses to the letter-matching stimuli were faster and more accurate with medication than with placebo. The improvement in performance was isolated to the parameter estimate that reflected nonspecific aspects of information processing. A lack of medication effect on the other measures of phonological processing supported the Posner task findings in indicating that methylphenidate appears to exert its beneficial effects on academic processing through general rather than specific aspects of information processing.

  8. Radar cross section models for limited aspect angle windows

    NASA Astrophysics Data System (ADS)

    Robinson, Mark C.

    1992-12-01

    This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a Chi-Square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests when the pdf parameters obtained by the Maximum Likelihood Estimator (MLE) differ from those obtained by the Method of Moments (MoM) estimator.
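
    As a rough illustration of the kind of comparison described above, the sketch below fits a candidate pdf to simulated RCS-like samples by maximum likelihood and by the method of moments, then scores both fits with a Kolmogorov-Smirnov test. The gamma distribution and all numbers are placeholders, not the distributions or measured data used in the thesis.

    ```python
    import numpy as np
    from scipy import stats

    # Placeholder "static RCS" samples; the thesis uses measured aircraft data.
    rng = np.random.default_rng(0)
    rcs = rng.gamma(shape=2.0, scale=1.5, size=500)

    # Maximum likelihood fit of a gamma pdf (location fixed at zero).
    a_mle, _, scale_mle = stats.gamma.fit(rcs, floc=0)

    # Method-of-moments fit: shape = mean^2 / var, scale = var / mean.
    m, v = rcs.mean(), rcs.var()
    a_mom, scale_mom = m * m / v, v / m

    # Kolmogorov-Smirnov goodness-of-fit statistic for each parameter estimate.
    ks_mle = stats.kstest(rcs, 'gamma', args=(a_mle, 0, scale_mle))
    ks_mom = stats.kstest(rcs, 'gamma', args=(a_mom, 0, scale_mom))
    print('MLE KS statistic:', ks_mle.statistic)
    print('MoM KS statistic:', ks_mom.statistic)
    ```

    Whichever parameter estimate yields the smaller goodness-of-fit statistic would be preferred, which mirrors the MLE-versus-MoM comparison described in the abstract.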

  9. Identifying Optimal Temporal Scale for the Correlation of AOD and Ground Measurements of PM2.5 to Improve the Model Performance in a Real-time Air Quality Estimation System

    NASA Technical Reports Server (NTRS)

    Li, Hui; Faruque, Fazlay; Williams, Worth; Al-Hamdan, Mohammad; Luvall, Jeffrey C.; Crosson, William; Rickman, Douglas; Limaye, Ashutosh

    2009-01-01

    Aerosol optical depth (AOD), an indirect estimate of particulate matter from satellite observations, has shown great promise in improving estimates of the PM2.5 air quality surface. Currently, few studies have explored the optimal way to apply AOD data to improve the accuracy of PM2.5 surface estimation in a real-time air quality system. We believe that two major aspects are worthy of consideration: 1) the approach to integrating satellite measurements with ground measurements in the pollution estimation, and 2) identification of an optimal temporal scale for calculating the correlation of AOD and ground measurements. This paper focuses on the second aspect: identifying the optimal temporal scale for correlating AOD with PM2.5. The following five temporal scales were chosen to evaluate their impact on model performance: 1) within the last 3 days, 2) within the last 10 days, 3) within the last 30 days, 4) within the last 90 days, and 5) the time period with the highest correlation in a year. Model performance is evaluated for accuracy, bias, and error using the following statistics: the Mean Bias, the Normalized Mean Bias, the Root Mean Square Error, the Normalized Mean Error, and the Index of Agreement. This research shows that the model using the last 30 days displays the best performance in this study area with the 2004 and 2005 data sets.
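
    For reference, the evaluation statistics named above can be computed from paired model estimates and ground observations roughly as follows. This is a generic sketch of the standard definitions, not code from the study.

    ```python
    import numpy as np

    def evaluation_stats(pred, obs):
        """Common model-evaluation statistics for paired predictions and observations."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        diff = pred - obs
        mb = diff.mean()                                   # Mean Bias
        nmb = diff.sum() / obs.sum()                       # Normalized Mean Bias
        rmse = np.sqrt((diff ** 2).mean())                 # Root Mean Square Error
        nme = np.abs(diff).sum() / obs.sum()               # Normalized Mean Error
        o_bar = obs.mean()
        ioa = 1.0 - (diff ** 2).sum() / (
            ((np.abs(pred - o_bar) + np.abs(obs - o_bar)) ** 2).sum()
        )                                                  # Index of Agreement
        return {'MB': mb, 'NMB': nmb, 'RMSE': rmse, 'NME': nme, 'IOA': ioa}
    ```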

  10. Greenhouse Gas Emissions Model (GEM) for Medium- and Heavy-Duty Vehicle Compliance

    EPA Pesticide Factsheets

    EPA’s Greenhouse Gas Emissions Model (GEM) is a free, desktop computer application that estimates the greenhouse gas (GHG) emissions and fuel efficiency performance of specific aspects of heavy-duty vehicles.

  11. The Effects of Physical Impairment on Shooting Performance

    DTIC Science & Technology

    2012-08-01

    Anthropometric data were collected from each participant; summary anthropometric statistics are shown in table 1.

  12. Robust Stereo Visual Odometry Using Improved RANSAC-Based Methods for Mobile Robot Localization

    PubMed Central

    Liu, Yanqing; Gu, Yuzhang; Li, Jiamao; Zhang, Xiaolin

    2017-01-01

    In this paper, we present a novel approach to stereo visual odometry with robust motion estimation that is faster and more accurate than standard RANSAC (Random Sample Consensus). Our method improves RANSAC in three aspects: first, hypotheses are preferentially generated by sampling the input feature points in order of feature age and similarity; second, hypotheses are evaluated with the SPRT (Sequential Probability Ratio Test), which discards bad hypotheses quickly without verifying all the data points; third, we aggregate the three best hypotheses to obtain the final estimate instead of selecting only the best hypothesis. The first two aspects improve the speed of RANSAC by generating good hypotheses and discarding bad hypotheses early, respectively. The last aspect improves the accuracy of motion estimation. Our method was evaluated on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) and New Tsukuba datasets. Experimental results show that the proposed method achieves better results than RANSAC in both speed and accuracy. PMID:29027935
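
    The sketch below shows a plain RANSAC loop for 2D line fitting, as a baseline for the improvements described above; the prioritized sampling, SPRT verification, and top-three aggregation of the paper are not reproduced here, and the iteration count and inlier threshold are arbitrary.

    ```python
    import numpy as np

    def ransac_line(points, iters=200, inlier_tol=0.05, rng=None):
        """Baseline RANSAC: fit a 2D line a*x + b*y + c = 0 to noisy points (N x 2 array)."""
        points = np.asarray(points, dtype=float)
        rng = np.random.default_rng(rng)
        best_model, best_inliers = None, 0
        for _ in range(iters):
            p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
            direction = p2 - p1
            normal = np.array([-direction[1], direction[0]])
            norm = np.linalg.norm(normal)
            if norm < 1e-12:
                continue                            # degenerate sample, draw again
            normal /= norm
            c = -normal @ p1
            dist = np.abs(points @ normal + c)      # point-to-line distances
            inliers = int((dist < inlier_tol).sum())
            if inliers > best_inliers:              # keep the hypothesis with most support
                best_model, best_inliers = (normal[0], normal[1], c), inliers
        return best_model, best_inliers
    ```

    The paper's contributions replace the uniform sampling, the full inlier count, and the single-best selection in this loop with smarter counterparts.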

  13. Post-Remediation Evaluation of EVO Treatment: How Can We Improve Performance

    DTIC Science & Technology

    2017-11-15

  14. The Effect of Modified Eye Position on Shooting Performance

    DTIC Science & Technology

    2011-04-01

    Participants' visual acuity was 20/20, with one participant aided by corrective contact lenses. Anthropometric data were collected from each participant.

  15. Music and Sound in Time Processing of Children with ADHD

    PubMed Central

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with typical development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6–14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups of 12 children each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates that music can, in some way, positively modulate the symptoms of inattention in ADHD. PMID:26441688

  16. Music and Sound in Time Processing of Children with ADHD.

    PubMed

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children's and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions, can be of great help for studying aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with typical development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups of 12 children each. Data were collected through a musical keyboard using Logic Audio Software 9.0 on a computer that recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. Results: (1) the performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  17. IHPRPT Improvements to the Solid Performance Program (SPP)

    DTIC Science & Technology

    2007-03-19

  18. Vehicle performance impact on space shuttle design and concept evaluation

    NASA Technical Reports Server (NTRS)

    Craig, M. K.

    1972-01-01

    The continuing examination of widely varied space shuttle concepts makes an understanding of concept interaction with vehicle performance imperative. The estimation of vehicle performance is pertinent to all aspects of shuttle design, and hence performance has classically been a key indicator of overall concept desirability and potential. Vehicle performance assumes the added role of defining interactions between specific design characteristics, the sum total of which define a specific concept. Special attention is given to external tank effects.

  19. Effects of Age and Schooling on Intellectual Performance: Estimates Obtained from Analysis of Continuous Variation in Age and Length of Schooling

    ERIC Educational Resources Information Center

    Cliffordson, Christina; Gustafsson, Jan-Eric

    2008-01-01

    The effects of age and schooling on different aspects of intellectual performance, taking track of study into account, are investigated. The analyses were based on military enlistment test scores, obtained by 48,269 males, measuring Fluid ability (Gf), Crystallized intelligence (Gc), and General visualization (Gv) ability. A regression method,…

  20. Identifying Enterprise Leverage Points in Defense Acquisition Program Performance

    DTIC Science & Technology

    2009-09-01

  1. Army Logistician. Volume 35, Issue 5, September-October 2003

    DTIC Science & Technology

    2003-10-01

    ... designed to deliver performance-enhancing natural food elements to troops in the field. The gel contains a mixture of glucose and maltodextrin ...

  2. Psycho-Social Aspects of Educating Epileptic Children: Roles for School Psychologists.

    ERIC Educational Resources Information Center

    Frank, Brenda B.

    1985-01-01

    Epileptic children may have physical and emotional needs which can interfere with learning and socialization. Current prevalence estimates, definitions, and classifications of epilepsy are surveyed. Factors affecting the epileptic child's school performance and specific learning problems are addressed. Specific roles are presented for school…

  3. Cost and performance model for redox flow batteries

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2014-02-01

    A cost model is developed for all-vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. Operating parameters such as power density and flow rate, and design parameters such as electrode aspect ratio and flow frame channel dimensions, are adjusted to maximize efficiency and minimize capital cost. Detailed cost estimates are obtained from various vendors to calculate cost estimates for present, near-term, and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power- or energy-intensive applications are determined, providing a roadmap for battery management systems development for redox flow batteries. The main drivers of cost reduction for various chemistries are identified as a function of the energy-to-power ratio of the storage system. Levelized cost analysis further guides the suitability of various chemistries for different applications.

  4. Returns on Investment in California County Departments of Public Health

    PubMed Central

    2016-01-01

    Objectives. To estimate the average return on investment for the overall activities of county departments of public health in California. Methods. I gathered the elements necessary to estimate the average return on investment for county departments of public health in California during the period 2001 to 2008–2009. These came from peer-reviewed journal articles published as part of a larger project to develop a method for determining return on investment for public health by using a health economics framework. I combined these elements by using the standard formula for computing return on investment, and performed a sensitivity analysis. Then I compared the return on investment for county departments of public health with the returns on investment generated for various aspects of medical care. Results. The estimated return on investment from $1 invested in county departments of public health in California ranges from $67.07 to $88.21. Conclusions. The very large estimated return on investment for California county departments of public health relative to the return on investment for selected aspects of medical care suggests that public health is a wise investment. PMID:27310339

  5. Returns on Investment in California County Departments of Public Health.

    PubMed

    Brown, Timothy T

    2016-08-01

    To estimate the average return on investment for the overall activities of county departments of public health in California. I gathered the elements necessary to estimate the average return on investment for county departments of public health in California during the period 2001 to 2008-2009. These came from peer-reviewed journal articles published as part of a larger project to develop a method for determining return on investment for public health by using a health economics framework. I combined these elements by using the standard formula for computing return on investment, and performed a sensitivity analysis. Then I compared the return on investment for county departments of public health with the returns on investment generated for various aspects of medical care. The estimated return on investment from $1 invested in county departments of public health in California ranges from $67.07 to $88.21. The very large estimated return on investment for California county departments of public health relative to the return on investment for selected aspects of medical care suggests that public health is a wise investment.
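
    For orientation, the standard return-on-investment formula referred to in both records above is simply the net benefit per unit cost; the restatement below uses that generic definition and is not an independent calculation of the paper's figures.

    \[ \mathrm{ROI} = \frac{\text{benefits} - \text{costs}}{\text{costs}}, \]

    so, under this definition, an ROI between $67.07 and $88.21 per $1 invested would correspond to estimated benefits of roughly $68 to $89 for every dollar of public health spending.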

  6. Number games, magnitude representation, and basic number skills in preschoolers.

    PubMed

    Whyte, Jemma Catherine; Bull, Rebecca

    2008-03-01

    The effect of 3 intervention board games (linear number, linear color, and nonlinear number) on young children's (mean age = 3.8 years) counting abilities, number naming, magnitude comprehension, accuracy in number-to-position estimation tasks, and best-fit numerical magnitude representations was examined. Pre- and posttest performance was compared following four 25-min intervention sessions. The linear number board game significantly improved children's performance in all posttest measures and facilitated a shift from a logarithmic to a linear representation of numerical magnitude, emphasizing the importance of spatial cues in estimation. Exposure to the number card games involving nonsymbolic magnitude judgments and association of symbolic and nonsymbolic quantities, but without any linear spatial cues, improved some aspects of children's basic number skills but not numerical estimation precision.

  7. Improved Doubly Robust Estimation when Data are Monotonely Coarsened, with Application to Longitudinal Studies with Dropout

    PubMed Central

    Tsiatis, Anastasios A.; Davidian, Marie; Cao, Weihua

    2010-01-01

    Summary: A routine challenge is that of making inference on parameters in a statistical model of interest from longitudinal data subject to dropout, which are a special case of the more general setting of monotonely coarsened data. Considerable recent attention has focused on doubly robust estimators, which in this context involve positing models for both the missingness (more generally, coarsening) mechanism and aspects of the distribution of the full data, and which have the appealing property of yielding consistent inferences if only one of these models is correctly specified. Doubly robust estimators have been criticized for potentially disastrous performance when both of these models are even only mildly misspecified. We propose a doubly robust estimator applicable in general monotone coarsening problems that achieves comparable or improved performance relative to existing doubly robust methods, which we demonstrate via simulation studies and by application to data from an AIDS clinical trial. PMID:20731640
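
    As a concrete reference point, a textbook doubly robust (augmented inverse-probability-weighted) estimator of a mean outcome under a single missingness occasion has the form below. It conveys the two-model structure discussed in the abstract but is a simplified special case, not the general monotone-coarsening estimator proposed in the paper.

    \[ \hat{\mu}_{DR} = \frac{1}{n}\sum_{i=1}^{n}\left[ \frac{R_i Y_i}{\hat{\pi}(X_i)} - \frac{R_i - \hat{\pi}(X_i)}{\hat{\pi}(X_i)}\, \hat{m}(X_i) \right], \]

    where \(R_i\) indicates that \(Y_i\) is observed, \(\hat{\pi}(X_i)\) is the posited missingness model, and \(\hat{m}(X_i)\) is the posited outcome-regression model; the estimator is consistent if either model is correctly specified.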

  8. Integrating SHM and Time Variant System Performance of Naval Ship Structures For Near Real Time Decision Making Under Uncertainty: A Comprehensive Framework

    DTIC Science & Technology

    2016-12-06

    direction and speed based on cost minimization and best estimated time of arrival (ETA). Sometimes, ships are forced to travel ... the allowable time to complete the travel. Another important aspect, addressed in the case study, is to investigate the optimal routing of aged ...

  9. Analysis of high-aspect-ratio jet-flap wings of arbitrary geometry

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    An analytical technique to compute the performance of an arbitrary jet-flapped wing is developed. The solution technique is based on the method of Maskell and Spence in which the well-known lifting-line approach is coupled with an auxiliary equation providing the extra function needed in jet-flap theory. The present method is generalized to handle straight, uncambered wings of arbitrary planform, twist, and blowing (including unsymmetrical cases). An analytical procedure is developed for continuous variations in the above geometric data with special functions to exactly treat discontinuities in any of the geometric and blowing data. A rational theory for the effect of finite wing thickness is introduced as well as simplified concepts of effective aspect ratio for rapid estimation of performance.
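
    The role that an effective aspect ratio plays in such rapid performance estimates can be seen from the classical lifting-line result for a finite wing, quoted here only as background rather than as a formula from the report:

    \[ C_{D,i} = \frac{C_L^2}{\pi e\, AR}, \]

    so replacing \(AR\) by a larger effective aspect ratio that accounts for the jet flap directly lowers the estimated induced drag at a given lift coefficient.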

  10. Safe mobility for elderly drivers--considerations based on expert and self-assessment.

    PubMed

    Broberg, Thomas; Dukic Willstrand, Tania

    2014-05-01

    To further understand the needs of the growing population of elderly drivers and create solutions for safe mobility, it is important to understand the driving scenarios and aspects of day-to-day traffic that may be challenging for this group. Moreover, individual differences in how drivers perceive their own driving ability may affect how individuals limit their mobility and/or increase their exposure to risk situations, with a potential negative effect on safety. In this study, two sets of assessments were used to identify scenarios and aspects needing consideration in creating safe mobility for elderly drivers: an expert assessment based on on-road driving, together with self-assessments through semi-structured in-depth interviews. This combination also enables categorisation of the drivers by comparing their own perception of their driving performance with the expert assessment based on actual on-road driving. Four categories of drivers were identified: adequate (positive), over-, under-, and adequate (negative) estimators. A number of important aspects were identified in the study. Adapting speed to the situation and driving too fast, especially on straight roads in the city, is one aspect. Seeking the attention of other road users at intersections and roundabouts is another important consideration. Awareness of difficulties related to speed adaptation and attention was low among all the driver categories. However, a difference in attitude was seen, with a more humble and accepting attitude among the adequate and under-estimator groups compared to the over-estimators, suggesting that attitude is another important factor for consideration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Exploring Dynamical Assessments of Affect, Behavior, and Cognition and Math State Test Achievement

    ERIC Educational Resources Information Center

    San Pedro, Maria Ofelia Z.; Snow, Erica L.; Baker, Ryan S.; McNamara, Danielle S.; Heffernan, Neil T.

    2015-01-01

    There is increasing evidence that fine-grained aspects of student performance and interaction within educational software are predictive of long-term learning. Machine learning models have been used to provide assessments of affect, behavior, and cognition based on analyses of system log data, estimating the probability of a student's particular…

  12. Attention Deficit Hyperactivity Disorder: A Rasch Analysis of the SWAN Rating Scale

    ERIC Educational Resources Information Center

    Young, Deidra J.; Levy, Florence; Martin, Neilson C.; Hay, David A.

    2009-01-01

    The prevalence of attention-deficit/hyperactivity disorder (ADHD) has been estimated at 3-7% in the population. Children with this disorder are often characterized by symptoms of inattention and/or impulsivity and hyperactivity, which can significantly impact on many aspects of their behaviour and performance. This study investigated the…

  13. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    PubMed Central

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from the allocentric representation of space (in terms of disparity), which is generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295

  14. Genetics of Parenting: The Power of the Dark Side

    ERIC Educational Resources Information Center

    Oliver, Bonamy R.; Trzaskowski, Maciej; Plomin, Robert

    2014-01-01

    Reviews of behavioral genetic studies note that "control" aspects of parenting yield low estimates of heritability, while "affective" aspects (parental feelings) yield moderate estimates. Research to date has not specifically considered whether positive and negative aspects of parenting--for both feelings and control--may…

  15. Humans make efficient use of natural image statistics when performing spatial interpolation.

    PubMed

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
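
    A minimal version of the simplest heuristic mentioned above, estimating a missing pixel as the mean of its immediate neighbors, might look like the sketch below; the optimal observers in the study use far richer local statistics of natural images.

    ```python
    import numpy as np

    def local_mean_estimate(image, row, col, radius=1):
        """Estimate a missing pixel as the mean of its surrounding neighborhood."""
        img = np.asarray(image, dtype=float)
        r0, r1 = max(row - radius, 0), min(row + radius + 1, img.shape[0])
        c0, c1 = max(col - radius, 0), min(col + radius + 1, img.shape[1])
        patch = img[r0:r1, c0:c1].copy()
        patch[row - r0, col - c0] = np.nan      # exclude the missing pixel itself
        return np.nanmean(patch)
    ```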

  16. Analysis of high aspect ratio jet flap wings of arbitrary geometry.

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    Paper presents a design technique for rapidly computing lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. Method of reducing wing to equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with limited existing data, and show lifting line theory is reasonable down to aspect ratios of 5.

  17. Statistical study of generalized nonlinear phase step estimation methods in phase-shifting interferometry.

    PubMed

    Langoju, Rajesh; Patil, Abhijit; Rastogi, Pramod

    2007-11-20

    Signal processing methods based on maximum-likelihood theory, discrete chirp Fourier transform, and spectral estimation methods have enabled accurate measurement of phase in phase-shifting interferometry in the presence of nonlinear response of the piezoelectric transducer to the applied voltage. We present the statistical study of these generalized nonlinear phase step estimation methods to identify the best method by deriving the Cramér-Rao bound. We also address important aspects of these methods for implementation in practical applications and compare the performance of the best-identified method with other bench marking algorithms in the presence of harmonics and noise.
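
    For context, the Cramér-Rao bound used to rank such estimators takes the standard form below; the actual bound in the paper is derived for the specific nonlinear phase-step model and noise assumptions, which are not reproduced here.

    \[ \operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)}, \qquad I(\theta) = -\,\mathbb{E}\!\left[ \frac{\partial^2 \ln L(\mathbf{x};\theta)}{\partial \theta^2} \right], \]

    for any unbiased estimator \(\hat{\theta}\) of the phase-step parameter \(\theta\), where \(L\) is the likelihood of the recorded intensity data.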

  18. Climate change and drought effects on rural income distribution in the Mediterranean: a case study for Spain

    NASA Astrophysics Data System (ADS)

    Quiroga, Sonia; Suárez, Cristina

    2016-06-01

    This paper examines the effects of climate change and drought on agricultural incomes in Spanish rural areas. The research focuses on the effects of these extreme climatological events through response functions, considering effects on crop productivity and average incomes. Among the impacts of droughts, we focus on potential effects on income distribution. A study of the effects of abnormally dry periods is therefore needed in order to analyse diverse social aspects in the long term. We estimate crop production functions for a range of Mediterranean crops in Spain, and we use a measure of the decomposition of inequality to estimate the impact of climate change and drought on yield disparities. Certain adaptation measures may require a better understanding of risks by the public to achieve general acceptance. We provide empirical estimates of the marginal effects for the two impacts considered: farms' average income and income distribution. Our estimates consider crop production response to both biophysical and socio-economic aspects to analyse long-term implications for competitiveness and disparities. As for the results, we find disparities in adaptation priorities depending on the crop and the region analysed.

  19. Research study entitled advanced X-ray astrophysical observatory (AXAF). [system engineering for a total X-ray telescope assembly

    NASA Technical Reports Server (NTRS)

    Rasche, R. W.

    1979-01-01

    General background and overview material are presented along with data from studies performed to determine the sensitivity, feasibility, and required performance of systems for a total X-ray telescope assembly. Topics covered include: optical design, mirror support concepts, mirror weight estimates, the effects of 1 g on mirror elements, mirror assembly resonant frequencies, optical bench considerations, temperature control of the mirror assembly, and the aspect determination system.

  20. Sources of Variability in Performance Times at the World Orienteering Championships.

    PubMed

    Hébert-Losier, Kim; Platt, Simon; Hopkins, William G

    2015-07-01

    An improvement equal to 0.3 of the typical variation in an elite athlete's race-to-race performance estimates the smallest worthwhile enhancement, which has not yet been determined for orienteers. Moreover, much of the research in high-performance orienteering has focused on physical and cognitive aspects, although course characteristics might also influence race performance. Analysis of race data provides insights into environmental effects and other aspects of competitive performance. Our aim was to examine such factors in relation to World Orienteering Championships performances. We used mixed linear modelling to analyze finishing times from the three qualification rounds and final round of the sprint, middle-distance, and long-distance disciplines of the World Orienteering Championships from 2006 to 2013. Models accounted for race length, distance climbed, number of controls, home advantage, venue identity, round (qualification or final), athlete identity, and athlete age. Within-athlete variability (coefficient of variation, mean ± SD) was lower in the final (4.9% ± 1.4%) than in the qualification (7.3% ± 2.4%) rounds and provided estimates of smallest worthwhile enhancements of 1.0%-3.5%. The home advantage was clear in most disciplines, with distance climbed particularly impacting sprint performances. Small to very large between-venue differences were apparent. Performance predictability expressed as intraclass correlation coefficients was extremely high within years and high to very high between years. Age of peak performance ranged from 27 to 31 yr. Our results suggest that elite orienteers should focus on training and strategies that enhance performance by at least the smallest worthwhile enhancement of 1.0%-3.5%. Moreover, as greater familiarity with the terrain likely mediated the home advantage, foreign athletes would benefit from training in nations hosting the World Orienteering Championships for familiarization.
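
    Reading the numbers together, the 0.3-of-typical-variation rule reproduces the reported enhancement range directly; the following back-of-envelope check uses only the coefficients of variation quoted in the abstract.

    \[ 0.3 \times (4.9\% \pm 1.4\%) \approx 1.1\%\text{-}1.9\%, \qquad 0.3 \times (7.3\% \pm 2.4\%) \approx 1.5\%\text{-}2.9\%, \]

    which is consistent with the stated smallest worthwhile enhancements of roughly 1.0%-3.5% across rounds and disciplines.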

  1. Analytical Aspects Relating to the Estimation of Carbon Filter Performance for Military Applications

    DTIC Science & Technology

    2013-07-01

    materials having relatively low room-temperature vapor pressures and exhibiting Type I Brunauer, Emmett, and Teller (BET) isotherms. Many compounds that ...

  2. Thermionic fuel element for the S-prime reactor

    NASA Astrophysics Data System (ADS)

    Van Hagan, Thomas H.; Drees, Elizabeth A.

    1993-01-01

    Technical aspects of the thermionic fuel element (TFE) design proposed for the S-PRIME space nuclear power system are discussed. Topics covered include the rationale for selecting a multicell TFE approach, a technical description of the S-PRIME TFE and its estimated performance, and the technology readiness of the design, which emphasizes technology maturity and low risk.

  3. Assessment of the Prognostic and Treatment-Predictive Performance of the Combined HOXB13:IL17BR-MGI Gene Expression Signature in the Trans-ATAC Cohort

    DTIC Science & Technology

    2013-12-01

    ... three BCI-L groups identified two risk populations for both early and late DR, with 84% (556/665) of patients having low risk for early DR, and a smaller ...

  4. Dutch healthcare reform: did it result in performance improvement of health plans? A comparison of consumer experiences over time.

    PubMed

    Hendriks, Michelle; Spreeuwenberg, Peter; Rademakers, Jany; Delnoij, Diana M J

    2009-09-17

    Many countries have introduced elements of managed competition in their healthcare system with the aim to accomplish more efficient and demand-driven health care. Simultaneously, generating and reporting of comparative healthcare information has become an important quality-improvement instrument. We examined whether the introduction of managed competition in the Dutch healthcare system along with public reporting of quality information was associated with performance improvement in health plans. Experiences of consumers with their health plan were measured in four consecutive years (2005-2008) using the CQI(R) health plan instrument 'Experiences with Healthcare and Health Insurer'. Data were available of 13,819 respondents (response = 45%) of 30 health plans in 2005, of 8,266 respondents (response = 39%) of 32 health plans in 2006, of 8,088 respondents (response = 34%) of 32 health plans in 2007, and of 7,183 respondents (response = 31%) of 32 health plans in 2008. We performed multilevel regression analyses with three levels: respondent, health plan and year of measurement. Per year and per quality aspect, we estimated health plan means while adjusting for consumers' age, education and self-reported health status. We tested for linear and quadratic time effects using chi-squares. The overall performance of health plans increased significantly from 2005 to 2008 on four quality aspects. For three other aspects, we found that the overall performance first declined and then increased from 2006 to 2008, but the performance in 2008 was not better than in 2005. The overall performance of health plans did not improve more often for quality aspects that were identified as important areas of improvement in the first year of measurement. On six out of seven aspects, the performance of health plans that scored below average in 2005 increased more than the performance of health plans that scored average and/or above average in that year. We found mixed results concerning the effects of managed competition on the performance of health plans. To determine whether managed competition in the healthcare system leads to quality improvement in health plans, it is important to examine whether and for what reasons health plans initiate improvement efforts.

  5. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
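
    The link between the moments and the RMS mentioned above is the usual second-moment expansion; in the notation below, \(\hat{\varepsilon}\) is the error estimate and \(\varepsilon\) the true LDA error, and the expression is the generic identity rather than the paper's closed-form approximations.

    \[ \operatorname{RMS}(\hat{\varepsilon}) = \sqrt{\mathbb{E}\big[(\hat{\varepsilon}-\varepsilon)^2\big]} = \sqrt{\ \mathbb{E}[\hat{\varepsilon}^{\,2}] - 2\,\mathbb{E}[\hat{\varepsilon}\,\varepsilon] + \mathbb{E}[\varepsilon^{2}]\ }, \]

    so the second moments of \(\hat{\varepsilon}\) and \(\varepsilon\) together with their cross moment determine the RMS, while the first moments give the corresponding bias.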

  6. Comparison of Mars Aircraft Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Colozza, Anthony J.

    2003-01-01

    The propulsion system is a critical aspect of the performance and feasibility of a Mars aircraft. Propulsion system mass and performance greatly influence the aircraft's design and mission capabilities. Various propulsion systems were analyzed to estimate the system mass necessary for producing 35 N of thrust within the Mars environment. Three main categories of propulsion systems were considered: electric systems, combustion engine systems, and rocket systems. Also, the system masses were compared for mission durations of 1, 2, and 4 h.

  7. Ethics in age estimation of unaccompanied minors.

    PubMed

    Thevissen, P W; Kvaal, S I; Willems, G

    2012-11-30

    Children absconding from countries of conflict and war are often not able to document their age. When an age is given, it is frequently untraceable or poorly documented and therefore questioned by immigration authorities. Consequently, many countries perform age estimations on these children. Provision of ethical practice during the age estimation investigation of unaccompanied minors is considered from different angles: (1) the UN convention on children's rights, formulating specific rights, protection, support, healthcare and education for unaccompanied minors; (2) since most age estimation investigations are based on medical examination, the four basic principles of biomedical ethics, namely autonomy, beneficence, non-maleficence, and justice; (3) the use of medicine for non-treatment purposes; (4) how age estimates with the highest accuracy in age prediction can be obtained. Ethical practice in age estimation of unaccompanied minors is achieved when these different but related aspects are searched, evaluated, weighted in importance and subsequently combined. However, this is not always feasible and unanswered questions remain.

  8. Performance of e-ASPECTS software in comparison to that of stroke physicians on assessing CT scans of acute ischemic stroke patients.

    PubMed

    Herweh, Christian; Ringleb, Peter A; Rauch, Geraldine; Gerry, Steven; Behrens, Lars; Möhlenbruch, Markus; Gottorf, Rebecca; Richter, Daniel; Schieber, Simon; Nagel, Simon

    2016-06-01

    The Alberta Stroke Program Early CT score (ASPECTS) is an established 10-point quantitative topographic computed tomography scan score to assess early ischemic changes. We compared the performance of the e-ASPECTS software with that of stroke physicians at different professional levels. The baseline computed tomography scans of acute stroke patients, in whom computed tomography and diffusion-weighted imaging scans were obtained less than two hours apart, were retrospectively scored by e-ASPECTS as well as by three stroke experts and three neurology trainees blinded to any clinical information. The ground truth was defined as the ASPECTS on diffusion-weighted imaging scored by another two non-blinded independent experts on a consensus basis. Sensitivity and specificity in an ASPECTS region-based and an ASPECTS score-based analysis, as well as receiver-operating characteristic curves, Bland-Altman plots with mean score error, and Matthews correlation coefficients were calculated. Comparisons were made between the human scorers and e-ASPECTS with diffusion-weighted imaging being the ground truth. Two methods for clustered data were used to estimate sensitivity and specificity in the region-based analysis. In total, 34 patients were included and 680 (34 × 20) ASPECTS regions were scored. Mean time from onset to computed tomography was 172 ± 135 min and mean time difference between computed tomography and magnetic resonance imaging was 41 ± 31 min. The region-based sensitivity (46.46% [CI: 30.8;62.1]) of e-ASPECTS was better than that of three trainees and one expert (p ≤ 0.01) and not statistically different from another two experts. Specificity (94.15% [CI: 91.7;96.6]) was lower than that of one expert and one trainee (p < 0.01) and not statistically different from the other four physicians. e-ASPECTS had the best Matthews correlation coefficient of 0.44 (experts: 0.38 ± 0.08 and trainees: 0.19 ± 0.05) and the lowest mean score error of 0.56 (experts: 1.44 ± 1.79 and trainees: 1.97 ± 2.12). e-ASPECTS showed a similar performance to that of stroke experts in the assessment of brain computed tomography scans of acute ischemic stroke patients with the Alberta Stroke Program Early CT score method. © 2016 World Stroke Organization.
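
    The region-based figures quoted above (sensitivity, specificity, Matthews correlation coefficient) all follow from a 2x2 confusion matrix. The helper below shows the standard formulas only; it is not the clustered-data estimation procedure used in the study, and the example counts are made up.

    ```python
    import math

    def region_scores(tp, fp, tn, fn):
        """Sensitivity, specificity and Matthews correlation coefficient from counts."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        mcc = (tp * tn - fp * fn) / denom if denom else 0.0
        return sensitivity, specificity, mcc

    # Example with made-up counts (not data from the study):
    print(region_scores(tp=40, fp=30, tn=500, fn=45))
    ```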

  9. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2015-10-01

    The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. System aspects such as shunt current losses, pumping losses and various flow patterns through electrodes are accounted for. The system-cost-minimizing objective function determines stack design by optimizing the state-of-charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of <$350 kWh-1 for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh-1 for a 4-h application, and to $100 kWh-1 for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and guide future direction.

  10. Subsonic aircraft: Evolution and the matching of size to performance

    NASA Technical Reports Server (NTRS)

    Loftin, L. K., Jr.

    1980-01-01

    Methods for estimating the approximate size, weight, and power of aircraft intended to meet specified performance requirements are presented for both jet-powered and propeller-driven aircraft. The methods are simple and require only the use of a pocket computer for rapid application to specific sizing problems. Application of the methods is illustrated by means of sizing studies of a series of jet-powered and propeller-driven aircraft with varying design constraints. Some aspects of the technical evolution of the airplane from 1918 to the present are also briefly discussed.

  11. High-performance liquid chromatographic determination of benzil in air as an indicator of emissions derived from polyester powder coatings.

    PubMed

    Pukkila, J; Kokotti, H; Peltonen, K

    1989-10-06

    A method to estimate occupational exposure to emissions from the curing of polyester powder paints was developed. The method is based on monitoring only a certain marker compound in workroom air in order to make the determinations easier. Benzil, reproducibly emitted from all the powders tested, was chosen as the indicator for curing (220 degrees C)-derived emissions. A method for air sampling and high-performance liquid chromatographic determination of benzil is described. Aspects of the use of marker compounds are discussed.

  12. Objective assessment of operator performance during ultrasound-guided procedures.

    PubMed

    Tabriz, David M; Street, Mandie; Pilgram, Thomas K; Duncan, James R

    2011-09-01

    Simulation permits objective assessment of operator performance in a controlled and safe environment. Image-guided procedures often require accurate needle placement, and we designed a system to monitor how ultrasound guidance is used to monitor needle advancement toward a target. The results were correlated with other estimates of operator skill. The simulator consisted of a tissue phantom, ultrasound unit, and electromagnetic tracking system. Operators were asked to guide a needle toward a visible point target. Performance was video-recorded and synchronized with the electromagnetic tracking data. A series of algorithms based on motor control theory and human information processing were used to convert raw tracking data into different performance indices. Scoring algorithms converted the tracking data into efficiency, quality, task difficulty, and targeting scores that were aggregated to create performance indices. After initial feasibility testing, a standardized assessment was developed. Operators (N = 12) with a broad spectrum of skill and experience were enrolled and tested. Overall scores were based on performance during ten simulated procedures. Prior clinical experience was used to independently estimate operator skill. When summed, the performance indices correlated well with estimated skill. Operators with minimal or no prior experience scored markedly lower than experienced operators. The overall score tended to increase according to operator's clinical experience. Operator experience was linked to decreased variation in multiple aspects of performance. The aggregated results of multiple trials provided the best correlation between estimated skill and performance. A metric for the operator's ability to maintain the needle aimed at the target discriminated between operators with different levels of experience. This study used a highly focused task model, standardized assessment, and objective data analysis to assess performance during simulated ultrasound-guided needle placement. The performance indices were closely related to operator experience.

  13. [Traumatic brain injuries--forensic and expertise aspects].

    PubMed

    Vuleković, Petar; Simić, Milan; Misić-Pavkov, Gordana; Cigić, Tomislav; Kojadinović, Zeljko; Dilvesi, Dula

    2008-01-01

    Traumatic brain injuries have major socio-economic importance due to their frequency, high mortality and serious consequences. According to their nature, the consequences of these injuries may be classified as neurological, psychiatric and esthetic. Various lesions of brain structures cause neurological consequences such as disturbance of motor functions, sensibility, coordination or involuntary movements, speech disturbances and other deviations, as well as epilepsy. Psychiatric consequences include cognitive deficit, emotional disturbances and behavior disturbances. CRIMINAL-LEGAL ASPECT OF TRAUMATIC BRAIN INJURIES AND LITIGATION: The criminal-legal aspect of traumatic brain injury expertise involves the qualification of these injuries as mild, serious and qualified serious bodily injuries, as well as expert assessment of the mechanisms of their occurrence. Litigation expertise includes the estimation of pain, fear, diminished, i.e. lost, vital activity and disability, esthetic marring, and psychological suffering based on the diminished general vital activity and esthetic marring. Evaluation of consequences of traumatic brain injuries should be performed only when it can be positively confirmed that they are permanent, i.e. at least one year after the injury. Expertise of these injuries is interdisciplinary. Among clinical doctors, the most competent medical expert is the one in charge of diagnostics and injury treatment, with the recommendation to avoid, if possible, the doctor who conducted the treatment. For the estimation of general vital activity, the neurological consequences, pain and esthetic marring expertise, the most competent doctors are the neurosurgeon and the neurologist. Expertise on psychological and psychiatric consequences and fear has to be performed by the psychiatrist. Specialists in forensic medicine contribute knowledge of criminal law and legal expertise.

  14. Estimating Catchment-Scale Snowpack Variability in Complex Forested Terrain, Valles Caldera National Preserve, NM

    NASA Astrophysics Data System (ADS)

    Harpold, A. A.; Brooks, P. D.; Biederman, J. A.; Swetnam, T.

    2011-12-01

    Difficulty estimating snowpack variability across complex forested terrain currently hinders the prediction of water resources in the semi-arid Southwestern U.S. Catchment-scale estimates of snowpack variability are necessary for addressing ecological, hydrological, and water resources issues, but are often interpolated from a small number of point-scale observations. In this study, we used LiDAR-derived distributed datasets to investigate how elevation, aspect, topography, and vegetation interact to control catchment-scale snowpack variability. The study area is the Redondo massif in the Valles Caldera National Preserve, NM, a resurgent dome that varies from 2500 to 3430 m and drains from all aspects. Mean LiDAR-derived snow depths from four catchments (2.2 to 3.4 km^2) draining different aspects of the Redondo massif varied by 30%, despite similar mean elevations and mixed conifer forest cover. To better quantify this variability in snow depths, we performed a multiple linear regression (MLR) at a 7.3 by 7.3 km study area (5 x 10^6 snow depth measurements) comprising the four catchments. The MLR showed that elevation explained 45% of the variability in snow depths across the study area, aspect explained 18% (dominated by N-S aspect), and vegetation 2% (canopy density and height). This linear relationship was not transferable to the catchment scale, however, where additional MLR analyses showed the influence of aspect and elevation differed between the catchments. The strong influence of North-South aspect in most catchments indicated that solar radiation is an important control on snow depth variability. To explore the role of solar radiation, a model was used to generate winter solar forcing index (SFI) values based on the local and remote topography. The SFI was able to explain a large amount of snow depth variability in areas with similar elevation and aspect. Finally, the SFI was modified to include the effects of shading from vegetation (in and out of canopy), which further explained snow depth variability. The importance of SFI for explaining catchment-scale snow depth variability demonstrates that aspect is not a sufficient metric for direct radiation in complex terrain where slope and remote topographic shading are significant. Surprisingly, the net effects of interception and shading by vegetation on snow depths were minimal compared to elevation and aspect in these catchments. These results suggest that snowpack losses from interception may be balanced by increased shading to reduce the overall impacts from vegetation compared to topographic factors in this high radiation environment. Our analysis indicated that elevation and solar radiation are likely to control snow variability in larger catchments, with interception and shading from vegetation becoming more important at smaller scales.
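    A minimal sketch of the kind of multiple linear regression described above, using synthetic gridded predictors (elevation, a north-south aspect index, and canopy density) in place of the LiDAR-derived datasets; the coefficients, noise level, and grid size are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000                                   # grid cells (illustrative, not 5 x 10^6)
elev = rng.uniform(2500.0, 3430.0, n)      # elevation, m
northness = np.cos(np.deg2rad(rng.uniform(0.0, 360.0, n)))  # N-S aspect index
canopy = rng.uniform(0.0, 1.0, n)          # canopy density fraction

# Synthetic snow depth (m) with assumed sensitivities to each predictor
depth = (0.0012 * (elev - 2500.0) + 0.25 * northness - 0.05 * canopy
         + rng.normal(0.0, 0.3, n))

# Ordinary least squares: depth ~ elevation + northness + canopy
X = np.column_stack([np.ones(n), elev, northness, canopy])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
print("coefficients:", coef, "R^2:", round(r2, 3))
```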

  15. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

    X-ray Computed Tomography (CT) systems perform complex imaging tasks to detect and estimate system parameters, such as a baggage imaging system performing threat detection and generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics only focus on one of these aspects. When making design choices, there is a need for a concise metric which considers both detection and estimation information parameters, and then provides the user with the collection of possible optimal outcomes. In this paper, a graphical analysis of Estimation and Detection Information Trade-off (EDIT) will be explored. EDIT produces curves which allow for a decision to be made for system optimization based on design constraints and costs associated with estimation and detection. EDIT analyzes the system in the estimation information and detection information space where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information, and the EDIT curves will then provide the collection of optimal outcomes. The paper will first look at two methods of creating EDIT curves. These curves can be calculated by evaluating a wide variety of systems and finding the optimal system that maximizes a figure of merit. An EDIT curve can also be found as an upper bound on the information from a collection of systems. These two methods allow the user to choose a method of calculation which best fits the constraints of their actual system.
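    The trade-off curve idea can be illustrated with a short sketch: given a hypothetical collection of candidate system designs, each scored by a detection-information and an estimation-information figure of merit, an EDIT-style curve can be read off as the upper envelope (Pareto frontier) of those points. The scores below are random placeholders and the envelope construction is a generic stand-in for the two methods described in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical candidate systems: columns = (detection info, estimation info)
systems = rng.random((200, 2))

def pareto_frontier(points):
    """Return the points not dominated in both coordinates (upper envelope)."""
    order = np.argsort(-points[:, 0])        # sort by detection info, descending
    frontier, best_est = [], -np.inf
    for p in points[order]:
        if p[1] > best_est:                  # strictly better estimation info
            frontier.append(p)
            best_est = p[1]
    return np.array(frontier)

curve = pareto_frontier(systems)
print(curve)   # candidate optimal trade-offs between detection and estimation
```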

  16. Encoding specificity manipulations do affect retrieval from memory.

    PubMed

    Zeelenberg, René

    2005-05-01

    In a recent article, P.A. Higham (2002) [Strong cues are not necessarily weak: Thomson and Tulving (1970) and the encoding specificity principle revisited. Memory & Cognition, 30, 67-80] proposed a new way to analyze cued recall performance in terms of three separable aspects of memory (retrieval, monitoring, and report bias) by comparing performance under both free-report and forced-report instructions. He used this method to derive estimates of these aspects of memory in an encoding specificity experiment similar to that reported by D.M. Thomson and E. Tulving (1970) [Associative encoding and retrieval: weak and strong cues. Journal of Experimental Psychology, 86, 255-262]. Under forced-report instructions, the encoding specificity manipulation did not affect performance. Higham concluded that the manipulation affected monitoring and report bias, but not retrieval. I argue that this interpretation of the results is problematic because the Thomson and Tulving paradigm is confounded, and show in three experiments using a more appropriate design that encoding specificity manipulations do affect performance in forced-report cued recall. Because in Higham's framework forced-report performance provides a measure of retrieval that is uncontaminated by monitoring and report bias, it is concluded that encoding specificity manipulations do affect retrieval from memory.

  17. Practical aspects of photovoltaic technology, applications and cost (revised)

    NASA Technical Reports Server (NTRS)

    Rosenblum, L.

    1985-01-01

    The purpose of this text is to provide the reader with the background, understanding, and computational tools needed to master the practical aspects of photovoltaic (PV) technology, application, and cost. The focus is on stand-alone, silicon solar cell, flat-plate systems in the range of 1 to 25 kWh/day output. Technology topics covered include operation and performance of each of the major system components (e.g., modules, array, battery, regulators, controls, and instrumentation), safety, installation, operation and maintenance, and electrical loads. Application experience and trends are presented. Indices of electrical service performance - reliability, availability, and voltage control - are discussed, and the known service performance of central station electric grid, diesel-generator, and PV stand-alone systems is compared. PV system sizing methods are reviewed and compared, and a procedure for rapid sizing is described and illustrated by the use of several sample cases. The rapid sizing procedure yields an array and battery size that corresponds to a minimum-cost system for a given load requirement, insolation condition, and desired level of service performance. PV system capital cost and levelized energy cost are derived as functions of service performance and insolation. Estimates of future trends in PV system costs are made.
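    As a rough illustration of what a rapid-sizing calculation looks like, the sketch below performs a generic first-pass stand-alone PV sizing from daily load and peak-sun hours; the derate factor, autonomy days, and depth-of-discharge limit are assumed placeholders, and this is not the report's actual procedure.

```python
def rapid_pv_sizing(load_kwh_day, insolation_h_day,
                    derate=0.7, autonomy_days=4.0, max_dod=0.8):
    """First-pass stand-alone PV sizing: array peak power (kW) and
    battery capacity (kWh). All loss and autonomy factors are assumptions."""
    array_kw = load_kwh_day / (insolation_h_day * derate)
    battery_kwh = load_kwh_day * autonomy_days / max_dod
    return array_kw, battery_kwh

# Example: a 10 kWh/day load at a site with 5 peak-sun hours per day
print(rapid_pv_sizing(10.0, 5.0))   # -> roughly (2.9 kW array, 50 kWh battery)
```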

  18. Cross-Sectional HIV Incidence Surveillance: A Benchmarking of Approaches for Estimating the 'Mean Duration of Recent Infection'.

    PubMed

    Kassanjee, Reshma; De Angelis, Daniela; Farah, Marian; Hanson, Debra; Labuschagne, Jan Phillipus Lourens; Laeyendecker, Oliver; Le Vu, Stéphane; Tom, Brian; Wang, Rui; Welte, Alex

    2017-03-01

    The application of biomarkers for 'recent' infection in cross-sectional HIV incidence surveillance requires the estimation of critical biomarker characteristics. Various approaches have been employed for using longitudinal data to estimate the Mean Duration of Recent Infection (MDRI) - the average time in the 'recent' state. In this systematic benchmarking of MDRI estimation approaches, a simulation platform was used to measure accuracy and precision of over twenty approaches, in thirty scenarios capturing various study designs, subject behaviors and test dynamics that may be encountered in practice. Results highlight that assuming a single continuous sojourn in the 'recent' state can produce substantial bias. Simple interpolation provides useful MDRI estimates provided subjects are tested at regular intervals. Regression performs the best - while 'random effects' describe the subject-clustering in the data, regression models without random effects proved easy to implement, stable, and of similar accuracy in scenarios considered; robustness to parametric assumptions was improved by regressing 'recent'/'non-recent' classifications rather than continuous biomarker readings. All approaches were vulnerable to incorrect assumptions about subjects' (unobserved) infection times. Results provided show the relationships between MDRI estimation performance and the number of subjects, inter-visit intervals, missed visits, loss to follow-up, and aspects of biomarker signal and noise.
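    A minimal sketch of the simple-interpolation estimator mentioned above: the probability of testing 'recent' is estimated at each time since infection from the observed classifications and integrated over the post-infection cutoff T. The decay curve, visit schedule, and cutoff below are synthetic assumptions, not the benchmarking scenarios.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2.0                                     # post-infection cutoff, years
visit_times = np.linspace(0.0, T, 9)        # regular visits for every subject
n_subjects = 400

# Synthetic truth: probability of still testing 'recent' decays with time
p_recent_true = np.exp(-visit_times / 0.5)
recent = rng.random((n_subjects, visit_times.size)) < p_recent_true

# Estimate P(recent | t) at each visit and integrate (trapezoid) to get the MDRI
p_recent_hat = recent.mean(axis=0)
mdri_years = np.trapz(p_recent_hat, visit_times)
print("estimated MDRI (years):", round(mdri_years, 3))
# True value for this decay: 0.5 * (1 - exp(-T / 0.5)) ~= 0.491 years
```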

  19. TVA-based assessment of visual attentional functions in developmental dyslexia

    PubMed Central

    Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca

    2014-01-01

    There is an ongoing debate about whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). In particular, performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short-term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias that is typical of unimpaired adult readers. PMID:25360129

  20. A multimodal approach to estimating vigilance using EEG and forehead EOG.

    PubMed

    Zheng, Wei-Long; Lu, Bao-Liang

    2017-04-01

    Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. The PERCLOS index as vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamic changing process because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, EOG and EEG contain complementary information for vigilance estimation, and the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities are increased, while gamma frequency activities are decreased in drowsy states in contrast to awake states. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparative performance using only four shared electrodes in comparison with the temporal and posterior sites.

  1. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
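    A compact sketch of the general recipe (select the most correlated complete genes, fit a least-squares regression, shrink the coefficients, impute); the ridge-style shrinkage used below is a simple stand-in, not the authors' exact shrinkage estimator, and the data are synthetic.

```python
import numpy as np

def impute_missing(expr, k=10, shrink=5.0):
    """Impute NaNs in a genes x arrays matrix by shrunken regression on the
    k most-correlated complete genes (ridge-style shrinkage as a stand-in)."""
    expr = expr.copy()
    complete = expr[~np.isnan(expr).any(axis=1)]
    for g in np.where(np.isnan(expr).any(axis=1))[0]:
        miss = np.isnan(expr[g])
        # Pearson correlation with the target gene over its observed arrays
        r = np.array([np.corrcoef(expr[g, ~miss], c[~miss])[0, 1] for c in complete])
        nbrs = complete[np.argsort(-np.abs(r))[:k]]
        X, y = nbrs[:, ~miss].T, expr[g, ~miss]
        X1 = np.column_stack([np.ones(len(y)), X])
        # Shrunken (ridge) least-squares coefficients
        beta = np.linalg.solve(X1.T @ X1 + shrink * np.eye(X1.shape[1]), X1.T @ y)
        expr[g, miss] = np.column_stack([np.ones(miss.sum()), nbrs[:, miss].T]) @ beta
    return expr

# Tiny synthetic example with one missing entry
rng = np.random.default_rng(4)
data = rng.normal(size=(50, 12))
data[3, 5] = np.nan
print(impute_missing(data)[3, 5])
```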

  2. Baseball Throwing Mechanics as They Relate to Pathology and Performance - A Review

    PubMed Central

    Whiteley, Rod

    2007-01-01

    It is a commonly held perception amongst biomechanists, sports medicine practitioners, baseball coaches and players, that an individual baseball player's style of throwing or pitching influences their performance and susceptibility to injury. With the results of a series of focus groups with baseball managers and pitching coaches in mind, the available scientific literature was reviewed regarding the contribution of individual aspects of pitching and throwing mechanics to potential for injury and performance. After a discussion of the limitations of kinematic and kinetic analyses, the individual aspects of pitching mechanics are discussed under arbitrary headings: Foot position at stride foot contact; Elbow flexion; Arm rotation; Arm horizontal abduction; Arm abduction; Lead knee position; Pelvic orientation; Deceleration-phase related issues; Curveballs; and Teaching throwing mechanics. In general, popular opinion of baseball coaching staff was found to be largely in concordance with the scientific investigations of biomechanists with several notable exceptions. Some difficulties are identified with the practical implementation of analyzing throwing mechanics in the field by pitching coaches, and with some unquantified aspects of scientific analyses. Key points: (1) Biomechanical analyses, including kinematic and kinetic analyses, allow for estimation of pitching performance and potential for injury. (2) Some difficulties, both theoretic and practical, exist for the implementation and interpretation of such analyses. (3) Commonly held opinions of baseball pitching authorities are largely held to concur with biomechanical analyses. (4) Recommendations can be made regarding appropriate pitching and throwing technique in light of these investigations. PMID:24149219

  3. Effects of Winglets on the Drag of a Low-Aspect-Ratio Configuration

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1996-01-01

    A wind-tunnel investigation has been performed to determine the effect of winglets on the induced drag of a low-aspect-ratio wing configuration at Mach numbers between 0.30 and 0.85 and a nominal angle-of-attack range from -2 deg to 20 deg. Results of the tests at the cruise lift coefficient showed significant increases in lift-drag ratio for the winglet configuration relative to a wing-alone configuration designed for the same lift coefficient and Mach number. Further, even larger increases in lift-drag ratio were observed at lift coefficients above the design value at all Mach numbers tested. The addition of these winglets had a negligible effect on the static lateral-directional stability characteristics of the configuration. No tests were made to determine the effect of these winglets at supersonic Mach numbers, where increases in drag caused by winglets might be more significant. Computational analyses were also performed for the two configurations studied. Linear and small-disturbance formulations were used. The codes were found to give reasonable performance estimates sufficient for predicting changes of this magnitude.

  4. Disassociation of cognitive and affective aspects of theory of mind in obsessive-compulsive disorder.

    PubMed

    Liu, Wanting; Fan, Jie; Gan, Jun; Lei, Hui; Niu, Chaoyang; Chan, Raymond C K; Zhu, Xiongzhao

    2017-09-01

    Impairment in social functioning has been widely described in obsessive-compulsive disorder (OCD). However, several aspects of social cognition, such as theory of mind (ToM), have not been substantially investigated in this context. This study examined cognitive and affective ToM in 40 OCD patients and 38 age-, sex-, and education-matched healthy controls (HCs) with the computerized Yoni task and a battery of neurocognitive tests. OCD symptom severity was assessed with the Yale-Brown Obsessive-Compulsive Scale (Y-BOCS). Depressive and anxiety symptoms were also assessed. Compared to HCs, OCD patients performed worse on second-order affective condition trials, but not cognitive or physical condition trials, of the Yoni task; there were no group differences in any of the first-order condition domains. Second-order ToM performance of OCD patients was associated with estimated intelligence and working memory performance. After controlling for neurocognitive variables, the group difference in second-order affective condition performance remained significant. These findings indicate that the affective component of ToM may be selectively impaired in OCD patients and that the observed deficit is largely independent of other neurocognitive impairments and clinical characteristics. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  5. Design/cost tradeoff studies. Appendix A. Supporting analyses and tradeoffs, book 2. Earth Observatory Satellite system definition study (EOS)

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Attitude reference systems for use with the Earth Observatory Satellite (EOS) are described. The systems considered are fixed and gimbaled star trackers, star mappers, and digital sun sensors. Covariance analyses were performed to determine performance for the most promising candidate in low altitude and synchronous orbits. The performance of attitude estimators that employ gyroscopes which are periodically updated by a star sensor is established by a single axis covariance analysis. The other systems considered are: (1) the propulsion system design, (2) electric power and electrical integration, (3) thermal control, (4) ground data processing, and (5) the test plan and cost reduction aspects of observatory integration and test.

  6. Comparisons of cloud cover evaluated from LANDSAT imagery and meteorological stations across the British Isles

    NASA Technical Reports Server (NTRS)

    Barrett, E. C. (Principal Investigator); Grant, C. K.

    1976-01-01

    The author has identified the following significant results. This stage of the study has confirmed the initial supposition that LANDSAT data could be analyzed to provide useful data on cloud amount, and that this would thereby shed useful light on the performance of the ground observer in assessing this aspect of the state of the sky. This study, in comparison with previous studies of a similar nature using data from meteorological satellites, has benefited greatly from the much higher resolution data provided by LANDSAT. This has permitted consideration of not only the overall performance of the surface observer in estimating total cloud cover, but also his performance under different sky conditions.

  7. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
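    A minimal sketch of the bootstrap construction described above: prey signatures are resampled with a chosen bootstrap sample size within each prey type, averaged, and mixed according to a known diet to form a pseudo-predator signature. The prey data, fatty-acid dimension, and fixed bootstrap sample size are placeholders; the paper's contribution is an objective rule for choosing that sample size, which is not reproduced here.

```python
import numpy as np

def pseudo_predator(prey_sigs, diet, n_boot, rng):
    """prey_sigs: dict prey -> (n_samples x n_fatty_acids) proportion matrix.
    diet: dict prey -> diet proportion (sums to 1).
    Bootstrap n_boot signatures per prey, average, then mix by the diet."""
    mix = 0.0
    for prey, proportion in diet.items():
        sigs = prey_sigs[prey]
        idx = rng.integers(0, sigs.shape[0], size=n_boot)   # bootstrap sample
        mix = mix + proportion * sigs[idx].mean(axis=0)
    return mix / mix.sum()                                  # renormalize

rng = np.random.default_rng(5)
prey_sigs = {p: rng.dirichlet(np.ones(8), size=30) for p in ("seal", "cod")}
diet = {"seal": 0.7, "cod": 0.3}
print(pseudo_predator(prey_sigs, diet, n_boot=15, rng=rng))
```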

  8. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    NASA Astrophysics Data System (ADS)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has been proven to be successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes equations solver with immersed boundaries capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the modeling error covariance matrix and its impact on the estimator performance are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
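    A toy sketch of the state-augmentation idea for a scalar linear system: an unknown constant forcing bias is appended to the state vector and estimated by a stochastic ensemble Kalman filter with perturbed observations. All dynamics, noise levels, and ensemble settings are assumptions chosen for illustration; this is a schematic of the approach, not the Navier-Stokes coupling described above.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b_true = 0.95, 0.5               # dynamics coefficient and unknown forcing bias
q, r = 0.05, 0.2                    # process / observation noise std devs
n_ens, n_steps = 50, 200

# Truth and noisy observations of the state only
x, obs = 0.0, []
for _ in range(n_steps):
    x = a * x + b_true + rng.normal(0.0, q)
    obs.append(x + rng.normal(0.0, r))

# Augmented ensemble: row 0 = state x, row 1 = bias b
ens = np.vstack([rng.normal(0.0, 1.0, n_ens),      # x members
                 rng.normal(0.0, 1.0, n_ens)])     # b members (prior ~ N(0, 1))
H = np.array([[1.0, 0.0]])                         # we observe x only

for y in obs:
    # Forecast: propagate x with each member's bias estimate; bias is persistent
    ens[0] = a * ens[0] + ens[1] + rng.normal(0.0, q, n_ens)
    # Analysis: stochastic EnKF update with perturbed observations
    P = np.cov(ens)                                # 2x2 sample covariance
    K = P @ H.T / (H @ P @ H.T + r**2)             # 2x1 Kalman gain
    innov = (y + rng.normal(0.0, r, n_ens)) - ens[0]
    ens += K * innov                               # broadcast over members

print("estimated bias:", ens[1].mean(), "true bias:", b_true)
```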

  9. The Novel Nonlinear Adaptive Doppler Shift Estimation Technique and the Coherent Doppler Lidar System Validation Lidar

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.

    2006-01-01

    The signal processing aspect of a 2-m wavelength coherent Doppler lidar system under development at NASA Langley Research Center in Virginia is investigated in this paper. The lidar system is named VALIDAR (validation lidar) and its signal processing program estimates and displays various wind parameters in real-time as data acquisition occurs. The goal is to improve the quality of the current estimates such as power, Doppler shift, wind speed, and wind direction, especially in the low signal-to-noise-ratio (SNR) regime. A novel Nonlinear Adaptive Doppler Shift Estimation Technique (NADSET) is developed for this purpose and its performance is analyzed using the wind data acquired over a long period of time by VALIDAR. The quality of Doppler shift and power estimations by conventional Fourier-transform-based spectrum estimation methods deteriorates rapidly as SNR decreases. NADSET compensates for such deterioration in the quality of wind parameter estimates by adaptively utilizing the statistics of Doppler shift estimates in a strong SNR range and identifying sporadic range bins where good Doppler shift estimates are found. The effectiveness of NADSET is demonstrated by comparing the trend of wind parameters with and without NADSET applied to the long-period lidar return data.

  10. Monte Carlo Simulation of a 12 MeV Cargo Container Inspection System

    NASA Astrophysics Data System (ADS)

    Ozcan, Ibrahim; Chandler, Katherine; Spaulding, Randy; Farfan, Eduardo

    2007-05-01

    After the terrorist events of 9/11, border security has become one of the most important issues in national security due to the large number of cargo containers entering the country. Screening of all cargo containers for nuclear materials should be performed during border inspections. The technical aspects of inspecting cargo containers using electron accelerators have been studied previously. However, the radiological protection aspects involved in these studies have not been fully considered. This screening process may accidentally harm operators, workers, and bystanders, as well as stowaways hiding inside the containers. In this research project, external doses were estimated at various locations near the inspection system. A 12-MeV linear accelerator (LINAC) was used in the experiment. The relationship between the various locations and doses was determined in this simulation. The simulation was performed using MCNPX. To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2007.NWS07.B2.8

  11. Locomotion in labrid fishes: implications for habitat use and cross-shelf biogeography on the Great Barrier Reef

    NASA Astrophysics Data System (ADS)

    Bellwood, D.; Wainwright, P.

    2001-09-01

    Coral reefs exhibit marked zonation patterns within single reefs and across continental shelves. For sessile organisms these zones are often related to wave exposure. We examined the extent to which wave exposure may shape the distribution patterns of fishes. We documented the distribution of 98 species of wrasses and parrotfishes at 33 sites across the Great Barrier Reef. The greatest difference between labrid assemblages was at the habitat level, with exposed reef flats and crests on mid- and outer reefs possessing a distinct faunal assemblage. These exposed sites were dominated by individuals with high pectoral fin aspect ratios, i.e. fishes believed to be capable of lift-based swimming which often achieve high speeds. Overall, there was a strong correlation between estimated swimming performance, as indicated by fin aspect ratio, and degree of water movement. We propose that swimming performance in fishes limits access to high-energy locations and may be a significant factor influencing habitat use and regional biogeography of reef fishes.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Alasdair J.; Viswanathan, Vilayanur V.; Stephenson, David E.

    A robust performance-based cost model is developed for all-vanadium, iron-vanadium and iron-chromium redox flow batteries. Systems aspects such as shunt current losses, pumping losses and thermal management are accounted for. The objective function, set to minimize system cost, allows determination of stack design and operating parameters such as current density, flow rate and depth of discharge (DOD). Component costs obtained from vendors are used to calculate system costs for various time frames. Data from a 2-kW stack were used to estimate unit energy costs, which were compared with model estimates for the same size electrodes. The tool has been shared with the redox flow battery community to both validate their stack data and guide future direction.

  13. Theoretical stability in coefficient inverse problems for general hyperbolic equations with numerical reconstruction

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Liu, Yikan; Yamamoto, Masahiro

    2018-04-01

    In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.
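    In the notation assumed here (not taken verbatim from the article), with u(f) the solution of the hyperbolic problem for a candidate coefficient component f, u_obs the partial boundary or interior measurement, and alpha > 0 the regularization weight, a Tikhonov functional of the kind described above has the generic form below; setting its first variation to zero yields a Poisson-type problem that updates f at each iteration.

```latex
% Generic Tikhonov functional penalizing the gradient of the unknown f
\min_{f}\; J(f) \;=\; \tfrac{1}{2}\,\bigl\| u(f) - u_{\mathrm{obs}} \bigr\|_{L^{2}}^{2}
                 \;+\; \tfrac{\alpha}{2}\,\bigl\| \nabla f \bigr\|_{L^{2}}^{2} .
% Stationarity, J'(f)[\delta f] = 0 for all admissible \delta f, gives a
% variational equation whose regularization part is the weak form of
%   -\alpha\,\Delta f^{(k+1)} = g^{(k)},
% where g^{(k)} collects the data-misfit terms from the current iterate, so each
% iteration updates f by solving a Poisson equation with suitable boundary data.
```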

  14. Nuclear DNA amounts in angiosperms.

    PubMed

    Bennett, M D; Smith, J B

    1976-05-27

    The number of angiosperm species for which nuclear DNA amount estimates have been made has nearly trebled since the last collected lists of such values were published, and therefore, publication of a more comprehensive list is overdue. This paper lists absolute nuclear DNA amounts for 753 angiosperm species. The data were assembled primarily for reference purposes, and so the species are listed in alphabetical order, as this was felt to be more helpful to cyto- and biochemists who, it is anticipated, will be among its major users. The paper also reviews aspects of the history, nomenclature, methods, accuracy and problems of nuclear DNA estimation in angiosperms. No attempt is made to reconsider those aspects of nuclear DNA estimation which have been fully reviewed previously, although the bibliography of such aspects is given. Instead, the paper is intended as a source of basic information regarding the terminology, practice and limitations of nuclear DNA estimation, especially by Feulgen microdensitometry, as currently practiced.

  15. A multimodal approach to estimating vigilance using EEG and forehead EOG

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-Long; Lu, Bao-Liang

    2017-04-01

    Objective. Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. Approach. The PERCLOS index as vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamic changing process because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. Main results. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, EOG and EEG contain complementary information for vigilance estimation, and the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities are increased, while gamma frequency activities are decreased in drowsy states in contrast to awake states. Significance. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparative performance using only four shared electrodes in comparison with the temporal and posterior sites.

  16. Conventional Rapid Latex Agglutination in Estimation of von Willebrand Factor: Method Revisited and Potential Clinical Applications

    PubMed Central

    Che Hussin, Che Maraina

    2014-01-01

    Measurement of von Willebrand factor antigen (VWF : Ag) levels is usually performed in a specialised laboratory which limits its application in routine clinical practice. So far, no commercial rapid test kit is available for VWF : Ag estimation. This paper discusses the technical aspect of latex agglutination method which was established to suit the purpose of estimating von Willebrand factor (VWF) levels in the plasma sample. The latex agglutination test can be performed qualitatively and semiquantitatively. Reproducibility, stability, linearity, limit of detection, interference, and method comparison studies were conducted to evaluate the performance of this test. Semiquantitative latex agglutination test was strongly correlated with the reference immunoturbidimetric assay (Spearman's rho = 0.946, P < 0.001, n = 132). A substantial agreement (κ = 0.77) was found between qualitative latex agglutination test and the reference assay. Using the scoring system for the rapid latex test, no agglutination is with 0% VWF : Ag (control negative), 1+ reaction is equivalent to <20% VWF : Ag, and 4+ reaction indicates >150% VWF : Ag (when comparing with immunoturbidimetric assay). The findings from evaluation studies suggest that latex agglutination method is suitable to be used as a rapid test kit for the estimation of VWF : Ag levels in various clinical conditions associated with high levels and low levels of VWF : Ag. PMID:25759835

  17. Program to develop a performance and heat load prediction system for multistage turbines

    NASA Technical Reports Server (NTRS)

    Sharma, OM

    1994-01-01

    Flows in low-aspect-ratio turbines, such as the SSME fuel turbine, are three dimensional and highly unsteady due to the relative motion of adjacent airfoil rows and the circumferential and spanwise gradients in total pressure and temperature. The systems used to design these machines, however, are based on the assumption that the flow is steady. The codes utilized in these design systems are calibrated against turbine rig and engine data through the use of empirical correlations and experience factors. For high-aspect-ratio turbines, these codes yield reasonably accurate estimates of flow and temperature distributions. However, future design trends will see lower aspect ratios (reduced number of parts) and higher inlet temperatures, which will result in increased three dimensionality and flow unsteadiness in turbines. Analysis of recently acquired data indicates that temperature streaks and secondary flows generated in combustors and upstream airfoils can have a large impact on the time-averaged temperature and angle distributions in downstream airfoil rows.

  18. Decentralization, stabilization, and estimation of large-scale linear systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1976-01-01

    In this short paper, we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems. Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding the design of a single estimator for the overall system.

  19. The effect of heart motion on parameter bias in dynamic cardiac SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ross, S.G.; Gullberg, G.T.; Huesman, R.H.

    1996-12-31

    Dynamic cardiac SPECT can be used to estimate kinetic rate parameters which describe the wash-in and wash-out of tracer activity between the blood and the myocardial tissue. These kinetic parameters can in turn be correlated to myocardial perfusion. There are, however, many physical aspects associated with dynamic SPECT which can introduce errors into the estimates. This paper describes a study which investigates the effect of heart motion on kinetic parameter estimates. Dynamic SPECT simulations are performed using a beating version of the MCAT phantom. The results demonstrate that cardiac motion has a significant effect on the blood, tissue, and background content of regions of interest. This in turn affects estimates of wash-in, while it has very little effect on estimates of wash-out. The effect of cardiac motion on parameter estimates appears not to be as great as effects introduced by photon noise and geometric collimator response. It is also shown that cardiac motion results in little extravascular contamination of the left ventricle blood region of interest.
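    For readers unfamiliar with the kinetic model whose parameters are being estimated, the sketch below integrates a generic one-compartment wash-in/wash-out model with an assumed blood input function; the rate constants and input curve are illustrative only and unrelated to the MCAT simulations.

```python
import numpy as np

def tissue_curve(t, blood, k_in, k_out):
    """One-compartment model dC_t/dt = k_in*C_b(t) - k_out*C_t(t),
    integrated with a simple forward-Euler step."""
    c = np.zeros_like(t)
    for i in range(1, t.size):
        dt = t[i] - t[i - 1]
        c[i] = c[i - 1] + dt * (k_in * blood[i - 1] - k_out * c[i - 1])
    return c

t = np.linspace(0.0, 20.0, 401)                  # minutes
blood = np.exp(-t / 3.0)                         # assumed blood input function
tissue = tissue_curve(t, blood, k_in=0.6, k_out=0.15)
print(tissue[::80])                              # wash-in followed by wash-out
```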

  20. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior wavelets were used to estimate the time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for the quantification of different aspects of human time-varying manual control. This paper presents two methods for the estimation of time-varying pilot model parameters, a two-step method using wavelets and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.
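    The windowed-estimation idea can be illustrated with a reduced example: a sliding window of input/output data is used to re-fit, by least squares, the gains of an assumed proportional-plus-rate pilot model whose true gains change partway through the run. This is only a schematic of windowed identification, not the paper's wavelet or maximum likelihood implementation, and the model structure and signals are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n = 0.01, 6000
t = np.arange(n) * dt
e = np.sin(0.7 * t) + 0.5 * np.sin(2.3 * t)        # tracking error signal
edot = np.gradient(e, dt)

# Time-varying "pilot" gains (assumed ground truth) plus remnant-like noise
Kp = 1.0 + 0.5 * (t > 30.0)                        # step change in gain
Kd = 0.3
u = Kp * e + Kd * edot + rng.normal(0.0, 0.05, n)

# Sliding-window least squares re-estimates the gains over time
win = 500                                          # samples per window (5 s)
for start in range(0, n - win, 1500):
    sl = slice(start, start + win)
    X = np.column_stack([e[sl], edot[sl]])
    kp_hat, kd_hat = np.linalg.lstsq(X, u[sl], rcond=None)[0]
    print(f"t={t[start]:5.1f}s  Kp~{kp_hat:.2f}  Kd~{kd_hat:.2f}")
```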

  1. Parameter estimation of kinetic models from metabolic profiles: two-phase dynamic decoupling method.

    PubMed

    Jia, Gengjie; Stephanopoulos, Gregory N; Gunawan, Rudiyanto

    2011-07-15

    Time-series measurements of metabolite concentration have become increasingly common, providing data for building kinetic models of metabolic networks using ordinary differential equations (ODEs). In practice, however, such time-course data are usually incomplete and noisy, and the estimation of kinetic parameters from these data is challenging. Practical limitations due to data and computational aspects, such as solving stiff ODEs and finding the globally optimal solution to the estimation problem, motivate the development of a new estimation procedure that can circumvent some of these constraints. In this work, an incremental and iterative parameter estimation method is proposed that combines and iterates between two estimation phases. One phase involves a decoupling method, in which a subset of model parameters that are associated with measured metabolites is estimated using the minimization of slope errors. Another phase follows, in which the ODE model is solved one equation at a time and the remaining model parameters are obtained by minimizing concentration errors. The performance of this two-phase method was tested on a generic branched metabolic pathway and the glycolytic pathway of Lactococcus lactis. The results showed that the method is efficient in obtaining accurate parameter estimates, even when some information is missing.
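    A toy sketch of the two-phase idea on a single measured metabolite: phase 1 fits parameters by matching slopes estimated from the data (the decoupling step), and phase 2 refines them by simulating the ODE and minimizing concentration errors, warm-started from phase 1. The one-equation model, noise level, and optimizer choices are placeholders, not the paper's algorithm.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

rng = np.random.default_rng(8)
t = np.linspace(0.0, 10.0, 25)
k_true = np.array([1.0, 0.4])                        # production, consumption rates
c_true = odeint(lambda c, t, k1, k2: k1 - k2 * c, 0.2, t, args=tuple(k_true)).ravel()
c_obs = c_true + rng.normal(0.0, 0.02, t.size)       # noisy measurements

# Phase 1: decoupled slope fit -- match dC/dt estimated from the data
slopes = np.gradient(c_obs, t)
phase1 = minimize(lambda k: np.sum((slopes - (k[0] - k[1] * c_obs)) ** 2),
                  x0=[0.5, 0.5]).x

# Phase 2: simulate the ODE and minimize concentration errors, warm-started
def conc_error(k):
    sim = odeint(lambda c, t, k1, k2: k1 - k2 * c, c_obs[0], t, args=tuple(k)).ravel()
    return np.sum((sim - c_obs) ** 2)

phase2 = minimize(conc_error, x0=phase1).x
print("phase 1:", phase1, "phase 2:", phase2, "true:", k_true)
```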

  2. A statistical characterization of the finger tapping test: modeling, estimation, and applications.

    PubMed

    Austin, Daniel; McNames, James; Klein, Krystal; Jimison, Holly; Pavel, Misha

    2015-03-01

    Sensory-motor performance is indicative of both cognitive and physical function. The Halstead-Reitan finger tapping test is a measure of sensory-motor speed commonly used to assess function as part of a neuropsychological evaluation. Despite the widespread use of this test, the underlying motor and cognitive processes driving tapping behavior during the test are not well characterized or understood. This lack of understanding may make clinical inferences from test results about health or disease state less accurate because important aspects of the task such as variability or fatigue are unmeasured. To overcome these limitations, we enhanced the tapper with a sensor that enables us to more fully characterize all the aspects of tapping. This modification enabled us to decompose the tapping performance into six component phases and represent each phase with a set of parameters having clear functional interpretation. This results in a set of 29 total parameters for each trial, including change in tapping over time, and trial-to-trial and tap-to-tap variability. These parameters can be used to more precisely link different aspects of cognition or motor function to tapping behavior. We demonstrate the benefits of this new instrument with a simple hypothesis-driven trial comparing single and dual-task tapping.

  3. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  4. Counting defects in an instantaneous quench.

    PubMed

    Ibaceta, D; Calzetta, E

    1999-09-01

    We consider the formation of defects in a nonequilibrium second-order phase transition induced by an instantaneous quench to zero temperature in a type II superconductor. We perform a full nonlinear simulation where we follow the evolution in time of the local order parameter field. We determine how far into the phase transition theoretical estimates of the defect density based on the Gaussian approximation yield a reliable prediction for the actual density. We also characterize quantitatively some aspects of the out of equilibrium phase transition.

  5. Development and Testing of a Novel Standard Particle for Performance Verification of Biodefense/Bioterrorism Detection Systems

    DTIC Science & Technology

    2003-11-19


  6. Solar power satellite system definition study. Part 2, volume 4: Microwave power transmission systems

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A slotted waveguide planar array was established as the baseline design for the spaceborne transmitter antenna. Key aspects of efficient energy conversion at both ends of the power transfer link were analyzed and optimized; alternate approaches in the areas of antenna and tube design are discussed. An integrated design concept was developed which meets design requirements, observes structural and thermal constraints, exhibits good performance, and was developed in adequate depth to permit cost estimation at the subsystem/component level.

  7. Sub-Second Parallel State Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days’ worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today’s commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator’s single estimate. There are other benefits of the sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements, and it provides opportunities to enhance power grid reliability and efficiency. PSE also can enable other advanced tools that rely on SE outputs and could be used to further improve operators’ actions and automated controls to mitigate effects of severe events on the grid. The power grid continues to grow and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly complex power grid.
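    State estimation itself, the computation that the PSE tool parallelizes, reduces in the simplest DC formulation to a weighted least-squares fit of bus angles to measurements. The three-bus example below is purely illustrative (assumed susceptances, measurements, and noise) and says nothing about PNNL's solver; it only shows the kind of problem being solved and the residuals that bad-data checks operate on.

```python
import numpy as np

# DC state estimation: measurements z = H x + e, minimize (z - Hx)' W (z - Hx).
# Three-bus toy system, bus 0 is the angle reference; x = [theta_1, theta_2].
# Assumed line susceptances (per unit): b01 = 10, b02 = 5, b12 = 8.
H = np.array([
    [-10.0,  0.0],   # flow 0->1 = b01*(theta0 - theta1) = -10*theta1
    [  0.0, -5.0],   # flow 0->2 = b02*(theta0 - theta2)
    [  8.0, -8.0],   # flow 1->2 = b12*(theta1 - theta2)
    [ 10.0,  0.0],   # flow 1->0 (redundant measurement)
])
x_true = np.array([-0.05, -0.10])                    # angles in radians
rng = np.random.default_rng(9)
z = H @ x_true + rng.normal(0.0, 0.01, H.shape[0])   # noisy measurements
W = np.diag(np.full(H.shape[0], 1.0 / 0.01**2))      # weights = 1 / sigma^2

# Weighted least-squares solution via the normal (gain) equations
G = H.T @ W @ H
x_hat = np.linalg.solve(G, H.T @ W @ z)
residuals = z - H @ x_hat                            # basis for bad-data checks
print("estimated angles:", x_hat, "residuals:", residuals)
```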

  8. Aspect-related Vegetation Differences Amplify Soil Moisture Variability in Semiarid Landscapes

    NASA Astrophysics Data System (ADS)

    Yetemen, O.; Srivastava, A.; Kumari, N.; Saco, P. M.

    2017-12-01

    Soil moisture variability (SMV) in semiarid landscapes is affected by vegetation, soil texture, climate, aspect, and topography. The heterogeneity in vegetation cover that results from the effects of microclimate, terrain attributes (slope gradient, aspect, drainage area, etc.), soil properties, and spatial variability in precipitation has been reported to act as the dominant factor modulating SMV in semiarid ecosystems. However, the role of hillslope aspect in SMV, though reported in many field studies, has not received the same degree of attention, probably due to the lack of extensive large datasets. Numerical simulations can then be used to elucidate the contribution of aspect-driven vegetation patterns to this variability. In this work, we perform a sensitivity analysis to study the variables driving SMV using the CHILD landscape evolution model equipped with a spatially-distributed solar-radiation component that couples vegetation dynamics and surface hydrology. To explore how aspect-driven vegetation heterogeneity contributes to the SMV, CHILD was run using a range of parameters selected to reflect different scenarios (from uniform to heterogeneous vegetation cover). Throughout the simulations, the spatial distribution of soil moisture and vegetation cover are computed to estimate the corresponding coefficients of variation. Under the uniform spatial precipitation forcing and uniform soil properties, the factors affecting the spatial distribution of solar insolation are found to play a key role in the SMV through the emergence of aspect-driven vegetation patterns. Hence, factors such as catchment gradient, aspect, and latitude define water stress and vegetation growth, and in turn affect the available soil moisture content. Interestingly, changes in soil properties (porosity, root depth, and pore-size distribution) over the domain are not as effective as the other factors. These findings show that the factors associated with aspect-related vegetation differences amplify the soil moisture variability of semi-arid landscapes.

  9. Analysis of scanner data for crop inventories

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Cicone, R. C.; Kauth, R. J.; Malila, W. A.

    1981-01-01

    Progress and technical issues are reported in the development of corn/soybean area estimation procedures for use on data from South America, with particular emphasis on Argentina. Aspects related to the supporting research section of the AgRISTARS Project discussed include: (1) multisegment corn/soybean estimation; (2) through-the-season separability of corn and soybeans within the U.S. corn belt; (3) TTS estimation; (4) insights derived from the baseline corn and soybean procedure; (5) small fields research; and (6) simulating the spectral appearance of wheat as a function of its growth and development. To assist foreign commodity production forecasting, the performance of the baseline corn/soybean procedure was analyzed and the procedure was modified. Fundamental limitations were found in the existing guidelines for discriminating these two crops. The temporal and spectral characteristics of corn and soybeans must be determined because other crops grow with them in Argentina. The state of software technology is assessed, and the use of profile techniques for estimation is considered.

  10. Chasing maximal performance: a cautionary tale from the celebrated jumping frogs of Calaveras County.

    PubMed

    Astley, H C; Abbott, E M; Azizi, E; Marsh, R L; Roberts, T J

    2013-11-01

    Maximal performance is an essential metric for understanding many aspects of an organism's biology, but it can be difficult to determine because a measured maximum may reflect only a peak level of effort, not a physiological limit. We used a unique opportunity provided by a frog jumping contest to evaluate the validity of existing laboratory estimates of maximum jumping performance in bullfrogs (Rana catesbeiana). We recorded video of 3124 bullfrog jumps over the course of the 4-day contest at the Calaveras County Jumping Frog Jubilee, and determined jump distance from these images and a calibration of the jump arena. Frogs were divided into two groups: 'rental' frogs collected by fair organizers and jumped by the general public, and frogs collected and jumped by experienced, 'professional' teams. A total of 58% of recorded jumps surpassed the maximum jump distance in the literature (1.295 m), and the longest jump was 2.2 m. Compared with rental frogs, professionally jumped frogs jumped farther, and the distribution of jump distances for this group was skewed towards long jumps. Calculated muscular work, historical records and the skewed distribution of jump distances all suggest that the longest jumps represent the true performance limit for this species. Using resampling, we estimated the probability of observing a given jump distance for various sample sizes, showing that large sample sizes are required to detect rare maximal jumps. These results show the importance of sample size, animal motivation and physiological conditions for accurate maximal performance estimates.
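
    The resampling step can be sketched as a simple bootstrap: draw samples of size n from the recorded jump distances and ask how often the sample maximum reaches a given threshold. The snippet below uses a synthetic stand-in distribution (the contest data are not reproduced here) and the 1.295 m literature value only as an example threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the 3124 recorded jump distances (metres).
jump_distances = rng.gamma(shape=8.0, scale=0.17, size=3124)

def p_detect(distances, threshold, n, n_boot=2000):
    """Probability that a random sample of n jumps contains one >= threshold."""
    hits = 0
    for _ in range(n_boot):
        sample = rng.choice(distances, size=n, replace=True)
        hits += sample.max() >= threshold
    return hits / n_boot

for n in (5, 20, 100, 500):
    print(n, p_detect(jump_distances, threshold=1.295, n=n))
```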

  11. Toward Automatic Verification of Goal-Oriented Flow Simulations

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2014-01-01

    We demonstrate the power of adaptive mesh refinement with adjoint-based error estimates in verification of simulations governed by the steady Euler equations. The flow equations are discretized using a finite volume scheme on a Cartesian mesh with cut cells at the wall boundaries. The discretization error in selected simulation outputs is estimated using the method of adjoint-weighted residuals. Practical aspects of the implementation are emphasized, particularly in the formulation of the refinement criterion and the mesh adaptation strategy. Following a thorough code verification example, we demonstrate simulation verification of two- and three-dimensional problems. These involve an airfoil performance database, a pressure signature of a body in supersonic flow and a launch abort with strong jet interactions. The results show reliable estimates and automatic control of discretization error in all simulations at an affordable computational cost. Moreover, the approach remains effective even when theoretical assumptions, e.g., steady-state and solution smoothness, are relaxed.
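
    For reference, the adjoint-weighted residual estimate is commonly written in the following generic form (the notation here is not necessarily the paper's): if $u_h^H$ is the current solution mapped to an embedded finer discretization, $R_h$ the fine-space residual operator, and $\psi_h$ the adjoint solution associated with output $J$, then

    $$ J_h(u_h) \;\approx\; J_h(u_h^H) - \psi_h^{\mathsf{T}} R_h(u_h^H), $$

    so the cell-wise contributions to the correction term $\psi_h^{\mathsf{T}} R_h(u_h^H)$ serve both as an error estimate for $J$ and as the refinement indicator.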

  12. Semi-quantitative estimation of cellular SiO2 nanoparticles using flow cytometry combined with X-ray fluorescence measurements.

    PubMed

    Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun

    2014-09-01

    In this study, we demonstrated the feasibility of a semi-quantitative approach for the estimation of cellular SiO2 nanoparticles (NPs), based on flow cytometry measurements of their normalized side scattering intensity. To improve our understanding of the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed for HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, a semi-quantitative method for estimating cellular SiO2 NPs from their normalized side scattering and core diameters was proposed, which can be applied to determine cellular SiO2 NPs within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.

  13. The performance of the CASTOR calorimeter during LHC Run 2

    NASA Astrophysics Data System (ADS)

    van de Klundert, Merijn H. F.; CMS Collaboration

    2017-11-01

    CASTOR is an electromagnetic and hadronic tungsten-quartz sampling Cerenkov calorimeter located at the Compact Muon Solenoid experiment at the Large Hadron Collider. The detector spans pseudorapidities between -6.6 and -5.2. An overview is presented of the various aspects of CASTOR's performance and their relations during LHC Run 2. The equalisation of CASTOR's channels is performed using beam-halo muons. Thereafter, CASTOR's pedestal spectrum is studied. It is shown that noise estimates extracted using a fit give, on average, a 10% lower threshold than statistical estimates. Gain correction factors, which are needed for the intercalibration, are obtained using a statistical, in-situ applicable method. The results of this method are shown to be reasonably consistent with laboratory measurements. The absolute calibration is then discussed, with emphasis on the relation between the scale uncertainty and CASTOR's alignment. It is shown that the alignment's contribution to the systematic uncertainty is reduced by over 50% in LHC Run 2 with respect to LHC Run 1. Finally, generalisations of the conclusions to other subsystems and future improvements are discussed.

  14. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    PubMed

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects, and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC), and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role, and that the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
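
    A single cell of such a factorial comparison can be reproduced with standard tooling. The sketch below uses mean-centering and scaling as a generic stand-in for the paper's pre-processing methods (OSC, EMSC, and OPLEC are not available in scikit-learn), applies no wavelength selection, and fits PLS with the number of latent variables chosen by cross-validation on synthetic spectra.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder spectra: 120 samples x 200 wavelengths.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
y = 2.0 * X[:, 10] + X[:, 50] + rng.normal(scale=0.1, size=120)

pipe = Pipeline([("scale", StandardScaler()), ("pls", PLSRegression())])
search = GridSearchCV(
    pipe,
    param_grid={"pls__n_components": range(1, 11)},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, "CV RMSE:", -search.best_score_)
```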

  15. SU-F-P-19: Fetal Dose Estimate for a High-Dose Fluoroscopy Guided Intervention Using Modern Data Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moirano, J

    Purpose: An accurate dose estimate is necessary for effective patient management after a fetal exposure. In the case of a high-dose exposure, it is critical to use all resources available in order to make the most accurate assessment of the fetal dose. This work will demonstrate a methodology for accurate fetal dose estimation using tools that have recently become available in many clinics, and show examples of best practices for collecting data and performing the fetal dose calculation. Methods: A fetal dose estimate calculation was performed using modern data collection tools to determine parameters for the calculation. The reference point air kerma as displayed by the fluoroscopic system was checked for accuracy. A cumulative dose incidence map and DICOM header mining were used to determine the displayed reference point air kerma. Corrections for attenuation caused by the patient table and pad were measured and applied in order to determine the peak skin dose. The position and depth of the fetus were determined by ultrasound imaging and consultation with a radiologist. The data collected were used to determine a normalized uterus dose from Monte Carlo simulation data. Fetal dose values from this process were compared to other accepted calculation methods. Results: An accurate high-dose fetal dose estimate was made. Comparisons to accepted legacy methods were within 35% of the estimated values. Conclusion: Modern data collection and reporting methods ease the process of estimating fetal dose from interventional fluoroscopy exposures. Many aspects of the calculation can now be quantified rather than estimated, which should allow for a more accurate estimation of fetal dose.
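
    The arithmetic chain described above reduces to a few multiplications; all numbers below are placeholders rather than values from the report, and corrections such as backscatter, inverse-square falloff, and field overlap are deliberately ignored in this rough sketch.

```python
# Placeholder inputs -- not data from the report.
k_ref_gy = 2.4        # displayed cumulative reference-point air kerma (Gy)
t_table_pad = 0.75    # measured transmission through the table and pad
nud = 0.10            # normalized uterus dose per unit entrance dose, read from
                      # a Monte Carlo table at the ultrasound-estimated depth

entrance_dose_gy = k_ref_gy * t_table_pad     # crude entrance/peak-skin surrogate
fetal_dose_gy = entrance_dose_gy * nud
print(f"estimated fetal dose ~ {fetal_dose_gy * 1000:.0f} mGy")
```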

  16. Life cycle assessment of lignocellulosic ethanol: a review of key factors and methods affecting calculated GHG emissions and energy use.

    PubMed

    Gerbrandt, Kelsey; Chu, Pei Lin; Simmonds, Allison; Mullins, Kimberley A; MacLean, Heather L; Griffin, W Michael; Saville, Bradley A

    2016-04-01

    Lignocellulosic ethanol has potential for lower life cycle greenhouse gas emissions compared to gasoline and conventional grain-based ethanol. Ethanol production 'pathways' need to meet economic and environmental goals. Numerous life cycle assessments of lignocellulosic ethanol have been published over the last 15 years, but gaps remain in understanding life cycle performance due to insufficient data, and model and methodological issues. We highlight key aspects of these issues, drawing on literature and a case study of corn stover ethanol. Challenges include the complexity of feedstock/ecosystems and market-mediated aspects and the short history of commercial lignocellulosic ethanol facilities, which collectively have led to uncertainty in GHG emissions estimates, and to debates on LCA methods and the role of uncertainty in decision making. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Numerical simulation and experimental validation of Lamb wave propagation behavior in composite plates

    NASA Astrophysics Data System (ADS)

    Kim, Sungwon; Uprety, Bibhisha; Mathews, V. John; Adams, Daniel O.

    2015-03-01

    Structural Health Monitoring (SHM) based on Acoustic Emission (AE) is dependent on both the sensors to detect an impact event as well as an algorithm to determine the impact location. The propagation of Lamb waves produced by an impact event in thin composite structures is affected by several unique aspects including material anisotropy, ply orientations, and geometric discontinuities within the structure. The development of accurate numerical models of Lamb wave propagation has important benefits towards the development of AE-based SHM systems for impact location estimation. Currently, many impact location algorithms utilize the time of arrival or velocities of Lamb waves. Therefore the numerical prediction of characteristic wave velocities is of great interest. Additionally, the propagation of the initial symmetric (S0) and asymmetric (A0) wave modes is important, as these wave modes are used for time of arrival estimation. In this investigation, finite element analyses were performed to investigate aspects of Lamb wave propagation in composite plates with active signal excitation. A comparative evaluation of two three-dimensional modeling approaches was performed, with emphasis placed on the propagation and velocity of both the S0 and A0 wave modes. Results from numerical simulations are compared to experimental results obtained from active AE testing. Of particular interest is the directional dependence of Lamb waves in quasi-isotropic carbon/epoxy composite plates. Numerical and experimental results suggest that although a quasi-isotropic composite plate may have the same effective elastic modulus in all in-plane directions, the Lamb wave velocity may have some directional dependence. Further numerical analyses were performed to investigate Lamb wave propagation associated with circular cutouts in composite plates.
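
    As a toy illustration of the time-of-arrival location schemes mentioned above (not the authors' algorithm), the sketch below grid-searches an impact position on an isotropic plate using arrival-time differences and a single assumed group velocity; the directional dependence reported in the paper is precisely what such a simplified model misses.

```python
import numpy as np

# Hypothetical sensor layout (metres) and assumed A0 group velocity (m/s).
sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
v = 1500.0
src_true = np.array([0.31, 0.12])
t0 = 1.0e-3                                  # unknown emission time
toa = t0 + np.linalg.norm(sensors - src_true, axis=1) / v

best = None
grid = np.linspace(0.0, 0.5, 101)
for x in grid:
    for y in grid:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1) / v
        # Using differences relative to sensor 0 removes the unknown t0.
        resid = (toa - toa[0]) - (d - d[0])
        err = np.sum(resid**2)
        if best is None or err < best[0]:
            best = (err, x, y)
print("estimated impact location:", best[1:], "true:", src_true)
```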

  18. A study on leakage radiation dose at ELV-4 electron accelerator bunker

    NASA Astrophysics Data System (ADS)

    Chulan, Mohd Rizal Md; Yahaya, Redzuwan; Ghazali, Abu BakarMhd

    2014-09-01

    Shielding is an important aspect of accelerator safety, and one of the most important aspects of bunker shielding is the door. The bunker's door should be designed properly so that the leakage radiation does not exceed the permitted limit of 2.5 μSv/hr. To determine the leakage radiation dose that passes through the door and the gaps between the door and the wall, 2-dimensional manual calculations are often used. This method is hard to perform because 2-dimensional visualization is limited and difficult to relate to the real geometry, so rough estimates are normally used instead. As a result, the construction cost can be higher, because over- or underestimation may require costly modification of the bunker. Therefore, in this study two methods are introduced to overcome this problem: simulation using the MCNPX Version 2.6.0 software, and manual calculation using a 3-dimensional model built in Autodesk Inventor 2010 software. The values from the two methods were eventually compared to real values from direct measurements using a Ludlum Model 3 survey meter with a Model 44-9 probe.

  19. Estimation of influential points in any data set from coefficient of determination and its leave-one-out cross-validated counterpart.

    PubMed

    Tóth, Gergely; Bodai, Zsolt; Héberger, Károly

    2013-10-01

    Coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to determine uncommon points, i.e. influential points in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares to the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns the model builders to verify the training set, to perform influence analysis, or even to change to robust modeling.
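
    For ordinary least squares the ratio is easy to reproduce, since the leave-one-out (PRESS) residuals follow directly from the hat-matrix leverages. The sketch below uses synthetic data with one planted influential point and omits the approximate F test proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 30)
y = 1.5 * x + rng.normal(scale=2.0, size=30)
y[0] += 15.0                                      # plant one influential point

X = np.column_stack([np.ones_like(x), x])         # design matrix with intercept
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
H = X @ np.linalg.inv(X.T @ X) @ X.T              # hat matrix
press = np.sum((resid / (1.0 - np.diag(H)))**2)   # leave-one-out residuals
rss = np.sum(resid**2)
tss = np.sum((y - y.mean())**2)

r2 = 1.0 - rss / tss
q2 = 1.0 - press / tss
ratio = (1.0 - q2) / (1.0 - r2)                   # equals PRESS / RSS
print(f"R2={r2:.3f}  Q2={q2:.3f}  (1-Q2)/(1-R2)={ratio:.2f}")
```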

  20. Improving Space Project Cost Estimating with Engineering Management Variables

    NASA Technical Reports Server (NTRS)

    Hamaker, Joseph W.; Roth, Axel (Technical Monitor)

    2001-01-01

    Current space project cost models attempt to predict space flight project cost via regression equations, which relate the cost of projects to technical performance metrics (e.g. weight, thrust, power, pointing accuracy, etc.). This paper examines the introduction of engineering management parameters to the set of explanatory variables. A number of specific engineering management variables are considered and exploratory regression analysis is performed to determine if there is statistical evidence for cost effects apart from technical aspects of the projects. It is concluded that there are other non-technical effects at work and that further research is warranted to determine if it can be shown that these cost effects are definitely related to engineering management.

  1. Calibration of the computer model describing flows in the water supply system; example of the application of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Orłowska-Szostak, Maria; Orłowski, Ryszard

    2017-11-01

    The paper discusses some relevant aspects of the calibration of a computer model describing flows in a water supply system. The authors described an exemplary water supply system and used it as a practical illustration of calibration. A range of measures was discussed and applied that improve the convergence and effective use of the calculations in the calibration process, and thereby the validity of the results obtained. The processing of the measurement results, i.e. the estimation of pipe roughnesses, was performed using a genetic algorithm implemented in software developed by the Resan Labs company from Brazil.
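
    The Resan Labs software is not described here, so the following is only a generic sketch of the idea: a toy genetic algorithm adjusts Hazen-Williams roughness coefficients of three hypothetical pipes in series until the computed head losses match "observed" ones.

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.array([500.0, 800.0, 300.0])    # pipe lengths, m (placeholders)
D = np.array([0.30, 0.25, 0.20])       # pipe diameters, m (placeholders)
Q = 0.05                               # common flow rate, m^3/s

def head_losses(C):
    # Hazen-Williams head loss per pipe, SI form.
    return 10.67 * L * Q**1.852 / (C**1.852 * D**4.8704)

C_true = np.array([130.0, 95.0, 110.0])
h_obs = head_losses(C_true) + rng.normal(scale=0.02, size=3)   # "measurements"

def fitness(C):
    return -np.sum((head_losses(C) - h_obs)**2)

pop = rng.uniform(60.0, 150.0, size=(60, 3))        # initial population of C sets
for _ in range(200):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    parents = pop[order[:20]]                        # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b)  # uniform crossover
        child += rng.normal(scale=2.0, size=3)       # Gaussian mutation
        children.append(np.clip(child, 60.0, 150.0))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("estimated roughness coefficients:", np.round(best, 1))
```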

  2. Computational Infrastructure for Engine Structural Performance Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Selected computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefits estimation for new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively, these codes constitute a unique infrastructure that is ready to credibly evaluate new and future engine structural concepts throughout the development cycle, from initial concept, to design and fabrication, to service performance, maintenance, and repairs, to retirement for cause, and even to possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life-cycle cost of engine structures.

  3. Integrated Model for Performance Analysis of All-Optical Multihop Packet Switches

    NASA Astrophysics Data System (ADS)

    Jeong, Han-You; Seo, Seung-Woo

    2000-09-01

    The overall performance of an all-optical packet switching system is usually determined by two criteria, i.e., switching latency and packet loss rate. In some real-time applications, however, in which packets arriving later than a timeout period are discarded as lost, the packet loss rate becomes the dominant criterion for system performance. Here we focus on evaluating the performance of all-optical packet switches in terms of the packet loss rate, which normally arises from insufficient hardware or degradation of the optical signal. Considering both aspects, we propose what we believe is a new analysis model for the packet loss rate that reflects the complicated interactions between physical impairments and system-level parameters. On the basis of the estimation model for signal quality degradation in a multihop path, we construct an equivalent analysis model of a switching network for evaluating an average bit error rate. With the model constructed, we then propose an integrated model for estimating the packet loss rate in three architectural examples of multihop packet switches, each of which is based on a different switching concept. We also derive the bounds on the packet loss rate induced by bit errors. Finally, it is verified through simulation studies that our analysis model accurately predicts system performance.
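
    A common starting point for the bit-error-induced part of such a bound (not necessarily the authors' formulation) assumes independent bit errors and no error correction, so that a packet of $L$ bits is lost whenever any of its bits is corrupted:

    $$ P_{\mathrm{loss}}^{\mathrm{BER}} = 1 - (1 - p_b)^{L} \;\le\; L\,p_b, $$

    where $p_b$ is the average bit error rate delivered by the signal-degradation model; this term is then combined with the contention or buffer loss of the particular switching architecture.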

  4. Weight-elimination neural networks applied to coronary surgery mortality prediction.

    PubMed

    Ennett, Colleen M; Frize, Monique

    2003-06-01

    The objective was to assess the effectiveness of the weight-elimination cost function in improving the classification performance of artificial neural networks (ANNs) and to observe how changing the a priori distribution of the training set affects network performance. Backpropagation feedforward ANNs with and without weight-elimination estimated mortality for coronary artery surgery patients. The ANNs were trained and tested on cases with 32 input variables describing the patient's medical history; the output variable was in-hospital mortality (mortality rates: training 3.7%, test 3.8%). Artificial training sets with mortality rates of 20%, 50%, and 80% were created to observe the impact of training with a higher-than-normal prevalence. When the results were averaged, weight-elimination networks achieved higher sensitivity rates than networks without weight-elimination. Networks trained on higher-than-normal prevalence achieved higher sensitivity rates at the cost of lower specificity and correct classification. The weight-elimination cost function can improve classification performance when the network is trained with a higher-than-normal prevalence. A network trained with a moderately high artificial mortality rate (artificial mortality rate of 20%) can improve the sensitivity of the model without significantly affecting other aspects of the model's performance. The ANN mortality model achieved performance comparable to that of additive and statistical models for coronary surgery mortality estimation in the literature.
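
    For reference, the weight-elimination cost function referred to above (in its usual formulation due to Weigend and colleagues) augments the training error $E_0$ with a penalty that drives small weights toward zero while charging large weights a nearly constant cost:

    $$ E = E_0 + \lambda \sum_i \frac{w_i^2 / w_0^2}{1 + w_i^2 / w_0^2}, $$

    where $w_0$ sets the scale separating "small" from "large" weights and $\lambda$ controls the strength of the pruning pressure.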

  5. Conceptual design study of potential early commercial MHD powerplant. Report of task 2 results

    NASA Astrophysics Data System (ADS)

    Hals, F. A.

    1981-03-01

    The conceptual design of one of the potential early commercial MHD power plants was studied. The plant employs oxygen enrichment of the combustion air and preheating of this oxygen-enriched air to an intermediate temperature of 1200 F, attainable with a tubular type recuperative heat exchanger. Conceptual designs of plant components and equipment, with their performance, operational characteristics, and costs, are reported. Plant economics and overall performance, including full and part load operation, are reviewed. The projected performance and estimated cost of this early MHD plant are compared to those of conventional power plants, although it does not offer the same high efficiency and low costs as the mature MHD power plant. Environmental aspects and the methods incorporated in the plant design for emission control of sulfur and nitrogen are reviewed.

  6. Conceptual design study of potential early commercial MHD powerplant. Report of task 2 results

    NASA Technical Reports Server (NTRS)

    Hals, F. A.

    1981-01-01

    The conceptual design of one of the potential early commercial MHD power plants was studied. The plant employs oxygen enrichment of the combustion air and preheating of this oxygen-enriched air to an intermediate temperature of 1200 F, attainable with a tubular type recuperative heat exchanger. Conceptual designs of plant components and equipment, with their performance, operational characteristics, and costs, are reported. Plant economics and overall performance, including full and part load operation, are reviewed. The projected performance and estimated cost of this early MHD plant are compared to those of conventional power plants, although it does not offer the same high efficiency and low costs as the mature MHD power plant. Environmental aspects and the methods incorporated in the plant design for emission control of sulfur and nitrogen are reviewed.

  7. Functionality versus dimensionality in psychological taxonomies, and a puzzle of emotional valence

    PubMed Central

    2018-01-01

    This paper applies evolutionary and functional constructivism approaches to the discussion of psychological taxonomies, as implemented in the neurochemical model Functional Ensemble of Temperament (FET). FET asserts that neurochemical systems developed in evolution to regulate functional-dynamical aspects of construction of actions: orientation, selection (integration), energetic maintenance, and management of automatic behavioural elements. As an example, the paper reviews the neurochemical mechanisms of interlocking between emotional dispositions and performance capacities. Research shows that there are no specific neurophysiological systems of positive or negative affect, and that emotional valence is rather an integrative product of many brain systems during estimations of needs and the capacities required to satisfy these needs. The interlocking between emotional valence and functional aspects of performance appears to be only partial since all monoamine and opioid receptor systems play important roles in non-emotional aspects of behaviour, in addition to emotionality. This suggests that the Positive/Negative Affect framework for DSM/ICD classifications of mental disorders oversimplifies the structure of non-emotionality symptoms of these disorders. Contingent dynamical relationships between neurochemical systems cannot be represented by linear statistical models searching for independent dimensions (such as factor analysis); nevertheless, these relationships should be reflected in psychological and psychiatric taxonomies. This article is part of the theme issue ‘Diverse perspectives on diversity: multi-disciplinary approaches to taxonomies of individual differences’. PMID:29483351

  8. Errors in retarding potential analyzers caused by nonuniformity of the grid-plane potential.

    NASA Technical Reports Server (NTRS)

    Hanson, W. B.; Frame, D. R.; Midgley, J. E.

    1972-01-01

    One aspect of the degradation in performance of retarding potential analyzers caused by potential depressions in the retarding grid is quantitatively estimated from laboratory measurements and theoretical calculations. A simple expression is obtained that permits the use of laboratory measurements of grid properties to make first-order corrections to flight data. Systematic positive errors in ion temperature of approximately 16% for the Ogo 4 instrument and 3% for the Ogo 6 instrument are deduced. The effects of the transverse electric fields arising from the grid potential depressions are not treated.

  9. The Cost of Iraq, Afghanistan, and Other Global War on Terror Operations Since 9/11

    DTIC Science & Technology

    2014-12-08

    For FY2012 and FY2013, DOD data is as of June 2014, and so lapsed funds only reflect monies with a one-year life.

  10. Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks

    PubMed Central

    Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire

    2009-01-01

    This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
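
    The paper's sampling algorithms are data-centric and application specific; as a generic point of comparison only, classic reservoir sampling keeps a fixed-size uniform sample of a stream in a single pass, as sketched below.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)      # item i is kept with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(10_000), k=16))
```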

  11. Effects of Whitecaps on Satellite-Derived Ocean Color

    NASA Technical Reports Server (NTRS)

    Frouin, Robert

    2000-01-01

    During the 3.25 years of the project, various aspects of satellite ocean-color remote sensing were investigated, including effect of whitecaps on atmospheric correction, validity of aerosol models, and evaluation of ocean-color products. Algorithms to estimate pigment concentration and photo-synthetically active radiation (PAR) were developed, and studies of geophysical phenomena, such as the 1998 Asian Dust event, were performed. The influence of solar radiation absorption by phytoplankton on mixed layer dynamics, ocean circulation, and climate was also investigated. The project's results and findings are described.

  12. A methodology to ensure and improve accuracy of Ki67 labelling index estimation by automated digital image analysis in breast cancer tissue.

    PubMed

    Laurinavicius, Arvydas; Plancoulaine, Benoit; Laurinaviciene, Aida; Herlin, Paulette; Meskauskas, Raimundas; Baltrusaityte, Indra; Besusparis, Justinas; Dasevicius, Darius; Elie, Nicolas; Iqbal, Yasir; Bor, Catherine

    2014-01-01

    Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. Criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. A bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA by adjusting the algorithm settings were performed: first, by subjective DIA quality assessment (DIA-1), and second, to compensate for the bias established (DIA-2). Visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. ANOVA revealed significant underestimation bias (P < 0.05) for DIA-0, DIA-1 and two pathologists' VE, while DIA-2, VE-median and three other VEs were within the same range. Regression analyses revealed best accuracy for the DIA-2 (R-square = 0.90), exceeding that of VE-median, individual VEs and other DIA settings. Bidirectional bias for the DIA-2, with overestimation at the low and underestimation at the high end of the scale, was detected. Measurement error correction by inverse regression was applied to improve DIA-2-based prediction of the Ki67-Count, in particular for the clinically relevant interval of Ki67-Count < 40%. Potential clinical impact of the prediction was tested by dichotomising the cases at the cut-off values of 10, 15, and 20%. A misclassification rate of 5-7% was achieved, compared to that of 11-18% for the VE-median-based prediction. Our experiments provide a methodology to achieve accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers.
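
    The inverse-regression correction mentioned above can be sketched in a few lines: fit the DIA reading as a linear function of the reference count on the calibration set, then invert that fit to map new DIA readings back onto the reference scale. The data below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
ki67_count = rng.uniform(0, 60, 164)                    # reference (stereology), %
dia = 0.8 * ki67_count + 2.0 + rng.normal(scale=3.0, size=164)  # biased DIA reading

slope, intercept = np.polyfit(ki67_count, dia, 1)       # DIA = slope*count + intercept

def corrected(dia_value):
    """Map a DIA reading back onto the reference scale (inverse regression)."""
    return (dia_value - intercept) / slope

print(f"DIA reading of 20% maps to ~{corrected(20.0):.1f}% on the reference scale")
```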

  13. A methodology to ensure and improve accuracy of Ki67 labelling index estimation by automated digital image analysis in breast cancer tissue

    PubMed Central

    2014-01-01

    Introduction Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Methods Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. Criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. A bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA by adjusting the algorithm settings were performed: first, by subjective DIA quality assessment (DIA-1), and second, to compensate the bias established (DIA-2). Visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. Results ANOVA revealed significant underestimation bias (P < 0.05) for DIA-0, DIA-1 and two pathologists’ VE, while DIA-2, VE-median and three other VEs were within the same range. Regression analyses revealed best accuracy for the DIA-2 (R-square = 0.90) exceeding that of VE-median, individual VEs and other DIA settings. Bidirectional bias for the DIA-2 with overestimation at low, and underestimation at high ends of the scale was detected. Measurement error correction by inverse regression was applied to improve DIA-2-based prediction of the Ki67-Count, in particular for the clinically relevant interval of Ki67-Count < 40%. Potential clinical impact of the prediction was tested by dichotomising the cases at the cut-off values of 10, 15, and 20%. Misclassification rate of 5-7% was achieved, compared to that of 11-18% for the VE-median-based prediction. Conclusions Our experiments provide methodology to achieve accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers. PMID:24708745

  14. Parachute mortar design.

    NASA Technical Reports Server (NTRS)

    Pleasants, J. E.

    1973-01-01

    Mortars are used as one method for ejecting parachutes into the airstream to decelerate spacecraft and aircraft pilot escape modules and to effect spin recovery of the aircraft. An approach to design of mortars in the class that can accommodate parachutes in the 20- to 55-foot-diameter size is presented. Parachute deployment considerations are discussed. Comments are made on the design of a power unit, mortar tube, cover, and sabot. Propellant selection and breech characteristics and size are discussed. A method of estimating hardware weights and reaction load is presented. In addition, some aspects of erodible orifices are given as well as comments concerning ambient effects on performance. This paper collates data and experience from design and flight qualification of four mortar systems, and provides pertinent estimations that should be of interest on programs considering parachute deployment.

  15. On virial analysis at low aspect ratio

    DOE PAGES

    Bongard, Michael W.; Barr, Jayson L.; Fonck, Raymond J.; ...

    2016-07-28

    The validity of virial analysis to infer the global MHD equilibrium poloidal beta β_p and internal inductance ℓ_i from external magnetics measurements is examined for low aspect ratio configurations with A < 2. Numerical equilibrium studies at varied aspect ratio are utilized to validate the technique at finite aspect ratio. The effect of applying high-A approximations to low-A experimental data is quantified and demonstrates significant over-estimation of stored energy (factors of 2–10) in spherical tokamak geometry. Experimental approximations to equilibrium-dependent volume integral terms in the analysis are evaluated at low-A. Highly paramagnetic configurations are found to be inadequately represented through the virial mean radius parameter R_T. Alternate formulations for inferring β_p and ℓ_i that are independent of R_T, which avoid this difficulty, are presented for the static isotropic limit. Lastly, these formulations are suitable for fast estimation of tokamak stored energy components at low aspect ratio using virial analysis.

  16. Negative psychological aspects and survival in lung cancer patients.

    PubMed

    Nakaya, Naoki; Saito-Nakaya, Kumi; Akechi, Tatsuo; Kuriyama, Shinichi; Inagaki, Masatoshi; Kikuchi, Nobutaka; Nagai, Kanji; Tsugane, Shoichiro; Nishiwaki, Yutaka; Tsuji, Ichiro; Uchitomi, Yosuke

    2008-05-01

    We conducted a prospective cohort study in Japan to investigate associations between negative psychological aspects and cancer survival. Between July 1999 and July 2004, a total of 1178 lung cancer patients were enrolled. The questionnaire asked about socioeconomic variables, smoking status, clinical symptoms, and psychological aspects after diagnosis. Negative psychological aspects were assessed for the subscales of helplessness/hopelessness and depression. Clinical stage, performance status (PS), and histologic type were obtained from medical charts. The subjects were followed up until December 2004, and 686 had died. A Cox regression model was used to estimate the hazards ratio (HR) of all-cause mortality. After adjustment for socioeconomic variables and smoking status in addition to sex, age, and histologic type, both helplessness/hopelessness and depression subscales showed significant linear positive associations with the risk of mortality (p for trend<0.001 for both). However, after adjustment for clinical state variables in addition to sex, age, and histologic type, these significant linear positive associations were no longer observed (p for trend=0.41 and 0.26, respectively). Our data supported the hypothesis that the association between helplessness/hopelessness and depression and the risk of mortality among lung cancer patients was largely confounded by clinical state variables including clinical stage, PS, and clinical symptoms. (c) 2007 John Wiley & Sons, Ltd.

  17. Survey of research on unsteady aerodynamic loading of delta wings

    NASA Technical Reports Server (NTRS)

    Ashley, H.; Vaneck, T.; Katz, J.; Jarrah, M. A.

    1991-01-01

    For aeronautical applications, there has been recent interest in accurately determining the aerodynamic forces and moments experienced by low-aspect-ratio wings performing transient maneuvers which go to angles of attack as high as 90 deg. Focusing on the delta planform with sharp leading edges, the paper surveys experimental and theoretical investigations dealing with the associated unsteady flow phenomena. For maximum angles above a value between 30 and 40 deg, flow details and airloads are dominated by hysteresis in the 'bursting' instability of intense vortices which emanate from the leading edge. As examples of relevant test results, force and moment histories are presented for a model series with aspect ratios 1, 1.5 and 2. Influences of key parameters are discussed, notably those which measure unsteadiness. Comparisons are given with two theories: a paneling approximation that cannot capture bursting but clarifies other unsteady influences, and a simplified estimation scheme which uses measured bursting data.

  18. Geometric saliency to characterize radar exploitation performance

    NASA Astrophysics Data System (ADS)

    Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve

    2014-06-01

    Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and contributing vehicle morphology based on the aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant when considering the high dimensionality of target configuration variability due to articulating parts which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from a single bounce interaction with surfaces and dihedral response formed between the ground plane and vehicle. Results of this analysis are demonstrated for a set of civilian target models.

  19. When approximate number acuity predicts math performance: The moderating role of math anxiety

    PubMed Central

    Libertus, Melissa E.

    2018-01-01

    Separate lines of research suggest that people who are better at estimating numerical quantities using the approximate number system (ANS) have better math performance, and that people with high levels of math anxiety have worse math performance. Only a handful of studies have examined both ANS acuity and math anxiety in the same participants and those studies report contradictory results. To address these inconsistencies, in the current study 87 undergraduate students completed assessments of ANS acuity, math anxiety, and three different measures of math. We considered moderation models to examine the interplay of ANS acuity and math anxiety on different aspects of math performance. Math anxiety and ANS acuity were both unique significant predictors of the ability to automatically recall basic number facts. ANS acuity was also a unique significant predictor of the ability to solve applied math problems, and this relation was further qualified by a significant interaction with math anxiety: the positive association between ANS acuity and applied problem solving was only present in students with high math anxiety. Our findings suggest that ANS acuity and math anxiety are differentially related to various aspects of math and should be considered together when examining their respective influences on math ability. Our findings also raise the possibility that good ANS acuity serves as a protective factor for highly math-anxious students on certain types of math assessments. PMID:29718939

  20. When approximate number acuity predicts math performance: The moderating role of math anxiety.

    PubMed

    Braham, Emily J; Libertus, Melissa E

    2018-01-01

    Separate lines of research suggest that people who are better at estimating numerical quantities using the approximate number system (ANS) have better math performance, and that people with high levels of math anxiety have worse math performance. Only a handful of studies have examined both ANS acuity and math anxiety in the same participants and those studies report contradictory results. To address these inconsistencies, in the current study 87 undergraduate students completed assessments of ANS acuity, math anxiety, and three different measures of math. We considered moderation models to examine the interplay of ANS acuity and math anxiety on different aspects of math performance. Math anxiety and ANS acuity were both unique significant predictors of the ability to automatically recall basic number facts. ANS acuity was also a unique significant predictor of the ability to solve applied math problems, and this relation was further qualified by a significant interaction with math anxiety: the positive association between ANS acuity and applied problem solving was only present in students with high math anxiety. Our findings suggest that ANS acuity and math anxiety are differentially related to various aspects of math and should be considered together when examining their respective influences on math ability. Our findings also raise the possibility that good ANS acuity serves as a protective factor for highly math-anxious students on certain types of math assessments.

  1. The effects of moderate alcohol concentrations on driving and cognitive performance during ascending and descending blood alcohol concentrations.

    PubMed

    Starkey, Nicola J; Charlton, Samuel G

    2014-07-01

    Alcohol has an adverse effect on driving performance; however, the effects of moderate doses on different aspects of the driving task are inconsistent and differ across the intoxication curve. This research aimed to investigate driving and cognitive performance asymmetries (acute tolerance and acute protracted error) accompanying the onset and recovery from moderate alcohol consumption. Sixty-one participants received a placebo, medium (target blood alcohol concentration [BAC] 0.05 mg/ml) or high (target BAC 0.08 mg/ml) dose of alcohol. Participants completed a simulated drive, cognitive tests and subjective rating scales five times over a 3.5 h period. When ascending and descending BACs (0.05 and 0.09 mg/ml) were compared participants' self-ratings of intoxication and willingness to drive showed acute tolerance. Acute protracted errors were observed for response speed, maze learning errors, time exceeding the speed limit and exaggerated steering responses to hazards. Participants' estimates of their level of intoxication were poorly related to their actual BAC levels (and hence degree of impairment), and various aspects of driving and cognitive performance worsened during descending BACs. This indicates that drivers are not good at judging their fitness to drive after drinking only moderate amounts of alcohol and suggests an important focus for public education regarding alcohol and driving. Copyright © 2014 John Wiley & Sons, Ltd.

  2. Beyond Happiness and Satisfaction: Toward Well-Being Indices Based on Stated Preference*

    PubMed Central

    Benjamin, Daniel J.; Kimball, Miles S.; Heffetz, Ori; Szembrot, Nichole

    2014-01-01

    This paper proposes foundations and a methodology for survey-based tracking of well-being. First, we develop a theory in which utility depends on “fundamental aspects” of well-being, measurable with surveys. Second, drawing from psychologists, philosophers, and economists, we compile a comprehensive list of such aspects. Third, we demonstrate our proposed method for estimating the aspects’ relative marginal utilities—a necessary input for constructing an individual-level well-being index—by asking ~4,600 U.S. survey respondents to state their preference between pairs of aspect bundles. We estimate high relative marginal utilities for aspects related to family, health, security, values, freedom, happiness, and life satisfaction. PMID:25404760

  3. From rational numbers to algebra: separable contributions of decimal magnitude and relational understanding of fractions.

    PubMed

    DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J

    2015-05-01

    To understand the development of mathematical cognition and to improve instructional practices, it is critical to identify early predictors of difficulty in learning complex mathematical topics such as algebra. Recent work has shown that performance with fractions on a number line estimation task predicts algebra performance, whereas performance with whole numbers on similar estimation tasks does not. We sought to distinguish more specific precursors to algebra by measuring multiple aspects of knowledge about rational numbers. Because fractions are the first numbers that are relational expressions to which students are exposed, we investigated how understanding the relational bipartite format (a/b) of fractions might connect to later algebra performance. We presented middle school students with a battery of tests designed to measure relational understanding of fractions, procedural knowledge of fractions, and placement of fractions, decimals, and whole numbers onto number lines as well as algebra performance. Multiple regression analyses revealed that the best predictors of algebra performance were measures of relational fraction knowledge and ability to place decimals (not fractions or whole numbers) onto number lines. These findings suggest that at least two specific components of knowledge about rational numbers--relational understanding (best captured by fractions) and grasp of unidimensional magnitude (best captured by decimals)--can be linked to early success with algebraic expressions. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Frozen Mummies from Andean Mountaintop Shrines: Bioarchaeology and Ethnohistory of Inca Human Sacrifice

    PubMed Central

    Ceruti, Maria Constanza

    2015-01-01

    This study will focus on frozen mummies of sacrificial victims from mounts Llullaillaco (6739 m), Quehuar (6130 m), El Toro (6160 m), and the Aconcagua massif. These finds provide bioarchaeological data from mountaintop sites that has been recovered in scientifically controlled excavations in the northwest of Argentina, which was once part of the southern province of the Inca Empire. Numerous interdisciplinary studies have been conducted on the Llullaillaco mummies, including radiological evaluations by conventional X-rays and CT scans, which provided information about condition and pathology of the bones and internal organ, as well as dental studies oriented to the estimation of the ages of the three children at the time of death. Ancient DNA studies and hair analysis were also performed in cooperation with the George Mason University, the University of Bradford, and the Laboratory of Biological Anthropology at the University of Copenhagen. Ethnohistorical sources reveal interesting aspects related to the commemorative, expiatory, propitiatory, and dedicatory aspects of human sacrifice performed under Inca rule. The selection of the victims along with the procedures followed during the performance of the capacocha ceremony will be discussed, based on the bioarchaeological evidences from frozen mummies and the accounts recorded by the Spanish chroniclers. PMID:26345378

  5. Aerodynamic Analysis of a Hale Aircraft Joined-Wing Configuration

    NASA Astrophysics Data System (ADS)

    Sivaji, Rangarajan; Ghia, Urmila; Ghia, Karman; Thornburg, Hugh

    2003-11-01

    Aerodynamic analysis of a high-aspect ratio, joined wing of a High-Altitude Long Endurance (HALE) aircraft is performed. The requirement of high lift over extended flight periods for the HALE aircraft leads to high-aspect ratio wings experiencing significant deflections, necessitating consideration of aeroelastic effects. The finite-volume solver COBALT, with Reynolds-averaged Navier-Stokes (RANS) and Detached Eddy Simulation (DES) capabilities, is used for the flow simulations. Calculations are performed at α = 0° and 12° for M = 0.6, at an altitude of 30,000 feet, at a Re per unit length of 5.6×10^6. The wing cross sections are NACA 4421 airfoils. Because the wings have a high lift-to-drag ratio, an inviscid flow analysis is also performed. The inviscid surface pressure coefficient (Cp) is compared with the corresponding viscous Cp to examine the feasibility of using the inviscid pressure loads as an estimate of the total fluid loads on the structure. The viscous and inviscid Cp results compare reasonably only at α = 0°. The viscous flow is examined in detail via surface and field velocity vectors, vorticity, density and pressure contours. For α = 12°, the unsteady DES solutions show a weak shock at the aft-wing trailing edge. Also, the flow near the joint exhibits a region of mild separation.

  6. Frozen Mummies from Andean Mountaintop Shrines: Bioarchaeology and Ethnohistory of Inca Human Sacrifice.

    PubMed

    Ceruti, Maria Constanza

    2015-01-01

    This study will focus on frozen mummies of sacrificial victims from mounts Llullaillaco (6739 m), Quehuar (6130 m), El Toro (6160 m), and the Aconcagua massif. These finds provide bioarchaeological data from mountaintop sites that has been recovered in scientifically controlled excavations in the northwest of Argentina, which was once part of the southern province of the Inca Empire. Numerous interdisciplinary studies have been conducted on the Llullaillaco mummies, including radiological evaluations by conventional X-rays and CT scans, which provided information about condition and pathology of the bones and internal organ, as well as dental studies oriented to the estimation of the ages of the three children at the time of death. Ancient DNA studies and hair analysis were also performed in cooperation with the George Mason University, the University of Bradford, and the Laboratory of Biological Anthropology at the University of Copenhagen. Ethnohistorical sources reveal interesting aspects related to the commemorative, expiatory, propitiatory, and dedicatory aspects of human sacrifice performed under Inca rule. The selection of the victims along with the procedures followed during the performance of the capacocha ceremony will be discussed, based on the bioarchaeological evidences from frozen mummies and the accounts recorded by the Spanish chroniclers.

  7. Topology design and performance analysis of an integrated communication network

    NASA Technical Reports Server (NTRS)

    Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.

    1985-01-01

    A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis, network reliability and message delays, are discussed. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.
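
    The enumeration idea can be illustrated with a standard heap-based scheme for generating subsets in nondecreasing total cost (a generic sketch, not necessarily the paper's exact algorithm): sort the component costs and, from a subset whose largest chosen index is i, either append item i+1 or replace item i with item i+1. The acceptability check is left as a stub.

```python
import heapq

def subsets_by_cost(costs):
    """Yield (total_cost, subset_of_costs) in nondecreasing total cost."""
    costs = sorted(costs)
    n = len(costs)
    heap = [(costs[0], 0, (0,))]            # (total, largest index, index tuple)
    while heap:
        total, i, subset = heapq.heappop(heap)
        yield total, [costs[k] for k in subset]
        if i + 1 < n:
            heapq.heappush(heap, (total + costs[i + 1], i + 1, subset + (i + 1,)))
            heapq.heappush(heap, (total - costs[i] + costs[i + 1], i + 1,
                                  subset[:-1] + (i + 1,)))

def is_acceptable(design):
    return True        # placeholder for connectivity/constraint checks

for cost, design in subsets_by_cost([4.0, 1.0, 3.0, 2.0]):
    if is_acceptable(design):
        print(f"first acceptable (hence cost-optimal) design: {design} at {cost:.1f}")
        break
```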

  8. Instrumentation and Performance Analysis Plans for the HIFiRE Flight 2 Experiment

    NASA Technical Reports Server (NTRS)

    Gruber, Mark; Barhorst, Todd; Jackson, Kevin; Eklund, Dean; Hass, Neal; Storch, Andrea M.; Liu, Jiwen

    2009-01-01

    Supersonic combustion performance of a bi-component gaseous hydrocarbon fuel mixture is one of the primary aspects under investigation in the HIFiRE Flight 2 experiment. In-flight instrumentation and post-test analyses will be two key elements used to determine the combustion performance. Pre-flight computational fluid dynamics (CFD) analyses provide valuable information that can be used to optimize the placement of a constrained set of wall pressure instrumentation in the experiment. The simulations also allow pre-flight assessments of performance sensitivities leading to estimates of overall uncertainty in the determination of combustion efficiency. Based on the pre-flight CFD results, 128 wall pressure sensors have been located throughout the isolator/combustor flowpath to minimize the error in determining the wall pressure force at Mach 8 flight conditions. Also, sensitivity analyses show that mass capture and combustor exit stream thrust are the two primary contributors to uncertainty in combustion efficiency.

  9. The Geography of Inequality: Why Separate Means Unequal in American Public Schools*

    PubMed Central

    Logan, John R.; Minca, Elisabeta; Adar, Sinem

    2013-01-01

    Persistent school segregation means not only that children of different racial and ethnic backgrounds attend different schools, but also that their schools are unequal in their performance. This study documents nationally the extent of disparities in school performance between schools attended by whites and Asians compared to blacks, Hispanics, and Native Americans. It further examines the geography of school inequality in two ways. First, it analyzes the segregation of students between different types of school profiles based on racial composition, poverty, and metropolitan location. Second, it estimates the independent effects of these and other school and school district characteristics on school performance, identifying which aspects of school segregation are the most important sources of disadvantage. A focus on schools at the bottom of the distribution, as in No Child Left Behind, would not ameliorate the wide disparities between groups that are found to run across the whole spectrum of school performance. PMID:24259754

  10. Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.

    2007-01-01

    Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor, of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects, and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.

  11. Interaction of hypertension and age in visual selective attention performance.

    PubMed

    Madden, D J; Blumenthal, J A

    1998-01-01

    Previous research suggests that some aspects of cognitive performance decline as a joint function of age and hypertension. In this experiment, 51 unmedicated individuals with mild essential hypertension and 48 normotensive individuals, 18-78 years of age, performed a visual search task. The estimated time required to identify a display character and shift attention between display positions increased with age. This attention shift time did not differ significantly between hypertensive and normotensive participants, but regression analyses indicated some mediation of the age effect by blood pressure. For individuals less than 60 years of age, the error rate was greater for hypertensive than for normotensive participants. Although the present design could detect effects of only moderate to large size, the results suggest that effects of hypertension may be more evident in a relatively general measure of performance (mean error rate) than in the speed of shifting visual attention.

  12. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127
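
    A minimal sketch of the recommended averaging convention, assuming the α ≤ 1 definition of aspect ratio, a geometric (log-space) average, and projected-area weighting for the column/plate decision; the asymmetry-parameter lookup itself is omitted and the ensemble values are made up.

      import numpy as np

      def ensemble_average_shape(aspect_ratios, is_column, projected_area):
          """Geometric-mean aspect ratio (alpha <= 1 convention) and habit choice.

          aspect_ratios : raw aspect ratios (columns may have values > 1)
          is_column     : boolean array, True for columns, False for plates
          projected_area: per-crystal projected areas used as weights
          """
          a = np.asarray(aspect_ratios, dtype=float)
          alpha = np.where(a > 1.0, 1.0 / a, a)        # enforce alpha <= 1 for plates and columns

          w = np.asarray(projected_area, dtype=float)
          alpha_eff = np.exp(np.average(np.log(alpha), weights=w))   # geometric (log) average

          column_fraction = w[np.asarray(is_column)].sum() / w.sum()
          habit = "column" if column_fraction > 0.5 else "plate"     # >50% projected-area rule
          return alpha_eff, habit

      # toy ensemble: three columns and two thin plates
      alpha_eff, habit = ensemble_average_shape(
          aspect_ratios=[3.0, 5.0, 2.0, 0.2, 0.1],
          is_column=[True, True, True, False, False],
          projected_area=[1.0, 1.5, 0.8, 2.0, 1.2],
      )
      print(habit, round(alpha_eff, 3))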

  13. Age estimation by pulp-to-tooth area ratio using cone-beam computed tomography: A preliminary analysis.

    PubMed

    Rai, Arpita; Acharya, Ashith B; Naikmasur, Venkatesh G

    2016-01-01

    Age estimation of living or deceased individuals is an important aspect of forensic sciences. Conventionally, the pulp-to-tooth area ratio (PTR) measured from periapical radiographs has been utilized as a nondestructive method of age estimation. Cone-beam computed tomography (CBCT) is a new method to acquire three-dimensional images of the teeth in living individuals. The present study investigated age estimation based on the PTR of the maxillary canines measured in three planes obtained from CBCT image data. Sixty subjects aged 20-85 years were included in the study. For each tooth, mid-sagittal and mid-coronal sections and three axial sections (at the cementoenamel junction (CEJ), at one-fourth of the root length from the CEJ, and at mid-root) were assessed. PTR was calculated using AutoCAD software after outlining the pulp and tooth. All statistical analyses were performed using the SPSS 17.0 software program. Linear regression analysis showed that only the PTR in the axial plane at the CEJ had a significant age correlation (r = 0.32; P < 0.05). This is probably because of the clearer demarcation of the pulp and tooth outlines at this level.
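
    A minimal sketch of the regression step, assuming synthetic PTR and age values; the study's data are not reproduced, and its reported correlation was much weaker than in this toy example.

      import numpy as np
      from scipy import stats

      # synthetic illustration only: PTR measured on the axial slice at the CEJ vs. known age
      ptr_cej = np.array([0.28, 0.25, 0.22, 0.20, 0.18, 0.15, 0.13, 0.11])
      age     = np.array([  22,   30,   38,   45,   52,   60,   68,   75])

      # simple linear regression age = b0 + b1 * PTR, as in the study's analysis
      fit = stats.linregress(ptr_cej, age)
      print(f"age = {fit.intercept:.1f} + {fit.slope:.1f} * PTR   (r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f})")

      # point estimate for a new tooth
      new_ptr = 0.17
      print("estimated age:", round(fit.intercept + fit.slope * new_ptr, 1))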

  14. Parameter-based estimation of CT dose index and image quality using an in-house android™-based software

    NASA Astrophysics Data System (ADS)

    Mubarok, S.; Lubis, L. E.; Pawiro, S. A.

    2016-03-01

    A compromise between radiation dose and image quality is essential in the use of CT imaging. The CT dose index (CTDI) is currently the primary dosimetric formalism in CT scanning, while low- and high-contrast resolution are aspects indicating image quality. This study aimed to estimate CTDIvol and image quality measures through a range of exposure parameter variations. CTDI measurements were performed using a PMMA (polymethyl methacrylate) phantom of 16 cm diameter, while the image quality test was conducted using a Catphan® 600 phantom. CTDI measurements were carried out according to the IAEA TRS 457 protocol using axial scan mode, under varied tube voltage, collimation or slice thickness, and tube current. The image quality test was conducted under the same exposure parameters as the CTDI measurements. An Android™-based software application was also a result of this study. The software estimates CTDIvol with a maximum difference of 8.97% from the measured CTDIvol. Image quality can also be estimated through the CNR parameter, with a maximum difference of 21.65% from the measured CNR.
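
    The abstract does not describe the software's internal model, so the following is only one plausible parameter-based scheme: a lookup table of normalized CTDIw values measured at a few tube voltages, scaled linearly with the tube current-time product (mAs, to which CTDI is proportional) and divided by pitch for helical scans. All numbers are placeholders, not values from this study.

      import numpy as np

      # placeholder calibration: normalized CTDIw (mGy per 100 mAs) measured on the
      # 16 cm PMMA phantom at a few tube voltages for one collimation setting
      kvp_grid   = np.array([80.0, 100.0, 120.0, 140.0])
      nctdiw_100 = np.array([3.2,   6.1,   9.5,  13.4])   # hypothetical values

      def estimate_ctdivol(kvp, mAs, pitch=1.0):
          """Estimate CTDIvol by interpolating the kVp lookup table and scaling with mAs."""
          nctdiw = np.interp(kvp, kvp_grid, nctdiw_100)   # mGy per 100 mAs at this kVp
          ctdiw = nctdiw * mAs / 100.0                    # linear scaling with mAs
          return ctdiw / pitch                            # pitch = 1 for axial scans

      print(f"CTDIvol ~ {estimate_ctdivol(kvp=120, mAs=200, pitch=1.0):.1f} mGy")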

  15. Rain/No-Rain Identification from Bispectral Satellite Information using Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Tao, Y.

    2016-12-01

    Satellite-based precipitation estimation products have the advantage of high resolution and global coverage. However, they still suffer from insufficient accuracy. To accurately estimate precipitation from satellite data, two aspects are most important: sufficient precipitation information in the satellite observations and proper methodologies to extract such information effectively. This study applies state-of-the-art machine learning methodologies to bispectral satellite information for rain/no-rain detection. Specifically, we use deep neural networks to extract features from the infrared and water vapor channels and connect them to precipitation identification. To evaluate the effectiveness of the methodology, we first apply it to the infrared data only (Model DL-IR only), the most commonly used input for satellite-based precipitation estimation. Then we incorporate water vapor data (Model DL-IR + WV) to further improve the prediction performance. The radar Stage IV dataset is used as the ground measurement for parameter calibration. The operational product Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) is used as a reference to compare the performance of both models in both winter and summer seasons. The experiments show significant improvement in precipitation identification for both models. The overall performance gains in the Critical Success Index (CSI) over the verification periods are 21.60% for the DL-IR only model and 43.66% for the DL-IR + WV model, compared to PERSIANN-CCS. Moreover, specific case studies show that the water vapor channel information and the deep neural networks effectively help recover a large number of missing precipitation pixels under warm clouds while reducing false alarms under cold clouds.
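
    A minimal stand-in for the two model variants (IR only versus IR plus water vapor), using a small scikit-learn multilayer perceptron on synthetic pixels instead of the paper's deep network and Stage IV labels, together with the Critical Success Index used for evaluation.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      n = 5000
      ir = rng.normal(230.0, 20.0, n)            # synthetic IR brightness temperature (K)
      wv = rng.normal(240.0, 15.0, n)            # synthetic water-vapor channel (K)
      # synthetic "truth": rain is more likely for cold IR and cold (moist) WV pixels
      p_rain = 1.0 / (1.0 + np.exp(0.15 * (ir - 220.0) + 0.05 * (wv - 235.0)))
      rain = rng.random(n) < p_rain

      def critical_success_index(truth, pred):
          hits = np.sum(truth & pred)
          misses = np.sum(truth & ~pred)
          false_alarms = np.sum(~truth & pred)
          return hits / (hits + misses + false_alarms)

      x_ir, x_irwv = ir.reshape(-1, 1), np.column_stack([ir, wv])
      split = n // 2

      for name, x in [("IR only", x_ir), ("IR + WV", x_irwv)]:
          model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
          model.fit(x[:split], rain[:split])
          pred = model.predict(x[split:]).astype(bool)
          print(name, "CSI =", round(critical_success_index(rain[split:], pred), 3))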

  16. Initial Performance of the Aspect System on the Chandra Observatory: Post-Facto Aspect Reconstruction

    NASA Technical Reports Server (NTRS)

    Aldcroft, T.; Karovska, M.; Cresitello-Dittmar, M.; Cameron, R.

    2000-01-01

    The aspect system of the Chandra Observatory plays a key role in realizing the full potential of Chandra's x-ray optics and detectors. To achieve the highest spatial and spectral resolution (for grating observations), an accurate post-facto time history of the spacecraft attitude and internal alignment is needed. The CXC has developed a suite of tools which process sensor data from the aspect camera assembly and gyroscopes, and produce the spacecraft aspect solution. In this poster, the design of the aspect pipeline software is briefly described, followed by details of aspect system performance during the first eight months of flight. The two key metrics of aspect performance are: image reconstruction accuracy, which measures the x-ray image blurring introduced by aspect; and celestial location, which is the accuracy of detected source positions in absolute sky coordinates.

  17. Seismogenic width controls aspect ratios of earthquake ruptures

    NASA Astrophysics Data System (ADS)

    Weng, Huihui; Yang, Hongfeng

    2017-03-01

    We investigate the effect of seismogenic width on aspect ratios of earthquake ruptures by using numerical simulations of strike-slip faulting and an energy balance criterion near rupture tips. If the seismogenic width is smaller than a critical value, then ruptures cannot break the entire fault, regardless of the size of the nucleation zone. The seismic moments of these self-arresting ruptures increase with the nucleation size, forming nucleation-related events. The aspect ratios increase with the seismogenic width but are smaller than 8. In contrast, ruptures become breakaway and tend to have high aspect ratios (>8) if the seismogenic width is sufficiently large. But the critical nucleation size is larger than the theoretical estimate for an unbounded fault. The eventual seismic moments of breakaway ruptures do not depend on the nucleation size. Our results suggest that estimating final earthquake magnitude from the nucleation phase may only be plausible on faults with small seismogenic width.

  18. Simultaneous estimation of diet composition and calibration coefficients with fatty acid signature data

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.

    2017-01-01

    Knowledge of animal diets provides essential insights into their life history and ecology, although diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) has become a popular method of estimating diet composition, especially for marine species. A primary assumption of QFASA is that constants called calibration coefficients, which account for the differential metabolism of individual fatty acids, are known. In practice, however, calibration coefficients are not known, but rather have been estimated in feeding trials with captive animals of a limited number of model species. The impossibility of verifying the accuracy of feeding trial derived calibration coefficients to estimate the diets of wild animals is a foundational problem with QFASA that has generated considerable criticism. We present a new model that allows simultaneous estimation of diet composition and calibration coefficients based only on fatty acid signature samples from wild predators and potential prey. Our model performed almost flawlessly in four tests with constructed examples, estimating both diet proportions and calibration coefficients with essentially no error. We also applied the model to data from Chukchi Sea polar bears, obtaining diet estimates that were more diverse than estimates conditioned on feeding trial calibration coefficients. Our model avoids bias in diet estimates caused by conditioning on inaccurate calibration coefficients, invalidates the primary criticism of QFASA, eliminates the need to conduct feeding trials solely for diet estimation, and consequently expands the utility of fatty acid data to investigate aspects of ecology linked to animal diets.
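
    For context, a minimal sketch of the classical QFASA step that this paper generalizes: diet proportions are found by minimizing a distance between the calibration-adjusted predator signature and a convex mixture of prey signatures. The simultaneous estimation of calibration coefficients introduced in the paper is not reproduced here; the signatures, the calibration coefficients, and the squared-distance objective are all illustrative.

      import numpy as np
      from scipy.optimize import minimize

      # made-up fatty acid signatures (rows: prey species, columns: fatty acids), each summing to 1
      prey = np.array([
          [0.40, 0.30, 0.20, 0.10],
          [0.10, 0.20, 0.30, 0.40],
          [0.25, 0.25, 0.25, 0.25],
      ])
      predator = np.array([0.22, 0.24, 0.26, 0.28])
      cal = np.array([1.0, 0.9, 1.1, 1.0])          # assumed-known calibration coefficients

      def objective(theta):
          p = np.exp(theta - theta.max())           # softmax keeps diet proportions
          p /= p.sum()                              # positive and summing to one
          mixed = p @ prey
          adjusted = predator / cal                 # calibration-adjusted predator signature
          adjusted /= adjusted.sum()
          return np.sum((mixed - adjusted) ** 2)    # squared-distance stand-in for QFASA's distance

      res = minimize(objective, x0=np.zeros(prey.shape[0]), method="Nelder-Mead")
      p = np.exp(res.x - res.x.max()); p /= p.sum()
      print("estimated diet proportions:", np.round(p, 3))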

  19. Four years of Landsat-7 on-orbit geometric calibration and performance

    USGS Publications Warehouse

    Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.

    2004-01-01

    Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.

  20. Detecting aircraft with a low-resolution infrared sensor.

    PubMed

    Jakubowicz, Jérémie; Lefebvre, Sidonie; Maire, Florian; Moulines, Eric

    2012-06-01

    Existing computer simulations of aircraft infrared signature (IRS) do not account for the dispersion induced by uncertainty on input data, such as aircraft aspect angles and meteorological conditions. As a result, they are of little use for estimating the detection performance of IR optronic systems; in this case, the scenario encompasses many possible situations that must all be addressed but cannot be simulated one by one. In this paper, we focus on low-resolution infrared sensors and propose a methodological approach for predicting the simulated IRS dispersion of poorly known aircraft and performing aircraft detection on the resulting set of low-resolution infrared images. It is based on a sensitivity analysis, which identifies inputs that have negligible influence on the computed IRS and can be set to a constant value, on a quasi-Monte Carlo survey of the code output dispersion, and on a new detection test taking advantage of level set estimation. This method is illustrated in a typical scenario, i.e., a daylight air-to-ground full-frontal attack by a generic combat aircraft flying at low altitude, over a database of 90,000 simulated aircraft images. Assuming a white noise or a fractional Brownian background model, detection performances are very promising.
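
    The quasi-Monte Carlo survey step can be sketched as follows, assuming a Sobol' design over a handful of retained influential inputs; the input names, their ranges, and the placeholder signature function are invented and stand in for the expensive IRS code.

      import numpy as np
      from scipy.stats import qmc

      # invented influential inputs retained after the sensitivity analysis, with ranges:
      # aspect angle (deg), elevation angle (deg), air temperature (K), relative humidity
      lower = np.array([0.0,  -10.0, 280.0, 0.2])
      upper = np.array([20.0,  10.0, 310.0, 0.9])

      sampler = qmc.Sobol(d=4, scramble=True, seed=0)
      unit = sampler.random_base2(m=10)              # 2**10 = 1024 low-discrepancy points
      samples = qmc.scale(unit, lower, upper)

      def ir_signature(x):
          # placeholder for the (expensive) IR signature code
          aspect, elev, temp, hum = x
          return 1.0e3 * np.exp(-0.05 * aspect) * (temp / 300.0) ** 4 * (1.0 - 0.3 * hum)

      values = np.apply_along_axis(ir_signature, 1, samples)
      print("IRS dispersion over the QMC design: "
            f"mean={values.mean():.1f}, p5={np.percentile(values, 5):.1f}, p95={np.percentile(values, 95):.1f}")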

  1. Multisensor Parallel Largest Ellipsoid Distributed Data Fusion with Unknown Cross-Covariances

    PubMed Central

    Liu, Baoyu; Zhan, Xingqun; Zhu, Zheng H.

    2017-01-01

    As the largest ellipsoid (LE) data fusion algorithm can only be applied to a two-sensor system, in this contribution a parallel fusion structure is proposed to introduce the LE algorithm into a multisensor system with unknown cross-covariances, and three parallel fusion structures based on different estimate pairing methods are presented and analyzed. In order to assess the influence of fusion structure on fusion performance, two fusion performance assessment parameters are defined: Fusion Distance and Fusion Index. Moreover, the formula for calculating the upper bounds of the actual fused error covariances of the presented multisensor LE fusers is also provided. As demonstrated with simulation examples, the Fusion Index indicates the fuser's actual fused accuracy and its sensitivity to the sensor order, as well as its robustness to the accuracy of newly added sensors. Compared to the LE fuser with a sequential structure, the LE fusers with the proposed parallel structures not only significantly improve their properties in these aspects, but also achieve better performance in consistency and computational efficiency. The presented multisensor LE fusers generally have better accuracy than the covariance intersection (CI) fusion algorithm and are consistent when the local estimates are weakly correlated. PMID:28661442
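
    The paper's baseline, covariance intersection, fuses two estimates with unknown cross-covariance by optimizing a single weight on the inverse covariances. A minimal two-sensor sketch follows (the LE algorithm itself is not reproduced, and the example numbers are arbitrary).

      import numpy as np
      from scipy.optimize import minimize_scalar

      def covariance_intersection(x1, P1, x2, P2):
          """Fuse two estimates with unknown cross-covariance via covariance intersection."""
          I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

          def fused_cov_det(w):
              # choose the weight that minimizes the determinant of the fused covariance
              return np.linalg.det(np.linalg.inv(w * I1 + (1.0 - w) * I2))

          w = minimize_scalar(fused_cov_det, bounds=(0.0, 1.0), method="bounded").x
          P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
          x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
          return x, P, w

      x1, P1 = np.array([1.0, 2.0]), np.array([[1.0, 0.2], [0.2, 2.0]])
      x2, P2 = np.array([1.2, 1.7]), np.array([[2.0, -0.1], [-0.1, 0.5]])
      x, P, w = covariance_intersection(x1, P1, x2, P2)
      print("omega =", round(w, 3))
      print("fused state =", x.round(3))
      print("fused covariance =\n", P.round(3))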

  2. Shape Optimization of Bone-Bonding Subperiosteal Devices with Finite Element Analysis.

    PubMed

    Ogasawara, Takeshi; Uezono, Masayoshi; Takakuda, Kazuo; Kikuchi, Masanori; Suzuki, Shoichi; Moriyama, Keiji

    2017-01-01

    Subperiosteal bone-bonding devices have been proposed for less invasive treatments in orthodontics. The device is osseointegrated onto a bone surface without fixation screws and is expected to rapidly attain a bone-bonding strength that successfully meets clinical performance. Hence, the device's optimum shape for rapid and strong bone bonding was examined in this study by finite element analyses. First, a stress analysis was performed for a circular rod device with an orthodontic force parallel to the bone surface, and the estimate of the bone-bonding strength based on the bone fracture criterion was verified with the results of an animal experiment. In total, four cross-sectional rod geometries were investigated: circular (Cr), elliptical (El), semicircular (Sc), and rectangular (Rc). By changing the height of the newly formed bone to mimic the progression of new bone formation, the estimation of the bone-bonding strength was repeated for each geometry. The rod with the Rc cross section exhibited the best performance, followed by those with the Sc, El, and Cr cross sections, from the aspects of the rapid acquisition of strength and the strength itself. Thus, the rectangular cross section is the best for rod-like subperiosteal devices for rapid bone bonding.

  3. Motion adaptive Kalman filter for super-resolution

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Nasse, Fabian; Schröder, Hartmut

    2011-01-01

    Superresolution is a sophisticated strategy for enhancing the image quality of both low- and high-resolution video, performing tasks such as artifact reduction, scaling, and sharpness enhancement in one algorithm, all of which reconstruct high-frequency components (above the Nyquist frequency) in some way. Recursive superresolution algorithms in particular can meet high quality requirements because they control the video output using a feedback loop and adapt the result in the next iteration. In addition to excellent output quality, temporal recursive methods are very hardware efficient and therefore attractive even for real-time video processing. A very promising approach is the utilization of Kalman filters, as proposed by Farsiu et al. Reliable motion estimation is crucial for the performance of superresolution. Therefore, robust global motion models are mainly used, but this also limits the applicability of superresolution algorithms. Handling sequences with complex object motion is thus essential for a wider field of application. Hence, this paper proposes improvements that extend the Kalman filter approach using motion-adaptive variance estimation and segmentation techniques. Experiments confirm the potential of our proposal for ideal and real video sequences with complex motion and further compare its performance to state-of-the-art methods such as trainable filters.
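
    A minimal scalar, per-pixel sketch of the temporal recursive idea with a motion-adaptive process variance: where the local motion-compensation error is large, the predicted variance is inflated so the filter trusts the incoming frame more. This toy stand-in omits the spatial upsampling and segmentation of the full superresolution filter, and all noise levels are illustrative.

      import numpy as np

      def temporal_kalman(frames, motion_error, obs_var=4.0, q_low=0.01, q_high=5.0):
          """Recursive per-pixel temporal filtering with motion-adaptive process noise.

          frames       : (T, H, W) motion-compensated frames
          motion_error : (T, H, W) local motion-compensation error in [0, 1]
          """
          est = frames[0].astype(float)
          var = np.full(est.shape, obs_var)
          for t in range(1, len(frames)):
              # prediction: inflate the variance more where motion estimation is unreliable
              q = q_low + (q_high - q_low) * motion_error[t]
              var = var + q
              # update with the new (motion-compensated) observation
              gain = var / (var + obs_var)
              est = est + gain * (frames[t] - est)
              var = (1.0 - gain) * var
          return est

      rng = np.random.default_rng(1)
      truth = np.tile(np.linspace(0, 255, 32), (32, 1))
      frames = truth + rng.normal(0, 2.0, (8, 32, 32))
      motion_error = rng.random((8, 32, 32)) * 0.2
      rms = float(np.sqrt(np.mean((temporal_kalman(frames, motion_error) - truth) ** 2)))
      print("residual RMS error:", round(rms, 2))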

  4. Detailed performance and environmental monitoring of aquifer heating and cooling systems

    NASA Astrophysics Data System (ADS)

    Acuna, José; Ahlkrona, Malva; Zandin, Hanna; Singh, Ashutosh

    2016-04-01

    The project intends to quantify the performance and environmental impact of large-scale aquifer thermal energy storage, as well as to provide recommendations for operating and estimating the environmental footprint of future systems. Field measurements, tests of innovative equipment, and advanced modelling work and analysis will be performed. The following aspects are introduced and covered in the presentation: (i) thermal, chemical, and microbiological influence of aquifer thermal energy storage systems, with measurement and evaluation of real conditions and of the influence of one system in operation; (ii) follow-up of energy extraction from the aquifer as compared to projected values, with recommendations for improvements; (iii) evaluation of the most widely used thermal modelling tool for design and calculation of groundwater temperatures, with calculations using MODFLOW/MT3DMS; and (iv) testing and evaluation of optical fiber cables as a way to measure temperatures in aquifer thermal energy storages.

  5. Archaeological Survey at Fort Hood, Texas Fiscal Years 1991 and 1992: Cantonment and Belton Lake Periphery Areas

    DTIC Science & Technology

    1993-12-01

    ... is unable to adequately locate rockshelters. Improvement of the elevation data in the Fort Hood GIS would provide better estimates of slope and aspect and might be expected to improve the ... could be chosen and used as a control (non-sites). The development of Geographic Information Systems (GIS) allows either or both approaches to be ...

  6. Eye lens monitoring for interventional radiology personnel: dosemeters, calibration and practical aspects of Hp(3) monitoring. A 2015 review.

    PubMed

    Carinou, Eleftheria; Ferrari, Paolo; Bjelac, Olivera Ciraj; Gingaume, Merce; Merce, Marta Sans; O'Connor, Una

    2015-09-01

    A thorough literature review about the current situation on the implementation of eye lens monitoring has been performed in order to provide recommendations regarding dosemeter types, calibration procedures and practical aspects of eye lens monitoring for interventional radiology personnel. Most relevant data and recommendations from about 100 papers have been analysed and classified in the following topics: challenges of today in eye lens monitoring; conversion coefficients, phantoms and calibration procedures for eye lens dose evaluation; correction factors and dosemeters for eye lens dose measurements; dosemeter position and influence of protective devices. The major findings of the review can be summarised as follows: the recommended operational quantity for eye lens monitoring is Hp(3). At present, several dosemeters are available for eye lens monitoring and calibration procedures are being developed. However, in practice, very often, alternative methods are used to assess the dose to the eye lens. A summary of correction factors found in the literature for the assessment of the eye lens dose is provided. These factors can give an estimation of the eye lens dose when alternative methods, such as the use of a whole body dosemeter, are used. A wide range of values is found, thus indicating the large uncertainty associated with these simplified methods. Reduction factors from most common protective devices obtained experimentally and using Monte Carlo calculations are presented. The paper concludes that the use of a dosemeter placed at collar level outside the lead apron can provide a useful first estimate of the eye lens exposure. However, for workplaces with estimated annual equivalent dose to the eye lens close to the dose limit, specific eye lens monitoring should be performed. Finally, training of the involved medical staff on the risks of ionising radiation for the eye lens and on the correct use of protective systems is strongly recommended.

  7. Feasibility of Rapid Multitracer PET Tumor Imaging

    NASA Astrophysics Data System (ADS)

    Kadrmas, D. J.; Rust, T. C.

    2005-10-01

    Positron emission tomography (PET) can characterize different aspects of tumor physiology using various tracers. PET scans are usually performed using only one tracer since there is no explicit signal for distinguishing multiple tracers. We tested the feasibility of rapidly imaging multiple PET tracers using dynamic imaging techniques, where the signals from each tracer are separated based upon differences in tracer half-life, kinetics, and distribution. Time-activity curve populations for FDG, acetate, ATSM, and PTSM were simulated using appropriate compartment models, and noisy dual-tracer curves were computed by shifting and adding the single-tracer curves. Single-tracer components were then estimated from dual-tracer data using two methods: principal component analysis (PCA)-based fits of single-tracer components to multitracer data, and parallel multitracer compartment models estimating single-tracer rate parameters from multitracer time-activity curves. The PCA analysis found that there is information content present for separating multitracer data, and that tracer separability depends upon tracer kinetics, injection order and timing. Multitracer compartment modeling recovered rate parameters for individual tracers with good accuracy but somewhat higher statistical uncertainty than single-tracer results when the injection delay was >10 min. These approaches to processing rapid multitracer PET data may potentially provide a new tool for characterizing multiple aspects of tumor physiology in vivo.
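
    A minimal sketch of the signal-separation idea, assuming mono-exponential washout shapes and a known injection delay rather than the paper's compartment models and PCA analysis; all amplitudes and rate constants are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def tracer(t, amp, k, t0):
          # mono-exponential washout beginning at injection time t0
          return np.where(t >= t0, amp * np.exp(-k * (t - t0)), 0.0)

      def dual(t, a1, k1, a2, k2, delay=10.0):
          # dual-tracer time-activity curve: second tracer injected after a known delay
          return tracer(t, a1, k1, 0.0) + tracer(t, a2, k2, delay)

      t = np.linspace(0, 60, 121)                      # minutes
      true = dict(a1=100.0, k1=0.05, a2=80.0, k2=0.20)
      rng = np.random.default_rng(2)
      measured = dual(t, **true) + rng.normal(0, 2.0, t.size)

      popt, _ = curve_fit(dual, t, measured, p0=[50, 0.1, 50, 0.1], bounds=(0, np.inf))
      print("recovered (a1, k1, a2, k2):", np.round(popt, 3))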

  8. Rotation otolith tilt-translation reinterpretation (ROTTR) hypothesis: a new hypothesis to explain neurovestibular spaceflight adaptation.

    PubMed

    Merfeld, Daniel M

    2003-01-01

    Normally, the nervous system must process ambiguous graviceptor (e.g., otolith) cues to estimate tilt and translation. The neural processes that help perform these estimation processes must adapt upon exposure to weightlessness and readapt upon return to Earth. In this paper we present a review of evidence supporting a new hypothesis that explains some aspects of these adaptive processes. This hypothesis, which we label the rotation otolith tilt-translation reinterpretation (ROTTR) hypothesis, suggests that the neural processes resulting in spaceflight adaptation include deterioration in the ability of the nervous system to use rotational cues to help accurately estimate the relative orientation of gravity ("tilt"). Changes in the ability to estimate gravity then also influence the ability of the nervous system to estimate linear acceleration ("translation"). We explicitly hypothesize that such changes in the ability to estimate "tilt" and "translation" will be measurable upon return to Earth and will, at least partially, explain the disorientation experienced when astronauts return to Earth. In this paper, we present the details and implications of ROTTR, review data related to ROTTR, and discuss the relationship of ROTTR to the influential otolith tilt-translation reinterpretation (OTTR) hypothesis as well as discuss the distinct differences between ROTTR and OTTR.

  9. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
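
    A minimal sketch of the residual water-balance calculation described above, with annual ET taken as precipitation minus discharge and the storage-change term neglected; the basin numbers are invented.

      # annual basin water balance: ET = P - Q - dS, with dS ~ 0 on annual time scales
      basin_area_km2 = 12_000.0
      precip_mm      = 780.0                 # basin-average annual precipitation
      discharge_m3s  = 95.0                  # mean annual discharge at the outlet

      seconds_per_year = 365.25 * 24 * 3600
      runoff_mm = discharge_m3s * seconds_per_year / (basin_area_km2 * 1e6) * 1000.0

      et_mm = precip_mm - runoff_mm          # residual ET, storage change neglected
      print(f"runoff = {runoff_mm:.0f} mm/yr, water-balance ET = {et_mm:.0f} mm/yr")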

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, C.; Ensminger, J.; O'Keefe, P.

    This book presents papers on the use of wood fuels in Kenya. Topics considered include domestic energy consumption, historical aspects, the Kenyan economy, ecology, supply and demand, forests, aspects of energy consumption in a pastoral ecosystem, estimation of present and future demand for wood fuels, and energy source development.

  11. Information's role in the estimation of chaotic signals

    NASA Astrophysics Data System (ADS)

    Drake, Daniel Fred

    1998-11-01

    Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise. Finally, a slight modification of the linear description leads to a method of synthesizing, on limited-precision platforms, "pseudo-chaotic" sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.

  12. Estimating and Testing the Sources of Evoked Potentials in the Brain.

    ERIC Educational Resources Information Center

    Huizenga, Hilde M.; Molenaar, Peter C. M.

    1994-01-01

    The source of an event-related brain potential (ERP) is estimated from multivariate measures of ERP on the head under several mathematical and physical constraints on the parameters of the source model. Statistical aspects of estimation are discussed, and new tests are proposed. (SLD)

  13. Geometric optimization of an active magnetic regenerative refrigerator via second-law analysis

    NASA Astrophysics Data System (ADS)

    Li, Peng; Gong, Maoqiong; Wu, Jianfeng

    2008-11-01

    Previous analyses [Z. Yan and J. Chen, J. Appl. Phys. 72, 1 (1992); J. Chen and Z. Yan, ibid., 84, 1791 (1998); Lin et al., Physica B 344, 147 (2004); Yang et al., ibid., 364, 33 (2005); Xia et al., ibid., 381, 246 (2006).] of irreversibilities in magnetic refrigerators overlooked several important losses that could be dominant in a real active magnetic regenerative refrigerator (AMRR). No quantitative expressions have been provided yet to estimate the corresponding entropy generations in real AMRRs. The important geometric parameters of AMRRs, such as the aspect ratio of the active magnetic regenerator and the refrigerant diameter, are still arbitrarily chosen. Expressions for calculating different types of entropy generations in the AMRR were derived and used to optimize the aspect ratio and the refrigerant diameter. An optimal coefficient of performance (15.54) was achieved at an aspect ratio of 6.39 and a refrigerant diameter of 1.1mm for our current system. Further study showed that the dissipative sources (e.g., the fluid friction and the unbalanced magnetic forces) in AMRRs, which were overlooked by previous investigations, could significantly contribute to entropy generations.

  14. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    PubMed

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  15. Design, performance, and grounding aspects of the International Thermonuclear Experimental Reactor ion cyclotron range of frequencies antenna

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durodié, F., E-mail: frederic.durodie@rma.ac.be; Dumortier, P.; Vrancken, M.

    2014-06-15

    ITER's Ion Cyclotron Range of Frequencies (ICRF) system [Lamalle et al., Fusion Eng. Des. 88, 517–520 (2013)] comprises two antenna launchers designed by CYCLE (a consortium of European associations listed in the author affiliations above) on behalf of ITER Organisation (IO), each inserted as a Port Plug (PP) into one of ITER's Vacuum Vessel (VV) ports. Each launcher is an array of 4 toroidal by 6 poloidal RF current straps specified to couple up to 20 MW in total to the plasma in the frequency range of 40 to 55 MHz but limited to a maximum system voltage of 45 kV and limits on RF electric fields depending on their location and direction with respect to, respectively, the torus vacuum and the toroidal magnetic field. A crucial aspect of coupling ICRF power to plasmas is the knowledge of the plasma density profiles in the Scrape-Off Layer (SOL) and the location of the RF current straps with respect to the SOL. The launcher layout and details were optimized and its performance estimated for a worst case SOL provided by the IO. The paper summarizes the estimated performance obtained within the operational parameter space specified by IO. Aspects of the RF grounding of the whole antenna PP to the VV port and the effect of the voids between the PP and the Blanket Shielding Modules (BSM) surrounding the antenna front are discussed. These blanket modules, whose dimensions are of the order of the ICRF wavelengths, together with the clearance gaps between them will constitute a corrugated structure which will interact with the electromagnetic waves launched by ICRF antennas. The conditions in which the grooves constituted by the clearance gaps between the blanket modules can become resonant are studied. Simple analytical models and numerical simulations show that mushroom type structures (with larger gaps at the back than at the front) can bring down the resonance frequencies, which could lead to large voltages in the gaps between the blanket modules and perturb the RF properties of the antenna if they are in the ICRF operating range. The effect on the wave propagation along the wall structure, which is acting as a spatially periodic (toroidally and poloidally) corrugated structure, and hence constitutes a slow wave structure modifying the wall boundary condition, is examined.

  16. Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline

    NASA Technical Reports Server (NTRS)

    Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore

    2017-01-01

    We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1 deg to 30.3 deg. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/Nnet), reaching an 85% and 95% match between the reconstructed and actual waveform below S/Nnet ≈ 20 and S/Nnet ≈ 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is only limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.

  17. Sampling based State of Health estimation methodology for Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Camci, Fatih; Ozkurt, Celil; Toker, Onur; Atamuradov, Vepa

    2015-03-01

    Storage and management of energy is becoming an increasingly important problem, especially for electric and hybrid vehicle applications. The Li-ion battery is one of the most important technological alternatives for high-capacity energy storage and related industrial applications. The State of Health (SoH) of Li-ion batteries plays a critical role in their deployment from economic, safety, and availability aspects. Most, if not all, studies related to SoH estimation focus on the measurement of a new parameter or physical phenomenon related to SoH, or on the development of new statistical/computational methods using several parameters. This paper presents a new approach to SoH estimation for Li-ion battery systems with multiple battery cells. The main idea is a new circuit topology which enables separation of the battery cells into two groups, main and test batteries, whenever an SoH-related measurement is to be conducted. All battery cells will be connected to the main battery during the normal mode of operation. When a measurement is needed for SoH estimation, some of the cells will be separated from the main battery, and SoH-related measurements will be performed on these units. Compared to classical SoH measurement methods, which deal with the whole battery system, the proposed method estimates the SoH of the system by separating a small but representative set of cells. While SoH measurements are conducted on these isolated cells, the remaining cells in the main battery continue to function in normal mode, albeit at slightly reduced performance levels. Preliminary experimental results are quite promising and validate the feasibility of the proposed approach. Technical details of the proposed circuit architecture are also summarized in the paper.
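
    A minimal sketch of what an SoH-related measurement on the isolated test cells could look like, assuming a coulomb-counted controlled full discharge and SoH defined as measured over nominal capacity; the sampling interface and the numbers are hypothetical, not the authors' circuit.

      import numpy as np

      def soh_from_discharge(current_a, dt_s, nominal_capacity_ah):
          """Coulomb-count a controlled full discharge of the isolated test cells.

          current_a : sampled discharge current (A), positive while discharging
          dt_s      : sampling interval (s)
          """
          discharged_ah = np.sum(current_a) * dt_s / 3600.0
          return discharged_ah / nominal_capacity_ah

      # hypothetical 1 Hz current log of a 2 A constant-current discharge lasting ~4.2 h
      current = np.full(int(4.2 * 3600), 2.0)
      soh = soh_from_discharge(current, dt_s=1.0, nominal_capacity_ah=10.0)
      print(f"estimated SoH of the test cells: {soh:.0%}")   # ~84% of nominal capacity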

  18. Trap configuration and spacing influences parameter estimates in spatial capture-recapture models

    USGS Publications Warehouse

    Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew

    2014-01-01

    An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
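
    A minimal sketch of the spacing heuristic noted above, assuming the common half-normal detection model used in spatial capture-recapture; the baseline detection probability and sigma are illustrative.

      import numpy as np

      def half_normal_detection(d, p0=0.2, sigma=2.0):
          """Half-normal detection probability used in spatial capture-recapture models."""
          return p0 * np.exp(-d**2 / (2.0 * sigma**2))

      sigma = 2.0   # spatial scale parameter (e.g., km), related to home-range size
      for multiple in (1, 2, 3):
          d = multiple * sigma
          print(f"trap at {multiple} sigma ({d:.1f} km): p = {half_normal_detection(d, sigma=sigma):.3f}")
      # detection probability falls off sharply beyond ~2 sigma, which is why spacing traps
      # no more than about twice sigma keeps individuals exposed to multiple traps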

  19. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments

    PubMed Central

    2014-01-01

    Background Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. Results We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Conclusions Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality. PMID:24980787

  20. Estimating causal effects with a non-paranormal method for the design of efficient intervention experiments.

    PubMed

    Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi

    2014-06-30

    Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.
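
    A minimal sketch of the non-paranormal idea, assuming a rank-based marginal transform to Gaussian scores followed by the Fisher-z partial-correlation test that the PC-algorithm relies on; this is not the authors' full NPN-IDA pipeline, and the data are synthetic.

      import numpy as np
      from scipy import stats

      def nonparanormal(X):
          """Rank-based marginal transform of each column to standard normal scores."""
          n = X.shape[0]
          U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / (n + 1)   # ranks mapped to (0, 1)
          return stats.norm.ppf(U)

      def independent(Z, i, j, cond, alpha=0.05):
          """Fisher-z test of the partial correlation between columns i and j given 'cond'."""
          cols = [i, j] + list(cond)
          P = np.linalg.inv(np.corrcoef(Z[:, cols], rowvar=False))
          r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])
          z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(Z.shape[0] - len(cond) - 3)
          return 2 * (1 - stats.norm.cdf(abs(z))) > alpha    # True if independence is not rejected

      rng = np.random.default_rng(3)
      x = np.exp(rng.normal(size=500))                       # non-Gaussian marginals
      y = np.exp(0.8 * np.log(x) + rng.normal(size=500))     # depends on x
      w = rng.gamma(2.0, size=500)                           # independent of x
      Z = nonparanormal(np.column_stack([x, y, w]))
      print("x independent of y:", independent(Z, 0, 1, []))   # expect False
      print("x independent of w:", independent(Z, 0, 2, []))   # expect True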

  1. Journal: A Review of Some Tracer-Test Design Equations for Tracer-Mass Estimation and Sample Collection Frequency

    EPA Science Inventory

    The necessary tracer mass, the initial sample-collection time, and the subsequent sample-collection frequency are the three most difficult aspects of a proposed tracer test to estimate prior to conducting it. To facilitate tracer-mass estimation, 33 mass-estima...

  2. Assessment of the apparent bending stiffness and damping of multilayer plates; modelling and experiment

    NASA Astrophysics Data System (ADS)

    Ege, Kerem; Roozen, N. B.; Leclère, Quentin; Rinaldi, Renaud G.

    2018-07-01

    In the context of aeronautics, automotive and construction applications, the design of light multilayer plates with optimized vibroacoustical damping and isolation performances remains a major industrial challenge and a hot topic of research. This paper focuses on the vibrational behavior of three-layered sandwich composite plates in a broad-band frequency range. Several aspects are studied through measurement techniques and analytical modelling of a steel/polymer/steel plate sandwich system. A contactless measurement of the velocity field of plates using a scanning laser vibrometer is performed, from which the equivalent single layer complex rigidity (apparent bending stiffness and apparent damping) in the mid/high frequency ranges is estimated. The results are combined with low/mid frequency estimations obtained with a high-resolution modal analysis method so that the frequency dependent equivalent Young's modulus and equivalent loss factor of the composite plate are identified for the whole [40 Hz-20 kHz] frequency band. The results are in very good agreement with an equivalent single layer analytical modelling based on wave propagation analysis (model of Guyader). The comparison with this model allows identifying the frequency dependent complex modulus of the polymer core layer through inverse resolution. Dynamical mechanical analysis measurements are also performed on the polymer layer alone and compared with the values obtained through the inverse method. Again, a good agreement between these two estimations over the broad-band frequency range demonstrates the validity of the approach.

  3. Parental and Child Factors Associated with Under-Estimation of Children with Excess Weight in Spain.

    PubMed

    de Ruiter, Ingrid; Olmedo-Requena, Rocío; Jiménez-Moleón, José Juan

    2017-11-01

    Objective: Understanding obesity misperception and associated factors can improve strategies to increase obesity identification and intervention. We investigate underestimation of child excess weight from a broader perspective, incorporating perceptions, views, and psychosocial aspects associated with obesity. Methods: This study used cross-sectional data from the Spanish National Health Survey in 2011-2012 for children aged 2-14 years who are overweight or obese. Percentages of parental misperceived excess weight were calculated. Crude and adjusted analyses were performed for both child and parental factors, analyzing associations with underestimation. Results: Two- to five-year-olds have the highest prevalence of misperceived overweight or obesity, around 90%. In the 10-14 year old age group, approximately 63% of overweight teens were misperceived as normal weight, as were 35.7% and 40% of obese males and females. Child gender did not affect underestimation, whereas a younger age did. Aspects of child social and mental health were associated with underestimation, as was short sleep duration. Exercise, weekend TV and videogames, and food habits had no effect on underestimation. Fathers were more likely to misperceive their child's weight status; however, parental age had no effect. Smokers and parents with excess weight were less likely to misperceive their child's weight status. Parents being on a diet also decreased the odds of underestimation. Conclusions for practice: This study identifies some characteristics of both parents and children which are associated with underestimation of child excess weight. These characteristics can be considered in primary care, prevention strategies, and further research.

  4. Phase estimation with nonunitary interferometers: Information as a metric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahder, Thomas B.

    2011-05-15

    Determining the phase in one arm of a quantum interferometer is discussed taking into account the three nonideal aspects in real experiments: nondeterministic state preparation, nonunitary state evolution due to losses during state propagation, and imperfect state detection. A general expression is written for the probability of a measurement outcome taking into account these three nonideal aspects. As an example of applying the formalism, the classical Fisher information and fidelity (Shannon mutual information between phase and measurements) are computed for few-photon Fock and N00N states input into a lossy Mach-Zehnder interferometer. These three nonideal aspects lead to qualitative differences in phase estimation, such as a decrease in fidelity and Fisher information that depends on the true value of the phase.
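
    A minimal numerical sketch of the classical Fisher information for the phase, using a toy three-outcome model (a single photon clicks at either port with total probability eta, or is lost) rather than the paper's Fock and N00N state analysis; eta and the visibility are illustrative.

      import numpy as np

      def outcome_probs(phi, eta=0.7, visibility=0.95):
          """Toy lossy two-port interferometer with a single photon.

          Outcomes: click at port 1, click at port 2, or photon lost (no click).
          """
          p1 = eta * 0.5 * (1.0 + visibility * np.cos(phi))
          p2 = eta * 0.5 * (1.0 - visibility * np.cos(phi))
          return np.array([p1, p2, 1.0 - eta])

      def classical_fisher_information(phi, dphi=1e-6, **kwargs):
          """F(phi) = sum_k (dp_k/dphi)^2 / p_k, with a central-difference derivative."""
          p = outcome_probs(phi, **kwargs)
          dp = (outcome_probs(phi + dphi, **kwargs) - outcome_probs(phi - dphi, **kwargs)) / (2 * dphi)
          mask = p > 1e-12
          return float(np.sum(dp[mask] ** 2 / p[mask]))

      for phi in (0.5, 1.0, np.pi / 2):
          F = classical_fisher_information(phi)
          print(f"phi = {phi:.2f}: F = {F:.3f}, Cramer-Rao bound per shot = {1/np.sqrt(F):.3f}")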

  5. Image-based aircraft pose estimation: a comparison of simulations and real-world data

    NASA Astrophysics Data System (ADS)

    Breuers, Marcel G. J.; de Reus, Nico

    2001-10-01

    The problem of estimating aircraft pose information from mono-ocular image data is considered using a Fourier descriptor based algorithm. The dependence of pose estimation accuracy on image resolution and aspect angle is investigated through simulations using sets of synthetic aircraft images. Further evaluation shows that good pose estimation accuracy can be obtained in real-world image sequences.
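
    A minimal sketch of the Fourier-descriptor representation underlying such algorithms: the silhouette contour is treated as a complex signal, its FFT magnitudes are normalized for translation, scale, and starting point, and the descriptors of an observed silhouette are matched against a small library. The contours here are toy ellipses, not aircraft silhouettes rendered at known aspect angles.

      import numpy as np

      def fourier_descriptors(contour_xy, n_coeffs=8):
          """Translation/scale-normalized Fourier descriptors of a closed contour."""
          z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as a complex signal
          Z = np.fft.fft(z)
          Z[0] = 0.0                                     # drop DC term: translation invariance
          Z = Z / np.abs(Z[1])                           # scale invariance
          mags = np.abs(Z)                               # magnitudes: start-point/rotation invariance
          return np.concatenate([mags[1:1 + n_coeffs], mags[-n_coeffs:]])

      def sample_ellipse(a, b, n=128):
          t = np.linspace(0, 2 * np.pi, n, endpoint=False)
          return np.column_stack([a * np.cos(t), b * np.sin(t)])

      # toy matching: find which library shape best matches the observed (rescaled) contour
      library = {ratio: fourier_descriptors(sample_ellipse(1.0, ratio)) for ratio in (0.2, 0.4, 0.6, 0.8)}
      observed = fourier_descriptors(sample_ellipse(2.0, 0.85))   # same shape family, ratio ~0.42
      best = min(library, key=lambda r: np.linalg.norm(library[r] - observed))
      print("best-matching library aspect ratio:", best)          # expect 0.4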

  6. Interactive degraded document enhancement and ground truth generation

    NASA Astrophysics Data System (ADS)

    Bal, G.; Agam, G.; Frieder, O.; Frieder, G.

    2008-01-01

    Degraded documents are frequently obtained in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system is capable of processing document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurements of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.

  7. Computational Investigation of the Performance and Back-Pressure Limits of a Hypersonic Inlet

    NASA Technical Reports Server (NTRS)

    Smart, Michael K.; White, Jeffery A.

    2002-01-01

    A computational analysis of Mach 6.2 operation of a hypersonic inlet with rectangular-to-elliptical shape transition has been performed. The results of the computations are compared with experimental data for cases with and without a manually imposed back-pressure. While the no-back-pressure numerical solutions match the general trends of the data, certain features observed in the experiments did not appear in the computational solutions. The reasons for these discrepancies are discussed and possible remedies are suggested. Most importantly, however, the computational analysis increased the understanding of the consequences of certain aspects of the inlet design. This will enable the performance of future inlets of this class to be improved. Computational solutions with back-pressure under-estimated the back-pressure limit observed in the experiments, but did supply significant insight into the character of highly back-pressured inlet flows.

  8. Relationships of role stressors with organizational citizenship behavior: a meta-analysis.

    PubMed

    Eatough, Erin M; Chang, Chu-Hsiang; Miloslavic, Stephanie A; Johnson, Russell E

    2011-05-01

    Several quantitative reviews have documented the negative relationships that role stressors have with task performance. Surprisingly, much less attention has been directed at the impact of role stressors on other aspects of job performance, such as organizational citizenship behavior (OCB). The goal of this study was to therefore estimate the overall relationships of role stressors (i.e., role ambiguity, conflict, and overload) with OCB. A meta-analysis of 42 existing studies indicated that role ambiguity and role conflict were negatively related to OCB and that these relationships were moderated by the target of OCB, type of organization, OCB rating source, and publication status. As expected, role conflict had a stronger negative relationship with OCB than it did with task performance. Finally, we found support for a path model in which job satisfaction mediated relationships of role stressors with OCB and for a positive direct relationship between role overload and OCB.

  9. Sentinel 2 products and data quality status

    NASA Astrophysics Data System (ADS)

    Clerc, Sebastien; Gascon, Ferran; Bouzinac, Catherine; Touli-Lebreton, Dimitra; Francesconi, Benjamin; Lafrance, Bruno; Louis, Jerome; Alhammoud, Bahjat; Massera, Stephane; Pflug, Bringfried; Viallefont, Francoise; Pessiot, Laetitia

    2017-04-01

    Since July 2015, Sentinel-2A provides high-quality multi-spectral images with 10 m spatial resolution. With the launch of Sentinel-2B scheduled for early March 2017, the mission will create a consistent time series with a revisit time of 5 days. The consistency of the time series is ensured by some specific performance requirements such as multi-temporal spatial co-registration and radiometric stability, routinely monitored by the Sentinel-2 Mission Performance Centre (S2MPC). The products also provide a rich set of metadata and auxiliary data to support higher-level processing. This presentation will focus on the current status of the Sentinel-2 L1C and L2A products, including dissemination and product format aspects. Up-to-date mission performance estimations will be presented. Finally we will provide an outlook on the future evolutions: commissioning tasks for Sentinel-2B, geometric refinement, product format and processing improvements.

  10. A Compact Methodology to Understand, Evaluate, and Predict the Performance of Automatic Target Recognition

    PubMed Central

    Li, Yanpeng; Li, Xiang; Wang, Hongqiang; Chen, Yiping; Zhuang, Zhaowen; Cheng, Yongqiang; Deng, Bin; Wang, Liandong; Zeng, Yonghu; Gao, Lei

    2014-01-01

    This paper offers a compact methodology for evaluating the performance of an automatic target recognition (ATR) system: (a) a standard description of the ATR system's output is suggested, a quantity indicating the operating condition is presented based on the principle of feature extraction in pattern recognition, and a series of statistical indexes assessing different aspects of the output are developed; (b) the performance of the ATR system is interpreted by a quality factor grounded in engineering mathematics; (c) through a novel probability-based utility called "context-probability" estimation, performance prediction for an ATR system is realized. The simulation results show that the performance of an ATR system can be accounted for and forecast by the above-mentioned measures. Compared to existing technologies, the proposed method offers more objective performance conclusions for an ATR system. These conclusions may be helpful in assessing the practical capability of the tested ATR system. At the same time, the generalization performance of the proposed method is good. PMID:24967605

  11. [Depressive symptoms among medical intern students in a Brazilian public university].

    PubMed

    Costa, Edméa Fontes de Oliva; Santana, Ygo Santos; Santos, Ana Teresa Rodrigues de Abreu; Martins, Luiz Antonio Nogueira; Melo, Enaldo Vieira de; Andrade, Tarcísio Matos de

    2012-01-01

    To estimate, among Medical School intern students, the prevalence of depressive symptoms and their severity, as well as associated factors. Cross-sectional study in May 2008, with a representative sample of medical intern students (n = 84) from Universidade Federal de Sergipe (UFS). The Beck Depression Inventory (BDI) and a structured questionnaire containing information on sociodemographic variables, the teaching-learning process, and personal aspects were used. The exploratory data analysis was performed by descriptive and inferential statistics. Finally, the analysis of multiple variables by logistic regression and the calculation of simple and adjusted ORs with their respective 95% confidence intervals were performed. The general prevalence was 40.5%, with 1.2% (95% CI: 0.0-6.5) of severe depressive symptoms; 4.8% (95% CI: 1.3-11.7) of moderate depressive symptoms; and 34.5% (95% CI: 24.5-45.7) of mild depressive symptoms. The logistic regression revealed the variables most strongly associated with the emergence of depressive symptoms: thoughts of dropping out (OR 6.24; p = 0.002); emotional stress (OR 7.43; p = 0.0004); and average academic performance (OR 4.74; p = 0.0001). The high prevalence of depressive symptoms in the study population was associated with variables related to the teaching-learning process and personal aspects, suggesting that immediate preventive measures regarding Medical School graduation and student care are required.
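
    The adjusted odds ratios with 95% confidence intervals reported above are the standard output of a multivariable logistic regression. The sketch below shows one way such a table could be produced; the variable names and the simulated data frame are hypothetical placeholders, not the study's data.

      # Sketch: adjusted odds ratios and 95% CIs from a logistic regression
      # (hypothetical variable names, simulated data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def adjusted_odds_ratios(df, outcome, predictors):
          X = sm.add_constant(df[predictors].astype(float))
          fit = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
          or_ = np.exp(fit.params)
          ci = np.exp(fit.conf_int())          # 95% CI on the odds-ratio scale
          out = pd.DataFrame({"OR": or_, "2.5%": ci[0], "97.5%": ci[1], "p": fit.pvalues})
          return out.drop(index="const")

      rng = np.random.default_rng(0)
      df = pd.DataFrame({
          "thoughts_of_dropping_out": rng.integers(0, 2, 200),
          "emotional_stress": rng.integers(0, 2, 200),
      })
      df["depressive_symptoms"] = (rng.random(200) < 0.2 + 0.3 * df["emotional_stress"]).astype(int)
      print(adjusted_odds_ratios(df, "depressive_symptoms",
                                 ["thoughts_of_dropping_out", "emotional_stress"]))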

  12. Analysis of competition performance in dressage and show jumping of Dutch Warmblood horses.

    PubMed

    Rovere, G; Ducro, B J; van Arendonk, J A M; Norberg, E; Madsen, P

    2016-12-01

    Most Warmblood horse studbooks aim to improve the performance in dressage and show jumping. The Dutch Royal Warmblood Studbook (KWPN) includes the highest score achieved in competition by a horse to evaluate its genetic ability of performance. However, the records collected during competition are associated with some aspects that might affect the quality of the genetic evaluation based on these records. These aspects include the influence of rider, censoring and preselection of the data. The aim of this study was to quantify the impact of rider effect, censoring and preselection on the genetic analysis of competition data of dressage and show jumping of KWPN. Different models including rider effect were evaluated. To assess the impact of censoring, genetic parameters were estimated in data sets that differed in the degree of censoring. The effect of preselection on variance components was analysed by defining a binary trait (sport-status) depending on whether the horse has a competition record or not. This trait was included in a bivariate model with the competition trait and used all horses registered by KWPN since 1984. Results showed that performance in competition for dressage and show jumping is a heritable trait (h² ≈ 0.11-0.13) and that it is important to account for the effect of rider in the genetic analysis. Censoring had a small effect on the genetic parameter for highest performance achieved by the horse. A moderate heritability obtained for sport-status indicates that preselection has a genetic basis, but the effect on genetic parameters was relatively small. © 2016 Blackwell Verlag GmbH.

  13. Evaluation of the Performance of the Distributed Phased-MIMO Sonar.

    PubMed

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-11

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments.

  14. Evaluation of the Performance of the Distributed Phased-MIMO Sonar

    PubMed Central

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-01

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments. PMID:28085071

  15. Physical limits to biochemical signaling

    NASA Astrophysics Data System (ADS)

    Bialek, William; Setayeshgar, Sima

    2005-07-01

    Many crucial biological processes operate with surprisingly small numbers of molecules, and there is renewed interest in analyzing the impact of noise associated with these small numbers. Twenty-five years ago, Berg and Purcell showed that bacterial chemotaxis, where a single-celled organism must respond to small changes in concentration of chemicals outside the cell, is limited directly by molecule counting noise and that aspects of the bacteria's behavioral and computational strategies must be chosen to minimize the effects of this noise. Here, we revisit and generalize their arguments to estimate the physical limits to signaling processes within the cell and argue that recent experiments are consistent with performance approaching these limits.

  16. Estimating net surface shortwave radiation from Chinese geostationary meteorological satellite FengYun-2D (FY-2D) data under clear sky.

    PubMed

    Zhang, Xiaoyu; Li, Lingling

    2016-03-21

    Net surface shortwave radiation (NSSR) significantly affects regional and global climate change, and is an important aspect of research on surface radiation budget balance. Many previous studies have proposed methods for estimating NSSR. This study proposes a method to calculate NSSR using FY-2D short-wave channel data. Firstly, a linear regression model is established between the top-of-atmosphere (TOA) broadband albedo (r) and the narrowband reflectivity (ρ1), based on data simulated with MODTRAN 4.2. Secondly, the relationship between the surface absorption coefficient (a_s) and broadband albedo (r) is determined by dividing the surface type into land, sea, or snow & ice, and NSSR can then be calculated. Thirdly, sensitivity analysis is performed for errors associated with sensor noise, vertically integrated atmospheric water content, view zenith angle and solar zenith angle. Finally, validation using ground measurements is performed. Results show that the root mean square error (RMSE) between the estimated and actual r is less than 0.011 for all conditions, and the RMSEs between estimated and real NSSR are 26.60 W/m², 9.99 W/m², and 23.40 W/m², using simulated data for land, sea, and snow & ice surfaces, respectively. This indicates that the proposed method can be used to adequately estimate NSSR. Additionally, we compare field measurements from TaiYuan and ChangWu ecological stations with estimates using corresponding FY-2D data acquired from January to April 2012, on cloud-free days. Results show that the RMSE between the estimated and actual NSSR is 48.56 W/m², with a mean error of -2.23 W/m². Causes of errors also include measurement accuracy and estimations of atmospheric water vertical contents. This method is only suitable for cloudless conditions.
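
    The first step described above is an ordinary least-squares fit of broadband albedo against narrowband reflectivity, assessed with an RMSE. The sketch below illustrates that step only; the arrays are synthetic stand-ins, not the MODTRAN simulation output.

      # Sketch: linear regression r = a + b * rho1 and the RMSE used to judge it
      # (synthetic data standing in for the simulated reflectivity/albedo pairs).
      import numpy as np

      rng = np.random.default_rng(1)
      rho1 = rng.uniform(0.05, 0.6, 500)                              # narrowband reflectivity
      r_true = 0.02 + 0.95 * rho1 + rng.normal(0, 0.01, rho1.size)    # broadband albedo

      b, a = np.polyfit(rho1, r_true, 1)       # slope, intercept
      r_est = a + b * rho1
      rmse = np.sqrt(np.mean((r_est - r_true) ** 2))
      print(f"a={a:.4f}, b={b:.4f}, RMSE={rmse:.4f}")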

  17. Estimating spatial travel times using automatic vehicle identification data

    DOT National Transportation Integrated Search

    2001-01-01

    Prepared ca. 2001. The paper describes an algorithm that was developed for estimating reliable and accurate average roadway link travel times using Automatic Vehicle Identification (AVI) data. The algorithm presented is unique in two aspects. First, ...

  18. Rainfall: State of the Science

    NASA Astrophysics Data System (ADS)

    Testik, Firat Y.; Gebremichael, Mekonnen

    Rainfall: State of the Science offers the most up-to-date knowledge on the fundamental and practical aspects of rainfall. Each chapter, self-contained and written by prominent scientists in their respective fields, provides three forms of information: fundamental principles, detailed overview of current knowledge and description of existing methods, and emerging techniques and future research directions. The book discusses • Rainfall microphysics: raindrop morphodynamics, interactions, size distribution, and evolution • Rainfall measurement and estimation: ground-based direct measurement (disdrometer and rain gauge), weather radar rainfall estimation, polarimetric radar rainfall estimation, and satellite rainfall estimation • Statistical analyses: intensity-duration-frequency curves, frequency analysis of extreme events, spatial analyses, simulation and disaggregation, ensemble approach for radar rainfall uncertainty, and uncertainty analysis of satellite rainfall products The book is tailored to be an indispensable reference for researchers, practitioners, and graduate students who study any aspect of rainfall or utilize rainfall information in various science and engineering disciplines.

  19. Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data

    NASA Technical Reports Server (NTRS)

    Vanderesch, A. H.

    1972-01-01

    Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects on the baseline motor are initially considered. From the baseline, sufficient data is obtained to provide cost estimates of alternate approaches.

  20. Sources of uncertainty in annual forest inventory estimates

    Treesearch

    Ronald E. McRoberts

    2000-01-01

    Although design and estimation aspects of annual forest inventories have begun to receive considerable attention within the forestry and natural resources communities, little attention has been devoted to identifying the sources of uncertainty inherent in these systems or to assessing the impact of those uncertainties on the total uncertainties of inventory estimates....

  1. Age estimation by pulp-to-tooth area ratio using cone-beam computed tomography: A preliminary analysis

    PubMed Central

    Rai, Arpita; Acharya, Ashith B.; Naikmasur, Venkatesh G.

    2016-01-01

    Background: Age estimation of living or deceased individuals is an important aspect of forensic sciences. Conventionally, pulp-to-tooth area ratio (PTR) measured from periapical radiographs have been utilized as a nondestructive method of age estimation. Cone-beam computed tomography (CBCT) is a new method to acquire three-dimensional images of the teeth in living individuals. Aims: The present study investigated age estimation based on PTR of the maxillary canines measured in three planes obtained from CBCT image data. Settings and Design: Sixty subjects aged 20–85 years were included in the study. Materials and Methods: For each tooth, mid-sagittal, mid-coronal, and three axial sections—cementoenamel junction (CEJ), one-fourth root level from CEJ, and mid-root—were assessed. PTR was calculated using AutoCAD software after outlining the pulp and tooth. Statistical Analysis Used: All statistical analyses were performed using an SPSS 17.0 software program. Results and Conclusions: Linear regression analysis showed that only PTR in axial plane at CEJ had significant age correlation (r = 0.32; P < 0.05). This is probably because of clearer demarcation of pulp and tooth outline at this level. PMID:28123269

  2. Novel mathematical model to estimate ball impact force in soccer.

    PubMed

    Iga, Takahito; Nunome, Hiroyuki; Sano, Shinya; Sato, Nahoko; Ikegami, Yasuo

    2017-11-22

    Assessing ball impact force during soccer kicking is important from both performance and chronic injury prevention perspectives. We aimed to verify the appropriateness of previous models used to estimate ball impact force and to propose an improved model to better capture the time history of ball impact force. A soccer ball was fired directly onto a force platform (10 kHz) at five realistic kicking ball velocities and ball behaviour was captured by a high-speed camera (5,000 Hz). The time history of ball impact force was estimated using three existing models and two new models. A new mathematical model that took into account a rapid change in ball surface area and heterogeneous ball deformation showed a distinctive advantage in estimating the peak forces and their occurrence times and in reproducing the time history of ball impact forces more precisely, thereby reinforcing the possible mechanics of 'footballer's ankle'. Ball impact time was also systematically shortened as ball velocity increased, in contrast to practical understanding for producing faster ball velocity; however, the aspect of ball contact time must be considered carefully from a practical point of view.

  3. Discussion of the paper "The use of conditional simulation in nuclear waste site performance assessment," by Carol A. Gotway

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Doctor, P.G.

    1993-08-01

    First, we applaud Dr. Gotway for seeking via her paper to expose a wider audience of statisticians to the many interesting and challenging modeling and statistical problems in the environmental area. This well-written paper effectively explains the WIPP and the context of the analysis. Dr. Gotway's paper describes a geostatistical conditional simulation approach combined with deterministic modeling to estimate the cumulative distribution function (cdf) of groundwater travel time (GWTT), information that is needed for estimating the cumulative release of nuclear waste from the repository. We begin our discussion with comments and questions on modeling aspects of Dr. Gotway's paper. Then we discuss uncertainty and sensitivity analyses and some of the problems inherent with implementing those techniques, including correlations, elicitation of expert opinion, and planning to achieve specified Data Quality Objectives (DQOs).

  4. Measuring optical phase digitally in coherent metrology systems

    NASA Astrophysics Data System (ADS)

    Kelly, Damien P.; Ryle, James; Zhao, Liang; Sheridan, John T.

    2017-05-01

    The accurate measurement of optical phase has many applications in metrology. For biological samples, which appear transparent, the phase data provides information about the refractive index of the sample. In speckle metrology, the phase can be used to estimate stress and strains of a rough surface with high sensitivity. In this theoretical manuscript we compare and contrast the properties of two techniques for estimating the phase distribution of a wave field under the paraxial approximation: (I) A digital holographic system, and (II) An idealized phase retrieval system. Both systems use a CCD or CMOS array to measure the intensities of the wave fields that are reflected from or transmitted through the sample of interest. This introduces a numerical aspect to the problem. For the two systems above we examine how numerical calculations can limit the performance of these systems leading to a near-infinite number of possible solutions.

  5. The influence of different space-related physiological variations on exercise capacity determined by oxygen uptake kinetics.

    PubMed

    Stegemann, J

    1992-07-01

    Oxygen uptake kinetics following defined variations of work load allow one to estimate the contributions of aerobic and anaerobic energy supply, which is the basis for determining work capacity. Under the aspect of long-duration missions with application of adequately dosed countermeasures, a reliable estimate of the astronaut's work capacity is important to adjust the necessary inflight training. Since the kinetics of oxygen uptake originate in the working muscle group itself, while measurements are performed at the mouth, various influences within the oxygen transport system might disturb the determinations. There are not only detraining effects but also other well-known influences, such as blood and fluid shifts induced by weightlessness. They might have an impact on the circulatory system. Some of these factors have been simulated by immersion, blood donation, and changing of the body position.

  6. The influence of different space-related physiological variations on exercise capacity determined by oxygen uptake kinetics

    NASA Astrophysics Data System (ADS)

    Stegemann, J.

    Oxygen uptake kinetics following defined variations of work load allow one to estimate the contributions of aerobic and anaerobic energy supply, which is the basis for determining work capacity. Under the aspect of long-duration missions with application of adequately dosed countermeasures, a reliable estimate of the astronaut's work capacity is important to adjust the necessary inflight training. Since the kinetics of oxygen uptake originate in the working muscle group itself, while measurements are performed at the mouth, various influences within the oxygen transport system might disturb the determinations. There are not only detraining effects but also other well-known influences, such as blood and fluid shifts induced by weightlessness. They might have an impact on the circulatory system. Some of these factors have been simulated by immersion, blood donation, and changing of the body position.

  7. Key Aspects of the Federal Direct Loan Program's Cost Estimates: Department of Education. Report to Congressional Requesters.

    ERIC Educational Resources Information Center

    Calbom, Linda M.; Ashby, Cornelia M.

    Because of concerns about the Department of Education's reliance on estimates to project costs of the William D. Ford Federal Direct Loan Program (FDLP) and a lack of historical information on which to base those estimates, Congress asked the General Accounting Office (GAO) to review how the department develops its cost estimates for the program,…

  8. Reciprocal Modulation of Cognitive and Emotional Aspects in Pianistic Performances

    PubMed Central

    Higuchi, Marcia K. Kodama; Fornari, José; Del Ben, Cristina M.; Graeff, Frederico G.; Leite, João Pereira

    2011-01-01

    Background High-level piano performance requires complex integration of perceptual, motor, cognitive and emotive skills. Observations in psychology and neuroscience studies have suggested reciprocal inhibitory modulation of the cognition by emotion and emotion by cognition. However, it is still unclear how cognitive states may influence the pianistic performance. The aim of the present study is to verify the influence of cognitive and affective attention in piano performances. Methods and Findings Nine pianists were instructed to play the same piece of music, firstly focusing only on cognitive aspects of musical structure (cognitive performances), and secondly, paying attention solely to affective aspects (affective performances). Audio files from pianistic performances were examined using a computational model that retrieves nine specific musical features (descriptors) – loudness, articulation, brightness, harmonic complexity, event detection, key clarity, mode detection, pulse clarity and repetition. In addition, the number of volunteers' errors in the recording sessions was counted. Comments from pianists about their thoughts during performances were also evaluated. The analyses of audio files through the musical descriptors indicated that the affective performances had more agogics, legato, and piano phrasing, and less perceived event density, when compared to the cognitive ones. Error analysis demonstrated that volunteers misplayed more left-hand notes in the cognitive performances than in the affective ones. Volunteers also played more wrong notes in affective than in cognitive performances. These results correspond to the volunteers' comments that in the affective performances, the cognitive aspects of piano execution are inhibited, whereas in the cognitive performances, the expressiveness is inhibited. Conclusions Therefore, the present results indicate that attention to the emotional aspects of performance enhances expressiveness, but constrains cognitive and motor skills in the piano execution. In contrast, attention to the cognitive aspects may constrain the expressivity and automatism of piano performances. PMID:21931716

  9. Chandra X-Ray Observatory Pointing Control System Performance During Transfer Orbit and Initial On-Orbit Operations

    NASA Technical Reports Server (NTRS)

    Quast, Peter; Tung, Frank; West, Mark; Wider, John

    2000-01-01

    The Chandra X-ray Observatory (CXO, formerly AXAF) is the third of the four NASA great observatories. It was launched from Kennedy Space Center on 23 July 1999 aboard the Space Shuttle Columbia and was successfully inserted in a 330 x 72,000 km orbit by the Inertial Upper Stage (IUS). Through a series of five Integral Propulsion System burns, CXO was placed in a 10,000 x 139,000 km orbit. After initial on-orbit checkout, Chandra's first light images were unveiled to the public on 26 August 1999. The CXO Pointing Control and Aspect Determination (PCAD) subsystem is designed to perform attitude control and determination functions in support of transfer orbit operations and the on-orbit science mission. After a brief description of the PCAD subsystem, the paper highlights the PCAD activities during the transfer orbit and initial on-orbit operations. These activities include: CXO/IUS separation, attitude and gyro bias estimation with earth sensor and sun sensor, attitude control and disturbance torque estimation for delta-v burns, momentum build-up due to gravity gradient and solar pressure, momentum unloading with thrusters, attitude initialization with star measurements, gyro alignment calibration, maneuvering and transition to normal pointing, and PCAD pointing and stability performance.

  10. Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Mandrake, Lukas; Green, Robert O.

    2013-01-01

    Mapping localized spectral features in large images demands sensitive and robust detection algorithms. Two aspects of large images that can harm matched-filter detection performance are addressed simultaneously. First, multimodal backgrounds may thwart the typical Gaussian model. Second, outlier features can trigger false detections from large projections onto the target vector. Two state-of-the-art approaches are combined that independently address outlier false positives and multimodal backgrounds. The background clustering models multimodal backgrounds, and the mixture tuned matched filter (MT-MF) addresses outliers. Combining the two methods captures significant additional performance benefits. The resulting mixture tuned clutter matched filter (MT-CMF) shows effective performance on simulated and airborne datasets. The classical MNF transform was applied, followed by k-means clustering. Then, each cluster's mean, covariance, and the corresponding eigenvalues were estimated. This yields a cluster-specific matched filter estimate as well as a cluster-specific feasibility score to flag outlier false positives. The technology described is a proof of concept that may be employed in future target detection and mapping applications for remote imaging spectrometers. It is of most direct relevance to JPL proposals for airborne and orbital hyperspectral instruments. Applications include subpixel target detection in hyperspectral scenes for military surveillance. Earth science applications include mineralogical mapping, species discrimination for ecosystem health monitoring, and land use classification.
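
    The cluster-specific matched filter idea can be illustrated as follows: model the background with k-means clusters, then score each pixel against the matched filter of its own cluster. This is a reduced sketch only; the MNF transform, mixture tuning, and feasibility flagging of the full MT-CMF method are omitted, and the data are synthetic.

      # Sketch: per-cluster matched filter scores over a multimodal background.
      import numpy as np
      from sklearn.cluster import KMeans

      def clutter_matched_filter(X, target, k=3):
          """X: (n_pixels, n_bands) spectra; target: (n_bands,) target signature."""
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          scores = np.empty(X.shape[0])
          for c in range(k):
              idx = labels == c
              mu = X[idx].mean(axis=0)
              cov = np.cov(X[idx], rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
              w = np.linalg.solve(cov, target - mu)                           # filter vector
              w /= (target - mu) @ w                                          # unit response at target
              scores[idx] = (X[idx] - mu) @ w
          return scores

      # Toy example: two background modes plus a bright target signature.
      rng = np.random.default_rng(0)
      bg = np.vstack([rng.normal(0, 1, (1000, 10)), rng.normal(3, 1, (1000, 10))])
      target = np.full(10, 6.0)
      scores = clutter_matched_filter(bg, target, k=2)
      print(scores.max(), scores.mean())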

  11. Assessing respondent-driven sampling.

    PubMed

    Goel, Sharad; Salganik, Matthew J

    2010-04-13

    Respondent-driven sampling (RDS) is a network-based technique for estimating traits in hard-to-reach populations, for example, the prevalence of HIV among drug injectors. In recent years RDS has been used in more than 120 studies in more than 20 countries and by leading public health organizations, including the Centers for Disease Control and Prevention in the United States. Despite the widespread use and growing popularity of RDS, there has been little empirical validation of the methodology. Here we investigate the performance of RDS by simulating sampling from 85 known, network populations. Across a variety of traits we find that RDS is substantially less accurate than generally acknowledged and that reported RDS confidence intervals are misleadingly narrow. Moreover, because we model a best-case scenario in which the theoretical RDS sampling assumptions hold exactly, it is unlikely that RDS performs any better in practice than in our simulations. Notably, the poor performance of RDS is driven not by the bias but by the high variance of estimates, a possibility that had been largely overlooked in the RDS literature. Given the consistency of our results across networks and our generous sampling conditions, we conclude that RDS as currently practiced may not be suitable for key aspects of public health surveillance where it is now extensively applied.

  12. Assessing respondent-driven sampling

    PubMed Central

    Goel, Sharad; Salganik, Matthew J.

    2010-01-01

    Respondent-driven sampling (RDS) is a network-based technique for estimating traits in hard-to-reach populations, for example, the prevalence of HIV among drug injectors. In recent years RDS has been used in more than 120 studies in more than 20 countries and by leading public health organizations, including the Centers for Disease Control and Prevention in the United States. Despite the widespread use and growing popularity of RDS, there has been little empirical validation of the methodology. Here we investigate the performance of RDS by simulating sampling from 85 known, network populations. Across a variety of traits we find that RDS is substantially less accurate than generally acknowledged and that reported RDS confidence intervals are misleadingly narrow. Moreover, because we model a best-case scenario in which the theoretical RDS sampling assumptions hold exactly, it is unlikely that RDS performs any better in practice than in our simulations. Notably, the poor performance of RDS is driven not by the bias but by the high variance of estimates, a possibility that had been largely overlooked in the RDS literature. Given the consistency of our results across networks and our generous sampling conditions, we conclude that RDS as currently practiced may not be suitable for key aspects of public health surveillance where it is now extensively applied. PMID:20351258

  13. Neighbour lists for smoothed particle hydrodynamics on GPUs

    NASA Astrophysics Data System (ADS)

    Winkler, Daniel; Rezavand, Massoud; Rauch, Wolfgang

    2018-04-01

    The efficient iteration of neighbouring particles is a performance critical aspect of any high performance smoothed particle hydrodynamics (SPH) solver. SPH solvers that implement a constant smoothing length generally divide the simulation domain into a uniform grid to reduce the computational complexity of the neighbour search. Based on this method, particle neighbours are either stored per grid cell or for each individual particle, denoted as Verlet list. While the latter approach has significantly higher memory requirements, it has the potential for a significant computational speedup. A theoretical comparison is performed to estimate the potential improvements of the method based on unknown hardware dependent factors. Subsequently, the computational performance of both approaches is empirically evaluated on graphics processing units. It is shown that the speedup differs significantly for different hardware, dimensionality and floating point precision. The Verlet list algorithm is implemented as an alternative to the cell linked list approach in the open-source SPH solver DualSPHysics and provided as a standalone software package.
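
    The two strategies compared above can be sketched in a few lines for the 2-D, constant smoothing length case: a uniform cell grid of spacing h (cell linked list) and a per-particle neighbour list (Verlet list) built from that grid. This is a simplified CPU/NumPy illustration of the data structures only, not the GPU implementation in DualSPHysics.

      # Sketch: cell linked list vs. Verlet list neighbour search in 2-D.
      import numpy as np
      from collections import defaultdict

      def build_cell_list(pos, h):
          cells = defaultdict(list)
          for i, c in enumerate(np.floor(pos / h).astype(int)):
              cells[tuple(c)].append(i)
          return cells

      def build_verlet_lists(pos, h, cells):
          neigh = [[] for _ in range(len(pos))]
          for i, p in enumerate(pos):
              ci = tuple(np.floor(p / h).astype(int))
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      for j in cells.get((ci[0] + dx, ci[1] + dy), []):
                          if j != i and np.sum((pos[j] - p) ** 2) < h * h:
                              neigh[i].append(j)
          return neigh

      rng = np.random.default_rng(0)
      pos = rng.random((2000, 2))
      h = 0.05
      cells = build_cell_list(pos, h)             # memory-light: iterate 9 cells per query
      verlet = build_verlet_lists(pos, h, cells)  # memory-heavy: direct per-particle lists
      print(sum(len(n) for n in verlet) / len(pos), "neighbours per particle on average")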

  14. Impact of Footprint Diameter and Off-Nadir Pointing on the Precision of Canopy Height Estimates from Spaceborne Lidar

    NASA Technical Reports Server (NTRS)

    Pang, Yong; Lefskky, Michael; Sun, Guoqing; Ranson, Jon

    2011-01-01

    A spaceborne lidar mission could serve multiple scientific purposes including remote sensing of ecosystem structure, carbon storage, terrestrial topography and ice sheet monitoring. The measurement requirements of these different goals will require compromises in sensor design. Footprint diameters that would be larger than optimal for vegetation studies have been proposed. Some spaceborne lidar mission designs include the possibility that a lidar sensor would share a platform with another sensor, which might require off-nadir pointing at angles of up to 16 . To resolve multiple mission goals and sensor requirements, detailed knowledge of the sensitivity of sensor performance to these aspects of mission design is required. This research used a radiative transfer model to investigate the sensitivity of forest height estimates to footprint diameter, off-nadir pointing and their interaction over a range of forest canopy properties. An individual-based forest model was used to simulate stands of mixed conifer forest in the Tahoe National Forest (Northern California, USA) and stands of deciduous forests in the Bartlett Experimental Forest (New Hampshire, USA). Waveforms were simulated for stands generated by a forest succession model using footprint diameters of 20 m to 70 m. Off-nadir angles of 0 to 16 were considered for a 25 m diameter footprint diameter. Footprint diameters in the range of 25 m to 30 m were optimal for estimates of maximum forest height (R(sup 2) of 0.95 and RMSE of 3 m). As expected, the contribution of vegetation height to the vertical extent of the waveform decreased with larger footprints, while the contribution of terrain slope increased. Precision of estimates decreased with an increasing off-nadir pointing angle, but off-nadir pointing had less impact on height estimates in deciduous forests than in coniferous forests. When pointing off-nadir, the decrease in precision was dependent on local incidence angle (the angle between the off-nadir beam and a line normal to the terrain surface) which is dependent on the off-nadir pointing angle, terrain slope, and the difference between the laser pointing azimuth and terrain aspect; the effect was larger when the sensor was aligned with the terrain azimuth but when aspect and azimuth are opposed, there was virtually no effect on R2 or RMSE. A second effect of off-nadir pointing is that the laser beam will intersect individual crowns and the canopy as a whole from a different angle which had a distinct effect on the precision of lidar estimates of height, decreasing R2 and increasing RMSE, although the effect was most pronounced for coniferous crowns.

  15. Enhancing Groundwater Cost Estimation with the Interpolation of Water Tables across the United States

    NASA Astrophysics Data System (ADS)

    Rosli, A. U. M.; Lall, U.; Josset, L.; Rising, J. A.; Russo, T. A.; Eisenhart, T.

    2017-12-01

    Analyzing the trends in water use and supply across the United States is fundamental to efforts in ensuring water sustainability. As part of this, estimating the costs of producing or obtaining water (water extraction) and the correlation with water use is an important aspect in understanding the underlying trends. This study estimates groundwater costs by interpolating the depth to water level across the US in each county. We use Ordinary and Universal Kriging, accounting for the differences between aquifers. Kriging generates a best linear unbiased estimate at each location and has been widely used to map ground-water surfaces (Alley, 1993). The spatial covariates included in the universal Kriging were land-surface elevation as well as aquifer information. The average water table is computed for each county using block kriging to obtain a national map of groundwater cost, which we compare with survey estimates of depth to the water table performed by the USDA. Groundwater extraction costs were then assumed to be proportional to water table depth. Beyond estimating the water cost, the approach can provide an indication of groundwater-stress by exploring the historical evolution of depth to the water table using time series information between 1960 and 2015. Despite data limitations, we hope to enable a more compelling and meaningful national-level analysis through the quantification of cost and stress for more economically efficient water management.
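
    A minimal ordinary kriging sketch for point interpolation of depth-to-water observations is given below. It assumes an exponential semivariogram with hand-picked sill, range, and nugget and hypothetical well data; a production workflow would fit the variogram, add drift terms for universal kriging, and aggregate with block kriging as described above.

      # Sketch: ordinary kriging of depth-to-water at a single prediction point.
      import numpy as np

      def variogram(hdist, sill=1.0, rng_=50.0, nugget=0.1):
          return nugget + (sill - nugget) * (1.0 - np.exp(-hdist / rng_))

      def ordinary_krige(xy_obs, z_obs, xy_new):
          n = len(xy_obs)
          d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = variogram(d)
          np.fill_diagonal(A[:n, :n], 0.0)      # gamma(0) = 0
          A[-1, -1] = 0.0                       # Lagrange multiplier row/column
          d0 = np.linalg.norm(xy_obs - xy_new, axis=-1)
          b = np.append(variogram(d0), 1.0)
          w = np.linalg.solve(A, b)[:n]         # kriging weights
          return w @ z_obs

      rng = np.random.default_rng(0)
      wells = rng.uniform(0, 100, (30, 2))                    # hypothetical well locations (km)
      depth = 20 + 0.1 * wells[:, 0] + rng.normal(0, 1, 30)   # hypothetical depth to water (m)
      print(ordinary_krige(wells, depth, np.array([50.0, 50.0])))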

  16. Time - motion analysis of professional rugby union players during match-play.

    PubMed

    Deutsch, M U; Kearney, G A; Rehrer, N J

    2007-02-15

    The aim of this study was to quantify the movement patterns of various playing positions during professional rugby union match-play, such that the relative importance of aerobic and anaerobic energy pathways to performance could be estimated. Video analysis was conducted of individual players (n=29) from the Otago Highlanders during six "Super 12" representative fixtures. Each movement was coded as one of six speeds of locomotion (standing still, walking, jogging, cruising, sprinting, and utility), three states of non-running intensive exertion (rucking/mauling, tackling, and scrummaging), and three discrete activities (kicking, jumping, passing). The results indicated significant demands on all energy systems in all playing positions, yet implied a greater reliance on anaerobic glycolytic metabolism in forwards, due primarily to their regular involvement in non-running intense activities such as rucking, mauling, scrummaging, and tackling. Positional group comparisons indicated that while the greatest differences existed between forwards and backs, each positional group had its own unique demands. Front row forwards were mostly involved in activities involving gaining/retaining possession, back row forwards tended to play more of a pseudo back-line role, performing less rucking/mauling than front row forwards, yet being more involved in aspects of broken play such as sprinting and tackling. While outside backs tended to specialize in the running aspects of play, inside backs tended to show greater involvement in confrontational aspects of play such as rucking/mauling and tackling. These results suggest that rugby training and fitness testing should be tailored specifically to positional groups rather than simply differentiating between forwards and backs.

  17. Methodologies for Quantitative Systems Pharmacology (QSP) Models: Design and Estimation.

    PubMed

    Ribba, B; Grimm, H P; Agoram, B; Davies, M R; Gadkar, K; Niederer, S; van Riel, N; Timmis, J; van der Graaf, P H

    2017-08-01

    With the increased interest in the application of quantitative systems pharmacology (QSP) models within medicine research and development, there is an increasing need to formalize model development and verification aspects. In February 2016, a workshop was held at Roche Pharma Research and Early Development to focus discussions on two critical methodological aspects of QSP model development: optimal structural granularity and parameter estimation. We here report in a perspective article a summary of presentations and discussions. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  18. Inertia-gravity wave radiation from the elliptical vortex in the f-plane shallow water system

    NASA Astrophysics Data System (ADS)

    Sugimoto, Norihiko

    2017-04-01

    Inertia-gravity wave (IGW) radiation from the elliptical vortex is investigated in the f-plane shallow water system. The far field of IGW is analytically derived for the case of an almost circular Kirchhoff vortex with a small aspect ratio. Cyclone-anticyclone asymmetry appears at finite values of the Rossby number (Ro) caused by the source originating in the Coriolis acceleration. While the intensity of IGWs from the cyclone monotonically decreases as f increases, that from the anticyclone increases as f increases for relatively smaller f and has a local maximum at intermediate f. A numerical experiment is conducted on a model using a spectral method in an unbounded domain. The numerical results agree quite well with the analytical ones for elliptical vortices with small aspect ratios, implying that the derived analytical forms are useful for the verification of the numerical model. For elliptical vortices with larger aspect ratios, however, significant deviation from the analytical estimates appears. The intensity of IGWs radiated in the numerical simulation is larger than that estimated analytically. The reason is that the source of IGWs is amplified during the time evolution because the shape of the vortex changes from ideal ellipse to elongated with filaments. Nevertheless, cyclone-anticyclone asymmetry similar to the analytical estimate appears in all the range of aspect ratios, suggesting that this asymmetry is a robust feature.

  19. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity as it opens the possibility of selecting a "favorable" model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting on the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining the objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a "favorable" model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458

  20. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity as it opens the possibility of selecting a 'favorable' model that yields strong treatment benefit estimate. Although there is a large volume of statistical literature targeting on the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining the objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
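
    The weighting step can be illustrated with a short sketch: a logistic working model of treatment on baseline covariates is fit first, without looking at outcomes, and its fitted probabilities are used as inverse-probability weights in a Horvitz-Thompson style contrast. This shows the weighting idea only, under simulated data, and is not the paper's full two-stage procedure.

      # Sketch: inverse-probability-weighted treatment effect in a randomized trial.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      x = rng.normal(size=(n, 2))                       # baseline covariates
      t = rng.integers(0, 2, n)                         # randomized treatment (1:1)
      y = 1.0 * t + x @ np.array([0.8, -0.5]) + rng.normal(size=n)

      # Stage 1: working propensity model e(x), fit before outcomes are examined.
      ps_fit = sm.Logit(t, sm.add_constant(x)).fit(disp=0)
      e = ps_fit.predict(sm.add_constant(x))

      # Stage 2: weighted means in each arm and their difference.
      mu1 = np.sum(t * y / e) / np.sum(t / e)
      mu0 = np.sum((1 - t) * y / (1 - e)) / np.sum((1 - t) / (1 - e))
      print("IPW treatment effect estimate:", mu1 - mu0)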

  1. Variability of Kelvin wave momentum flux from high-resolution radiosonde and radio occultation data

    NASA Astrophysics Data System (ADS)

    Sjoberg, J. P.; Zeng, Z.; Ho, S. P.; Birner, T.; Anthes, R. A.; Johnson, R. H.

    2017-12-01

    Direct measurement of momentum flux from Kelvin waves in the stratosphere remains challenging. Constraining this flux from observations is an important step towards constraining the flux from models. Here we present results from analyses using linear theory to estimate the Kelvin wave amplitudes and momentum fluxes from both high-resolution radiosondes and from radio occultation (RO) data. These radiosonde data are from a contiguous 11-year span of soundings performed at two Department of Energy Atmospheric Radiation Measurement sites, while the RO data span 14 years from multiple satellite missions. Daily time series of the flux from both sources are found to be in quantitative agreement with previous studies. Climatological analyses of these data reveal the expected seasonal cycle and variability associated with the quasi-biennial oscillation. Though both data sets provide measurements on distinct spatial and temporal scales, the estimated flux from each provides insight into separate but complementary aspects of how the Kelvin waves affect the stratosphere. Namely, flux derived from radiosonde sites provides details on the regional Kelvin wave variability, while the flux from RO data provides zonal mean estimates.

  2. A review on lithium-ion battery ageing mechanisms and estimations for automotive applications

    NASA Astrophysics Data System (ADS)

    Barré, Anthony; Deguilhem, Benjamin; Grolleau, Sébastien; Gérard, Mathias; Suard, Frédéric; Riu, Delphine

    2013-11-01

    Lithium-ion batteries have become the focus of research interest, thanks to their numerous benefits for vehicle applications. One main limitation of these technologies resides in battery ageing. The effects of battery ageing limit performance and occur throughout the battery's whole life, whether it is used or not, which is a major drawback in real usage. Furthermore, degradation takes place under every condition, but in different proportions, as usage and external conditions interact to provoke it. The ageing phenomena are highly complicated to characterize due to the cross-dependence of the factors involved. This paper reviews various aspects of recent research and developments, from different fields, on lithium-ion battery ageing mechanisms and estimation. It presents a summary of the techniques, models and algorithms used for battery ageing estimation (SOH, RUL), ranging from detailed electrochemical approaches to statistical methods based on data. To indicate the accuracy of currently used methods, their respective characteristics are discussed. Remaining challenges are detailed in depth, along with a discussion of the ideal method that could result from existing ones.

  3. Computational Aspects of N-Mixture Models

    PubMed Central

    Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S

    2015-01-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
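
    The role of the summation bound K can be made concrete with a small sketch of the N-mixture likelihood: for each site the Poisson prior over latent abundance N is summed against the binomial detection likelihood up to K, so too small a bound truncates the sum and biases abundance downwards. Toy data only; this is not the authors' automatic rule for choosing K.

      # Sketch: truncated N-mixture likelihood and the effect of the bound K.
      import numpy as np
      from scipy.stats import poisson, binom
      from scipy.optimize import minimize

      def negloglik(params, counts, K):
          lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))   # unconstrained -> (lam, p)
          N = np.arange(0, K + 1)
          prior = poisson.pmf(N, lam)
          nll = 0.0
          for y in counts:                       # y: counts at one site over T visits
              like_N = prior * np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0)
              nll -= np.log(like_N.sum())
          return nll

      rng = np.random.default_rng(1)
      Ntrue = rng.poisson(5.0, size=50)                          # latent site abundances
      counts = rng.binomial(Ntrue[:, None], 0.4, size=(50, 3))   # 3 visits per site

      for K in (10, 50, 200):
          fit = minimize(negloglik, x0=[np.log(3.0), 0.0], args=(counts, K), method="Nelder-Mead")
          print(K, "-> lambda_hat =", round(np.exp(fit.x[0]), 2))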

  4. Alphabetic letter identification: Effects of perceivability, similarity, and bias

    PubMed Central

    Mueller, Shane T.; Weidemann, Christoph T.

    2012-01-01

    The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each make powerful contributions to visual letter identification. PMID:22036587

  5. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    Report AFRL-RW-EG-TR-2015-108, Douglas V. Nance, Air Force Research Laboratory, reporting period 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second-order stochastic processes.
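
    A one-dimensional, non-intrusive polynomial chaos sketch is given below: a function of a standard normal input is expanded in probabilists' Hermite polynomials, with coefficients obtained by least squares on random samples. The test function and truncation degree are illustrative assumptions, not the report's examples.

      # Sketch: non-intrusive polynomial chaos expansion with Hermite polynomials.
      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermevander, hermeval

      g = lambda x: np.exp(0.5 * x) + np.sin(x)     # quantity of interest of a N(0,1) input

      rng = np.random.default_rng(0)
      xi = rng.standard_normal(2000)
      deg = 8
      V = hermevander(xi, deg)                      # design matrix of He_0..He_deg(xi)
      coeffs, *_ = np.linalg.lstsq(V, g(xi), rcond=None)

      # The chaos coefficients give the mean and variance directly: E[He_k^2] = k!
      norms = np.array([factorial(k) for k in range(deg + 1)])
      print("mean ~", coeffs[0], "variance ~", np.sum(coeffs[1:] ** 2 * norms[1:]))
      print("pointwise check:", g(1.3), hermeval(1.3, coeffs))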

  6. Considerations in Forest Growth Estimation Between Two Measurements of Mapped Forest Inventory Plots

    Treesearch

    Michael T. Thompson

    2006-01-01

    Several aspects of the enhanced Forest Inventory and Analysis (FIA) program?s national plot design complicate change estimation. The design incorporates up to three separate plot sizes (microplot, subplot, and macroplot) to sample trees of different sizes. Because multiple plot sizes are involved, change estimators designed for polyareal plot sampling, such as those...

  7. Dynamic aspects of apparent attenuation and wave localization in layered media

    USGS Publications Warehouse

    Haney, M.M.; Van Wijk, K.

    2008-01-01

    We present a theory for multiply-scattered waves in layered media which takes into account wave interference. The inclusion of interference in the theory leads to a new description of the phenomenon of wave localization and its impact on the apparent attenuation of seismic waves. We use the theory to estimate the localization length at a CO2 sequestration site in New Mexico at sonic frequencies (2 kHz) by performing numerical simulations with a model taken from well logs. Near this frequency, we find a localization length of roughly 180 m, leading to a localization-induced quality factor Q of 360.

  8. Non-contact true temperature measurements in the microgravity environment

    NASA Technical Reports Server (NTRS)

    Khan, Mansoor A.; Allemand, Charly; Eagar, Thomas W.

    1989-01-01

    The theory developed is shown to be capable of calculating true temperature of any material from radiance measurements at a number of different wavelengths. This theory was also shown to be capable of predicting the uncertainty in these calculated temperatures. An additional advantage of these techniques is that they can estimate the emissivity of the target simultaneously with the temperature. This aspect can prove to be very important when a fast method of generating reflectivity vs. wavelength or emissivity vs. wavelength data is required. Experiments performed on various materials over a range of temperatures and experimental conditions were used to verify the accuracy of this theory.
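
    The multi-wavelength idea can be illustrated with a simple fit: given radiance measured at several wavelengths, a Planck's law model with an unknown emissivity is fit for temperature and emissivity simultaneously. A gray-body (constant emissivity) assumption and synthetic measurements stand in here for the paper's more general emissivity treatment and real data.

      # Sketch: fitting temperature and emissivity from multi-wavelength radiance.
      import numpy as np
      from scipy.optimize import curve_fit

      h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

      def graybody_radiance(lam, T, eps):
          """Spectral radiance (W sr^-1 m^-3) of a gray body at temperature T."""
          return eps * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

      lam = np.linspace(1.0e-6, 5.0e-6, 12)                 # 12 wavelengths, 1-5 micron
      true_T, true_eps = 1500.0, 0.35
      rng = np.random.default_rng(0)
      meas = graybody_radiance(lam, true_T, true_eps) * (1 + 0.02 * rng.standard_normal(lam.size))

      popt, pcov = curve_fit(graybody_radiance, lam, meas, p0=[1000.0, 0.5],
                             bounds=([300.0, 0.01], [4000.0, 1.0]))
      perr = np.sqrt(np.diag(pcov))                         # 1-sigma uncertainty estimates
      print(f"T = {popt[0]:.0f} +/- {perr[0]:.0f} K, emissivity = {popt[1]:.2f} +/- {perr[1]:.2f}")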

  9. Solar power satellite system definition study. Volume 3: Operations and systems synthesis, phase 2

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The results of the operations analyses are reported. Some of these analyses examined operations aspects of space vehicle in-space maintenance. Many of the analyses explored in great depth the operations concerned with LEO Base cargo handling. Personnel transportation operations and cargo packaging were also analyzed. These operations analyses were performed to define the operational requirements for all of the SPS system elements so that equipment and facilities could be synthesized, and to make estimates of the manpower requirements. An overall, integrated, end-to-end description of the SPS operations is presented. The detailed operations analyses, upon which this integrated description was based, are included.

  10. Pointing control for LDR

    NASA Technical Reports Server (NTRS)

    Yam, Y.; Briggs, C.

    1988-01-01

    One important aspect of the LDR control problem is the possible excitations of structural modes due to random disturbances, mirror chopping, and slewing maneuvers. An analysis was performed to yield a first order estimate of the effects of such dynamic excitations. The analysis involved a study of slewing jitters, chopping jitters, disturbance responses, and pointing errors, making use of a simplified planar LDR model which describes the LDR dynamics on a plane perpendicular to the primary reflector. Briefly, the results indicate that the command slewing profile plays an important role in minimizing the resultant jitter, even to a level acceptable without any control action. An optimal profile should therefore be studied.

  11. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on the research, the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status and future work needed regarding orbit accuracy. There are several aspects which need to be addressed in discussing determination of precise orbits, such as force models, kinematic models, measurement models, data reduction/estimation methods, etc. Although each one of these aspects was studied at CSR in research efforts, only points pertaining to the force modeling aspect are addressed.

  12. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian as well as a newly proposed Lagrangian implementation. For this latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for either stratiform or neither stratiform/convective precipitation. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform and neither stratiform/convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.

  13. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    NASA Astrophysics Data System (ADS)

    Ahmed, Mousumi

    Designing the control technique for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied, and an extensive study on a backstepping-based technique is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller computes commands for three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such consensus-based control problems can be viewed through algebraic graph theory concepts. The communication structure between the UAVs is represented by a dynamic graph in which UAVs are represented by the nodes and the communication links by the edges. The previously designed controller is augmented to account for the group so as to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented and simulation results for different communication topologies are shown. This research also investigates the cases where the communication topology switches to a different topology at particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case where perfect or full state information is not available. This necessitates the design of an estimator to estimate the system state. A nonlinear estimator, the Extended Kalman Filter (EKF), is first developed for target tracking with a single UAV. The uncertainties involved with the measurement model and dynamics model are considered as zero-mean Gaussian noises with known covariances. The measurements of the full state of the target are not available; only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state, where the state measurement is available through on-board sensors. The controller computes the three control commands based on the estimated states of the target and its own states. Estimation-based control laws are also implemented for colored noise measurement uncertainties, and the controller performance is shown with the simulation results. The estimation-based control approach is then extended for the cooperative target tracking case. The target information is available to the network and a separate estimator is used to estimate target states. All of the UAVs in the network apply the same control law and the only difference is that each UAV updates the commands according to its connections.
The simulation is performed for both cases of fixed and time varying communication topology. Monte Carlo simulation is also performed with different sample noises to investigate the performance of the estimator. The proposed technique is shown to be simple and robust to noisy environments.
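
    A heavily simplified sketch of the estimation step described above is given below: a planar constant-velocity target tracked from range and bearing measurements with an Extended Kalman Filter. The dissertation's formulation is three-dimensional (range, elevation, azimuth) and coupled to the backstepping controller; the dynamics, noise levels, and geometry here are illustrative assumptions only.

```python
# Minimal planar EKF sketch for target tracking from range/bearing measurements
# (illustrative stand-in for the 3D seeker-based estimator described above).
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)        # constant-velocity dynamics
Q = np.diag([0.01, 0.01, 0.1, 0.1])               # assumed process noise covariance
R = np.diag([5.0**2, np.deg2rad(1.0)**2])         # range (m) and bearing (rad) noise

def h(x):
    """Measurement model: range and bearing to the target from the origin."""
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def H_jac(x):
    """Jacobian of h evaluated at the current state estimate."""
    px, py = x[0], x[1]
    r2 = px**2 + py**2
    r = np.sqrt(r2)
    return np.array([[px / r,   py / r,  0, 0],
                     [-py / r2, px / r2, 0, 0]])

def ekf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - h(x)
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing residual
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One illustrative run against a synthetic straight-line target
rng = np.random.default_rng(1)
truth = np.array([500.0, 300.0, -4.0, 2.0])
x_est, P = np.array([450.0, 350.0, 0.0, 0.0]), np.eye(4) * 100.0
for _ in range(200):
    truth = F @ truth
    z = h(truth) + rng.multivariate_normal([0.0, 0.0], R)
    x_est, P = ekf_step(x_est, P, z)
print("final position error [m]:", np.linalg.norm(x_est[:2] - truth[:2]))
```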

  14. Aerobic Capacity and Cognitive Control in Elementary School-Age Children

    PubMed Central

    Scudder, Mark R.; Lambourne, Kate; Drollette, Eric S.; Herrmann, Stephen; Washburn, Richard; Donnelly, Joseph E.; Hillman, Charles H.

    2014-01-01

    Purpose: The current study examined the relationship between children’s performance on the Progressive Aerobic Cardiovascular Endurance Run (PACER) subtest of the FitnessGram® and aspects of cognitive control that are believed to support academic success. Methods: Hierarchical linear regression analyses were conducted on a sample of 2nd and 3rd grade children (n = 397) who completed modified versions of a flanker task and spatial n-back task to assess inhibitory control and working memory, respectively. Results: Greater aerobic fitness was significantly related to shorter reaction time and superior accuracy during the flanker task, suggesting better inhibitory control and the facilitation of attention in higher fit children. A similar result was observed for the n-back task such that higher fit children exhibited more accurate target detection and discrimination performance when working memory demands were increased. Conclusion: These findings support the positive association between aerobic fitness and multiple aspects of cognitive control in a large sample of children, using a widely implemented and reliable field estimate of aerobic capacity. Importantly, the current results suggest that this relationship is consistent across methods used to assess fitness, which may have important implications for extending this research to more representative samples of children in a variety of experimental contexts. PMID:24743109

  15. A Study on Aspect Ratio of Heat Dissipation Fin for the Heat Dissipation Performance of Ultra Constant Discharge Lamp

    NASA Astrophysics Data System (ADS)

    Ko, Dong Guk; Cong Ge, Jun; Im, Ik Tae; Choi, Nag Jung; Kim, Min Soo

    2018-01-01

    In this study, we analyzed the heat dissipation performance of UCD lamp ballast fins with various aspect ratios. The minimum grid size was 0.02 mm and the number of grid cells was approximately 11,000. In order to determine the influence of the aspect ratio on the heat dissipation performance of the UCD lamp ballast fin, the heat transfer area of the fin was kept constant at 4 mm². The aspect ratios of the fin were 2 mm : 2 mm (basic model), 1.5 mm : 2.7 mm and 2.7 mm : 1.5 mm, respectively. The heat flux at the fin and its duration were kept constant at 1×10⁵ W/m² and 10 seconds, respectively. The heat dissipation performance of the fin was best at an aspect ratio of 1.5 mm : 2.7 mm.

  16. EDITORIAL: Inverse Problems in Engineering

    NASA Astrophysics Data System (ADS)

    West, Robert M.; Lesnic, Daniel

    2007-01-01

    Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

  17. The Lactate Minimum Test: Concept, Methodological Aspects and Insights for Future Investigations in Human and Animal Models

    PubMed Central

    Messias, Leonardo H. D.; Gobatto, Claudio A.; Beck, Wladimir R.; Manchado-Gobatto, Fúlvia B.

    2017-01-01

    In 1993, Uwe Tegtbur proposed a useful physiological protocol named the lactate minimum test (LMT). This test consists of three distinct phases. Firstly, subjects must perform high intensity efforts to induce hyperlactatemia (phase 1). Subsequently, 8 min of recovery are allowed for transposition of lactate from myocytes (for instance) to the bloodstream (phase 2). Right after the recovery, subjects are submitted to an incremental test until exhaustion (phase 3). The blood lactate concentration is expected to fall during the first stages of the incremental test and, as the intensity increases in subsequent stages, to rise again, forming a “U” shaped blood lactate kinetic. The minimum point of this curve, named the lactate minimum intensity (LMI), provides an estimation of the intensity that represents the balance between the appearance and clearance of arterial blood lactate, known as the maximal lactate steady state intensity (iMLSS). Furthermore, in addition to the iMLSS estimation, studies have also determined anaerobic parameters (e.g., peak, mean, and minimum force/power) during phase 1 and the maximum oxygen consumption in phase 3; therefore, the LMT is considered a robust physiological protocol. Although encouraging reports have been published in both human and animal models, there are still some controversies regarding three main factors: (1) the influence of methodological aspects on the LMT parameters; (2) LMT effectiveness for monitoring training effects; and (3) the LMI as a valid iMLSS estimator. Therefore, the aim of this review is to provide a balanced discussion of the scientific evidence on these issues, and insights for future investigations are suggested. In summary, further analysis is necessary to resolve these factors, since the LMT is relevant in several contexts of the health sciences. PMID:28642717
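
    The LMI determination lends itself to a very small worked example. The sketch below fits a second-order polynomial to hypothetical phase-3 lactate samples and takes the vertex as the lactate minimum intensity; published implementations also use splines or higher-order fits, so this is only one common variant, not the protocol's prescribed analysis.

```python
# Minimal sketch: locating the lactate minimum intensity (LMI) by fitting a
# U-shaped curve to phase-3 blood lactate samples (all values are hypothetical).
import numpy as np

intensity = np.array([150, 175, 200, 225, 250, 275, 300])   # W, incremental stages
lactate   = np.array([6.8, 5.4, 4.6, 4.3, 4.5, 5.2, 6.5])   # mmol/L

# Second-order polynomial fit; the vertex of the parabola estimates the LMI
c2, c1, c0 = np.polyfit(intensity, lactate, 2)
lmi = -c1 / (2 * c2)
print(f"estimated lactate minimum intensity ≈ {lmi:.0f} W")
```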

  18. USE OF THE MIXED FLASK CULTURE (MFC) MICROCOSM PROTOCOL TO ESTIMATE THE SURVIVAL AND EFFECTS OF MICROORGANISMS ADDED TO FRESHWATER ECOSYSTEMS

    EPA Science Inventory

    The ability to manipulate an organism's genetic substance offers benefits to many aspects of human health and well-being. Coupled with this positive aspect of genetic engineering, however, is a concern about potential adverse effects on human welfare and environmental quality. ...

  19. Development of a stand-scale forest biodiversity index based on the state forest inventory

    Treesearch

    Diego Van Den Meersschaut; Kris Vandekerkhove

    2000-01-01

    Ecological aspects are increasingly influencing silvicultural management. Estimating forest biodiversity has become one of the major tools for evaluating management strategies. A stand-scale forest biodiversity index is developed, based on available data from the state forest inventory. The index combines aspects of forest structure, woody and herbal layer composition,...

  20. Effects of flexibility and aspect ratio on the aerodynamic performance of flapping wings.

    PubMed

    Fu, Junjiang; Liu, Xiaohui; Shyy, Wei; Qiu, Huihe

    2018-03-14

    In the current study, we experimentally investigated the flexibility effects on the aerodynamic performance of flapping wings and the correlation with aspect ratio at an angle of attack α = 45°. The Reynolds number based on the chord length and the wing tip velocity is maintained at Re = 5.3 × 10³. Our result for compliant wings with an aspect ratio of 4 shows that wing flexibility can offer improved aerodynamic performance compared to that of a rigid wing. Flexible wings are found to offer higher lift-to-drag ratios; in particular, there is significant reduction in drag with little compromise in lift. The mechanism of the flexibility effects on the aerodynamic performance is addressed by quantifying the aerodynamic lift and drag forces, the transverse displacement on the wings and the flow field around the wings. The regime of the effective stiffness that offers improved aerodynamic performance is quantified in a range of about 0.5-10 and it matches the stiffness of insect wings with similar aspect ratios. Furthermore, we find that the aspect ratio of the wing is the predominant parameter determining the flexibility effects of compliant wings. Compliant wings with an aspect ratio of two do not demonstrate improved performance compared to their rigid counterparts throughout the entire stiffness regime investigated. The correlation between wing flexibility effects and the aspect ratio is supported by the stiffness of real insect wings.

  1. The relationship between quality management practices and organisational performance: A structural equation modelling approach

    NASA Astrophysics Data System (ADS)

    Jamaluddin, Z.; Razali, A. M.; Mustafa, Z.

    2015-02-01

    The purpose of this paper is to examine the relationship between quality management practices (QMPs) and organisational performance for the manufacturing industry in Malaysia. In this study, a QMPs and organisational performance framework is developed from a comprehensive literature review covering hard and soft quality factors in the manufacturing process environment. A total of 11 hypotheses have been put forward to test the relationships amongst the six constructs, which are management commitment, training, process management, quality tools, continuous improvement and organisational performance. The model is analysed using Structural Equation Modeling (SEM) with AMOS software version 18.0 and Maximum Likelihood (ML) estimation. A total of 480 questionnaires were distributed, and 210 questionnaires were valid for analysis. The results of the modeling analysis using ML estimation indicate that the fit statistics of the QMPs and organisational performance model for the manufacturing industry are admissible. From the results, it was found that management commitment has a significant impact on training and process management. Similarly, training has a significant effect on quality tools, process management and continuous improvement. Furthermore, quality tools have a significant influence on process management and continuous improvement. Likewise, process management also has a significant impact on continuous improvement, and continuous improvement has a significant influence on organisational performance. However, the results of the study also show that there is no significant relationship between management commitment and quality tools, or between management commitment and continuous improvement. The results of the study can be used by managers to prioritize the implementation of QMPs. For instance, practices that are found to have a positive impact on organisational performance can be recommended to managers so that they can allocate resources to improve these practices and achieve better performance.

  2. How Well are Recent Climate Variability Signals Resolved by Satellite Radiative Flux Estimates?

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Lu, H.-L.

    2004-01-01

    One notable aspect of Earth's climate is that although the planet appears to be very close to radiative balance at top-of-atmosphere (TOA), the atmosphere itself and underlying surface are not. Profound exchanges of energy between the atmosphere and oceans, land and cryosphere occur over a range of time scales. Recent evidence from broadband satellite measurements suggests that even these TOA fluxes contain some detectable variations. Our ability to measure and reconstruct radiative fluxes at the surface and at the top of atmosphere is improving rapidly. Understanding the character of radiative flux estimates and relating them to variations in other energy fluxes and climate state variables is key to improving our understanding of climate. In this work we will evaluate several recently released estimates of radiative fluxes, focusing primarily on surface estimates. The International Satellite Cloud Climatology Project FD radiative flux profiles are available from mid-1983 to near present and have been constructed by driving the radiative transfer physics from the Goddard Institute for Space Studies (GISS) global model with ISCCP clouds and HIRS operational sounding profiles. Full and clear sky SW and LW fluxes are produced. A similar product from the NASA/GEWEX Surface Radiation Budget Project, using different radiative flux codes and thermodynamics from the NASA/Goddard Earth Observing System assimilation model, makes a similar calculation of surface fluxes. However, this data set currently extends only through 1995. Several estimates of downward LW flux at the surface inferred from microwave data are also examined. Since these products have been evaluated with Baseline Surface Radiation Network data over land, we focus over ocean regions and use the DOE/NOAA/NASA Shipboard Ocean Atmospheric Radiation (SOAR) surface flux measurements to characterize performance of these data sets under both clear and cloudy conditions. Some aspects of performance are stratified according to SST and vertical motion regimes. Comparisons to the TRMM/CERES SRB data in 1998 are also interpreted. These radiative fluxes are then analyzed to determine how surface (and TOA) radiative exchanges respond to interannual signals of ENSO warm and cold events. Our analysis includes regional changes as well as integrated signals over land, ocean and various latitude bands. Changes in water vapor and cloud forcing signatures are prominent on interannual time scales. Prominent signals are also found in the SW fluxes for the Pinatubo volcanic event. These systematic changes in fluxes are related to changes in large-scale circulations and energy transport in the atmosphere and ocean. Some estimates of signal-to-noise and reliability are discussed to place our results in context.

  3. Using hybrid method to evaluate the green performance in uncertainty.

    PubMed

    Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping

    2011-04-01

    Green performance measure is vital for enterprises in making continuous improvements to maintain sustainable competitive advantages. Evaluation of green performance, however, is a challenging task due to the dependence complexity of the aspects and criteria, and the linguistic vagueness of some qualitative information and quantitative data together. To deal with this issue, this study proposes a novel approach to evaluate the dependence aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with the analytical network process (ANP) and importance-performance analysis (IPA) methods, wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependence aspects and criteria into an intelligible structural model that is then used in IPA. For the empirical case study, four dependence aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.

  4. Knowledge modeling tool for evidence-based design.

    PubMed

    Durmisevic, Sanja; Ciftcioglu, Ozer

    2010-01-01

    The aim of this study is to take evidence-based design (EBD) to the next level by activating available knowledge, integrating new knowledge, and combining them for more efficient use by the planning and design community. This article outlines a framework for a performance-based measurement tool that can provide the necessary decision support during the design or evaluation of a healthcare environment by estimating the overall design performance of multiple variables. New knowledge in EBD adds continuously to complexity (the "information explosion"), and it becomes impossible to consider all aspects (design features) at the same time, much less their impact on final building performance. How can existing knowledge and the information explosion in healthcare-specifically the domain of EBD-be rendered manageable? Is it feasible to create a computational model that considers many design features and deals with them in an integrated way, rather than one at a time? The found evidence is structured and readied for computation through a "fuzzification" process. The weights are calculated using an analytical hierarchy process. Actual knowledge modeling is accomplished through a fuzzy neural tree structure. The impact of all inputs on the outcome-in this case, patient recovery-is calculated using sensitivity analysis. Finally, the added value of the model is discussed using a hypothetical case study of a patient room. The proposed model can deal with the complexities of various aspects and the relationships among variables in a coordinated way, allowing existing and new pieces of evidence to be integrated in a knowledge tree structure that facilitates understanding of the effects of various design interventions on overall design performance.
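
    The weighting step mentioned above (analytical hierarchy process) reduces to an eigenvector computation, sketched below with a purely illustrative 3x3 pairwise comparison matrix; the actual design features and expert judgments of the study are not reproduced here.

```python
# Minimal AHP sketch: derive feature weights from a pairwise comparison matrix
# via the principal eigenvector (the matrix below is purely illustrative).
import numpy as np

# Pairwise comparisons of three hypothetical design features (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # normalized priority weights

# Consistency check (random index RI for n = 3 is about 0.58)
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / 0.58, 3))
```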

  5. Design and overall performance of four highly loaded, high speed inlet stages for an advanced high-pressure-ratio core compressor

    NASA Technical Reports Server (NTRS)

    Reid, L.; Moore, R. D.

    1978-01-01

    The detailed design and overall performances of four inlet stages for an advanced core compressor are presented. These four stages represent two levels of design total pressure ratio (1.82 and 2.05), two levels of rotor aspect ratio (1.19 and 1.63), and two levels of stator aspect ratio (1.26 and 1.78). The individual stages were tested over the stable operating flow range at 70, 90, and 100 percent of design speed. The performances of the low aspect ratio configurations were substantially better than those of the high aspect ratio configurations. The two low aspect ratio configurations achieved peak efficiencies of 0.876 and 0.872 and corresponding stage efficiencies of 0.845 and 0.840. The high aspect ratio configurations achieved peak efficiencies of 0.851 and 0.849 and corresponding stage efficiencies of 0.821 and 0.831.

  6. Laser radar cross-section estimation from high-resolution image data.

    PubMed

    Osche, G R; Seeber, K N; Lok, Y F; Young, D S

    1992-05-10

    A methodology for the estimation of ladar cross sections from high-resolution image data of geometrically complex targets is presented. A coherent CO2 laser radar was used to generate high-resolution amplitude imagery of a UC-8 Buffalo test aircraft at a range of 1.3 km at nine different aspect angles. The average target ladar cross section was synthesized from these data and calculated to be σT = 15.4 dBsm, which is similar to the expected microwave radar cross sections. The aspect angle dependence of the cross section shows pronounced peaks at nose-on and broadside, which are also in agreement with radar results. Strong variations in both the mean amplitude and the statistical distributions of amplitude with the aspect angle have also been observed. The relative mix of diffuse and specular returns causes significant deviations from a simple Lambertian or Swerling II target, especially at broadside where large normal surfaces are present.

  7. JOURNAL CLUB: Quantification of Fetal Dose Reduction if Abdominal CT Is Limited to the Top of the Iliac Crests in Pregnant Patients With Trauma.

    PubMed

    Corwin, Michael T; Seibert, J Anthony; Fananapazir, Ghaneh; Lamba, Ramit; Boone, John M

    2016-04-01

    The purposes of this study were to correlate fetal z-axis location within the maternal abdomen on CT with gestational age and estimate fetal dose reduction of a study limited to the abdomen only, with its lower aspect at the top of the iliac crests, compared with full abdominopelvic CT in pregnant trauma patients. We performed a study of pregnant patients who underwent CT of the abdomen and pelvis for trauma at a single institution over a 10-year period. The inferior aspect of maternal liver, spleen, gallbladder, pancreas, adrenals, and kidneys was recorded as above or below the iliac crests. The distance from the iliac crest to the top of the fetus or gestational sac was determined. The CT images of the limited and full scanning studies were independently reviewed by two blinded radiologists to identify traumatic injuries. Fetal dose profiles, including both scatter and primary radiation, were computed analytically along the central axis of the patient to estimate fetal dose reduction. Linear regression analysis was performed between gestational age and distance of the fetus to the iliac crests. Thirty-five patients were included (mean age, 26.2 years). Gestational age ranged from 5 to 38 weeks, with 5, 19, and 11 gestations in the first, second, and third trimesters, respectively. All solid organs were above the iliac crests in all patients. In three of six patients, traumatic findings in the pelvis would have been missed with the limited study. There was high correlation between gestational age and distance of the fetus to the iliac crests (R² = 0.84). The mean gestational age at which the top of the fetus was at the iliac crest was 17.3 weeks. Using the limited scanning study, fetuses at 5, 20, and 40 weeks of gestation would receive an estimated 4.3%, 26.2%, and 59.9% of the dose, respectively, compared with the dose for the full scanning study. In pregnant patients in our series with a history of trauma, CT of the abdomen only was an effective technique to reduce fetal radiation exposure compared with full abdomen and pelvis CT.

  8. Processing EOS MLS Level-2 Data

    NASA Technical Reports Server (NTRS)

    Snyder, W. Van; Wu, Dong; Read, William; Jiang, Jonathan; Wagner, Paul; Livesey, Nathaniel; Schwartz, Michael; Filipiak, Mark; Pumphrey, Hugh; Shippony, Zvi

    2006-01-01

    A computer program performs level-2 processing of thermal-microwave-radiance data from observations of the limb of the Earth by the Earth Observing System (EOS) Microwave Limb Sounder (MLS). The purpose of the processing is to estimate the composition and temperature of the atmosphere versus altitude from approximately 8 to 90 km. "Level-2" as used here is a specialists' term signifying both vertical profiles of geophysical parameters along the measurement track of the instrument and the processing performed by this or other software to generate such profiles. Designed to be flexible, the program is controlled via a configuration file that defines all aspects of processing, including contents of state and measurement vectors, configurations of forward models, measurement and calibration data to be read, and the manner of inverting the models to obtain the desired estimates. The program can operate in a parallel form in which one instance of the program acts as a master, coordinating the work of multiple slave instances on a cluster of computers, each slave operating on a portion of the data. Optionally, the configuration file can instruct the software to produce files of simulated radiances based on state vectors formed from sets of geophysical data-product files taken as input.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorpe, J. I.; Livas, J.; Maghami, P.

    Arm locking is a proposed laser frequency stabilization technique for the Laser Interferometer Space Antenna (LISA), a gravitational-wave observatory sensitive in the milliHertz frequency band. Arm locking takes advantage of the geometric stability of the triangular constellation of three spacecraft that compose LISA to provide a frequency reference with a stability in the LISA measurement band that exceeds that available from a standard reference such as an optical cavity or molecular absorption line. We have implemented a time-domain simulation of a Kalman-filter-based arm-locking system that includes the expected limiting noise sources as well as the effects of imperfect a priori knowledge of the constellation geometry on which the design is based. We use the simulation to study aspects of the system performance that are difficult to capture in a steady-state frequency-domain analysis, such as frequency pulling of the master laser due to errors in estimates of heterodyne frequency. We find that our implementation meets requirements on both the noise and dynamic range of the laser frequency with acceptable tolerances and that the design is sufficiently insensitive to errors in the estimated constellation geometry that the required performance can be maintained for the longest continuous measurement intervals expected for the LISA mission.

  10. Technical and cost advantages of silicon carbide telescopes for small-satellite imaging applications

    NASA Astrophysics Data System (ADS)

    Kasunic, Keith J.; Aikens, Dave; Szwabowski, Dean; Ragan, Chip; Tinker, Flemming

    2017-09-01

    Small satellites ("SmallSats") are a growing segment of the Earth imaging and remote sensing market. Designed to be relatively low cost and with performance tailored to specific end-use applications, they are driving changes in optical telescope assembly (OTA) requirements. OTAs implemented in silicon carbide (SiC) provide performance advantages for space applications but have been predominately limited to large programs. A new generation of lightweight and thermally-stable designs is becoming commercially available, expanding the application of SiC to small satellites. This paper reviews the cost and technical advantages of an OTA designed using SiC for small satellite platforms. Taking into account faceplate fabrication quilting and surface distortion after gravity release, an optimized open-back SiC design with a lightweighting of 70% for a 125-mm SmallSat-class primary mirror has an estimated mass area density of 2.8 kg/m² and an aspect ratio of 40:1. In addition, the thermally-induced surface error of such optimized designs is estimated at λ/150 RMS per watt of absorbed power. Cost advantages of SiC include reductions in launch mass, thermal-management infrastructure, and manufacturing time based on allowable assembly tolerances.

  11. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    NASA Technical Reports Server (NTRS)

    Saxena, Abhinav; Celaya, Jose; Saha, Bhaskar; Saha, Sankalita; Goebel, Kai

    2009-01-01

    Prognostics has taken center stage in Condition Based Maintenance (CBM), where it is desired to estimate the Remaining Useful Life (RUL) of the system so that remedial measures may be taken in advance to avoid catastrophic events or unwanted downtimes. Validation of such predictions is an important but difficult proposition, and a lack of appropriate evaluation methods renders prognostics meaningless. Evaluation methods currently used in the research community are not standardized and in many cases do not sufficiently assess key performance aspects expected out of a prognostics algorithm. In this paper we introduce several new evaluation metrics tailored for prognostics and show that they can effectively evaluate various algorithms as compared to other conventional metrics. Specifically, four algorithms, namely Relevance Vector Machine (RVM), Gaussian Process Regression (GPR), Artificial Neural Network (ANN), and Polynomial Regression (PR), are compared. These algorithms vary in complexity and their ability to manage uncertainty around predicted estimates. Results show that the new metrics rank these algorithms in a different manner and, depending on the requirements and constraints, suitable metrics may be chosen. Beyond these results, these metrics offer ideas about how metrics suitable to prognostics may be designed so that the evaluation procedure can be standardized.
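
    The paper's specific metric definitions are not reproduced here; as a generic flavor of a prognostics-tailored check, the sketch below implements an alpha-lambda style test (does a prediction issued at a fraction lambda of the unit's life fall within +/- alpha of the true remaining useful life?). The definition is assumed from the general prognostics-metrics literature and the numbers are invented.

```python
# Minimal sketch of an alpha-lambda style accuracy check on RUL predictions
# (assumed generic definition; all values below are hypothetical).
def alpha_lambda_pass(t_pred, rul_pred, t_eol, alpha=0.2):
    """True if the prediction issued at time t_pred lies within
    +/- alpha * true RUL of the true remaining useful life."""
    rul_true = t_eol - t_pred
    return abs(rul_pred - rul_true) <= alpha * rul_true

t_eol = 200.0                       # true end of life (hypothetical, in cycles)
lam = 0.5                           # evaluate predictions made at half life
t_pred = lam * t_eol
for name, rul_pred in [("algorithm A", 92.0), ("algorithm B", 72.0)]:
    ok = alpha_lambda_pass(t_pred, rul_pred, t_eol, alpha=0.2)
    print(f"{name}: predicted RUL {rul_pred:.0f} vs true {t_eol - t_pred:.0f} -> pass = {ok}")
```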

  12. An empirical approach for estimating natural regeneration for the Forest Vegetation Simulator

    Treesearch

    Don Vandendriesche

    2010-01-01

    The “partial” establishment model that is available for most Forest Vegetation Simulator (FVS) geographic variants does not provide an estimate of natural regeneration. Users are responsible for supplying this key aspect of stand development. The process presented for estimating natural regeneration begins by summarizing small tree components based on observations from...

  13. High Energy Computed Tomographic Inspection of Munitions

    DTIC Science & Technology

    2016-11-01

    Report contents: System Background; Unique Features; Scattering Estimating Device; Distortion and Geometric Calibration.

  14. Estimating hydraulic properties of the Floridan Aquifer System by analysis of earth-tide, ocean-tide, and barometric effects, Collier and Hendry Counties, Florida

    USGS Publications Warehouse

    Merritt, Michael L.

    2004-01-01

    Aquifers are subjected to mechanical stresses from natural, non-anthropogenic, processes such as pressure loading or mechanical forcing of the aquifer by ocean tides, earth tides, and pressure fluctuations in the atmosphere. The resulting head fluctuations are evident even in deep confined aquifers. The present study was conducted for the purpose of reviewing the research that has been done on the use of these phenomena for estimating the values of aquifer properties, and determining which of the analytical techniques might be useful for estimating hydraulic properties in the dissolved-carbonate hydrologic environment of southern Florida. Fifteen techniques are discussed in this report, of which four were applied. An analytical solution for head oscillations in a well near enough to the ocean to be influenced by ocean tides was applied to data from monitor zones in a well near Naples, Florida. The solution assumes a completely non-leaky confining unit of infinite extent. Resulting values of transmissivity are in general agreement with the results of aquifer performance tests performed by the South Florida Water Management District. There seems to be an inconsistency between results of the amplitude ratio analysis and independent estimates of loading efficiency. A more general analytical solution that takes leakage through the confining layer into account yielded estimates that were lower than those obtained using the non-leaky method, and closer to the South Florida Water Management District estimates. A numerical model with a cross-sectional grid design was applied to explore additional aspects of the problem. A relation between specific storage and the head oscillation observed in a well provided estimates of specific storage that were considered reasonable. Porosity estimates based on the specific storage estimates were consistent with values obtained from measurements on core samples. Methods are described for determining aquifer diffusivity by comparing the time-varying drawdown in an open well with periodic pressure-head oscillations in the aquifer, but the applicability of such methods might be limited in studies of the Floridan aquifer system.
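
    One of the simpler ingredients of such analyses, extracting the amplitude and phase of a single tidal constituent from a well water-level record, can be sketched with ordinary least squares as below; the frequency is the M2 constituent and the data are synthetic. The resulting amplitude and phase lag are the kind of quantities the amplitude-ratio analyses mentioned above work with.

```python
# Minimal sketch: amplitude and phase of the M2 tidal constituent in a well record
# via linear least squares (synthetic data, illustrative only).
import numpy as np

t_hr = np.arange(0, 30 * 24, 1.0)                 # 30 days of hourly samples
omega = 2 * np.pi / 12.4206                       # M2 angular frequency (rad/hr)

rng = np.random.default_rng(2)
true_amp, true_phase = 0.03, 1.1                  # m, rad (hypothetical well response)
head = 10.0 + true_amp * np.cos(omega * t_hr - true_phase) \
       + 0.005 * rng.standard_normal(t_hr.size)

# Design matrix [1, cos, sin]; amplitude and phase follow from the cos/sin coefficients
G = np.column_stack([np.ones_like(t_hr), np.cos(omega * t_hr), np.sin(omega * t_hr)])
c0, a, b = np.linalg.lstsq(G, head, rcond=None)[0]
amp, phase = np.hypot(a, b), np.arctan2(b, a)
print(f"M2 amplitude ≈ {amp * 100:.1f} cm, phase ≈ {np.degrees(phase):.0f} deg")
```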

  15. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  16. Factors affecting construction performance: exploratory factor analysis

    NASA Astrophysics Data System (ADS)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data gave rise to 10 factors with 57 items affecting construction performance. The findings further reveal ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technology aspects. It is important to understand a multi-dimensional performance evaluation framework that includes all key factors affecting the construction performance of a company, so that management can effectively plan and implement a performance development plan that matches the mission and vision of the company.

  17. Correlation Function Approach for Estimating Thermal Conductivity in Highly Porous Fibrous Materials

    NASA Technical Reports Server (NTRS)

    Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.

    2011-01-01

    Heat transport in highly porous fiber networks is analyzed via two-point correlation functions. Fibers are assumed to be long and thin to allow a large number of crossing points per fiber. The network is characterized by three parameters: the fiber aspect ratio, the porosity and the anisotropy of the structure. We show that the effective thermal conductivity of the system can be estimated from knowledge of the porosity and the correlation lengths of the correlation functions obtained from a fiber structure image. As an application, the effects of the fiber aspect ratio and the network anisotropy on the thermal conductivity are studied.
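
    A minimal numerical sketch of the correlation-function ingredient is given below: the two-point correlation of a binary image is computed via FFT autocorrelation and radially averaged, and a correlation length is read off at the 1/e decay. The random test image merely stands in for a segmented fiber micrograph; it is not the authors' data or code.

```python
# Minimal sketch: radially averaged two-point correlation of a binary image via FFT
# autocorrelation (periodic boundaries assumed); correlation length read at 1/e decay.
import numpy as np

rng = np.random.default_rng(3)
img = (rng.random((256, 256)) < 0.1).astype(float)    # stand-in for a segmented micrograph
phi = img.mean()                                      # solid fraction

# Two-point correlation S2(r) via the Wiener-Khinchin relation
F = np.fft.fft2(img)
S2 = np.fft.ifft2(F * np.conj(F)).real / img.size
S2 = np.fft.fftshift(S2)

# Radial average about the center
cy, cx = np.array(S2.shape) // 2
y, x = np.indices(S2.shape)
r = np.hypot(y - cy, x - cx).astype(int)
radial = np.bincount(r.ravel(), S2.ravel()) / np.bincount(r.ravel())

# Normalized autocovariance decays from 1 toward 0; the 1/e crossing ~ correlation length
auto = (radial - phi**2) / (phi - phi**2)
corr_len = np.argmax(auto < np.exp(-1.0))
print("correlation length ≈", corr_len, "pixels")
```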

  18. Temporal auditory aspects in children with poor school performance and associated factors.

    PubMed

    Rezende, Bárbara Antunes; Lemos, Stela Maris Aguiar; Medeiros, Adriane Mesquita de

    2016-01-01

    To investigate the auditory temporal aspects in children with poor school performance aged 7-12 years and their association with behavioral aspects, health perception, school and health profiles, and sociodemographic factors. This is an observational, analytical, cross-sectional study of 89 children with poor school performance aged 7-12 years, enrolled in the municipal public schools of a municipality in Minas Gerais state and participating in Specialized Educational Assistance. The first stage of the study was conducted with the subjects' parents to collect information on sociodemographic aspects, health profile, and educational records. In addition, the parents responded to the Strengths and Difficulties Questionnaire (SDQ). The second stage was conducted with the children in order to investigate their health self-perception and to carry out the auditory assessment, which consisted of meatoscopy, Transient Otoacoustic Emissions, and tests that evaluated simple auditory temporal ordering and auditory temporal resolution. Tests assessing the temporal aspects of auditory processing were considered as response variables, and the explanatory variables were grouped for univariate and multivariate logistic regression analyses. The level of significance was set at 5%. A statistically significant association was found between the auditory temporal aspects and the variables age, gender, grade repetition, and health self-perception. Children with poor school performance presented changes in the auditory temporal aspects. The temporal abilities assessed suggest association with different factors such as maturational processes, health self-perception, and school records.

  19. Ranking product aspects through sentiment analysis of online reviews

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Wang, Hongwei; Song, Yuan

    2017-03-01

    The electronic word-of-mouth (e-WOM) is one of the most important among all the factors affecting consumers' behaviours. Opinions towards a product expressed in online reviews influence the purchase decisions of other online consumers by changing their perceptions of the product quality. Furthermore, each product aspect may impact consumers' intentions differently. Thus, sentiment analysis and econometric models are incorporated to examine the relationship between purchase intentions and aspect-opinion pairs, which enables the weight estimation for each product aspect. We first identify product aspects and reduce dimensions to extract aspect-opinion pairs. Next, the information gain is calculated for each aspect through entropy theory. Based on sentiment polarity and sentiment strength, we formulate an econometric model by integrating the information gain to measure the aspect's weight. In the experiment, we track 386 digital cameras on Amazon for 39 months, and the results show that the aspect weights for digital cameras are detected more precisely than with the TF-IDF and HAC algorithms. The results will bridge product aspects and consumption intention to facilitate e-WOM-based marketing.
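
    The information-gain step can be illustrated with toy counts, as in the sketch below: each aspect's review sentiment is cross-tabulated against a purchase label and scored by the reduction in label entropy. The full model in the paper additionally uses sentiment strength and an econometric specification, which are not reproduced here; aspect names and counts are invented.

```python
# Minimal sketch: rank product aspects by the information gain of their review
# sentiment with respect to a purchase label (toy contingency counts).
import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

# Hypothetical counts: rows = aspect sentiment (negative, positive),
# columns = purchase outcome (no, yes)
aspects = {
    "lens":    np.array([[30, 10], [20, 60]]),
    "battery": np.array([[25, 20], [25, 50]]),
    "strap":   np.array([[28, 27], [27, 38]]),
}

for name, table in aspects.items():
    total = table.sum()
    h_label = entropy(table.sum(axis=0))                          # H(purchase)
    h_cond = sum(row.sum() / total * entropy(row) for row in table)
    print(f"{name:8s} information gain = {h_label - h_cond:.3f} bits")
```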

  20. Unmasking the component-general and component-specific aspects of primary and secondary memory in the immediate free recall task.

    PubMed

    Gibson, Bradley S; Gondoli, Dawn M

    2018-04-01

    The immediate free recall (IFR) task has been commonly used to estimate the capacities of the primary memory (PM) and secondary memory (SM) components of working memory (WM). Using this method, the correlation between estimates of the PM and SM components has hovered around zero, suggesting that PM and SM represent fully distinct and dissociable components of WM. However, this conclusion has conflicted with more recent studies that have observed moderately strong, positive correlations between PM and SM when separate attention and retrieval tasks are used to estimate these capacities, suggesting that PM and SM represent at least some related capacities. The present study attempted to resolve this empirical discrepancy by investigating the extent to which the relation between estimates of PM and SM might be suppressed by a third variable that operates during the recall portion of the IFR task. This third variable was termed "strength of recency" (SOR) in the present study as it reflected differences in the extent to which individuals used the same experimentally-induced recency recall initiation strategy. As predicted, the present findings showed that the positive correlation between estimates of PM and SM grew from small to medium when the indirect effect of SOR was controlled across two separate sets of studies. This finding is important because it provides stronger support for the distinction between "component-general" and "component-specific" aspects of PM and SM; furthermore, a proof is presented that demonstrates a limitation of using regression techniques to differentiate general and specific aspects of these components.
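
    The suppression argument can be made concrete with simulated data, as sketched below: a zero-order PM-SM correlation near zero becomes clearly positive once a recency-strategy variable is partialled out. The generating model and effect sizes are invented for illustration and do not correspond to the study's estimates.

```python
# Minimal sketch of suppression: partial correlation of PM and SM controlling for SOR
# (all data simulated; not the study's measurements).
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(4)
n = 300
sor = rng.standard_normal(n)                       # recency-strategy score
common = rng.standard_normal(n)                    # shared WM-related variance
pm = 0.5 * common + 0.6 * sor + rng.standard_normal(n)
sm = 0.5 * common - 0.6 * sor + rng.standard_normal(n)

print("zero-order r(PM, SM):", round(np.corrcoef(pm, sm)[0, 1], 2))
print("partial r(PM, SM | SOR):", round(partial_corr(pm, sm, sor), 2))
```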

  1. An Estimation Procedure for the Structural Parameters of the Unified Cognitive/IRT Model.

    ERIC Educational Resources Information Center

    Jiang, Hai; And Others

    L. V. DiBello, W. F. Stout, and L. A. Roussos (1993) have developed a new item response model, the Unified Model, which brings together the discrete, deterministic aspects of cognition favored by cognitive scientists, and the continuous, stochastic aspects of test response behavior that underlie item response theory (IRT). The Unified Model blends…

  2. Wing loading in 15 species of North American owls

    Treesearch

    David H. Johnson

    1997-01-01

    Information on wing morphology is important in understanding the mechanics and energetics of flight and in aspects related to reversed sexual size dimorphism in owls. I summarized wing span, wing area, wing loading, root box, and aspect ratio calculations from the available literature and from 113 owls examined in this study. Wing loading estimates for 15 species...

  3. Deep-water measurements of container ship radiated noise signatures and directionality.

    PubMed

    Gassmann, Martin; Wiggins, Sean M; Hildebrand, John A

    2017-09-01

    Underwater radiated noise from merchant ships was measured opportunistically from multiple spatial aspects to estimate signature source levels and directionality. Transiting ships were tracked via the Automatic Identification System in a shipping lane while acoustic pressure was measured at the ships' keel and beam aspects. Port and starboard beam aspects were 15°, 30°, and 45° in compliance with ship noise measurement standards [ANSI/ASA S12.64 (2009) and ISO 17208-1 (2016)]. Additional recordings were made at a 10° starboard aspect. Source levels were derived with a spherical propagation (surface-affected) or a modified Lloyd's mirror model to account for interference from surface reflections (surface-corrected). Ship source depths were estimated from spectral differences between measurements at different beam aspects. Results were exemplified with a 4870 and a 10,036 twenty-foot equivalent unit container ship at 40%-56% and 87% of service speeds, respectively. For the larger ship, opportunistic ANSI/ISO broadband levels were 195 (surface-affected) and 209 (surface-corrected) dB re 1 μPa² at 1 m. Directionality at a propeller blade rate of 8 Hz exhibited asymmetries in the stern-bow (<6 dB) and port-starboard (<9 dB) directions. Previously reported broadband levels at 10° aspect from McKenna, Ross, Wiggins, and Hildebrand [(2012b). J. Acoust. Soc. Am. 131, 92-103] may be ∼12 dB lower than the respective surface-affected ANSI/ISO standard derived levels.
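
    The back-propagation logic can be sketched at a single frequency as below: a received level is corrected for spherical spreading only ("surface-affected") and then additionally for the Lloyd's mirror interference of the sea-surface reflection ("surface-corrected"). The geometry, frequency, and levels are illustrative assumptions rather than the paper's measurement values, and the actual standards-based processing is broadband.

```python
# Minimal single-frequency sketch of surface-affected vs. surface-corrected source
# levels; all geometry and level values are hypothetical.
import numpy as np

c = 1500.0                      # sound speed (m/s)
f = 8.0                         # analysis frequency (Hz), e.g. a propeller blade rate
z_s, z_r = 6.0, 580.0           # assumed source (propeller) and receiver depths (m)
R_horiz = 400.0                 # horizontal range at the beam aspect (m)
RL = 140.0                      # received level (dB re 1 uPa) at this frequency

r = np.hypot(R_horiz, z_r - z_s)            # slant range
k = 2 * np.pi * f / c
delta = 2 * z_s * z_r / r                   # direct vs. surface-reflected path difference
lloyd = 2 * abs(np.sin(k * delta / 2))      # pressure-release surface interference factor

SL_affected  = RL + 20 * np.log10(r)                          # spherical spreading only
SL_corrected = RL + 20 * np.log10(r) - 20 * np.log10(lloyd)   # remove Lloyd's mirror gain/loss
print(f"surface-affected SL ≈ {SL_affected:.1f} dB, surface-corrected SL ≈ {SL_corrected:.1f} dB")
```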

  4. ESTIMATING SOLAR RADIATION EXPOSURE IN WETLANDS USING RADIATION MODELS, FIELD DATA, AND GEOGRAPHIC INFORMATION SYSTEMS

    EPA Science Inventory

    This seminar will describe development of methods for the estimation of solar radiation doses in wetlands. The methodology presents a novel approach to incorporating aspects of solar radiation dosimetry that have historically received limited attention. These include effects of a...

  5. Infrared Thermography Sensor for Temperature and Speed Measurement of Moving Material.

    PubMed

    Usamentiaga, Rubén; García, Daniel Fernando

    2017-05-18

    Infrared thermography offers significant advantages in monitoring the temperature of objects over time, but crucial aspects need to be addressed. Movements between the infrared camera and the inspected material seriously affect the accuracy of the calculated temperature. These movements can be the consequence of solid objects that are moved, molten metal poured, material on a conveyor belt, or just vibrations. This work proposes a solution for monitoring the temperature of material in these scenarios. In this work both real movements and vibrations are treated equally, proposing a unified solution for both problems. The three key steps of the proposed procedure are image rectification, motion estimation and motion compensation. Image rectification calculates a front-parallel projection of the image that simplifies the estimation and compensation of the movement. Motion estimation describes the movement using a mathematical model, and estimates the coefficients using robust methods adapted to infrared images. Motion is finally compensated for in order to produce the correct temperature time history of the monitored material regardless of the movement. The result is a robust sensor for temperature of moving material that can also be used to measure the speed of the material. Different experiments are carried out to validate the proposed method in laboratory and real environments. Results show excellent performance.
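
    A rough sketch of the motion-estimation and motion-compensation steps is given below, using OpenCV as a stand-in for the authors' implementation and robust (RANSAC) fitting as one reasonable choice for noisy infrared features. The frames are synthetic; in practice the rectification step (cv2.getPerspectiveTransform plus cv2.warpPerspective with measured target corners) would be applied first.

```python
# Minimal sketch: estimate frame-to-frame motion from tracked features and warp the
# new frame back onto the previous one so each pixel follows the same material point.
import cv2
import numpy as np

# Synthetic stand-in for two already-rectified IR frames: a textured scene shifted a few pixels
rng = np.random.default_rng(0)
prev = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (7, 7), 0)
curr = np.roll(prev, shift=(3, 5), axis=(0, 1))      # simulated camera/material motion

def compensate(prev_img, curr_img):
    """Estimate prev->curr motion from tracked features and warp curr back onto prev."""
    pts_prev = cv2.goodFeaturesToTrack(prev_img, maxCorners=300,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, curr_img, pts_prev, None)
    good = status.ravel() == 1
    # Robust similarity transform tolerates the outliers typical of low-contrast IR imagery
    M, _ = cv2.estimateAffinePartial2D(pts_curr[good], pts_prev[good], method=cv2.RANSAC)
    stabilized = cv2.warpAffine(curr_img, M, (prev_img.shape[1], prev_img.shape[0]))
    return M, stabilized

M, stabilized = compensate(prev, curr)
# Expected roughly (-5, -3): the transform undoes the simulated shift
print("estimated translation (dx, dy):", M[0, 2].round(1), M[1, 2].round(1))
```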

  6. Infrared Thermography Sensor for Temperature and Speed Measurement of Moving Material

    PubMed Central

    Usamentiaga, Rubén; García, Daniel Fernando

    2017-01-01

    Infrared thermography offers significant advantages in monitoring the temperature of objects over time, but crucial aspects need to be addressed. Movements between the infrared camera and the inspected material seriously affect the accuracy of the calculated temperature. These movements can be the consequence of solid objects that are moved, molten metal poured, material on a conveyor belt, or just vibrations. This work proposes a solution for monitoring the temperature of material in these scenarios. In this work both real movements and vibrations are treated equally, proposing a unified solution for both problems. The three key steps of the proposed procedure are image rectification, motion estimation and motion compensation. Image rectification calculates a front-parallel projection of the image that simplifies the estimation and compensation of the movement. Motion estimation describes the movement using a mathematical model, and estimates the coefficients using robust methods adapted to infrared images. Motion is finally compensated for in order to produce the correct temperature time history of the monitored material regardless of the movement. The result is a robust sensor for temperature of moving material that can also be used to measure the speed of the material. Different experiments are carried out to validate the proposed method in laboratory and real environments. Results show excellent performance. PMID:28524110

  7. Computational Biomathematics: Toward Optimal Control of Complex Biological Systems

    DTIC Science & Technology

    2016-09-26

    …equations seems daunting. However, we are currently working on parameter estimation methods that show some promise. In this approach, we generate data from

  8. Distortions in Distributions of Impact Estimates in Multi-Site Trials: The Central Limit Theorem Is Not Your Friend

    ERIC Educational Resources Information Center

    May, Henry

    2014-01-01

    Interest in variation in program impacts--How big is it? What might explain it?--has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random or fixed effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…

  9. Inferring the temperature dependence of population parameters: the effects of experimental design and inference algorithm

    PubMed Central

    Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J

    2014-01-01

    Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important implications in understanding and predicting the effects of temperature on the dynamics of populations. PMID:25558365
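
    The simulation-plus-inference loop described above can be miniaturized as follows: population time series are generated from a stochastic logistic model whose growth rate and carrying capacity follow a Boltzmann-Arrhenius temperature dependence, and the activation energy of the growth rate is recovered by the "indirect" route of regressing per-temperature growth-rate estimates on inverse thermal energy. All parameter values, the noise model, and the estimation shortcut are illustrative assumptions, not the study's settings.

```python
# Minimal sketch: Arrhenius-parameterized stochastic logistic simulation and an
# indirect recovery of the activation energy (illustrative parameters only).
import numpy as np

KB = 8.617e-5                                   # Boltzmann constant (eV/K)
T_REF = 293.15

def arrhenius(value_ref, E, T):
    """Scale a reference trait value with activation energy E (eV)."""
    return value_ref * np.exp(-E / KB * (1.0 / T - 1.0 / T_REF))

def simulate(T, r_ref=0.5, K_ref=2000.0, Er=0.65, Ek=-0.3,
             n0=5.0, dt=0.1, steps=400, seed=0):
    """Euler simulation of a logistic model with simple demographic noise."""
    rng = np.random.default_rng(seed)
    r, K = arrhenius(r_ref, Er, T), arrhenius(K_ref, Ek, T)
    n = np.empty(steps)
    n[0] = n0
    for t in range(steps - 1):
        drift = r * n[t] * (1.0 - n[t] / K) * dt
        noise = 0.2 * np.sqrt(max(n[t], 0.0) * dt) * rng.standard_normal()
        n[t + 1] = max(n[t] + drift + noise, 1e-6)
    return n

# "Indirect" route: estimate r from early log-linear growth at each temperature,
# then regress log(r) on 1/(kT) to recover the activation energy.
temps = np.array([285.0, 290.0, 295.0, 300.0, 305.0])
r_hat = []
for T in temps:
    early = simulate(T)[:30]                    # window where growth is near-exponential
    r_hat.append(np.polyfit(np.arange(early.size) * 0.1, np.log(early), 1)[0])
E_est = -np.polyfit(1.0 / (KB * temps), np.log(r_hat), 1)[0]
print(f"recovered activation energy for r ≈ {E_est:.2f} eV (simulated with 0.65 eV)")
```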

  10. THE DYNAMIC INTER-RELATIONSHIP BETWEEN OBESITY AND SCHOOL PERFORMANCE: NEW EMPIRICAL EVIDENCE FROM AUSTRALIA.

    PubMed

    Nghiem, Son; Hoang, Viet-Ngu; Vu, Xuan-Binh; Wilson, Clevo

    2017-12-04

    This paper proposes a new empirical model for examining the relationship between obesity and school performance using the simultaneous equation modelling approach. The lagged effects of both learning and health outcomes were included to capture both the dynamic and inter-relational aspects of the relationship between obesity and school performance. The empirical application of this study used comprehensive data from the first five waves of the Longitudinal Study of Australian Children (LSAC), which commenced in 2004 (wave 1) and was repeated every two years until 2018. The study sample included 10,000 children, equally divided between two cohorts (infants and children) across Australia. The empirical results show that past learning and obesity status are strongly associated with most indicators of school outcomes, including reading, writing, spelling, grammar and numeracy national tests, and scores from the internationally standardized Peabody Picture Vocabulary Test and the Matrix Reasoning Test. The main findings of this study are robust to the choice of obesity indicator and estimation methods.

  11. Tools for Basic Statistical Analysis

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3, ...) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from Two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curve-fit data to the linear equation y = f(x) and will perform an ANOVA to check its significance.
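
    The toolset itself is a set of Excel spreadsheets; the short Python sketch below (an illustration, not the NASA programs) reproduces two of the listed calculations (descriptive statistics and a linear-regression ANOVA significance check) on made-up data.

```python
import numpy as np
from scipy import stats

x = np.array([1.2, 1.9, 3.1, 3.9, 5.2, 6.1, 6.9, 8.2])   # illustrative data
y = np.array([2.3, 4.1, 6.2, 8.1, 9.8, 12.2, 13.9, 16.3])

# Descriptive statistics for a user-entered data set
print("mean", np.mean(x), "std", np.std(x, ddof=1),
      "min", np.min(x), "max", np.max(x))

# Linear regression y = a + b*x with an ANOVA-style F test of significance
res = stats.linregress(x, y)
n = len(x)
ss_total = np.sum((y - y.mean())**2)
ss_resid = np.sum((y - (res.intercept + res.slope * x))**2)
ss_reg = ss_total - ss_resid
F = (ss_reg / 1) / (ss_resid / (n - 2))        # 1 regression df, n-2 residual df
p = stats.f.sf(F, 1, n - 2)
print(f"slope={res.slope:.3f}, F={F:.1f}, p={p:.4g}")
```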

  12. Support vector machine firefly algorithm based optimization of lens system.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Pavlović, Nenad T; Ch, Sudheer; Altameem, Torki A; Gani, Abdullah

    2015-01-01

    Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed with the simulation results. The proposed SVM-FFA model has been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
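
    A hedged sketch of the SVM-FFA idea: scikit-learn's SVR serves as a surrogate for the spot-size radius over two hypothetical lens design parameters, and a bare-bones firefly algorithm searches the surrogate for a minimum. The objective, parameter names, and training data are all illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Hypothetical training data: two lens design parameters -> measured spot radius
X = rng.uniform(-1, 1, size=(200, 2))
spot = (X[:, 0] - 0.3)**2 + 2 * (X[:, 1] + 0.1)**2 + 0.01 * rng.normal(size=200)

surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, spot)

def firefly_minimize(f, bounds, n=20, iters=60, beta0=1.0, gamma=1.0, alpha=0.2):
    """Bare-bones firefly algorithm: dimmer fireflies move toward brighter ones."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n, len(lo)))
    for _ in range(iters):
        val = f(pos)
        for i in range(n):
            for j in range(n):
                if val[j] < val[i]:                      # j is "brighter" (lower spot radius)
                    r2 = np.sum((pos[i] - pos[j])**2)
                    pos[i] += (beta0 * np.exp(-gamma * r2) * (pos[j] - pos[i])
                               + alpha * (rng.uniform(size=len(lo)) - 0.5))
        pos = np.clip(pos, lo, hi)
        alpha *= 0.97                                    # cool the random walk
    val = f(pos)
    best = np.argmin(val)
    return pos[best], val[best]

best_x, best_val = firefly_minimize(lambda p: surrogate.predict(p),
                                    (np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
print("predicted optimum design:", best_x, "predicted spot radius:", best_val)
```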

  13. Power requirements and environmental impact of a pedelec. A case study based on real-life applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abagnale, Carmelina, E-mail: c.abagnale@unina.it; Cardone, Massimo, E-mail: massimo.cardone@unina.it; Iodice, Paolo, E-mail: paolo.iodice@unina.it

    2015-07-15

    This paper describes a methodology to appraise the power requirements and the environmental performance of an electrically assisted bicycle under real driving conditions, also covering the relevant regulatory and technical aspects. For this purpose, an on-road test program of an electrically assisted bicycle was executed in the urban area of Naples on different test tracks, so that a general assessment of its driving behavior under several driving conditions could be performed. The power requirements in different typical riding situations were estimated with a procedure based on the experimental kinematic parameters that characterize the driving dynamics collected during the real-life applications. An environmental analysis was also performed, with a methodology that takes into account the environmental assessment of a moped by measuring the moped's exhaust emissions of the regulated pollutants. Starting from the results acquired during the different test samples, the contribution of the electric traction offered by this pedelec to driving comfort was also assessed for different riding situations. - Highlights: • The power requirements of a pedelec in typical riding conditions were identified. • The estimated electricity consumption for battery recharging was defined. • An environmental evaluation of the tested pedelec and of a moped was performed. • Emissions that could be saved by using a pedelec instead of a moped were derived.
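
    The power-requirement step can be illustrated with the standard road-load equation applied to a logged speed trace; the masses, coefficients, grade, and speeds below are illustrative assumptions, not values from the Naples test campaign.

```python
import numpy as np

# Illustrative parameters for a rider + pedelec
m = 95.0        # total mass, kg
g = 9.81        # gravity, m/s^2
c_r = 0.007     # rolling-resistance coefficient
rho = 1.2       # air density, kg/m^3
cd_a = 0.5      # drag coefficient x frontal area, m^2
grade = 0.02    # road slope (rise/run)

# Hypothetical 1 Hz speed trace from kinematic logging, m/s
v = np.array([3.0, 3.5, 4.2, 5.0, 5.5, 5.8, 6.0, 6.0, 5.7, 5.2])
a = np.gradient(v, 1.0)  # acceleration from finite differences, 1 s sampling

# Road-load power at the wheel: inertia + rolling + grade + aerodynamic terms
P = (m * a + m * g * c_r + m * g * grade + 0.5 * rho * cd_a * v**2) * v
print("mean traction power demand: %.0f W" % np.mean(np.clip(P, 0, None)))
```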

  14. Using GAMM to examine inter-individual heterogeneity in thermal performance curves for Natrix natrix indicates bet hedging strategy by mothers.

    PubMed

    Vickers, Mathew J; Aubret, Fabien; Coulon, Aurélie

    2017-01-01

    The thermal performance curve (TPC) illustrates the dependence of many fitness-related aspects of ectotherm ecology and biology, including foraging, growth, predator avoidance, and reproduction, on body (and therefore environmental) temperature. The typical thermal performance curve model is linear in its parameters despite the well-known, strong non-linearity of the response of performance to temperature. In addition, it is usual to consider a single model based on a few individuals as descriptive of a species-level response to temperature. To overcome these issues, we used generalized additive mixed modeling (GAMM) to estimate thermal performance curves for 73 individual hatchling Natrix natrix grass snakes from seven clutches, taking advantage of the structure of GAMM to demonstrate that almost 16% of the deviance in thermal performance curves is attributable to inter-individual variation, while only 1.3% is attributable to variation amongst clutches. GAMM allows precise estimation of curve characteristics, which we used to test hypotheses on tradeoffs thought to constrain the thermal performance curve: hotter is better, the specialist-generalist tradeoff, and resource allocation/acquisition. We observed a negative relationship between maximum performance and performance breadth, indicating a specialist-generalist tradeoff, and a positive relationship between thermal optimum and maximum performance, suggesting "hotter is better". There was a significant difference among matrilines in the relationship between Area Under the Curve and maximum performance, a relationship that is an indicator of evenness in the acquisition or allocation of resources. As we used unfed hatchlings, the observed matriline effect indicates divergent breeding strategies among mothers, with some mothers provisioning eggs unequally, resulting in some offspring performing better than others, while other mothers provisioned their eggs more evenly, resulting in even performance throughout the clutch. This observation is reminiscent of bet-hedging strategies, and implies the possibility for intra-clutch variability in the TPCs to buffer N. natrix against unpredictable environmental variability. Copyright © 2016 Elsevier Ltd. All rights reserved.
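
    As a simplified stand-in for the GAMM fits, the sketch below fits a parametric (Gaussian) thermal performance curve to hypothetical data for one hatchling and extracts the curve characteristics discussed above (maximum performance, thermal optimum, performance breadth, and area under the curve); all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

# Hypothetical performance (e.g., sprint speed) of one hatchling at test temperatures
temps = np.array([16.0, 20.0, 24.0, 28.0, 32.0, 36.0])
perf = np.array([0.10, 0.28, 0.55, 0.78, 0.62, 0.20])

def gaussian_tpc(T, p_max, T_opt, width):
    return p_max * np.exp(-((T - T_opt) / width)**2)

popt, _ = curve_fit(gaussian_tpc, temps, perf, p0=[0.8, 28.0, 6.0])
p_max, T_opt, width = popt

T_grid = np.linspace(temps.min(), temps.max(), 500)
curve = gaussian_tpc(T_grid, *popt)
breadth = np.ptp(T_grid[curve >= 0.8 * p_max])   # range where performance >= 80% of max
auc = trapezoid(curve, T_grid)                   # area under the fitted curve

print(f"P_max={p_max:.2f}, T_opt={T_opt:.1f} C, B80={breadth:.1f} C, AUC={auc:.2f}")
```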

  15. Large scale affinity calculations of cyclodextrin host-guest complexes: Understanding the role of reorganization in the molecular recognition process

    PubMed Central

    Wickstrom, Lauren; He, Peng; Gallicchio, Emilio; Levy, Ronald M.

    2013-01-01

    Host-guest inclusion complexes are useful models for understanding the structural and energetic aspects of molecular recognition. Due to their small size relative to much larger protein-ligand complexes, converged results can be obtained rapidly for these systems, thus offering the opportunity to more reliably study fundamental aspects of the thermodynamics of binding. In this work, we have performed a large scale binding affinity survey of 57 β-cyclodextrin (CD) host-guest systems using the binding energy distribution analysis method (BEDAM) with implicit solvation (OPLS-AA/AGBNP2). Converged estimates of the standard binding free energies are obtained for these systems by employing techniques such as parallel Hamiltonian replica exchange molecular dynamics, conformational reservoirs and multistate free energy estimators. Good agreement with experimental measurements is obtained in terms of both numerical accuracy and affinity rankings. Overall, average effective binding energies reproduce affinity rank ordering better than the calculated binding affinities, even though calculated binding free energies, which account for effects such as conformational strain and entropy loss upon binding, provide lower root mean square errors when compared to measurements. Interestingly, we find that binding free energies are superior rank order predictors for a large subset containing the most flexible guests. The results indicate that, while challenging, accurate modeling of reorganization effects can lead to ligand design models of superior predictive power for rank ordering relative to models based only on ligand-receptor interaction energies. PMID:25147485

  16. 76 FR 30265 - Fisheries of the Northeastern United States; Monkfish; Amendment 5

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-25

    ....nefmc.org . Written comments regarding the burden-hour estimates or other aspects of the collection-of... are not overfished. Furthermore, the current estimated fishing mortality rate for each stock is below... establishes control rules to specify maximum sustainable yield (MSY), optimum yield (OY), overfishing level...

  17. Enterprise Information Lifecycle Management

    DTIC Science & Technology

    2011-01-01

    Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing...regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden to Washington...Need for Information Lifecycle Management .......................................................... 6 3.3 Challenges of Information Lifecycle

  18. Maritime Military Decision Making in Environments of Extreme Information Ambiguity: An Initial Exploration

    DTIC Science & Technology

    2005-09-01

    Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour per response, including the...collection of information . Send comments regarding this burden estimate or any other aspect of this collection of information , including suggestions for

  19. Endocrine responses in long-duration manned space flight

    NASA Technical Reports Server (NTRS)

    Leach, C. S.; Rambaut, P. C.

    1975-01-01

    Endocrine measurements to assess the physiological cost of the combined stresses of space flight are considered from two aspects. First, fluid and electrolyte balance are correlated with weight loss, changes in the excretion of aldosterone and vasopressin and fluid compartments. The second area involves estimation of the physiological cost of maintaining a given level of performance during space flight by analysis of urinary catecholamines and cortisol. Inter-individual variability is demonstrated for most experimental indices measured. The measured changes are consistent with the hypothesis that a relative increase in thoracic blood volume upon transition to the zero-gravity environment can be interpreted as a true volume expansion resulting in an osmotic diuresis.

  20. Sentinel-4: the geostationary component of the GMES atmosphere monitoring missions

    NASA Astrophysics Data System (ADS)

    Bazalgette Courrèges-Lacoste, G.; Arcioni, M.; Meijer, Y.; Bézy, J.-L.; Bensi, P.; Langen, J.

    2017-11-01

    The implementation of operational atmospheric composition monitoring missions is foreseen in the context of the Global Monitoring for Environment and Security (GMES) initiative. Sentinel-4 will address the geostationary observations and Sentinel-5 the low Earth orbit ones. The two missions are planned to be launched on-board Eumetsat's Meteosat Third Generation (MTG) and Post-EPS satellites, respectively. This paper presents an overview of the GMES Sentinel-4 mission, which has been assessed at Phase-0 level. It describes the key requirements and outlines the main aspects of the candidate implementation concepts available at completion of Phase-0. The paper will particularly focus on the observation mode, the estimated performance and the related technology developments.

  1. MCNP and GADRAS Comparisons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klasky, Marc Louis; Myers, Steven Charles; James, Michael R.

    To facilitate the timely execution of System Threat Reviews (STRs) for DNDO, and also to develop a methodology for performing STRs, LANL performed comparisons of several radiation transport codes (MCNP, GADRAS, and Gamma-Designer) that have been previously utilized to compute radiation signatures. While each of these codes has strengths, it is of paramount interest to determine the limitations of each of the respective codes and also to identify the most time efficient means by which to produce computational results, given the large number of parametric cases that are anticipated in performing STRs. These comparisons serve to identify regions of applicability for each code and provide estimates of uncertainty that may be anticipated. Furthermore, while performing these comparisons, the sensitivity of the results to modeling assumptions was also examined. These investigations serve to enable the creation of the LANL methodology for performing STRs. Given the wide variety of radiation test sources, scenarios, and detectors, LANL calculated comparisons of the following parameters: decay data, multiplicity, device (n,γ) leakages, and radiation transport through representative scenes and shielding. This investigation was performed to understand potential limitations of utilizing specific codes for different aspects of the STR challenges.

  2. Aspects of operational radiation protection during dismantling of nuclear facilities relevant for the estimation of internal doses.

    PubMed

    Labarta, T

    2007-01-01

    Operational radiation protection of workers during the dismantling of nuclear facilities is based on the same radiation protection principles as those applied during the facility's operational period, with the objective of ensuring proper implementation of the as-low-as-reasonably-achievable (ALARA) principle. These principles are: prior determination of the nature and magnitude of radiological risk; classification of workplaces and workers depending on the risks; implementation of control measures; monitoring of zones and working conditions, including, if necessary, individual monitoring. From the experiences and the lessons learned during the dismantling processes carried out in Spain, several important aspects of the practical implementation of these principles that directly influence and ensure an adequate prevention of exposures and the estimation of internal doses are pointed out, with special emphasis on the estimation of internal doses due to transuranic intakes.

  3. The discrepancy between emotional vs. rational estimates of body size, actual size, and ideal body ratings: theoretical and clinical implications.

    PubMed

    Thompson, J K; Dolce, J J

    1989-05-01

    Thirty-two asymptomatic college females were assessed on multiple aspects of body image. Subjects' estimation of the size of three body sites (waist, hips, thighs) was affected by instructional protocol. Emotional ratings, based on how they "felt" about their body, elicited ratings that were larger than actual and ideal size measures. Size ratings based on rational instructions were no different from actual sizes, but were larger than ideal ratings. There were no differences between actual and ideal sizes. The results are discussed with regard to methodological issues involved in body image research. In addition, a working hypothesis that differentiates affective/emotional from cognitive/rational aspects of body size estimation is offered to complement current theories of body image. Implications of the findings for the understanding of body image and its relationship to eating disorders are discussed.

  4. Application of the device based on chirping of optical impulses for management of software-defined networks in dynamic mode

    NASA Astrophysics Data System (ADS)

    Vinogradova, Irina L.; Khasansin, Vadim R.; Andrianova, Anna V.; Yantilina, Liliya Z.; Vinogradov, Sergey L.

    2016-03-01

    This paper analyzes how physical-layer design choices in optical networks influence the performance of the network as a whole, and concludes that the search for new means of transmitting information at the physical level remains relevant. It proposes using chirped optical pulses for overhead transmission between SDN controllers. The article investigates the possibility of building optical neural switches that are additionally controlled by an injected optical signal. The controlling radiation is assumed to change the refractive index of the optical medium of the device, and with it the wavelength of the information-carrying radiation; a multibeam interferometer is used to monitor the latter. A brief assessment of the technical aspects of constructing such a device is given, together with the principle of applying it in a large-scale network and a simulation of network performance parameters.

  5. Computer code for analyzing the performance of aquifer thermal energy storage systems

    NASA Astrophysics Data System (ADS)

    Vail, L. W.; Kincaid, C. T.; Kannberg, L. D.

    1985-05-01

    A code called Aquifer Thermal Energy Storage System Simulator (ATESSS) has been developed to analyze the operational performance of ATES systems. The ATESSS code provides the ability to examine the interrelationships among design specifications, general operational strategies, and unpredictable variations in the demand for energy. Users of the code can vary the well field layout, heat exchanger size, and pumping/injection schedule. Unpredictable aspects of supply and demand may also be examined through the use of a stochastic model of selected system parameters. While employing a relatively simple model of the aquifer, the ATESSS code plays an important role in the design and operation of ATES facilities by augmenting experience provided by the relatively few field experiments and demonstration projects. ATESSS has been used to characterize the effect of different pumping/injection schedules on a hypothetical ATES system and to estimate the recovery at the St. Paul, Minnesota, field experiment.

  6. Through-the-earth communication: Experiment results from Billie Mine and Mississippi Chemical Mine

    NASA Astrophysics Data System (ADS)

    Buettner, H. M.; Didwall, E. M.; Bukofzer, D. C.

    1988-06-01

    As part of the Lawrence Livermore National Laboratory (LLNL) effort to evaluate Through-the-Earth Communication (TEC) as an option for military communication systems, experiments were conducted involving transmission, reception, and performance monitoring of digital electromagnetic communication signals propagating through the earth. The two experiments reported on here not only demonstrated that TEC is useful for transmissions at digital rates above a few bits per second, but also provided data on performance parameters with which to evaluate TEC in various military applications. The most important aspect of these experiments is that the bit error rate (BER) is measured rather than just estimated from purely analytic developments. By measuring this important parameter, not only has more credibility been lent to the proof of concept goals of the experiment, but also a means for judging the effects of assumptions in BER theoretical models has been provided.

  7. [Costing nuclear medicine diagnostic procedures].

    PubMed

    Markou, Pavlos

    2005-01-01

    To the Editor: Referring to a recent special report about the cost analysis of twenty-nine nuclear medicine procedures, I would like to clarify some basic aspects of determining the costs of nuclear medicine procedures with various costing methodologies. The Activity Based Costing (ABC) method is a new approach in imaging services costing that can provide the most accurate cost data, but is difficult to perform for nuclear medicine diagnostic procedures. That is because ABC requires determining and analyzing all direct and indirect costs of each procedure, according to all of its activities. Traditional costing methods, like those for estimating incomes and expenses per procedure or fixed and variable costs per procedure, which are widely used in break-even point analysis, and the method of ratio-of-costs-to-charges per procedure may be easily performed in nuclear medicine departments, to evaluate the variability and differences between costs and reimbursement charges.
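
    A toy numerical contrast between activity-based costing and the ratio-of-costs-to-charges approach for a single procedure is sketched below; every figure is illustrative, not an actual nuclear medicine cost.

```python
# Activity-based costing: sum the resources consumed by each activity of the procedure
activities = {                              # illustrative activities for one scan
    "radiopharmaceutical": 85.0,            # direct material cost (EUR)
    "technologist time": 0.75 * 40.0,       # hours x hourly cost
    "physician reporting": 0.33 * 90.0,
    "camera time": 0.50 * 60.0,             # hours x equipment cost rate
    "overhead allocation": 25.0,
}
abc_cost = sum(activities.values())

# Ratio-of-costs-to-charges: scale the charge by a department-wide cost/charge ratio
charge = 260.0
cost_to_charge_ratio = 0.72
rcc_cost = charge * cost_to_charge_ratio

print(f"ABC estimate: {abc_cost:.2f} EUR, ratio-of-costs-to-charges estimate: {rcc_cost:.2f} EUR")
```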

  8. Evidence of automatic processing in sequence learning using process-dissociation

    PubMed Central

    Mong, Heather M.; McCabe, David P.; Clegg, Benjamin A.

    2012-01-01

    This paper proposes a way to apply process-dissociation to sequence learning as an addition and extension to the approach used by Destrebecqz and Cleeremans (2001). Participants were trained on two sequences separated from each other by a short break. Following training, participants self-reported their knowledge of the sequences. A recognition test was then performed which required discrimination of the two trained sequences, either under the instruction to call any sequence encountered in the experiment "old" (the inclusion condition), or to call only sequence fragments from one half of the experiment "old" (the exclusion condition). The recognition test elicited automatic and controlled process estimates using the process-dissociation procedure, and suggested both processes were involved. Examining the underlying processes supporting performance may provide more information on the fundamental aspects of the implicit and explicit constructs than has been attainable through awareness testing. PMID:22679465
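
    The process-dissociation arithmetic behind such a recognition test can be written in a few lines; the hit rates below are illustrative, and the equations are the standard estimates of controlled (C) and automatic (A) influences, with Inclusion = C + A(1 - C) and Exclusion = A(1 - C).

```python
def process_dissociation(p_inclusion: float, p_exclusion: float):
    """Standard process-dissociation estimates:
       Inclusion = C + A*(1 - C),  Exclusion = A*(1 - C)."""
    C = p_inclusion - p_exclusion
    A = p_exclusion / (1.0 - C) if C < 1.0 else float("nan")
    return C, A

# Illustrative hit rates for trained-sequence fragments
C, A = process_dissociation(p_inclusion=0.72, p_exclusion=0.35)
print(f"controlled estimate C = {C:.2f}, automatic estimate A = {A:.2f}")
```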

  9. Dynamic-Active Flow Control - Phase I

    DTIC Science & Technology

    2006-10-18

    effective in controlling the flow. In altering the orifice shape to one with a lower aspect ratio , for example a circular hole, the effect of the...DYNAMIC-ACTIVE FLOW CONTROL - PHASE I By ASHLEY TUCK AND JULIO SORIA 1 Laboratory for Turbulence Research...comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington

  10. Data Fusion of Gridded Snow Products Enhanced with Terrain Covariates and a Simple Snow Model

    NASA Astrophysics Data System (ADS)

    Snauffer, A. M.; Hsieh, W. W.; Cannon, A. J.

    2017-12-01

    Hydrologic planning requires accurate estimates of regional snow water equivalent (SWE), particularly in areas with hydrologic regimes dominated by spring melt. While numerous gridded data products provide such estimates, accurate representations are particularly challenging under conditions of mountainous terrain, heavy forest cover and large snow accumulations, contexts which in many ways define the province of British Columbia (BC), Canada. One promising avenue for improving SWE estimates is a data fusion approach which combines field observations with gridded SWE products and relevant covariates. A base artificial neural network (ANN) was constructed using three of the best performing gridded SWE products over BC (ERA-Interim/Land, MERRA and GLDAS-2) and simple location and time covariates. This base ANN was then enhanced to include terrain covariates (slope, aspect and Terrain Roughness Index, TRI) as well as a simple 1-layer energy balance snow model driven by gridded bias-corrected ANUSPLIN temperature and precipitation values. The ANN enhanced with all aforementioned covariates performed better than the base ANN, but most of the skill improvement was attributable to the snow model with very little contribution from the terrain covariates. The enhanced ANN improved station mean absolute error (MAE) by an average of 53% relative to the composing gridded products over the province. Interannual peak SWE correlation coefficient was found to be 0.78, an improvement of 0.05 to 0.18 over the composing products. This nonlinear approach outperformed a comparable multiple linear regression (MLR) model by 22% in MAE and 0.04 in interannual correlation. The enhanced ANN has also been shown to produce better SWE estimates than the Variable Infiltration Capacity (VIC) hydrologic model calibrated and run for four BC watersheds, improving MAE by 22% and correlation by 0.05. The performance improvements of the enhanced ANN are statistically significant at the 5% level across the province and in four out of five physiographic regions.
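
    A hedged sketch of the fusion step with scikit-learn: a small neural network maps gridded SWE products plus covariates to station SWE and is compared with a multiple linear regression via mean absolute error. The synthetic predictors and target are placeholders, not the study's inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
# Placeholder predictors: three gridded SWE products, elevation, day of year, snow-model SWE
X = rng.normal(size=(n, 6))
true_swe = 300 + 120 * np.tanh(X[:, 0] + 0.5 * X[:, 5]) + 30 * X[:, 3] + 20 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, true_swe, test_size=0.3, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=3000, random_state=0).fit(X_tr, y_tr)
mlr = LinearRegression().fit(X_tr, y_tr)

print("ANN MAE:", round(mean_absolute_error(y_te, ann.predict(X_te)), 1), "mm")
print("MLR MAE:", round(mean_absolute_error(y_te, mlr.predict(X_te)), 1), "mm")
```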

  11. Influence of optimized leading-edge deflection and geometric anhedral on the low-speed aerodynamic characteristics of a low-aspect-ratio highly swept arrow-wing configuration. [langley 7 by 10 foot tunnel

    NASA Technical Reports Server (NTRS)

    Coe, P. L., Jr.; Huffman, J. K.

    1979-01-01

    An investigation was conducted in the Langley 7 by 10 foot tunnel to determine the influence of an optimized leading-edge deflection on the low speed aerodynamic performance of a configuration with a low aspect ratio, highly swept wing. The sensitivity of the lateral stability derivative to geometric anhedral was also studied. The optimized leading-edge deflection was developed by aligning the leading edge with the incoming flow along the entire span. Owing to the spanwise variation of upwash, the resulting optimized leading edge was a smooth, continuously warped surface for which the deflection varied from 16 deg at the side of body to 50 deg at the wing tip. For the particular configuration studied, levels of leading-edge suction on the order of 90 percent were achieved. The results of tests conducted to determine the sensitivity of the lateral stability derivative to geometric anhedral indicate values which are in reasonable agreement with estimates provided by simple vortex-lattice theories.

  12. [Genetic aspects of the Stroop test].

    PubMed

    Nánási, Tibor; Katonai, Enikő Rózsa; Sasvári-Székely, Mária; Székely, Anna

    2012-12-01

    Impairment of executive control functions in depression is well documented, and performance on the Stroop Test is one of the most widely used markers to measure the decline. This tool provides reliable quantitative phenotype data that can be used efficiently in candidate gene studies investigating inherited components of executive control. The aim of the present review is to summarize research on genetic factors of Stroop performance. Interestingly, only a few such candidate gene studies have been carried out to date. Twin studies show a 30-60% heritability estimate for the Stroop test, suggesting a significant genetic component. A single genome-wide association study has been carried out on Stroop performance, and it did not show any significant association with any of the tested polymorphisms after correction for multiple testing. Candidate gene studies to date have pointed to polymorphisms of several neurotransmitter systems (dopamine, serotonin, acetylcholine) and to the role of the APOE ε4 allele. Surprisingly, little is known about the genetic role of neurotrophic and survival factors. In conclusion, further studies are needed to clarify the genetic background of Stroop performance, characterizing attentional functions.

  13. Initial Performance of the Attitude Control and Aspect Determination Subsystems on the Chandra Observatory

    NASA Technical Reports Server (NTRS)

    Cameron, R.; Aldcroft, T.; Podgorski, W. A.; Freeman, M. D.

    2000-01-01

    The aspect determination system of the Chandra X-ray Observatory plays a key role in realizing the full potential of Chandra's X-ray optics and detectors. We review the performance of the spacecraft hardware components and sub-systems, which provide information for both real time control of the attitude and attitude stability of the Chandra Observatory and also for more accurate post-facto attitude reconstruction. These flight components are comprised of the aspect camera (star tracker) and inertial reference units (gyros), plus the fiducial lights and fiducial transfer optics which provide an alignment null reference system for the science instruments and X-ray optics, together with associated thermal and structural components. Key performance measures will be presented for aspect camera focal plane data, gyro performance both during stable pointing and during maneuvers, alignment stability and mechanism repeatability.

  14. Effect of Aspect Ratio on the Low-Speed Lateral Control Characteristics of Untapered Low-Aspect-Ratio Wings Equipped with Flap and with Retractable Ailerons

    NASA Technical Reports Server (NTRS)

    Fischel, Jack; Naeseth, Rodger L; Hagerman, John R; O'Hare, William M

    1952-01-01

    A low-speed wind-tunnel investigation was made to determine the lateral control characteristics of a series of untapered low-aspect-ratio wings. Sealed flap ailerons of various spans and spanwise locations were investigated on unswept wings of aspect ratios 1.13, 2.13, 4.13, and 6.13; and various projections of 0.60-semispan retractable ailerons were investigated on the unsweptback wings of aspect ratios 1.13, 2.13, and 4.13 and on a 45 degree sweptback wing. The retractable ailerons investigated on the unswept wings spanned the outboard stations of each wing; whereas the plain and stepped retractable ailerons investigated on the sweptback wing were located at various spanwise stations. Design charts based on experimental results are presented for estimating the flap aileron effectiveness for low-aspect-ratio, untapered, unswept wings.

  15. Aerodynamic and heat transfer analysis of the low aspect ratio turbine

    NASA Astrophysics Data System (ADS)

    Sharma, O. P.; Nguyen, P.; Ni, R. H.; Rhie, C. M.; White, J. A.

    1987-06-01

    The available two- and three-dimensional codes are used to estimate external heat loads and aerodynamic characteristics of a highly loaded turbine stage in order to demonstrate state-of-the-art methodologies in turbine design. By using data for a low aspect ratio turbine, it is found that a three-dimensional multistage Euler code gives good overall predictions for the turbine stage, yielding good estimates of the stage pressure ratio, mass flow, and exit gas angles. The nozzle vane loading distribution is well predicted by both the three-dimensional multistage Euler and three-dimensional Navier-Stokes codes. The vane airfoil surface Stanton number distributions, however, are underpredicted by both two- and three-dimensional boundary value analysis.

  16. Morbidity and mortality of vermiculite miners and millers exposed to tremolite-actinolite: Part I. Exposure estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amandus, H.E.; Wheeler, R.; Jankovic, J.

    1987-01-01

    The vermiculite ore and concentrate of a mine and mill near Libby, Montana, was found to be contaminated with fibrous tremolite-actinolite. Of 599 fibers (length greater than 5 microns and width greater than 0.45 micron) counted in eight airborne membrane filter samples, 96% had an aspect ratio greater than 10 and 16% had an aspect ratio greater than 50. Additionally, 73% of the fibers were longer than 10 microns, 36% were longer than 20 microns, and 10% were longer than 40 microns. Estimates of exposure before 1964 in the dry mill were 168 fibers/cc for working areas, 182 fibers/cc for sweepers, 88 fibers/cc for skipping, and 13 fibers/cc for the quality control laboratory. In 1964-1971, exposure estimates for these areas were 33, 36, 17, and 3 fibers/cc, respectively. Estimates of exposures in the mine before 1971 ranged from 9-23 fibers/cc for drillers and were less than 2 fibers/cc for nondrilling jobs. All 8-hr TWA job exposure estimates decreased from 1972-1976, and from 1977-1982 were less than 1 fiber/cc.

  17. The morbidity and mortality of vermiculite miners and millers exposed to tremolite-actinolite: Part I. Exposure estimates.

    PubMed

    Amandus, H E; Wheeler, R; Jankovic, J; Tucker, J

    1987-01-01

    The vermiculite ore and concentrate of a mine and mill near Libby, Montana, was found to be contaminated with fibrous tremolite-actinolite. Of 599 fibers (length greater than 5 microns and width greater than 0.45 micron) counted in eight airborne membrane filter samples, 96% had an aspect ratio greater than 10 and 16% had an aspect ratio greater than 50. Additionally, 73% of the fibers were longer than 10 microns, 36% were longer than 20 microns, and 10% were longer than 40 microns. Estimates of exposure before 1964 in the dry mill were 168 fibers/cc for working areas, 182 fibers/cc for sweepers, 88 fibers/cc for skipping, and 13 fibers/cc for the quality control laboratory. In 1964-1971, exposure estimates for these areas were 33, 36, 17, and 3 fibers/cc, respectively. Estimates of exposures in the mine before 1971 ranged from 9-23 fibers/cc for drillers and were less than 2 fibers/cc for nondrilling jobs. All 8-hr TWA job exposure estimates decreased from 1972-1976, and from 1977-1982 were less than 1 fiber/cc.

  18. On Searching Available Channels with Asynchronous MAC-Layer Spectrum Sensing

    NASA Astrophysics Data System (ADS)

    Jiang, Chunxiao; Ma, Xin; Chen, Canfeng; Ma, Jian; Ren, Yong

    Dynamic spectrum access has become a focal issue recently, in which identifying the available spectrum plays a rather important role. A lot of work has been done on secondary users (SUs) synchronously accessing a primary user's (PU's) network. However, on one hand, an SU may have no idea about the PU's communication protocols; on the other, it is possible that communications among PUs are not based on a synchronous scheme at all. In order to address such problems, this paper advances a strategy for SUs to search available spectrum with asynchronous MAC-layer sensing. With this method, SUs need not know the communication mechanisms of the PU's network when accessing the spectrum dynamically. We will focus on four aspects: 1) strategy for searching available channels; 2) vacating strategy when PUs come back; 3) estimation of channel parameters; 4) impact of SUs' interference on PU's data rate. The simulations show that our search strategy not only achieves nearly 50% lower interference probability than equal allocation of the total search time, but also adapts well to time-varying channels. Moreover, access by our strategies can attain 150% more access time than random access. The moment matching estimator shows good performance in estimating and tracing time-varying channels.
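
    The moment-matching step can be illustrated by estimating the parameters of a channel's idle-period distribution from observed idle durations; the gamma model and all numbers below are illustrative assumptions, not the paper's channel model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observed idle-period durations on one channel (seconds),
# here drawn from a gamma distribution the sensing node does not know.
true_shape, true_scale = 2.0, 0.5
idle = rng.gamma(true_shape, true_scale, size=400)

# Moment matching (method of moments) for a gamma model:
#   mean = k*theta, var = k*theta^2  ->  k = mean^2/var, theta = var/mean
m, v = idle.mean(), idle.var(ddof=1)
k_hat, theta_hat = m**2 / v, v / m
print(f"estimated shape={k_hat:.2f} (true {true_shape}), scale={theta_hat:.2f} (true {true_scale})")

# A secondary user might then rank channels by their expected idle time k*theta
print("expected idle time on this channel: %.2f s" % (k_hat * theta_hat))
```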

  19. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In such models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either through conducting experiments or based on the information available in the literature. However, it is not easy to obtain all of the required parameters in this way, and there are occasions when important information, such as diffusion coefficients and rate constants of ions, has not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate parameters of physics-based redox battery models is proposed. Furthermore, this paper also demonstrates that the proposed approach can study and analyze aspects of capacity loss/fade, kinetics, and transport phenomena of the RFB system.

  20. Estimation of Transport and Kinetic Parameters of Vanadium Redox Batteries Using Static Cells

    DOE PAGES

    Lee, Seong Beom; Pratt, III, Harry D.; Anderson, Travis M.; ...

    2018-03-27

    Mathematical models of Redox Flow Batteries (RFBs) can be used to analyze cell performance, optimize battery operation, and control the energy storage system efficiently. Among many other models, physics-based electrochemical models are capable of predicting internal states of the battery, such as temperature, state-of-charge, and state-of-health. In such models, parameter estimation is an important step in studying, analyzing, and validating the models against experimental data. A common practice is to determine these parameters either through conducting experiments or based on the information available in the literature. However, it is not easy to obtain all of the required parameters in this way, and there are occasions when important information, such as diffusion coefficients and rate constants of ions, has not been studied. Also, the parameters needed for modeling charge-discharge are not always available. In this paper, an efficient way to estimate parameters of physics-based redox battery models is proposed. Furthermore, this paper also demonstrates that the proposed approach can study and analyze aspects of capacity loss/fade, kinetics, and transport phenomena of the RFB system.

  1. In the Blink of an Eye: Relating Positive-Feedback Sensitivity to Striatal Dopamine D2-Like Receptors through Blink Rate

    PubMed Central

    Groman, Stephanie M.; James, Alex S.; Seu, Emanuele; Tran, Steven; Clark, Taylor A.; Harpster, Sandra N.; Crawford, Maverick; Burtner, Joanna Lee; Feiler, Karen; Roth, Robert H.; Elsworth, John D.; London, Edythe D.

    2014-01-01

    For >30 years, positron emission tomography (PET) has proven to be a powerful approach for measuring aspects of dopaminergic transmission in the living human brain; this technique has revealed important relationships between dopamine D2-like receptors and dimensions of normal behavior, such as human impulsivity, and psychopathology, particularly behavioral addictions. Nevertheless, PET is an indirect estimate that lacks cellular and functional resolution and, in some cases, is not entirely pharmacologically specific. To identify the relationships between PET estimates of D2-like receptor availability and direct in vitro measures of receptor number, affinity, and function, we conducted neuroimaging and behavioral and molecular pharmacological assessments in a group of adult male vervet monkeys. Data gathered from these studies indicate that variation in D2-like receptor PET measurements is related to reversal-learning performance and sensitivity to positive feedback and is associated with in vitro estimates of the density of functional dopamine D2-like receptors. Furthermore, we report that a simple behavioral measure, eyeblink rate, reveals novel and crucial links between neuroimaging assessments and in vitro measures of dopamine D2 receptors. PMID:25339755

  2. Tailored Excitation for Frequency Response Measurement Applied to the X-43A Flight Vehicle

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan

    2007-01-01

    An important aspect of any flight research project is assessing aircraft stability and flight control performance. In some programs this assessment is accomplished through the estimation of the in-flight vehicle frequency response. This estimation has traditionally been a lengthy task requiring separate swept sine inputs for each control axis at a constant flight condition. Hypersonic vehicles spend little time at any specific flight condition while they are decelerating. Accordingly, it is difficult to use traditional methods to calculate the vehicle frequency response and stability margins for this class of vehicle. A technique has been previously developed to significantly reduce the duration of the excitation input by tailoring the input to excite only the frequency range of interest. Reductions in test time were achieved by simultaneously applying tailored excitation signals to multiple control loops, allowing a quick estimate of the frequency response of a particular aircraft. This report discusses the flight results obtained from applying a tailored excitation input to the X-43A longitudinal and lateral-directional control loops during the second and third flights. The frequency responses and stability margins obtained from flight data are compared with preflight predictions.

  3. The geometry and volume of melt beneath Ethiopia

    NASA Astrophysics Data System (ADS)

    Kendall, J. M.; Hammond, J. O. S.

    2016-12-01

    A range of seismic measurements can be used to map melt distribution in the crust and uppermost mantle. These include seismic P- and S-wave velocities derived from surface- and body-wave tomography, Vp/Vs ratios obtained from receiver functions, and estimates of seismic anisotropy and attenuation. The most obvious melt parameter that seismic data might be sensitive to is volume fraction. However, such data are more sensitive to the aspect ratio of melt inclusions, which is controlled by the melt wetting angle or, in other words, the shape of the melt inclusion. To better understand this, we perform numerical modelling, varying the shape and amount of melt, to show how various seismic phases are affected by melt. To consider the effects on seismic anisotropy we assume that the melt can be stored in pockets of melt that are either horizontally or vertically aligned (e.g., sills versus dykes). We then consider a range of seismic observations from the rifting environment of Ethiopia. Recent studies of P- and S-wave tomography, Rayleigh and Love waves, and Pn or wide angle P-wave refractions provide complementary constraints on melt volume, orientation and inclusion aspect ratio. Furthermore, receiver functions and shear-wave splitting in body waves show strong anisotropy in this region and can be used to constrain the strike of vertically-aligned partial melt. We show that melt in the mantle beneath Ethiopia is likely stored in low aspect ratio disk-like inclusions, suggesting melt is not in textural equilibrium. We estimate that 2-7% vertically aligned melt is stored beneath the Main Ethiopian Rift, >6% horizontally and vertically aligned melt is stored beneath the Afar-region of the Red Sea Rift and 1-6% horizontally aligned melt is stored beneath the Danakil microplate. This supports ideas of strong shear-derived segregation of melt in narrow parts of the rift and large volumes of melt beneath Afar.

  4. Gender-, Race-, and Income-Based Stereotype Threat: The Effects of Multiple Stigmatized Aspects of Identity on Math Performance and Working Memory Function

    ERIC Educational Resources Information Center

    Tine, Michele; Gotlieb, Rebecca

    2013-01-01

    This study compared the relative impact of gender-, race-, and income-based stereotype threat and examined if individuals with multiple stigmatized aspects of identity experience a larger stereotype threat effect on math performance and working memory function than people with one stigmatized aspect of identity. Seventy-one college students of the…

  5. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series

    PubMed Central

    Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael

    2014-01-01

    Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems. PMID:25068489
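
    A minimal sketch of the ensemble idea using a simple binned estimator (not the estimator or GPU implementation described in the paper): at a chosen time point, the probability tables are built by pooling observations across trials rather than across time, so stationarity over time is not required. Data and bin counts are illustrative.

```python
import numpy as np

def transfer_entropy_ensemble(x, y, t, bins=3):
    """Binned transfer entropy TE(X -> Y) at time t, estimated by pooling
       observations across trials (the 'ensemble' of realizations) rather than
       across time. x, y: arrays of shape (n_trials, n_timepoints)."""
    def disc(a):
        # discretize into roughly equiprobable bins
        edges = np.quantile(a, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(a, edges)

    xd, yd = disc(x), disc(y)
    yp, yt, xt = yd[:, t + 1], yd[:, t], xd[:, t]

    joint = np.zeros((bins, bins, bins))
    for a, b, c in zip(yp, yt, xt):
        joint[a, b, c] += 1
    joint /= joint.sum()

    p_yt_xt = joint.sum(axis=0)          # p(y_t, x_t)
    p_yp_yt = joint.sum(axis=2)          # p(y_{t+1}, y_t)
    p_yt = joint.sum(axis=(0, 2))        # p(y_t)

    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p = joint[a, b, c]
                if p > 0:
                    te += p * np.log2(p * p_yt[b] / (p_yt_xt[b, c] * p_yp_yt[a, b]))
    return te

# Toy ensemble: y depends on the previous x, so TE(X->Y) should exceed TE(Y->X)
rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 20))
y = np.zeros_like(x)
y[:, 1:] = 0.8 * x[:, :-1] + 0.3 * rng.normal(size=(5000, 19))
print("TE(X->Y):", round(transfer_entropy_ensemble(x, y, t=10), 3))
print("TE(Y->X):", round(transfer_entropy_ensemble(y, x, t=10), 3))
```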

  6. Modeling particle number concentrations along Interstate 10 in El Paso, Texas

    PubMed Central

    Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias

    2014-01-01

    Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 10³ particles/cc and 1.3 × 10⁵ particles/cc, and averaged 2.5 × 10⁴ particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias amongst urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 10¹⁴ particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economical aspects of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
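
    The land-use-regression adjustment and the fractional-error check can be illustrated in a few lines; the VKT values and monitored concentrations below are synthetic, not the El Paso data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic monitoring data: vehicle kilometers traveled within a buffer
# (10^3 veh-km/day) and observed particle number concentration (particles/cc)
vkt = np.array([[5.0], [12.0], [20.0], [35.0], [50.0], [80.0], [110.0]])
pnc_obs = np.array([9.5e3, 1.4e4, 1.9e4, 2.6e4, 3.3e4, 4.9e4, 6.2e4])

lur = LinearRegression().fit(vkt, pnc_obs)
pnc_est = lur.predict(vkt)

fractional_error = (pnc_est - pnc_obs) / pnc_obs
print("background intercept: %.2e particles/cc" % lur.intercept_)
print("fractional error per site (%):", np.round(100 * fractional_error, 1))
```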

  7. Relative range error evaluation of terrestrial laser scanners using a plate, a sphere, and a novel dual-sphere-plate target.

    PubMed

    Muralikrishnan, Bala; Rachakonda, Prem; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cheok, Geraldine; Cournoyer, Luc

    2017-12-01

    Terrestrial laser scanners (TLS) are a class of 3D imaging systems that produce a 3D point cloud by measuring the range and two angles to a point. The fundamental measurement of a TLS is range. Relative range error is one component of the overall range error of TLS and its estimation is therefore an important aspect in establishing metrological traceability of measurements performed using these systems. Target geometry is an important aspect to consider when realizing the relative range tests. The recently published ASTM E2938-15 mandates the use of a plate target for the relative range tests. While a plate target may reasonably be expected to produce distortion free data even at far distances, the target itself needs careful alignment at each of the relative range test positions. In this paper, we discuss relative range experiments performed using a plate target and then address the advantages and limitations of using a sphere target. We then present a novel dual-sphere-plate target that draws from the advantages of the sphere and the plate without the associated limitations. The spheres in the dual-sphere-plate target are used simply as fiducials to identify a point on the surface of the plate that is common to both the scanner and the reference instrument, thus overcoming the need to carefully align the target.

  8. The aspect ratio effects on the performances of GaN-based light-emitting diodes with nanopatterned sapphire substrates

    NASA Astrophysics Data System (ADS)

    Kao, Chien-Chih; Su, Yan-Kuin; Lin, Chuing-Liang; Chen, Jian-Jhong

    2010-07-01

    Nanopatterned sapphire substrates (NPSSs) with aspect ratios varying from 2.00 to 2.50 were fabricated by nanoimprint lithography. The NPSS technique allows us to improve the epitaxial film quality and enhance the light extraction efficiency. In this work, the effects of aspect ratio on the performance of GaN-based light-emitting diodes (LEDs) with NPSSs were investigated. The light output enhancement of GaN-based LEDs with NPSSs increased from 11% to 27% as the aspect ratio of the NPSS increased from 2.00 to 2.50. Because the crystalline quality improved to the same degree for the various NPSS aspect ratios, these results indicate that the aspect ratio of the NPSS is strongly related to the light extraction efficiency.

  9. INFLUENCE OF SCALE RATIO, ASPECT RATIO, AND PLANFORM ON THE PERFORMANCE OF SUPERCAVITATING HYDROFOILS.

    DTIC Science & Technology

    performance of supercavitating hydrofoils. No appreciable scale effect was found for scale ratios up to 3 in the fully-cavitating flow region. The...overall performance of the hydrofoil by increasing the aspect ratio above 3, and (2) moderate taper ratio seems to be advantageous in view of the overall performance of supercavitating hydrofoils. (Author)

  10. Chapter 8: Uncertainty assessment for quantifying greenhouse gas sources and sinks

    Treesearch

    Jay Breidt; Stephen M. Ogle; Wendy Powers; Coeli Hoover

    2014-01-01

    Quantifying the uncertainty of greenhouse gas (GHG) emissions and reductions from agriculture and forestry practices is an important aspect of decision-making for farmers, ranchers and forest landowners as the uncertainty range for each GHG estimate communicates our level of confidence that the estimate reflects the actual balance of GHG exchange between...

  11. GPS-based Microenvironment Tracker (MicroTrac) Model to Estimate Time-Location of Individuals for Air Pollution Exposure Assessments: Model Evaluation in Central North Carolina

    EPA Science Inventory

    A critical aspect of air pollution exposure assessment is the estimation of the time spent by individuals in various microenvironments (ME). Accounting for the time spent in different ME with different pollutant concentrations can reduce exposure misclassifications, while failure...

  12. Comparing AMSR-E soil moisture estimates to the extended record of the U.S. Climate Reference Network (USCRN)

    USDA-ARS?s Scientific Manuscript database

    Soil moisture plays an integral role in various aspects ranging from multi-scale hydrologic modeling to agricultural decision analysis, and from climate change assessments to drought prediction and prevention. The broad availability of soil moisture estimates has only...

  13. How robust are the estimated effects of air pollution on health? Accounting for model uncertainty using Bayesian model averaging.

    PubMed

    Pannullo, Francesca; Lee, Duncan; Waclawski, Eugene; Leyland, Alastair H

    2016-08-01

    The long-term impact of air pollution on human health can be estimated from small-area ecological studies in which the health outcome is regressed against air pollution concentrations and other covariates, such as socio-economic deprivation. Socio-economic deprivation is multi-factorial and difficult to measure, and includes aspects of income, education, and housing, among others. However, these variables are potentially highly correlated, meaning one can either create an overall deprivation index, or use the individual characteristics, which can result in a variety of pollution-health effects. Other aspects of model choice may affect the pollution-health estimate, such as the estimation of pollution, and the spatial autocorrelation model. Therefore, we propose a Bayesian model averaging approach to combine the results from multiple statistical models to produce a more robust representation of the overall pollution-health effect. We investigate the relationship between nitrogen dioxide concentrations and cardio-respiratory mortality in West Central Scotland between 2006 and 2012. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
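
    A minimal sketch of BIC-weighted model averaging over alternative deprivation specifications in a Poisson health model (the full approach in the paper also accounts for spatial autocorrelation, which is omitted here); the data and covariate names are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
no2 = rng.normal(30, 8, n)                 # NO2 concentration, ug/m^3
income = rng.normal(0, 1, n)               # standardized deprivation components
education = 0.5 * income + rng.normal(0, 1, n)
expected = rng.uniform(20, 60, n)          # expected deaths (offset)
log_rate = 0.01 * no2 - 0.15 * income + rng.normal(0, 0.1, n)
deaths = rng.poisson(expected * np.exp(log_rate))

# Candidate models: different ways of adjusting for deprivation
designs = {
    "income only": np.column_stack([no2, income]),
    "education only": np.column_stack([no2, education]),
    "both components": np.column_stack([no2, income, education]),
}

results = {}
for name, X in designs.items():
    Xc = sm.add_constant(X)
    fit = sm.GLM(deaths, Xc, family=sm.families.Poisson(),
                 offset=np.log(expected)).fit()
    bic = -2 * fit.llf + Xc.shape[1] * np.log(n)
    results[name] = (fit.params[1], bic)   # NO2 coefficient and BIC

bics = np.array([b for _, b in results.values()])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()
beta_bma = sum(w * coef for w, (coef, _) in zip(weights, results.values()))
print("model weights:", dict(zip(results, np.round(weights, 3))))
print("BMA-averaged NO2 log-relative-risk per ug/m^3: %.4f" % beta_bma)
```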

  14. Carbonate pore system evaluation using the velocity-porosity-pressure relationship, digital image analysis, and differential effective medium theory

    NASA Astrophysics Data System (ADS)

    Lima Neto, Irineu A.; Misságia, Roseane M.; Ceia, Marco A.; Archilha, Nathaly L.; Oliveira, Lucas C.

    2014-11-01

    Carbonate reservoirs exhibit heterogeneous pore systems and a wide variety of grain types, which affect the rock's elastic properties and the reservoir parameter relationships. To study the Albian carbonates in the Campos Basin, a methodology is proposed to predict the amount of microporosity and the representative aspect ratio of these inclusions. The method assumes three pore-space scales in two representative inclusion scenarios: 1) a macro-mesopore median aspect ratio from the thin-section digital image analysis (DIA) and 2) a microporosity aspect ratio predicted from the measured P-wave velocities. Through a laboratory analysis of 10 grainstone core samples of the Albian age, the P- and S-wave velocities (Vp and Vs) are evaluated at effective pressures of 0-10 MPa. The analytical theories in the proposed methodology are functions of the aspect ratios from the differential effective medium (DEM) theory, the macro-mesopore system recognized from the DIA, the amount of microporosity determined by the difference between the porosities estimated from laboratory helium-gas measurements and from the thin-section petrographic images, and the P-wave velocities under dry effective pressure conditions. The DIA procedure is applied to estimate the local and global parameters, and the textural implications concerning ultrasonic velocities and image resolution. The macro-mesopore inclusions contribute to stiffer rocks and higher velocities, whereas the microporosity inclusions contribute to softer rocks and lower velocities. We observe a high potential for this methodology, which uses the microporosity aspect ratio inverted from Vp to predict Vs with good agreement. The results acceptably characterize the Albian grainstones. The representative macro-mesopore aspect ratio is 0.5, and the inverted microporosity aspect ratio ranges from 0.01 to 0.07. The effective pressure induced an effect of slight porosity reduction during the triaxial tests, mainly in the microporosity inclusions, slightly changing the amount and the aspect ratio of the microporosity.

  15. Personality Traits, Facets and Cognitive Performance: Age Differences in Their Relations

    PubMed Central

    Graham, Eileen K.; Lachman, Margie E.

    2014-01-01

    Personality traits and cognitive performance are related, but little work has examined how these associations vary by personality facet or age. 154 adults aged 22 to 84 completed the Brief Test of Adult Cognition by Telephone (BTACT) and the NEO Five Factor Personality Inventory. Hierarchical multiple regression analyses showed negative emotional aspects of personality (neuroticism, depression) were associated with lower reasoning, and social aspects of personality (assertiveness) were associated with faster reaction time, yet lower reasoning. The association between neuroticism and performance was found primarily among younger adults. In older adulthood, better performance was associated with positive emotional aspects of personality. We discuss how personality may have different associations with performance across age and the implications for possible interventions. PMID:24821992

  16. Sasebo, A Case Study in Optimizing Official Vehicles

    DTIC Science & Technology

    2012-12-01

    is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and maintaining...the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of...that were estimated to reduce federal budget deficits by a total of at least $2.1 trillion over the 2012–2021 period…At least another $1.2 trillion

  17. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, in which the computer is capable of computing the errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of a program capable of computing both some quantity and its error estimate in the range of possible measurement errors is demonstrated.
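
    A minimal sketch of the basic idea of returning a value together with an error estimate, here via first-order propagation of input measurement errors using numerical derivatives. The function and the input uncertainties are illustrative assumptions, not the approach described in the paper.

    ```python
    # First-order propagation of measurement errors through a function,
    # returning both the computed quantity and an error bound.
    import numpy as np

    def value_with_error(f, x, dx, h=1e-6):
        """Return f(x) and a first-order error estimate for inputs x +/- dx."""
        x = np.asarray(x, dtype=float)
        dx = np.asarray(dx, dtype=float)
        y = f(x)
        err = 0.0
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = h
            dfdx = (f(x + step) - f(x - step)) / (2.0 * h)   # central difference
            err += abs(dfdx) * dx[i]                          # worst-case linear bound
        return y, err

    # Example: resistance from measured voltage and current, R = V / I.
    val, err = value_with_error(lambda v: v[0] / v[1], x=[12.0, 2.0], dx=[0.1, 0.05])
    print(f"R = {val:.3f} +/- {err:.3f} ohm")
    ```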

  18. Incorporating population-level variation in thermal performance into predictions of geographic range shifts.

    PubMed

    Angert, Amy L; Sheth, Seema N; Paul, John R

    2011-11-01

    Determining how species' geographic ranges are governed by current climates and how they will respond to rapid climatic change poses a major biological challenge. Geographic ranges are often spatially fragmented and composed of genetically differentiated populations that are locally adapted to different thermal regimes. Tradeoffs between different aspects of thermal performance, such as between tolerance to high temperature and tolerance to low temperature or between maximal performance and breadth of performance, suggest that the performance of a given population will be a subset of that of the species. Therefore, species-level projections of distribution might overestimate the species' ability to persist at any given location. However, current approaches to modeling distributions often do not consider variation among populations. Here, we estimated genetically-based differences in thermal performance curves for growth among 12 populations of the scarlet monkeyflower, Mimulus cardinalis, a perennial herb of western North America. We inferred the maximum relative growth rate (RGR(max)), temperature optimum (T(opt)), and temperature breadth (T(breadth)) for each population. We used these data to test for tradeoffs in thermal performance, generate mechanistic population-level projections of distribution under current and future climates, and examine how variation in aspects of thermal performance influences forecasts of range shifts. Populations differed significantly in RGR(max) and had variable, but overlapping, estimates of T(opt) and T(breadth). T(opt) declined with latitude and increased with temperature of origin, consistent with tradeoffs between performances at low temperatures versus those at high temperatures. Further, T(breadth) was negatively related to RGR(max), as expected for a specialist-generalist tradeoff. Parameters of the thermal performance curve influenced properties of projected distributions. For both current and future climates, T(opt) was negatively related to latitudinal position, while T(breadth) was positively related to projected range size. The magnitude and direction of range shifts also varied with T(opt) and T(breadth), but sometimes in unexpected ways. For example, the fraction of habitat remaining suitable increased with T(opt) but decreased with T(breadth). Northern limits of all populations were projected to shift north, but the magnitude of shift decreased with T(opt) and increased with T(breadth). Median latitude was projected to shift north for populations with high T(breadth) and low T(opt), but south for populations with low T(breadth) and high T(opt). Distributions inferred by integrating population-level projections did not differ from a species-level projection that ignored variation among populations. However, the species-level approach masked the potential array of divergent responses by populations that might lead to genotypic sorting within the species' range. Thermal performance tradeoffs among populations within the species' range had important, but sometimes counterintuitive, effects on projected responses to climatic change. © The Author 2011. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.
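
    A minimal sketch of estimating population-level thermal performance parameters, assuming a Gaussian-shaped performance curve fitted to growth-rate data; the curve form, the synthetic data, and the half-maximum definition of breadth are assumptions for illustration, not the authors' exact method.

    ```python
    # Fit a Gaussian thermal performance curve to relative-growth-rate data
    # and report RGRmax, Topt, and Tbreadth (range above 50% of RGRmax).
    import numpy as np
    from scipy.optimize import curve_fit

    def tpc(temp, rgr_max, t_opt, sigma):
        """Gaussian thermal performance curve."""
        return rgr_max * np.exp(-((temp - t_opt) ** 2) / (2.0 * sigma ** 2))

    # Hypothetical growth-chamber data: temperature (deg C) vs. relative growth rate.
    temps = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
    rgr = np.array([0.02, 0.06, 0.11, 0.14, 0.13, 0.08, 0.02])

    params, _ = curve_fit(tpc, temps, rgr, p0=[0.15, 25.0, 8.0])
    rgr_max, t_opt, sigma = params

    # Tbreadth as the full width at half of RGRmax (an assumed convention).
    t_breadth = 2.0 * sigma * np.sqrt(2.0 * np.log(2.0))
    print(f"RGRmax={rgr_max:.3f}  Topt={t_opt:.1f} C  Tbreadth={t_breadth:.1f} C")
    ```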

  19. Landfills as critical infrastructures: analysis of observational datasets after 12 years of non-invasive monitoring

    NASA Astrophysics Data System (ADS)

    Scozzari, Andrea; Raco, Brunella; Battaglini, Raffaele

    2016-04-01

    This work presents the results of more than ten years of observations, performed on a regular basis, at a municipal solid waste disposal site located in Italy. Observational data are generated by the combination of non-invasive techniques, involving the direct measurement of biogas release to the atmosphere and thermal infrared imaging. In fact, part of the generated biogas tends to escape from the landfill surface even when collecting systems are installed and properly working. Thus, methodologies for estimating the behaviour of a landfill system by means of direct and/or indirect measurement systems have been developed in the last decades. It is nowadays known that these infrastructures produce more than 20% of the total anthropogenic methane released to the atmosphere, justifying the need for systematic and efficient monitoring of such infrastructures. During the last 12 years, observational data regarding a solid waste disposal site located in Tuscany (Italy) have been collected on a regular basis. The collected datasets consist of direct measurements of gas flux with the accumulation chamber method, combined with the detection of thermal anomalies by infrared radiometry. This work discusses the evolution of the estimated performance of the landfill system, its trends, and the benefits and critical aspects of such a relatively long-term monitoring activity.
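
    A minimal sketch of the accumulation-chamber flux calculation: the flux is estimated from the initial rate of concentration increase inside the closed chamber, scaled by chamber volume and footprint area. The chamber geometry and CH4 readings below are illustrative assumptions, not the site's data.

    ```python
    # Accumulation-chamber CH4 flux from the initial slope of the
    # chamber-concentration time series (flux = dC/dt * V / A).
    import numpy as np

    def chamber_flux(times_s, conc_ppm, volume_m3, area_m2,
                     molar_volume_l=24.45, molar_mass_g=16.04):
        """Return a CH4 flux in g m^-2 d^-1 from an accumulation-chamber time series."""
        slope_ppm_s = np.polyfit(times_s, conc_ppm, 1)[0]             # ppm per second
        g_per_m3_per_ppm = molar_mass_g / molar_volume_l / 1000.0     # ideal gas, ~25 C
        flux_g_m2_s = slope_ppm_s * g_per_m3_per_ppm * volume_m3 / area_m2
        return flux_g_m2_s * 86400.0

    rng = np.random.default_rng(0)
    times = np.arange(0.0, 120.0, 10.0)                          # seconds after closure
    conc = 2.0 + 0.8 * times + rng.normal(0.0, 2.0, times.size)  # CH4 in chamber (ppm)
    flux = chamber_flux(times, conc, volume_m3=0.01, area_m2=0.03)
    print(f"estimated CH4 flux: {flux:.1f} g m^-2 d^-1")
    ```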

  20. Improved Estimation and Interpretation of Correlations in Neural Circuits

    PubMed Central

    Yatsenko, Dimitri; Josić, Krešimir; Ecker, Alexander S.; Froudarakis, Emmanouil; Cotton, R. James; Tolias, Andreas S.

    2015-01-01

    Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing a structure on the estimate. The amount of improvement depends on how closely the assumed structure represents dependencies in the data. Therefore, the selection of the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator informs about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150–350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix should be best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions. These interactions reflected the physical distances and orientation tuning properties of cells: The density of positive ‘excitatory’ interactions decreased rapidly with geometric distances and with differences in orientation preference whereas negative ‘inhibitory’ interactions were less selective. Because of its superior performance, this ‘sparse+latent’ estimator likely provides a more physiologically relevant representation of the functional connectivity in densely sampled recordings than the sample correlation matrix. PMID:25826696
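
    A minimal sketch of the estimator-selection step: candidate covariance estimators are compared by the Gaussian log-likelihood of held-out data. The scikit-learn estimators used here (sample covariance, Ledoit-Wolf shrinkage, graphical lasso) are stand-ins; the paper's 'sparse+latent' estimator is not part of scikit-learn, and the activity matrix is synthetic.

    ```python
    # Compare regularized covariance estimators by held-out log-likelihood.
    import numpy as np
    from sklearn.covariance import EmpiricalCovariance, LedoitWolf, GraphicalLassoCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic 'activity' matrix: 400 time bins x 60 cells with weak low-rank structure.
    latent = rng.normal(size=(400, 3)) @ rng.normal(size=(3, 60))
    activity = latent + rng.normal(size=(400, 60))

    train, test = train_test_split(activity, test_size=0.3, random_state=0)

    estimators = {
        "sample": EmpiricalCovariance(),
        "ledoit-wolf": LedoitWolf(),
        "graphical lasso": GraphicalLassoCV(),
    }
    for name, est in estimators.items():
        est.fit(train)
        # score() returns the mean Gaussian log-likelihood of the held-out data.
        print(f"{name:15s} held-out log-likelihood: {est.score(test):.2f}")
    ```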

  1. Crash protectiveness to occupant injury and vehicle damage: An investigation on major car brands.

    PubMed

    Huang, Helai; Li, Chunyang; Zeng, Qiang

    2016-01-01

    This study sets out to investigate vehicles' crash protectiveness with respect to occupant injury and vehicle damage, which can be deemed an extension of traditional crash worthiness. A Bayesian bivariate hierarchical ordered logistic (BVHOL) model is developed to estimate the occupant protectiveness (OP) and vehicle protectiveness (VP) of 23 major car brands in Florida, while considering vehicles' crash aggressivity and controlling for external factors. The proposed model not only retains the strengths of the existing hierarchical ordered logistic (HOL) model, i.e. specifying the ordered characteristics of crash outcomes and cross-crash heterogeneities, but also accounts for the correlation between the two crash responses, driver injury and vehicle damage. A total of 7335 two-vehicle-crash records with 14,670 cars involved in Florida are used for the investigation. From the estimation results, it is found that most of the luxury cars such as Cadillac, Volvo and Lexus possess excellent OP and VP, while some brands such as KIA and Saturn perform very badly in both aspects. The ranks of the estimated safety performance indices are also compared to their counterparts in the Huang et al. study [Huang, H., Hu, S., Abdel-Aty, M., 2014. Indexing crash worthiness and crash aggressivity by major car brands. Safety Science 62, 339-347]. The results show that the rank of the occupant protectiveness index (OPI) is relatively coherent with that of the crash worthiness index, whereas the ranks of the crash aggressivity index in the two studies differ more from each other. Meanwhile, a great discrepancy between the OPI rank and that of the vehicle protectiveness index is found. Moreover, the results of the control variable and hyper-parameter estimation, as well as the comparison to HOL models with separate or identical threshold errors, demonstrate the validity and advancement of the proposed model and the robustness of the estimated OP and VP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Neuro-genetic non-invasive temperature estimation: intensity and spatial prediction.

    PubMed

    Teixeira, César A; Ruano, M Graça; Ruano, António E; Pereira, Wagner C A

    2008-06-01

    The existence of proper non-invasive temperature estimators is an essential aspect when thermal therapy applications are envisaged. These estimators must be good predictors to enable temperature estimation in different operational situations, providing better control of the therapeutic instrumentation. In this work, radial basis function artificial neural networks were constructed to assess temperature evolution in an ultrasound-insonated medium. The employed models were radial basis function neural networks with external dynamics induced by their inputs. Both the most suitable set of model inputs and the number of neurons in the network were found using a multi-objective genetic algorithm. The neural models were validated in two situations: the operating ones, as used in the construction of the network; and 11 unseen situations. The new data addressed two new spatial locations and a new intensity level, assessing the intensity and space prediction capacity of the proposed model. Good performance was obtained during the validation process both in terms of the spatial points considered and whenever the new intensity level was within the range of applied intensities. A maximum absolute error of 0.5 degrees C +/- 10% (0.5 degrees C is the gold-standard threshold in hyperthermia/diathermia) was attained with models of low computational complexity. The results confirm that the proposed neuro-genetic approach enables foreseeing temperature propagation, in connection with intensity and space parameters, thus enabling the assessment of different operating situations with proper temperature resolution.

  3. Concepts for Conducting Warfare in Cyberspace

    DTIC Science & Technology

    2018-04-20

    REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden for this collection of information is estimated to average 1...completing and reviewing this collection of information . Send comments regarding this burden estimate or any other aspect of this collection of... information , including suggestions for reducing this burden to Department of Defense, Washington Headquarters Services, Directorate for Information

  4. Political Revolution And Social Communication Technologies

    DTIC Science & Technology

    2017-12-01

    this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data...sources, gathering and maintaining the data needed, and completing and reviewing the collection of information . Send comments regarding this burden...estimate or any other aspect of this collection of information , including suggestions for reducing this burden, to Washington headquarters Services

  5. Navy Multiband Terminal (NMT)

    DTIC Science & Technology

    2013-12-01

    instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send...0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing...comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to

  6. High Assurance Software

    DTIC Science & Technology

    2013-10-22

    CONGRESSIONAL ) HIGH ASSURANCE SOFTWARE WILLIAM MAHONEY UNIVERSITY OF NEBRASKA 10/22/2013 Final Report DISTRIBUTION A: Distribution approved for ...0704-0188 Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing...Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden to

  7. A Regression Study of Demand, Cost and Pricing Public Library Circulation Services.

    ERIC Educational Resources Information Center

    Stratton, Peter J.

    This paper examines three aspects of the public library's circulation service: (1) a demand function for the service is estimated; (2) a long-run unit circulation cost curve is developed; and (3) using the economist's notion of "efficiency," a general model for the pricing of the circulation service is presented. The estimated demand…

  8. Accuracy of plant specimen disease severity estimates: concepts, history, methods, ramifications and challenges for the future

    USDA-ARS?s Scientific Manuscript database

    Knowledge of the extent of the symptoms of a plant disease, generally referred to as severity, is key to both fundamental and applied aspects of plant pathology. Most commonly, severity is obtained visually and the accuracy of each estimate (closeness to the actual value) by individual raters is par...

  9. A Generalization of Pythagoras's Theorem and Application to Explanations of Variance Contributions in Linear Models. Research Report. ETS RR-14-18

    ERIC Educational Resources Information Center

    Carlson, James E.

    2014-01-01

    Many aspects of the geometry of linear statistical models and least squares estimation are well known. Discussions of the geometry may be found in many sources. Some aspects of the geometry relating to the partitioning of variation that can be explained using a little-known theorem of Pappus and have not been discussed previously are the topic of…

  10. Development of methodologies for the estimation of thermal properties associated with aerospace vehicles

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.

    1994-01-01

    Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach. The idea here is to first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of increasingly complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built-up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used to first estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.

  11. Fifth Annual Flight Mechanics/Estimation Theory Symposium

    NASA Technical Reports Server (NTRS)

    Teles, J. (Editor)

    1980-01-01

    Various aspects of astrodynamics are considered including orbit calculations and trajectory determination. Other topics dealing with remote sensing systems, satellite navigation, and attitude control are included.

  12. Assessing the relevance of ecotoxicological studies for regulatory decision making.

    PubMed

    Rudén, Christina; Adams, Julie; Ågerstrand, Marlene; Brock, Theo Cm; Poulsen, Veronique; Schlekat, Christian E; Wheeler, James R; Henry, Tala R

    2017-07-01

    Regulatory policies in many parts of the world recognize either the utility of or the mandate that all available studies be considered in environmental or ecological hazard and risk assessment (ERA) of chemicals, including studies from the peer-reviewed literature. Consequently, a vast array of different studies and data types need to be considered. The first steps in the evaluation process involve determining whether the study is relevant to the ERA and sufficiently reliable. Relevance evaluation is typically performed using existing guidance but involves application of "expert judgment" by risk assessors. In the present paper, we review published guidance for relevance evaluation and, on the basis of the practical experience within the group of authors, we identify additional aspects and further develop already proposed aspects that should be considered when conducting a relevance assessment for ecotoxicological studies. From a regulatory point of view, the overarching key aspect of relevance concerns the ability to directly or indirectly use the study in ERA with the purpose of addressing specific protection goals and ultimately regulatory decision making. Because ERA schemes are based on the appropriate linking of exposure and effect estimates, important features of ecotoxicological studies relate to exposure relevance and biological relevance. Exposure relevance addresses the representativeness of the test substance, environmental exposure media, and exposure regime. Biological relevance deals with the environmental significance of the test organism and the endpoints selected, the ecological realism of the test conditions simulated in the study, as well as a mechanistic link of treatment-related effects for endpoints to the protection goal identified in the ERA. In addition, uncertainties associated with relevance should be considered in the assessment. A systematic and transparent assessment of relevance is needed for regulatory decision making. The relevance aspects also need to be considered by scientists when designing, performing, and reporting ecotoxicological studies to facilitate their use in ERA. Integr Environ Assess Manag 2017;13:652-663. © 2016 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC). © 2016 The Authors. Integrated Environmental Assessment and Management Published by Wiley Periodicals, Inc. on behalf of Society of Environmental Toxicology & Chemistry (SETAC).

  13. Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Hao, Huiyan; Jafari, Roozbeh; Kehtarnavaz, Nasser

    2017-05-01

    This paper presents an extension to our previously developed fusion framework [10] involving a depth camera and an inertial sensor in order to improve its view invariance aspect for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is considered in order to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision making probability. The experimental results applied to a multi-view human action dataset show that this weighted extension improves the recognition performance by about 5% over equally weighted fusion deployed in our previous fusion framework.
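
    A minimal sketch of decision-level weighted fusion: class-probability estimates from a depth-based classifier and an inertial-based classifier are blended with a weight, and the fused class is the argmax. The probabilities and the weight below are illustrative assumptions, not values from the paper.

    ```python
    # Weighted fusion of two classifiers' probability outputs.
    import numpy as np

    p_depth = np.array([0.10, 0.70, 0.20])      # P(class | depth features), assumed
    p_inertial = np.array([0.25, 0.45, 0.30])   # P(class | inertial features), assumed
    w_depth = 0.6                                # fusion weight (assumed)

    p_fused = w_depth * p_depth + (1.0 - w_depth) * p_inertial
    print("fused probabilities:", np.round(p_fused, 2))
    print("predicted action class:", int(np.argmax(p_fused)))
    ```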

  14. Reflectivities of uniform and broken marine stratiform clouds

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.

    1990-01-01

    Plane-parallel radiative transfer models are often used to estimate the effects of clouds on the earth's energy budget and to retrieve cloud properties from satellite observations. An attempt is made to assess the performance of such models by using AVHRR data collected during the FIRE MARINE Stratus IFO to determine the reflectivities and, in particular, the anisotropy of the reflected radiances for the clouds observed during the field experiment. The intent is to determine the anisotropy for conditions that are overcast and to compare this anisotropy with that produced by the same cloud when broken. The observations are used to quantify aspects of the differences between reflection by plane-parallel clouds and non-planar clouds expected on the basis of theoretical studies.

  15. Examining corporate reputation judgments with generalizability theory.

    PubMed

    Highhouse, Scott; Broadfoot, Alison; Yugo, Jennifer E; Devendorf, Shelba A

    2009-05-01

    The researchers used generalizability theory to examine whether reputation judgments about corporations function in a manner consistent with contemporary theory in the corporate-reputation literature. University professors (n = 86) of finance, marketing, and human resources management made repeated judgments about the general reputations of highly visible American companies. Minimal variability in the judgments is explained by items, time, persons, and field of specialization. Moreover, experts from the different specializations reveal considerable agreement in how they weigh different aspects of corporate performance in arriving at their global reputation judgments. The results generally support the theory of the reputation construct and suggest that stable estimates of global reputation can be achieved with a small number of items and experts. (c) 2009 APA, all rights reserved.

  16. Extreme risk assessment based on normalized historic loss data

    NASA Astrophysics Data System (ADS)

    Eichner, Jan

    2017-04-01

    Natural hazard risk assessment and risk management focus on the expected loss magnitudes of rare and extreme events. Such large-scale loss events typically comprise all aspects of compound events and accumulate losses from multiple sectors (including knock-on effects). Utilizing Munich Re's NatCatSERVICE direct economic loss data, we briefly recap a novel methodology of peril-specific loss data normalization, which improves the stationarity properties of highly non-stationary historic loss data (due to socio-economic growth of assets prone to destructive forces), and perform extreme value analysis (peaks-over-threshold method) to derive return level estimates, e.g. for 100-yr loss event scenarios, for various types of perils, globally or per continent, and discuss uncertainty in the results.
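
    A minimal sketch of a peaks-over-threshold analysis: a generalized Pareto distribution is fitted to exceedances of normalized losses over a threshold, and a 100-year return level is derived. The loss series, threshold choice, and event counts are synthetic assumptions, not NatCatSERVICE data.

    ```python
    # Peaks-over-threshold return-level estimate with a generalized Pareto fit.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(1)
    years = 40
    losses = rng.pareto(2.0, size=years * 25) * 50.0     # normalized losses (million USD)

    threshold = np.quantile(losses, 0.95)
    excesses = losses[losses > threshold] - threshold
    exceed_rate = excesses.size / years                  # exceedances per year

    # Fit the GPD to the excesses (location fixed at 0).
    shape, _, scale = genpareto.fit(excesses, floc=0.0)

    # T-year return level: threshold + GPD quantile at probability 1 - 1/(T * rate).
    T = 100.0
    rl = threshold + genpareto.ppf(1.0 - 1.0 / (T * exceed_rate), shape, loc=0.0, scale=scale)
    print(f"estimated {T:.0f}-yr loss level: {rl:.0f} million USD")
    ```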

  17. Overactive Bladder Syndrome: Evaluation and Management.

    PubMed

    Leron, Elad; Weintraub, Adi Y; Mastrolia, Salvatore A; Schwarzman, Polina

    2018-03-01

    Overactive bladder (OAB) syndrome is a chronic medical condition which has a major influence on the quality of life of a significant portion of the population. OAB affects the performance of daily activities and has an estimated prevalence of 16.5%. Many sufferers do not seek medical help. Moreover, many family physicians and even gynecologists are not familiar with this issue. Patients usually suffer from OAB at an advanced age. Nocturia is reported as the most bothersome symptom in the elderly population. The aim of our review was to discuss all aspects of this challenging disorder and suggest tools for assessment and management strategies. Practitioners can easily overlook urinary complaints if patients are not directly queried about them. We would like to encourage practitioners to give more attention to this issue.

  18. Interagency Workshop on Lighter than Air Vehicles, Monterey, Calif., September 9-13, 1974, Proceedings

    NASA Technical Reports Server (NTRS)

    Vittek, J. F., Jr.

    1975-01-01

    Papers are presented which review modern lighter-than-air (LTA) airship design concepts and LTA structures and materials technology, as well as economic and market analyses for assessment of the viability of future LTA development programs. Potential applications of LTA vehicles are examined. Some of the topics covered include preliminary estimates of operating costs for LTA transports, an economic comparison of three heavy lift airborne systems, boundary layer control for airships, computer aided flexible envelope designs, the state of the art of metalclad airships, aspects of hybrid-Zeppelins, the LTA vehicle as a total cargo system, unmanned powered balloons, and a practical concept for powered or tethered weight-lifting LTA vehicles. Individual items are announced in this issue.

  19. Improving performance with knowledge management

    NASA Astrophysics Data System (ADS)

    Kim, Sangchul

    2018-06-01

    People and organizations are unable to easily locate their experience and knowledge, so meaningful data is usually fragmented, unstructured, not up-to-date and largely incomplete. Poor knowledge management (KM) leaves a company vulnerable to its knowledge base - or intellectual capital - walking out of the door each year, at a rate estimated at a minimum of 10%. Knowledge management (KM) can be defined as an emerging set of organizational design and operational principles, processes, organizational structures, applications and technologies that helps knowledge workers dramatically leverage their creativity and ability to deliver business value and ultimately to reap a competitive advantage. This paper proposes various methods and software, starting from an understanding of the enterprise aspect, and offers inspiration to those who want to use KM.

  20. On a variational approach to some parameter estimation problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.

    1985-01-01

    Examples in which a variational setting can provide a convenient framework for convergence and stability arguments in parameter estimation problems are considered; these examples include 1-D seismic, large flexible structures, bioturbation, and nonlinear population dispersal. Convergence and stability arguments via a variational approach to least-squares formulations of parameter estimation problems for partial differential equations are one aspect of the problem considered.

  1. Principles of estimation of Radiative danger

    NASA Astrophysics Data System (ADS)

    Korogodin, V. I.

    1990-08-01

    The main principles of the estimation of radiative danger are discussed. Two main particularities of the danger are pointed out: the negative consequences of small doses, which do not lead to radiation sickness but do lead to dysfunctions of the blood-forming organs and the small intestine; and the absolute estimation of biological anomalies, which was put forward by A.D. Sakharov (1921-1989). The ethical aspects of the use of nuclear weapons for the fate of human civilization were also pointed out by A.D. Sakharov.

  2. 3D change detection in staggered voxels model for robotic sensing and navigation

    NASA Astrophysics Data System (ADS)

    Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.

    2016-05-01

    3D scene change detection is a challenging problem in robotic sensing and navigation, with several unpredictable aspects involved in performing it. A change detection method which can support various applications in varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information. Change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while keeping the edge details in the depth image. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preliminary stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model which defines each voxel as surface or free space. We then create a color model in which each voxel containing points is assigned the mean color value of those points. Preliminary change detection is obtained by an XOR comparison of the point voxel models. Next, the eight neighbors of each center voxel are examined: if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram of location and hue-channel color is estimated. The experimental evaluations performed to assess the capability of our algorithm show promising results for change detection, indicating all the changed objects with a very limited false alarm rate.
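
    A minimal sketch of the voxel-level change test: two boolean occupancy grids (reference vs. current scan) are compared with XOR, and candidate voxels whose neighbourhood is mixed are flagged for further analysis. The grids are synthetic, and a 3x3x3 neighbourhood is used here as a stand-in for the paper's eight-neighbour and histogram checks.

    ```python
    # XOR-based preliminary change detection on voxel occupancy grids.
    import numpy as np

    def detect_change_candidates(ref_occ, cur_occ):
        """Return a boolean grid of changed voxels whose neighbourhood is mixed."""
        changed = np.logical_xor(ref_occ, cur_occ)          # preliminary change mask
        flagged = np.zeros_like(changed)
        for x, y, z in np.argwhere(changed):
            # 3x3x3 neighbourhood around the candidate voxel (clipped at borders).
            nb = changed[max(x-1, 0):x+2, max(y-1, 0):y+2, max(z-1, 0):z+2]
            # Keep only voxels whose neighbourhood is neither all changed nor
            # all unchanged; in the paper these would go on to the histogram test.
            if 0 < nb.sum() < nb.size:
                flagged[x, y, z] = True
        return flagged

    rng = np.random.default_rng(2)
    ref = rng.random((32, 32, 32)) > 0.7
    cur = ref.copy()
    cur[10:14, 10:14, 10:14] = ~cur[10:14, 10:14, 10:14]    # simulate an inserted object
    print("flagged voxels:", int(detect_change_candidates(ref, cur).sum()))
    ```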

  3. Advanced Small Modular Reactor Economics Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Thomas J.

    2014-10-01

    This report describes the data collection work performed for an advanced small modular reactor (AdvSMR) economics analysis activity at the Oak Ridge National Laboratory. The methodology development and analytical results are described in separate, stand-alone documents as listed in the references. The economics analysis effort for the AdvSMR program combines the technical and fuel cycle aspects of advanced (non-light water reactor [LWR]) reactors with the market and production aspects of SMRs. This requires the collection, analysis, and synthesis of multiple unrelated and potentially high-uncertainty data sets from a wide range of data sources. Further, the nature of both economic and nuclear technology analysis requires at least a minor attempt at prediction and prognostication, and the far-term horizon for deployment of advanced nuclear systems introduces more uncertainty. Energy market uncertainty, especially the electricity market, is the result of the integration of commodity prices, demand fluctuation, and generation competition, as easily seen in deregulated markets. Depending on current or projected values for any of these factors, the economic attractiveness of any power plant construction project can change yearly or quarterly. For long-lead construction projects such as nuclear power plants, this uncertainty generates an implied and inherent risk for potential nuclear power plant owners and operators. The uncertainty in nuclear reactor and fuel cycle costs is in some respects better understood and quantified than the energy market uncertainty. The LWR-based fuel cycle has a long commercial history to use as its basis for cost estimation, and the current activities in LWR construction provide a reliable baseline for estimates for similar efforts. However, for advanced systems, the estimates and their associated uncertainties are based on forward-looking assumptions for performance after the system has been built and has achieved commercial operation. Advanced fuel materials and fabrication costs have large uncertainties based on complexities of operation, such as contact-handled fuel fabrication versus remote handling, or commodity availability. Thus, this analytical work makes a good faith effort to quantify uncertainties and provide qualifiers, caveats, and explanations for the sources of these uncertainties. The overall result is that this work assembles the necessary information and establishes the foundation for future analyses using more precise data as nuclear technology advances.

  4. Tehuantepec and Morelos-Puebla earthquakes lived and reported by the Servicio Sismológico Nacional, Mexico

    NASA Astrophysics Data System (ADS)

    Perez-Campos, X.

    2017-12-01

    In September 2017, Mexico experienced two significant inslab earthquakes only 11 days apart. Both caused severe damage in the epicentral states: Chiapas, Oaxaca, Puebla, Morelos, and Mexico City. In all senses, they tested the capabilities of the Servicio Sismológico Nacional (SSN, Mexican National Seismological Service), from the acquisition, processing, and reporting systems (both automatic and manual) to the social network and media response. In this work, we present the various aspects of the performance of the SSN and the results obtained in real time and in the days after. The first earthquake occurred on 8 September within the Gulf of Tehuantepec. The SSN estimated its magnitude as Mww8.2, from W-phase inversion of local and regional data. Forty days later, it had produced more than 7750 aftershocks with magnitudes larger than 2.5, keeping inhabitants of the epicentral area restless. A preliminary hypo-DD relocation of the aftershocks shows two parallel SE-NW alignments. The mainshock seemed to have triggered seismicity in central Mexico, an effect previously observed by Singh et al. (1998) for coastal earthquakes. Barely 11 days had passed since this major quake; the SSN was in the middle of an intense aftershock sequence and conducting several outreach activities for the anniversary of the 19 September 1985 (Mw8.0) earthquake when the second quake hit. The SSN located its epicenter at the border of the states of Morelos and Puebla and estimated its magnitude as Mww7.1. In this case, the SSN identified only eight aftershocks, similar to the behavior of previous inslab earthquakes in the region. Important aspects that these events have highlighted are the media and social network responses. Immediately after the first quake, the SSN faced misinformation due to viral videos and social media messages predicting massive earthquakes and relating them to a solar storm that took place days before. Outreach to the public and the media became essential. Despite the excellent performance of the SSN during this seismic crisis, there are challenges and areas of opportunity that should be addressed shortly; for instance, automation of the detection, location, and magnitude estimation of small earthquakes, and a new approach to interacting with social media.

  5. Obesity, arterial function and arterial structure – a systematic review and meta‐analysis

    PubMed Central

    Ne, J. Y. A.; Cai, T. Y.; Celermajer, D. S.; Caterson, I. D.; Gill, T.; Lee, C. M. Y.

    2017-01-01

    Summary Objective Obesity is an established risk factor for cardiovascular disease. The mechanisms by which obesity affects cardiovascular risk have not been fully elucidated. This paper reports a comprehensive systematic review and meta‐analysis on obesity and two key aspects of vascular health using gold‐standard non‐invasive measures – arterial endothelial function (brachial flow‐mediated dilatation) and subclinical atherosclerosis (carotid intima‐media thickness). Methods Electronic searches for ‘Obesity and flow‐mediated dilatation’ and ‘Obesity and intima‐media thickness’ were performed using Ovid Medline and Embase databases. A meta‐analysis was undertaken for brachial flow‐mediated dilatation and carotid intima‐media thickness to obtain pooled estimates for adults with obesity and those with healthy weight. Results Of the 5,810 articles retrieved, 19 studies on flow‐mediated dilatation and 19 studies on intima‐media thickness were included. Meta‐analysis demonstrated that obesity was associated with lower flow‐mediated dilatation (−1.92 % [95% CI −2.92, −0.92], P = 0.0002) and greater carotid intima‐media thickness (0.07 mm [95% CI 0.05, 0.08], P < 0.0001). Conclusions Obesity is associated with poorer arterial endothelial function and increased subclinical atherosclerosis, consistent with these aspects of vascular health at least partially contributing to the increased risk of cardiovascular events in adults with obesity. These estimated effect sizes will enable vascular health benefits in response to weight loss treatment to be put in greater perspective, both in the research setting and potentially also clinical practice. PMID:28702212
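
    A minimal sketch of the pooling step in such a meta-analysis: study-level mean differences (e.g., obese minus healthy-weight flow-mediated dilatation, in %) are combined with inverse-variance weights. The study estimates and standard errors below are hypothetical; a random-effects model, as typically used, would add a between-study variance term.

    ```python
    # Fixed-effect inverse-variance pooling of study-level mean differences.
    import numpy as np

    md = np.array([-2.5, -1.1, -3.0, -0.8, -2.2])   # mean differences (%), hypothetical
    se = np.array([0.9, 0.6, 1.2, 0.5, 0.8])        # standard errors, hypothetical

    w = 1.0 / se**2                                  # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    print(f"pooled MD = {pooled:.2f}%  (95% CI {ci[0]:.2f}, {ci[1]:.2f})")
    ```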

  6. Target Strength of Southern Resident Killer Whales (Orcinus orca): Measurement and Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jinshan; Deng, Zhiqun; Carlson, Thomas J.

    2012-04-04

    A major criterion for tidal power licensing in Washington’s Puget Sound is the management of the risk of injury to killer whales due to collision with moving turbine blades. An active monitoring system is being proposed for killer whale detection, tracking, and alerting that links to and triggers temporary turbine shutdown when there is risk of collision. Target strength (TS) modeling of the killer whale is critical to the design and application of any active monitoring system. A 1996 study performed a high-resolution measurement of acoustic reflectivity as a function of frequency of a female bottlenose dolphin (2.2 m length) at broadside aspect and TS as a function of incident angle at 67 kHz frequency. Assuming that killer whales share a similar morphological structure with the bottlenose dolphin, we extrapolated the TS of an adult killer whale 7.5 m in length at 67 kHz frequency with -8 dB at broadside aspect and -28 dB at tail aspect. The backscattering data from three Southern Resident killer whales were analyzed to obtain the TS measurement. These data were collected at Lime Kiln State Park using a split-beam system deployed from a boat. The TS of the killer whale at higher frequency (200 kHz) was estimated based on a three-layer model for plane wave reflection from the lung of the whale. The TS data of killer whales were in good agreement with our model. In this paper, we also discuss and explain possible causes for measurement estimation error.
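
    A minimal sketch of the core ingredient of such a high-frequency model: the plane-wave reflection coefficient of a single layer (tissue / aerated lung / tissue). The layer properties, thickness, and frequency are rough assumptions, and the geometric term needed to turn |R| into a target strength is omitted.

    ```python
    # Plane-wave reflection coefficient of a three-layer stack.
    import numpy as np

    def layer_reflection(z1, z2, z3, c2, thickness, freq):
        """Complex reflection coefficient of a single layer between two half-spaces."""
        r12 = (z2 - z1) / (z2 + z1)
        r23 = (z3 - z2) / (z3 + z2)
        phase = np.exp(2j * 2 * np.pi * freq / c2 * thickness)   # two-way phase in layer
        return (r12 + r23 * phase) / (1 + r12 * r23 * phase)

    z_tissue = 1050.0 * 1540.0     # rho*c of soft tissue (assumed)
    z_lung = 400.0 * 650.0         # rho*c of aerated lung (rough assumption)
    R = layer_reflection(z_tissue, z_lung, z_tissue, c2=650.0,
                         thickness=0.10, freq=200e3)
    print(f"|R| = {abs(R):.2f}  ({20 * np.log10(abs(R)):.1f} dB)")
    ```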

  7. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating the 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow-H10x8.5) and (2) implementation of the system on an embedded platform. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chessboard calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
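
    A minimal sketch of the zoom-dependent calibration step: intrinsics are calibrated at a few discrete zoom positions and interpolated for an arbitrary zoom setting. The zoom positions and focal lengths below are invented, and only two intrinsic parameters are interpolated for brevity.

    ```python
    # Linear interpolation of camera intrinsics between calibrated zoom positions.
    import numpy as np

    zoom_steps = np.array([0, 25, 50, 75, 100])               # calibrated zoom positions
    fx = np.array([850.0, 1100.0, 1500.0, 2100.0, 3000.0])    # focal length (px) per step
    fy = fx * 1.002                                           # assumed near-square pixels
    cx, cy = 320.0, 240.0                                     # principal point (assumed fixed)

    def intrinsics_at(zoom):
        """Interpolated 3x3 camera matrix K for an arbitrary zoom position."""
        f_x = np.interp(zoom, zoom_steps, fx)
        f_y = np.interp(zoom, zoom_steps, fy)
        return np.array([[f_x, 0.0, cx],
                         [0.0, f_y, cy],
                         [0.0, 0.0, 1.0]])

    print(intrinsics_at(62.5))
    ```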

  8. Importance of Preserving Cross-correlation in developing Statistically Downscaled Climate Forcings and in estimating Land-surface Fluxes and States

    NASA Astrophysics Data System (ADS)

    Das Bhowmik, R.; Arumugam, S.

    2015-12-01

    Multivariate downscaling techniques have exhibited superiority over univariate regression schemes in terms of preserving cross-correlations between multiple variables - precipitation and temperature - from GCMs. This study focuses on two aspects: (a) developing an analytical solution for estimating the biases in cross-correlations from univariate downscaling approaches and (b) quantifying the uncertainty in land-surface states and fluxes due to biases in the cross-correlations of downscaled climate forcings. Both aspects are evaluated using climate forcings available from historical climate simulations and CMIP5 hindcasts over the entire US. The analytical solution relates the univariate regression parameters, the coefficient of determination of the regression, and the covariance ratio between GCM and downscaled values. The analytical solutions are compared with the downscaled univariate forcings by choosing the desired p-value (Type-1 error) for preserving the observed cross-correlation. To quantify the impacts of cross-correlation biases on estimated streamflow and groundwater, we corrupt the downscaled climate forcings with different cross-correlation structures.
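
    A minimal Monte Carlo sketch of the bias in question: when precipitation and temperature are each regressed on their own GCM counterpart, the cross-correlation of the downscaled pair is attenuated roughly by the product of the two regression correlations. All correlations below are invented for illustration, not the study's analytical result.

    ```python
    # Demonstrate cross-correlation attenuation under independent univariate regressions.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 20000
    z = rng.normal(size=(n, 4))

    p_obs = z[:, 0]
    t_obs = 0.6 * z[:, 0] + np.sqrt(1 - 0.6**2) * z[:, 1]      # observed cross-corr ~0.6
    x_p = 0.8 * p_obs + 0.6 * z[:, 2]                           # GCM predictor of P (r ~0.8)
    x_t = 0.8 * t_obs + 0.6 * z[:, 3]                           # GCM predictor of T (r ~0.8)

    # Independent univariate regressions (the usual downscaling step).
    p_hat = np.polyval(np.polyfit(x_p, p_obs, 1), x_p)
    t_hat = np.polyval(np.polyfit(x_t, t_obs, 1), x_t)

    print("observed cross-corr  :", round(np.corrcoef(p_obs, t_obs)[0, 1], 2))
    print("downscaled cross-corr:", round(np.corrcoef(p_hat, t_hat)[0, 1], 2))
    print("product-of-r prediction:", round(0.8 * 0.8 * 0.6, 2))
    ```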

  9. Assessing REM Sleep in Mice Using Video Data

    PubMed Central

    McShane, Blakeley B.; Galante, Raymond J.; Biber, Michael; Jensen, Shane T.; Wyner, Abraham J.; Pack, Allan I.

    2012-01-01

    Study Objectives: Assessment of sleep and its substages in mice currently requires implantation of chronic electrodes for measurement of electroencephalogram (EEG) and electromyogram (EMG). This is not ideal for high-throughput screening. To address this deficiency, we present a novel method based on digital video analysis. This methodology extends previous approaches that estimate sleep and wakefulness without EEG/EMG in order to now discriminate rapid eye movement (REM) from non-REM (NREM) sleep. Design: Studies were conducted in 8 male C57BL/6J mice. EEG/EMG were recorded for 24 hours and manually scored in 10-second epochs. Mouse behavior was continuously recorded by digital video at 10 frames/second. Six variables were extracted from the video for each 10-second epoch (i.e., intraepoch mean of velocity, aspect ratio, and area of the mouse and intraepoch standard deviation of the same variables) and used as inputs for our model. Measurements and Results: We focus on estimating features of REM (i.e., time spent in REM, number of bouts, and median bout length) as well as time spent in NREM and WAKE. We also consider the model's epoch-by-epoch scoring performance relative to several alternative approaches. Our model provides good estimates of these features across the day both when averaged across mice and in individual mice, but the epoch-by-epoch agreement is not as good. Conclusions: There are subtle changes in the area and shape (i.e., aspect ratio) of the mouse as it transitions from NREM to REM, likely due to the atonia of REM, thus allowing our methodology to discriminate these two states. Although REM is relatively rare, our methodology can detect it and assess the amount of REM sleep. Citation: McShane BB; Galante RJ; Biber M; Jensen ST; Wyner AJ; Pack AI. Assessing REM sleep in mice using video data. SLEEP 2012;35(3):433-442. PMID:22379250
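
    A minimal sketch of building the six per-epoch video features described above: for each 10-second epoch (100 frames at 10 frames/second), the mean and standard deviation of velocity, aspect ratio, and silhouette area are computed from frame-wise tracking output. The tracking values here are simulated, not mouse data.

    ```python
    # Aggregate frame-wise tracking output into six features per 10-second epoch.
    import numpy as np

    rng = np.random.default_rng(5)
    fps, epoch_s = 10, 10
    frames = fps * epoch_s * 360                      # one hour of simulated video
    x = np.cumsum(rng.normal(0, 0.5, frames))         # centroid position (px)
    y = np.cumsum(rng.normal(0, 0.5, frames))
    aspect = rng.normal(2.0, 0.2, frames)             # bounding-box aspect ratio
    area = rng.normal(900.0, 50.0, frames)            # silhouette area (px^2)

    vel = np.hypot(np.diff(x, prepend=x[0]), np.diff(y, prepend=y[0])) * fps

    per_frame = np.column_stack([vel, aspect, area])
    epochs = per_frame.reshape(-1, fps * epoch_s, 3)  # (n_epochs, frames per epoch, 3)
    features = np.concatenate([epochs.mean(axis=1), epochs.std(axis=1)], axis=1)
    print("feature matrix shape:", features.shape)    # six features per 10-s epoch
    ```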

  10. A statistically compiled test battery for feasible evaluation of knee function after rupture of the Anterior Cruciate Ligament - derived from long-term follow-up data.

    PubMed

    Schelin, Lina; Tengman, Eva; Ryden, Patrik; Häger, Charlotte

    2017-01-01

    Clinical test batteries for evaluation of knee function after injury to the Anterior Cruciate Ligament (ACL) should be valid and feasible, while reliably capturing the outcome of rehabilitation. There is currently a lack of consensus as to which of the many available assessment tools for knee function that should be included. The present aim was to use a statistical approach to investigate the contribution of frequently used tests to avoid redundancy, and filter them down to a proposed comprehensive and yet feasible test battery for long-term evaluation after ACL injury. In total 48 outcome variables related to knee function, all potentially relevant for a long-term follow-up, were included from a cross-sectional study where 70 ACL-injured (17-28 years post injury) individuals were compared to 33 controls. Cluster analysis and logistic regression were used to group variables and identify an optimal test battery, from which a summarized estimator of knee function representing various functional aspects was derived. As expected, several variables were strongly correlated, and the variables also fell into logical clusters with higher within-correlation (max ρ = 0.61) than between clusters (max ρ = 0.19). An extracted test battery with just four variables assessing one-leg balance, isokinetic knee extension strength and hop performance (one-leg hop, side hop) were mathematically combined to an estimator of knee function, which acceptably classified ACL-injured individuals and controls. This estimator, derived from objective measures, correlated significantly with self-reported function, e.g. Lysholm score (ρ = 0.66; p<0.001). The proposed test battery, based on a solid statistical approach, includes assessments which are all clinically feasible, while also covering complementary aspects of knee function. Similar test batteries could be determined for earlier phases of ACL rehabilitation or to enable longitudinal monitoring. Such developments, established on a well-grounded consensus of measurements, would facilitate comparisons of studies and enable evidence-based rehabilitation.
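
    A minimal sketch of combining the four retained test-battery variables (one-leg balance, isokinetic knee-extension strength, one-leg hop, side hop) into a single logistic-regression estimator of knee function. The data below are simulated, not the study's measurements, and the variable scales are invented.

    ```python
    # Logistic-regression combination of four knee-function measures.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    n_acl, n_ctrl = 70, 33
    # Simulated (balance, strength, one-leg hop, side hop); ACL group slightly worse.
    acl = rng.normal([45, 1.8, 110, 28], [12, 0.5, 25, 9], size=(n_acl, 4))
    ctrl = rng.normal([55, 2.2, 135, 38], [12, 0.5, 25, 9], size=(n_ctrl, 4))
    X = np.vstack([acl, ctrl])
    y = np.array([1] * n_acl + [0] * n_ctrl)          # 1 = ACL-injured

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, y)

    # The fitted probability can serve as a summarized knee-function estimator
    # for a new test session.
    new_session = np.array([[50, 2.0, 120, 32]])
    print("estimated P(impaired knee function) =",
          round(float(model.predict_proba(new_session)[0, 1]), 2))
    ```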

  11. Optimization of Grade 100 High-Performance Steel Butt Welds [Tech Brief

    DOT National Transportation Integrated Search

    2013-11-01

    In the development of the high performance steels (HPS) grades, three advisory groups were formed to oversee and guide the development of the steels in terms of their design, welding, and corrosion aspects. One of the aspects considered by the Weldin...

  12. Novel Formulation of Adaptive MPC as EKF Using ANN Model: Multiproduct Semibatch Polymerization Reactor Case Study.

    PubMed

    Kamesh, Reddi; Rani, Kalipatnapu Yamuna

    2017-12-01

    In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed incorporating the extended Kalman filter (EKF) control concept using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules focusing on online parameter estimation based on past measurements and control estimation over control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at supervisory level as a cascade control configuration along with proportional integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach is formulated incorporating all aspects of MPC including move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, as well as constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.

  13. Planning and task management in Parkinson's disease: differential emphasis in dual-task performance.

    PubMed

    Bialystok, Ellen; Craik, Fergus I M; Stefurak, Taresa

    2008-03-01

    Seventeen patients diagnosed with Parkinson's disease completed a complex computer-based task that involved planning and management while also performing an attention-demanding secondary task. The tasks were performed concurrently, but it was necessary to switch from one to the other. Performance was compared to a group of healthy age-matched control participants and a group of young participants. Parkinson's patients performed better than the age-matched controls on almost all measures and as well as the young controls in many cases. However, the Parkinson's patients achieved this by paying relatively less attention to the secondary task and focusing attention more on the primary task. Thus, Parkinson's patients can apparently improve their performance on some aspects of a multidimensional task by simplifying task demands. This benefit may occur as a consequence of their inflexible exaggerated attention to some aspects of a complex task to the relative neglect of other aspects.

  14. Performance of resonant radar target identification algorithms using intra-class weighting functions

    NASA Astrophysics Data System (ADS)

    Mustafa, A.

    The use of calibrated resonant-region radar cross section (RCS) measurements of targets for the classification of large aircraft is discussed. Errors in the RCS estimate of full scale aircraft flying over an ocean, introduced by the ionospheric variability and the sea conditions were studied. The Weighted Target Representative (WTR) classification algorithm was developed, implemented, tested and compared with the nearest neighbor (NN) algorithm. The WTR-algorithm has a low sensitivity to the uncertainty in the aspect angle of the unknown target returns. In addition, this algorithm was based on the development of a new catalog of representative data which reduces the storage requirements and increases the computational efficiency of the classification system compared to the NN-algorithm. Experiments were designed to study and evaluate the characteristics of the WTR- and the NN-algorithms, investigate the classifiability of targets and study the relative behavior of the number of misclassifications as a function of the target backscatter features. The classification results and statistics were shown in the form of performance curves, performance tables and confusion tables.

  15. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  16. A Guide for Estimation of Aeroacoustic Loads on Flight Vehicle Surfaces

    DTIC Science & Technology

    1977-02-01

    Nozzle aspect ratio correction of one-third octave band sound pressure levels of USB noise ... Impingement angle correction of one-third octave...breech weapons ... LIST OF FIGURES (Cont.) Figure 61. Rectangular cavity ...and a nozzle aspect ratio of 4.0, and without a deflector. Obtain the corrected one-third octave band level SPL from the baseline level, from

  17. Estimating taxonomic diversity, extinction rates, and speciation rates from fossil data using capture-recapture models

    USGS Publications Warehouse

    Nichols, J.D.; Pollock, K.H.

    1983-01-01

    Capture-recapture models can be used to estimate parameters of interest from paleobiological data when encounter probabilities are unknown and variable over time. These models also permit estimation of sampling variances, and goodness-of-fit tests are available for assessing the fit of data to most models. The authors describe capture-recapture models which should be useful in paleobiological analyses and discuss the assumptions which underlie them. They illustrate these models with examples and discuss aspects of study design.
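
    A minimal sketch of the simplest two-sample capture-recapture estimator (Chapman's version of Lincoln-Petersen), treating taxa recorded in two successive sampling intervals as 'captures'. The counts are hypothetical, and the models discussed above (Jolly-Seber type) are considerably richer.

    ```python
    # Chapman estimator of total richness from two sampling occasions.
    import math

    n1 = 120   # taxa recorded in interval 1
    n2 = 95    # taxa recorded in interval 2
    m2 = 60    # taxa recorded in both intervals

    N_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var_N = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
             / ((m2 + 1) ** 2 * (m2 + 2)))
    se_N = math.sqrt(var_N)
    print(f"estimated richness: {N_hat:.0f} +/- {1.96 * se_N:.0f} (95% CI)")
    ```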

  18. Determining Source Attenuation History to Support Closure by Natural Attenuation

    DTIC Science & Technology

    2013-11-01

  19. Expanding UCR’s Interdisciplinary Materials Science and Engineering Faculty

    DTIC Science & Technology

    2018-02-27

  20. Microstructure Analyses of Detonation Diamond Nanoparticles

    DTIC Science & Technology

    2012-05-01

  1. Alaska Native Parkinson’s Disease Registry

    DTIC Science & Technology

    2008-11-01

  2. A Scalable, Reconfigurable, and Dependable Time-Triggered Architecture

    DTIC Science & Technology

    2003-07-01

  3. Communication Breakdown: DHS Operations During a Cyber Attack

    DTIC Science & Technology

    2010-12-01

    Keywords: Presidential Directive, Malware, National Exercise, Quadrennial Homeland Security Review, Trusted Internet Connections, Zero-Day Exploits.

  4. Evaluation of Microcomposite Coatings for Chrome Replacement

    DTIC Science & Technology

    2009-09-02

  5. Adaptive Campaigning Applied: Australian Army Operations in Iraq and Afghanistan

    DTIC Science & Technology

    2011-05-01

  6. The Augmented Reality Sandtable (ARES) Research Strategy

    DTIC Science & Technology

    2018-02-01

  7. Modeling Human Behavior with Fuzzy and Soft Computing Methods

    DTIC Science & Technology

    2017-12-13

  8. 2016 Quantum Science Gordon Research Conference

    DTIC Science & Technology

    2018-01-10

  9. Optimizing Human Input in Social Network Analysis

    DTIC Science & Technology

    2018-01-23

  10. Two Invariants of Human-Swarm Interaction

    DTIC Science & Technology

    2018-01-16

  11. Exceptional Information: Recognizing Threats and Exploiting Opportunities

    DTIC Science & Technology

    2017-06-09

    A thesis presented to the Faculty of the U.S. Army...

  12. Coupling Considerations in Assembly Language. Revision 1

    DTIC Science & Technology

    2018-02-13

  13. Inclusion of Disaster Resiliency in City/Neighborhood Comprehensive Plans

    DTIC Science & Technology

    2017-09-01

  14. Unitary Transformations in 3 D Vector Representation of Qutrit States

    DTIC Science & Technology

    2018-03-12

    Vinod K. Mishra, Computational and Information Sciences Directorate, ARL; approved for public release.

  15. Developing Scene Understanding Neural Software for Realistic Autonomous Outdoor Missions

    DTIC Science & Technology

    2017-09-01

    Information Sciences Directorate, ARL; approved for public release, distribution unlimited.

  16. State of the Recruiting Market (Briefing charts)

    DTIC Science & Technology

    2010-01-26

  17. Joint Direct Attack Munition (JDAM)

    DTIC Science & Technology

    2013-12-01

  18. Climate change and water scarcity effects on the rural income distribution in the Mediterranean

    NASA Astrophysics Data System (ADS)

    Quiroga, Sonia; Suárez, Cristina

    2015-04-01

    This paper examines the effects of climate change and water scarcity on agricultural outputs in the Mediterranean region. To date, the effects of water scarcity, whether driven by climate change or by policy restrictions, have been analyzed with response functions that consider only the direct effects on crop productivity. Here we consider a complementary indirect effect on the social distribution of incomes, which is essential in the long term. We estimate crop production functions for a range of Mediterranean crops in Spain and use a decomposition of the Gini coefficient to estimate the impact of climate change and water scarcity on yield disparities. This social aspect is important for climate change policies, since it can be a determinant of public acceptance of certain adaptation measures in a context of water scarcity. We provide empirical estimates of the marginal effects for both the direct and the indirect impacts considered. Our estimates account for both bio-physical and socio-economic aspects, and we conclude that there are long-term implications for both competitiveness and social disparities. We also find differences in adaptation strategies depending on the crop and the region analyzed.
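
    The central quantity in the income-disparity part of this analysis is the Gini coefficient; a minimal sketch of its computation for a vector of farm incomes is shown below. The sample values, and the use of plain NumPy rather than the authors' econometric decomposition, are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch: Gini coefficient of a vector of incomes (or yields), the
# basic ingredient of the disparity analysis described in the record. The
# numbers are invented; the full crop-level decomposition is not reproduced.

def gini(x):
    """Gini coefficient of a 1-D array of non-negative values."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1.0) / n

baseline = [12.0, 15.0, 18.0, 22.0, 30.0]   # incomes under current conditions
scenario = [10.0, 13.0, 18.0, 23.0, 33.0]   # incomes under water scarcity
print(f"Gini baseline {gini(baseline):.3f} -> scenario {gini(scenario):.3f}")
```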

  19. Aspects of Cognitive Functioning in Adults with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Perkins, Elizabeth A.; Small, Brent J.

    2006-01-01

    Recently, more attention is being given to identifying aging-related and dementia-related pathological changes in performance and cognition among persons with intellectual disabilities (ID). This literature review examines age-related differences in specific aspects of cognitive functioning and cognitive performance of people with ID and…

  20. Instructional Aides: Employment, Payroll Procedures, Supervision, Performance Appraisal, Legal Aspects.

    ERIC Educational Resources Information Center

    Nielsen, Earl T.

    Designed to assist school administrators in their efforts to secure, train, and retain the most qualified instructional aides available, the monograph discusses procedures for employment, payroll processing, aide supervision, performance appraisal, and legal aspects involved in the hiring of instructional aides. Specific topics include…

  1. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method.

    PubMed

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-04-27

    Different from GPS, GLONASS, GALILEO and BeiDou-3, it has been confirmed that code multipath bias (CMB), which originates at the satellite end and can exceed 1 m, is commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate its adverse effects on absolute precise applications that use code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which treats the correction values as orbit-type dependent (estimating one set of values for IGSO and one for MEO) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate corrections for each BDS IGSO and MEO satellite individually and use a denser elevation node separation of 5° to model the CMB variations. Currently, institutions such as IGS-MGEX operate over 120 stations providing daily BDS observations; these large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that, for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. The new correction model improves the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shortens the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improves PPP positioning accuracy. With the improved correction model, the usage of WL ambiguities increases from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and the improved CMB models perform better in these aspects than a model that does not account for the CMB correction.
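
    The correction itself is applied as a piecewise linear function of elevation evaluated at per-satellite node values. The sketch below shows only that application step; the node corrections, array names and the sample observation are placeholders, not the values estimated in the paper.

```python
import numpy as np

# Sketch of applying a satellite-specific code multipath bias (CMB) correction
# modeled as a piecewise linear function of elevation with 5-degree nodes, as
# described in the record. Node values are placeholders, not published results.

nodes_deg = np.arange(0.0, 90.0 + 1e-9, 5.0)          # 5 deg node separation
cmb_nodes_m = 0.1 * np.sin(np.radians(nodes_deg))     # placeholder node values

def corrected_pseudorange(code_obs_m, elevation_deg):
    """Subtract the interpolated CMB from a raw code observation (metres)."""
    cmb_m = np.interp(elevation_deg, nodes_deg, cmb_nodes_m)
    return code_obs_m - cmb_m

print(corrected_pseudorange(2.15e7, elevation_deg=37.3))
```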

  2. An Improved BeiDou-2 Satellite-Induced Code Bias Estimation Method

    PubMed Central

    Fu, Jingyang; Li, Guangyun; Wang, Li

    2018-01-01

    Different from GPS, GLONASS, GALILEO and BeiDou-3, it has been confirmed that code multipath bias (CMB), which originates at the satellite end and can exceed 1 m, is commonly found in the code observations of BeiDou-2 (BDS) IGSO and MEO satellites. In order to mitigate its adverse effects on absolute precise applications that use code measurements, we propose in this paper an improved correction model to estimate the CMB. Unlike the traditional model, which treats the correction values as orbit-type dependent (estimating one set of values for IGSO and one for MEO) and models the CMB as a piecewise linear function with an elevation node separation of 10°, we estimate corrections for each BDS IGSO and MEO satellite individually and use a denser elevation node separation of 5° to model the CMB variations. Currently, institutions such as IGS-MGEX operate over 120 stations providing daily BDS observations; these large amounts of data provide adequate support to refine the CMB estimation satellite by satellite in our improved model. One month of BDS observations from MGEX is used to assess the performance of the improved CMB model by means of precise point positioning (PPP). Experimental results show that, for satellites on the same orbit type, obvious differences can be found in the CMB at the same node and frequency. The new correction model improves the wide-lane (WL) ambiguity usage rate for WL fractional cycle bias estimation, shortens the WL and narrow-lane (NL) time to first fix (TTFF) in PPP ambiguity resolution (AR), and improves PPP positioning accuracy. With the improved correction model, the usage of WL ambiguities increases from 94.1% to 96.0%, and the WL and NL TTFF of PPP AR are shortened from 10.6 to 9.3 min and from 67.9 to 63.3 min, respectively, compared with the traditional correction model. In addition, both the traditional and the improved CMB models perform better in these aspects than a model that does not account for the CMB correction. PMID:29702559

  3. The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model.

    PubMed

    Ozawa, Masayoshi; Iwadate, Kimiharu; Matsumoto, Sari; Asakura, Kumiko; Ochiai, Eriko; Maebashi, Kyoko

    2013-11-01

    Rigor mortis is an important phenomenon to estimate the postmortem interval in forensic medicine. Rigor mortis is affected by temperature. We measured stiffness of rat muscles using a liquid paraffin model to monitor the mechanical aspects of rigor mortis at five temperatures (37, 25, 10, 5 and 0°C). At 37, 25 and 10°C, the progression of stiffness was slower in cooler conditions. At 5 and 0°C, the muscle stiffness increased immediately after the muscles were soaked in cooled liquid paraffin and then muscles gradually became rigid without going through a relaxed state. This phenomenon suggests that it is important to be careful when estimating the postmortem interval in cold seasons. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. Control-surface hinge-moment calculations for a high-aspect-ratio supercritical wing

    NASA Technical Reports Server (NTRS)

    Perry, B., III

    1978-01-01

    The hinge moments, at selected flight conditions, resulting from deflecting two trailing edge control surfaces (one inboard and one midspan) on a high aspect ratio, swept, fuel conservative wing with a supercritical airfoil are estimated. Hinge moment results obtained from procedures which employ a recently developed transonic analysis are given. In this procedure a three dimensional inviscid transonic aerodynamics computer program is combined with a two dimensional turbulent boundary layer program in order to obtain an interacted solution. These results indicate that trends of the estimated hinge moment as a function of deflection angle are similar to those from experimental hinge moment measurements made on wind tunnel models with swept supercritical wings tested at similar values of free stream Mach number and angle of attack.

  5. Control-surface hinge-moment calculations for a high-aspect-ratio supercritical wing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, B.I.

    1978-09-01

    The hinge moments, at selected flight conditions, resulting from deflecting two trailing edge control surfaces (one inboard and one midspan) on a high aspect ratio, swept, fuel conservative wing with a supercritical airfoil are estimated. Hinge moment results obtained from procedures which employ a recently developed transonic analysis are given. In this procedure a three dimensional inviscid transonic aerodynamics computer program is combined with a two dimensional turbulent boundary layer program in order to obtain an interacted solution. These results indicate that trends of the estimated hinge moment as a function of deflection angle are similar to those from experimental hinge moment measurements made on wind tunnel models with swept supercritical wings tested at similar values of free stream Mach number and angle of attack.

  6. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial nuclear power plants (NPPs) in the U.S. are required by federal regulation to be staffed with crews of operators. Yet aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There is a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are (1) distinct from individual human errors and (2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  7. Genetic evaluation of aspects of temperament in Nellore-Angus calves.

    PubMed

    Riley, D G; Gill, C A; Herring, A D; Riggs, P K; Sawyer, J E; Lunt, D K; Sanders, J O

    2014-08-01

    The objective of this work was to estimate heritability of each of 5 subjectively measured aspects of temperament of cattle and the genetic correlations of pairs of those traits. From 2003 to 2013, Nellore-Angus F2 and F3 calves (n = 1,816) were evaluated for aspects of temperament at an average 259 d of age, which was approximately 2 mo after weaning. Calves were separated from a group and subjectively scored from 1 (calm, good temperament) to 9 (wild, poor temperament) for aggressiveness (willingness to hit an evaluator), nervousness, flightiness, gregariousness (willingness to separate from the group), and a distinct overall score by 4 evaluators. Data were analyzed using threshold and linear models with additive genetic random effects. Two-trait animal models (nonthreshold) included the additive genetic covariance for pairs of traits and were used to estimate additive genetic correlations. Contemporary groups (n = 104) represented calves penned together for evaluation on given evaluation days. Heifers had greater (worse) means for all traits than steers (P < 0.05). The regression of score on age in days was included in final models for flightiness (P = 0.05; -0.006 ± 0.003) and gregariousness (P = 0.025; -0.007 ± 0.003). Estimates of heritability were large (0.51, 0.4, 0.45, 0.49, and 0.47 for aggressiveness, nervousness, flightiness, gregariousness, and overall temperament, respectively; SE = 0.07 for each). The ability to use this methodology to distinctly separate different aspects of calf temperament appeared to be limited, as estimates of additive genetic correlations were near unity for all pairs of traits; estimates of phenotypic correlation ranged from 0.88 ± 0.01 to 0.99 ± 0.002 for pairs of traits. Distinct subsequent analyses indicated a significant negative relationship of 4 of the various temperament scores with weight at weaning (regression coefficients ranged from -0.008 ± 0.002 for nervousness, flightiness, and gregariousness to -0.003 ± 0.002 for aggressiveness). In subsequent analyses, the regression of temperament trait on sequence of evaluation within a pen was highly significant and solutions ranged from 0.05 ± 0.007 for aggressiveness to 0.08 ± 0.007 for all other traits. The apparent large additive genetic variance for any one of these traits may be useful in identification of genes responsible for differences in cattle temperament.

  8. Skylab mission report, second visit. [postflight analysis of engineering, experimentation, and medical aspects

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An evaluation is presented of the operational and engineering aspects of the second Skylab flight. Other areas described include: the performance of experimental hardware; the crew's evaluation of the flight; medical aspects; and hardware anomalies.

  9. Hygienic food handling behaviors: attempting to bridge the intention-behavior gap using aspects from temporal self-regulation theory.

    PubMed

    Fulham, Elizabeth; Mullan, Barbara

    2011-06-01

    An estimated 25% of the populations of both the United States and Australia suffer from foodborne illness every year, generally as a result of incorrect food handling practices. The aim of the current study was to determine through the application of the theory of planned behavior what motivates these behaviors and to supplement the model with two aspects of temporal self-regulation theory--behavioral prepotency and executive function--in an attempt to bridge the "intention-behavior gap." A prospective 1-week design was utilized to investigate the prediction of food hygiene using the theory of planned behavior with the additional variables of behavioral prepotency and executive function. One hundred forty-nine undergraduate psychology students completed two neurocognitive executive function tasks and a self-report questionnaire assessing theory of planned behavior variables, behavioral prepotency, and intentions to perform hygienic food handling behaviors. A week later, behavior was assessed via a follow-up self-report questionnaire. It was found that subjective norm and perceived behavioral control predicted intentions and intentions predicted behavior. However, behavioral prepotency was found to be the strongest predictor of behavior, over and above intentions, suggesting that food hygiene behavior is habitual. Neither executive function measure of self-regulation predicted any additional variance. These results provide support for the utility of the theory of planned behavior in this health domain, but the augmentation of the theory with two aspects of temporal self-regulation theory was only partially successful.

  10. Optimal Observations for Variational Data Assimilation

    NASA Technical Reports Server (NTRS)

    Koehl, Armin; Stammer, Detlef

    2003-01-01

    An important aspect of ocean state estimation is the design of an observing system that allows the efficient study of climate aspects of the ocean. A solution of the design problem is presented here in terms of optimal observations that emerge as nondimensionalized singular vectors of the modified data resolution matrix. The actual computation is feasible only for scalar quantities in the limit of large observational errors. In the framework of a 1° resolution North Atlantic primitive equation model, it is demonstrated that such optimal observations, when applied to determining the strength of the volume and heat transport across the Greenland-Scotland ridge, perform significantly better than traditional section data. On seasonal to inter-annual time scales, optimal observations are located primarily along the continental shelf, and information about heat transport, wind stress and stratification is communicated via boundary waves and advective processes. On time scales of about a month, sea surface height observations appear to be more efficient in reconstructing the cross-ridge heat transport than hydrographic observations. Optimal observations also provide a tool for understanding how the ocean state is affected by anomalies of integral quantities such as meridional heat transport.
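
    As a rough, generic illustration of the singular-vector idea (not the paper's exact construction of the modified data resolution matrix), the sketch below ranks candidate observation locations by the leading right singular vector of a sensitivity matrix; the matrix here is random placeholder data and the variable names are invented.

```python
import numpy as np

# Generic illustration of selecting informative observation locations from the
# leading right singular vector of a sensitivity matrix A that maps state
# anomalies at candidate locations to a scalar target quantity. This is a
# placeholder example, not the study's modified data resolution matrix.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))   # 50 realizations x 200 candidate locations

_, _, vt = np.linalg.svd(A, full_matrices=False)
leading = vt[0]                                      # leading right singular vector
best_locations = np.argsort(np.abs(leading))[::-1][:10]
print("ten most informative candidate locations:", best_locations)
```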

  11. Metrological approach to quantitative analysis of clinical samples by LA-ICP-MS: A critical review of recent studies.

    PubMed

    Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta

    2018-05-15

    Analysis of clinical specimens by imaging techniques allows to determine the content and distribution of trace elements on the surface of the examined sample. In order to obtain reliable results, the developed procedure should be based not only on the properly prepared sample and performed calibration. It is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology whose main pillars are the use of validated analytical methods, establishing the traceability of the measurement results and the estimation of the uncertainty. This review paper discusses aspects related to sampling, preparation and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) with emphasis on metrological aspects, i.e. selected validation parameters of the analytical method, the traceability of the measurement result and the uncertainty of the result. This work promotes the introduction of metrology principles for chemical measurement with emphasis to the LA-ICP-MS which is the comparative method that requires studious approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Emotional reactivity and awareness of task performance in Alzheimer's disease.

    PubMed

    Mograbi, Daniel C; Brown, Richard G; Salas, Christian; Morris, Robin G

    2012-07-01

    Lack of awareness about performance in tasks is a common feature of Alzheimer's disease. Nevertheless, clinical anecdotes have suggested that patients may show emotional or behavioural responses to the experience of failure despite reporting limited awareness, an aspect which has been little explored experimentally. The current study investigated emotional reactions to success or failure in tasks despite unawareness of performance in Alzheimer's disease. For this purpose, novel computerised tasks which expose participants to systematic success or failure were used in a group of Alzheimer's disease patients (n=23) and age-matched controls (n=21). Two experiments, the first with reaction time tasks and the second with memory tasks, were carried out, and in each experiment two parallel tasks were used, one in a success condition and one in a failure condition. Awareness of performance was measured comparing participant estimations of performance with actual performance. Emotional reactivity was assessed with a self-report questionnaire and rating of filmed facial expressions. In both experiments the results indicated that, relative to controls, Alzheimer's disease patients exhibited impaired awareness of performance, but comparable differential reactivity to failure relative to success tasks, both in terms of self-report and facial expressions. This suggests that affective valence of failure experience is processed despite unawareness of task performance, which might indicate implicit processing of information in neural pathways bypassing awareness. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. [Statistical (Poisson) motor unit number estimation. Methodological aspects and normal results in the extensor digitorum brevis muscle of healthy subjects].

    PubMed

    Murga Oporto, L; Menéndez-de León, C; Bauzano Poley, E; Núñez-Castaín, M J

    Among the different techniques for motor unit number estimation (MUNE) is the statistical (Poisson) technique, in which the activation of motor units is carried out by electrical stimulation and the estimation is performed by means of a statistical analysis based on the Poisson distribution. The study was undertaken in order to give an approachable account of the Poisson MUNE technique, providing a comprehensible view of its methodology, and also to obtain normal results in the extensor digitorum brevis muscle (EDB) from a healthy population. One hundred fourteen normal volunteers with ages ranging from 10 to 88 years were studied using the MUNE software contained in a Viking IV system. The normal subjects were divided into two age groups (10-59 and 60-88 years). The EDB MUNE for the whole sample was 184 ± 49. Both the MUNE and the amplitude of the compound muscle action potential (CMAP) were significantly lower in the older age group (p < 0.0001), with the MUNE showing a better correlation with age than the CMAP amplitude (0.5002 and 0.4142, respectively; p < 0.0001). The statistical MUNE method is an important means of assessing the physiology of the motor unit. The value of MUNE correlates better with the neuromuscular aging process than CMAP amplitude does.
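
    A hedged sketch of the Poisson idea behind statistical MUNE follows: under Poisson assumptions the mean single-motor-unit potential size can be estimated as the variance-to-mean ratio of responses to repeated submaximal stimulation, and MUNE is the maximal CMAP divided by that unit size. The simulated amplitudes and function name are illustrative assumptions, not the Viking IV implementation.

```python
import numpy as np

# Minimal sketch of statistical (Poisson) MUNE: the variance/mean ratio of
# responses at one submaximal stimulus intensity estimates the mean single
# motor unit potential (SMUP), and MUNE = maximal CMAP / mean SMUP.
# The amplitudes below are simulated, not clinical data.

def poisson_mune(submax_responses_mv, cmap_max_mv):
    r = np.asarray(submax_responses_mv, dtype=float)
    mean_smup = r.var(ddof=1) / r.mean()     # Poisson: variance/mean gives unit size
    return cmap_max_mv / mean_smup

rng = np.random.default_rng(1)
# simulate ~30 scans at one intensity: on average 12 units of ~0.05 mV fire
responses = 0.05 * rng.poisson(12, size=30)
print(round(poisson_mune(responses, cmap_max_mv=9.2)))
```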

  14. The Priority of Road Rehabilitation in Karanganyar Regency Using IRI Estimation from Roadroid

    NASA Astrophysics Data System (ADS)

    Achmadi, F.; Suprapto, M.; Setyawan, A.

    2017-02-01

    The IRI (International Roughness Index) is a road roughness index commonly obtained from measured longitudinal road profiles and is one of the functional performance indicators of a road pavement surface. Periodic evaluation and monitoring are therefore needed so that road rehabilitation priorities are set correctly. The IRI standard has commonly been used worldwide for evaluating road systems. Roadroid is an application for measuring road quality, with a website for viewing the results. It is designed for Android smartphones, so the road can easily be measured and monitored, and the camera can be used for GPS-tagged photos. By using the built-in vibration sensor of the smartphone, it is possible to collect IRI values which can serve as an indicator of road conditions. This study attempts to determine the priority of road rehabilitation in Karanganyar Regency. The study focused on collector streets (primary, secondary and local roads). The IRI estimates are combined with other influencing aspects: land use, policy, road connectivity and average daily traffic. Based on the IRI estimates obtained with Roadroid, the road conditions in Karanganyar Regency can be described as 59.60% good (IRI < 4.5) and 21.30% fair (4.5-12).
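
    A minimal sketch of turning per-segment IRI estimates into condition classes and a rough rehabilitation ranking is given below; the class boundaries follow the (partly garbled) thresholds quoted in the record, and the segment names and values are invented.

```python
# Minimal sketch: binning Roadroid IRI estimates (m/km) into condition classes
# and ranking segments for rehabilitation. Thresholds follow the record's
# categories (good below 4.5, fair up to about 12); segment data are invented.

def iri_class(iri, good_upper=4.5, fair_upper=12.0):
    if iri < good_upper:
        return "good"
    if iri < fair_upper:
        return "fair"
    return "poor"

segments = {"segment A": 3.1, "segment B": 5.7, "segment C": 13.4}
ranking = sorted(segments, key=segments.get, reverse=True)   # roughest first
print({name: iri_class(v) for name, v in segments.items()})
print("rehabilitation priority:", ranking)
```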

  15. Cellular signaling identifiability analysis: a case study.

    PubMed

    Roper, Ryan T; Pia Saccomani, Maria; Vicini, Paolo

    2010-05-21

    Two primary purposes for mathematical modeling in cell biology are (1) simulation for making predictions of experimental outcomes and (2) parameter estimation for drawing inferences from experimental data about unobserved aspects of biological systems. While the former purpose has become common in the biological sciences, the latter is less common, particularly when studying cellular and subcellular phenomena such as signaling-the focus of the current study. Data are difficult to obtain at this level. Therefore, even models of only modest complexity can contain parameters for which the available data are insufficient for estimation. In the present study, we use a set of published cellular signaling models to address issues related to global parameter identifiability. That is, we address the following question: assuming known time courses for some model variables, which parameters is it theoretically impossible to estimate, even with continuous, noise-free data? Following an introduction to this problem and its relevance, we perform a full identifiability analysis on a set of cellular signaling models using DAISY (Differential Algebra for the Identifiability of SYstems). We use our analysis to bring to light important issues related to parameter identifiability in ordinary differential equation (ODE) models. We contend that this is, as of yet, an under-appreciated issue in biological modeling and, more particularly, cell biology. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  16. Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps

    PubMed Central

    Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi

    2015-01-01

    Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel’s scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate 3.27% of the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other “fused” algorithms in the aspect of precision. PMID:26308003

  17. MicroRadarNet: A network of weather micro radars for the identification of local high resolution precipitation patterns

    NASA Astrophysics Data System (ADS)

    Turso, S.; Paolella, S.; Gabella, M.; Perona, G.

    2013-01-01

    In this paper, MicroRadarNet, a novel micro radar network for continuous, unattended meteorological monitoring, is presented. Key aspects and constraints are introduced. Specific design strategies are highlighted, leading to the technological implementation of this wireless, low-cost, low-power-consumption sensor network. Raw spatial and temporal datasets are processed on board in real time, featuring a consistent evaluation of the signals from the sensors and optimizing the data loads to be transmitted. Network servers perform the final post-processing steps on the data streams coming from each unit. The final network products are meteorological mappings of weather events, monitored with high spatial and temporal resolution and served to the end user through any Web browser. This networked approach is shown to yield a considerable reduction in overall operational costs, including management and maintenance, compared to the traditional long-range monitoring strategy. Adoption of the TITAN storm identification and nowcasting engine is also evaluated here for in-loop integration within the MicroRadarNet data processing chain. A brief description of the engine workflow is provided, together with preliminary feasibility results and performance estimates. The outcome was not easily predictable, given the relevant operational differences between a Western Alps micro radar scenario and the long-range radar context of the Denver region of Colorado. Finally, positive results from a set of case studies are discussed, motivating further refinement and integration activities.

  18. Legal Aspects of Evaluating Teacher Performance.

    ERIC Educational Resources Information Center

    Beckham, Joseph C.

    Chapter 14 in a book on school law concerns the legal aspects of evaluating teacher performance. Careful analysis of recent decisions makes it clear the courts will compel uniform standards and unprecedented rigor in teacher evaluation practices. Particularly in the consideration of equitable standards, state and federal courts are relying on…

  19. Sure I'm Sure: Prefrontal Oscillations Support Metacognitive Monitoring of Decision Making.

    PubMed

    Wokke, Martijn E; Cleeremans, Axel; Ridderinkhof, K Richard

    2017-01-25

    Successful decision making critically involves metacognitive processes such as monitoring and control of our decision process. Metacognition enables agents to modify ongoing behavior adaptively and determine what to do next in situations in which external feedback is not (immediately) available. Despite the importance of metacognition for many aspects of life, little is known about how our metacognitive system operates or about what kind of information is used for metacognitive (second-order) judgments. In particular, it remains an open question whether metacognitive judgments are based on the same information as first-order decisions. Here, we investigated the relationship between metacognitive performance and first-order task performance by recording EEG signals while participants were asked to make a "diagnosis" after seeing a sample of fictitious patient data (a complex pattern of colored moving dots of different sizes). To assess metacognitive performance, participants provided an estimate about the quality of their diagnosis on each trial. Results demonstrate that the information that contributes to first-order decisions differs from the information that supports metacognitive judgments. Further, time-frequency analyses of EEG signals reveal that metacognitive performance is associated specifically with prefrontal theta-band activity. Together, our findings are consistent with a hierarchical model of metacognition and suggest a crucial role for prefrontal oscillations in metacognitive performance. Monitoring and control of our decision process (metacognition) is a crucial aspect of adaptive decision making. Crucially, metacognitive skills enable us to adjust ongoing behavior and determine future decision making when immediate feedback is not available. In the present study, we constructed a "diagnosis task" that allowed us to assess in what way first-order task performance and metacognition are related to each other. Results demonstrate that the contribution of sensory evidence (size, color, and motion direction) differs between first- and second-order decision making. Further, our results indicate that metacognitive performance specifically is orchestrated by means of prefrontal theta oscillations. Together, our findings suggest a hierarchical model of metacognition. Copyright © 2017 the authors 0270-6474/17/370781-09$15.00/0.

  20. Costs and cost-effectiveness of HIV community services: quantity and quality of studies published 1986-2011.

    PubMed

    Beck, Eduard J; Fasawe, Olufunke; Ongpin, Patricia; Ghys, Peter; Avilla, Carlos; De Lay, Paul

    2013-06-01

    Community services comprise an important part of a country's HIV response. English language cost and cost-effectiveness studies of HIV community services published between 1986 and 2011 were reviewed but only 74 suitable studies were identified, 66% of which were performed in five countries. Mean study scores by continent varied from 42 to 69% of the maximum score, reflecting variation in topics covered and the quality of coverage: 38% of studies covered key and 11% other vulnerable populations - a country's response is most effective and efficient if these populations are identified given they are key to a successful response. Unit costs were estimated using different costing methods and outcomes. Community services will need to routinely collect and analyze information on their use, cost, outcome and impact using standardized costing methods and outcomes. Cost estimates need to be disaggregated into relevant cost items and stratified by severity and existing comorbidities. Expenditure tracking and costing of services are complementary aspects of the health sector 'resource cycle' that feed into a country's investment framework and the development and implementation of national strategic plans.

  1. Estimated prevalence of halitosis: a systematic review and meta-regression analysis.

    PubMed

    Silva, Manuela F; Leite, Fábio R M; Ferreira, Larissa B; Pola, Natália M; Scannapieco, Frank A; Demarco, Flávio F; Nascimento, Gustavo G

    2018-01-01

    This study aims to conduct a systematic review to determine the prevalence of halitosis in adolescents and adults. Electronic searches were performed using four different databases without restrictions: PubMed, Scopus, Web of Science, and SciELO. Population-based observational studies that provided data about the prevalence of halitosis in adolescents and adults were included. Additionally, meta-analyses, meta-regression, and sensitivity analyses were conducted to synthesize the evidence. A total of 584 articles were initially found and considered for title and abstract evaluation. Thirteen articles met inclusion criteria. The combined prevalence of halitosis was found to be 31.8% (95% CI 24.6-39.0%). Methodological aspects such as the year of publication and the socioeconomic status of the country where the study was conducted seemed to influence the prevalence of halitosis. Our results demonstrated that the estimated prevalence of halitosis was 31.8%, with high heterogeneity between studies. The results suggest a worldwide trend towards a rise in halitosis prevalence. Given the high prevalence of halitosis and its complex etiology, dental professionals should be aware of their roles in halitosis prevention and treatment.
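
    For readers unfamiliar with how a combined prevalence such as 31.8% (95% CI 24.6-39.0%) is obtained, the sketch below shows a DerSimonian-Laird random-effects pooling of study proportions. The study data are invented, and real meta-analyses typically transform proportions (e.g. logit or double arcsine) before pooling, so this is an illustration of the principle rather than the authors' analysis.

```python
import numpy as np

# Minimal DerSimonian-Laird random-effects pooling of prevalence estimates.
# Proportions are pooled on the raw scale for simplicity; study data invented.

def dl_pooled_prevalence(p, n):
    p, n = np.asarray(p, dtype=float), np.asarray(n, dtype=float)
    v = p * (1 - p) / n                          # within-study variance
    w = 1 / v
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)           # heterogeneity statistic
    k = len(p)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * p) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

print(dl_pooled_prevalence([0.22, 0.35, 0.31, 0.40], [800, 450, 1200, 300]))
```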

  2. Review of hybrid laminar flow control systems

    NASA Astrophysics Data System (ADS)

    Krishnan, K. S. G.; Bertram, O.; Seibel, O.

    2017-08-01

    The aeronautic community has always strived for fuel-efficient aircraft, and the need for eco-friendly aircraft is now even greater, especially with the tremendous growth of air traffic and growing environmental concerns. Important drivers for this interest include high fuel prices, stricter emissions requirements, and the need for more environmentally friendly aircraft to lessen global-warming effects. Hybrid laminar flow control (HLFC) technology is promising and offers the possibility of achieving these goals. This technology has been researched for decades for application in transport aircraft, and it has reached a new level of maturity with respect to integration, safety and maintenance aspects. This paper aims to give an overview of HLFC systems research and the associated flight tests of past years, both in the US and in Europe. The review makes it possible to distinguish between the successful approaches and the less successful or outdated approaches in HLFC research. Furthermore, the technology status is used to produce first estimates of the mass, power consumption and performance of HLFC systems, as well as estimates of maintenance requirements and possible subsystem definitions.

  3. A Hierarchical Bayesian Model for Crowd Emotions

    PubMed Central

    Urizar, Oscar J.; Baig, Mirza S.; Barakova, Emilia I.; Regazzoni, Carlo S.; Marcenaro, Lucio; Rauterberg, Matthias

    2016-01-01

    Estimation of emotions is an essential aspect of developing intelligent systems intended for crowded environments. However, emotion estimation in crowds remains a challenging problem due to the complexity with which human emotions are manifested and the limited capability of a system to perceive them in such conditions. This paper proposes a hierarchical Bayesian model to learn, in an unsupervised manner, the behavior of individuals and of the crowd as a single entity, and to explore the relation between behavior and emotions in order to infer emotional states. Information about the motion patterns of individuals is described using a self-organizing map, and a hierarchical Bayesian network builds probabilistic models to identify behaviors and infer the emotional state of individuals and the crowd. The model is trained and tested using data produced from simulated scenarios that resemble real-life environments. The conducted experiments tested the efficiency of our method to learn, detect and associate behaviors with emotional states, yielding accuracy levels of 74% for individuals and 81% for the crowd, similar in performance to existing methods for pedestrian behavior detection but with novel concepts regarding the analysis of crowds. PMID:27458366

  4. Number Sense and Mathematics: Which, When and How?

    PubMed Central

    2017-01-01

    Individual differences in number sense correlate with mathematical ability and performance, although the presence and strength of this relationship differs across studies. Inconsistencies in the literature may stem from heterogeneity of number sense and mathematical ability constructs. Sample characteristics may also play a role as changes in the relationship between number sense and mathematics may differ across development and cultural contexts. In this study, 4,984 16-year-old students were assessed on estimation ability, one aspect of number sense. Estimation was measured using 2 different tasks: number line and dot-comparison. Using cognitive and achievement data previously collected from these students at ages 7, 9, 10, 12, and 14, the study explored for which of the measures and when in development these links are observed, and how strong these links are and how much these links are moderated by other cognitive abilities. The 2 number sense measures correlated modestly with each other (r = .22), but moderately with mathematics at age 16. Both measures were also associated with earlier mathematics; but this association was uneven across development and was moderated by other cognitive abilities. PMID:28758784

  5. Applying machine learning methods for characterization of hexagonal prisms from their 2D scattering patterns - an investigation using modelled scattering data

    NASA Astrophysics Data System (ADS)

    Salawu, Emmanuel Oluwatobi; Hesse, Evelyn; Stopford, Chris; Davey, Neil; Sun, Yi

    2017-11-01

    Better understanding and characterization of cloud particles, whose properties and distributions affect climate and weather, are essential for the understanding of present climate and climate change. Since imaging cloud probes have limitations of optical resolution, especially for small particles (with diameter < 25 μm), instruments like the Small Ice Detector (SID) probes, which capture high-resolution spatial light scattering patterns from individual particles down to 1 μm in size, have been developed. In this work, we have proposed a method using Machine Learning techniques to estimate simulated particles' orientation-averaged projected sizes (PAD) and aspect ratio from their 2D scattering patterns. The two-dimensional light scattering patterns (2DLSP) of hexagonal prisms are computed using the Ray Tracing with Diffraction on Facets (RTDF) model. The 2DLSP cover the same angular range as the SID probes. We generated 2DLSP for 162 hexagonal prisms at 133 orientations for each. In a first step, the 2DLSP were transformed into rotation-invariant Zernike moments (ZMs), which are particularly suitable for analyses of pattern symmetry. Then we used ZMs, summed intensities, and root mean square contrast as inputs to the advanced Machine Learning methods. We created one random forests classifier for predicting prism orientation, 133 orientation-specific (OS) support vector classification models for predicting the prism aspect-ratios, 133 OS support vector regression models for estimating prism sizes, and another 133 OS Support Vector Regression (SVR) models for estimating the size PADs. We have achieved a high accuracy of 0.99 in predicting prism aspect ratios, and a low value of normalized mean square error of 0.004 for estimating the particle's size and size PADs.
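
    A loose sketch of the learning stage is given below, assuming the scattering patterns have already been reduced to feature vectors (Zernike moments plus summed intensity and RMS contrast). It uses one global size regressor instead of the 133 orientation-specific models, and random placeholder features, so it illustrates the pipeline shape rather than reproducing the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Sketch of the learning stage described in the record: a random forest for
# orientation classification and support vector regression for particle size.
# Features, labels and sizes below are random placeholders.

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 40))            # placeholder feature vectors
orientation = rng.integers(0, 133, size=2000)  # orientation class labels
size_um = 5 + 20 * rng.random(2000)            # particle size targets

Xtr, Xte, otr, ote, str_, ste = train_test_split(
    X, orientation, size_um, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, otr)
reg = SVR(C=10.0, epsilon=0.1).fit(Xtr, str_)  # one global SVR, not 133 OS models

print("orientation accuracy:", clf.score(Xte, ote))
print("size R^2:", reg.score(Xte, ste))
```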

  6. Equation of state and QCD transition at finite temperature

    NASA Astrophysics Data System (ADS)

    Bazavov, A.; Bhattacharya, T.; Cheng, M.; Christ, N. H.; Detar, C.; Ejiri, S.; Gottlieb, Steven; Gupta, R.; Heller, U. M.; Huebner, K.; Jung, C.; Karsch, F.; Laermann, E.; Levkova, L.; Miao, C.; Mawhinney, R. D.; Petreczky, P.; Schmidt, C.; Soltz, R. A.; Soeldner, W.; Sugar, R.; Toussaint, D.; Vranas, P.

    2009-07-01

    We calculate the equation of state in 2+1 flavor QCD at finite temperature with physical strange quark mass and almost physical light quark masses using lattices with temporal extent Nτ=8. Calculations have been performed with two different improved staggered fermion actions, the asqtad and p4 actions. Overall, we find good agreement between results obtained with these two O(a2) improved staggered fermion discretization schemes. A comparison with earlier calculations on coarser lattices is performed to quantify systematic errors in current studies of the equation of state. We also present results for observables that are sensitive to deconfining and chiral aspects of the QCD transition on Nτ=6 and 8 lattices. We find that deconfinement and chiral symmetry restoration happen in the same narrow temperature interval. In an appendix we present a simple parametrization of the equation of state that can easily be used in hydrodynamic model calculations. In this parametrization we include an estimate of current uncertainties in the lattice calculations which arise from cutoff and quark mass effects.

  7. Comparison of Biocorrosion due to Desulfovibrio desulfuricans and Desulfotomaculum nigrificans Bacteria

    NASA Astrophysics Data System (ADS)

    Lata, Suman; Sharma, Chhaya; Singh, Ajay K.

    2013-02-01

    Several species of sulfate-reducing bacteria (SRB) are observed in nature, and their presence in a medium may cause microbially influenced corrosion (MIC) of materials to different degrees. To investigate this aspect of MIC, corrosion tests were performed on three types of stainless steel. The tests were done in modified Baar's medium inoculated separately with two species of SRB, namely Desulfovibrio desulfuricans (DD) and Desulfotomaculum nigrificans (DN). Electrochemical and immersion tests were performed to assess the extent of uniform and localized corrosion of these steels. Biofilms formed on the corroded samples were analyzed to estimate the various components of their extracellular polymeric substances. The hydrogenase enzyme of these bacteria was tested to determine its nature and activity. A higher degree of corrosivity was observed in the media inoculated with DD compared to DN. The more active nature of the hydrogenase enzyme, its location in the periplasm of DD, and the higher fraction of carbohydrate in the biofilm formed by DD are suggested to be responsible for the higher degree of corrosivity they cause.

  8. Launch vehicle design and GNC sizing with ASTOS

    NASA Astrophysics Data System (ADS)

    Cremaschi, Francesco; Winter, Sebastian; Rossi, Valerio; Wiegand, Andreas

    2018-03-01

    The European Space Agency (ESA) is currently involved in several activities related to launch vehicle designs (Future Launcher Preparatory Program, Ariane 6, VEGA evolutions, etc.). Within these activities, ESA has identified the importance of developing a simulation infrastructure capable of supporting the multi-disciplinary design and preliminary guidance navigation and control (GNC) design of different launch vehicle configurations. Astos Solutions has developed the multi-disciplinary optimization and launcher GNC simulation and sizing tool (LGSST) under ESA contract. The functionality is integrated in the Analysis, Simulation and Trajectory Optimization Software for space applications (ASTOS) and is intended to be used from the early design phases up to phase B1 activities. ASTOS shall enable the user to perform detailed vehicle design tasks and assessment of GNC systems, covering all aspects of rapid configuration and scenario management, sizing of stages, trajectory-dependent estimation of structural masses, rigid and flexible body dynamics, navigation, guidance and control, worst case analysis, launch safety analysis, performance analysis, and reporting.

  9. Clinical and epidemiological aspects of cornea transplant patients of a reference hospital 1

    PubMed Central

    Cruz, Giovanna Karinny Pereira; de Azevedo, Isabelle Campos; Carvalho, Diana Paula de Souza Rego Pinto; Vitor, Allyne Fortes; Santos, Viviane Euzébia Pereira; Ferreira, Marcos Antonio

    2017-01-01

    ABSTRACT Objective: to clinically characterize cornea transplant patients and their distribution according to the indications and post-operative conditions of cornea transplantation, as well as to estimate the average waiting time. Method: a cross-sectional, descriptive and analytical study of all cornea transplants performed at a reference service (n=258). Data were analyzed using the Statistical Package for the Social Sciences, version 20.0. Results: the main indication for cornea transplant was keratoconus. The mean waiting time for the transplant was approximately 5 months and 3 weeks for elective transplants and 9 days for urgent cases. Associations were found between the type of corneal disorder and gender, age, previous surgery, eye classification, glaucoma and previous graft failure. Conclusion: keratoconus was the main indication for cornea transplant. Factors such as age, previous corneal graft failure (retransplantation), glaucoma and surgeries prior to cornea transplant (especially cataract surgery) may be related to the onset of corneal endothelial disorders. PMID:28614429

  10. Optimization of the excitation light sheet in selective plane illumination microscopy

    PubMed Central

    Gao, Liang

    2015-01-01

    Selective plane illumination microscopy (SPIM) allows rapid 3D live fluorescence imaging of biological specimens with high 3D spatial resolution, good optical sectioning capability, and minimal photobleaching and phototoxicity. SPIM gains its advantage by confining the excitation light near the detection focal plane, and its performance is determined by the ability to create a thin, large and uniform excitation light sheet. Several methods have been developed to create such an excitation light sheet for SPIM. However, each method has its own strengths and weaknesses, and tradeoffs must be made among different aspects of SPIM imaging. In this work, we present a strategy to select the excitation light sheet among the latest SPIM techniques and to optimize its geometry based on spatial resolution, field of view, optical sectioning capability, and the sample to be imaged. Beyond the light sheets discussed in this work, the proposed strategy is also applicable to estimating SPIM performance with other excitation light sheets. PMID:25798312
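
    The core geometric tradeoff behind light-sheet selection can be illustrated with the standard Gaussian-beam relations: the sheet stays close to its waist thickness only over roughly the confocal parameter 2·z_R, with z_R = π·w0²/λ, so thinner sheets give better sectioning over a shorter field of view. A minimal Python sketch of that tradeoff (the wavelength and waist values are illustrative assumptions, not figures from the paper):

      import numpy as np

      def gaussian_sheet_geometry(waist_um, wavelength_um=0.488):
          """Thickness and usable field of view of a Gaussian light sheet.

          Thickness is taken as the beam diameter 2*w0 at the waist; the usable
          field of view is approximated by the confocal parameter 2*z_R, over
          which the beam radius stays within sqrt(2) of the waist radius.
          """
          z_r = np.pi * waist_um**2 / wavelength_um   # Rayleigh range
          return 2.0 * waist_um, 2.0 * z_r

      for w0 in (0.5, 1.0, 2.0, 4.0):                 # hypothetical waist radii, micrometers
          thickness, fov = gaussian_sheet_geometry(w0)
          print(f"w0 = {w0:3.1f} um -> sheet thickness {thickness:4.1f} um, usable FOV {fov:6.1f} um")

    Doubling the waist quadruples the usable field of view, which is exactly the tension the selection strategy has to balance against resolution and sectioning requirements.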

  11. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    PubMed

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
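
    The difference between the recursive and direct strategies can be made concrete with a small sketch (a toy linear autoregression in Python, not the authors' code): the recursive strategy fits a single one-step model and feeds its own forecasts back in, while the direct strategy fits a separate model for each horizon.

      import numpy as np

      rng = np.random.default_rng(0)
      y = np.cumsum(rng.normal(size=300))          # toy time series (a random walk)
      p, H = 3, 5                                  # autoregressive order and forecast horizon

      def fit_linear(y, p, lead):
          """Least-squares fit of p lags predicting the value `lead` steps ahead."""
          X = np.column_stack([y[i:len(y) - p - lead + 1 + i] for i in range(p)])
          X = np.column_stack([np.ones(len(X)), X])            # intercept column
          target = y[p + lead - 1:]
          coef, *_ = np.linalg.lstsq(X, target, rcond=None)
          return coef

      def predict(coef, window):
          return coef[0] + np.dot(coef[1:], window)

      # Recursive strategy: one one-step model, iterated H times on its own output.
      c1 = fit_linear(y, p, lead=1)
      window, recursive = list(y[-p:]), []
      for _ in range(H):
          yhat = predict(c1, window[-p:])
          recursive.append(yhat)
          window.append(yhat)

      # Direct strategy: a separate model for each horizon h = 1..H.
      direct = [predict(fit_linear(y, p, lead=h), y[-p:]) for h in range(1, H + 1)]

      print("recursive:", np.round(recursive, 2))
      print("direct:   ", np.round(direct, 2))

    The bias and variance analysis in the paper is precisely about how these two ways of reaching horizon H propagate model misspecification and estimation noise differently.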

  12. Contribution of individual waste fractions to the environmental impacts from landfilling of municipal solid waste.

    PubMed

    Manfredi, Simone; Tonini, Davide; Christensen, Thomas H

    2010-03-01

    A number of LCA-based studies have reported on the environmental performance of landfilling of mixed waste, but little is known about the relative contributions of individual waste fractions to the overall impact potentials estimated for the mixed waste. In this paper, an empirical model has been used to estimate the emissions to the environment from landfilling of individual waste fractions. By means of the LCA model EASEWASTE, the estimated emissions have been used to quantify how much of the overall impact potential for each impact category is to be attributed to the individual waste fractions. Impact potentials are estimated for 1 tonne of mixed waste disposed of in a conventional landfill with a bottom liner, leachate collection and treatment, and gas collection with utilization for electricity generation. All environmental aspects are accounted for over 100 years after disposal, and several impact categories have been considered, including standard categories, toxicity-related categories and groundwater contamination. Amongst the standard and toxicity-related categories, the highest potential impact is estimated for human toxicity via soil (HTs; 12 mPE/tonne). This is mostly caused by leaching of heavy metals from ashes (e.g. residues from road cleaning and vacuum cleaning bags), batteries, paper and metals. On the other hand, substantial net environmental savings are estimated for the categories Global Warming (GW; -31 mPE/tonne) and Eco-Toxicity in water chronic (ETwc; -53 mPE/tonne). These savings are mostly determined by the waste fractions characterized by a high content of biogenic carbon (paper, organics, other combustible waste). The savings are due to the emissions from energy generation avoided by landfill gas utilization and to the storage of biogenic carbon in the landfill owing to incomplete waste degradation. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Footbridge system identification using wireless inertial measurement units for force and response measurements

    NASA Astrophysics Data System (ADS)

    Brownjohn, James Mark William; Bocian, Mateusz; Hester, David; Quattrone, Antonino; Hudson, William; Moore, Daniel; Goh, Sushma; Lim, Meng Sun

    2016-12-01

    With the main focus on safety, design of structures for vibration serviceability is often overlooked or mismanaged, resulting in some high-profile structures failing publicly to perform adequately under human dynamic loading due to walking, running or jumping. A standard tool to inform better design, prove fitness for purpose before entering service, and design retrofits is modal testing, a procedure that typically involves acceleration measurements using an array of wired sensors and force generation using a mechanical shaker. A critical but often overlooked aspect is using input (force) to output (response) relationships to enable estimation of modal mass, which is a key parameter directly controlling vibration levels in service. This paper describes the use of wireless inertial measurement units (IMUs), designed for biomechanics motion capture applications, for the modal testing of a 109 m footbridge. IMUs were first used for an output-only vibration survey to identify mode frequencies, shapes and damping ratios, then for simultaneous measurement of the body accelerations of a human subject jumping to excite specific vibration modes and of the bridge deck accelerations at the jumping location. Using the mode shapes and the vertical acceleration data from a suitable body landmark scaled by body mass, thus providing jumping force data, it was possible to create frequency response functions and estimate modal masses. The modal mass estimates for this bridge were checked against estimates obtained using an instrumented hammer and known mass distributions, showing consistency among the experimental estimates. Finally, the method was used in an applied research application on a short-span footbridge, where the logistical and operational simplicity afforded by the highly portable and easy-to-use IMUs proved extremely useful for an efficient evaluation of vibration serviceability, including estimation of modal masses.
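
    The modal-mass step rests on a standard single-degree-of-freedom result: the magnitude of the accelerance FRF (acceleration per unit force) at resonance is 1/(2ζm), so once the FRF is formed from the measured jumping force and deck acceleration, the modal mass follows from the peak value and the identified damping ratio. A minimal Python sketch under that assumption, on synthetic data rather than the bridge measurements (the sampling rate, mode frequency, damping and modal mass are made up):

      import numpy as np

      fs = 200.0                                    # sampling rate, Hz (assumed)
      fn, zeta, m = 2.0, 0.02, 40e3                 # synthetic mode: 2 Hz, 2% damping, 40 t modal mass
      rng = np.random.default_rng(1)
      force = rng.normal(size=int(60 * fs))         # placeholder for the jumping force record, N

      # Synthesize a deck response via the SDOF accelerance -w^2 / (m (wn^2 - w^2 + 2j zeta wn w)).
      f = np.fft.rfftfreq(force.size, 1.0 / fs)
      w, wn = 2 * np.pi * f, 2 * np.pi * fn
      H_true = -w**2 / (m * (wn**2 - w**2 + 2j * zeta * wn * w))
      accel = np.fft.irfft(np.fft.rfft(force) * H_true, n=force.size)

      # FRF from measured force and response (in practice an averaged H1 estimate would be used).
      frf = np.fft.rfft(accel) / np.fft.rfft(force)
      peak = np.argmax(np.abs(frf[1:])) + 1         # skip the DC bin
      modal_mass = 1.0 / (2 * zeta * np.abs(frf[peak]))
      print(f"peak at {f[peak]:.2f} Hz, modal mass estimate {modal_mass:,.0f} kg (true {m:,.0f} kg)")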

  14. Regression without truth with Markov chain Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Å piclin, Žiga

    2017-03-01

    Regression without truth (RWT) is a statistical technique for estimating the error-model parameters of each method in a group of methods used for measurement of a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which are otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e. the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied to various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution whose probability density function is known only up to a normalizing coefficient, can be used to augment RWT and gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables estimation of the joint posterior distribution of the error-model parameters, straightforward quantification of the uncertainty of the estimates, and estimation of the true value of the measurand with corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand and generally has much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate the EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error-model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute estimates of the true values and their uncertainty, vital information for diagnosis based on QIBs.
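
    As a toy illustration of the idea (not the authors' model), one can posit a linear error model y_ij = a_j·x_i + b_j + noise for each method j, leave the true values x_i unknown, and draw samples from the joint posterior of (a, b, σ, x) with a random-walk Metropolis sampler; credible intervals then come directly from sample quantiles. The priors below stand in for the constraints that make RWT identifiable, and all dimensions, priors and step sizes are assumptions made for this Python sketch:

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic data: J methods measure the same EF-like quantity on N subjects,
      # each with its own slope a_j, offset b_j and noise level s_j.
      N, J = 15, 3
      x_true = rng.normal(0.60, 0.10, N)
      a_true = np.array([1.0, 0.9, 1.1])
      b_true = np.array([0.0, 0.05, -0.03])
      s_true = np.array([0.03, 0.05, 0.04])
      y = a_true * x_true[:, None] + b_true + rng.normal(0, s_true, (N, J))

      def norm_logpdf(v, mu, sigma):
          return -0.5 * np.log(2 * np.pi * sigma**2) - (v - mu)**2 / (2 * sigma**2)

      def log_post(theta):
          """Joint log-posterior of method parameters and latent true values."""
          a, b, log_s, x = theta[:J], theta[J:2*J], theta[2*J:3*J], theta[3*J:]
          ll = norm_logpdf(y, a * x[:, None] + b, np.exp(log_s)).sum()
          lp = (norm_logpdf(a, 1.0, 0.5).sum() + norm_logpdf(b, 0.0, 0.2).sum()
                + norm_logpdf(log_s, np.log(0.05), 1.0).sum()
                + norm_logpdf(x, 0.6, 0.15).sum())     # prior on the measurand, no gold standard used
          return ll + lp

      # Random-walk Metropolis over the full parameter vector.
      theta = np.concatenate([np.ones(J), np.zeros(J), np.full(J, np.log(0.05)), y.mean(axis=1)])
      lp_cur, step, samples = log_post(theta), 0.01, []
      for it in range(30000):
          prop = theta + rng.normal(0, step, theta.size)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp_cur:
              theta, lp_cur = prop, lp_prop
          if it >= 15000 and it % 15 == 0:
              samples.append(theta.copy())

      x_samples = np.array(samples)[:, 3*J:]
      lo, hi = np.percentile(x_samples, [5, 95], axis=0)
      covered = int(((x_true >= lo) & (x_true <= hi)).sum())
      print(f"90% credible intervals cover the true value for {covered} of {N} subjects")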

  15. A comparative study of volumetric breast density estimation in digital mammography and magnetic resonance imaging: results from a high-risk population

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Xing, Ye; Bakic, Predrag R.; Conant, Emily F.; Maidment, Andrew D. A.

    2010-03-01

    We performed a study to compare methods for volumetric breast density estimation in digital mammography (DM) and magnetic resonance imaging (MRI) for a high-risk population of women. DM and MRI images of the unaffected breast from 32 women with recently detected abnormalities and/or previously diagnosed breast cancer (age range 31-78 yrs, mean 50.3 yrs) were retrospectively analyzed. DM images were analyzed using QuantraTM (Hologic Inc). The MRI images were analyzed using a fuzzy C-means segmentation algorithm on the T1 map. Both methods were compared to Cumulus (Univ. Toronto). Volumetric breast density estimates from DM and MRI are highly correlated (r=0.90, p<=0.001). The correlation between the volumetric and the area-based density measures is lower and depends on the training background of the Cumulus software user (r=0.73-0.84, p<=0.001). In terms of absolute values, MRI provides the lowest volumetric estimates (mean=14.63%), followed by the DM volumetric (mean=22.72%) and area-based measures (mean=29.35%). The MRI estimates of the fibroglandular volume are statistically significantly lower than the DM estimates for women with very low-density breasts (p<=0.001). We attribute these differences to potential partial volume effects in MRI and to differences in the computational aspects of the image analysis methods in MRI and DM. The good correlation between the volumetric and the area-based measures, which have been shown to correlate with breast cancer risk, suggests that both DM and MRI volumetric breast density measures can aid in breast cancer risk assessment. Further work is underway to fully investigate the association between volumetric breast density measures and breast cancer risk.
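
    For the MRI side, the essential computation is a two-class fuzzy C-means clustering of voxel intensities, with the fractional fibroglandular volume taken from the memberships of one class. A minimal hand-rolled Python sketch on synthetic intensities (the intensity values, class proportions and the fat-bright/fibroglandular-dark assumption are illustrative, not the study's pipeline):

      import numpy as np

      def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
          """Plain fuzzy C-means on a 1-D feature (e.g. voxel intensities of a T1 map)."""
          rng = np.random.default_rng(seed)
          u = rng.dirichlet(np.ones(c), size=x.size)          # membership matrix, rows sum to 1
          for _ in range(iters):
              centers = (u**m).T @ x / (u**m).sum(axis=0)     # membership-weighted cluster centers
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12
              u = 1.0 / (d**(2 / (m - 1)) * (1.0 / d**(2 / (m - 1))).sum(axis=1, keepdims=True))
          return u, centers

      # Hypothetical breast-voxel intensities: fat bright, fibroglandular darker on T1.
      rng = np.random.default_rng(3)
      voxels = np.concatenate([rng.normal(800, 60, 7000), rng.normal(350, 50, 3000)])
      u, centers = fuzzy_c_means(voxels)
      fg = np.argmin(centers)                                  # fibroglandular = lower-intensity class
      density = u[:, fg].sum() / voxels.size                   # fractional fibroglandular volume
      print(f"cluster centers: {np.sort(centers).round(0)}, volumetric density about {100 * density:.1f}%")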

  16. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes

    PubMed Central

    Ding, Quan; Besio, Walter G.

    2015-01-01

    Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, selectivity and low signal-to-noise ratio that are critically limiting its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications demonstrating its superiority to conventional disc electrode, in particular, in accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2 that allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of multipolar Laplacian more efficient. To confirm the analytic result of the accuracy of Laplacian estimate increasing with the increase of n and to assess the significance of this gain in accuracy for practical applications finite element method model analysis has been performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared and obtained results suggest the significance of the increase in Laplacian accuracy caused by increase of n. PMID:26693200

  17. Improving the accuracy of Laplacian estimation with novel multipolar concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Ding, Quan; Besio, Walter G

    2016-02-01

    Conventional electroencephalography with disc electrodes has major drawbacks including poor spatial resolution, selectivity and low signal-to-noise ratio that are critically limiting its use. Concentric ring electrodes, consisting of several elements including the central disc and a number of concentric rings, are a promising alternative with potential to improve all of the aforementioned aspects significantly. In our previous work, the tripolar concentric ring electrode was successfully used in a wide range of applications demonstrating its superiority to conventional disc electrode, in particular, in accuracy of Laplacian estimation. This paper takes the next step toward further improving the Laplacian estimation with novel multipolar concentric ring electrodes by completing and validating a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2 that allows cancellation of all the truncation terms up to the order of 2n. An explicit formula based on inversion of a square Vandermonde matrix is derived to make computation of multipolar Laplacian more efficient. To confirm the analytic result of the accuracy of Laplacian estimate increasing with the increase of n and to assess the significance of this gain in accuracy for practical applications finite element method model analysis has been performed. Multipolar concentric ring electrode configurations with n ranging from 1 ring (bipolar electrode configuration) to 6 rings (septapolar electrode configuration) were directly compared and obtained results suggest the significance of the increase in Laplacian accuracy caused by increase of n.
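
    The role of the Vandermonde matrix can be sketched under the idealized assumption of rings at radii k·r, k = 1..n (the published formula covers the general geometry). The Taylor expansion of the ring-averaged potential contains only even powers of the radius, so weights w_k satisfying Σ w_k k² = 1 and Σ w_k k^(2j) = 0 for j = 2..n cancel all truncation terms up to order 2n; those conditions form a Vandermonde system in the nodes k². A small Python sketch of the weight computation and a sanity check on a field with known Laplacian (the ring radius and test field are assumptions):

      import numpy as np

      def laplacian_weights(n):
          """Weights w_k for rings at radii k*r (k = 1..n) that cancel the Taylor
          truncation terms up to order 2n, leaving only the r^2 (Laplacian) term.
          The conditions form a Vandermonde system in the nodes k^2."""
          nodes = np.arange(1, n + 1) ** 2                    # k^2
          A = np.vander(nodes, n, increasing=True).T * nodes  # row j holds nodes^(j+1), j = 0..n-1
          rhs = np.zeros(n)
          rhs[0] = 1.0
          return np.linalg.solve(A, rhs)

      def surface_laplacian(v_rings, v_disc, r):
          """Laplacian estimate from the ring-averaged potentials and the disc potential."""
          w = laplacian_weights(len(v_rings))
          return 4.0 / r**2 * np.dot(w, np.asarray(v_rings) - v_disc)

      # Sanity check on v(x, y) = x^2 + y^2, whose Laplacian is exactly 4.
      r = 0.005                                               # inner ring radius, m (assumed)
      for n in (1, 2, 3):
          rings = [(k * r) ** 2 for k in range(1, n + 1)]     # ring average of x^2 + y^2 at radius k*r
          print(n, "ring(s): estimated Laplacian =", round(surface_laplacian(rings, 0.0, r), 6))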

  18. Impact of Attention-Deficit Hyperactivity Disorder on School Performance: What are the Effects of Medication?

    PubMed

    Baweja, Raman; Mattison, Richard E; Waxmonsky, James G

    2015-12-01

    Attention-deficit hyperactivity disorder (ADHD) affects an estimated 5-7% of schoolchildren worldwide. School functioning and academic achievement are frequently impaired by ADHD and represent one of the main reasons children start ADHD medication. Multiple potential causal pathways exist between ADHD and impaired school performance. In this review, we decompose school performance into three components and assess the impact of ADHD and its treatments on academic performance (assessed by grade point average [GPA], time on-task, and percentage of work completed as well as percent completed correctly), academic skills (as measured by achievement tests and cognitive measures), and academic enablers (such as study skills, motivation, engagement, classroom behavior and interpersonal skills). Most studies examined only the short-term effects of medication on school performance. In these, ADHD medications have been observed to improve some aspects of school performance, with the largest impact on measures of academic performance such as seatwork productivity and on-task performance. In a subset of children, these benefits may translate into detectable improvements in GPA and achievement testing. However, limited data exist on whether these changes are sustained over years. Optimizing medication effects requires periodic reassessment of school performance, necessitating a collaborative effort involving patients, parents, school staff and prescribers. Even with systematic reassessment, behavior-based treatments and additional school-based services may be needed to maximize academic performance for the many youth with ADHD and prominent impairments in school performance.

  19. Science and society test VIII: The arms race revisited

    NASA Astrophysics Data System (ADS)

    Hafemeister, David W.

    1983-03-01

    Approximate numerical estimates are developed in order to quantify a variety of aspects of the arms race. The results of these calculations are consistent with either direct observations or with more sophisticated calculations. This paper will cover some of the following aspects of the arms race: (1) the electromagnetic pulse (EMP); (2) spy satellites; (3) ICBM accuracy; (4) NAVSTAR global positioning satellites; (5) particle and laser beam weapons; (6) the neutron bomb; and (7) war games.

  20. Progress in Open Rotor Research: A U.S. Perspective

    NASA Technical Reports Server (NTRS)

    Van Zante, Dale E.

    2015-01-01

    In response to the 1970s oil crisis, NASA created the Advanced Turboprop Project (ATP) to mature technologies for high-speed propellers and enable large reductions in fuel burn relative to the turbofan engines of that era. Both single-rotation and contra-rotation concepts were designed and tested in ground-based facilities as well as in flight. Some novel configurations that were not well publicized at the time were proposed as part of the effort. The high-speed propeller concepts did provide fuel burn savings, albeit with some acoustic and structural challenges to overcome. When fuel prices fell, the business case for radical new engine configurations collapsed and the research emphasis returned to high-bypass ducted configurations. With rising oil prices and increased environmental concerns there is renewed interest in high-speed propeller based engine architectures. Contemporary analysis tools for aerodynamics and aeroacoustics have enabled a new era of blade designs that have both high efficiency and acceptable noise characteristics. A recent series of tests in the U.S. has characterized the aerodynamic performance and noise from these modern contra-rotating propeller designs. Additionally, the installation and noise-shielding aspects for conventional airframes and blended wing bodies have been studied. Historical estimates of propfan performance have relied on legacy propeller performance and acoustics data. Current system studies make use of the modern propeller data with higher fidelity installation effects data to estimate the performance of a contemporary aircraft system, with favorable results. This paper presents the current state of high-speed propeller open rotor research within the U.S. from an overall viewpoint of the various ongoing efforts. The current projections for the technology are presented.

  1. Contract-Based Integration of Cyber-Physical Analyses

    DTIC Science & Technology

    2014-10-14

  2. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depend on the type and characteristics of the survey.

  3. Coupled Ocean-Atmosphere Dynamics and Predictability of MJO’s

    DTIC Science & Technology

    2012-09-30

    …chlorophyll modulation by the MJO. Previous studies analyzed ocean color satellite data and suggested that the primary mechanism of surface…

  4. Coupled Ocean-Atmosphere Dynamics and Predictability of MJO’s

    DTIC Science & Technology

    2012-09-30

    …mechanisms of surface chlorophyll modulation by the MJO. Previous studies analyzed ocean color satellite data and suggested that the primary mechanism of…

  5. 412th Test Engineering Group Vision for Future Knowledge Management (KM)

    DTIC Science & Technology

    2018-05-17

  6. Management of Noncompressible Hemorrhage Using Vena Cava Ultrasound

    DTIC Science & Technology

    2017-10-01

    AWARD NUMBER: W81XWH-15-1-0709. TITLE: Management of Noncompressible Hemorrhage Using Vena Cava Ultrasound. PRINCIPAL INVESTIGATOR: Donald…

  7. Characterizing Candidate Oncogenes at 8q21 in Breast Cancer

    DTIC Science & Technology

    2008-03-01

  8. Applications of Electromagnetic Waves to Problems in Nondestructive Testing and Target Identification

    DTIC Science & Technology

    2014-09-09

  9. Defence Science and Technology Strategy. Science and Technology for a Secure Canada

    DTIC Science & Technology

    2006-12-01

  10. Spectral Analysis for DIAL and Lidar Detection of TATP

    DTIC Science & Technology

    2008-08-13

  11. Far Infrared Photonic Crystals Operating in the Reststrahl Region

    DTIC Science & Technology

    2007-08-20

  12. Overcoming Resistance to Trastuzumab in HER2-Amplified Breast Cancers

    DTIC Science & Technology

    2011-08-01

  13. Second-Order Active NLO Chromophores for DNA Based Electro-Optics Materials

    DTIC Science & Technology

    2010-09-21

  14. Attribution In Influence: Relative Power And The Use Of Attribution

    DTIC Science & Technology

    2017-12-01

  15. Conversion of Clinical Data from the NABISH I and II into FITBIR

    DTIC Science & Technology

    2017-10-01

  16. Pathomechanics of Post-Traumatic OA Development in the Military Following Articular Fracture

    DTIC Science & Technology

    2017-10-01

  17. 6th International Workshop on Model Reduction in Reactive Flow

    DTIC Science & Technology

    2018-01-01

  18. Role of the U.S. Government in the Cybersecurity of Private Entities

    DTIC Science & Technology

    2017-12-01

  19. Correlation Immunity, Avalanche Features, and Other Cryptographic Properties of Generalized Boolean Functions

    DTIC Science & Technology

    2017-09-01

  20. Bioinspired Surface Treatments for Improved Decontamination: Handling andDecontamination Considerations

    DTIC Science & Technology

    2018-03-16

  1. Optimizing Sparse Representations of Kinetic Distributions via Information Theory

    DTIC Science & Technology

    2017-07-31

  2. How The Democratization Of Technology Enhances Intelligence-Led Policing And Serves The Community

    DTIC Science & Technology

    2017-12-01

  3. Navy And Marine Corps IT/IS Acquisition: A Way Forward

    DTIC Science & Technology

    2017-12-01

  4. Comparison of Airway Control Methods and Ventilation Success with an Automatic Resuscitator

    DTIC Science & Technology

    2015-10-08

  5. DoD Software Intensive Systems Development: A Hit and Miss Process

    DTIC Science & Technology

    2015-05-01

  6. Defense Advanced Research Projects Agency (DARPA) Network Archive (DNA)

    DTIC Science & Technology

    2008-12-01

    …therefore decided for an iterative development process even within such a small project. The first iteration consisted of conducting specific…

  7. The Unified Agenda: Implications for Rulemaking Transparency and Participation

    DTIC Science & Technology

    2009-07-20

  8. Epigenetic Regulation of microRNA Expression: Targeting the Triple-Negative Breast Cancer Phenotype

    DTIC Science & Technology

    2011-10-01

  9. War in Afghanistan: Campaign Progress, Political Strategy, and Issues for Congress

    DTIC Science & Technology

    2013-08-29

  10. Human Systems Integration (HSI) in Acquisition. HSI Domain Guide

    DTIC Science & Technology

    2009-08-01

    …job simulation that includes posture data, force parameters, and anthropometry. Output includes the percentage of men and women who have the strength…

  11. Parietal lobe critically supports successful paired immediate and single-item delayed memory for targets.

    PubMed

    Krumm, Sabine; Kivisaari, Sasa L; Monsch, Andreas U; Reinhardt, Julia; Ulmer, Stephan; Stippich, Christoph; Kressig, Reto W; Taylor, Kirsten I

    2017-05-01

    The parietal lobe is important for successful recognition memory, but its role is not yet fully understood. We investigated the parietal lobes' contribution to immediate paired-associate memory and delayed item-recognition memory separately for hits (targets) and correct rejections (distractors). We compared the behavioral performance of 56 patients with known parietal and medial temporal lobe dysfunction (i.e. early Alzheimer's Disease) to 56 healthy control participants in an immediate paired and delayed single item object memory task. Additionally, we performed voxel-based morphometry analyses to investigate the functional-neuroanatomic relationships between performance and voxel-based estimates of atrophy in whole-brain analyses. Behaviorally, all participants performed better identifying targets than rejecting distractors. The voxel-based morphometry analyses associated atrophy in the right ventral parietal cortex with fewer correct responses to familiar items (i.e. hits) in the immediate and delayed conditions. Additionally, medial temporal lobe integrity correlated with better performance in rejecting distractors, but not in identifying targets, in the immediate paired-associate task. Our findings suggest that the parietal lobe critically supports successful immediate and delayed target recognition memory, and that the ventral aspect of the parietal cortex and the medial temporal lobe may have complementary preferences for identifying targets and rejecting distractors, respectively, during recognition memory. Copyright © 2017. Published by Elsevier Inc.

  12. A review on breeding and genetic strategies in Iranian buffaloes (Bubalus bubalis).

    PubMed

    Safari, Abbas; Ghavi Hossein-Zadeh, Navid; Shadparvar, Abdol Ahad; Abdollahi Arpanahi, Rostam

    2018-04-01

    The aim of the current study was to review breeding progress and update information on genetic strategies in Iranian buffaloes. The Iranian buffalo is one of the vital domestic animals throughout the north, north-west, south and south-west of Iran, with measurable characteristics in both milk and meat production. The species plays an important role in the rural economy of the country due to its unique characteristics, such as resistance to diseases and parasites, a long productive lifespan and a higher capability of consuming low-quality forage. In Iran, the total production of milk and meat attributed to buffaloes is 293,000 and 24,700 tons, respectively. Selection activities and milk yield recording are carried out by the central government through the Animal Breeding Centre of Iran. The main breeding activities for Iranian buffaloes include the estimation of genetic parameters and genetic trends for performance traits using different models and methods, the estimation of economic values and selection criteria, and the analysis of population structure. Incorporating different aspects of dairy buffalo management together with improved housing, nutrition, breeding and milking is known to produce significant improvements in buffalo production. Therefore, identifying the genetic potential of Iranian buffaloes, selecting superior breeds, improving nutritional and reproductive management, and developing the education and skills of practical breeders can be useful for enhancing the performance and profitability of Iranian buffaloes.

  13. Adaptive neuro-fuzzy methodology for noise assessment of wind turbine.

    PubMed

    Shamshirband, Shahaboddin; Petković, Dalibor; Hashim, Roslan; Motamedi, Shervin

    2014-01-01

    Wind turbine noise is one of the major obstacles to the widespread use of wind energy. Noise tones can greatly increase the annoyance factor and the negative impact on human health. Noise annoyance caused by wind turbines has become an emerging problem in recent years, due to the rapid increase in the number of wind turbines, triggered by sustainable energy goals set at the national and international level. Up to now, not all aspects of the generation, propagation and perception of wind turbine noise are well understood. For a modern large wind turbine, aerodynamic noise from the blades is generally considered to be the dominant noise source, provided that mechanical noise is adequately eliminated. The sources of aerodynamic noise can be divided into tonal noise, inflow turbulence noise, and airfoil self-noise. Many analytical and experimental acoustical studies have been performed on wind turbines. Since analyzing wind turbine noise levels with numerical methods or computational fluid dynamics (CFD) can be very challenging and time consuming, soft computing techniques are preferred. To estimate the noise level of a wind turbine, this paper constructs a process that simulates wind turbine noise levels as a function of wind speed and sound frequency with an adaptive neuro-fuzzy inference system (ANFIS). This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.

  14. Adaptive Neuro-Fuzzy Methodology for Noise Assessment of Wind Turbine

    PubMed Central

    Shamshirband, Shahaboddin; Petković, Dalibor; Hashim, Roslan; Motamedi, Shervin

    2014-01-01

    Wind turbine noise is one of the major obstacles to the widespread use of wind energy. Noise tones can greatly increase the annoyance factor and the negative impact on human health. Noise annoyance caused by wind turbines has become an emerging problem in recent years, due to the rapid increase in the number of wind turbines, triggered by sustainable energy goals set at the national and international level. Up to now, not all aspects of the generation, propagation and perception of wind turbine noise are well understood. For a modern large wind turbine, aerodynamic noise from the blades is generally considered to be the dominant noise source, provided that mechanical noise is adequately eliminated. The sources of aerodynamic noise can be divided into tonal noise, inflow turbulence noise, and airfoil self-noise. Many analytical and experimental acoustical studies have been performed on wind turbines. Since analyzing wind turbine noise levels with numerical methods or computational fluid dynamics (CFD) can be very challenging and time consuming, soft computing techniques are preferred. To estimate the noise level of a wind turbine, this paper constructs a process that simulates wind turbine noise levels as a function of wind speed and sound frequency with an adaptive neuro-fuzzy inference system (ANFIS). This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method. PMID:25075621

  15. Effects of operational decisions on the diffusion of epidemic disease: A system dynamics modeling of the MERS-CoV outbreak in South Korea.

    PubMed

    Shin, Nina; Kwag, Taewoo; Park, Sangwook; Kim, Yon Hui

    2017-05-21

    We evaluated the nosocomial outbreak of Middle East Respiratory Syndrome (MERS) Coronavirus (CoV) in the Republic of Korea, 2015, from a healthcare operations management perspective. Healthcare policy in South Korea gives patients the freedom to select and visit multiple hospitals, and current policy reinforces hospitals' preference for multi-patient rooms over single-patient rooms to lower the financial burden. The existing healthcare system thus tragically contributed to 186 MERS outbreak cases, starting from a single "index patient" and extending through three generations of secondary infections. By developing a macro-level health system dynamics model, we provide empirical knowledge for examining the case from both operational and financial perspectives. In our simulation, under the base infectivity scenario, high emergency room occupancy contributed to an estimated average of 101 (917%) more infected patients compared with the low-occupancy circumstance. The economical patient room design showed an estimated 702% increase in the number of infected patients, despite the overall 98% savings in total expected costs compared to the optimal room design. This study provides, for the first time, system dynamics performance measurements from an operational perspective. Importantly, the intent of this study is to provide evidence that motivates public, private, and government healthcare administrators to recognize current shortcomings and to optimize performance as a whole system rather than as individual aspects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Physical and mathematical aspects of blood-glucose- and insulin-level kinetics in patients with coronary heart disease and high risk of its development

    NASA Astrophysics Data System (ADS)

    Denisova, Tatyana P.; Malinova, Lidia I.; Malinov, Igor A.

    2001-05-01

    The intravenous glucose tolerance test was performed to estimate the kinetics of blood glucose and insulin levels. Glucose was injected at an individually standardized dose (0.5 g per kg of body weight). Three groups of patients were examined: 1) patients with coronary heart disease verified by cicatricial alterations in the myocardium found by electrocardiographic and echocardiographic methods; 2) children of patients with transmural myocardial infarction, practically healthy at the moment of the study; 3) persons practically healthy at the moment of the study, without any indication of cardiovascular disease or non-insulin-dependent diabetes mellitus among their ancestors and relatives, who frequently were long-livers. The latter groups did not differ in age or sex. Peripheral blood glucose, immunoreactive insulin and free insulin (assayed with muscular tissue) were measured just before glucose injection (fasting) and four times after it. The discrete data obtained were approximated by high-degree polynomials, and the symmetry of the blood glucose and insulin time functions was estimated. A slowing of the degradation of insulin circulating in peripheral blood and a shortening of the second phase of insulin secretion were established analytically. This indicates a complex mechanism of insulin alterations in atherosclerosis, consisting not only of insulin resistance of peripheral tissues but also of a decrease of plastic processes in insulin-generating cells.

  17. Estimating the Effects of Astronaut Career Ionizing Radiation Dose Limits on Manned Interplanetary Flight Programs

    NASA Technical Reports Server (NTRS)

    Koontz, Steven L.; Rojdev, Kristina; Valle, Gerard D.; Zipay, John J.; Atwell, William S.

    2013-01-01

    The Hybrid Inflatable DSH combined with electric propulsion and high-power solar-electric power systems offers a near TRL-now solution to the space radiation crew dose problem that is an inevitable aspect of long-term manned interplanetary flight. Spreading program development and launch costs over several years can lead to a spending plan that fits within NASA's current and future budgetary limitations, enabling early manned interplanetary operations with space radiation dose control in the near future, while biomedical research, nuclear electric propulsion and active shielding research and development proceed in parallel. Furthermore, future work should encompass laboratory validation of HZETRN calculations, as previous laboratory investigations have not considered large shielding thicknesses and the calculations presented at these thicknesses are currently performed via extrapolation.

  18. A Robust Method to Detect Zero Velocity for Improved 3D Personal Navigation Using Inertial Sensors

    PubMed Central

    Xu, Zhengyi; Wei, Jianming; Zhang, Bo; Yang, Weijun

    2015-01-01

    This paper proposes a robust zero velocity (ZV) detector algorithm to accurately identify the stationary periods in a gait cycle. The proposed algorithm adopts an effective gait cycle segmentation method and introduces a Bayesian network (BN) model based on the measurements of inertial sensors and kinesiology knowledge to infer the ZV period. During the detected ZV period, an Extended Kalman Filter (EKF) is used to estimate the error states and calibrate the position error. The experiments reveal that the removal rate of ZV false detections by the proposed method increases by 80% compared with the traditional method at high walking speed. Furthermore, based on the detected ZV, the Personal Inertial Navigation System (PINS) algorithm aided by the EKF performs better, especially in the altitude component. PMID:25831086
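
    For context, the traditional family of ZV detectors is essentially a sliding-window threshold test on the inertial signals. A minimal Python sketch of such a baseline detector (not the paper's Bayesian-network detector; the window length and thresholds are arbitrary assumptions):

      import numpy as np

      def zero_velocity_mask(acc, gyro, fs, win=0.1, g=9.81, acc_tol=0.4, gyro_tol=0.5):
          """Sliding-window threshold ZV detector: a sample is stationary when, over
          the window, the accelerometer magnitude stays near gravity and the
          angular rate stays low."""
          n, w = len(acc), max(1, int(win * fs))
          acc_mag = np.linalg.norm(acc, axis=1)
          gyro_mag = np.linalg.norm(gyro, axis=1)
          mask = np.zeros(n, dtype=bool)
          for i in range(n):
              s = slice(max(0, i - w // 2), min(n, i + w // 2 + 1))
              mask[i] = (np.abs(acc_mag[s] - g).max() < acc_tol and gyro_mag[s].max() < gyro_tol)
          return mask

      # Hypothetical 1 s of foot-mounted IMU data at 100 Hz: stance phase, then swing phase.
      fs, t = 100, np.arange(100) / 100.0
      acc = np.column_stack([np.zeros(100), np.zeros(100), np.full(100, 9.81)])
      gyro = np.zeros((100, 3))
      acc[50:, 0] += 3 * np.sin(2 * np.pi * 2 * t[50:])     # swing-phase acceleration
      gyro[50:, 1] += 2.0                                   # swing-phase rotation, rad/s
      zv = zero_velocity_mask(acc, gyro, fs)
      print("stationary samples detected:", int(zv.sum()), "of", len(zv))

    An EKF correction such as the one in the paper would then be applied only where the mask is true, using the pseudo-measurement that the velocity is zero during those samples.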

  19. Concept design of a time-of-flight spectrometer for the measurement of the energy of alpha particles.

    PubMed

    García-Toraño, E

    2018-04-01

    Knowledge of the energies of the alpha particles emitted in the radioactive decay of a nuclide is a key factor in the construction of its decay scheme. Virtually all existing data are based on a few absolute measurements made by magnetic spectrometry (MS), to which most other MS measurements are traced. An alternative solution would be the use of time-of-flight detectors. This paper discusses the main aspects to be considered in the design of such detectors, and the performance that could reasonably be expected. Based on the concepts discussed here, it is estimated that an energy resolution of about 2.5 keV may be attainable with a good quality source. Copyright © 2017 Elsevier Ltd. All rights reserved.
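
    The scaling behind such an estimate is simple non-relativistic kinematics: E = ½·m·(L/t)², so for a well-known flight path the relative energy resolution is roughly 2·Δt/t. A short Python sketch of the orders of magnitude involved (the 1 m flight path and the 5485 keV alpha energy are assumptions for illustration, not the paper's design values):

      import numpy as np

      mc2_keV = 3.727379e6          # alpha-particle rest-mass energy, keV
      c = 299792458.0               # speed of light, m/s
      L = 1.0                       # assumed flight path, m
      E_keV = 5485.0                # a typical alpha energy, keV

      beta = np.sqrt(2.0 * E_keV / mc2_keV)      # non-relativistic velocity in units of c
      t = L / (beta * c)                         # flight time
      dt = 0.5 * (2.5 / E_keV) * t               # timing precision needed for 2.5 keV resolution
      print(f"flight time about {t * 1e9:.1f} ns; timing precision needed about {dt * 1e12:.0f} ps")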

  20. Hotol and Saenger are good political trump cards

    NASA Astrophysics Data System (ADS)

    Ruppe, Harry O.

    Political and technological aspects of proposals for ESA reusable and/or SSTO launch vehicles (LVs) are examined in a critical review. The lack of reliable performance and cost estimates for such unconventional LV designs as Hotol, Saenger II, LART, ADV, and EARL is pointed out, and it is argued that progress toward the ESA goal of greater European space autonomy could be seriously endangered by abandoning or underfunding the current Ariane/Hermes LV program. The cost and reliability of expendable and reusable LV systems are discussed; two-stage and hybrid air-breathing engine concepts are compared; and the need for fundamental in-depth planning studies based on presently available technology or realistic projections is stressed. Long-term funding of such research at about 5 percent of present Ariane/Hermes levels is recommended.

  1. 77 FR 58767 - Definitions Relating to Electronic Orders and Prescriptions for Controlled Substances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-24

    ... of Certified Public Accountants (AICPA) Statement of Auditing Standards (SAS) 70 criteria. Signing... professional service performed by a qualified certified public accountant to evaluate one or more aspects of... performed by a qualified certified public accountant to evaluate one or more aspects of Web sites. [75 FR...

  2. Performance Measurement in Helicopter Training and Operations.

    ERIC Educational Resources Information Center

    Prophet, Wallace W.

    For almost 15 years, HumRRO Division No. 6 has conducted an active research program on techniques for measuring the flight performance of helicopter trainees and pilots. This program addressed both the elemental aspects of flying (i.e., maneuvers) and the mission- or goal-oriented aspects. A variety of approaches has been investigated, with the…

  3. Specifying and Refining a Measurement Model for a Simulation-Based Assessment. CSE Report 619.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    2004-01-01

    The challenges of modeling students' performance in simulation-based assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance in a complex assessment. This paper describes a Bayesian approach to modeling and estimating…

  4. Dynamic estimator for determining operating conditions in an internal combustion engine

    DOEpatents

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-01-05

    Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.
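
    The data flow described here is a simple cycle-to-cycle recursion: retrieve the stored estimate for cycle k-1, combine it with the actuator settings applied during that cycle, and store the new estimate for use at the next cycle. A toy Python sketch of that loop (the linear "dynamics", the actuator names fuel_mg and spark_deg, and all coefficients are placeholders, not the patent's model):

      from collections import deque

      class CycleEstimator:
          """Toy cycle-to-cycle estimator: the current-cycle estimate is computed from
          the stored previous-cycle estimate and the actuator settings applied then."""
          def __init__(self, initial_work=300.0):
              self.memory = deque([{"work_J": initial_work}], maxlen=1)   # stored previous estimate

          def update(self, actuators):
              prev = self.memory[-1]
              # placeholder dynamics: indicated work relaxes toward a value set by the
              # injected fuel mass and spark advance, with cycle-to-cycle coupling
              target = 40.0 * actuators["fuel_mg"] - 2.0 * abs(actuators["spark_deg"] - 10.0)
              est = {"work_J": 0.7 * prev["work_J"] + 0.3 * target}
              self.memory.append(est)                                      # store for the next cycle
              return est

      est = CycleEstimator()
      for cycle, fuel in enumerate([10.0, 10.5, 11.0, 11.0, 9.5]):
          out = est.update({"fuel_mg": fuel, "spark_deg": 12.0})
          print(f"cycle {cycle}: estimated indicated work {out['work_J']:.1f} J")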

  5. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analysis process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer-aided design and manufacturing, self-checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed, as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
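
    The weight-driven models mentioned here are typically parametric cost-estimating relationships (CERs) of the power-law form cost = a·weight^b, fit in log-log space, with the residual scatter supplying exactly the kind of confidence statement the paragraph calls for. A small Python sketch on made-up data (all numbers are illustrative, not from any NASA model):

      import numpy as np

      # Hypothetical weight-based CER of the form cost = a * weight^b, fit by
      # ordinary least squares in log-log space. Data points are made up.
      weight_kg = np.array([50.0, 120.0, 300.0, 650.0, 900.0])
      cost_musd = np.array([14.0, 26.0, 55.0, 95.0, 130.0])

      b, log_a = np.polyfit(np.log(weight_kg), np.log(cost_musd), 1)
      a = np.exp(log_a)
      print(f"CER: cost ~= {a:.2f} * weight^{b:.2f}  [M$]")

      # Point estimate plus a crude confidence band from the log-space residual scatter.
      new_weight = 400.0
      point = a * new_weight**b
      resid = np.log(cost_musd) - (log_a + b * np.log(weight_kg))
      lo, hi = point * np.exp(-1.64 * resid.std()), point * np.exp(1.64 * resid.std())
      print(f"estimate for {new_weight:.0f} kg: {point:.0f} M$ (rough 90% band {lo:.0f}-{hi:.0f} M$)")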

  6. Analysis of thin fractures with GPR: from theory to practice

    NASA Astrophysics Data System (ADS)

    Arosio, Diego; Zanzi, Luigi; Longoni, Laura; Papini, Monica

    2017-04-01

    Whenever we perform a GPR survey to investigate a rocky medium, whether the ultimate purpose is to study the stability of a rock slope or to determine the soundness of a quarried rock block, we mainly want to detect any fracture within the investigated medium and, if possible, to estimate the parameters of the fractures, namely thickness and filling material. In most practical cases, rock fracture thicknesses are very small compared to the wavelength of the electromagnetic radiation generated by GPR systems. In such cases, fractures are to be considered as thin beds, i.e. two interfaces whose distance is smaller than the GPR resolving capability, and the reflected signal is the sum of the electromagnetic reverberations within the bed. Accordingly, fracture parameters are encoded in the complex thin-bed response, and in this work we propose a methodology based on deterministic deconvolution that processes amplitude and phase information in the frequency domain to estimate fracture parameters. We first present some theoretical aspects related to the thin-bed response and a sensitivity analysis concerning fracture thickness and filling. Secondly, we deal with GPR datasets collected both during laboratory experiments and in the facilities of quarrying activities. In the lab tests, fractures were simulated by placing materials with known electromagnetic parameters and controlled thickness between two small marble blocks, whereas the field GPR surveys were performed on larger quarried ornamental stone blocks before they underwent the cutting process. We show that, with basic pre-processing and the choice of a proper deconvolving signal, results are encouraging, although an ambiguity between thickness and filling estimates exists when no a priori information is available. Results can be improved by performing CMP radar surveys, which provide additional information (i.e., the variation of the thin-bed response versus offset) at the expense of acquisition effort and of more complex pre-processing sequences.
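
    The processing step named here, deterministic deconvolution in the frequency domain, amounts to dividing the spectrum of the recorded trace by the spectrum of a known (e.g. measured) source wavelet, with some regularization where the wavelet spectrum is weak. A minimal Python sketch on a synthetic thin-bed trace (the sampling interval, centre frequency, reflection coefficients and water-level value are assumptions, not the paper's settings):

      import numpy as np

      def deterministic_deconvolution(trace, wavelet, water_level=0.01):
          """Frequency-domain deterministic deconvolution of a trace by a known source
          wavelet, with simple water-level regularization of weak spectral values."""
          n = len(trace)
          T, W = np.fft.rfft(trace, n), np.fft.rfft(wavelet, n)
          wl = water_level * np.abs(W).max()
          W_reg = np.where(np.abs(W) < wl, wl * np.exp(1j * np.angle(W)), W)
          return np.fft.irfft(T / W_reg, n)

      # Synthetic thin bed: two closely spaced interfaces with opposite-polarity
      # reflection coefficients, convolved with a Ricker-like source wavelet.
      dt = 0.05e-9                                         # 0.05 ns sampling interval (assumed)
      t = np.arange(400) * dt
      f0 = 1.0e9                                           # 1 GHz centre frequency (assumed)
      arg = (np.pi * f0 * (t - 2e-9)) ** 2
      wavelet = (1 - 2 * arg) * np.exp(-arg)

      reflectivity = np.zeros_like(t)
      reflectivity[150] = 0.3                              # top of the fracture
      reflectivity[156] = -0.3                             # bottom of the fracture, 0.3 ns later
      trace = np.convolve(reflectivity, wavelet)[:len(t)]

      estimate = deterministic_deconvolution(trace, wavelet)
      print("largest recovered spikes at samples:", sorted(np.argsort(np.abs(estimate))[-2:]))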

  7. An algebraic aspect of Pareto mixture parameter estimation using censored sample: A Bayesian approach.

    PubMed

    Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya

    2018-04-27

    Applications of the Pareto distribution are common in reliability, survival and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each subgroup is characterized by the same functional form with distinct unknown shape and scale parameters. Bayes estimators have been derived using flat and conjugate priors under a squared error loss function. Standard errors have also been derived for the Bayes estimators. An interesting feature of this study is the preparation of the components of the Fisher information matrix.
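
    As a simplified companion to the Bayesian set-up described above, the sketch below computes the Bayes estimator of a single Pareto shape parameter under squared error loss, i.e. the posterior mean, using a conjugate gamma prior and a complete (uncensored) sample with known scale. The two-component mixture and the censoring treated in the paper are not reproduced, and the prior hyperparameters are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bayes_pareto_shape(x, scale, a0=1.0, b0=1.0):
        """Bayes estimate of the Pareto shape parameter under squared error
        loss (the posterior mean) for a complete sample with known scale.
        With a Gamma(a0, b0) prior (rate parameterisation), the posterior is
        Gamma(a0 + n, b0 + sum(log(x/scale))). Single-component illustration."""
        x = np.asarray(x)
        n = len(x)
        t = np.sum(np.log(x / scale))
        a_post, b_post = a0 + n, b0 + t
        post_mean = a_post / b_post               # Bayes estimator (SEL)
        post_sd = np.sqrt(a_post) / b_post        # posterior standard deviation
        return post_mean, post_sd

    # Simulated classical Pareto data with shape 2.5 and scale 1.0
    true_shape, scale = 2.5, 1.0
    x = scale * (1.0 + rng.pareto(true_shape, size=200))
    print(bayes_pareto_shape(x, scale))
    ```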

  8. Quality assessment and control of finite element solutions

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Babuska, Ivo

    1987-01-01

    Status and some recent developments in the techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects including: the major types of errors in the finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; the feedback and adaptive strategies for improving the finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Also, future directions for research needed to make error estimation and adaptive improvement practical are identified.

  9. Cumulus cloud model estimates of trace gas transports

    NASA Technical Reports Server (NTRS)

    Garstang, Michael; Scala, John; Simpson, Joanne; Tao, Wei-Kuo; Thompson, A.; Pickering, K. E.; Harris, R.

    1989-01-01

    Draft structures in convective clouds are examined with reference to the results of the NASA Amazon Boundary Layer Experiments (ABLE IIa and IIb) and calculations based on a multidimensional time dependent dynamic and microphysical numerical cloud model. It is shown that some aspects of the draft structures can be calculated from measurements of the cloud environment. Estimated residence times in the lower regions of the cloud based on surface observations (divergence and vertical velocities) are within the same order of magnitude (about 20 min) as model trajectory estimates.

  10. The utilization of the climatic chamber to evaluate the influence of ambient conditions on endocrine, nervous and immune systems of rats.

    PubMed

    Baran, Arkadiusz; Jakiel, Grzegorz; Wójcik, Grazyna

    2008-01-01

    The adaptation of an organism to a change in environmental conditions is a complex and, in some aspects, poorly understood physiological process. The activating influence of stress on the sympathetic nervous system, the hypothalamic-pituitary-adrenal axis and the suppression of TSH, LH and FSH release is well known. The interplay of communication between the endocrine and immune systems plays an essential role in modulating the response to stress-related mediators. Many contradictory and incoherent experimental results stem from the various methodologies of creating changes in environmental conditions, the ways of collecting blood samples, which themselves influence stress mediators, the assessment of the influence of many factors on reproductive functions, and the performance of experiments without synchronization with the reproductive cycle. The review focuses on the presentation of simple and repeatable methods for developing adaptation stress to changed environmental conditions (temperature, oxygenation, humidity) and on a technique of blood collection during hour-long estimation of interactions between the endocrine, nervous and immune systems. We place emphasis on appropriate ways of performing experiments on female rats with regard to the choice of a suitable phase of the reproductive cycle, as well as on methods of anaesthesia and microsurgical techniques of vein catheterisation for repeated blood sampling. Performing all phases of the experiment in this way allows us to estimate only the influence of environmental conditions and to eliminate interfering factors introduced while preparing the animal for the experiment.

  11. Tree-based flood damage modeling of companies: Damage processes and model performance

    NASA Astrophysics Data System (ADS)

    Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi

    2017-07-01

    Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain by using an increasing amount of training data and a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing amount of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments are expected to improve the model performance more than a mere increase in data.
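
    A minimal sketch of the sector-specific random forest workflow described above, using scikit-learn: one forest per company sector, with impurity-based variable importances and a hold-out error. The survey variables, the synthetic damage relationship, and the sample sizes are invented placeholders, not the study's data.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)

    # Hypothetical survey table: damage-influencing variables and building damage.
    n = 800
    df = pd.DataFrame({
        "water_depth_m": rng.uniform(0.0, 3.0, n),
        "duration_h":    rng.uniform(1.0, 96.0, n),
        "precaution":    rng.integers(0, 2, n),
        "company_size":  rng.integers(1, 250, n),
        "sector":        rng.choice(["manufacturing", "commercial",
                                     "financial", "service"], n),
    })
    df["building_damage"] = (20_000 * df.water_depth_m
                             + 150 * df.duration_h
                             - 8_000 * df.precaution
                             + rng.normal(0, 5_000, n)).clip(lower=0)

    # One random forest per sector, mirroring the sector-specific evaluation.
    for sector, part in df.groupby("sector"):
        X = part[["water_depth_m", "duration_h", "precaution", "company_size"]]
        y = part["building_damage"]
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, rf.predict(X_te))
        importance = dict(zip(X.columns, rf.feature_importances_.round(2)))
        print(f"{sector:13s} MAE={mae:8.0f}  importance={importance}")
    ```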

  12. Levels and Types of Alcohol Biomarkers in DUI and Clinic Samples for Estimating Workplace Alcohol Problems

    PubMed Central

    Marques, Paul R

    2013-01-01

    Widespread concern about illicit drugs as an aspect of workplace performance potentially diminishes attention on employee alcohol use. Alcohol is the dominant drug contributing to poor job performance; it also accounts for a third of the worldwide public health burden. Evidence from public roadways – a workplace for many – provides an example for work-related risk exposure and performance lapses. In most developed countries, alcohol is involved in 20-35% of fatal crashes; drugs other than alcohol are less prominently involved in fatalities. Alcohol biomarkers can improve detection by extending the timeframe for estimating problematic exposure levels and thereby provide better information for managers. But what levels and which markers are right for the workplace? In this report, an established high-sensitivity proxy for alcohol-driving risk proclivity is used: an average 8 months of failed blood alcohol concentration (BAC) breath tests from alcohol ignition interlock devices. Higher BAC test fail rates are known to presage higher rates of future impaired-driving convictions (DUI). Drivers in alcohol interlock programs log 5-7 daily BAC tests; in 12 months, this yields thousands of samples. Also, higher program entry levels of alcohol biomarkers predict a higher likelihood of failed interlock BAC tests during subsequent months. This report summarizes selected biomarkers’ potential for workplace screening. Markers include phosphatidylethanol (PEth), percent carbohydrate deficient transferrin (%CDT), gammaglutamyltransferase (GGT), gamma %CDT (γ%CDT), and ethylglucuronide (EtG) in hair. Clinical cutoff levels and median/mean levels of these markers in abstinent people, the general population, DUI drivers, and rehabilitation clinics are summarized for context. PMID:22311827

  13. Built-up land mapping capabilities of the ASTER and Landsat ETM+ sensors in coastal areas of southeastern China

    NASA Astrophysics Data System (ADS)

    Xu, Hanqiu; Huang, Shaolin; Zhang, Tiejun

    2013-10-01

    Worldwide urbanization has accelerated the expansion of urban built-up land and resulted in substantial negative impacts on the global environment. Precisely measuring urban sprawl is becoming an increasing need. Among satellite-based earth observation systems, Landsat and ASTER data are the most suitable for mesoscale measurements of urban change. Nevertheless, to date the difference in built-up land mapping capability between the two sensors has not been clear. Therefore, this study compared the performance of the Landsat-7 ETM+ and ASTER sensors for built-up land mapping in the coastal areas of southeastern China. The comparison was implemented on three date-coincident image pairs and achieved using three approaches: per-band-based, index-based, and classification-based comparisons. The index used is the Index-based Built-up Index (IBI), while the classification algorithm employed is the Support Vector Machine (SVM). Results show that, in the study areas, ETM+ and ASTER have an overall similar performance in built-up land mapping but also differ in several aspects. The IBI values determined from ASTER were consistently higher than those from ETM+, by up to 45.54% in terms of percentage difference. ASTER also estimates more built-up land area than ETM+, by 5.9-6.3% with the IBI-based approach and 3.9-6.1% with the SVM classification. These performance differences are attributed to the differences in spectral response functions and spatial resolution between the corresponding spectral bands of the two sensors.
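
    The sketch below computes the Index-based Built-up Index from its three component indices (NDBI, SAVI, MNDWI), following the commonly cited formulation of the IBI. The reflectance arrays, the SAVI soil factor of 0.5, and the simple IBI > 0 built-up threshold are illustrative assumptions rather than the processing applied in this study.

    ```python
    import numpy as np

    def ibi(green, red, nir, swir, L=0.5, eps=1e-6):
        """Index-based Built-up Index as commonly formulated from three
        thematic indices (built-up, vegetation, water). Inputs are reflectance
        arrays for the green, red, NIR and SWIR bands; `L` is the SAVI soil
        adjustment factor."""
        ndbi  = (swir - nir) / (swir + nir + eps)
        savi  = (nir - red) * (1 + L) / (nir + red + L + eps)
        mndwi = (green - swir) / (green + swir + eps)
        return (ndbi - (savi + mndwi) / 2) / (ndbi + (savi + mndwi) / 2 + eps)

    # Placeholder reflectance patches standing in for ETM+ / ASTER bands.
    rng = np.random.default_rng(7)
    shape = (64, 64)
    green, red = rng.uniform(0.05, 0.3, shape), rng.uniform(0.05, 0.35, shape)
    nir, swir = rng.uniform(0.1, 0.5, shape), rng.uniform(0.1, 0.45, shape)

    ibi_map = ibi(green, red, nir, swir)
    built_up_mask = ibi_map > 0      # a common, simple thresholding choice
    print(f"built-up fraction: {built_up_mask.mean():.2%}")
    ```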

  14. Cost estimation and analysis using the Sherpa Automated Mine Cost Engineering System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stebbins, P.E.

    1993-09-01

    The Sherpa Automated Mine Cost Engineering System is a menu-driven software package designed to estimate capital and operating costs for proposed surface mining operations. The program is engineering based (as opposed to statistically based), meaning that all equipment, manpower, and supply requirements are determined from deposit geology, project design and mine production information using standard engineering techniques. These requirements are used in conjunction with equipment, supply, and labor cost databases internal to the program to estimate all associated costs. Because virtually all on-site cost parameters are interrelated within the program, Sherpa provides an efficient means of examining the impact of changes in the equipment mix on total capital and operating costs. If any aspect of the operation is changed, Sherpa immediately adjusts all related aspects as necessary. For instance, if the user wishes to examine the cost ramifications of selecting larger trucks, the program not only considers truck purchase and operation costs, it also automatically and immediately adjusts excavator requirements, operator and mechanic needs, repair facility size, haul road construction and maintenance costs, and ancillary equipment specifications.

  15. [Reflection of estimating postmortem interval in forensic entomology and the Daubert standard].

    PubMed

    Xie, Dan; Peng, Yu-Long; Guo, Ya-Dong; Cai, Ji-Feng

    2013-08-01

    Estimating the postmortem interval (PMI) has always been an emphasis and a difficulty in forensic practice, and forensic entomology plays an indispensable role in it. Recently, the theories and technologies of forensic entomology have become increasingly rich, but many problems remain in research and practice. With the introduction of the Daubert standard, greater demands are placed on the reliability and accuracy of PMI estimation by forensic entomology. This review summarizes the application of the Daubert standard to several aspects of ecology, quantitative genetics, population genetics, molecular biology, and microbiology in the practice of forensic entomology. It builds a bridge between basic research and forensic practice to provide higher accuracy for estimating the postmortem interval by forensic entomology.

  16. The 'F-complex' and MMN tap different aspects of deviance.

    PubMed

    Laufer, Ilan; Pratt, Hillel

    2005-02-01

    To compare the 'F(fusion)-complex' with the Mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in case of front-fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.

  17. Environmental Sciences: YIP: Combining Remotely Sensed Vegetation Data and Ecohydrologic Process Models to Improve Estimation of Root Zone Moisture at Spatial Scales Relevant to the Army

    DTIC Science & Technology

    2016-04-01

    … vegetation arising due to contrasts in incoming solar radiation that is associated with hillslope aspects. At lower elevations, shrubs can be present on North… whereas shrubs are more prevalent on South-facing aspects. At watershed scales, the transition from grasses at lower elevations to coniferous evergreens… Mountain sage communities, adapted to cooler temperatures, are also found at higher elevations in RCEW, with ceanothus shrubs common …

  18. Dual Production Sources in the Procurement of Weapon Systems: A Policy Analysis

    DTIC Science & Technology

    1983-11-01

  19. A state space approach for piecewise-linear recurrent neural networks for identifying computational dynamics from neural measurements.

    PubMed

    Durstewitz, Daniel

    2017-06-01

    The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable the recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties.
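
    To make the model class concrete, the sketch below simulates a latent piecewise-linear RNN state-space model of the general form used in this line of work: a stable linear term plus a ReLU-coupled term in the latent dynamics, with Gaussian noise and linear observations of the rectified states. All parameter values are arbitrary, and the EM/Laplace inference scheme itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_plrnn(T=500, dz=3, dx=10, noise_z=0.05, noise_x=0.1):
        """Simulate a latent piecewise-linear RNN state-space model:
            z_t = A z_{t-1} + W max(z_{t-1}, 0) + h + eps_t
            x_t = B max(z_t, 0) + eta_t
        This reflects the general PLRNN form; all parameter values here are
        arbitrary illustrations, not estimates from data."""
        A = 0.9 * np.eye(dz)                      # stable diagonal linear part
        W = 0.3 * rng.standard_normal((dz, dz))   # ReLU coupling
        np.fill_diagonal(W, 0.0)
        h = 0.1 * rng.standard_normal(dz)
        B = rng.standard_normal((dx, dz))         # observation loadings

        z = np.zeros((T, dz))
        x = np.zeros((T, dx))
        for t in range(1, T):
            relu = np.maximum(z[t - 1], 0.0)
            z[t] = A @ z[t - 1] + W @ relu + h + noise_z * rng.standard_normal(dz)
            x[t] = B @ np.maximum(z[t], 0.0) + noise_x * rng.standard_normal(dx)
        return z, x

    z, x = simulate_plrnn()
    print(z.shape, x.shape)      # latent trajectories and observed series
    ```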

  20. The variability of software scoring of the CDMAM phantom associated with a limited number of images

    NASA Astrophysics Data System (ADS)

    Yang, Chang-Ying J.; Van Metter, Richard

    2007-03-01

    Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability for the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and type of error estimated for the experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
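
    A simplified sketch of the random-selection variability analysis described above: from 36 scored images, 8 are drawn at random, a per-thickness detection fraction and an interpolated threshold are computed, and the draw is repeated 3000 times to see the spread. The detection matrix, the 62.5% criterion, and the linear interpolation are stand-ins for the actual CDCOM scorings and psychometric post-analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Illustrative stand-in for CDCOM scorings of one disk diameter:
    # rows = 36 phantom images, columns = disk thicknesses, entries = detected?
    thicknesses = np.array([0.03, 0.04, 0.05, 0.08, 0.10, 0.16, 0.25, 0.36])  # mm
    p_detect = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97, 0.99])
    detections = rng.random((36, thicknesses.size)) < p_detect

    def threshold_thickness(subset, criterion=0.625):
        """Interpolate the thickness at which the detection fraction of the
        selected images crosses the criterion (a simplified stand-in for the
        psychometric-fit post-analysis applied to CDCOM output)."""
        frac = np.maximum.accumulate(subset.mean(axis=0))  # enforce non-decreasing
        return float(np.interp(criterion, frac, thicknesses))

    # Repeat the 8-image random selection many times to see the spread in the
    # estimated threshold visibility thickness (TVT).
    trials = np.array([
        threshold_thickness(detections[rng.choice(36, size=8, replace=False)])
        for _ in range(3000)
    ])
    print(f"TVT median {np.median(trials):.3f} mm, "
          f"5th-95th percentile {np.percentile(trials, 5):.3f}"
          f"-{np.percentile(trials, 95):.3f} mm")
    ```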

  1. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
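
    A minimal sketch of one of the parametric approaches discussed above, maximum likelihood for left-censored lognormal data: detected values contribute a density term and non-detects contribute the probability of falling below the detection limit. The simulated concentrations and the single detection limit are illustrative; the same pattern applies to Weibull or gamma fits by swapping the distribution.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(11)

    # Simulated concentrations with a single detection limit.
    true_mu, true_sigma, dl = 1.0, 0.8, 1.5
    x = rng.lognormal(true_mu, true_sigma, size=150)
    observed = np.where(x >= dl, x, np.nan)      # non-detects reported as NaN
    detected = ~np.isnan(observed)

    def negloglik(params):
        """Left-censored lognormal log-likelihood: detects use the density,
        non-detects use the probability of falling below the detection limit."""
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                # keep sigma positive
        ll_det = stats.lognorm.logpdf(observed[detected], s=sigma,
                                      scale=np.exp(mu)).sum()
        n_cens = (~detected).sum()
        ll_cens = n_cens * stats.lognorm.logcdf(dl, s=sigma, scale=np.exp(mu))
        return -(ll_det + ll_cens)

    res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    mean_hat = np.exp(mu_hat + 0.5 * sigma_hat**2)   # lognormal mean
    sd_hat = mean_hat * np.sqrt(np.exp(sigma_hat**2) - 1.0)
    print(f"estimated mean {mean_hat:.2f}, sd {sd_hat:.2f} "
          f"({(~detected).mean():.0%} below detection limit)")
    ```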

  2. PONS2train: tool for testing the MLP architecture and local traning methods for runoff forecast

    NASA Astrophysics Data System (ADS)

    Maca, P.; Pavlasek, J.; Pech, P.

    2012-04-01

    The purpose of the presented poster is to introduce PONS2train, a tool developed for runoff prediction via the multilayer perceptron (MLP). The software application enables the implementation of 12 different MLP transfer functions, the comparison of 9 local training algorithms and, finally, the evaluation of MLP performance via 17 selected model evaluation metrics. The PONS2train software is written in the C++ programming language. Its implementation consists of 4 classes. The NEURAL_NET and NEURON classes implement the MLP, the CRITERIA class estimates the model evaluation metrics used for performance evaluation on testing and validation datasets, and the DATA_PATTERN class prepares the validation, testing and calibration datasets. The software application uses the LAPACK, BLAS and ARMADILLO C++ linear algebra libraries. PONS2train implements the first-order local optimization algorithms: standard on-line and batch back-propagation with learning rate combined with momentum and its variants with a regularization term, Rprop, and standard batch back-propagation with variable momentum and learning rate. The second-order local training algorithms are the Levenberg-Marquardt algorithm with and without regularization and four variants of scaled conjugate gradients. Other important PONS2train features are: multi-run training, weight saturation control, early stopping of training, and analysis of the MLP weights. Weight initialization is done via two different methods: random sampling from a uniform distribution on an open interval, or the Nguyen-Widrow method. The data patterns can be transformed via linear and nonlinear transformations. The runoff forecast case study focuses on the PONS2train implementation and shows different aspects of MLP training, MLP architecture estimation, neural network weight analysis and model uncertainty estimation.
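
    PONS2train itself is a C++ tool; as a rough analogue of one training configuration it supports (a single-hidden-layer MLP with a tanh transfer function, early stopping, and validation metrics), the sketch below uses scikit-learn on invented lagged-precipitation patterns. The data, the network size, and the chosen metrics (RMSE and Nash-Sutcliffe efficiency) are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)

    # Hypothetical rainfall-runoff patterns: lagged precipitation as inputs.
    n, lags = 2000, 5
    precip = rng.gamma(shape=0.7, scale=4.0, size=n + lags)
    X = np.column_stack([precip[i:n + i] for i in range(lags)])
    runoff = 0.2 * X @ np.array([0.05, 0.1, 0.2, 0.3, 0.35]) + rng.normal(0, 0.2, n)

    X_cal, X_val, y_cal, y_val = train_test_split(X, runoff, test_size=0.3,
                                                  random_state=0)

    # A scikit-learn stand-in for one MLP configuration: one hidden layer,
    # tanh transfer function, early stopping on an internal validation split.
    mlp = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                       solver="adam", early_stopping=True, max_iter=2000,
                       random_state=0).fit(X_cal, y_cal)

    pred = mlp.predict(X_val)
    rmse = np.sqrt(np.mean((pred - y_val) ** 2))
    nse = 1.0 - np.sum((pred - y_val) ** 2) / np.sum((y_val - y_val.mean()) ** 2)
    print(f"validation RMSE={rmse:.3f}, Nash-Sutcliffe efficiency={nse:.3f}")
    ```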

  3. Plausible combinations: An improved method to evaluate the covariate structure of Cormack-Jolly-Seber mark-recapture models

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.

    2013-01-01

    Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
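
    The sketch below illustrates why the candidate set grows so quickly when Cormack-Jolly-Seber models are built as combinations of separate survival (phi) and recapture (p) sub-models: every covariate subset for phi can be paired with every subset for p. The covariate names are hypothetical, and the model fitting and information-criterion ranking that a real search strategy would perform are deliberately omitted.

    ```python
    from itertools import chain, combinations, product

    def all_subsets(covariates):
        """All covariate subsets (including the intercept-only sub-model)."""
        return list(chain.from_iterable(combinations(covariates, k)
                                        for k in range(len(covariates) + 1)))

    # Hypothetical covariates for the survival (phi) and recapture (p) sub-models.
    phi_covariates = ["sex", "age", "ice_cover", "body_condition"]
    p_covariates = ["effort", "year", "sex"]

    phi_submodels = all_subsets(phi_covariates)
    p_submodels = all_subsets(p_covariates)

    # Every CJS model is one phi sub-model paired with one p sub-model, so the
    # candidate set multiplies quickly: 2^4 * 2^3 = 128 models here.
    candidate_models = list(product(phi_submodels, p_submodels))
    print(f"{len(phi_submodels)} phi sub-models x {len(p_submodels)} p sub-models "
          f"= {len(candidate_models)} candidate CJS models")

    # A search strategy would fit only a subset of these (e.g., stepwise over
    # plausible combinations) and base covariate-importance summaries on the
    # models actually fitted; the fitting itself is omitted here.
    ```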

  4. Assessment of Voting Assistance Programs for Calendar Year 2011

    DTIC Science & Technology

    2012-03-30

    … We reviewed the Service IG reports and certain supporting data, as needed; met with senior IG representatives from the Army, Navy, Air Force …

  5. The 6,000 Mile Screwdriver is Getting Longer: Washington’s Strengthening Grip

    DTIC Science & Technology

    2012-04-01

  6. Launch and Recovery System Literature Review

    DTIC Science & Technology

    2010-12-01

  7. Feasibility of an Extended-Duration Aerial Platform Using Autonomous Multi-Rotor Vehicle Swapping and Battery Management

    DTIC Science & Technology

    2017-12-01

    By Alexander G. Williams, December 2017.

  8. Human Capital Management of Air Force SOF: Leadership Identification, Selection and Cultivation

    DTIC Science & Technology

    2017-12-01

    By Paul R. Andrews Jr. and Brett A. Stitt, December 2017.

  9. The RADAR Test Methodology: Evaluating a Multi-Task Machine Learning System with Humans in the Loop

    DTIC Science & Technology

    2006-10-01

  10. Deep Ultraviolet Laser Imaging for Biology

    DTIC Science & Technology

    2008-08-15

  11. Radiative Transfer in Submerged Macrophyte Canopies

    DTIC Science & Technology

    2001-09-30

  12. Social and Cognitive Functioning as Risk Factors for Suicide: A Historical-Prospective Cohort Study

    DTIC Science & Technology

    2011-04-01

  13. Adaptive Training in an Unmanned Aerial Vehicle: Examination of Several Candidate Real-time Metrics

    DTIC Science & Technology

    2010-01-01

  14. Enabling Software Acquisition Improvement: Government and Industry Software Development Team Acquisition Model

    DTIC Science & Technology

    2010-04-30

    … previous and current complex SW development efforts, the program offices will have a source of objective lessons learned and metrics that can be applied …

  15. Operative Therapy and the Growth of Breast Cancer Micrometastases: Cause and Effect

    DTIC Science & Technology

    2006-08-01

  16. Factors Impacting Intra-District Collaboration: A Field Study in a Midwest Police Department

    DTIC Science & Technology

    2018-03-01

  17. Blind, Deaf, and Dumb: We Must Be Prepared to Fight for Information

    DTIC Science & Technology

    2017-05-25

    A monograph by LTC Stephen M. Johnson, United States Army.

  18. Enhancing Quality of Orthotic Services with Process and Outcome Information

    DTIC Science & Technology

    2017-10-01

    Award number: W81XWH-16-1-0788.

  19. A Multidisciplinary Approach to Study the Role of the Gut Microbiome in Relapsing and Progressive MS

    DTIC Science & Technology

    2017-10-01

  20. Agent And Component Object Framework For Concept Design Modeling Of Mobile Cyber Physical Systems

    DTIC Science & Technology

    2018-03-01

  1. An Analysis of the Marine Corps Selection Process: Does Increased Competition Lead to Increased Quality

    DTIC Science & Technology

    2018-03-01

  2. The Role of Inflammation in Development of Alzheimer’s Disease Following Repetitive Head Trauma

    DTIC Science & Technology

    2017-08-01

  3. A Multidisciplinary Approach to Study the Role of the Gut Microbiome in Relapsing and Progressive MS

    DTIC Science & Technology

    2017-10-01

  4. Reference Concepts in Ecosystem Restoration and Environmental Benefits Analysis (EBA): Principles and Practices

    DTIC Science & Technology

    2012-06-01

    … stratification by narrowing selection of reference ecosystem metrics. 5. Aids in identifying natural variability and thresholds within an ecosystem …

  5. Can Vertical Migrations of Dinoflagellates Explain Observed Bioluminescence Patterns During an Upwelling Event in Monterey Bay, California?

    DTIC Science & Technology

    2012-01-25

    … concentration of their population, physical conditions (currents, temperature, strength of stratification, mixed layer depth, etc.), light …

  6. V-22 Osprey Joint Services Advanced Vertical Lift Aircraft (V-22)

    DTIC Science & Technology

    2013-12-01

  7. Optimization of an Innovative Biofiltration System as a VOC Control Technology for Aircraft Painting Facilities

    DTIC Science & Technology

    2004-04-20

    … EUROPE (Leson, 1991). Chemical Operations, Coffee Roasting, Composting Facilities, Chemical Storage, Coca Roasting, Landfill Gas Extraction, Film Coating …

  8. A Case Study in Transnational Crime: Ukraine and Modern Slavery

    DTIC Science & Technology

    2007-06-01

    … remained unable to appropriate resources or plan efficiently. The full extent of the decline remains unknown, because statistics were manipulated to hide …

  9. Role of Obesity in Prostate Cancer Development

    DTIC Science & Technology

    2008-03-01

  10. Construction of a Bacterial Cell that Contains Only the Set of Essential Genes Necessary to Impart Life

    DTIC Science & Technology

    2014-11-11

  11. Learning to Leave. The Preeminence of Disengagement in US Military Strategy

    DTIC Science & Technology

    2008-05-01

    By R. Greg Brown.

  12. Geochemical Characterization of Concentrated Gas Hydrate Deposits on the Hikurangi Margin, New Zealand: Preliminary Geochemical Cruise Report

    DTIC Science & Technology

    2008-02-29

  13. The Budget and Economic Outlook: 2015 to 2025

    DTIC Science & Technology

    2015-01-01

    … In principle, GDI equals GDP, because each dollar of production yields a dollar of income; in practice, they differ because of diffi…

  14. Improving Government Procurement: Lessons from the Coast Guard’s Deepwater Program

    DTIC Science & Technology

    2010-09-17

  15. Human Systems Integration (HSI) in Acquisition. Acquisition Phase Guide

    DTIC Science & Technology

    2009-08-01

    … available Concept of Operations (CONOPS) and other available data; 1.1 Select and review Baseline Comparison System(s) (BCS) documentation; 1.2 Assess …

  16. Applicability of Human Simulation for Enhancing Operations of Dismounted Soldiers

    DTIC Science & Technology

    2010-10-01

    … consisted of a data capturing phase, in which field-trials at a German MOUT training facility were observed, a subsequent data-processing including …

  17. Climate change and drought effects on rural income distribution in the Mediterranean: a case study for Spain

    NASA Astrophysics Data System (ADS)

    Quiroga, S.; Suárez, C.

    2015-07-01

    This paper examines the effects of climate change and drought on agricultural outputs in Spanish rural areas. To date, the effects of drought as a response to climate change or policy restrictions have been analyzed through response functions that consider direct effects on crop productivity and incomes. These changes also affect the income distribution in the region and therefore modify the social structure. Here we consider this complementary indirect effect on the social distribution of income, which is essential in the long term. We estimate crop production functions for a range of Mediterranean crops in Spain and use a decomposition of an inequality measure to estimate the impact of climate change and drought on yield disparities. This social aspect is important for climate change policies, since it can be decisive for the public acceptance of certain adaptation measures in a context of drought. We provide empirical estimates of the marginal effects on the two impacts considered: average farm income and the social distribution of income. In our estimates we consider crop productivity responses to both bio-physical and socio-economic factors in order to analyze long-term implications for both competitiveness and social disparities. We find disparities in adaptation priorities depending on the crop and the region analyzed.
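
    As a hedged illustration of the kind of income-distribution comparison behind this argument, the sketch below computes a Gini coefficient for simulated farm incomes under a baseline and a drought scenario in which crops suffer different yield losses. The crop labels, loss percentages, and income distribution are invented, and the paper's actual inequality-decomposition method is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def gini(income):
        """Gini coefficient via the cumulative-share formulation on sorted incomes."""
        income = np.sort(np.asarray(income, dtype=float))
        n = income.size
        cum = np.cumsum(income)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    # Simulated farm incomes for two crops with different drought sensitivity.
    n_farms = 500
    crop = rng.choice(["cereal", "olive"], size=n_farms)
    baseline = rng.lognormal(mean=10.0, sigma=0.6, size=n_farms)
    loss = np.where(crop == "cereal", 0.30, 0.10)   # illustrative yield losses
    drought = baseline * (1.0 - loss)

    print(f"Gini baseline: {gini(baseline):.3f}")
    print(f"Gini drought : {gini(drought):.3f}")    # unequal losses shift disparity
    ```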

  18. [Supply services at health facilities: measuring performance].

    PubMed

    Dacosta Claro, I

    2001-01-01

    Performance measurement, in its different meanings (either the balanced scorecard or output measurement), has become an essential tool in today's world-class organizations to improve service quality and reduce costs. This paper presents a performance measurement system for the hospital supply chain. The system is organized in different levels and groups of indicators in order to give a hierarchical, coherent and integrated view of the processes. Thus, supply services performance is measured according to (1) financial aspects, (2) customer satisfaction aspects and (3) internal aspects of the processes performed. Since the informational needs of managers vary within the administrative structure, the performance measurement system is defined at three hierarchical levels: first, the whole supply chain, with the different interrelations of activities; second, the three main processes of the chain (physical management of products, purchasing and negotiation processes, and the local storage units); and finally, the performance measurement of each activity involved. The system and the indicators have been evaluated with the participation of 17 health services in Quebec (Canada) and, given the similarities in operations, could equally be implemented in Spanish hospitals.

  19. Physical examination for lumbar radiculopathy due to disc herniation in patients with low-back pain.

    PubMed

    van der Windt, Daniëlle Awm; Simons, Emmanuel; Riphagen, Ingrid I; Ammendolia, Carlo; Verhagen, Arianne P; Laslett, Mark; Devillé, Walter; Deyo, Rick A; Bouter, Lex M; de Vet, Henrica Cw; Aertgeerts, Bert

    2010-02-17

    Low-back pain with leg pain (sciatica) may be caused by a herniated intervertebral disc exerting pressure on the nerve root. Most patients will respond to conservative treatment, but in carefully selected patients, surgical discectomy may provide faster relief of symptoms. Primary care clinicians use patient history and physical examination to evaluate the likelihood of disc herniation and select patients for further imaging and possible surgery. The objectives were: (1) to assess the performance of tests performed during physical examination (alone or in combination) to identify radiculopathy due to lower lumbar disc herniation in patients with low-back pain and sciatica; and (2) to assess the influence of sources of heterogeneity on diagnostic performance. We searched electronic databases for primary studies: PubMed (includes MEDLINE), EMBASE, and CINAHL, and (systematic) reviews: PubMed and Medion (all from earliest until 30 April 2008), and checked references of retrieved articles. We considered studies if they compared the results of tests performed during physical examination on patients with back pain with those of diagnostic imaging (MRI, CT, myelography) or findings at surgery. Two review authors assessed the quality of each publication with the QUADAS tool, and extracted details on patient and study design characteristics, index tests and reference standard, and the diagnostic two-by-two table. We presented information on sensitivities and specificities with 95% confidence intervals (95% CI) for all aspects of physical examination. Pooled estimates of sensitivity and specificity were computed for subsets of studies showing sufficient clinical and statistical homogeneity. We included 16 cohort studies (median N = 126, range 71 to 2504) and three case control studies (38 to 100 cases). Only one study was carried out in a primary care population. When used in isolation, diagnostic performance of most physical tests (scoliosis, paresis or muscle weakness, muscle wasting, impaired reflexes, sensory deficits) was poor. Some tests (forward flexion, hyper-extension test, and slump test) performed slightly better, but the number of studies was small. In the one primary care study, most tests showed higher specificity and lower sensitivity compared to other settings. Most studies assessed the Straight Leg Raising (SLR) test. In surgical populations, characterized by a high prevalence of disc herniation (58% to 98%), the SLR showed high sensitivity (pooled estimate 0.92, 95% CI: 0.87 to 0.95) with widely varying specificity (0.10 to 1.00, pooled estimate 0.28, 95% CI: 0.18 to 0.40). Results of studies using imaging showed more heterogeneity and poorer sensitivity. The crossed SLR showed high specificity (pooled estimate 0.90, 95% CI: 0.85 to 0.94) with consistently low sensitivity (pooled estimate 0.28, 95% CI: 0.22 to 0.35). Combining positive test results increased the specificity of physical tests, but few studies presented data on test combinations. When used in isolation, current evidence indicates poor diagnostic performance of most physical tests used to identify lumbar disc herniation. However, most findings arise from surgical populations and may not apply to primary care or non-selected populations. Better performance may be obtained when tests are combined.
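
    The sketch below shows the basic calculation underlying the reported figures: sensitivity and specificity with 95% confidence intervals from a 2x2 diagnostic table, plus a crude pooled estimate obtained by summing tables. The counts are invented, and the simple Wald intervals and table-summing are used only for illustration; the review's pooled estimates come from a proper meta-analytic model.

    ```python
    import numpy as np

    def sens_spec_ci(tp, fn, fp, tn, z=1.96):
        """Sensitivity and specificity with Wald 95% confidence intervals
        from a single 2x2 diagnostic table."""
        def prop_ci(k, n):
            p = k / n
            half = z * np.sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)
        return {"sensitivity": prop_ci(tp, tp + fn),
                "specificity": prop_ci(tn, tn + fp)}

    # Invented counts (tp, fn, fp, tn) for a physical test in three studies,
    # used only to illustrate the calculation.
    studies = [(90, 8, 40, 12), (70, 5, 55, 20), (120, 10, 80, 25)]

    for i, s in enumerate(studies, 1):
        print(f"study {i}: {sens_spec_ci(*s)}")

    # Crude pooled estimate by summing the tables (a simplification of a
    # proper bivariate random-effects meta-analysis).
    tp, fn, fp, tn = np.sum(studies, axis=0)
    print("pooled:", sens_spec_ci(tp, fn, fp, tn))
    ```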

  20. International Classification of Functioning, Disability and Health categories explored for self-rated participation in Swedish adolescents and adults with a mild intellectual disability.

    PubMed

    Arvidsson, Patrik; Granlund, Mats; Thyberg, Ingrid; Thyberg, Mikael

    2012-06-01

    To explore internal consistency and correlations between perceived ability, performance and perceived importance in a preliminary selection of self-reported items representing the activity/participation component of the International Classification of Functioning, Disability and Health (ICF). Structured interview study. Fifty-five Swedish adolescents and adults with a mild intellectual disability. Questions about perceived ability, performance and perceived importance were asked on the basis of a 3-grade Likert-scale regarding each of 68 items representing the 9 ICF domains of activity/participation. Internal consistency for perceived ability (Cronbach's alpha for all 68 items): 0.95 (values for each domain varied between 0.57 and 0.85), for performance: 0.86 (between 0.27 and 0.66), for perceived importance: 0.84 (between 0.27 and 0.68). Seventy-two percent of the items showed correlations >0.5 (mean=0.59) for performance vs perceived importance, 41% >0.5 (mean=0.47) for perceived ability vs performance and 12% >0.5 (mean=0.28) for perceived ability vs perceived importance. Measures of performance and perceived importance may have to be based primarily on their estimated clinical relevance for describing aspects of the ICF participation concept. With a clinimetric approach, parts of the studied items and domains may be used to investigate factors related to different patterns and levels of participation, and outcomes of rehabilitation.
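
    A minimal sketch of the internal-consistency calculation reported above: Cronbach's alpha computed from a respondents-by-items matrix of 3-point Likert responses. The simulated responses, the number of items per domain, and the latent-trait construction are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def cronbach_alpha(items):
        """Cronbach's alpha for a (respondents x items) matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    # Simulated 3-point Likert responses (55 respondents, 8 items in one domain),
    # built around a shared latent trait so that the items correlate.
    n_resp, n_items = 55, 8
    trait = rng.normal(size=(n_resp, 1))
    raw = trait + 0.8 * rng.normal(size=(n_resp, n_items))
    likert = np.digitize(raw, bins=[-0.5, 0.5]) + 1      # values 1, 2, 3

    print(f"Cronbach's alpha: {cronbach_alpha(likert):.2f}")
    ```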
