Science.gov

Sample records for preliminary performance estimates

  1. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
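
    A relation of the kind such sizing methods build on is the momentum-theory estimate of hover power, corrected by a figure of merit to approximate real rotor losses. The sketch below is a minimal illustration of that starting point; the function name, figure-of-merit value, and example numbers are assumptions for illustration, not values from the report.

        import math

        def hover_power_estimate(gross_weight_n, rotor_radius_m,
                                 air_density=1.225, figure_of_merit=0.75):
            """Momentum-theory hover power estimate (illustrative sketch).

            Ideal induced power is T*sqrt(T/(2*rho*A)); dividing by an assumed
            figure of merit approximates the actual power required to hover.
            """
            disk_area = math.pi * rotor_radius_m ** 2
            thrust = gross_weight_n                   # hover: thrust balances weight
            p_ideal = thrust * math.sqrt(thrust / (2.0 * air_density * disk_area))
            return p_ideal / figure_of_merit          # watts

        # Example: a 40 kN (about 4,100 kg) helicopter with an 8 m rotor radius
        print(hover_power_estimate(40e3, 8.0) / 1e3, "kW")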

  2. Estimating Basic Preliminary Design Performances of Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.; Alexander, Reginald

    2004-01-01

    Aerodynamics and Performance Estimation Toolset is a collection of four software programs for rapidly estimating the preliminary design performance of aerospace vehicles through simplified calculations based on ballistic trajectories, the ideal rocket equation, and supersonic wedges in a standard atmosphere. The toolset consists of a set of Microsoft Excel worksheet subprograms. The input and output data are presented in a user-friendly format, and calculations are performed rapidly enough that the user can iterate among different trajectories and/or shapes to perform "what-if" studies. Estimates that can be computed by these programs include: 1. Ballistic trajectories as a function of departure angles, initial velocities, initial positions, and target altitudes, assuming point masses and no atmosphere; the program plots the trajectory in two dimensions and outputs the position, pitch, and velocity along the trajectory. 2. The "Rocket Equation" program calculates and plots the trade space for a vehicle's propellant mass fraction over a range of specific impulse and mission velocity values. 3. "Standard Atmosphere" estimates the temperature, speed of sound, pressure, and air density as functions of altitude in a standard atmosphere. 4. "Supersonic Wedges" calculates the free-stream, normal-shock, oblique-shock, and isentropic flow properties for a wedge-shaped body flying supersonically through a standard atmosphere, along with the maximum wedge angle for which a shock remains attached and the minimum Mach number for which a shock becomes attached, all as functions of the wedge angle, altitude, and Mach number.
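
    The "Rocket Equation" worksheet described above reduces to the ideal rocket equation. A minimal sketch of the same trade-space calculation is shown below; the function name and sample values are illustrative assumptions, not part of the toolset.

        import math

        def propellant_mass_fraction(delta_v_mps, isp_s, g0=9.80665):
            """Ideal rocket equation: m0/mf = exp(dv / (Isp*g0)), so the
            propellant mass fraction is 1 - mf/m0."""
            return 1.0 - math.exp(-delta_v_mps / (isp_s * g0))

        # Example trade point: 3.5 km/s mission velocity at Isp = 320 s
        print(propellant_mass_fraction(3500.0, 320.0))   # ~0.67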

  3. PRELIMINARY PERFORMANCE AND COST ESTIMATES OF MERCURY EMISSION CONTROL OPTIONS FOR ELECTRIC UTILITY BOILERS

    EPA Science Inventory


    The paper discusses preliminary performance and cost estimates of mercury emission control options for electric utility boilers. Under the Clean Air Act Amendments of 1990, EPA had to determine whether mercury emissions from coal-fired power plants should be regulated. To a...

  4. PRELIMINARY ESTIMATES OF PERFORMANCE AND COST OF MERCURY CONTROL TECHNOLOGY APPLICATIONS ON ELECTRIC UTILITY BOILERS

    EPA Science Inventory

    Under the Clean Air Act Amendments of 1990, the Environmental Protection Agency has determined that regulation of mercury emissions from coal-fired power plants is appropriate and necessary. To aid in this determination, preliminary estimates of the performance and cost of powder...

  5. Smarter Balanced Preliminary Performance Levels: Estimated MAP Scores Corresponding to the Preliminary Performance Levels of the Smarter Balanced Assessment Consortium (Smarter Balanced)

    ERIC Educational Resources Information Center

    Northwest Evaluation Association, 2015

    2015-01-01

    Recently, the Smarter Balanced Assessment Consortium (Smarter Balanced) released a document that established initial performance levels and the associated threshold scale scores for the Smarter Balanced assessment. The report included estimated percentages of students expected to perform at each of the four performance levels, reported by grade…

  6. Evaluation of HFC-245ca for commercial use in low pressure chillers. Task 1 report: Preliminary estimates of chiller performance

    SciTech Connect

    Keuper, E.F.; Hamm, F.B.; Glamm, P.R.

    1995-04-30

    HFC-245ca has been identified as a potential replacement for both CFC-11 and HCFC-123 in centrifugal chillers based on estimates of its thermodynamic properties, even though serious concerns exist about its flammability characteristics. The overall objective of this project is to assess the commercial viability of HFC-245ca in centrifugal chillers. This first report focuses on preliminary estimates of chiller performance only, while the next report will include laboratory performance data. The chiller performance estimates are based on early correlations of thermodynamic properties and predictions of compressor efficiency, with variations in heat transfer ignored until experimental data are obtained. Conclusions from this study include the following: The theoretical efficiency of HFC-245ca in optimized three stage chiller designs is very close to that for CFC-11 and HCFC-123 chillers. HFC-245ca is not attractive as a service retrofit in CFC-11 and HCFC-123 chillers because significant compressor modifications or dramatic lowering of condenser water temperatures would be required. Hurdles which must be overcome to apply HFC-245ca in centrifugal chillers include the flammability behavior, evaluation of toxicity, unknown heat transfer characteristics, uncertain thermodynamic properties, high refrigerant cost and construction of HFC-245ca manufacturing plants. Although the flammability of HFC-245ca can probably be reduced or eliminated by blending HFC-245ca with various inert compounds, addition of these compounds will lower the chiller performance. The chiller performance will be degraded due to less attractive thermodynamic properties and lower heat transfer performance if the blend fractionates. The experimental phase of the project will improve the accuracy of our performance estimates, and the commercial viability assessment will also include the impact of flammability, toxicity, product cost and product availability.

  7. PRELIMINARY ESTIMATES OF PERFORMANCE AND COST OF MERCURY EMISSION CONTROL TECHNOLOGY APPLICATIONS ON ELECTRIC UTILITY BOILERS: AN UPDATE

    EPA Science Inventory

    The paper presents estimates of performance levels and related costs associated with controlling mercury (Hg) emissions from coal-fired power plants using either powdered activated carbon (PAC) injection or multipollutant control in which Hg capture is enhanced in existing and ne...

  8. Preliminary performance estimates of an oblique, all-wing, remotely piloted vehicle for air-to-air combat

    NASA Technical Reports Server (NTRS)

    Nelms, W. P., Jr.; Bailey, R. O.

    1974-01-01

    A computerized aircraft synthesis program has been used to assess the effects of various vehicle and mission parameters on the performance of an oblique, all-wing, remotely piloted vehicle (RPV) for the highly maneuverable, air-to-air combat role. The study mission consists of an outbound cruise, an acceleration phase, a series of subsonic and supersonic turns, and a return cruise. The results are presented in terms of both the required vehicle weight to accomplish this mission and the combat effectiveness as measured by turning and acceleration capability. This report describes the synthesis program, the mission, the vehicle, and results from sensitivity studies. An optimization process has been used to establish the nominal RPV configuration of the oblique, all-wing concept for the specified mission. In comparison to a previously studied conventional wing-body canard design for the same mission, this oblique, all-wing nominal vehicle is lighter in weight and has higher performance.

  9. 12 CFR 611.1250 - Preliminary exit fee estimate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Preliminary exit fee estimate. 611.1250 Section... System Institution Status § 611.1250 Preliminary exit fee estimate. (a) Preliminary exit fee estimate—terminating association. You must provide a preliminary exit fee estimate to us when you submit the plan...

  10. TRAC performance estimates

    NASA Technical Reports Server (NTRS)

    Everett, L.

    1992-01-01

    This report documents the performance characteristics of a Targeting Reflective Alignment Concept (TRAC) sensor. The performance is documented for both short and long ranges; for long ranges, the sensor is used without the flat mirror attached to the target. To better understand the capabilities of TRAC-based sensors, an engineering model is required. The model can be used to better design the system for a particular application, which is necessary because there are many interrelated design variables, including lens parameters, camera characteristics, and target configuration. The report first presents an analytical development of the performance and then an experimental verification of the equations. The analytical development assumes that the best vision resolution is a single pixel element; the experimental results suggest, however, that the resolution is better than one pixel, so the analytical results should be considered worst-case conditions. The report also discusses advantages and limitations of the TRAC sensor in light of the performance estimates, and finally discusses potential improvements.
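
    The single-pixel worst-case assumption in the analytical development implies a simple resolution bound: one pixel subtends roughly the pixel pitch divided by the lens focal length, and the corresponding lateral position resolution grows linearly with range. The sketch below illustrates that bound; the parameter values are assumptions for illustration, not taken from the report.

        def lateral_resolution(range_m, pixel_pitch_m=10e-6, focal_length_m=0.025):
            """Worst-case lateral resolution at a given range, assuming the
            smallest resolvable image displacement is one pixel."""
            pixel_angle_rad = pixel_pitch_m / focal_length_m
            return range_m * pixel_angle_rad          # metres per pixel

        for r in (1.0, 5.0, 20.0):
            print(f"range {r:5.1f} m -> ~{lateral_resolution(r) * 1e3:.1f} mm per pixel")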

  11. 12 CFR 611.1250 - Preliminary exit fee estimate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... System Institution Status § 611.1250 Preliminary exit fee estimate. (a) Preliminary exit fee estimate... average daily balances based on financial statements that comply with GAAP. The financial statements, as... engage independent experts to conduct assessments, analyses, or studies, or to request rulings...

  12. 12 CFR 611.1250 - Preliminary exit fee estimate.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 611.1250 Banks and Banking FARM CREDIT ADMINISTRATION FARM CREDIT SYSTEM ORGANIZATION Termination of System Institution Status § 611.1250 Preliminary exit fee estimate. (a) Preliminary exit fee estimate... engage independent experts to conduct assessments, analyses, or studies, or to request rulings...

  13. Advanced composites sizing guide for preliminary weight estimates

    NASA Astrophysics Data System (ADS)

    Burns, J. W.

    During the preliminary design and proposal phases, the mass properties engineer must make rough preliminary weight estimates to improve or verify Level I and Level II estimates and to support trade studies of various types of construction, materials substitution, wing t/c, and design criteria changes. The purpose of this paper is to provide a simple, easy-to-understand preliminary sizing guide and to present some numeric examples that will aid the mass properties engineer who is inexperienced with advanced composites analysis.

  14. Preliminary melter performance assessment report

    SciTech Connect

    Elliott, M.L.; Eyler, L.L.; Mahoney, L.A.; Cooper, M.F.; Whitney, L.D.; Shafer, P.J.

    1994-08-01

    The Melter Performance Assessment activity, a component of the Pacific Northwest Laboratory's (PNL) Vitrification Technology Development (PVTD) effort, was designed to determine the impact of noble metals on the operational life of the reference Hanford Waste Vitrification Plant (HWVP) melter. The melter performance assessment consisted of several activities, including a literature review of all work done with noble metals in glass, gradient furnace testing to study the behavior of noble metals during the melting process, research-scale and engineering-scale melter testing to evaluate effects of noble metals on melter operation, and computer modeling that used the experimental data to predict effects of noble metals on the full-scale melter. Feed used in these tests simulated neutralized current acid waste (NCAW) feed. This report summarizes the results of the melter performance assessment and predicts the lifetime of the HWVP melter. It should be noted that this work was conducted before the recent Tri-Party Agreement changes, so the reference melter referred to here is the Defense Waste Processing Facility (DWPF) melter design.

  15. Estimating State IQ: Measurement Challenges and Preliminary Correlates

    ERIC Educational Resources Information Center

    McDaniel, Michael A.

    2006-01-01

    The purpose of this study is threefold. First, an estimate of state IQ is derived and its strengths and limitations are considered. To that end, an indicator of downward bias in estimating state IQ is provided. Two preliminary causal models are offered that predict state IQ. These models were found to be highly predictive of state IQ, yielding…

  16. Performance Bounds of Quaternion Estimators.

    PubMed

    Xia, Yili; Jahanchahi, Cyrus; Nitta, Tohru; Mandic, Danilo P

    2015-12-01

    The quaternion widely linear (WL) estimator has been recently introduced for optimal second-order modeling of the generality of quaternion data, both second-order circular (proper) and second-order noncircular (improper). Experimental evidence exists of its performance advantage over the conventional strictly linear (SL) as well as the semi-WL (SWL) estimators for improper data. However, rigorous theoretical and practical performance bounds are still missing in the literature, yet this is crucial for the development of quaternion valued learning systems for 3-D and 4-D data. To this end, based on the orthogonality principle, we introduce a rigorous closed-form solution to quantify the degree of performance benefits, in terms of the mean square error, obtained when using the WL models. The cases when the optimal WL estimation can simplify into the SWL or the SL estimation are also discussed. PMID:25643416

  17. Preliminary performance estimates of a highly maneuverable remotely piloted vehicle. [computerized synthesis program to assess effects of vehicle and mission parameters

    NASA Technical Reports Server (NTRS)

    Nelms, W. P., Jr.; Axelson, J. A.

    1974-01-01

    A computerized synthesis program has been used to assess the effects of various vehicle and mission parameters on the performance of a highly maneuverable remotely piloted vehicle (RPV) for the air-to-air combat role. The configuration used in the study is a trapezoidal-wing and body concept, with forward-mounted stabilizing and control surfaces. The study mission consists of an outbound cruise, an acceleration phase, a series of subsonic and supersonic turns, and a return cruise. Performance is evaluated in terms of both the required vehicle weight to accomplish this mission and combat effectiveness as measured by turning and acceleration capability. The report describes the synthesis program, the mission, the vehicle, and the results of sensitivity and trade studies.

  18. Sandia solar dryer: preliminary performance evaluation

    SciTech Connect

    Glass, J.S.; Holm-Hansen, T.; Tills, J.; Pierce, J.D.

    1986-01-01

    Preliminary performance evaluations were conducted with the prototype modular solar dryer for wastewater sludge at Sandia National Laboratories. Operational parameters which appeared to influence sludge drying efficiency included condensation system capacity and air turbulence at the sludge surface. Sludge heating profiles showed dependencies on sludge moisture content, sludge depth and seasonal variability in available solar energy. Heat-pasteurization of sludge in the module was demonstrated in two dynamic-processing experiments. Through balanced utilization of drying and heating functions, the facility has the potential for year-round sludge treatment application.

  19. Preliminary estimates of operating costs for lighter than air transports

    NASA Technical Reports Server (NTRS)

    Smith, C. L.; Ardema, M. D.

    1975-01-01

    A preliminary set of operating cost relationships are presented for airship transports. The starting point for the development of the relationships is the direct operating cost formulae and the indirect operating cost categories commonly used for estimating costs of heavier than air commercial transports. Modifications are made to the relationships to account for the unique features of airships. To illustrate the cost estimating method, the operating costs of selected airship cargo transports are computed. Conventional fully buoyant and hybrid semi-buoyant systems are investigated for a variety of speeds, payloads, ranges, and altitudes. Comparisons are made with aircraft transports for a range of cargo densities.

  20. Preliminary estimates of operating costs for lighter than air transports

    NASA Technical Reports Server (NTRS)

    Smith, C. L.; Ardema, M. D.

    1975-01-01

    Presented is a preliminary set of operating cost relationships for airship transports. The starting point for the development of the relationships is the direct operating cost formulae and the indirect operating cost categories commonly used for estimating costs of heavier than air commercial transports. Modifications are made to the relationships to account for the unique features of airships. To illustrate the cost estimating method, the operating costs of selected airship cargo transports are computed. Conventional fully buoyant and hybrid semi-buoyant systems are investigated for a variety of speeds, payloads, ranges, and altitudes. Comparisons are made with aircraft transports for a range of cargo densities.

  1. Guidance for performing preliminary assessments under CERCLA

    SciTech Connect

    1991-09-01

    EPA headquarters and a national site assessment workgroup produced this guidance for Regional, State, and contractor staff who manage or perform preliminary assessments (PAs). EPA has focused this guidance on the types of sites and site conditions most commonly encountered. The PA approach described in this guidance is generally applicable to a wide variety of sites. However, because of the variability among sites, the amount of information available, and the level of investigative effort required, it is not possible to provide guidance that is equally applicable to all sites. PA investigators should recognize this and be aware that variation from this guidance may be necessary for some sites, particularly for PAs performed at Federal facilities, PAs conducted under EPA's Environmental Priorities Initiative (EPI), and PAs at sites that have previously been extensively investigated by EPA or others. The purpose of this guidance is to provide instructions for conducting a PA and reporting results. This guidance discusses the information required to evaluate a site and how to obtain it, how to score a site, and reporting requirements. This document also provides guidelines and instruction on PA evaluation, scoring, and the use of standard PA scoresheets. The overall goal of this guidance is to assist PA investigators in conducting high-quality assessments that result in correct site screening or further action recommendations on a nationally consistent basis.

  2. Preliminary basic performance analysis of the Cedar multiprocessor memory system

    NASA Technical Reports Server (NTRS)

    Gallivan, K.; Jalby, W.; Turner, S.; Veidenbaum, A.; Wijshoff, H.

    1991-01-01

    Some preliminary basic results on the performance of the Cedar multiprocessor memory system are presented. Empirical results are presented and used to calibrate a memory system simulator which is then used to discuss the scalability of the system.

  3. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
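
    The core of the approach is to evaluate, for each candidate tuning-parameter set, the theoretical steady-state Kalman estimation error and keep the set that minimizes it. The sketch below is a much-simplified illustration of that comparison, assuming a small linear time-invariant model and using the discrete algebraic Riccati equation for the steady-state covariance; the matrices and the way candidate sets are formed are illustrative placeholders, not the engine model or the selection routine of the paper.

        import itertools
        import numpy as np
        from scipy.linalg import solve_discrete_are

        def steady_state_error_trace(A, C, Q, R):
            """Trace of the steady-state (a priori) Kalman error covariance,
            from the discrete algebraic Riccati equation for the filter."""
            P = solve_discrete_are(A.T, C.T, Q, R)
            return float(np.trace(P))

        # Illustrative 3-state model observed through 2 sensors (placeholder values).
        A = np.diag([0.95, 0.90, 0.85])
        C = np.array([[1.0, 0.5, 0.0],
                      [0.0, 1.0, 0.3]])
        Q = 0.01 * np.eye(3)
        R = 0.10 * np.eye(2)

        # Compare candidate "tuner" pairs: keep process noise on the selected
        # states and effectively freeze the rest, then rank by theoretical error.
        for tuners in itertools.combinations(range(3), 2):
            Q_cand = Q.copy()
            for s in set(range(3)) - set(tuners):
                Q_cand[s, s] = 1e-6
            print(tuners, round(steady_state_error_trace(A, C, Q_cand, R), 4))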

  4. Application of Boosting Regression Trees to Preliminary Cost Estimation in Building Construction Projects.

    PubMed

    Shin, Yoonseok

    2015-01-01

    Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project. PMID:26339227
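
    A minimal sketch of the kind of comparison described above is given below, using scikit-learn's gradient-boosted trees and a neural-network regressor on synthetic cost data; the predictors, data, and hyperparameters are illustrative assumptions, not the study's 234-project dataset.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_percentage_error

        rng = np.random.default_rng(0)
        n = 234
        # Illustrative early-stage predictors: floor area, storeys, duration, quality index
        X = np.column_stack([rng.uniform(1e3, 5e4, n),
                             rng.integers(1, 40, n),
                             rng.uniform(6, 36, n),
                             rng.uniform(0, 1, n)])
        y = 0.9 * X[:, 0] + 200 * X[:, 1] + 50 * X[:, 2] + rng.normal(0, 500, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                        max_depth=3, random_state=0).fit(X_tr, y_tr)
        nn = make_pipeline(StandardScaler(),
                           MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000,
                                        random_state=0)).fit(X_tr, y_tr)

        for name, model in [("BRT", brt), ("NN", nn)]:
            mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
            print(f"{name}: MAPE = {mape:.2%}")

        # Boosting also exposes which predictors drive the estimate.
        print("BRT feature importances:", np.round(brt.feature_importances_, 3))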

  5. Application of Boosting Regression Trees to Preliminary Cost Estimation in Building Construction Projects.

    PubMed

    Shin, Yoonseok

    2015-01-01

    Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong boundaries in terms of its generalization performance. However, the boosting approach has yet to be used in regression problems within the construction domain, including cost estimations, but has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimations at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the performance of the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to have a high performance in cost estimation domains. The BRT model has shown results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information such as the importance plot and structure model, which can support estimators in comprehending the decision making process. Consequently, the boosting approach has potential applicability in preliminary cost estimations in a building construction project.

  6. Estimates of Fermilab Tevatron collider performance

    SciTech Connect

    Dugan, G.

    1991-09-01

    This paper describes a model which has been used to estimate the average luminosity performance of the Tevatron collider. In the model, the average luminosity is related quantitatively to various performance parameters of the Fermilab Tevatron collider complex. The model is useful in allowing estimates to be developed for the improvements in average collider luminosity to be expected from changes in the fundamental performance parameters as a result of upgrades to various parts of the accelerator complex.
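
    A natural starting point for such a model is the standard expression for collider luminosity in terms of bunch intensities, the number of bunches, the revolution frequency, and the transverse beam sizes at the interaction point. The sketch below shows that relation; the parameter values are illustrative, not the Tevatron performance parameters of the paper.

        import math

        def luminosity(n_p, n_pbar, n_bunches, f_rev_hz, sigma_x_cm, sigma_y_cm):
            """Round-beam collider luminosity L = f * B * N_p * N_pbar / (4*pi*sx*sy),
            ignoring hourglass and crossing-angle corrections (result in cm^-2 s^-1)."""
            return f_rev_hz * n_bunches * n_p * n_pbar / (4.0 * math.pi * sigma_x_cm * sigma_y_cm)

        # Illustrative numbers only
        L = luminosity(n_p=2.5e11, n_pbar=3.0e10, n_bunches=6,
                       f_rev_hz=47.7e3, sigma_x_cm=35e-4, sigma_y_cm=35e-4)
        print(f"L ~ {L:.2e} cm^-2 s^-1")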

  7. Experiments, conceptual design, preliminary cost estimates and schedules for an underground research facility

    SciTech Connect

    Korbin, G.; Wollenberg, H.; Wilson, C.; Strisower, B.; Chan, T.; Wedge, D.

    1981-09-01

    Plans for an underground research facility are presented, incorporating techniques to assess the hydrological and thermomechanical response of a rock mass to the introduction and long-term isolation of radioactive waste, and to assess the effects of excavation on the hydrologic integrity of a repository and its subsequent backfill, plugging, and sealing. The project is designed to utilize existing mine or civil works for access to experimental areas and is estimated to last 8 years at a total cost for construction and operation of $39.0 million (1981 dollars). Performing the same experiments in an existing underground research facility would reduce the duration to 7-1/2 years and cost $27.7 million as a lower-bound estimate. These preliminary plans and estimates should be revised after specific sites are identified which would accommodate the facility.

  8. Preliminary Estimates of Specific Discharge and TransportVelocities near Borehole NC-EWDP-24PB

    SciTech Connect

    Freifeld, Barry; Doughty, Christine; Finsterle, Stefan

    2006-06-21

    This report summarizes fluid electrical conductivity (FEC) and thermal logging data collected in Borehole NC-EWDP-24PB, located approximately 15 km south of the proposed repository at Yucca Mountain. Preliminary analyses of a small fraction of the FEC and temperature data indicate that relatively large, localized fluid fluxes are likely to exist at this location. The implication that considerable flow is induced by small gradients, and that flow is highly localized, is significant for the estimation of groundwater transport velocities and radionuclide travel times. The sensitivity of the data to potential perturbations during testing (i.e., internal wellbore flow in the case of FEC data, and buoyancy effects in the case of thermal logging data) makes it difficult to conclusively derive fluid fluxes and transport velocities without a detailed analysis of all data and processes involved. Such a comprehensive analysis has not yet been performed. However, the preliminary results suggest that the ambient component of the estimated flow rates is significant and on the order of liters per minute, yielding groundwater transport velocities in the range of kilometers per year. One particular zone in the Bullfrog tuff exhibits estimated velocities on the order of 10 km/yr. Given that the preliminary estimates of ambient flow rates and transport velocities are relatively high, and considering the potential impact of high rates and velocities on saturated-zone flow and transport behavior, we recommend that a comprehensive analysis of all the available data be performed. Moreover, additional data sets at other locations should be collected to examine whether the current data set is representative of the regional flow system near Yucca Mountain.

  9. A Preliminary Foil Gas Bearing Performance Map

    NASA Technical Reports Server (NTRS)

    DellaCorte, Christopher; Radil, Kevin C.; Bruckner, Robert J.; Howard, S. Adam

    2006-01-01

    Recent breakthrough improvements in foil gas bearing load capacity, high temperature tribological coatings and computer based modeling have enabled the development of increasingly larger and more advanced Oil-Free Turbomachinery systems. Successful integration of foil gas bearings into turbomachinery requires a stepwise approach that includes conceptual design and feasibility studies, bearing testing, and rotor testing prior to full scale system level demonstrations. Unfortunately, the current level of understanding of foil gas bearings and especially their tribological behavior is often insufficient to avoid developmental problems, thereby hampering commercialization of new applications. In this paper, a new approach, loosely based upon accepted hydrodynamic theory, is developed that results in a "Foil Gas Bearing Performance Map" to guide the integration process. This performance map, which resembles a Stribeck curve for bearing friction, is useful in describing bearing operating regimes, performance safety margins, the effects of load on performance and limiting factors for foil gas bearings.

  10. Preliminary comparison between real-time in-vivo spectral and transverse oscillation velocity estimates

    NASA Astrophysics Data System (ADS)

    Pedersen, Mads Møller; Pihl, Michael Johannes; Haugaard, Per; Hansen, Jens Munk; Lindskov Hansen, Kristoffer; Bachmann Nielsen, Michael; Jensen, Jørgen Arendt

    2011-03-01

    Spectral velocity estimation is considered the gold standard in medical ultrasound. Peak systole (PS), end diastole (ED), and resistive index (RI) are used clinically. Angle correction is performed using a flow angle set manually. With Transverse Oscillation (TO) velocity estimates the flow angle, peak systole (PSTO), end diastole (EDTO), and resistive index (RITO) are estimated. This study investigates whether these clinical parameters are estimated equally well using spectral and TO data. The right common carotid arteries of three healthy volunteers were scanned longitudinally. Average TO flow angles and std were calculated { 52±18 ; 55±23 ; 60±16 }°. Spectral angles { 52 ; 56 ; 52 }° were obtained from the B-mode images. Obtained values are: PSTO { 76±15 ; 89±28 ; 77±7 } cm/s, spectral PS { 77 ; 110 ; 76 } cm/s, EDTO { 10±3 ; 14±8 ; 15±3 } cm/s, spectral ED { 18 ; 13 ; 20 } cm/s, RITO { 0.87±0.05 ; 0.79±0.21 ; 0.79±0.06 }, and spectral RI { 0.77 ; 0.88 ; 0.73 }. Vector angles are within ±two std of the spectral angle. TO velocity estimates are within ±three std of the spectral estimates. RITO are within ±two std of the spectral estimates. Preliminary data indicates that the TO and spectral velocity estimates are equally good. With TO there is no manual angle setting and no flow angle limitation. TO velocity estimation can also automatically handle situations where the angle varies over the cardiac cycle. More detailed temporal and spatial vector estimates with diagnostic potential are available with the TO velocity estimation.
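
    The resistive index quoted above is a simple ratio of the peak-systolic and end-diastolic velocities, so it can be reproduced directly from the reported values. The sketch below uses the standard definition RI = (PS - ED) / PS with the spectral estimates quoted in the abstract.

        def resistive_index(ps, ed):
            """Resistive index from peak-systolic and end-diastolic velocities."""
            return (ps - ed) / ps

        # Spectral PS/ED values (cm/s) for the three volunteers, as quoted above.
        for ps, ed in [(77, 18), (110, 13), (76, 20)]:
            print(round(resistive_index(ps, ed), 2))
        # -> 0.77, 0.88, 0.74 (the abstract quotes 0.77, 0.88, 0.73)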

  11. Preliminary performance of the LBL AECR

    SciTech Connect

    Lyneis, C.M.; Xie, Zuqi; Clark, D.J.; Lam, R.S.; Lundgren, S.A.

    1990-11-01

    The AECR source, which operates at 14 GHz, is being developed for the 88-Inch Cyclotron at Lawrence Berkeley Laboratory. The AECR has been under source development since December 1989, when the mechanical construction was completed. The first AECR beams were injected into the cyclotron in June of 1990 and since then a variety of ion species from the AECR have been accelerated. The cyclotron recently accelerated ²⁰⁹Bi³⁸⁺ to 954 MeV. An electron gun, which injects 10 to 150 eV electrons into the plasma chamber of the AECR, has been developed to increase the production of high charge state ions. With the electron gun the AECR has produced at 10 kV extraction voltage 131 eμA of O⁷⁺, 13 eμA of O⁸⁺, 17 eμA of Ar¹⁴⁺, 2.2 eμA of Kr²⁵⁺, 1 eμA of Xe³¹⁺, and 0.2 eμA of Bi³⁸⁺. The AECR was also tested as a single stage source with a coating of SiO₂ on the plasma chamber walls. This significantly improved its performance compared to no coating, but direct injection of electrons with the electron gun produced the best results. 5 refs., 6 figs., 4 tabs.

  12. An Analytical Method of Estimating Turbine Performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1948-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of the blading-loss parameter. A variation of blading-loss parameter from 0.3 to 0.5 includes most of the experimental data from the turbine investigated.

  13. An analytical method of estimating turbine performance

    NASA Technical Reports Server (NTRS)

    Kochendorfer, Fred D; Nettles, J Cary

    1949-01-01

    A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.

  14. Star Tracker Performance Estimate with IMU

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Swank, Aaron J.

    2015-01-01

    A software tool for estimating cross-boresight error of a star tracker combined with an inertial measurement unit (IMU) was developed to support trade studies for the Integrated Radio and Optical Communication project (iROC) at the National Aeronautics and Space Administration Glenn Research Center. Typical laser communication systems, such as the Lunar Laser Communication Demonstration (LLCD) and the Laser Communication Relay Demonstration (LCRD), use a beacon to locate ground stations. iROC is investigating the use of beaconless precision laser pointing to enable laser communication at Mars orbits and beyond. Precision attitude knowledge is essential to the iROC mission to enable high-speed steering of the optical link. The preliminary concept to achieve this precision attitude knowledge is to use star trackers combined with an IMU. The Star Tracker Accuracy (STAcc) software was developed to rapidly assess the capabilities of star tracker and IMU configurations. STAcc determines the overall cross-boresight error of a star tracker with an IMU given the characteristic parameters: quantum efficiency, aperture, apparent star magnitude, exposure time, field of view, photon spread, detector pixels, spacecraft slew rate, maximum stars used for quaternion estimation, and IMU angular random walk. This paper discusses the supporting theory used to construct STAcc, verification of the program and sample results.
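
    A rough way to see how star-tracker noise and IMU angular random walk combine is to treat the attitude error between star-tracker updates as the root-sum-square of the tracker's noise-equivalent angle and the gyro drift accumulated since the last update. The sketch below illustrates that simple error-budget view; it is an assumption-laden simplification, not the STAcc model, and the parameter values are illustrative.

        import math

        def cross_boresight_error(nea_arcsec, arw_deg_per_rt_hr, t_since_update_s):
            """RSS of star-tracker noise-equivalent angle and angular-random-walk
            growth since the last star-tracker update (result in arcsec)."""
            arw_arcsec_per_rt_s = arw_deg_per_rt_hr * 3600.0 / math.sqrt(3600.0)
            drift = arw_arcsec_per_rt_s * math.sqrt(t_since_update_s)
            return math.hypot(nea_arcsec, drift)

        # Illustrative: 3 arcsec tracker NEA, 0.01 deg/sqrt(hr) gyro, 10 s between updates
        print(round(cross_boresight_error(3.0, 0.01, 10.0), 2), "arcsec")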

  15. Safety Performance of Airborne Separation: Preliminary Baseline Testing

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Wing, David J.; Baxley, Brian T.

    2007-01-01

    The Safety Performance of Airborne Separation (SPAS) study is a suite of Monte Carlo simulation experiments designed to analyze and quantify safety behavior of airborne separation. This paper presents results of preliminary baseline testing. The preliminary baseline scenario is designed to be very challenging, consisting of randomized routes in generic high-density airspace in which all aircraft are constrained to the same flight level. Sustained traffic density is varied from approximately 3 to 15 aircraft per 10,000 square miles, approximating up to about 5 times today's traffic density in a typical sector. Research at high traffic densities and at multiple flight levels is planned within the next two years. Basic safety metrics for aircraft separation are collected and analyzed. During the progression of experiments, various errors, uncertainties, delays, and other variables potentially impacting system safety will be incrementally introduced to analyze the effect on safety of the individual factors as well as their interaction and collective effect. In this paper we report the results of the first experiment that addresses the preliminary baseline condition tested over a range of traffic densities. Early results at five times the typical traffic density in today's NAS indicate that, under the assumptions of this study, airborne separation can be safely performed. In addition, we report on initial observations from an exploration of four additional factors tested at a single traffic density: broadcast surveillance signal interference, extent of intent sharing, pilot delay, and wind prediction error.

  16. Preliminary relative permeability estimates of methane hydrate-bearing sand

    SciTech Connect

    Seol, Yongkoo; Kneafsey, Timothy J.; Tomutsa, Liviu; Moridis,George J.

    2006-05-08

    The relative permeability to fluids in hydrate-bearing sediments is an important parameter for predicting natural gas production from gas hydrate reservoirs. We estimated the relative permeability parameters (van Genuchten alpha and m) in a hydrate-bearing sand by means of inverse modeling, which involved matching water saturation predictions with observations from a controlled waterflood experiment. We used x-ray computed tomography (CT) scanning to determine both the porosity and the hydrate and aqueous phase saturation distributions in the samples. X-ray CT images showed that hydrate and aqueous phase saturations are non-uniform, and that water flow focuses in regions of lower hydrate saturation. The relative permeability parameters were estimated at two locations in each sample. Differences between the estimated parameter sets at the two locations were attributed to heterogeneity in the hydrate saturation. Better estimates of the relative permeability parameters require further refinement of the experimental design, and better description of heterogeneity in the numerical inversions.
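
    The two estimated parameters enter through the standard van Genuchten-Mualem relations, in which alpha scales the capillary pressure and m shapes both the retention and the relative permeability curves. The sketch below states those relations; the parameter values are illustrative, not the inverted estimates from the study.

        import numpy as np

        def effective_saturation(pc_pa, alpha, m):
            """van Genuchten retention curve: Se = [1 + (alpha*Pc)^n]^(-m), n = 1/(1-m)."""
            n = 1.0 / (1.0 - m)
            return (1.0 + (alpha * np.maximum(pc_pa, 0.0)) ** n) ** (-m)

        def kr_water(se, m):
            """Mualem relative permeability to the aqueous phase."""
            return np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

        # Illustrative parameters (alpha in 1/Pa) and capillary pressures in Pa
        alpha, m = 5e-4, 0.6
        pc = np.array([0.0, 1e3, 5e3, 2e4])
        se = effective_saturation(pc, alpha, m)
        print(np.round(se, 3), np.round(kr_water(se, m), 4))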

  17. Solid Waste Operations Complex W-113: Project cost estimate. Preliminary design report. Volume IV

    SciTech Connect

    1995-01-01

    This document contains Volume IV of the Preliminary Design Report for the Solid Waste Operations Complex W-113 which is the Project Cost Estimate and construction schedule. The estimate was developed based upon Title 1 material take-offs, budgetary equipment quotes and Raytheon historical in-house data. The W-113 project cost estimate and project construction schedule were integrated together to provide a resource loaded project network.

  18. Preliminary Findings on Searcher Performance and Perceptions of Performance in a Hypertext Bibliographic Retrieval System.

    ERIC Educational Resources Information Center

    Wolfram, Dietmar; Dimitroff, Alexandra

    1997-01-01

    Although hypertext system usage has been studied, little research has examined the relationship of searcher performance and perception of performance, particularly for hypertext-based information retrieval systems for bibliographic data. This article reports preliminary findings of a study at the University of Wisconsin-Milwaukee in which 83…

  19. Preliminary supersonic flight test evaluation of performance seeking control

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Gilyard, Glenn B.

    1993-01-01

    Digital flight and engine control, powerful onboard computers, and sophisticated control techniques may improve aircraft performance by maximizing fuel efficiency, maximizing thrust, and extending engine life. An adaptive performance seeking control system for optimizing the quasi-steady state performance of an F-15 aircraft was developed and flight tested. This system has three optimization modes: minimum fuel, maximum thrust, and minimum fan turbine inlet temperature. Tests of the minimum fuel and fan turbine inlet temperature modes were performed at a constant thrust. Supersonic single-engine flight tests of the three modes were conducted using varied afterburning power settings. At supersonic conditions, the performance seeking control law optimizes the integrated airframe, inlet, and engine. At subsonic conditions, only the engine is optimized. Supersonic flight tests showed improvements in thrust of 9 percent, fuel savings of 8 percent, and reductions of up to 85 deg R in turbine temperatures for all three modes. The supersonic performance seeking control structure is described and preliminary results of supersonic performance seeking control tests are given. These findings have implications for improving performance of civilian and military aircraft.

  20. Information for fusion management and performance estimation

    NASA Astrophysics Data System (ADS)

    Mahler, Ronald P. S.

    1998-07-01

    This paper describes a unified, theoretically rigorous approach for measuring the performance of data fusion algorithms, using information theory. The proposed approach is based on 'finite-set statistics' (FISST), a direct generalization of conventional statistics to multisource, multitarget problems. FISST makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems. This can be done, moreover, in such a way that mathematical 'information' can be defined and measured even though an evaluator/end-user may have conflicting or even subjective definitions of what 'informative' means. The result is a scientifically defensible means of (1) comparing the performance of two algorithms with respect to a 'level playing field' when ground truth is known; (2) estimating the internal on-the-fly effectiveness of a given algorithm when ground truth is not known; and (3) dynamically choosing between algorithms (or different modes of a multi-mode algorithm) on the basis of the information content they provide.

  1. Scientific performance estimation of robustness and threat

    NASA Astrophysics Data System (ADS)

    Hoffman, John R.; Sorensen, Eric; Stelzig, Chad A.; Mahler, Ronald P. S.; El-Fallah, Adel I.; Alford, Mark G.

    2002-07-01

    For the last three years at this conference we have been describing the implementation of a unified, scientific approach to performance estimation for various aspects of data fusion: multitarget detection, tracking, and identification algorithms; sensor management algorithms; and adaptive data fusion algorithms. The proposed approach is based on finite-set statistics (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that information can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what 'informative' means. In this presentation, we will show how to extend our previous results to two new problems. First, that of evaluating the robustness of multisensor, multitarget algorithms. Second, that of evaluating the performance of multisource-multitarget threat assessment algorithms.

  2. Preliminary investigations of HE performance characterization using SWIFT

    NASA Astrophysics Data System (ADS)

    Murphy, M. J.; Johnson, C. E.

    2014-05-01

    Preliminary experiments are performed to assess the utility of using the shock wave image framing technique (SWIFT) to characterize high explosive (HE) performance on detonator length and time scales. Columns of XTX 8004, an extrudable RDX-based high explosive, are cured directly within polymethylmethacrylate (PMMA) dynamic witness plates, and SWIFT is employed to directly visualize shock waves driven into PMMA through detonation interaction. Current experiments investigate two-dimensional, axisymmetric test geometries that resemble historic aquarium tests, but on millimeter length scales, and the SWIFT system records 16-frame, time-resolved image sequences at 190 ns inter-framing. Detonation wave velocities are accurately calculated from the time-resolved images, and standard aquarium-test analysis is evaluated to investigate calculated shock pressures at the HE/PMMA interface. Experimental SWIFT results are discussed where the charge diameter of XTX 8004 is varied from 2.0 mm to 6.5 mm.

  3. 43 CFR 11.38 - Assessment Plan-preliminary estimate of damages.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... restoration, rehabilitation, replacement, and/or acquisition of equivalent resources for the injured natural resources; and the compensable value, as defined in § 11.83(c) of this part, of the injured natural... natural resources. (i) The preliminary estimate of costs should take into account the effects,...

  4. 43 CFR 11.38 - Assessment Plan-preliminary estimate of damages.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... restoration, rehabilitation, replacement, and/or acquisition of equivalent resources for the injured natural resources; and the compensable value, as defined in § 11.83(c) of this part, of the injured natural... natural resources. (i) The preliminary estimate of costs should take into account the effects,...

  5. The Use of Artificial Neural Networks to Estimate Speech Intelligibility from Acoustic Variables: A Preliminary Analysis.

    ERIC Educational Resources Information Center

    Metz, Dale Evan; And Others

    1992-01-01

    A preliminary scheme for estimating the speech intelligibility of hearing-impaired speakers from acoustic parameters, using a computerized artificial neural network to process mathematically the acoustic input variables, is outlined. Tests with 60 hearing-impaired speakers found the scheme to be highly accurate in identifying speakers separated by…

  6. Preliminary weight and cost estimates for transport aircraft composite structural design concepts

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Preliminary weight and cost estimates have been prepared for design concepts utilized for a transonic long range transport airframe with extensive applications of advanced composite materials. The design concepts, manufacturing approach, and anticipated details of manufacturing cost reflected in the composite airframe are substantially different from those found in conventional metal structure and offer further evidence of the advantages of advanced composite materials.

  7. Statistical Risk Estimation for Communication System Design --- A Preliminary Look

    NASA Astrophysics Data System (ADS)

    Babuscia, A.; Cheung, K.-M.

    2012-02-01

    Spacecraft are complex systems that involve different subsystems with multiple relationships among them. For these reasons, the design of a spacecraft is a time-evolving process that starts from requirements and evolves over time across different design phases. During this process, a lot of changes can happen. They can affect mass and power at the component level, at the subsystem level, and even at the system level. Each spacecraft has to respect the overall constraints in terms of mass and power: for this reason, it is important to be sure that the design does not exceed these limitations. Current practice in system models primarily deals with this problem, allocating margins on individual components and on individual subsystems. However, a statistical characterization of the fluctuations in mass and power of the overall system (i.e., the spacecraft) is missing. This lack of adequate statistical characterization would result in a risky spacecraft design that might not fit the mission constraints and requirements, or in a conservative design that might not fully utilize the available resources. Due to the complexity of the problem and to the different expertise and knowledge required to develop a complete risk model for a spacecraft design, this article is focused on risk estimation for a specific spacecraft subsystem: the communication subsystem. The current research aims to be a proof of concept of a risk-based design optimization approach, which can then be further expanded to the design of other subsystems as well as to the whole spacecraft. The objective of this research is to develop a mathematical approach to quantify the likelihood that the major design drivers of mass and power of a space communication system would meet the spacecraft and mission requirements and constraints through the mission design lifecycle. Using this approach, the communication system designers will be able to evaluate and to compare different communication architectures in a risk
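
    The likelihood the article aims to quantify can be illustrated with a small Monte Carlo sketch: sample the uncertain mass and power of each communication-subsystem component and estimate the probability that the totals stay within their allocations. This is an illustrative construction under assumed distributions and limits, not the article's method or data.

        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 100_000

        # Illustrative component uncertainties: (mean, std) for mass [kg] and power [W]
        components = {"transponder": ((6.0, 0.6), (35.0, 4.0)),
                      "amplifier":   ((3.5, 0.5), (60.0, 8.0)),
                      "antenna":     ((8.0, 1.2), (2.0, 0.3))}

        mass_limit_kg, power_limit_w = 20.0, 110.0    # assumed subsystem allocations

        total_mass = np.zeros(n_trials)
        total_power = np.zeros(n_trials)
        for (m_mu, m_sd), (p_mu, p_sd) in components.values():
            total_mass += rng.normal(m_mu, m_sd, n_trials)
            total_power += rng.normal(p_mu, p_sd, n_trials)

        ok = (total_mass <= mass_limit_kg) & (total_power <= power_limit_w)
        print(f"P(design meets mass and power allocations) ~ {ok.mean():.2f}")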

  8. Preliminary Scaling Estimate for Select Small Scale Mixing Demonstration Tests

    SciTech Connect

    Wells, Beric E.; Fort, James A.; Gauglitz, Phillip A.; Rector, David R.; Schonewill, Philip P.

    2013-09-12

    The Hanford Site double-shell tank (DST) system provides the staging location for waste that will be transferred to the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Specific WTP acceptance criteria for waste feed delivery describe the physical and chemical characteristics of the waste that must be met before the waste is transferred from the DSTs to the WTP. One of the more challenging requirements relates to the sampling and characterization of the undissolved solids (UDS) in a waste feed DST because the waste contains solid particles that settle and their concentration and relative proportion can change during the transfer of the waste in individual batches. A key uncertainty in the waste feed delivery system is the potential variation in UDS transferred in individual batches in comparison to an initial sample used for evaluating the acceptance criteria. To address this uncertainty, a number of small-scale mixing tests have been conducted as part of Washington River Protection Solutions’ Small Scale Mixing Demonstration (SSMD) project to determine the performance of the DST mixing and sampling systems.

  9. Voltage-Controlled Sapphire Oscillator: Design, Development, and Preliminary Performance

    NASA Astrophysics Data System (ADS)

    Wang, R. T.; Dick, G. J.; Tjoelker, R. L.

    2007-08-01

    We present the design for a new short-term frequency standard, the voltage-controlled sapphire oscillator, as a practical and lower-cost alternative to a cryogenic sapphire oscillator operating at liquid helium temperatures. Performance goals are a frequency stability of 1 × 10⁻¹⁴ (1 s ≤ τ ≤ 100 s), more than 2 years of continuous operation, and practical operability. Key elements include the sapphire resonator, low-power and long-life cryocooler, frequency compensation method, and cryo-Pound design. We report the design verification, experimental results, and test results of the cryocooler environmental sensitivity, as well as a preliminary stability measurement.
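
    Stability goals such as 1 × 10⁻¹⁴ over 1 s ≤ τ ≤ 100 s are conventionally stated as the Allan deviation of the fractional frequency. The sketch below computes a simple non-overlapping Allan deviation on synthetic white-frequency-noise data; it is illustrative only and not the measurement procedure used in the paper.

        import numpy as np

        def allan_deviation(y, m):
            """Non-overlapping Allan deviation of fractional-frequency data y
            for averaging factor m (averaging time tau = m * tau0)."""
            n_bins = len(y) // m
            y_bar = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)
            return np.sqrt(0.5 * np.mean(np.diff(y_bar) ** 2))

        rng = np.random.default_rng(2)
        tau0 = 1.0                                    # 1 s samples
        y = 1e-14 * rng.standard_normal(100_000)      # synthetic white frequency noise
        for m in (1, 10, 100):
            print(f"tau = {m * tau0:5.0f} s  sigma_y ~ {allan_deviation(y, m):.1e}")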

  10. Advanced stellar compass onboard autonomous orbit determination, preliminary performance.

    PubMed

    Betto, Maurizio; Jørgensen, John L; Jørgensen, Peter S; Denver, Troelz

    2004-05-01

    Deep space exploration is on the agenda of the major space agencies worldwide; certainly the European Space Agency (SMART Program) and the American NASA (New Millennium Program) have set up programs to allow the development and the demonstration of technologies that can reduce the risks and the cost of deep space missions. From past experience, it appears that navigation is the Achilles heel of deep space missions. Performed on ground, this imposes considerable constraints on the entire system and limits operations. This makes it very expensive to execute, especially when the mission lasts several years and, furthermore, it is not failure tolerant. Nevertheless, to date, ground navigation has been the only viable solution. The technology breakthrough of advanced star trackers, like the advanced stellar compass (ASC), might change this situation. Indeed, exploiting the capabilities of this instrument, the authors have devised a method to determine the orbit of a spacecraft autonomously, onboard, and without a priori knowledge of any kind. The solution is robust and fast. This paper presents the preliminary performance obtained during the ground testing in August 2002 at the Mauna Kea Observatories. The main goals were: (1) to assess the robustness of the method in solving autonomously, onboard, the position lost-in-space problem; (2) to assess the preliminary accuracy achievable with a single planet and a single observation; (3) to verify that the autonomous navigation (AutoNav) module could be implemented into an ASC without degrading the attitude measurements; and (4) to identify the areas of development and consolidation. The results obtained are very encouraging. PMID:15220158

  11. Advanced stellar compass onboard autonomous orbit determination, preliminary performance.

    PubMed

    Betto, Maurizio; Jørgensen, John L; Jørgensen, Peter S; Denver, Troelz

    2004-05-01

    Deep space exploration is on the agenda of the major space agencies worldwide; certainly the European Space Agency (SMART Program) and the American NASA (New Millennium Program) have set up programs to allow the development and the demonstration of technologies that can reduce the risks and the cost of deep space missions. From past experience, it appears that navigation is the Achilles heel of deep space missions. Performed on ground, this imposes considerable constraints on the entire system and limits operations. This makes it very expensive to execute, especially when the mission lasts several years and, furthermore, it is not failure tolerant. Nevertheless, to date, ground navigation has been the only viable solution. The technology breakthrough of advanced star trackers, like the advanced stellar compass (ASC), might change this situation. Indeed, exploiting the capabilities of this instrument, the authors have devised a method to determine the orbit of a spacecraft autonomously, onboard, and without a priori knowledge of any kind. The solution is robust and fast. This paper presents the preliminary performance obtained during the ground testing in August 2002 at the Mauna Kea Observatories. The main goals were: (1) to assess the robustness of the method in solving autonomously, onboard, the position lost-in-space problem; (2) to assess the preliminary accuracy achievable with a single planet and a single observation; (3) to verify that the autonomous navigation (AutoNav) module could be implemented into an ASC without degrading the attitude measurements; and (4) to identify the areas of development and consolidation. The results obtained are very encouraging.

  12. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with necessary tools for performance benchmarking for algorithms of liver segmentation and volume estimation used for liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of liver, two kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the number of CT scans to about 300 sets in the near future and plan to make the DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
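
    Benchmarking segmentation against ground-truth contours typically reports overlap and volume-error measures. The sketch below shows two such measures (Dice coefficient and signed relative volume difference) on binary masks; the specific metric choice is an assumption for illustration, since the abstract does not list the platform's measure protocols.

        import numpy as np

        def dice(seg, gt):
            """Dice similarity coefficient between two boolean masks."""
            seg, gt = seg.astype(bool), gt.astype(bool)
            denom = seg.sum() + gt.sum()
            return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

        def relative_volume_difference(seg, gt, voxel_volume_mm3=1.0):
            """Signed relative volume difference of a segmentation vs. ground truth."""
            v_seg = seg.sum() * voxel_volume_mm3
            v_gt = gt.sum() * voxel_volume_mm3
            return (v_seg - v_gt) / v_gt

        # Tiny illustrative masks
        gt = np.zeros((10, 10, 10), dtype=bool)
        gt[2:8, 2:8, 2:8] = True
        seg = np.zeros_like(gt)
        seg[3:8, 2:8, 2:8] = True
        print(round(dice(seg, gt), 3), round(relative_volume_difference(seg, gt), 3))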

  13. Preliminary Analysis of Remote Monitoring & Robotic Concepts for Performance Confirmation

    SciTech Connect

    D.A. McAffee

    1997-02-18

    main Performance Confirmation monitoring needs and requirements during the post-emplacement preclosure period. This includes radiological, non-radiological, host rock, and infrastructure performance monitoring needs. It also includes monitoring for possible off-normal events. (Presented in Section 7.3). (3) Identify general approaches and methods for obtaining performance information from within the emplacement drifts for Performance Confirmation. (Presented in Section 7.4). (4) Review and discuss available technologies and design strategies that may permit the use of remotely operated systems within the hostile thermal and radiation environment expected within the emplacement drifts. (Presented in Section 7.5). (5) Based on Performance Confirmation monitoring needs and available technologies, identify potential application areas for remote systems and robotics for post-emplacement preclosure Performance Confirmation activities (Presented in Section 7.6). (6) Develop preliminary remote monitoring and robotic concepts for post-emplacement, preclosure Performance Confirmation activities. (Presented in Section 7.7). This analysis is being performed very early in the systems engineering cycle, even as issues related to the Performance Confirmation program planning phase are being formulated and while the associated needs, constraints and objectives are yet to be fully determined and defined. This analysis is part of an issue formulation effort and is primarily concerned with identification and description of key issues related to remotely monitoring repository performance for Performance Confirmation. One of the purposes of this analysis is to provide an early investigation of potential design challenges that may have a high impact on future design concepts. This analysis can be used to guide future concept development and help assess what is feasible and achievable by application of remote systems technology. Future design and systems engineering analysis with applicable

  14. EGADS: A microcomputer program for estimating the aerodynamic performance of general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Melton, John E.

    1994-01-01

    EGADS is a comprehensive preliminary design tool for estimating the performance of light, single-engine general aviation aircraft. The software runs on the Apple Macintosh series of personal computers and assists amateur designers and aeronautical engineering students in performing the many repetitive calculations required in the aircraft design process. The program makes full use of the mouse and standard Macintosh interface techniques to simplify the input of various design parameters. Extensive graphics, plotting, and text output capabilities are also included.

  15. Preliminary investigation of an ultrasound method for estimating pressure changes in deep-positioned vessels

    NASA Astrophysics Data System (ADS)

    Olesen, Jacob Bjerring; Villagomez-Hoyos, Carlos Armando; Traberg, Marie Sand; Chee, Adrian J. Y.; Yiu, Billy Y. S.; Ho, Chung Kit; Yu, Alfred C. H.; Jensen, Jørgen Arendt

    2016-04-01

    This paper presents a method for measuring pressure changes in deep-tissue vessels using vector velocity ultrasound data. The large penetration depth is ensured by acquiring data using a low-frequency phased array transducer. Vascular pressure changes are then calculated from 2-D angle-independent vector velocity fields using a model based on the Navier-Stokes equations. Experimental scans are performed on a fabricated flow phantom having a constriction of 36% at a depth of 100 mm. Scans are carried out using a phased array transducer connected to the experimental scanner SARUS. 2-D fields of angle-independent vector velocities are acquired using directional synthetic aperture vector flow imaging. The obtained results are evaluated by comparison to a 3-D numerical simulation model with a geometry equivalent to that of the designed phantom. The study showed pressure drops across the constricted phantom varying from -40 Pa to 15 Pa, with a standard deviation of 32% and a bias of 25% relative to the peak simulated pressure drop. This preliminary study shows that pressure can be estimated non-invasively to a depth that enables cardiac scans, and thereby offers the possibility of detecting the pressure drops across the mitral valve.
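
    The abstract states that pressure changes are derived from 2-D vector velocity fields through the Navier-Stokes equations but gives no implementation detail. The sketch below is a generic finite-difference version of that idea: it forms the pressure gradient from the momentum balance and integrates it along the vessel centerline. Grid layout, blood density and viscosity values, and the synthetic velocity field are assumptions, not the authors' processing chain.

```python
import numpy as np

def pressure_gradient(vx, vy, dx, dy, dt, vx_prev, vy_prev, rho=1060.0, mu=0.004):
    """Pressure gradient (Pa/m) from 2-D velocity fields via the Navier-Stokes momentum balance.
    Grid axis 0 is depth (y), axis 1 is lateral position (x); velocities in m/s."""
    dvx_dt = (vx - vx_prev) / dt
    dvy_dt = (vy - vy_prev) / dt
    dvx_dy, dvx_dx = np.gradient(vx, dy, dx)
    dvy_dy, dvy_dx = np.gradient(vy, dy, dx)
    # convective acceleration (v . grad) v
    conv_x = vx * dvx_dx + vy * dvx_dy
    conv_y = vx * dvy_dx + vy * dvy_dy
    # viscous term mu * laplacian(v)
    lap_vx = np.gradient(dvx_dx, dx, axis=1) + np.gradient(dvx_dy, dy, axis=0)
    lap_vy = np.gradient(dvy_dx, dx, axis=1) + np.gradient(dvy_dy, dy, axis=0)
    dp_dx = -rho * (dvx_dt + conv_x) + mu * lap_vx
    dp_dy = -rho * (dvy_dt + conv_y) + mu * lap_vy
    return dp_dx, dp_dy

def pressure_drop_along_centerline(dp_dx, dx, row):
    """Integrate dp/dx along one grid row to get the pressure change along the vessel."""
    return np.cumsum(dp_dx[row, :]) * dx

# Tiny synthetic example: a steady jet speeding up through a constriction
ny, nx = 64, 128
x = np.linspace(0.0, 0.04, nx)                       # 40 mm lateral extent
vx = np.tile(0.2 + 0.3 * np.exp(-((x - 0.02) / 0.005) ** 2), (ny, 1))
vy = np.zeros_like(vx)
dp_dx, _ = pressure_gradient(vx, vy, dx=x[1] - x[0], dy=0.0005, dt=1e-3,
                             vx_prev=vx, vy_prev=vy)
drop = pressure_drop_along_centerline(dp_dx, x[1] - x[0], row=ny // 2)
print(round(drop.min(), 1), "Pa peak drop near the constriction")
```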

  16. Enhancement of perfluoropolyether boundary lubrication performance: I. Preliminary results

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.; Ajayi, O. O.; Goodell, A. J.; Wedeven, L. D.; Devine, E.; Premore, R. E.

    1995-01-01

    A ball bearing simulator operating under starved conditions was used to evaluate the boundary lubrication performance of a perfluoropolyether (PFPE) Krytox 143 AB. Several approaches to enhance boundary lubrication were studied. These included: (1) soluble boundary additives, (2) bearing surface modifications, (3) 'run-in' surface films, and (4) ceramic bearing components. In addition, results were compared with two non-perfluorinated liquid lubricant formulations. Based on these preliminary tests, the following tentative conclusions can be made: (1) substantial improvements in boundary lubrication performance were observed with a beta-diketone boundary additive and a tricresyl phosphate (TCP) liquid surface pretreatment; (2) the use of rough Si3N4 balls (Ra = 40 micro-in) also provided substantial improvement but with concomitant abrasive wear; (3) marginal improvements were seen with two boundary additives (a phosphine and a phosphatriazine) and a neat (100%) fluid (a carboxylic acid terminated PFPE); and surface pretreatments with a synthetic hydrocarbon, a PTFE coating, and TiC coated 440C and smooth Si3N4 balls (Ra less than 1 micro-in); and (4) two non-PFPE lubricant formulations (a PAO and a synthetic hydrocarbon) yielded substantial improvements.

  17. Development of the wireless ultra-miniaturized inertial measurement unit WB-4: preliminary performance evaluation.

    PubMed

    Lin, Zhuohua; Zecca, Massimiliano; Sessa, Salvatore; Bartolomeo, Luca; Ishii, Hiroyuki; Takanishi, Atsuo

    2011-01-01

    This paper presents a preliminary performance evaluation of our new wireless ultra-miniaturized inertial measurement unit (IMU) WB-4 by comparison with the Vicon motion capture system. The WB-4 IMU primarily contains a mother board for motion sensing, a Bluetooth module for wireless data transmission to a PC, and a Li-Polymer battery for power supply. The mother board is provided with a microcontroller and 9-axis inertial sensors (miniaturized MEMS accelerometer, gyroscope and magnetometer) to measure orientation. A quaternion-based extended Kalman filter (EKF) integrated with an R-Adaptive algorithm for automatic estimation of the measurement covariance matrix is implemented for the sensor fusion to retrieve the attitude. The experimental results showed that the wireless ultra-miniaturized WB-4 IMU could provide high-accuracy performance for the roll and pitch angles. The yaw angle, which showed reasonable performance, needs further evaluation.
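
    The fusion filter itself (a quaternion EKF with an R-Adaptive measurement covariance) is not detailed in the abstract, and a full EKF is too long to sketch here. As a much simpler stand-in that illustrates the underlying quantities, the snippet below integrates gyroscope rates into a quaternion and computes the roll/pitch aiding measurement from the accelerometer; it is not the WB-4 algorithm.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions given as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def propagate_attitude(q, gyro_rad_s, dt):
    """One gyroscope integration step: q_dot = 0.5 * q (x) [0, omega]."""
    omega = np.concatenate(([0.0], gyro_rad_s))
    q = q + 0.5 * quat_mult(q, omega) * dt
    return q / np.linalg.norm(q)            # re-normalise to stay a unit quaternion

def roll_pitch_from_accel(accel):
    """Aiding measurement: roll and pitch (rad) from the sensed gravity vector."""
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

# Usage example: integrate a constant 10 deg/s roll rate for one second at 100 Hz
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = propagate_attitude(q, np.radians([10.0, 0.0, 0.0]), dt=0.01)
print(q)   # approximately a 10-degree rotation about the body x axis
```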

  19. APPENDIX C. PRELIMINARY ESTIMATES OF COSTS OF MERCURY EMISSION CONTROL TECHNOLOGIES FOR ELECTRIC UTILITY BOILERS

    EPA Science Inventory

    This appendix describes the development of a preliminary assessment of the performance and cost of mercury emission control technologies for utility boilers. It is to supplement an EPA examination of the co-benefits of potential pollution control options for the electric power in...

  20. Design and Preliminary Performance Testing of Electronegative Gas Plasma Thruster

    NASA Technical Reports Server (NTRS)

    Liu, Thomas M.; Schloeder, Natalie R.; Walker, Mitchell L. R.; Polzin, Kurt A.; Dankanich, John W.; Aanesland, Ane

    2014-01-01

    In classical gridded electrostatic ion thrusters, positively charged ions are generated from a plasma discharge of noble gas propellant and accelerated to provide thrust. To maintain overall charge balance on the propulsion system, a separate electron source is required to neutralize the ion beam as it exits the thruster. However, if high-electronegativity propellant gases (e.g., sulfur hexafluoride) are instead used, a plasma discharge can result consisting of both positively and negatively charged ions. Extracting such electronegative plasma species for thrust generation (e.g., with time-varying, bipolar ion optics) would eliminate the need for a separate neutralizer cathode subsystem. In addition, for thrusters utilizing an RF plasma discharge, further simplification of the ion thruster power system may be possible by also using the RF power supply to bias the ion optics. Recently, the PEGASES (Plasma propulsion with Electronegative gases) thruster prototype successfully demonstrated proof-of-concept operation in alternately accelerating positively and negatively charged ions from an RF discharge of a mixture of argon and sulfur hexafluoride. In collaboration with NASA Marshall Space Flight Center (MSFC), the Georgia Institute of Technology High-Power Electric Propulsion Laboratory (HPEPL) is applying the lessons learned from PEGASES design and testing to develop a new thruster prototype. This prototype will incorporate design improvements and undergo gridless operational testing and diagnostics checkout at HPEPL in April 2014. Performance mapping with ion optics will be conducted at NASA MSFC starting in May 2014. The proposed paper discusses the design and preliminary performance testing of this electronegative gas plasma thruster prototype.

  1. Preliminary liver dose estimation in the new facility for biomedical applications at the RA-3 reactor.

    PubMed

    Gadan, M; Crawley, V; Thorp, S; Miller, M

    2009-07-01

    As part of the project concerning the irradiation of a section of the human liver left lobe, a preliminary estimation of the expected dose was performed. To obtain proper input values for the calculation, neutron flux and gamma dose rate characterizations were carried out using adequate portions of cow or pig liver covered with demineralized water simulating the preservation solution. Irradiations were done inside a container specially designed to maintain the temperature of the organ and a reproducible irradiation position (which will be of importance for future planning purposes). Implantable rhodium-based self-powered neutron detectors (SPNDs) were developed to obtain both external and internal neutron flux profiles. Implantation of the SPNDs was done along the central longitudinal axis of the samples, where the lowest flux is expected. The gamma dose rate was obtained using a neutron-shielded graphite ionization chamber moved along the external surfaces of the samples. The internal neutron profile was sufficiently uniform to allow for a single, static irradiation of the liver. For dose estimation, the irradiation condition was set to obtain a maximum of 15 Gy-eq in healthy tissue. Additionally, boron concentrations reported in the literature (47 ppm in tumor and 8 ppm in healthy tissue) and a more conservative relationship (30/10 ppm) were used. To make a conservative estimate of the dose, the following assumptions were made: (i) the minimum measured neutron flux inside the sample (approximately 5 x 10^9 n cm^-2 s^-1) was used to calculate the dose in tumor; (ii) the maximum measured neutron flux, considering both internal and external profiles (approximately 8.7 x 10^9 n cm^-2 s^-1), was used to calculate the dose in healthy tissue; (iii) the maximum measured gamma dose rate (approximately 13.5 Gy h^-1) was considered for both tumor and healthy tissue. The tumor tissue dose was approximately 69 Gy-eq for 47 ppm of (10)B and approximately 42 Gy-eq for 30 ppm, for a maximum dose of 15 Gy

  2. Preliminary Performance of CdZnTe Imaging Detector Prototypes

    NASA Technical Reports Server (NTRS)

    Ramsey, B.; Sharma, D. P.; Meisner, J.; Gostilo, V.; Ivanov, V.; Loupilov, A.; Sokolov, A.; Sipila, H.

    1999-01-01

    The promise of good energy and spatial resolution coupled with high efficiency and near-room-temperature operation has fuelled a large international effort to develop Cadmium-Zinc-Telluride (CdZnTe) for the hard-x-ray region. We present here preliminary results from our development of small-pixel imaging arrays fabricated on 5x5x1-mm and 5x5x2-mm spectroscopy and discriminator-grade material. Each array has 16 (4x4) 0.65-mm gold readout pads on a 0.75-mm pitch, with each pad connected to a discrete preamplifier via a pulse-welded gold wire. Each array is mounted on a 3-stage Peltier cooler and housed in an ion-pump-evacuated housing which also contains a hybrid micro-assembly for the 16 channels of electronics. We have investigated the energy resolution and approximate photopeak efficiency for each pixel at several energies and have used an ultra-fine beam x-ray generator to probe the performance at the pixel boundaries. Both arrays gave similar results, and at an optimum temperature of -20 °C we achieved between 2 and 3% FWHM energy resolution at 60 keV and around 15% at 5.9 keV. We found that all the charge was contained within 1 pixel until very close to the pixel's edge, where it would start to be shared with its neighbor. Even between pixels, all the charge would be appropriately shared with no apparent loss of efficiency or resolution. Full details of these measurements will be presented, together with their implications for future imaging-spectroscopy applications.

  3. Preliminary age, growth and maturity estimates of spotted ratfish (Hydrolagus colliei) in British Columbia

    NASA Astrophysics Data System (ADS)

    King, J. R.; McPhie, R. P.

    2015-05-01

    The spotted ratfish (Hydrolagus colliei) is a chimaeroid ranging from southeast Alaska to Baja California and found at depths of up to 1029 m. Despite being widespread and ubiquitous, few biological parameter estimates exist for spotted ratfish due to a lack of suitable ageing structures to estimate age and growth. We present preliminary results of age, growth and maturity estimates based on a new method in which tritor ridges are counted on the vomerine tooth plate. We also provide a method for estimating the number of worn tritor ridges based on tooth plate diameter measurements for the spotted ratfish. The tritor ridges are distinct bumps that are easy to identify, and precision estimates between readers suggest that this method is transferable. Tritor ridges are a potential structure for estimating age in H. colliei, and we provide recommendations for future research to improve the method. We sampled 269 spotted ratfish captured in trawl surveys off the coast of British Columbia, ranging in size from 74 to 495 mm in precaudal length (PCL). The estimated ages ranged from 2 to 16 years for males and from 2 to 21 years for females. The von Bertalanffy, von Bertalanffy with known size at birth, Gompertz and logistic growth models were fitted to the data. Based on the Akaike information criterion corrected for sample size and number of parameters estimated, the logistic growth curve was selected as most suitable. The logistic growth model yielded the following parameter estimates: L∞=407.22 mm (PCL), k=0.23 year-1, t0=-7.06 years for males; L∞=494.52 mm (PCL), k=0.26 year-1, t0=-8.35 years for females. Estimated ages at 50% maturity were 12 and 14 years for males and females, respectively. Correspondingly, the size-at-50%-maturity estimates were smaller for males (302 mm, PCL) than females (393 mm, PCL). Both estimates are larger than those made for spotted ratfish off California, indicating regional differences in life history traits for this species. Our preliminary
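
    Assuming the common three-parameter logistic form L(t) = L∞ / (1 + exp(-k(t - t0))) (the authors' exact parameterization may differ), the reported estimates can be turned into length-at-age predictions as in the sketch below.

```python
import numpy as np

def logistic_length_at_age(age_years, L_inf, k, t0):
    """Logistic growth curve L(t) = L_inf / (1 + exp(-k * (t - t0))), length in mm PCL."""
    return L_inf / (1.0 + np.exp(-k * (age_years - t0)))

# Reported parameter estimates (precaudal length, mm)
males   = dict(L_inf=407.22, k=0.23, t0=-7.06)
females = dict(L_inf=494.52, k=0.26, t0=-8.35)

ages = np.arange(2, 22)
print(np.round(logistic_length_at_age(ages, **males), 1))
print(np.round(logistic_length_at_age(ages, **females), 1))
```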

  4. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    PubMed

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
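
    The shrinkage idea behind reliability adjustment can be illustrated with a minimal empirical-Bayes-style sketch: each physician's observed rate is pulled toward the group mean in proportion to the reliability of his or her own estimate, which grows with case volume. The formula, variable names and toy numbers below are illustrative assumptions, not the study's specific Bayesian method.

```python
import numpy as np

def reliability_adjust(observed_rate, n_cases, group_mean, between_var, within_var):
    """Shrink each physician's observed rate toward the provider-group mean.

    reliability = signal variance / (signal variance + noise variance of the
    physician's own estimate); the noise variance shrinks as case volume grows."""
    noise_var = within_var / np.asarray(n_cases, dtype=float)
    reliability = between_var / (between_var + noise_var)
    adjusted = reliability * np.asarray(observed_rate) + (1.0 - reliability) * group_mean
    return adjusted, reliability

# Toy example: three physicians with very different case volumes
rates = np.array([0.30, 0.10, 0.18])       # observed complication rates
volumes = np.array([12, 400, 80])           # cases per physician
adj, rel = reliability_adjust(rates, volumes, group_mean=0.15,
                              between_var=0.001, within_var=0.15 * 0.85)
print(adj.round(3), rel.round(2))           # the low-volume physician shrinks most
```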

  5. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 2, Technical basis

    SciTech Connect

    Not Available

    1992-12-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume, Volume 2, contains the technical basis for the 1992 PA. Specifically, it describes the conceptual basis for consequence modeling and the PA methodology, including the selection of scenarios for analysis, the determination of scenario probabilities, and the estimation of scenario consequences using a Monte Carlo technique and a linked system of computational models. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with the long-term requirements of the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses related to the preliminary comparison with 40 CFR 191B. Volume 5 contains uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance. Finally, guidance derived from the entire 1992 PA is presented in Volume 6.

  6. [Estimation of bleomycin-induced chromosome aberrations in lymphocytes of laryngeal cancer subjects. Preliminary report].

    PubMed

    Kita, S; Jarmuz, M; Dabrowski, P; Biegalski, W; Jezewska, A; Kowalczyk, M; Szyfter, W; Szyfter, K

    1999-01-01

    Chromosome instability is associated with an increased risk of malignancy. However, the quantitative analysis of chromosome breaks provided by the bleomycin test requires additional analysis aimed at the localisation of chromosome aberrations. For this reason, the metaphase slides prepared for the bleomycin test were stained with the fluorochrome DAPI to estimate chromosome breaks in particular chromosomes. The additional staining of chromosomes can be regarded as an extension of the classical bleomycin test aimed at the identification of structural aberrations. Preliminary results indicate that the most frequent chromosome breaks were found in chromosomes 1, 2, 3, 7 and 13. PMID:10481493

  7. Estimation of Croplands in West Africa using Global Land Cover and Land Use Datasets: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Adhikari, P.; de Beurs, K.

    2013-12-01

    Africa is vulnerable to the effects of global climate change, resulting in reduced agricultural production and worsening food security. Studies show that Africa has the lowest cereal yield compared to other regions of the world. The situation is particularly dire in East, Central and West Africa. Despite their low cereal yield, the population of East, Central and West Africa doubled between 1980 and 2007. Furthermore, West Africa has a history of severe and long droughts which have occasionally caused widespread famine. To understand how global climate change and land cover change have impacted crop production (yield), it is important to estimate croplands in the region. The objective of this study is to compare ten publicly available land cover and land use datasets, covering different time periods, to estimate croplands in West Africa. The land cover and land use datasets used cover the period from the early 1990s to 2010. Preliminary results show a high variability in cropland estimates. For example, in Benin, the estimated cropland area varies from 2.5 to 21% of the total area, while it varies from 3 to 8% in Niger. Datasets with a finer resolution (≤ 1,000 m) have consistently estimated comparable cropland areas across all countries. Several categorical verification statistics such as probability of detection (POD), false alarm ratio (FAR) and critical success index (CSI) are also used to analyze the correspondence between estimated and observed cropland pixels at the scales of 1 km and 10 km.
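
    The categorical verification statistics named in the abstract have standard contingency-table definitions (POD = hits/(hits+misses), FAR = false alarms/(hits+false alarms), CSI = hits/(hits+misses+false alarms)); a sketch with synthetic cropland masks follows. The masks and grid are assumptions for illustration only.

```python
import numpy as np

def categorical_scores(estimated, observed):
    """Probability of detection, false alarm ratio and critical success index
    for binary cropland masks (True = cropland pixel)."""
    estimated = np.asarray(estimated, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    hits = np.sum(estimated & observed)
    misses = np.sum(~estimated & observed)
    false_alarms = np.sum(estimated & ~observed)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# Toy grid: compare a land-cover product's cropland mask against reference pixels
rng = np.random.default_rng(1)
reference = rng.random((100, 100)) < 0.10          # roughly 10% cropland
product = reference.copy()
flip = rng.random(reference.shape) < 0.03          # introduce some disagreement
product[flip] = ~product[flip]
print(categorical_scores(product, reference))
```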

  8. Intellectual Competence and Academic Performance: Preliminary Validation of a Model

    ERIC Educational Resources Information Center

    Chamorro-Premuzic, Tomas; Arteche, Adriane

    2008-01-01

    The present study provides a preliminary empirical test of [Chamorro-Premuzic, T., & Furnham, A. (2004). A possible model to understand the personality-intelligence interface. "British Journal of Psychology," 95, 249-264], [Chamorro-Premuzic, T., & Furnham, A. (2006a). Intellectual competence and the intelligent personality: A third way in…

  9. The Estimate of Risk of Adolescent Sexual Offense Recidivism (ERASOR): preliminary psychometric data.

    PubMed

    Worling, James R

    2004-06-01

    The Estimate of Risk of Adolescent Sexual Offense Recidivism (ERASOR) is an empirically guided checklist designed to assist clinicians in estimating the short-term risk of a sexual reoffense for youths aged 12-18 years. The ERASOR provides objective coding instructions for 25 risk factors (16 dynamic and 9 static). To investigate the psychometric properties, risk ratings were collected from 28 clinicians who evaluated 136 adolescent males (aged 12-18 years) following comprehensive clinical assessments. Preliminary psychometric data (i.e., interrater agreement, item-total correlation, internal consistency) were found to be supportive of the reliability and item composition of the tool. ERASOR ratings also significantly discriminated adolescents based on whether or not they had previously been sanctioned for a prior sexual offense. PMID:15326883

  10. Estimating laser transit anemometry noise performance capabilities

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Hunter, William W., Jr.

    1989-01-01

    A Monte Carlo based LTA (laser transit anemometry) simulation system has been used to perform a detailed evaluation of a set of processing algorithms proposed by Mayo and Smart (1984) for the extraction of two-dimensional flow parameters from LTA data sets collected in a plane normal to the optical axis of the system. The present evaluation includes data ensembles containing 0.0, 5.0, 10.0, and 20.0 percent background noise levels in the constituent correlograms. The results of these evaluations indicate that for turbulence levels of up to 10.0 percent the processing system is able to extract the necessary flow parameters accurately from the LTA data sets. Mean velocity magnitude and flow angle are measurable to within 2.0 percent for turbulence intensity levels of up to 14.0 percent. Standard deviations are measurable to within 10.0 percent over a turbulence range of 3.0-10.0 percent at the same noise levels. These results indicate that the algorithms described have applications in fluid flow surveys.

  11. Preliminary measurement-based estimates of PAH emissions from oil sands tailings ponds

    NASA Astrophysics Data System (ADS)

    Galarneau, Elisabeth; Hollebone, Bruce P.; Yang, Zeyu; Schuster, Jasmin

    2014-11-01

    Tailings ponds in the oil sands region (OSR) of western Canada are suspected sources of polycyclic aromatic hydrocarbons (PAHs) to the atmosphere. In the absence of detailed characterization or direct flux measurements, we present preliminary measurement-based estimates of the emissions of thirteen priority PAHs from the ponds. Using air concentrations measured under the Joint Canada-Alberta Oil Sands Monitoring Plan and water concentrations from a small sampling campaign in 2013, the total flux of 13 US EPA priority PAHs (fluorene to benzo[ghi]perylene) was estimated to be upward from water to air and to total 1069 kg y-1 for the region as a whole. By comparison, the most recent air emissions reported to Canada's National Pollutant Release Inventory (NPRI) from oil sands facilities totalled 231 kg y-1. Exchange fluxes for the three remaining priority PAHs (naphthalene, acenaphthylene and acenaphthene) could not be quantified but evidence suggests that they are also upward from water to air. These results indicate that tailings ponds may be an important PAH source to the atmosphere that is missing from current inventories in the OSR. Uncertainty and sensitivity analyses lend confidence to the estimated direction of air-water exchange being upward from water to air. However, more detailed characterization of ponds at other facilities and direct flux measurements are needed to confirm the quantitative results presented herein.

  12. The relative performance of targeted maximum likelihood estimators.

    PubMed

    Porter, Kristin E; Gruber, Susan; van der Laan, Mark J; Sekhon, Jasjeet S

    2011-01-01

    There is an active debate in the literature on censored data about the relative performance of model based maximum likelihood estimators, IPCW-estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins, et al. (2007), Tsiatis and Davidian (2007), Tan (2007) and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs that guarantee that the parametric submodel employed by the TMLE procedure respects the global bounds on the continuous outcomes are especially suitable for dealing with positivity violations because, in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges. PMID:21931570
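
    As a hedged illustration of the estimator being discussed, the sketch below implements a minimal TMLE for the mean of an outcome subject to missingness in a Kang-and-Schafer-style setting: an initial outcome regression is fluctuated along the clever covariate 1/g(W) on the logit scale so that the update respects the outcome bounds, and the targeted fit is averaged. The data-generating process, learners and truncation levels are assumptions; the article's own simulations and estimators are more elaborate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from scipy.optimize import minimize_scalar
from scipy.special import expit, logit

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 2))                          # baseline covariates
delta = rng.random(n) < expit(0.8 * W[:, 0] - 0.5 * W[:, 1])   # 1 = outcome observed
Y = 2.0 + W[:, 0] + 0.5 * W[:, 1] + rng.normal(size=n)

# Step 0: map the outcome into (0, 1) so the fluctuation respects its bounds
a, b = Y[delta].min(), Y[delta].max()
Ys = (Y - a) / (b - a)

# Step 1: initial outcome regression Q0(W) = E[Ys | delta = 1, W]
Q0 = LinearRegression().fit(W[delta], Ys[delta]).predict(W)
Q0 = np.clip(Q0, 1e-3, 1 - 1e-3)

# Step 2: missingness model g(W) = P(delta = 1 | W), truncated away from zero
g = LogisticRegression().fit(W, delta).predict_proba(W)[:, 1]
g = np.clip(g, 0.025, 1.0)

# Step 3: fluctuate Q0 along the clever covariate H = 1/g using a logistic submodel
H = 1.0 / g
def neg_loglik(eps):
    p = np.clip(expit(logit(Q0[delta]) + eps * H[delta]), 1e-10, 1 - 1e-10)
    return -np.mean(Ys[delta] * np.log(p) + (1 - Ys[delta]) * np.log(1 - p))
eps = minimize_scalar(neg_loglik, bounds=(-1.0, 1.0), method="bounded").x

# Step 4: targeted update and substitution estimator of E[Y] on the original scale
Q1 = expit(logit(Q0) + eps * H)
psi = a + (b - a) * Q1.mean()
print("TMLE estimate of E[Y]:", round(psi, 3),
      "  complete-case mean:", round(Y[delta].mean(), 3))
```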

  13. Potential of preliminary test methods to predict biodegradation performance of petroleum hydrocarbons in soil.

    PubMed

    Aichberger, H; Hasinger, Marion; Braun, Rudolf; Loibner, Andreas P

    2005-03-01

    Preliminary tests at different scales, such as degradation experiments (laboratory) in shaking flasks, soil columns and lysimeters as well as in situ respiration tests (field), were performed with soil from two hydrocarbon-contaminated sites. The tests have been evaluated in terms of their potential to provide information on the feasibility, degradation rates and residual concentration of bioremediation in the vadose zone. Sample size, costs and duration increased with experimental scale in the order shaking flasks - soil columns - lysimeter - in situ respiration tests; only the time demand of the respiration tests was relatively low. First-order rate constants observed in the degradation experiments exhibited significant differences between both different experimental scales and different soils. Rates were in line with the type and history of contamination at the sites, but somewhat overestimated field rates, particularly in small-scale experiments. All laboratory experiments allowed an estimation of residual concentrations after remediation. In situ respiration tests were found to be an appropriate pre-testing and monitoring tool for bioventing, although residual concentrations cannot be predicted from in situ respiration tests. Moreover, this method does not account for potential limitations that might hamper biodegradation in the longer term but only reflects the actual degradation potential when the test is performed.
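
    The first-order framing implies that a residual-plus-exponential-decay model, C(t) = C_res + (C0 - C_res)·exp(-kt), can be fitted to a concentration time series to recover the rate constant and the expected residual concentration. A sketch with synthetic data and assumed variable names follows; it is not the study's fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, c_res, k):
    """First-order decay toward a residual concentration: C(t) = c_res + (c0 - c_res) e^{-kt}."""
    return c_res + (c0 - c_res) * np.exp(-k * t)

# Synthetic petroleum hydrocarbon time series (mg/kg) from a laboratory degradation test
t_days = np.array([0, 7, 14, 28, 56, 84, 120], dtype=float)
tph = np.array([5200, 3900, 3100, 2200, 1500, 1250, 1150], dtype=float)

popt, _ = curve_fit(first_order, t_days, tph, p0=(5000.0, 1000.0, 0.02))
c0_hat, c_res_hat, k_hat = popt
print(f"k = {k_hat:.3f} 1/day, predicted residual concentration = {c_res_hat:.0f} mg/kg")
```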

  14. Next Generation Munitions Handler: Human-Machine Interface and Preliminary Performance Evaluation

    SciTech Connect

    Draper, J.V.; Jansen, J.F.; Pin, F.G.; Rowe, J.C.

    1999-04-25

    The Next Generation Munitions Handler/Advanced Technology Demonstrator (NGMH/ATD) is a technology demonstrator for the application of an advanced robotic device for re-arming U.S. Air Force (USAF) and U.S. Navy (USN) tactical fighters. It comprises two key hardware components: a heavy-lift dexterous manipulator (HDM) and a nonholonomic mobility platform. The NGMH/ATD is capable of lifting weapons up to 2000 kg (4400 lb) and placing them on any weapons rack on existing fighters (including the F-22 Raptor). This report describes the NGMH mission with particular reference to human-machine interfaces. It also describes preliminary testing to garner feedback about the heavy-lift manipulator arm from experienced fighter load crewmen. The purpose of the testing was to provide preliminary information about control system parameters and to gather feedback from users about manipulator arm functionality. To that end, the Air Force load crewmen interacted with the NGMH/ATD in an informal testing session and provided feedback about the performance of the system. Certain control system parameters were changed during the course of the testing, and feedback from the participants was used to make a rough estimate of "good" initial operating parameters. Later, formal testing will concentrate within this range to identify optimal operating parameters. User reactions to the HDM were generally positive. All of the USAF personnel were favorably impressed with the capabilities of the system. Fine-tuning operating parameters created a system even more favorably regarded by the load crews. Further adjustment to control system parameters will result in a system that is operationally efficient, easy to use, and well accepted by users.

  15. Preliminary performance report of the RHUM-RUM OBS network

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Crawford, Wayne; Barruol, Guilhem; Sigloch, Karin; Mechita, Schmidt-Aursch

    2015-04-01

    RHUM-RUM is a German-French seismological experiment based on the seafloor surrounding the hotspot of La Réunion, western Indian Ocean. Its primary objective is to clarify the presence or absence of a mantle plume beneath the La Réunion hotspot. RHUM-RUM's central component is a one-year deployment (Oct 2012 - Nov 2013) of 57 broadband ocean-bottom seismometers (OBS) and hydrophones over an area of 2000 x 2000 km2 surrounding the hotspot. The OBS pool contained 48 instruments from the German DEPAS pool and 9 French stations from INSU. All OBS have been successfully recovered. Preliminary analysis of the seismometer recordings shows large differences in long-period (>10 s) noise levels between the German and the French OBS. These differences are strongest on the horizontal components and can probably be explained by dynamic tilt of the instrument itself. The noise level of the German instruments is >20 dB higher in this period range compared to the French ones. A reason could be that for the German OBS, the seismometer is integrated into the OBS frame and is therefore affected by its movement due to currents. The high noise level on the horizontal components will have to be considered in future experiment design when using this instrument type for three-component waveform tomography.

  16. AMT-200S Motor Glider Parameter and Performance Estimation

    NASA Technical Reports Server (NTRS)

    Taylor, Brian R.

    2011-01-01

    Parameter and performance estimation of an instrumented motor glider was conducted at the National Aeronautics and Space Administration Dryden Flight Research Center in order to provide the necessary information to create a simulation of the aircraft. An output-error technique was employed to generate estimates from doublet maneuvers, and performance estimates were compared with results from a well-known flight-test evaluation of the aircraft in order to provide a complete set of data. Aircraft specifications are given along with information concerning instrumentation, flight-test maneuvers flown, and the output-error technique. Discussion of Cramer-Rao bounds based on both white noise and colored noise assumptions is given. Results include aerodynamic parameter and performance estimates for a range of angles of attack.

  17. Estimation of desmosponge (Porifera, Demospongiae) larval settlement rates from short-term recruitment rates: Preliminary experiments

    NASA Astrophysics Data System (ADS)

    Zea, Sven

    1992-09-01

    During a study of the spatial and temporal patterns of desmosponge (Porifera, Demospongiae) recruitment on rocky and coral reef habitats of Santa Marta, Colombian Caribbean Sea, preliminary attempts were made to estimate actual settlement rates from short-term (1 to a few days) recruitment censuses. Short-term recruitment rates on black, acrylic plastic plates attached to open, non-cryptic substratum by anchor screws were low and variable (0-5 recruits/plate in 1-2 days, sets of n=5-10 plates), but reflected the depth and seasonal trends found using mid-term (1 to a few months) censusing intervals. Moreover, mortality of recruits during 1-2 day intervals was low (0-12%). Thus, short-term censusing intervals can be used to estimate actual settlement rates. To be able to make statistical comparisons, however, it is necessary to increase the number of recruits per census by pooling data of n plates per set, and to have more than one set per site or treatment.

  18. Preliminary experiments to estimate the PE.MA.M (PElagic MArine Mesocosm) offshore behaviour

    NASA Astrophysics Data System (ADS)

    Albani, Marta; Piermattei, Viviana; Stefanì, Chiara; Marcelli, Marco

    2016-04-01

    The phytoplankton community is controlled not only by local environmental conditions but also by physical processes occurring on different temporal and spatial scales. Local hydrodynamic conditions play an important role in marine ecosystems. Several studies have shown that hydrodynamic conditions can influence the phytoplankton settling velocity, the vertical and horizontal distribution, and the formation of cyanobacterial blooms. Mesocosms are useful structures to simulate the marine environment at mesoscale resolution, allowing biotic or abiotic parameters of interest to be closely approximated directly in nature. In this work an innovative structure named PE.MA.M (PElagic MArine Mesocosm) is presented and tested. Laboratory experiments have been conducted in order to observe seasonal variations of biomass behaviour in two different hydrodynamic conditions: outside as well as within the PE.MA.M. We have evaluated whether it is possible to isolate a natural system from external water mass hydrodynamic exchanges and to assume that the transit of phytoplankton cells is limited at the net-sea interface. Preliminary experiments tested the isolating capacity of the net, determined the currents' attenuation rate and estimated the possible PE.MA.M. offshore behaviour. In the first investigation, we monitored the diffusion of phytoplankton cells. The PE.MA.M. exterior and interior were simulated using a plexiglass tank divided into two half-tanks (Aout-Bin) by a septum consisting of a net like that of the PE.MA.M. The tank was filled with 10 L of water and only the half-tank Aout was filled with 10 ml of phytoplankton culture (Chlorella sp.). We monitored the chlorophyll concentrations for 24 hours. The two half-tanks had similar concentrations after 4 hours (2.70322 mg/m³ in Aout and 2.37245 mg/m³ in Bin) and this constant relationship was maintained until the end of the test. In the second investigation we used clod cards to measure water motions. We conducted two experiments within the tank, the first

  19. Analytical Approach for Estimating Preliminary Mass of ARES I Crew Launch Vehicle Upper Stage Structural Components

    NASA Technical Reports Server (NTRS)

    Aggarwal, Pravin

    2007-01-01

    electrical power functions to other Elements of the CLV, is included as secondary structure. The MSFC has overall responsibility for the integrated US element as well as the structural design and thermal control of the fuel tanks, intertank, interstage, avionics, main propulsion system, and Reaction Control System (RCS) for both the Upper Stage and the First Stage. MSFC's Spacecraft and Vehicle Department, Structural and Analysis Design Division is developing a set of predicted masses for these elements. This paper details the methodology, criteria and tools used for the preliminary mass predictions of the upper stage structural assembly components. In general, the weight of the cylindrical barrel sections is estimated using the commercial code HyperSizer, whereas the weight of the domes is developed using classical solutions. HyperSizer is software that performs automated structural analysis and sizing optimization based on aerospace methods for strength, stability, and stiffness. Analysis methods range from closed-form, traditional hand calculations repeated every day in industry to more advanced panel buckling algorithms. Margin-of-safety reporting for every potential failure provides the engineer with powerful insight into the structural problem. Optimization capabilities include finding minimum-weight panel or beam concepts, material selections, cross-sectional dimensions, thicknesses, and lay-ups from a library of 40 different stiffened and sandwich designs and a database of composite, metallic, honeycomb, and foam materials. Multiple different concepts (orthogrid, isogrid, and skin stiffener) were run for multiple loading combinations of ascent design load with and without tank pressure, as well as the proof pressure condition. Subsequently, the selected optimized concept obtained from the HyperSizer runs was translated into a computer-aided design (CAD) model to account for the wall thickness tolerance, weld lands, etc., in developing the most probable weight of the components. The flow diagram

  20. A preliminary structural analysis of space-base living quarters modules to verify a weight-estimating technique

    NASA Technical Reports Server (NTRS)

    Grissom, D. S.; Schneider, W. C.

    1971-01-01

    The determination of a base line (minimum weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.

  1. Estimation of Ultrafilter Performance Based on Characterization Data

    SciTech Connect

    Peterson, Reid A.; Geeting, John GH; Daniel, Richard C.

    2007-08-02

    Due to limited availability of test data with actual waste samples, a method was developed to estimate expected filtration performance based on physical characterization data for the Hanford Waste Treatment and Immobilization Plant. A test with simulated waste was analyzed to demonstrate that filtration of this class of waste is consistent with a concentration polarization model. Subsequently, filtration data from actual waste samples were analyzed to demonstrate that centrifuged solids concentrations provide a reasonable estimate of the limiting concentration for filtration.
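
    In a concentration polarization (gel polarization) description, the limiting permeate flux is often written as J = k·ln(C_limit/C_bulk), with the flux extrapolating to zero as the slurry approaches the limiting solids concentration; the abstract's point is that the centrifuged-solids concentration approximates that limit. The sketch below evaluates this relation with purely illustrative numbers.

```python
import numpy as np

def permeate_flux(c_bulk_wt_pct, c_limit_wt_pct, k_m_per_s):
    """Gel/concentration-polarization model: J = k * ln(C_limit / C_bulk).
    The flux goes to zero as the bulk slurry approaches the limiting concentration."""
    return k_m_per_s * np.log(c_limit_wt_pct / np.asarray(c_bulk_wt_pct, dtype=float))

# Illustrative values: limiting concentration taken from centrifuged-solids data,
# mass-transfer coefficient k from a simulant test (both assumed here)
c_limit = 20.0        # wt% undissolved solids at which flux extrapolates to zero
k = 1.5e-5            # m/s, effective mass-transfer coefficient
bulk = np.array([2.0, 5.0, 10.0, 15.0, 19.0])
flux_lmh = permeate_flux(bulk, c_limit, k) * 3.6e6   # convert m/s to L m^-2 h^-1
print(np.round(flux_lmh, 1))
```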

  2. Preliminary flight evaluation of an engine performance optimization algorithm

    NASA Technical Reports Server (NTRS)

    Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.

    1991-01-01

    A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW1128-engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 percent, decreases of up to 100 °R in FTIT, and increases of as much as 12 percent in maximum thrust. PSC technology promises to be of value in next generation tactical and transport aircraft.

  3. Demographic Estimation from Face Images: Human vs. Machine Performance.

    PubMed

    Han, Hu; Otto, Charles; Liu, Xiaoming; Jain, Anil K

    2015-06-01

    Demographic estimation entails automatic estimation of age, gender and race of a person from his face image, which has many potential applications ranging from forensics to social media. Automatic demographic estimation, particularly age estimation, remains a challenging problem because persons belonging to the same demographic group can be vastly different in their facial appearances due to intrinsic and extrinsic factors. In this paper, we present a generic framework for automatic demographic (age, gender and race) estimation. Given a face image, we first extract demographic informative features via a boosting algorithm, and then employ a hierarchical approach consisting of between-group classification, and within-group regression. Quality assessment is also developed to identify low-quality face images that are difficult to obtain reliable demographic estimates. Experimental results on a diverse set of face image databases, FG-NET (1K images), FERET (3K images), MORPH II (75K images), PCSO (100K images), and a subset of LFW (4K images), show that the proposed approach has superior performance compared to the state of the art. Finally, we use crowdsourcing to study the human perception ability of estimating demographics from face images. A side-by-side comparison of the demographic estimates from crowdsourced data and the proposed algorithm provides a number of insights into this challenging problem. PMID:26357339
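
    The between-group classification followed by within-group regression can be illustrated with generic learners, as in the hedged sketch below; the feature extraction, learner choices, age-group bins and synthetic data are assumptions, not the paper's boosting-based pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class HierarchicalAgeEstimator:
    """Classify a face feature vector into an age group, then regress the exact age
    with a group-specific regressor (the between-group / within-group idea)."""

    def __init__(self, bins=(0, 20, 40, 60, 120)):
        self.bins = np.asarray(bins)
        self.classifier = RandomForestClassifier(n_estimators=100, random_state=0)
        self.regressors = {}

    def fit(self, X, age):
        groups = np.digitize(age, self.bins[1:-1])
        self.classifier.fit(X, groups)
        for g in np.unique(groups):
            reg = RandomForestRegressor(n_estimators=100, random_state=0)
            reg.fit(X[groups == g], age[groups == g])
            self.regressors[g] = reg
        return self

    def predict(self, X):
        groups = self.classifier.predict(X)
        out = np.empty(len(X))
        for g in np.unique(groups):
            out[groups == g] = self.regressors[g].predict(X[groups == g])
        return out

# Synthetic stand-in for extracted face features and ages
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
age = np.clip(35 + 12 * X[:, 0] + 5 * rng.normal(size=1000), 1, 90)
model = HierarchicalAgeEstimator().fit(X[:800], age[:800])
print(np.abs(model.predict(X[800:]) - age[800:]).mean())   # mean absolute error in years
```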

  5. Questionnaire assessment of estimated radiation effects upon military task performance

    SciTech Connect

    Glickman, A.S.; Winne, P.S.; Morgan, B.B. Jr.; Moe, R.B.

    1984-04-01

    One hundred twenty-five supervisors in four types of U.S. Army combat systems estimated the degree of degradation of military tasks for 30 descriptive symptom complexes associated with various radiation exposures. Results indicated that (a) the relative order of symptom effects was highly consistent across positions and types of systems, (b) performance was expected to be deleteriously affected under most illness conditions, even mild ones, but incapacitation was not anticipated until illness conditions became quite severe, and (c) the most important factors in estimating performance were fluid loss and fatigability/weakness.

  6. Preliminary predictions of athletic performance among collegiate baseball players with a biopsychosocial model.

    PubMed

    Plante, T G; Booth, J

    1995-06-01

    This study investigated the association of nine biopsychosocial variables and athletic performance among 40 elite collegiate baseball players. High scores on confidence and perceived fitness and low scores on repressive denial, strength of religious faith, and sensitivity to glare were reliably associated with ratings of superior athletic performance by four coaches. Preliminary results suggest that the biopsychosocial model may prove useful in predicting athletic performance.

  7. Astrometric telescope facility. Preliminary systems definition study. Volume 3: Cost estimate

    NASA Technical Reports Server (NTRS)

    Sobeck, Charlie (Editor)

    1987-01-01

    The results of the Astrometric Telescope Facility (ATF) Preliminary System Definition Study conducted in the period between March and September 1986 are described. The main body of the report consists primarily of the charts presented at the study final review which was held at NASA Ames Research Center on July 30 and 31, 1986. The charts have been revised to reflect the results of that review. Explanations for the charts are provided on the adjoining pages where required. Note that charts which have been changed or added since the review are dated 10/1/86; unchanged charts carry the review date 7/30/86. In addition, a narrative summary is presented of the study results and two appendices. The first appendix is a copy of the ATF Characteristics and Requirements Document generated as part of the study. The second appendix shows the inputs to the Space Station Mission Requirements Data Base submitted in May 1986. The report is issued in three volumes. Volume 1 contains an executive summary of the ATF mission, strawman design, and study results. Volume 2 contains the detailed study information. Volume 3 has the ATF cost estimate, and will have limited distribution.

  8. Resource estimation in high performance medical image computing.

    PubMed

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
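
    As a generic illustration of the idea (the paper's own estimation system is not reproduced here), the sketch below learns CPU runtime and peak memory for a pipeline step from properties of its inputs across historical runs, then pads the predictions before requesting resources from a scheduler. All feature names and numbers are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Historical executions of one pipeline step: input voxel count, number of volumes,
# and a machine speed factor, with the observed runtime (s) and peak memory (MB)
rng = np.random.default_rng(0)
n = 500
voxels = rng.integers(1_000_000, 50_000_000, n).astype(float)
volumes = rng.integers(1, 40, n).astype(float)
cpu_factor = rng.uniform(0.5, 2.0, n)
runtime = 30 + 4e-6 * voxels * volumes / cpu_factor + rng.normal(0, 60, n)
memory = 500 + 2.5e-5 * voxels + 30 * volumes + rng.normal(0, 100, n)

X = np.column_stack([voxels, volumes, cpu_factor])
runtime_model = GradientBoostingRegressor().fit(X, runtime)
memory_model = GradientBoostingRegressor().fit(X, memory)

# Request resources for a new job with a safety margin so the scheduler does not kill it
job = np.array([[2.0e7, 10, 1.0]])
print("request walltime ~", round(1.5 * runtime_model.predict(job)[0]), "s")
print("request memory   ~", round(1.2 * memory_model.predict(job)[0]), "MB")
```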

  10. A Preliminary Analysis of LANDSAT-4 Thematic Mapper Radiometric Performance

    NASA Technical Reports Server (NTRS)

    Justice, C.; Fusco, L.; Mehl, W.

    1985-01-01

    The NASA raw (BT) product, the radiometrically corrected (AT) product, and the radiometrically and geometrically corrected (PT) product of a TM scene were analyzed to examine the frequency distribution of the digital data, the statistical correlation between the bands, and the variability between the detectors within a band. The analyses were performed on a series of image subsets from the full scene. Results are presented from one 1024 x 1024 pixel subset of Reelfoot Lake, Tennessee, which displayed a representative range of ground conditions and cover types occurring within the full frame image. From this cursory examination of one of the first seven-channel TM data sets, it would appear that the radiometric performance of the system is most satisfactory and largely meets pre-launch specifications. Problems were noted with Band 5 Detector 3 and Band 2 Detector 4. Differences were observed between forward and reverse scan detector responses both for the BT and AT products. No systematic variations were observed between odd and even detectors.

  11. A Preliminary Analysis of LANDSAT-4 Thematic Mapper Radiometric Performance

    NASA Technical Reports Server (NTRS)

    Justice, C.; Fusco, L.; Mehl, W.

    1984-01-01

    Analysis was performed to characterize the radiometry of three Thematic Mapper (TM) digital products of a scene of Arkansas. The three digital products examined were the NASA raw (BT) product, the radiometrically corrected (AT) product and the radiometrically and geometrically corrected (PT) product. The frequency distribution of the digital data, the statistical correlation between the bands, and the variability between the detectors within a band were examined on a series of image subsets from the full scene. The results are presented from one 1024 x 1024 pixel subset of Reelfoot Lake, Tennessee, which displayed a representative range of ground conditions and cover types occurring within the full frame image. Bands 1, 2 and 5 of the sample area are presented. The subsets were extracted from the three digital data products to cover the same geographic area. This analysis provides the first step towards a full appraisal of the TM radiometry being performed as part of the ESA/CEC contribution to the NASA/LIDQA program.

  12. Preliminary Transportation, Aging and Disposal Canister System Performance Specification

    SciTech Connect

    C.A Kouts

    2006-11-22

    This document provides specifications for selected system components of the Transportation, Aging and Disposal (TAD) canister-based system. A list of system specified components and ancillary components are included in Section 1.2. The TAD canister, in conjunction with specialized overpacks will accomplish a number of functions in the management and disposal of spent nuclear fuel. Some of these functions will be accomplished at purchaser sites where commercial spent nuclear fuel (CSNF) is stored, and some will be performed within the Office of Civilian Radioactive Waste Management (OCRWM) transportation and disposal system. This document contains only those requirements unique to applications within Department of Energy's (DOE's) system. DOE recognizes that TAD canisters may have to perform similar functions at purchaser sites. Requirements to meet reactor functions, such as on-site dry storage, handling, and loading for transportation, are expected to be similar to commercially available canister-based systems. This document is intended to be referenced in the license application for the Monitored Geologic Repository (MGR). As such, the requirements cited herein are needed for TAD system use in OCRWM's disposal system. This document contains specifications for the TAD canister, transportation overpack and aging overpack. The remaining components and equipment that are unique to the OCRWM system or for similar purchaser applications will be supplied by others.

  13. Advanced Analysis of Finger-Tapping Performance: A Preliminary Study

    PubMed Central

    Barut, Çağatay; Kızıltan, Erhan; Gelir, Ethem; Köktürk, Fürüzan

    2013-01-01

    Background: The finger-tapping test is a commonly employed quantitative assessment tool used to measure motor performance in the upper extremities. This task is a complex motion that is affected by external stimuli, mood and health status. The complexity of this task is difficult to capture with a single average intertap-interval value (the time difference between successive taps), which provides only general information and neglects the temporal effects of the aforementioned factors. Aims: This study evaluated the time course of average intertap-interval values and the patterns of variation in both the right and left hands of right-handed subjects using a computer-based finger-tapping system. Study Design: Cross-sectional study. Methods: Thirty-eight male individuals aged between 20 and 28 years (Mean±SD = 22.24±1.65) participated in the study. Participants were asked to perform a single-finger tapping test over a 10-second test period. Only the results of the 35 right-handed (RH) participants were considered in this study. The test records the time of each tap and saves the data as the time difference between successive taps for further analysis. The average number of taps and the temporal fluctuation patterns of the intertap-intervals were calculated and compared. The variations in the intertap-interval were evaluated with the best curve fit method. Results: An average tapping speed or tapping rate can reliably be defined for a single-finger tapping test by analysing the graphically presented data of the number of taps within the test period. However, a different presentation of the same data, namely the intertap-interval values, shows temporal variation as the number of taps increases. Curve fitting applications indicate that the variation has a biphasic nature. Conclusion: The measures obtained in this study reflect the complex nature of the finger-tapping task and are suggested to provide reliable information regarding hand performance. Moreover, the

  14. The lunar gravity mission MAGIA: preliminary design and performances

    NASA Astrophysics Data System (ADS)

    Fermi, Marco; Gregnanin, Marco; Mazzolena, Marco; Chersich, Massimiliano; Reguzzoni, Mirko; Sansò, Fernando

    2011-10-01

    The importance of an accurate model of the Moon's gravity field has been assessed for future navigation missions orbiting and/or landing on the Moon, in order to use our natural satellite as an intermediate base for future solar system observation and exploration as well as for lunar resource mapping and exploitation. One of the main scientific goals of the MAGIA mission, whose Phase A study has recently been funded by the Italian Space Agency (ASI), is the mapping of lunar gravitational anomalies, and in particular those on the hidden side of the Moon, with an accuracy of 1 mGal RMS at the lunar surface in the global solution of the gravitational field up to degree and order 80. The MAGIA gravimetric experiment is performed in two phases: the first one, during which the main satellite shall perform remote sensing of the Moon's surface, foresees the use of Precise Orbit Determination (POD) data available from ground tracking of the main satellite for the determination of the long-wavelength components of the gravitational field. Improvements in the accuracy of the POD results are expected from the use of ISA, the Italian accelerometer on board the main satellite. Additional gravitational data from recent missions, like Kaguya/Selene, could be used in order to enhance the accuracy of such results. In the second phase the medium/short-wavelength components of the gravitational field shall be obtained through a low-to-low (GRACE-like) Satellite-to-Satellite Tracking (SST) experiment. POD data shall be acquired during the whole mission duration, while the SST data shall be available after the remote sensing phase, when the sub-satellite shall be released from the main one and both satellites shall be left in free-fall dynamics in the gravity field of the Moon. SST range-rate data between the two satellites shall be measured through an inter-satellite link with an accuracy compliant with the current state of the art in space-qualified technology. SST processing and gravitational anomalies retrieval shall

  15. Nonparametric estimation receiver operating characteristic analysis for performance evaluation on combined detection and estimation tasks.

    PubMed

    Wunderlich, Adam; Goossens, Bart

    2014-10-01

    In an effort to generalize task-based assessment beyond traditional signal detection, there is a growing interest in performance evaluation for combined detection and estimation tasks, in which signal parameters, such as size, orientation, and contrast are unknown and must be estimated. One motivation for studying such tasks is their rich complexity, which offers potential advantages for imaging system optimization. To evaluate observer performance on combined detection and estimation tasks, Clarkson introduced the estimation receiver operating characteristic (EROC) curve and the area under the EROC curve as a summary figure of merit. This work provides practical tools for EROC analysis of experimental data. In particular, we propose nonparametric estimators for the EROC curve, the area under the EROC curve, and for the variance/covariance matrix of a vector of correlated EROC area estimates. In addition, we show that reliable confidence intervals can be obtained for EROC area, and we validate these intervals with Monte Carlo simulation. Application of our methodology is illustrated with an example comparing magnetic resonance imaging k-space sampling trajectories. MATLAB® software implementing the EROC analysis estimators described in this work is publicly available at http://code.google.com/p/iqmodelo/. PMID:26158044

  16. Estimating endogenous changes in task performance from EEG

    PubMed Central

    Touryan, Jon; Apker, Gregory; Lance, Brent J.; Kerick, Scott E.; Ries, Anthony J.; McDowell, Kaleb

    2014-01-01

    Brain wave activity is known to correlate with decrements in behavioral performance as individuals enter states of fatigue, boredom, or low alertness. Many BCI technologies are adversely affected by these changes in user state, limiting their application and constraining their use to relatively short temporal epochs where behavioral performance is likely to be stable. Incorporating a passive BCI that detects when the user is performing poorly at a primary task and adapts accordingly may increase overall user performance. Here, we explore the potential for extending an established method to generate continuous estimates of behavioral performance from ongoing neural activity; evaluating the extended method by applying it to the original task domain, simulated driving; and generalizing the method by applying it to a BCI-relevant perceptual discrimination task. Specifically, we used EEG log power spectra and sequential forward floating selection (SFFS) to estimate endogenous changes in behavior in both a simulated driving task and a perceptual discrimination task. For the driving task the average correlation coefficient between the actual and estimated lane deviation was 0.37 ± 0.22 (μ ± σ). For the perceptual discrimination task we generated estimates of accuracy, reaction time, and button press duration for each participant. The correlation coefficients between the actual and estimated behavior were similar for these three metrics (accuracy = 0.25 ± 0.37, reaction time = 0.33 ± 0.23, button press duration = 0.36 ± 0.30). These findings illustrate the potential for modeling time-on-task decrements in performance from concurrent measures of neural activity. PMID:24994968
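
    As a rough sketch of the estimation idea described above (spectral features selected by forward selection and regressed onto a continuous behavioral measure), the fragment below uses scikit-learn's plain sequential forward selection rather than the floating (SFFS) variant used in the study, and synthetic features in place of EEG log power spectra; the array sizes and the Ridge regressor are illustrative assumptions.

        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n_epochs, n_features = 300, 40      # hypothetical: epochs x (channel, band) log-power features
        X = rng.normal(size=(n_epochs, n_features))
        behavior = X[:, :5] @ rng.normal(size=5) + 0.5 * rng.normal(size=n_epochs)  # synthetic target

        model = Ridge(alpha=1.0)
        sfs = SequentialFeatureSelector(model, n_features_to_select=8, direction="forward", cv=5)
        sfs.fit(X, behavior)
        X_sel = sfs.transform(X)

        # Cross-validated estimate of the behavioral measure from the selected features.
        pred = cross_val_predict(model, X_sel, behavior, cv=5)
        r = np.corrcoef(behavior, pred)[0, 1]
        print(f"correlation between actual and estimated behavior: {r:.2f}")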

  17. Hydronic radiant cooling: Overview and preliminary performance assessment

    SciTech Connect

    Feustel, H.E.

    1993-05-01

    A significant amount of electrical energy used to cool non-residential buildings is drawn by the fans used to transport the cool air through the thermal distribution system. Hydronic systems reduce the amount of air transported through the building by separating ventilation and thermal conditioning. Due to the physical properties of water, hydronic distribution systems can transport a given amount of thermal energy using less than 5% of the otherwise necessary fan energy. This savings alone significantly reduces the energy consumption and especially the peak power requirement. This survey clearly shows advantages for radiant cooling in combination with hydronic thermal distribution systems in comparison with the All-Air Systems commonly used in California. The report describes a literature survey on the system's development, thermal comfort issues, and cooling performance. The cooling power potential and the cooling power requirement are investigated for several California climates. Peak-power requirement is compared for hydronic radiant cooling and conventional All-Air Systems.

  18. Model-based approach for elevator performance estimation

    NASA Astrophysics Data System (ADS)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach for monitoring employs the Kalman filter as an observer. A Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated based on the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.
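
    A minimal sketch of the observer idea, under simplifying assumptions: a discrete Kalman filter with a constant-acceleration kinematic state [position, velocity, acceleration] that estimates car acceleration from a noisy position signal alone. The paper's full electromechanical state-space model (machine dynamics and field-oriented control) is not represented, and all noise covariances below are invented.

        import numpy as np

        dt = 0.01                                   # sample time (s), assumed
        F = np.array([[1, dt, 0.5 * dt**2],
                      [0, 1, dt],
                      [0, 0, 1]])                   # constant-acceleration model
        H = np.array([[1.0, 0.0, 0.0]])             # only position (encoder) is measured
        Q = np.diag([1e-6, 1e-4, 1e-2])             # process noise covariance (assumed)
        R = np.array([[1e-4]])                      # encoder noise variance (assumed)

        x = np.zeros(3)                             # state estimate [pos, vel, acc]
        P = np.eye(3)

        def kalman_step(z, x, P):
            # Predict.
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the encoder measurement z.
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(3) - K @ H) @ P
            return x, P

        # Simulated ride: constant 1 m/s^2 acceleration for 2 s, position measured with noise.
        rng = np.random.default_rng(1)
        t = np.arange(0, 2, dt)
        true_pos = 0.5 * 1.0 * t**2
        acc_est = []
        for z in true_pos + rng.normal(0, 0.01, t.size):
            x, P = kalman_step(np.array([z]), x, P)
            acc_est.append(x[2])
        print(f"estimated acceleration at end of run: {acc_est[-1]:.2f} m/s^2 (true value 1.00)")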

  19. Methodology for the Preliminary Design of High Performance Schools in Hot and Humid Climates

    ERIC Educational Resources Information Center

    Im, Piljae

    2009-01-01

    A methodology to develop an easy-to-use toolkit for the preliminary design of high performance schools in hot and humid climates was presented. The toolkit proposed in this research will allow decision makers without simulation knowledge easily to evaluate accurately energy efficient measures for K-5 schools, which would contribute to the…

  20. Preliminary estimation of isotopic inventories of 2000 MWt ABR (revision 1).

    SciTech Connect

    Kim, T. K.; Yang, W. S.; Nuclear Engineering Division

    2008-06-16

    The isotopic inventories of a 2000 MWt Advanced Burner Reactor (ABR) core have been estimated to support the ABR accident analysis to be reported in Appendix D of the Programmatic Environmental Impact Statement (PEIS). Based on the Super-PRISM design, a preliminary core design of the 2000 MWt ABR was developed to achieve a one-year cycle length with a 3-batch fuel management scheme. For a bounding estimation of the transuranics (TRU) inventory, a low TRU conversion ratio ({approx}0.3) was targeted to increase the TRU enrichment. By changing the fuel compositions, isotopic inventories of mass and radioactivity were evaluated for four different core configurations: recycled metal fuel core, recycled oxide fuel core, startup metal fuel core, and startup oxide fuel core. For recycled cores, the TRU recovered from ABR spent fuel was used as the primary TRU feed, and the TRU recovered from 10-year cooled light water reactor spent fuel was used as the makeup TRU feed. For startup cores, weapons-grade plutonium was used as TRU feed without recycling ABR spent fuel. It was also assumed that a whole batch of discharged fuel assemblies is stored in the in-vessel storage for an entire irradiation cycle. For both metal and oxide fuel cores, the estimated TRU mass at beginning of equilibrium cycle (BOEC), including spent fuel TRU stored in the in-vessel storage, was about 8.5-8.7 MT for the recycled cores and 5.2 MT for the startup cores. Since a similar power was generated, the fission product masses are comparable for all four cores: 1.4 MT at BOEC and about 2.0 MT at end of equilibrium cycle (EOEC). Total radioactivity at BOEC is about 8.2 x 10{sup 8} curies in recycled cores and about 6.9 x 10{sup 8} curies in startup cores, and increases to about 1.1 x 10{sup 10} curies at EOEC for all four cases. Fission products are the dominant contributor (more than 80%) to the total radioactivity at EOEC for all four cases, but the fission product radioactivity decreases by 79% after one

  1. Hydronic radiant cooling: Overview and preliminary performance assessment

    SciTech Connect

    Feustel, H.E.

    1993-05-01

    A significant amount of electrical energy used to cool non-residential buildings is drawn by the fans used to transport the cool air through the thermal distribution system. Hydronic systems reduce the amount of air transported through the building by separating ventilation and thermal conditioning. Due to the physical properties of water, hydronic distribution systems can transport a given amount of thermal energy using less than 5% of the otherwise necessary fan energy. This savings alone significantly reduces the energy consumption and especially the peak power requirement. This survey clearly shows advantages for radiant cooling in combination with hydronic thermal distribution systems in comparison with the All-Air Systems commonly used in California. The report describes a literature survey on the system's development, thermal comfort issues, and cooling performance. The cooling power potential and the cooling power requirement are investigated for several California climates. Peak-power requirement is compared for hydronic radiant cooling and conventional All-Air Systems.

  2. Preliminary Investigations of HE Performance Characterization Using SWIFT

    NASA Astrophysics Data System (ADS)

    Murphy, Michael; Johnson, Carl

    2013-06-01

    Initial pseudo-aquarium experimentation is underway to assess the utility of using the shock wave image framing technique (SWIFT) to characterize HE performance on detonator length and time scales. SWIFT is employed to directly visualize shock waves driven into polymethylmethacrylate (PMMA) samples through detonation interaction in pseudo-aquarium test geometries. Columns of XTX 8004, an extrudable RDX-based high explosive, are either cured directly within PMMA dynamic witness plates or within confinement tubes of different materials with varying shock impedances that are then embedded within PMMA. For current experiments, the SWIFT system records 16-frame image sequences using 175 ns inter-frame delays to directly visualize the evolution of lead shock-front geometries as they are driven radially into PMMA by the detonating XTX column. Standard aquarium-test analysis is employed to calculate shock pressure evolution within PMMA, and detonation wave velocities are accurately calculated from the time-resolved images as well. The SWIFT system and numerous pseudo-aquarium experimental results will be presented and discussed.

  3. Preliminary estimates of annual agricultural pesticide use for counties of the conterminous United States, 2010-11

    USGS Publications Warehouse

    Baker, Nancy T.; Stone, Wesley W.

    2013-01-01

    This report provides preliminary estimates of annual agricultural use of 374 pesticide compounds in counties of the conterminous United States in 2010 and 2011, compiled by means of methods described in Thelin and Stone (2013). U.S. Department of Agriculture (USDA) county-level data for harvested-crop acreage were used in conjunction with proprietary Crop Reporting District (CRD)-level pesticide-use data to estimate county-level pesticide use. Estimated pesticide use (EPest) values were calculated with both the EPest-high and EPest-low methods. The distinction between the EPest-high method and the EPest-low method is that there are more counties with estimated pesticide use for EPest-high compared to EPest-low, owing to differing assumptions about missing survey data (Thelin and Stone, 2013). Preliminary estimates in this report will be revised upon availability of updated crop acreages in the 2012 Agricultural Census, to be published by the USDA in 2014. In addition, estimates for 2008 and 2009 previously published by Stone (2013) will be updated subsequent to the 2012 Agricultural Census release. Estimates of annual agricultural pesticide use are provided as downloadable, tab-delimited files, which are organized by compound, year, state Federal Information Processing Standard (FIPS) code, county FIPS code, and kg (amount in kilograms).

  4. Performance Analysis of an Improved MUSIC DoA Estimator

    NASA Astrophysics Data System (ADS)

    Vallet, Pascal; Mestre, Xavier; Loubaton, Philippe

    2015-12-01

    This paper addresses the statistical performance of subspace DoA estimation using a sensor array, in the asymptotic regime where the number of samples and sensors both converge to infinity at the same rate. Improved subspace DoA estimators were derived (termed G-MUSIC) in previous works, and were shown to be consistent and asymptotically Gaussian distributed in the case where the number of sources and their DoA remain fixed. In this case, which models widely spaced DoA scenarios, it is proved in the present paper that the traditional MUSIC method also provides consistent DoA estimates having the same asymptotic variances as the G-MUSIC estimates. The case of DoA that are spaced on the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that G-MUSIC estimates are still able to consistently separate the sources, while this is no longer the case for the MUSIC ones. The asymptotic variances of G-MUSIC estimates are also evaluated.
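
    To make the setting concrete, here is a small, self-contained sketch of the standard MUSIC pseudo-spectrum for a uniform linear array with half-wavelength spacing; the array size, snapshot count, source angles and noise level are arbitrary illustrative choices, and no G-MUSIC correction is applied.

        import numpy as np

        rng = np.random.default_rng(0)
        M, N, K = 10, 200, 2                      # sensors, snapshots, sources (assumed)
        doas = np.deg2rad([-10.0, 15.0])          # true DoAs (assumed)

        def steer(theta):
            # Steering vector of a half-wavelength-spaced uniform linear array.
            return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

        A = np.column_stack([steer(t) for t in doas])                  # M x K
        S = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))     # source signals
        X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

        R = X @ X.conj().T / N                    # sample covariance matrix
        eigval, eigvec = np.linalg.eigh(R)        # eigenvalues in ascending order
        En = eigvec[:, : M - K]                   # noise subspace

        grid = np.deg2rad(np.linspace(-90, 90, 1801))
        pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(t)) ** 2 for t in grid])

        # Simple peak picking: local maxima of the pseudo-spectrum, keep the K largest.
        is_peak = (pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:])
        peak_idx = np.where(is_peak)[0] + 1
        top = peak_idx[np.argsort(pseudo[peak_idx])[-K:]]
        print("estimated DoAs (deg):", np.round(np.sort(np.rad2deg(grid[top])), 1))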

  5. Data used in preliminary performance assessment of the Waste Isolation Pilot Plant (1990)

    SciTech Connect

    Rechard, R.P.; Luzzolino, H.; Sandha, J.S.

    1990-12-01

    This report documents the data available as of August 1990 and used by the Performance Assessment Division of Sandia National Laboratories in its December 1990 preliminary performance assessment of the Waste Isolation Pilot Plant (WIPP). Parameter values are presented in table form for the geologic subsystem, engineered barriers, borehole flow properties, climate variability, and intrusion characteristics. Sources for the data and a brief discussion of each parameter are provided. 101 refs., 72 figs., 21 tabs.

  6. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    PubMed

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information to what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of E. coli O157:H7 in leafy greens lag time models, and validation of the importance of cross-contamination during the washing process.
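
    The overall shape of such a Monte Carlo calculation can be sketched briefly. The fragment below uses the field level (-1 log CFU/g) and prevalence (0.1%) quoted above, but the serving size, the storage-time distribution and the cap on temperature-abuse growth are illustrative assumptions, not the published @RISK model.

        import numpy as np

        rng = np.random.default_rng(42)
        n_servings = 100_000
        serving_g = 85.0                         # assumed serving size (g)
        prevalence = 0.001                       # 0.1% of servings contaminated
        c0_log = -1.0                            # field contamination, log CFU/g

        contaminated = rng.random(n_servings) < prevalence
        storage_days = rng.uniform(0, 3, n_servings)          # assumed temperature-abuse time
        growth_log = np.minimum(1.0 * storage_days, 3.0)      # <= 1 log/day, capped (assumed)

        conc_log = np.where(contaminated, c0_log + growth_log, -np.inf)
        cells_per_serving = (10.0 ** conc_log) * serving_g    # CFU per serving (0 if clean)

        pos = cells_per_serving[contaminated]
        print(f"contaminated servings simulated: {pos.size}")
        print(f"median / 95th percentile dose: {np.median(pos):.1f} / {np.percentile(pos, 95):.0f} CFU")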

  7. Preliminary estimation of deoxynivalenol excretion through a 24 h pilot study.

    PubMed

    Rodríguez-Carrasco, Yelko; Mañes, Jordi; Berrada, Houda; Font, Guillermina

    2015-02-25

    A duplicate diet study was designed to explore the occurrence of 15 Fusarium mycotoxins in the 24 h-diet consumed by one volunteer as well as the levels of mycotoxins in his 24 h-collected urine. The employed methodology involved solvent extraction at high ionic strength followed by dispersive solid phase extraction and gas chromatography determination coupled to mass spectrometry in tandem. Satisfactory results in method performance were achieved. The method's accuracy was in a range of 68%-108%, with intra-day relative standard deviation and inter-day relative standard deviation lower than 12% and 15%, respectively. The limits of quantitation ranged from 0.1 to 8 µg/Kg. The matrix effect was evaluated and matrix-matched calibrations were used for quantitation. Only deoxynivalenol (DON) was quantified in both food and urine samples. A total DON daily intake amounted to 49.2 ± 5.6 µg whereas DON daily excretion of 35.2 ± 4.3 µg was determined. DON daily intake represented 68.3% of the established DON provisional maximum tolerable daily intake (PMTDI). Valuable preliminary information was obtained as regards DON excretion and needs to be confirmed in large-scale monitoring studies.
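
    The dose figures above can be cross-checked with a few lines of arithmetic. In the snippet, the PMTDI for DON is taken as 1 µg per kg of body weight per day (the JECFA value), and the body weight of about 72 kg is an assumption back-calculated from the 68.3% figure rather than a value reported in the abstract.

        intake_ug = 49.2            # total DON intake over 24 h (µg)
        excretion_ug = 35.2         # urinary DON over 24 h (µg)
        pmtdi_ug_per_kg = 1.0       # provisional maximum tolerable daily intake (µg/kg bw/day)
        body_weight_kg = 72.0       # assumed body weight, consistent with the 68.3% figure

        excreted_fraction = excretion_ug / intake_ug
        pmtdi_fraction = intake_ug / (pmtdi_ug_per_kg * body_weight_kg)
        print(f"urinary excretion: {excreted_fraction:.0%} of intake")   # roughly 72%
        print(f"intake: {pmtdi_fraction:.1%} of the PMTDI")              # roughly 68.3%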

  8. Preliminary Estimation of Deoxynivalenol Excretion through a 24 h Pilot Study

    PubMed Central

    Rodríguez-Carrasco, Yelko; Mañes, Jordi; Berrada, Houda; Font, Guillermina

    2015-01-01

    A duplicate diet study was designed to explore the occurrence of 15 Fusarium mycotoxins in the 24 h-diet consumed by one volunteer as well as the levels of mycotoxins in his 24 h-collected urine. The employed methodology involved solvent extraction at high ionic strength followed by dispersive solid phase extraction and gas chromatography determination coupled to mass spectrometry in tandem. Satisfactory results in method performance were achieved. The method’s accuracy was in a range of 68%–108%, with intra-day relative standard deviation and inter-day relative standard deviation lower than 12% and 15%, respectively. The limits of quantitation ranged from 0.1 to 8 µg/Kg. The matrix effect was evaluated and matrix-matched calibrations were used for quantitation. Only deoxynivalenol (DON) was quantified in both food and urine samples. A total DON daily intake amounted to 49.2 ± 5.6 µg whereas DON daily excretion of 35.2 ± 4.3 µg was determined. DON daily intake represented 68.3% of the established DON provisional maximum tolerable daily intake (PMTDI). Valuable preliminary information was obtained as regards DON excretion and needs to be confirmed in large-scale monitoring studies. PMID:25723325

  9. Irrigated rice area estimation using remote sensing techniques: Project's proposal and preliminary results. [Rio Grande do Sul, Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Deassuncao, G. V.; Moreira, M. A.; Novaes, R. A.

    1984-01-01

    The development of a methodology for annual estimates of irrigated rice crop in the State of Rio Grande do Sul, Brazil, using remote sensing techniques is proposed. The project involves interpretation, digital analysis, and sampling techniques of LANDSAT imagery. Results are discussed from a preliminary phase for identifying and evaluating irrigated rice crop areas in four counties of the State, for the crop year 1982/1983. This first phase involved just visual interpretation techniques of MSS/LANDSAT images.

  10. Genetic Parameter Estimation in Seedstock Swine Population for Growth Performances

    PubMed Central

    Choi, Jae Gwan; Cho, Chung Il; Choi, Im Soo; Lee, Seung Soo; Choi, Tae Jeong; Cho, Kwang Hyun; Park, Byoung Ho; Choy, Yun Ho

    2013-01-01

    The objective of this study was to estimate genetic parameters to be used for across-herd genetic evaluations of seed stock pigs at the GGP level. Performance data with pedigree information collected from swine breeder farms in Korea were provided by the Korea Animal Improvement Association (AIAK). Performance data were composed of final body weights at test days and ultrasound measures of back fat thickness (BF), rib eye area (EMA) and retail cut percentage (RCP). The breeds of swine tested were Landrace, Yorkshire and Duroc. Days to 90 kg body weight (DAYS90) were estimated with a linear function of age, and ADG was calculated from body weights at test days. Ultrasound measures were taken with A-mode ultrasound scanners by trained technicians. After censoring outliers and keeping only records of pigs born from year 2000 onward, the numbers of performance records were 78,068 for Duroc, 101,821 for Landrace and 281,421 for Yorkshire pigs. Models included contemporary groups defined by the same herd and the same season of birth of the same year, which were regarded as fixed along with the effect of sex for all traits, and body weight at test day as a linear covariate for the ultrasound measures. REML estimation was carried out with the REMLF90 program. Heritability estimates were 0.40, 0.32, 0.21, 0.39 for DAYS90, ADG, BF, EMA, RCP, respectively, for the Duroc population. Respective heritability estimates for the Landrace population were 0.43, 0.41, 0.22, and 0.43, and for the Yorkshire population were 0.36, 0.38, 0.22, and 0.42. Genetic correlation coefficients of DAYS90 with BF, EMA, or RCP were estimated to be 0.00 to 0.09, −0.15 to −0.25, and 0.22 to 0.28, respectively, for the three breed populations. The genetic correlation coefficient estimated between BF and EMA was −0.33 to −0.39. The genetic correlation coefficient estimated between BF and RCP was high and negative (−0.78 to −0.85), but the environmental correlation coefficient between these two traits was medium and negative (near −0.35), which describes

  11. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  12. An accurate link correlation estimator for improving wireless protocol performance.

    PubMed

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-02-12

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  13. Comparison of the performance of two methods for height estimation.

    PubMed

    Edelman, Gerda; Alberink, Ivo; Hoogeboom, Bart

    2010-03-01

    In the case study, two methods of performing body height measurements in images are compared based on projective geometry and 3D modeling of the crime scene. Accuracy and stability of height estimations are tested using reconstruction images of test persons of known height. Given unchanged camera settings, predictions of both methods are accurate. However, as the camera had been moved in the case, new vanishing points and camera matches had to be created for the reconstruction images. 3D modeling still yielded accurate and stable estimations. Projective geometry produced incorrect predictions for test persons and unstable intervals for questioned persons. The latter is probably caused by the straight lines in the field of view being hard to discern. With the quality of material presented, which is representative for our case practice, using vanishing points may thus yield unstable results. The results underline the importance of performing validation experiments in casework. PMID:20158593

  14. Optimal speed estimation in natural image movies predicts human performance.

    PubMed

    Burge, Johannes; Geisler, Wilson S

    2015-01-01

    Accurate perception of motion depends critically on accurate estimation of retinal motion speed. Here we first analyse natural image movies to determine the optimal space-time receptive fields (RFs) for encoding local motion speed in a particular direction, given the constraints of the early visual system. Next, from the RF responses to natural stimuli, we determine the neural computations that are optimal for combining and decoding the responses into estimates of speed. The computations show how selective, invariant speed-tuned units might be constructed by the nervous system. Then, in a psychophysical experiment using matched stimuli, we show that human performance is nearly optimal. Indeed, a single efficiency parameter accurately predicts the detailed shapes of a large set of human psychometric functions. We conclude that many properties of speed-selective neurons and human speed discrimination performance are predicted by the optimal computations, and that natural stimulus variation affects optimal and human observers almost identically.

  15. Preliminary Assessment of Variable Speed Power Turbine Technology on Civil Tiltrotor Size and Performance

    NASA Technical Reports Server (NTRS)

    Snyder, Christopher A.; Acree, Cecil W., Jr.

    2012-01-01

    A Large Civil Tiltrotor (LCTR) conceptual design was developed as part of the NASA Heavy Lift Rotorcraft Systems Investigation in order to establish a consistent basis for evaluating the benefits of advanced technology for large tiltrotors. The concept has since evolved into the second-generation LCTR2, designed to carry 90 passengers for 1,000 nm at 300 knots, with vertical takeoff and landing capability. This paper performs a preliminary assessment of variable-speed power turbine technology on LCTR2 sizing, while maintaining the same, advanced technology engine core. Six concepts were studied: an advanced, single-speed engine with a conventional power turbine layout (Advanced Conventional Engine, or ACE) using a multi-speed (shifting) gearbox, and five variable-speed power turbine (VSPT) engine concepts, comprising a matrix of either three or four turbine stages with fixed or variable guide vanes, plus a minimum-weight, two-stage, fixed-geometry VSPT. The ACE is the lightest engine, but requires a multi-speed (shifting) gearbox to maximize its fuel efficiency, whereas the VSPT concepts use a lighter, fixed-ratio gearbox. The NASA Design and Analysis of Rotorcraft (NDARC) design code was used to study the trades between rotor and engine efficiency and weight. Rotor performance was determined by the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD II), and engine performance was estimated with the Numerical Propulsion System Simulation (NPSS). Design trades for the ACE vs. VSPT are presented in terms of vehicle gross and empty weight, propulsion system weight and mission fuel burn for the civil mission. Because of its strong effect on gearbox weight and on both rotor and engine efficiency, rotor speed was chosen as the reference design variable for comparing design trades. Major study assumptions are presented and discussed. Impressive engine power-to-weight and fuel efficiency reduced vehicle sensitivity to propulsion system choice

  16. Estimating landslide losses - preliminary results of a seven-State pilot project

    USGS Publications Warehouse

    Highland, Lynn M.

    2006-01-01

    reliable information on economic losses associated with landslides. Each State survey examined the availability, distribution, and inherent uncertainties of economic loss data in their study areas. Their results provide the basis for identifying the most fruitful methods of collecting landslide loss data nationally, using methods that are consistent and provide common goals. These results can enhance and establish the future directions of scientific investigation priorities by convincingly documenting landslide risks and consequences that are universal throughout the 50 States. This report is organized as follows: A general summary of the pilot project history, goals, and preliminary conclusions from the Lincoln, Neb. workshop are presented first. Internet links are then provided for each State report, which appear on the internet in PDF format and which have been placed at the end of this open-file report. A reference section follows the reports, and, lastly, an Appendix of categories of landslide loss and sources of loss information is included for the reader's information. Please note: The Oregon Geological Survey has also submitted a preliminary report on indirect loss estimation methodology, which is also linked with the others. Each State report is unique and presented in the form in which it was submitted, having been independently peer reviewed by each respective State survey. As such, no universal 'style' or format has been adopted as there have been no decisions on which inventory methods will be recommended to the 50 states, as of this writing. The reports are presented here as information for decision makers, and for the record; although several reports provide recommendations on inventory methods that could be adopted nationwide, currently no decisions have been made on adopting a uniform methodology for the States.

  17. Preliminary study on the estimation of Emax using single-beat methods during assistance with rotary blood pumps.

    PubMed

    Sugai, Telma Keiko; Tanaka, Akira; Yoshizawa, Makoto; Shiraishi, Yasuyuki; Baba, Atsushi; Yambe, Tomoyuki; Nitta, Shin-ichi

    2008-01-01

    Recently, rotary blood pumps (RBPs) have been used as a bridge to recovery. In such applications, the RBP might be weaned once the cardiac function has recovered. In such cases, the detection of cardiac function is fundamental to the efficiency of the treatment. However, most of the widely used cardiac function indices (CFIs) were proposed for unassisted hearts and have not been completely evaluated under assistance. In contrast, Emax, which is known as a reliable CFI, has already been validated under assistance with an RBP. However, since the conventional method for the estimation of Emax has some limitations for clinical application, the objective of this study was to evaluate different single-beat estimation methods qualitatively and quantitatively using in vivo data. The preliminary results showed that although single-beat estimation methods have greater clinical applicability, not all of these estimation methods are suitable during RBP assistance.

  18. Empirical tests of performance of some M - estimators

    NASA Astrophysics Data System (ADS)

    Banaś, Marek; Ligas, Marcin

    2014-12-01

    The paper presents an empirical comparison of the performance of three well-known M-estimators (Huber's, Tukey's and Hampel's) and of several new ones. The new M-estimators were motivated by weighting functions used in orthogonal polynomial theory and kernel density estimation, as well as one derived from the Wigner semicircle probability distribution. The M-estimators were used to detect outlying observations in contaminated datasets. Calculations were performed using iteratively reweighted least-squares (IRLS). Since the residual variance (used in constructing covariance matrices) is not a robust measure of scale, the tests also employed robust measures, namely the interquartile range and the normalized median absolute deviation. The methods were tested on a simple leveling network in a large number of variants, showing both the strengths and weaknesses of M-estimation. The new M-estimators have been equipped with theoretical tuning constants to obtain 95% efficiency with respect to the standard normal distribution. The need for data-dependent tuning constants rather than those established theoretically is also pointed out.
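
    For readers unfamiliar with the mechanics, the fragment below is a bare-bones iteratively reweighted least-squares loop with Huber weights and a normalized-MAD scale estimate, applied to a contaminated straight-line fit. It is a generic illustration of the approach described above, not the authors' code; the tuning constant 1.345 gives the usual 95% efficiency under the standard normal distribution.

        import numpy as np

        def irls_huber(X, y, c=1.345, n_iter=30):
            """M-estimation of linear model coefficients via IRLS with Huber weights."""
            beta = np.linalg.lstsq(X, y, rcond=None)[0]            # ordinary LS start
            for _ in range(n_iter):
                r = y - X @ beta
                s = 1.4826 * np.median(np.abs(r - np.median(r)))   # normalized MAD scale
                u = np.abs(r) / max(s, 1e-12)
                w = np.where(u <= c, 1.0, c / u)                   # Huber weights
                sqrt_w = np.sqrt(w)
                beta = np.linalg.lstsq(X * sqrt_w[:, None], y * sqrt_w, rcond=None)[0]
            return beta

        rng = np.random.default_rng(3)
        x = np.linspace(0, 10, 50)
        y = 2.0 + 0.5 * x + rng.normal(0, 0.1, x.size)
        y[::10] += 5.0                                             # inject gross outliers
        X = np.column_stack([np.ones_like(x), x])
        print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0].round(3))
        print("Huber:", irls_huber(X, y).round(3))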

  19. Mars Science Laboratory Entry, Descent and Landing System Development Challenges and Preliminary Flight Performance

    NASA Technical Reports Server (NTRS)

    Steltzner, Adam D.; San Martin, A. Miguel; Rivellini, Tommaso P.

    2013-01-01

    The Mars Science Laboratory project recently landed the Curiosity rover on the surface of Mars. With the success of the landing system, the performance envelope of entry, descent, and landing capabilities has been extended over the previous state of the art. This paper will present an overview of the MSL entry, descent, and landing system, a discussion of a subset of its development challenges, and include a discussion of preliminary results of the flight reconstruction effort.

  20. Performance and Weight Estimates for an Advanced Open Rotor Engine

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.; Tong, Michael T.

    2012-01-01

    NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions in the environmental impact of future-generation subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow for the achievement of this objective by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.

  1. Characterization and estimation of permeability correlation structure from performance data

    SciTech Connect

    Ershaghi, I.; Al-Qahtani, M.

    1997-08-01

    In this study, the influence of permeability structure and correlation length on the system effective permeability and recovery factors of 2-D cross-sectional reservoir models, under waterflood, is investigated. Reservoirs with identical statistical representation of permeability attributes are shown to exhibit different system effective permeability and production characteristics which can be expressed by a mean and variance. The mean and variance are shown to be significantly influenced by the correlation length. Detailed quantification of the influence of horizontal and vertical correlation lengths for different permeability distributions is presented. The effect of capillary pressure, P{sub c}, on the production characteristics and saturation profiles at different correlation lengths is also investigated. It is observed that neglecting P{sub c} causes considerable error at large horizontal and short vertical correlation lengths. The effect of using constant as opposed to variable relative permeability attributes is also investigated at different correlation lengths. Next we studied the influence of correlation anisotropy in 2-D reservoir models. For a reservoir under a five-spot waterflood pattern, it is shown that the ratios of breakthrough times and recovery factors of the wells in each direction of correlation are greatly influenced by the degree of anisotropy. In fully developed fields, performance data can aid in the recognition of reservoir anisotropy. Finally, a procedure for estimating the spatial correlation length from performance data is presented. Both the production performance data and the system's effective permeability are required in estimating the correlation length.

  2. Preliminary on-orbit performance of the Thermal Infrared Sensor (TIRS) on board Landsat 8

    NASA Astrophysics Data System (ADS)

    Montanaro, Matthew; Tesfaye, Zelalem; Lunsford, Allen; Wenny, Brian; Reuter, Dennis; Markham, Brian; Smith, Ramsey; Thome, Kurtis

    2013-09-01

    The Thermal Infrared Sensor (TIRS) on board Landsat 8 continues thermal band measurements of the Earth for the Landsat program. TIRS improves on previous Landsat designs by making use of a pushbroom sensor layout to collect data from the Earth in two spectral channels. The radiometric performance requirements of each detector were set to ensure the proper radiometric integrity of the instrument. The performance of TIRS was characterized during pre-flight thermal-vacuum testing. Calibration methods and algorithms were developed to translate the raw signal from the detectors into an accurate at-aperture spectral radiance. The TIRS instrument has the ability to view an on-board variable-temperature blackbody and a deep space view port for calibration purposes while operating on-orbit. After TIRS was successfully activated on-orbit, checks were performed on the instrument data to determine its image quality. These checkouts included an assessment of the on-board blackbody and deep space views as well as normal Earth scene collects. The calibration parameters that were determined pre-launch were updated by utilizing data from these preliminary on-orbit assessments. The TIRS on-orbit radiometric performance was then characterized using the updated calibration parameters. Although the characterization of the instrument is continually assessed over the lifetime of the mission, the preliminary results indicate that TIRS is meeting the noise and stability requirements while the pixel-to-pixel uniformity performance and the absolute radiometric performance require further study.
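
    The on-board blackbody and deep-space views support, in the simplest terms, a two-point (gain and offset) calibration for each detector. The sketch below shows only that arithmetic; the count and radiance values are invented, and the actual TIRS processing (background subtraction, nonlinearity handling, averaging over many calibration frames) is not represented.

        import numpy as np

        # Hypothetical per-detector mean counts from the two calibration views.
        dn_space = np.array([102.0, 98.5, 101.2])          # deep-space view (counts)
        dn_blackbody = np.array([3120.0, 3075.0, 3101.0])  # on-board blackbody view (counts)

        L_space = 0.0        # assumed radiance of deep space (W/m^2/sr/um)
        L_blackbody = 9.69   # assumed band-averaged blackbody radiance at its set temperature

        gain = (L_blackbody - L_space) / (dn_blackbody - dn_space)   # per-detector gain
        offset = L_space - gain * dn_space                           # per-detector offset

        dn_scene = np.array([1500.0, 1480.0, 1495.0])      # hypothetical Earth-scene counts
        radiance = gain * dn_scene + offset
        print("at-aperture radiance estimate:", radiance.round(3))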

  3. Estimating performance of Feynman's ratchet with limited information

    NASA Astrophysics Data System (ADS)

    Thomas, George; Johal, Ramandeep S.

    2015-08-01

    We estimate the performance of Feynman's ratchet at given values of the ratio of cold to hot reservoir temperatures (θ) and the figure of merit (efficiency in the case of an engine and coefficient of performance in the case of a refrigerator). The latter implies that only the ratio of two intrinsic energy scales is known to the observer, but their exact values are completely uncertain. The prior probability distribution for the uncertain energy parameters is argued to be Jeffreys prior. We define an average measure for the performance of the model by averaging, over the prior distribution, the power output (heat engine) or the χ-criterion (refrigerator), which is the product of the rate of heat absorbed from the cold reservoir and the coefficient of performance (COP). We observe that the figure of merit, at optimal performance close to equilibrium, is reproduced by the prior-averaging procedure. Further, we obtain the well-known expressions of finite-time thermodynamics for the efficiency at optimal power and the COP at optimal χ-criterion, given by 1 - √θ and 1/√(1 - θ) - 1, respectively. This analogy is explored further and we point out that the expected heat flow from and to the reservoirs behaves as an effective Newtonian flow. We also show, in a class of quasi-static models of quantum heat engines, how the Curzon-Ahlborn efficiency emerges in the asymptotic limit with the use of Jeffreys prior.
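
    A quick numerical check of the two closed-form results quoted above, for an arbitrary temperature ratio θ = 0.7; the Carnot limits are printed only for comparison.

        import math

        theta = 0.7                                  # Tc / Th, arbitrary example value
        eta_opt = 1 - math.sqrt(theta)               # efficiency at optimal power
        cop_opt = 1 / math.sqrt(1 - theta) - 1       # COP at optimal chi-criterion
        print(f"efficiency at optimal power: {eta_opt:.3f} (Carnot limit {1 - theta:.3f})")
        print(f"COP at optimal chi:          {cop_opt:.3f} (Carnot limit {theta / (1 - theta):.3f})")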

  4. Preliminary validation of a new methodology for estimating dose reduction protocols in neonatal chest computed radiographs

    NASA Astrophysics Data System (ADS)

    Don, Steven; Whiting, Bruce R.; Hildebolt, Charles F.; Sehnert, W. James; Ellinwood, Jacquelyn S.; Töpfer, Karin; Masoumzadeh, Parinaz; Kraus, Richard A.; Kronemer, Keith A.; Herman, Thomas; McAlister, William H.

    2006-03-01

    The risk of radiation exposure is greatest for pediatric patients and, thus, there is a great incentive to reduce the radiation dose used in diagnostic procedures for children to "as low as reasonably achievable" (ALARA). Testing of low-dose protocols presents a dilemma, as it is unethical to repeatedly expose patients to ionizing radiation in order to determine optimum protocols. To overcome this problem, we have developed a computed-radiography (CR) dose-reduction simulation tool that takes existing images and adds synthetic noise to create realistic images that correspond to images generated with lower doses. The objective of our study was to determine the extent to which simulated, low-dose images corresponded with original (non-simulated) low-dose images. To make this determination, we created pneumothoraces of known volumes in five neonate cadavers and obtained images of the neonates at 10 mR, 1 mR and 0.1 mR (as measured at the cassette plate). The 10-mR exposures were considered "relatively-noise-free" images. We used these 10-mR images and our simulation tool to create simulated 0.1- and 1-mR images. For the simulated and original images, we identified regions of interest (ROI) of the entire chest, free-in-air region, and liver. We compared the means and standard deviations of the ROI grey-scale values of the simulated and original images with paired t tests. We also had observers rate simulated and original images for image quality and for the presence or absence of pneumothoraces. There was no statistically significant difference in grey-scale-value means or standard deviations between the simulated and original entire-chest ROIs. The observer performance suggests that an exposure >=0.2 mR is required to detect the presence or absence of pneumothoraces. These preliminary results indicate that the use of the simulation tool is promising for achieving ALARA exposures in children.
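
    One common, simple way to mimic a lower-dose acquisition from a higher-dose image is to treat pixel values as proportional to detected quanta, scale them by the dose ratio, resample with Poisson statistics, and rescale. The fragment below illustrates that generic idea only; it is not the authors' validated CR simulation tool, and it ignores detector blur, scatter and the logarithmic response of real CR systems.

        import numpy as np

        def simulate_low_dose(image, dose_fraction, counts_at_full_dose=1000.0, seed=0):
            """Return a noisier image approximating acquisition at dose_fraction of the original.

            counts_at_full_dose is an assumed mean quantum count for the brightest pixel;
            it controls how much extra noise is injected.
            """
            rng = np.random.default_rng(seed)
            # Map grey levels to expected quanta, scale by dose, and resample.
            quanta = image.astype(float) / image.max() * counts_at_full_dose * dose_fraction
            noisy = rng.poisson(quanta).astype(float)
            # Rescale back to the original grey-level range.
            return noisy / (counts_at_full_dose * dose_fraction) * image.max()

        chest = np.clip(np.random.default_rng(1).normal(120, 30, (256, 256)), 0, 255)
        low = simulate_low_dose(chest, dose_fraction=0.01)   # e.g. roughly a 100x dose reduction
        print("std of added noise (grey levels):", (low - chest).std().round(2))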

  5. How to perform meaningful estimates of genetic effects.

    PubMed

    Alvarez-Castro, José M; Le Rouzic, Arnaud; Carlborg, Orjan

    2008-05-01

    Although the genotype-phenotype map plays a central role both in Quantitative and Evolutionary Genetics, the formalization of a completely general and satisfactory model of genetic effects, particularly accounting for epistasis, remains a theoretical challenge. Here, we use a two-locus genetic system in simulated populations with epistasis to show the convenience of using a recently developed model, NOIA, to perform estimates of genetic effects and the decomposition of the genetic variance that are orthogonal even under deviations from the Hardy-Weinberg proportions. We develop the theory for how to use this model in interval mapping of quantitative trait loci using Haley-Knott regressions, and we analyze a real data set to illustrate the advantage of using this approach in practice. In this example, we show that departures from the Hardy-Weinberg proportions that are expected by sampling alone substantially alter the orthogonal estimates of genetic effects when other statistical models, like F2 or G2A, are used instead of NOIA. Finally, for the first time from real data, we provide estimates of functional genetic effects as sets of effects of natural allele substitutions in a particular genotype, which enriches the debate on the interpretation of genetic effects as implemented both in functional and in statistical models. We also discuss further implementations leading to a completely general genotype-phenotype map. PMID:18451979

  6. Cyclone performance estimates for pressurized fluidized-bed combustion

    SciTech Connect

    Henry, R.F.; Podolski, W.F.

    1981-07-01

    Hot pressurized flue gas from pressurized fluidized-bed combustion must be cleaned up prior to its expansion in a gas turbine as part of the combined-cycle electric power generation concept. The performance of conventional cyclones in experimental tests has been compared with theory, with reasonable agreement. Prediction of the performance of a larger cyclone system shows that three stages should provide the cleanup required on the basis of current estimates of turbine tolerance of particulate matter. Advances in hot gas cleanup - optimized cyclones, augmented cyclones, and alternative devices - should provide future improvement in cycle efficiencies and costs, but simple cyclones are planned for first-generation PFB/CC pilot and demonstration plants.

  7. Performance-based selection of likelihood models for phylogeny estimation.

    PubMed

    Minin, Vladimir; Abdo, Zaid; Joyce, Paul; Sullivan, Jack

    2003-10-01

    Phylogenetic estimation has largely come to rely on explicitly model-based methods. This approach requires that a model be chosen and that that choice be justified. To date, justification has largely been accomplished through use of likelihood-ratio tests (LRTs) to assess the relative fit of a nested series of reversible models. While this approach certainly represents an important advance over arbitrary model selection, the best fit of a series of models may not always provide the most reliable phylogenetic estimates for finite real data sets, where all available models are surely incorrect. Here, we develop a novel approach to model selection, which is based on the Bayesian information criterion, but incorporates relative branch-length error as a performance measure in a decision theory (DT) framework. This DT method includes a penalty for overfitting, is applicable prior to running extensive analyses, and simultaneously compares all models being considered and thus does not rely on a series of pairwise comparisons of models to traverse model space. We evaluate this method by examining four real data sets and by using those data sets to define simulation conditions. In the real data sets, the DT method selects the same or simpler models than conventional LRTs. In order to lend generality to the simulations, codon-based models (with parameters estimated from the real data sets) were used to generate simulated data sets, which are therefore more complex than any of the models we evaluate. On average, the DT method selects models that are simpler than those chosen by conventional LRTs. Nevertheless, these simpler models provide estimates of branch lengths that are more accurate both in terms of relative error and absolute error than those derived using the more complex (yet still wrong) models chosen by conventional LRTs. This method is available in a program called DT-ModSel. PMID:14530134

  8. Performance assessment methodology as applied to the Greater Confinement Disposal site: Preliminary results of the third performance iteration

    SciTech Connect

    Brown, T.J.; Baer, T.A.

    1994-12-31

    The US Department of Energy has contracted Sandia National Laboratories to conduct a performance assessment of the Greater Confinement Disposal facility, Nevada. The performance assessment is an iterative process in which transport models are used to prioritize site characterization data collection. Then the data are used to refine the conceptual and performance assessment models. The results of the first two performance assessment iterations indicate that the site is likely to comply with the performance standards under the existing hydrologic conditions. The third performance iteration expands the conceptual model of the existing transport system to include possible future events and incorporates these processes in the performance assessment models. The processes included in the third performance assessment are climate change, bioturbation, plant uptake, erosion, upward advection, human intrusion and subsidence. The work completed to date incorporates the effects of bioturbation, erosion and subsidence in the performance assessment model. Preliminary analyses indicate that the development of relatively deep-rooting plant species at the site, which could occur due to climate change, irrigated farming or subsidence, poses the greatest threat to the site's performance.

  9. Estimation of genetic trend in racing performance of thoroughbred horses.

    PubMed

    Gaffney, B; Cunningham, E P

    1988-04-21

    Thoroughbred horses have been bred exclusively for racing in England since Tudor times and thoroughbred horse racing is now practised in over 40 countries and involves more than half-a-million horses worldwide. The genetic origins of the thoroughbred go back largely to horses imported from the Middle East and North Africa to England in the late seventeenth and early eighteenth centuries. Since the establishment of the Stud Book in 1791, the population has been effectively closed to outside sources, and over 80% of the thoroughbred population's gene pool derives from 31 known ancestors from this early period. Despite intense directional selection, especially on the male side, and the generally high heritabilities of various measures of racing performance, winning times of classic races have not improved in recent decades. One possible explanation for this is that additive genetic variance in performance may have been exhausted in the face of strong selection. To test this, we have estimated the genetic trend in performance over the period 1952-77 using TIMEFORM handicap ratings which are based entirely on the horse's own performance, and express its racing merit as a weight in pounds which the compilers believe the horse should carry in an average free-handicap race. These ratings take into account such factors as the firmness of the ground, the distance and the level of the competition. Our results indicate that the failure of winning times to improve is not due to insufficient genetic variance in the thoroughbred population as a whole. PMID:3357536

  10. A preliminary estimate of the EUVE cumulative distribution of exposure time on the unit sphere. [Extreme Ultra-Violet Explorer

    NASA Technical Reports Server (NTRS)

    Tang, C. C. H.

    1984-01-01

    A preliminary study of an all-sky coverage of the EUVE mission is given. Algorithms are provided to compute the exposure of the celestial sphere under the spinning telescopes, taking into account that during part of the exposure time the telescopes are blocked by the earth. The algorithms are used to give an estimate of exposure time at different ecliptic latitudes as a function of the angle of field of view of the telescope. Sample coverage patterns are also given for a 6-month mission.

  11. Motion estimation performance models with application to hardware error tolerance

    NASA Astrophysics Data System (ADS)

    Cheong, Hye-Yeon; Ortega, Antonio

    2007-01-01

    The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has been recently introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible/acceptable system-level degradation; this leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on the video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at-faults within ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used for the error tolerance based decision strategy of accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.
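
    As a hedged illustration of how a single stuck-at fault can perturb the additive metric minimized by motion estimation, the sketch below injects a stuck-at-1 bit into the absolute-difference datapath of a sum-of-absolute-differences (SAD) computation; the block size, bit position, and random data are assumptions, not the paper's fault model.

    ```python
    import numpy as np

    def sad(block_a, block_b):
        """Sum of absolute differences: the additive metric minimized by ME."""
        return int(np.abs(block_a.astype(int) - block_b.astype(int)).sum())

    def sad_stuck_at_one(block_a, block_b, bit=3):
        """Same metric, but with one output bit of each |difference| forced to 1,
        emulating a single stuck-at-1 fault in the datapath."""
        diffs = np.abs(block_a.astype(int) - block_b.astype(int))
        return int((diffs | (1 << bit)).sum())

    rng = np.random.default_rng(0)
    cur = rng.integers(0, 256, (8, 8), dtype=np.uint8)
    ref = rng.integers(0, 256, (8, 8), dtype=np.uint8)
    print("fault-free SAD:", sad(cur, ref), " faulty SAD:", sad_stuck_at_one(cur, ref))
    ```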

  12. Preliminary estimates of spatially distributed net infiltration and recharge for the Death Valley region, Nevada-California

    USGS Publications Warehouse

    Hevesi, J.A.; Flint, A.L.; Flint, L.E.

    2002-01-01

    A three-dimensional ground-water flow model has been developed to evaluate the Death Valley regional flow system, which includes ground water beneath the Nevada Test Site. Estimates of spatially distributed net infiltration and recharge are needed to define upper boundary conditions. This study presents a preliminary application of a conceptual and numerical model of net infiltration. The model was developed in studies at Yucca Mountain, Nevada, which is located in the approximate center of the Death Valley ground-water flow system. The conceptual model describes the effects of precipitation, runoff, evapotranspiration, and redistribution of water in the shallow unsaturated zone on predicted rates of net infiltration; precipitation and soil depth are the two most significant variables. The conceptual model was tested using a preliminary numerical model based on energy- and water-balance calculations. Daily precipitation for 1980 through 1995, averaging 202 millimeters per year over the 39,556 square kilometers area of the ground-water flow model, was input to the numerical model to simulate net infiltration ranging from zero for a soil thickness greater than 6 meters to over 350 millimeters per year for thin soils at high elevations in the Spring Mountains overlying permeable bedrock. Estimated average net infiltration over the entire ground-water flow model domain is 7.8 millimeters per year. To evaluate the application of the net-infiltration model developed on a local scale at Yucca Mountain, to net-infiltration estimates representing the magnitude and distribution of recharge on a regional scale, the net-infiltration results were compared with recharge estimates obtained using empirical methods. Comparison of model results with previous estimates of basinwide recharge suggests that the net-infiltration estimates obtained using this model may overestimate recharge because of uncertainty in modeled precipitation, bedrock permeability, and soil properties for
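
    The daily bookkeeping behind such a net-infiltration model can be summarized, in our own notation rather than the authors', by a simple water balance in which net infiltration is what remains of precipitation after runoff, evapotranspiration, and the change in shallow-soil storage:

    ```latex
    I_{\mathrm{net}} \;=\; P \;-\; R_{\mathrm{off}} \;-\; ET \;-\; \Delta S_{\mathrm{soil}},
    \qquad I_{\mathrm{net}} \ge 0
    ```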

  13. Preliminary estimates of spatially distributed net infiltration and recharge for the Death Valley region, Nevada-California

    SciTech Connect

    Hevesi, J.A.; Flint, A.L.; Flint, L.E.

    2002-07-18

    A three-dimensional ground-water flow model has been developed to evaluate the Death Valley regional flow system, which includes ground water beneath the Nevada Test Site. Estimates of spatially distributed net infiltration and recharge are needed to define upper boundary conditions. This study presents a preliminary application of a conceptual and numerical model of net infiltration. The model was developed in studies at Yucca Mountain, Nevada, which is located in the approximate center of the Death Valley ground-water flow system. The conceptual model describes the effects of precipitation, runoff, evapotranspiration, and redistribution of water in the shallow unsaturated zone on predicted rates of net infiltration; precipitation and soil depth are the two most significant variables. The conceptual model was tested using a preliminary numerical model based on energy- and water-balance calculations. Daily precipitation for 1980 through 1995, averaging 202 millimeters per year over the 39,556 square kilometers area of the ground-water flow model, was input to the numerical model to simulate net infiltration ranging from zero for a soil thickness greater than 6 meters to over 350 millimeters per year for thin soils at high elevations in the Spring Mountains overlying permeable bedrock. Estimated average net infiltration over the entire ground-water flow model domain is 7.8 millimeters per year. To evaluate the application of the net-infiltration model developed on a local scale at Yucca Mountain, to net-infiltration estimates representing the magnitude and distribution of recharge on a regional scale, the net-infiltration results were compared with recharge estimates obtained using empirical methods. Comparison of model results with previous estimates of basinwide recharge suggests that the net-infiltration estimates obtained using this model may overestimate recharge because of uncertainty in modeled precipitation, bedrock permeability, and soil properties for

  14. On a stochastic approach to a code performance estimation

    NASA Astrophysics Data System (ADS)

    Gorshenin, Andrey K.; Frenkel, Sergey L.; Korolev, Victor Yu.

    2016-06-01

    The main goal of efficient software profiling is to minimize the runtime overhead under certain constraints and requirements. The traces built by a profiler during a run affect the performance of the system itself. One important aspect of this overhead arises from random variability in the context in which the application is embedded, e.g., due to possible cache misses. Such uncertainty needs to be taken into account in the design phase. To overcome these difficulties, we propose to investigate this issue through the analysis of the probability distribution of the difference between profiler times for the same code. The approximating model is based on finite normal mixtures within the framework of the method of moving separation of mixtures. We demonstrate some results for the MATLAB profiler, plotting 3D surfaces with the function surf. The idea can be used to estimate program efficiency.
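
    As a rough stand-in for the mixture modelling described above (the method of moving separation of mixtures itself is not reproduced), the sketch below fits a two-component Gaussian mixture to synthetic differences between repeated timings of the same code; the data and the component count are assumptions.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic "difference between profiler times for the same code": mostly
    # small jitter, occasionally a larger excursion (e.g., cache misses).
    rng = np.random.default_rng(1)
    diffs = np.concatenate([rng.normal(0.0, 0.02, 900), rng.normal(0.3, 0.05, 100)])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(diffs.reshape(-1, 1))
    for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
        print(f"weight={w:.2f}  mean={mu:+.3f} s  std={var ** 0.5:.3f} s")
    ```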

  15. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    SciTech Connect

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  16. Preliminary evaluation of spectral, normal and meteorological crop stage estimation approaches

    NASA Technical Reports Server (NTRS)

    Cate, R. B.; Artley, J. A.; Doraiswamy, P. C.; Hodges, T.; Kinsler, M. C.; Phinney, D. E.; Sestak, M. L. (Principal Investigator)

    1980-01-01

    Several of the projects in the AgRISTARS program require crop phenology information, including classification, acreage and yield estimation, and detection of episodal events. This study evaluates several crop calendar estimation techniques for their potential use in the program. The techniques, although generic in approach, were developed and tested on spring wheat data collected in 1978. There are three basic approaches to crop stage estimation: historical averages for an area (normal crop calendars), agrometeorological modeling of known crop-weather relationships (agrometeorological, or agromet, crop calendars), and interpretation of spectral signatures (spectral crop calendars). In all, 10 combinations of planting and biostage estimation models were evaluated. Dates of stage occurrence are estimated with biases between -4 and +4 days, while root mean square errors range from 10 to 15 days. Results are inconclusive as to the superiority of any of the models, and further evaluation of the models with the 1979 data set is recommended.

  17. Dose estimates for the solid waste performance assessment

    SciTech Connect

    Rittman, P.D.

    1994-08-30

    The Solid Waste Performance Assessment calculations by PNL in 1990 were redone to incorporate changes in methods and parameters since then. The ten scenarios found in their report were reduced to three, the Post-Drilling Resident, the Post-Excavation Resident, and an All Pathways Irrigator. In addition, estimates of population dose to people along the Columbia River are also included. The attached report describes the methods and parameters used in the calculations, and derives dose factors for each scenario. In addition, waste concentrations, ground water concentrations, and river water concentrations needed to reach the performance objectives of 100 mrem/yr and 500 person-rem/yr are computed. Internal dose factors from DOE-0071 were applied when computing internal dose. External dose rate factors came from the GENII Version 1.485 software package. Dose calculations were carried out on a spreadsheet. The calculations are described in detail in the report for 63 nuclides, including 5 not presently in the GENII libraries. The spreadsheet calculations were checked by comparison with GENII, as described in Appendix D.

  18. Performance Estimation for Two-Dimensional Brownian Rotary Ratchet Systems

    NASA Astrophysics Data System (ADS)

    Tutu, Hiroki; Horita, Takehiko; Ouchi, Katsuya

    2015-04-01

    Within the context of the Brownian ratchet model, a molecular rotary system that can perform unidirectional rotations induced by linearly polarized ac fields and produce positive work under loads was studied. The model is based on the Langevin equation for a particle in a two-dimensional (2D) three-tooth ratchet potential of threefold symmetry. The performance of the system is characterized by the coercive torque, i.e., the strength of the load competing with the torque induced by the ac driving field, and the energy efficiency in force conversion from the driving field to the torque. We propose a master equation for coarse-grained states, which takes into account the boundary motion between states, and develop a kinetic description to estimate the mean angular momentum (MAM) and powers relevant to the energy balance equation. The framework of analysis incorporates several 2D characteristics and is applicable to a wide class of models of smooth 2D ratchet potential. We confirm that the obtained expressions for MAM, power, and efficiency of the model can enable us to predict qualitative behaviors. We also discuss the usefulness of the torque/power relationship for experimental analyses, and propose a characteristic for 2D ratchet systems.
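
    A hedged sketch of the kind of overdamped two-dimensional Langevin dynamics described above, written in our own notation rather than the paper's:

    ```latex
    \gamma \dot{\mathbf{r}}(t) \;=\; -\nabla V(\mathbf{r})
      \;+\; \mathbf{E}_0 \cos(\omega t)
      \;-\; \mathbf{f}_{\mathrm{load}}(\mathbf{r})
      \;+\; \boldsymbol{\xi}(t),
    \qquad
    \langle \xi_i(t)\,\xi_j(t') \rangle = 2\gamma k_B T\,\delta_{ij}\,\delta(t-t')
    ```

    Here V is the three-tooth ratchet potential of threefold symmetry, E_0 cos(ωt) the linearly polarized ac drive, f_load the force associated with the load torque, and ξ thermal noise.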

  19. Subject-specific estimation of central aortic blood pressure via system identification: preliminary in-human experimental study.

    PubMed

    Fazeli, Nima; Kim, Chang-Sei; Rashedi, Mohammad; Chappell, Alyssa; Wang, Shaohua; MacArthur, Roderick; McMurtry, M Sean; Finegan, Barry; Hahn, Jin-Oh

    2014-10-01

    This paper demonstrates preliminary in-human validity of a novel subject-specific approach to estimation of central aortic blood pressure (CABP) from peripheral circulatory waveforms. In this "Individualized Transfer Function" (ITF) approach, CABP is estimated in two steps. First, the circulatory dynamics of the cardiovascular system are determined via model-based system identification, in which an arterial tree model is characterized based on the circulatory waveform signals measured at the body's extremity locations. Second, CABP waveform is estimated by de-convolving peripheral circulatory waveforms from the arterial tree model. The validity of the ITF approach was demonstrated using experimental data collected from 13 cardiac surgery patients. Compared with the invasive peripheral blood pressure (BP) measurements, the ITF approach yielded significant reduction in errors associated with the estimation of CABP, including 1.9-2.6 mmHg (34-42 %) reduction in BP waveform errors (p < 0.05) as well as 5.8-9.1 mmHg (67-76 %) and 6.0-9.7 mmHg (78-85 %) reductions in systolic and pulse pressure (SP and PP) errors (p < 0.05). It also showed modest but significant improvement over the generalized transfer function approach, including 0.1 mmHg (2.6 %) reduction in BP waveform errors as well as 0.7 (20 %) and 5.0 mmHg (75 %) reductions in SP and PP errors (p < 0.05).
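
    A minimal sketch of the deconvolution step, assuming the system-identification stage has already produced a discrete central-to-peripheral impulse response h; the regularized (Wiener-style) inverse filter and the eps parameter are our simplifications, not the ITF method itself.

    ```python
    import numpy as np

    def estimate_central_bp(peripheral, h, eps=1e-3):
        """Estimate a central waveform by regularized frequency-domain deconvolution
        of the peripheral waveform with an identified central-to-peripheral impulse
        response h (the eps term avoids dividing by near-zero spectral bins)."""
        n = len(peripheral)
        H = np.fft.rfft(h, n)
        P = np.fft.rfft(peripheral, n)
        C = P * np.conj(H) / (np.abs(H) ** 2 + eps)
        return np.fft.irfft(C, n)
    ```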

  20. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  1. Preliminary estimates of annual agricultural pesticide use for counties of the conterminous United States, 2013

    USGS Publications Warehouse

    Baker, Nancy T.

    2015-10-05

    Thelin, G.P., and Stone, W.W., 2013, Estimation of annual agricultural pesticide use for counties of the conterminous United States, 1992–2009: U.S. Geological Survey Scientific Investigations Report 2013–5009, 54 p.

  2. A preliminary evaluation of an F100 engine parameter estimation process using flight data

    NASA Technical Reports Server (NTRS)

    Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.

    1990-01-01

    The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.

  3. A preliminary evaluation of an F100 engine parameter estimation process using flight data

    NASA Technical Reports Server (NTRS)

    Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.

    1990-01-01

    The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the 'compact engine model' (CEM). In this step the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion-control-law development.

  4. A Preliminary Performance Assessment for Salt Disposal of High-Level Nuclear Waste - 12173

    SciTech Connect

    Lee, Joon H.; Clayton, Daniel; Jove-Colon, Carlos; Wang, Yifeng

    2012-07-01

    A salt repository is one of the four geologic media currently under study by the U.S. DOE Office of Nuclear Energy to support the development of a long-term strategy for geologic disposal of commercial used nuclear fuel (UNF) and high-level radioactive waste (HLW). The immediate goal of the generic salt repository study is to develop the necessary modeling tools to evaluate and improve the understanding of the repository system response and processes relevant to long-term disposal of UNF and HLW in a salt formation. The current phase of this study considers representative geologic settings and features adopted from previous studies for salt repository sites. For the reference scenario, the brine flow rates in the repository and underlying interbeds are very low, and transport of radionuclides in the transport pathways is dominated by diffusion and greatly retarded by sorption on the interbed filling materials. I-129 is the dominant annual dose contributor at the hypothetical accessible environment, but the calculated mean annual dose is negligibly small. For the human intrusion (or disturbed) scenario, the mean mass release rate and mean annual dose histories are very different from those for the reference scenario. Actinides including Pu-239, Pu-242 and Np-237 are major annual dose contributors, and the calculated peak mean annual dose is acceptably low. A performance assessment model for a generic salt repository has been developed incorporating, where applicable, representative geologic settings and features adopted from literature data for salt repository sites. The conceptual model and scenario for radionuclide release and transport from a salt repository were developed utilizing literature data. The salt GDS model was developed in a probabilistic analysis framework. The preliminary performance analysis for demonstration of model capability is for an isothermal condition at the ambient temperature for the near field. The capability demonstration emphasizes key

  5. Structural and preliminary thermal performance testing of a pressure activated contact heat exchanger

    NASA Technical Reports Server (NTRS)

    Lee, C. Y.; Christian, E. L.; Wohlwend, J. W.; Parish, R. C.

    1987-01-01

    A contact heat exchanger concept is being developed for use onboard Space Station as an interface device between the external thermal bus and pressurized modules. The concept relies on mechanical contact activated by the fluid pressure inside thin-walled tubes. Structural testing was carried out to confirm the technological feasibility of using such thin-walled tubes. The test results also verified the linear elastic stress analysis used to predict the mechanical behavior of the tubes. Preliminary thermal testing was also performed with liquid Freon-11 flowing inside the tubes and heat supplied by electrical heating from the bottom of the contact heat exchanger baseplate. The test data showed excellent agreement with analytical predictions for all thermal resistances except the two-phase flow characteristics. Testing with two-phase flow inside the tubes will, however, be performed on the NASA-JSC test bed.

  6. The LPSP instrument on OSO 8. II - In-flight performance and preliminary results

    NASA Technical Reports Server (NTRS)

    Bonnet, R. M.; Lemaire, P.; Vial, J. C.; Artzner, G.; Gouttebroze, P.; Jouchoux, A.; Vidal-Madjar, A.; Leibacher, J. W.; Skumanich, A.

    1978-01-01

    The paper describes the in-flight performance for the first 18 months of operation of the LPSP (Laboratoire de Physique Stellaire et Planetaire) instrument incorporated in the OSO 8 launched June 1975. By means of the instrument, an absolute pointing accuracy of nearly one second was achieved in orbit during real-time operations. The instrument uses a Cassegrain telescope and a spectrometer simultaneously observing six wavelengths. In-flight performance is discussed with attention to angular resolution, spectral resolution, dispersion and grating mechanism (spectral scanner) stability, scattered light background and dark current, photometric standardization, and absolute calibration. Real-time operation and problems are considered with reference to pointing system problems, target acquisition, and L-alpha modulation. Preliminary results involving the observational program, quiet sun and chromospheric studies, quiet chromospheric oscillation and transients, sunspots and active regions, prominences, and aeronomy investigations are reported.

  7. Performance analysis of bullet trajectory estimation: Approach, simulation, and experiments

    SciTech Connect

    Ng, L.C.; Karr, T.J.

    1994-11-08

    This paper describes an approach to estimate a bullet's trajectory from a time sequence of angles-only observations from a high-speed camera, and analyzes its performance. The technique is based on fitting a ballistic model of a bullet in flight, along with unknown source location parameters, to a time series of angular observations. The theory is developed to precisely reconstruct, from firing range geometry, the actual bullet trajectory as it appeared on the focal plane array and in real space. A metric for measuring the effective trajectory track error is also presented. Detailed Monte-Carlo simulations assuming different bullet ranges, shot angles, camera frame rates, and angular noise show that angular track error can be as small as 100 μrad for a 2 mrad/pixel sensor. It is also shown that if actual values of the bullet ballistic parameters were available, the bullet's source-location variables and angle-of-flight information could also be determined.
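
    A minimal sketch of the fitting idea under strong simplifying assumptions (planar, drag-free ballistics and a single fixed camera); the parameterization and the SciPy-based solver are ours, not the paper's model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def predicted_angles(params, t, cam_pos):
        """Drag-free planar ballistic model, params = (x0, y0, vx, vy); returns the
        bearing angle from the camera to the bullet at each observation time."""
        x0, y0, vx, vy = params
        g = 9.81
        x = x0 + vx * t
        y = y0 + vy * t - 0.5 * g * t ** 2
        return np.arctan2(y - cam_pos[1], x - cam_pos[0])

    def fit_trajectory(t, measured_angles, cam_pos, initial_guess):
        """Least-squares fit of source location and velocity to angle residuals."""
        residuals = lambda p: predicted_angles(p, t, cam_pos) - measured_angles
        return least_squares(residuals, initial_guess).x
    ```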

  8. Preliminary calibration of GPS signals and its effects on soil moisture estimation

    NASA Astrophysics Data System (ADS)

    Wan, Wei; Li, Huang; Chen, Xiuwan; Luo, Peng; Wan, Jiahuan

    2013-04-01

    In recent years, Global Navigation Satellite System Reflectometry (GNSS-R) has been developed as a new remote sensing tool to estimate soil moisture content (SMC). Signal error in Global Positioning System (GPS) bistatic radar is an important factor affecting the accuracy of SMC estimation. In this paper, two methods of GPS signal calibration involving both the direct and reflected signals are introduced, and a detailed explanation of the theoretical basis for these methods is given. An improved SMC estimation model utilizing calibrated GPS L-band signals is proposed, and the estimation accuracy is validated using airborne GPS data from the Soil Moisture Experiment in 2002 (SMEX02). We chose 21 soybean and corn sites in the Walnut Creek region of the US for validation. The sites are divided into three categories according to their vegetation cover: bare soil, mid-vegetation cover (Mid-Veg), and high-vegetation cover (High-Veg). The SMC estimation error is 11.17% for bare soil and 8.12% for Mid-Veg sites, much better than that of the traditional model. For High-Veg sites, the effect of signal attenuation due to vegetation cover is preliminarily taken into consideration, and a linear model based on the Normalized Difference Vegetation Index (NDVI) is adopted to obtain a factor for rectifying the "over-calibration"; the error for High-Veg sites is finally reduced to 3.81%.

  9. Regression model estimation of early season crop proportions: North Dakota, some preliminary results

    NASA Technical Reports Server (NTRS)

    Lin, K. K. (Principal Investigator)

    1982-01-01

    To estimate crop proportions early in the season, an approach is proposed based on: use of a regression-based prediction equation to obtain an a priori estimate for specific major crop groups; modification of this estimate using current-year LANDSAT and weather data; and a breakdown of the major crop groups into specific crops by regression models. Results from the development and evaluation of appropriate regression models for the first portion of the proposed approach are presented. The results show that the model predicts 1980 crop proportions very well at both county and crop reporting district levels. In terms of planted acreage, the model underpredicted the 1980 published planted acreage at the county level by 9.1 percent. At the crop reporting district level it predicted the 1980 published planted acreage almost exactly, overpredicting by just 0.92 percent.

  10. Preliminary development of a technique for estimating municipal-solid-waste generation

    NASA Astrophysics Data System (ADS)

    1981-06-01

    The data obtained revealed detailed generation quantities by collection route for defined areas of the cities. These data were then used to test various predictive factors. The end result of the analysis is a provisional method for estimating residential solid-waste generation by relating it to income data readily available from government documents, and a provisional method for estimating commercial solid waste generation by relating it to readily available retail sales data. The analysis of the data obtained has resulted in a residential solid-waste estimation technique that can be applied to virtually any city or region of interest. The general approach consisted of: data collection from selected solid waste jurisdictions, the determination of demographic and socioeconomic data for each solid waste jurisdiction; and data analysis. Each stage is discussed.

  11. Preliminary estimate of the manufacturing cost for lithium/metal sulfide cells for stationary and mobile applications

    SciTech Connect

    Chilenskas, A. A.; Schaefer, J. C.; Towle, W. L.; Barney, D. L.

    1980-01-01

    A preliminary estimate has been made of the manufacturing cost for lithium/iron sulfide cells for stationary energy-storage and electric-vehicle applications. This preliminary cost analysis indicated that the manufacturing cost (in 1979 dollars) is $24 to 41/kW-h for stationary energy-storage cells and $31 to 55/kW-h for electric-vehicle cells. The materials cost was found to contribute between 52 and 65% of this manufacturing cost. The most expensive materials and components were lithium (metal and compounds), $4.61 to $14.26/kW-h; BN felt, $4.00 to 8.50/kW-h; feed-through components, $2.40/kW-h; positive current collectors, $1.48 to 2.20/kW-h; and aluminum, $1.43 to 1.66/kW-h. The projected lithium requirements were determined for use in lithium/iron sulfide batteries and conventional uses to the year 2006. The results showed that the lithium requirements were about 275,000 short tons by 2006, which is equivalent to about 51% of presently known US resources. Of this amount, about 33% would be used in battery production and 67% consumed in conventional uses. It is expected that the lithium used in battery production would be recycled.

  12. School based working memory training: Preliminary finding of improvement in children’s mathematical performance

    PubMed Central

    Witt, Marcus

    2011-01-01

    Working memory is a complex cognitive system responsible for the concurrent storage and processing of information. Given that a complex cognitive task like mental arithmetic clearly places demands on working memory (e.g., in remembering partial results, monitoring progress through a multi-step calculation), there is surprisingly little research exploring the possibility of increasing young children’s working memory capacity through systematic school-based training. This study reports the preliminary results of a working memory training programme, targeting executive processes such as inhibiting unwanted information, monitoring processes, and the concurrent storage and processing of information. The findings suggest that children who received working memory training made significantly greater gains in the trained working memory task, and in a non-trained visual-spatial working memory task, than a matched control group. Moreover, the training group made significant improvements in their mathematical functioning as measured by the number of errors made in an addition task compared to the control group. These findings, although preliminary, suggest that school-based measures to train working memory could have benefits in terms of improved performance in mathematics. PMID:21818243

  13. School based working memory training: Preliminary finding of improvement in children's mathematical performance.

    PubMed

    Witt, Marcus

    2011-01-01

    Working memory is a complex cognitive system responsible for the concurrent storage and processing of information. Given that a complex cognitive task like mental arithmetic clearly places demands on working memory (e.g., in remembering partial results, monitoring progress through a multi-step calculation), there is surprisingly little research exploring the possibility of increasing young children's working memory capacity through systematic school-based training. This study reports the preliminary results of a working memory training programme, targeting executive processes such as inhibiting unwanted information, monitoring processes, and the concurrent storage and processing of information. The findings suggest that children who received working memory training made significantly greater gains in the trained working memory task, and in a non-trained visual-spatial working memory task, than a matched control group. Moreover, the training group made significant improvements in their mathematical functioning as measured by the number of errors made in an addition task compared to the control group. These findings, although preliminary, suggest that school-based measures to train working memory could have benefits in terms of improved performance in mathematics. PMID:21818243

  14. Preliminary Results of Performance Measurements on a Cylindrical Hall-Effect Thruster with Magnetic Field Generated by Permanent Magnets

    NASA Technical Reports Server (NTRS)

    Polzin, K. A.; Raitses, Y.; Merino, E.; Fisch, N. J.

    2008-01-01

    The performance of a low-power cylindrical Hall thruster, which more readily lends itself to miniaturization and low-power operation than a conventional (annular) Hall thruster, was measured using a planar plasma probe and a thrust stand. The field in the cylindrical thruster was produced using permanent magnets, promising a power reduction over previous cylindrical thruster iterations that employed electromagnets to generate the required magnetic field topology. Two sets of ring-shaped permanent magnets are used, and two different field configurations can be produced by reorienting the poles of one magnet relative to the other. A plasma probe measuring ion flux in the plume is used to estimate the current utilization for the two magnetic configurations. The measurements indicate that electron transport is impeded much more effectively in one configuration, implying a higher thrust efficiency. Preliminary thruster performance measurements on this configuration were obtained over a power range of 100-250 W. The thrust levels over this power range were 3.5-6.5 mN, with anode efficiencies and specific impulses spanning 14-19% and 875-1425 s, respectively. The magnetic field in the thruster was lower for the thrust measurements than for the plasma probe measurements due to heating and weakening of the permanent magnets, reducing the maximum field strength from 2 kG to roughly 750-800 G. The discharge current levels observed during thrust stand testing were anomalously high compared to those levels measured in previous experiments with this thruster.
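
    As a rough consistency check (not a calculation from the paper), pairing the largest quoted thrust, specific impulse, and power in the standard anode-efficiency relation gives a value inside the reported 14-19% range:

    ```latex
    \eta_a \;=\; \frac{T\, g_0\, I_{sp}}{2\, P_a}
    \;=\; \frac{(6.5\times10^{-3}\ \mathrm{N})(9.81\ \mathrm{m/s^2})(1425\ \mathrm{s})}{2\,(250\ \mathrm{W})}
    \;\approx\; 0.18
    ```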

  15. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne Rose; Nessel, James A.; Miranda, Felix

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a QV-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.

  16. Frequency Estimator Performance for a Software-Based Beacon Receiver

    NASA Technical Reports Server (NTRS)

    Zemba, Michael J.; Morse, Jacquelynne R.; Nessel, James A.

    2014-01-01

    As propagation terminals have evolved, their design has trended more toward a software-based approach that facilitates convenient adjustment and customization of the receiver algorithms. One potential improvement is the implementation of a frequency estimation algorithm, through which the primary frequency component of the received signal can be estimated with a much greater resolution than with a simple peak search of the FFT spectrum. To select an estimator for usage in a Q/V-band beacon receiver, analysis of six frequency estimators was conducted to characterize their effectiveness as they relate to beacon receiver design.

  17. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 3, Model parameters: Sandia WIPP Project

    SciTech Connect

    Not Available

    1992-12-29

    This volume documents model parameters chosen as of July 1992 that were used by the Performance Assessment Department of Sandia National Laboratories in its 1992 preliminary performance assessment of the Waste Isolation Pilot Plant (WIPP). Ranges and distributions for about 300 modeling parameters in the current secondary data base are presented in tables for the geologic and engineered barriers, global materials (e.g., fluid properties), and agents that act upon the WIPP disposal system such as climate variability and human-intrusion boreholes. The 49 parameters sampled in the 1992 Preliminary Performance Assessment are given special emphasis with tables and graphics that provide insight and sources of data for each parameter.

  18. A preliminary estimate of geoid-induced variations in repeat orbit satellite altimeter observations

    NASA Technical Reports Server (NTRS)

    Brenner, Anita C.; Beckley, B. D.; Koblinsky, C. J.

    1990-01-01

    Altimeter satellites are often maintained in a repeating orbit to facilitate the separation of sea-height variations from the geoid. However, atmospheric drag and solar radiation pressure cause a satellite orbit to drift. For Geosat this drift causes the ground track to vary by + or - 1 km about the nominal repeat path. This misalignment leads to an error in the estimates of sea surface height variations because of the local slope in the geoid. This error has been estimated globally for the Geosat Exact Repeat Mission using a mean sea surface constructed from Geos 3 and Seasat altimeter data. Over most of the ocean the geoid gradient is small, and the repeat-track misalignment leads to errors of only 1 to 2 cm. However, in the vicinity of trenches, continental shelves, islands, and seamounts, errors can exceed 20 cm. The estimated error is compared with direct estimates from Geosat altimetry, and a strong correlation is found in the vicinity of the Tonga and Aleutian trenches. This correlation increases as the orbit error is reduced because of the increased signal-to-noise ratio.
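
    The magnitude of this error can be illustrated with the first-order relation between the cross-track offset and the local sea-surface (geoid) slope; the gradient values below are assumed for illustration, not taken from the paper:

    ```latex
    \Delta h \;\approx\; \lvert \nabla N \rvert\, \delta x:
    \qquad (0.02\ \mathrm{m/km})(1\ \mathrm{km}) = 2\ \mathrm{cm},
    \qquad (0.2\ \mathrm{m/km})(1\ \mathrm{km}) = 20\ \mathrm{cm}
    ```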

  19. Preliminary Estimates from the 1995 National Household Survey on Drug Abuse. Advance Report Number 18.

    ERIC Educational Resources Information Center

    Gfroerer, Joseph

    This report presents the first results from the 1995 National Household Survey on Drug Abuse, an annual survey conducted by the Substance Abuse and Mental Health Services Administration. The survey provides estimates of the prevalence of use of a variety of illicit drugs, alcohol, and tobacco, based on a nationally representative sample of the…

  20. Preliminary verification of instantaneous air temperature estimation for clear sky conditions based on SEBAL

    NASA Astrophysics Data System (ADS)

    Zhu, Shanyou; Zhou, Chuxuan; Zhang, Guixin; Zhang, Hailong; Hua, Junwei

    2016-03-01

    Spatially distributed near-surface air temperature at a height of 2 m is an important input parameter for land surface models. Retrieving instantaneous air temperature from remote sensing observations is of great significance for both theoretical research and practical applications. An approach based on the Surface Energy Balance Algorithm for Land (SEBAL) to retrieve air temperature under clear sky conditions is presented. Taking meteorological measurements at one station as the reference and remotely sensed data as the model input, the method estimates air temperature through an iterative computation. The method was applied to the area of Jiangsu province for nine scenes of MODIS data products, as well as to part of Fujian province, China, using four scenes of Landsat 8 imagery. Comparing the air temperature estimated by the proposed method with meteorological station measurements, the root mean square error is 1.7 °C at 1000 m spatial resolution and 2.6 °C at 30 m spatial resolution. Sensitivity analysis of the influencing factors reveals that land surface temperature is the factor to which the estimate is most sensitive. The results indicate that the method has great potential for estimating the instantaneous air temperature distribution under clear sky conditions.

  1. Preliminary estimates of benthic fluxes of dissolved metals in Coeur d'Alene Lake, Idaho

    USGS Publications Warehouse

    Balistrieri, L.S.

    1998-01-01

    This report presents porewater and selected water column data collected from Coeur d'Alene Lake in September of 1992. Despite probable oxidation of the porewater samples during collection and handling, these data are used to calculate molecular diffusive fluxes of dissolved metals (that is, Zn, Pb, Cu, and Mn) across the sediment-water interface. While these data and calculations provide preliminary information on benthic metal fluxes in Coeur d'Alene Lake, further work is needed to verify their direction and magnitude. The benthic flux calculations indicate that the sediment is generally a source of dissolved Zn, Cu, Mn, and, possibly, Pb to the overlying water column. These benthic fluxes are compared with two other major sources of metals to Coeur d'Alene Lake-the Coeur d'Alene and St. Joe Rivers. Comparisons indicate that benthic fluxes of Zn, Pb, and Cu are generally less than half of the fluxes of these metals into the lake from the Coeur d'Alene River. However, in a few cases, the calculated benthic metal fluxes exceed the Coeur d'Alene River fluxes. Benthic fluxes of Zn and, possibly, Pb may be greater than the corresponding metal fluxes from the St. Joe River. These results have implications for changes in the relative importance of metal sources to the lake as remediation activities in the Coeur d'Alene River basin proceed.

  2. Performance of internal covariance estimators for cosmic shear correlation functions

    SciTech Connect

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
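
    For readers unfamiliar with the delete-one jackknife mentioned above, the sketch below estimates the covariance of a d-dimensional statistic from N sub-region measurements; the synthetic data and dimensions are illustrative and do not reflect the DES configuration.

    ```python
    import numpy as np

    def jackknife_covariance(samples):
        """Delete-one jackknife covariance of the mean of an (N, d) array of
        per-subregion measurements of a d-dimensional statistic."""
        samples = np.asarray(samples, dtype=float)
        n = samples.shape[0]
        loo_means = np.array([np.delete(samples, i, axis=0).mean(axis=0)
                              for i in range(n)])       # leave-one-out means
        dev = loo_means - loo_means.mean(axis=0)
        return (n - 1) / n * dev.T @ dev

    rng = np.random.default_rng(2)
    xi = rng.normal(size=(100, 5))            # e.g., 5 angular bins of a 2-point function
    print(jackknife_covariance(xi).shape)     # (5, 5)
    ```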

  3. Performance of internal covariance estimators for cosmic shear correlation functions

    DOE PAGES

    Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.

    2015-12-31

    Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.

  4. Teleseismic waveform analysis of deep-focus earthquake for the preliminary estimation of crustal structure of the northern part of Korea

    NASA Astrophysics Data System (ADS)

    Cho, H.; Shin, J.

    2010-12-01

    Crustal structures in several areas of the northern part of Korea are estimated using the long-period teleseismic depth phase pP and the Moho underside-reflected phase pMP generated by deep-focus earthquakes. The waveform analysis is performed by comparing recordings and synthetics of these phases computed with a hybrid reflectivity method: a WKBJ approximation for propagation in the vertically inhomogeneous mantle and computation of the Haskell propagator matrix in the layered crust and upper mantle. The pMP phase is a precursor to the surface-reflection pP phase, and its amplitude is relatively small. Analysis of the vertical components of P, pP, and pMP provides an estimate of the structure on the source side. Deep-focus earthquakes that occurred in the border area of North Korea, China, and Russia are suitable for this study. Seismograms recorded at GSN stations in Southeast Asia provide clear identification of the pMP and pP phases. The preliminary analysis employs a deep-focus (580 km) earthquake of magnitude 6.3 Mb whose epicenter is located in the border region between eastern Russia and northeastern China. Seismograms bandpass filtered between 0.01 and 0.2 Hz clearly exhibit pMP and pP phases recorded at four GSN stations (BTDF, PSI, COCO, and DGAR). Shin and Baag (2000) suggested an approximate crustal thickness for the region between northern Korea and northeastern China. The crustal thickness appears to vary from 25 to 35 km, which is compatible with the preliminary analysis.

  5. A preliminary estimate of future communications traffic for the electric power system

    NASA Technical Reports Server (NTRS)

    Barnett, R. M.

    1981-01-01

    Diverse new generator technologies that use renewable energy and improve operational efficiency throughout existing electric power systems are presented. A model utility is described, and the information transfer requirements imposed by incorporating dispersed storage and generation technologies and by implementing more extensive energy management are estimated. An example of possible traffic for an assumed system is provided, along with an approach that can be applied to other systems, control configurations, or dispersed storage and generation penetrations.

  6. Age and growth of round gobies in Lake Michigan, with preliminary mortality estimation

    USGS Publications Warehouse

    Huo, Bin; Madenjian, Charles P.; Xie, Cong X.; Zhao, Yingming; O'Brien, Timothy P.; Czesny, Sergiusz J.

    2015-01-01

    The round goby (Neogobius melanostomus) is a prevalent invasive species throughout Lake Michigan, as well as other Laurentian Great Lakes, yet little information is available on spatial variation in round goby growth within one body of water. Age and growth of round goby at three areas of Lake Michigan were studied by otolith analysis from a sample of 659 specimens collected from 2008 to 2012. Total length (TL) ranged from 48 to 131 mm for Sturgeon Bay, from 50 to 125 mm for Waukegan, and from 54 to 129 mm for Sleeping Bear Dunes. Ages ranged from 2 to 7 years for Sturgeon Bay, from 2 to 5 years for Waukegan, and from 2 to 6 years for Sleeping Bear Dunes. Area-specific and sex-specific body–otolith relationships were used to back-calculate estimates of total length at age, which were fitted to von Bertalanffy models to estimate growth rates. For both sexes, round gobies at Sleeping Bear Dunes and Waukegan grew significantly faster than those at Sturgeon Bay. However, round goby growth did not significantly differ between Sleeping Bear Dunes and Waukegan for either sex. At all three areas of Lake Michigan, males grew significantly faster than females. Based on catch curve analysis, estimates of annual mortality rates ranged from 0.79 to 0.84. These relatively high mortality rates suggested that round gobies may be under predatory control in Lake Michigan.
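
    The standard relations underlying these growth and mortality estimates, written in generic notation rather than with values from the study, are the von Bertalanffy growth curve and the conversion of a catch-curve instantaneous mortality rate Z to an annual rate A:

    ```latex
    L(t) \;=\; L_\infty \left(1 - e^{-K\,(t - t_0)}\right),
    \qquad
    A \;=\; 1 - e^{-Z}
    ```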

  7. Preliminary estimates of galactic cosmic ray shielding requirements for manned interplanetary missions

    NASA Technical Reports Server (NTRS)

    Townsend, Lawrence W.; Wilson, John W.; Nealy, John E.

    1988-01-01

    Estimates of radiation risk to the blood forming organs from galactic cosmic rays are presented for manned interplanetary missions. The calculations use the Naval Research Laboratory cosmic ray spectrum model as input into the Langley Research Center galactic cosmic ray transport code. This transport code, which transports both heavy ions and nucleons, can be used with any number of layers of target material, consisting of up to five different constituents per layer. Calculated galactic cosmic ray doses and dose equivalents behind various thicknesses of aluminum and water shielding are presented for solar maximum and solar minimum periods. Estimates of risk to the blood forming organs are made using 5 cm depth dose/dose equivalent values for water. These results indicate that at least 5 g/sq cm (5 cm) of water or 6.5 g/sq cm (2.4 cm) of aluminum shield is required to reduce annual exposure below the current recommended limit of 50 rem. Because of the large uncertainties in fragmentation parameters and the input cosmic ray spectrum, these exposure estimates may be uncertain by as much as 70 percent. Therefore, more detailed analyses with improved inputs could indicate the need for additional shielding.
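
    The quoted thicknesses follow from the areal densities, assuming material densities of 1.0 g/cm^3 for water and 2.7 g/cm^3 for aluminum:

    ```latex
    t_{\mathrm{H_2O}} = \frac{5\ \mathrm{g/cm^2}}{1.0\ \mathrm{g/cm^3}} = 5\ \mathrm{cm},
    \qquad
    t_{\mathrm{Al}} = \frac{6.5\ \mathrm{g/cm^2}}{2.7\ \mathrm{g/cm^3}} \approx 2.4\ \mathrm{cm}
    ```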

  8. Preliminary Results from Nuclear Decay Experiments Performed During the Solar Eclipse of August 1, 2008

    SciTech Connect

    Javorsek, D. II; Kerford, J. L.; Stewart, C. A.; Hoft, A. W.; Horan, T. J.; Buncher, J. B.; Fischbach, E.; Gruenwald, J. T.; Heim, J.; Kohler, M.; Longman, A.; Mattes, J. J.; Mohsinally, T.; Newport, J. R.; Jenkins, J. H.; Lee, R. H.; Morreale, B.; Morris, D. B.; O'Keefe, D.; Terry, B.

    2010-08-04

    Recent developments in efforts to determine the cause of anomalous experimental nuclear decay fluctuations suggest a possible solar influence. Here we report on the preliminary results from several nuclear decay experiments performed at Thule Air Base in Greenland during the solar eclipse that took place on 1 August 2008. Because of the high northern latitude and time of year, the Sun never set and thereby provided relatively stabilized conditions for nearly all environmental factors. An exhaustive list of relevant factors was monitored during the eclipse to help rule out possible systematic effects due to external influences. In addition to the normal temperature, pressure, humidity, and cloud cover associated with the outside ambient observations, we included similar measurements within the laboratory along with monitoring of the power supply output, local neutron count rates, and the Earth's local magnetic and electric fields.

  9. Preliminary analysis of performance and loads data from the 2-megawatt mod-1 wind turbine generator

    NASA Technical Reports Server (NTRS)

    Spera, D. A.; Viterna, L. A.; Richards, T. R.; Neustadter, H. E.

    1979-01-01

    Preliminary test data on output power versus wind speed, rotor blade loads, system dynamic behavior, and start-stop characteristics on the Mod-1 wind turbine generator are presented. These data were analyzed statistically and are compared with design predictions of system performance and loads. To date, the Mod-1 wind turbine generator has produced up to 1.5 MW of power, with a measured power versus wind speed curve which agrees closely with design. Blade loads were measured at wind speeds up to 14 m/s and also during rapid shutdowns. Peak transient loads during the most severe shutdowns are less than the design limit loads. On the inboard blade sections, fatigue loads are approximately equal to the design cyclic loads. On the outboard blade sections, however, measured cyclic loads are significantly larger than design values, but they do not appear to exceed fatigue allowable loads as yet.

  10. Kinematic analysis of motor performance in robot-assisted surgery: a preliminary study.

    PubMed

    Nisky, Ilana; Patil, Sangram; Hsieh, Michael H; Okamura, Allison M

    2013-01-01

    The inherent dynamics of the master manipulator of a teleoperated robot-assisted surgery (RAS) system can affect the movements of a human operator, in comparison with free-space movements. To measure the effects of these dynamics on operators with differing levels of surgical expertise, a da Vinci Si system was instrumented with a custom surgeon grip fixture and magnetic pose trackers. We compared users' performance of canonical motor control movements during teleoperation with the manipulator and freehand cursor control, and found significant differences in several aspects of motion, including target acquisition error, movement speed, and acceleration. In addition, there was preliminary evidence for differences between experts and novices. These findings could impact robot design, control, and training methods for RAS.

  11. PORFLOW MODELING FOR A PRELIMINARY ASSESSMENT OF THE PERFORMANCE OF NEW SALTSTONE DISPOSAL UNIT DESIGNS

    SciTech Connect

    Smith, F.

    2012-08-06

    At the request of Savannah River Remediation (SRR), SRNL has analyzed the expected performance obtained from using seven 32 million gallon Saltstone Disposal Units (SDUs) in the Z-Area Saltstone Disposal Facility (SDF) to store future saltstone grout. The analysis was based on preliminary SDU final design specifications. The analysis used PORFLOW modeling to calculate the release of 20 radionuclides from an SDU and transport of the radionuclides and daughters through the vadose zone. Results from this vadose zone analysis were combined with previously calculated releases from existing saltstone vaults and FDCs and a second PORFLOW model run to calculate aquifer transport to assessment points located along a boundary 100 m from the nearest edge of the SDF sources. Peak concentrations within 12 sectors spaced along the 100 m boundary were determined over a period of evaluation extending 20,000 years after SDF closure cap placement. These peak concentrations were provided to SRR to use as input for dose calculations.

  12. Preliminary Evaluation of MapReduce for High-Performance Climate Data Analysis

    NASA Technical Reports Server (NTRS)

    Duffy, Daniel Q.; Schnase, John L.; Thompson, John H.; Freeman, Shawn M.; Clune, Thomas L.

    2012-01-01

    MapReduce is an approach to high-performance analytics that may be useful to data intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. We are particularly interested in the potential of MapReduce to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we are prototyping a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. Our initial focus has been on averaging operations over arbitrary spatial and temporal extents within Modern Era Retrospective- Analysis for Research and Applications (MERRA) data. Preliminary results suggest this approach can improve efficiencies within data intensive analytic workflows.
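
    A toy sketch of the map/reduce averaging pattern described above; the record layout, spatio-temporal extents, and in-process reduce are illustrative stand-ins, not the project's MERRA tooling.

    ```python
    from functools import reduce

    # Toy records: (time_index, lat, lon, value) tuples from a gridded dataset.
    records = [(t, lat, lon, 280.0 + 0.1 * t) for t in range(4)
               for lat in (-30, 0, 30) for lon in (0, 90, 180)]

    def map_phase(rec, lat_range=(-30, 30), t_range=(0, 3)):
        """Emit (value, count) pairs for records inside the requested extent."""
        t, lat, lon, val = rec
        if lat_range[0] <= lat <= lat_range[1] and t_range[0] <= t <= t_range[1]:
            yield (val, 1)

    def reduce_phase(a, b):
        """Combine partial (sum, count) pairs."""
        return (a[0] + b[0], a[1] + b[1])

    partials = [pair for rec in records for pair in map_phase(rec)]
    total, count = reduce(reduce_phase, partials)
    print("spatio-temporal mean:", total / count)
    ```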

  13. Dynamic State Estimation Utilizing High Performance Computing Methods

    SciTech Connect

    Schneider, Kevin P.; Huang, Zhenyu; Yang, Bo; Hauer, Matthew L.; Nieplocha, Jaroslaw

    2009-03-18

    The state estimation tools currently deployed in power system control rooms are based on a quasi-steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper presents an overview of the Kalman Filtering process and then focuses on the implementation of the prediction component on multiple processors.
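
    A minimal sketch of the prediction step referred to above for a linear Kalman filter; the two-state system and matrices are placeholders, not the paper's power-system model.

    ```python
    import numpy as np

    def kalman_predict(x, P, F, Q):
        """One prediction step of a linear Kalman filter: propagate the state
        estimate x and its covariance P through the state-transition matrix F,
        adding process-noise covariance Q."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        return x_pred, P_pred

    # Placeholder two-state system (e.g., a generator's rotor angle and speed).
    F = np.array([[1.0, 0.01], [0.0, 1.0]])
    Q = 1e-4 * np.eye(2)
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_predict(x, P, F, Q)
    print(x, P)
    ```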

  14. Dark Matter Capture and Annihilation on the First Stars: Preliminary Estimates

    SciTech Connect

    Iocco, Fabio

    2008-05-02

    Assuming that Dark Matter is dominated by WIMPs, it accretes by gravitational attraction and scattering over baryonic material and annihilates inside celestial objects, giving rise to a 'Dark Luminosity' which may potentially affect the evolution of stars. We estimate the Dark Luminosity achieved by different kinds of stars in a halo with DM properties characteristic of the ones where the first star formation episode occurs. We find that both massive, metal-free stars and small, galactic-like stars can achieve Dark Luminosities comparable to or exceeding their nuclear luminosities. This might have dramatic effects on the evolution of the very first stars, known as Population III.

  15. Dark Matter Capture and Annihilation on the First Stars: Preliminary Estimates

    NASA Astrophysics Data System (ADS)

    Iocco, Fabio

    2008-04-01

    Assuming that dark matter is dominated by WIMPs, it accretes by gravitational attraction and scattering over baryonic material and annihilates inside celestial objects, giving rise to a "dark luminosity" which may potentially affect the evolution of stars. We estimate the dark luminosity achieved by different kinds of stars in a halo with DM properties characteristic of the ones where the first star formation episode occurs. We find that both massive, metal-free and small, galactic-like stars can achieve dark luminosities comparable to or exceeding those due to their nuclear burning. This might have dramatic effects over the evolution of the very first stars, known as Population III.

  16. Preliminary estimation of the reservoir capacity and the longevity of the Baca Geothermal Field, New Mexico

    SciTech Connect

    Bodvarsson, G.S.; Vonder Haar, S.; Wilt, M.; Tsang, C.F.

    1980-07-01

    A 50 MW geothermal power plant is currently under development at the Baca site in the Valles Caldera, New Mexico, as a joint venture of the Department of Energy (DOE), Union Oil Company of California, and the Public Service Company of New Mexico (PNM). To date, over 20 wells have been drilled on the prospect, and the data from these wells indicate the presence of a high-temperature, liquid-dominated reservoir. Data from the open literature on the field are used to estimate the amount of hot water in place (reservoir capacity) and the length of time the reservoir can supply steam for a 50 MW power plant (reservoir longevity). The reservoir capacity is estimated by volumetric calculations using existing geological, geophysical, and well data. The criteria used are described and the sensitivity of the results discussed. The longevity of the field is studied using a two-phase numerical simulator (SHAFT79). A number of cases are studied based upon different boundary conditions and injection and production criteria. Constant or variable mass production is employed in the simulations with closed, semi-infinite or infinite reservoir boundaries. In one of the cases, a fault zone feeding the production region is modeled. The injection strategy depends on the available waste water. The results of these simulations are discussed, and the sensitivity of the results with respect to mesh size and the relative permeability curves used is briefly studied.

  17. Preliminary Investigation of Over-all Performance of Experimental Turbojet Engine for Guided Missiles

    NASA Technical Reports Server (NTRS)

    Eustis, Robert H.; Berkey, William E.

    1947-01-01

    A preliminary investigation of the over-all performance of a simply constructed, short-life, turbojet engine was conducted. The unit was operated at a pressure altitude of 15,000 feet for ram-pressure ratios of 1.2 to 1.8. The corrected engine speed was varied from the minimum for good combustion to about 17,000 rpm, which is approximately 75 percent of rated speed. The performance is given by generalized parameters that permit the calculation of performance at any altitude. The corrected net thrust of the turbojet engine increased with ram-pressure ratio for a given corrected engine speed above 14,500 rpm and reached a maximum of 425 pounds at a ram-pressure ratio of 1.8 and a corrected engine speed of 16,650 rpm. The corrected thrust specific fuel consumption decreased with flight speed for corrected engine speeds higher than 13,600 rpm. The minimum corrected thrust specific fuel consumption of 1.48 was obtained at a ram-pressure ratio of 1.8 and a corrected engine speed of 15,000 rpm. For all ram-pressure ratios, choking occurred in the engine for corrected engine speeds greater than 14,500 rpm.

  18. New technologies and new performances of the JCMT radio-telescope: a preliminary design study

    NASA Astrophysics Data System (ADS)

    Mian, S.; De Lorenzi, S.; Ghedin, L.; Rampini, F.; Marchiori, G.; Craig, S.

    2012-09-01

    With a diameter of 15 m, the James Clerk Maxwell Telescope (JCMT) is the largest astronomical telescope in the world designed specifically to operate in the submillimeter wavelength region of the spectrum. It is situated close to the summit of Mauna Kea, Hawaii, at an altitude of 4092 m. Its primary reflector currently consists of a steel geodesic supporting structure and pressed aluminium panels on a passive mount. The major issues with the present reflector are its thermal stability and the deterioration of its panels. A preliminary design study for the replacement of the JCMT antenna dish is presented here. The requested shape error for the new reflector is <20 μm RMS. The proposed solution is based on a semi-monocoque backing structure made of CFRP and on high-precision electroformed panels. The choice of CFRP for the backing structure indeed allows the antenna performance to be improved in terms of both stiffness and thermal stability, so that the required surface accuracy of the primary can be achieved even with a passive panel system. Moreover, thanks to CFRP, a considerable weight reduction of the elevation structure can be attained. The performance of the proposed solution for the JCMT antenna has been investigated through FE analyses, and the assessed deformation of the structure under different loading cases has been taken into account for subsequent error budgeting. Results show that the proposed solution is in line with the requested performance. With this new backing structure, the JCMT would have the largest CFRP reflector ever built.

  19. Barrier analogs: Long-term performance issues, preliminary studies, and recommendations

    SciTech Connect

    Waugh, W.J.; Chatters, J.C.; Last, G.V.; Bjornstad, B.N.; Link, S.O.; Hunter, C.R.

    1994-02-01

    The US Department of Energy's Hanford Protective Barrier Development Program is funding studies of natural analogs of the long-term performance of waste site covers. Natural-analog studies examine past environments as evidence for projecting the future performance of engineered structures. The information generated by analog studies is needed to (1) evaluate the designs and results of short-term experiments and demonstrations, (2) formulate performance-modeling problems that bound expected changes in waste site environments, and (3) understand emergent system attributes that cannot be evaluated with short-term experiments or computer models. Waste site covers will be part of dynamic environmental systems with attributes that transcend the traits of engineered components. This report discusses results of the previously unreported preliminary studies conducted in 1983 and 1984. These results indicate that analogs could play an important role in predicting the long-term behavior of engineered waste covers. Layered exposures of glacial-flood-deposited gravels mantled with silt or sand that resemble contemporary barrier designs were examined. Bergmounds, another anomaly left by cataclysmic glacial floods, were also examined as analogs of surface gravel.

  20. Natural Phenomena Hazards Modeling Project: Preliminary flood hazards estimates for screening Department of Energy sites, Albuquerque Operations Office

    SciTech Connect

    McCann, M.W. Jr.; Boissonnade, A.C.

    1988-05-01

    As part of an ongoing program, Lawrence Livermore National Laboratory (LLNL) is directing the Natural Phenomena Hazards Modeling Project (NPHMP) on behalf of the Department of Energy (DOE). A major part of this effort is the development of probabilistic definitions of natural phenomena hazards: seismic, wind, and flood. In this report, the first phase of the evaluation of flood hazards at DOE sites is described. Unlike seismic and wind events, floods may not present a significant threat to the operations of all DOE sites. For example, at some sites physical circumstances may exist that effectively preclude the occurrence of flooding. As a result, consideration of flood hazards may not be required as part of the site design basis. In this case it is not necessary to perform a detailed flood hazard study at all DOE sites, such as those conducted for other natural phenomena hazards, seismic and wind. The scope of the preliminary flood hazard analysis is restricted to evaluating the flood hazards that may exist in proximity to a site. The analysis does involve an assessment of the potential encroachment of flooding on-site at individual facility locations. However, the preliminary flood hazard assessment does not consider localized flooding at a site due to precipitation (i.e., local run-off, storm sewer capacity, roof drainage). These issues are reserved for consideration by the DOE site manager. 11 refs., 84 figs., 61 tabs.

  1. The Greenville Fault: preliminary estimates of its long-term creep rate and seismic potential

    USGS Publications Warehouse

    Lienkaemper, James J.; Barry, Robert G.; Smith, Forrest E.; Mello, Joseph D.; McFarland, Forrest S.

    2013-01-01

    Although the Greenville fault (GF) was once assumed to be locked, we show that its northern third creeps at 2 mm/yr, based on 47 yr of trilateration net data. This northern GF creep rate equals its 11-ka slip rate, suggesting a low strain accumulation rate. In 1980, the GF, the easternmost strand of the San Andreas fault system east of San Francisco Bay, produced a Mw 5.8 earthquake with a 6-km surface rupture and dextral slip growing to ≥2 cm on cracks over a few weeks. Trilateration shows a 10-cm post-1980 transient slip ending in 1984. Analysis of 2000-2012 crustal velocities on continuous global positioning system stations allows creep rates of ~2 mm/yr on the northern GF, 0-1 mm/yr on the central GF, and ~0 mm/yr on its southern third. Modeled depth ranges of creep along the GF allow 5-25% aseismic release. Greater locking in the southern two thirds of the GF is consistent with paleoseismic evidence there for large late Holocene ruptures. Because the GF lacks large (>1 km) discontinuities likely to arrest higher (~1 m) slip ruptures, we expect full-length (54-km) ruptures to occur that include the northern creeping zone. We estimate sufficient strain accumulation on the entire GF to produce Mw 6.9 earthquakes with a mean recurrence of ~575 yr. While the creeping 16-km northern part has the potential to produce a Mw 6.2 event in 240 yr, it may rupture in both moderate (1980) and large events. These two-dimensional-model estimates of creep rate along the southern GF need verification with small aperture surveys.

  2. An Evaluation of Empirical Bayes' Estimation of Value- Added Teacher Performance Measures. Working Paper #31. Revised

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul; Wooldridge, Jeffrey M.

    2014-01-01

    Empirical Bayes' (EB) estimation is a widely used procedure to calculate teacher value-added. It is primarily viewed as a way to make imprecise estimates more reliable. In this paper we review the theory of EB estimation and use simulated data to study its ability to properly rank teachers. We compare the performance of EB estimators with that of…
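
    A minimal sketch of the empirical Bayes shrinkage idea discussed above: each teacher's raw value-added estimate is pulled toward the grand mean in proportion to its estimated reliability. The variance components and example values are illustrative assumptions, not the paper's simulation design.

```python
# Illustrative empirical Bayes shrinkage of noisy teacher effects.
import numpy as np

def eb_shrink(raw, sampling_var, prior_var):
    """Shrink raw estimates toward their mean by the reliability ratio."""
    grand_mean = np.mean(raw)
    reliability = prior_var / (prior_var + sampling_var)
    return grand_mean + reliability * (raw - grand_mean)

raw = np.array([0.30, -0.10, 0.05])           # raw value-added estimates
sampling_var = np.array([0.04, 0.01, 0.09])   # larger for smaller classes
print(eb_shrink(raw, sampling_var, prior_var=0.02))
```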

  3. Preliminary estimates of the quantity and quality of groundwater discharge to a section of Bear Creek in central Iowa

    SciTech Connect

    Caron, G.A.; Simpkins, W.W.; Schultz, R.C. )

    1994-04-01

    Studies in Iowa and elsewhere in the Midwest have suggested that most agrichemicals enter surface water through runoff events or tile drainage. Although it also contains agrichemicals, groundwater's contribution to surface water contamination is largely unknown, particularly in till-dominated watersheds. The purpose of this study was to estimate the quantity and quality of groundwater discharge to a 1,000-m-long section of Bear Creek in central Iowa. The study is part of a larger project that is evaluating constructed, multi-species riparian buffer strips as a Best Management Practice for agriculture. Groundwater discharge to the creek was estimated using: (1) differences in discharge between an upstream and downstream weir (minus tile drain outflow), (2) seepage meter data from the creek bed, and (3) Darcy's Law, using hydraulic gradient and K data from piezometers adjacent to the creek and minipiezometers in the creek. The authors' preliminary estimates show groundwater discharge rates of 2 to 15 L/s (weirs), 10 L/s (seepage meters), and 3 L/s (Darcy's Law). Discharge in Bear Creek ranged from 100 to 400 L/s during the period; thus, groundwater contributed only a small part of the total creek discharge. Water samples from minipiezometers beneath the creek bed are characterized by NO3-N concentrations < 3 mg/L and atrazine concentrations < 0.1 µg/L. In contrast, water samples from Bear Creek typically show NO3-N concentrations > 20 mg/L and atrazine concentrations up to 1.0 µg/L. These water quality data suggest that groundwater may not be a significant contributor to agrichemical contamination of surface water in this till-dominated watershed.
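
    A minimal sketch of the Darcy's Law discharge estimate mentioned above, Q = K * i * A; the hydraulic conductivity, gradient, and seepage-face area below are illustrative values chosen only to show the arithmetic, not measurements from the Bear Creek study.

```python
# Illustrative Darcy's Law groundwater discharge calculation.
def darcy_discharge(K, gradient, area):
    """Volumetric groundwater discharge in m^3/s for Q = K * i * A."""
    return K * gradient * area

K = 5e-5          # hydraulic conductivity, m/s (assumed)
gradient = 0.03   # dimensionless hydraulic gradient (assumed)
area = 2000.0     # effective seepage area along the reach, m^2 (assumed)

q_m3s = darcy_discharge(K, gradient, area)
print(f"{q_m3s * 1000:.1f} L/s")   # converts m^3/s to liters per second
```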

  4. Preliminary estimates of Gulf Stream characteristics from TOPEX data and a precise gravimetric geoid

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.; Smith, Dru A.

    1994-01-01

    TOPEX sea surface height data have been used, with a gravimetric geoid, to calculate sea surface topography across the Gulf Stream. This topography was initially computed for nine tracks on cycles 21 to 29. Due to inaccurate geoid undulations on one track, results for eight tracks are reported. The sea surface topography estimates were used to calculate parameters that describe Gulf Stream characteristics from two models of the Gulf Stream. One model was based on a Gaussian representation of the velocity while the other was a hyperbolic representation of velocity or the sea surface topography. The parameters of the Gaussian velocity model fit were a width parameter, a maximum velocity value, and the location of the maximum velocity. The parameters of the hyperbolic sea surface topography model were the width, the height jump, position, and sea surface topography at the center of the stream. Both models were used for the eight tracks and nine cycles studied. Comparisons were made between the width parameters, the maximum velocities, and the height jumps. Some of the parameter estimates were found to be highly (0.9) correlated when the hyperbolic sea surface topography fit was carried out, but such correlations were reduced for either the Gaussian velocity fits or the hyperbolic velocity model fit. A comparison of the parameters derived from 1 year of TOPEX data showed good agreement with values derived by Kelly (1991) using 2.5 years of Geosat data near 38 deg N, 66 deg W longitude. Accuracy of the geoid undulations used in the calculations was of the order of +/- 16 cm, with the accuracy of a geoid undulation difference equal to +/- 15 cm over a 100-km line in areas with good terrestrial data coverage. This paper demonstrates that our knowledge of geoid undulations and undulation differences, in a portion of the Gulf Stream region, is sufficiently accurate to determine characteristics of the jet when used with TOPEX altimeter data. The method used here has not been shown to
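
    A minimal sketch of the two jet-profile forms named above (a Gaussian velocity profile and a hyperbolic-tangent sea surface height step), with a least-squares fit of the height model to synthetic along-track data; all parameter values and the synthetic data are illustrative assumptions, not results from the paper.

```python
# Illustrative fit of a tanh sea-surface-height step across a synthetic jet.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_velocity(x, v_max, x0, w):
    """Gaussian jet velocity profile (shown for comparison, not fitted here)."""
    return v_max * np.exp(-((x - x0) ** 2) / (2 * w ** 2))

def tanh_height(x, h0, dh, x0, w):
    """Hyperbolic-tangent sea surface height step of jump dh and width w."""
    return h0 + 0.5 * dh * np.tanh((x - x0) / w)

x = np.linspace(-200, 200, 81)                        # along-track distance, km
h_obs = tanh_height(x, 0.0, 1.1, 10.0, 40.0) + np.random.normal(0, 0.05, x.size)

params, _ = curve_fit(tanh_height, x, h_obs, p0=[0.0, 1.0, 0.0, 50.0])
print("height jump (m):", params[1], "  width (km):", params[3])
```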

  5. Nuclear war: preliminary estimates of the climatic effects of a nuclear exchange

    SciTech Connect

    MacCracken, M.C.

    1983-10-01

    The smoke rising from burning cities, industrial areas, and forests if such areas are attacked as part of a major nuclear exchange is projected to increase the hemispheric average atmospheric burden of highly absorbent carbonaceous material by 100 to 1000 times. As the smoke spreads from these fires, it would prevent sunlight from reaching the surface, leading to a sharp cooling of land areas over a several day period. Within a few weeks, the thick smoke would spread so as to largely cover the mid-latitudes of the Northern Hemisphere, cooling mid-continental smoke-covered areas by, perhaps, a few tens of degrees Celsius. Cooling of near coastal areas would be substantially less, since oceanic heat capacity would help to buffer temperature changes in such regions. The calculations on which these findings are based contain many assumptions, shortcomings and uncertainties that affect many aspects of the estimated response. It seems, nonetheless, quite possible that if a nuclear exchange involves attacks on a very large number of cities and industrial areas, thereby starting fires that generate as much smoke as is suggested by recent studies, substantial cooling could be expected that would last weeks to months over most continental regions of the Northern Hemisphere, but which may have relatively little direct effect on the Southern Hemisphere.

  6. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  7. Bilingual performance on the boston naming test: preliminary norms in Spanish and English.

    PubMed

    Kohnert, K J; Hernandez, A E; Bates, E

    1998-12-01

    A total of 100 young educated bilingual adults were administered the Boston Naming Test (BNT) (Kaplan, Goodglass, & Weintraub, 1983) in both Spanish and English. Three group performance scores were obtained: English only, Spanish only, and a composite score indicating the total number of items correctly named independent of language. The scores for the entire group were significantly greater in English than in Spanish. An additional set of analyses explored individual differences in picture naming performance across the two languages as measured by the BNT. For a subset of the larger group (n = 25) there were significant differences in composite over single language scoring, but no significant differences between Spanish and English. Item analyses of correct responses were conducted in both languages to explore the construct validity of the standardized administration of the BNT with this population. There was much greater variability in responses over the Spanish items for this bilingual group. The results of a correlation analysis of information obtained from the initial questionnaire with the BNT scores in each language are also reported. The practical implications of this preliminary bilingual BNT normative data are discussed.

  8. Considerations for Estimating Electrode Performance in Li-Ion Cells

    NASA Technical Reports Server (NTRS)

    Bennett, William R.

    2012-01-01

    Advanced electrode materials with increased specific capacity and voltage performance are critical to the development of Li-ion batteries with increased specific energy and energy density. Although performance metrics for individual electrodes are critically important, a fundamental understanding of the interactions of electrodes in a full cell is essential to achieving the desired performance, and for establishing meaningful goals for electrode performance. This paper presents practical design considerations for matching positive and negative electrodes in a viable design. Methods for predicting cell-level discharge voltage, based on laboratory data for individual electrodes, are presented and discussed.
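
    A minimal sketch of the full-cell voltage prediction approach described above: the cell voltage at a given capacity is the positive-electrode potential minus the negative-electrode potential, with both half-cell curves expressed on a common normalized capacity axis. The half-cell curves below are made-up placeholders, not laboratory data.

```python
# Illustrative cell-voltage prediction from matched half-cell potential curves.
import numpy as np

cap = np.linspace(0.0, 1.0, 11)           # normalized discharge capacity
pos_potential = 4.2 - 0.6 * cap           # positive electrode vs. Li/Li+ (assumed)
neg_potential = 0.1 + 0.1 * cap           # negative electrode vs. Li/Li+ (assumed)

cell_voltage = pos_potential - neg_potential   # predicted full-cell voltage
print(np.round(cell_voltage, 3))
```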

  9. Speech and Pause Characteristics in Multiple Sclerosis: A Preliminary Study of Speakers with High and Low Neuropsychological Test Performance

    ERIC Educational Resources Information Center

    Feenaughty, Lynda; Tjaden, Kris; Benedict, Ralph H. B.; Weinstock-Guttman, Bianca

    2013-01-01

    This preliminary study investigated how cognitive-linguistic status in multiple sclerosis (MS) is reflected in two speech tasks (i.e. oral reading, narrative) that differ in cognitive-linguistic demand. Twenty individuals with MS were selected to comprise High and Low performance groups based on clinical tests of executive function and information…

  10. Preliminary Review of Models, Assumptions, and Key Data used in Performance Assessments and Composite Analysis at the Idaho National Laboratory

    SciTech Connect

    Arthur S. Rood; Swen O. Magnuson

    2009-07-01

    This document is in response to a request by Ming Zhu, DOE-EM to provide a preliminary review of existing models and data used in completed or soon to be completed Performance Assessments and Composite Analyses (PA/CA) documents, to identify codes, methodologies, main assumptions, and key data sets used.

  11. Performance adaptive training control strategy for recovering wrist movements in stroke patients: a preliminary, feasibility study

    PubMed Central

    2009-01-01

    Background: In the last two decades, robot training in neuromotor rehabilitation was mainly focused on shoulder-elbow movements. Few devices were designed and clinically tested for training coordinated movements of the wrist, which are crucial for achieving even the basic level of motor competence that is necessary for carrying out ADLs (activities of daily life). Moreover, most systems of robot therapy use point-to-point reaching movements which tend to emphasize the pathological tendency of stroke patients to break down goal-directed movements into a number of jerky sub-movements. For this reason we designed a wrist robot with a range of motion comparable to that of normal subjects and implemented a self-adapting training protocol for tracking smoothly moving targets in order to facilitate the emergence of smoothness in the motor control patterns and maximize the recovery of the normal RoM (range of motion) of the different DoFs (degrees of freedom). Methods: The IIT-wrist robot is a 3 DoFs light exoskeleton device, with direct-drive of each DoF and a human-like range of motion for Flexion/Extension (FE), Abduction/Adduction (AA) and Pronation/Supination (PS). Subjects were asked to track a variable-frequency oscillating target using only one wrist DoF at a time, in such a way as to carry out a progressive splinting therapy. The RoM of each DoF was angularly scanned in a staircase-like fashion, from the "easier" to the "more difficult" angular position. An Adaptive Controller evaluated online performance parameters and modulated both the assistance and the difficulty of the task in order to facilitate smoother and more precise motor command patterns. Results: Three stroke subjects volunteered to participate in a preliminary test session aimed at verifying the acceptability of the device and the feasibility of the designed protocol. All of them were able to perform the required task. The active wrist RoM was evaluated for each patient at the beginning and at the end

  12. Experimental methodologies and preliminary transfer factor data for estimation of dermal exposures to particles.

    PubMed

    Rodes, C E; Newsome, J R; Vanderpool, R W; Antley, J T; Lewis, R G

    2001-01-01

    Developmental efforts and experimental data that focused on quantifying the transfer of particles on a mass basis from indoor surfaces to human skin are described. Methods that utilized a common fluorescein-tagged Arizona Test Dust (ATD) as a possible surrogate for housedust and a uniform surface dust deposition chamber to permit estimation of particle mass transfer for selected dust size fractions were developed. Particle transfers to both wet and dry skin were quantified for contact events with stainless steel, vinyl, and carpeted surfaces that had been pre-loaded with the tagged test dust. To better understand the representativeness of the test dust, a large housedust sample was collected and analyzed for particle size distribution by mass and several metals (Pb, Mn, Cd, Cr, and Ni). The real housedust sample was found to have multimodal size distributions (mg/g) for particle-phase metals. The fluorescein tagging provided surface coatings of 0.11-0.36 ng fluorescein per gram of dust. The predominant surface location of the fluorescein tag would best represent simulated mass transfers for contaminant species coating the surfaces of the particles. The computer-controlled surface deposition chamber provided acceptably uniform surface coatings with known particle loadings on the contact test panels. Significant findings for the dermal transfer factor data were: (a) only about 1/3 of the projected hand surface typically came in contact with the smooth test surfaces during a press; (b) the fraction of particles transferred to the skin decreased as the surface roughness increased, with carpeting transfer coefficients averaging only 1/10 those of stainless steel; (c) hand dampness significantly increased the particle mass transfer; (d) consecutive presses decreased the particle transfer by a factor of 3 as the skin surface became loaded, requiring approximately 100 presses to reach an equilibrium transfer rate; and (e) an increase in metals concentration with decreasing

  13. The design and performance estimates for the propulsion module for the booster of a TSTO vehicle

    NASA Astrophysics Data System (ADS)

    Snyder, Christopher A.; Maldonado, Jaime J.

    1991-09-01

    A NASA study of propulsion systems for possible low-risk replacements for the Space Shuttle is presented. Results of preliminary studies to define the USAF two-stage-to-orbit (TSTO) concept to deliver 10,000 pounds to low polar orbit are described. The booster engine module consists of an over/under turbine bypass engines/ramjet engine design for acceleration from takeoff to the staging point of Mach 6.5 and approximately 100,000 feet altitude. Propulsion system performance and weight are presented with preliminary mission study results of vehicle size.

  14. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
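
    A minimal sketch of the exhaustive-search sensor selection idea described above: for a linear measurement model y = H x + v with a Gaussian prior on the health parameters x, each candidate sensor suite is scored by the trace of the maximum a posteriori error covariance, and the suite with the smallest theoretical summed squared estimation error is kept. The matrices below are random placeholders, not the paper's engine model.

```python
# Illustrative exhaustive sensor-suite search minimizing MAP estimation error.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_health, n_sensors = 4, 6
H_full = rng.normal(size=(n_sensors, n_health))   # full candidate sensor set
P0 = np.eye(n_health) * 0.5                        # prior covariance of health params
R_diag = np.full(n_sensors, 0.01)                  # per-sensor noise variances

def map_error_trace(rows):
    """Trace of the posterior (MAP) error covariance for a sensor subset."""
    H = H_full[list(rows)]
    R_inv = np.diag(1.0 / R_diag[list(rows)])
    P_post = np.linalg.inv(np.linalg.inv(P0) + H.T @ R_inv @ H)
    return np.trace(P_post)

best = min(itertools.combinations(range(n_sensors), 4), key=map_error_trace)
print("best 4-sensor suite:", best, " error trace:", map_error_trace(best))
```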

  15. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  16. A Monolithic Microconcentrator Receiver For A Hybrid PV-Thermal System: Preliminary Performance

    NASA Astrophysics Data System (ADS)

    Walter, D.; Everett, V.; Vivar, M.; Harvey, J.; Van Scheppingen, R.; Surve, S.; Muric-Nesic, J.; Blakers, A.

    2010-10-01

    An innovative hybrid PV-thermal microconcentrator (MCT) system is being jointly developed by Chromasun Inc., San Jose, California, and at the Centre for Sustainable Energy Systems, Australian National University. The MCT aims to develop the small-scale, roof-top market for grid-integrated linear CPV systems. A low profile, small footprint enclosure isolates system components from the environment, relaxing the demands on supporting structures, tracking, and maintenance. Net costs to the consumer are reduced via an active cooling arrangement that provides thermal energy suitable for water and space heating, ventilation, and air conditioning (HVAC) applications. As part of a simplified, low-cost design, an integrated substrate technology provides electrical interconnection, heat sinking, and mechanical support for the concentrator cells. An existing, high-efficiency, one-sun solar cell technology has been modified for this system. This paper presents an overview of the key design features, and preliminary electrical performance of the MCT. Module efficiencies of up to 19.6% at 20x concentration have been demonstrated.

  17. Performance of mean-frequency estimators for Doppler radar and lidar

    NASA Technical Reports Server (NTRS)

    Frehlich, R. G.; Yadlowsky, M. J.

    1994-01-01

    The performance of mean-frequency estimators for Doppler radar and lidar measurements of winds is presented in terms of two basic parameters: Phi, the ratio of the average signal energy per estimate to the spectral noise level; and Omega, which is proportional to the number of independent samples per estimate. For fixed Phi and Omega, the Cramer-Rao bound (CRB) (theoretical best performance) for unbiased estimators of mean frequency (normalized by the spectral width of the signal), signal power, and spectral width is essentially independent of the number of data samples M. For large Phi, the estimators of mean frequency are unbiased and the performance is independent of M. The spectral domain estimators and covariance-based estimators are bounded by the approximate periodogram CRB. The standard deviation of the maximum-likelihood estimator approaches the exact CRB, which can be more than a factor of 2 better than the performance of the spectral domain estimators or covariance-based estimators for typical Omega. For small Phi, the estimators are biased due to the effects of the uncorrelated noise (white noise), which results in uniformly distributed 'bad' estimates. The fraction of bad estimates is a function of Phi and M with weak dependence on the parameter Omega. Simple empirical models describe the standard deviation of the good estimates and the fraction of bad estimates. For Doppler lidar and for large Phi, better performance is obtained by using many low-energy pulses instead of one pulse with the same total energy. For small Phi, the converse is true.
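
    A minimal sketch of one covariance-based ("pulse-pair") mean-frequency estimator of the kind compared in the abstract: the mean Doppler frequency is taken from the phase of the lag-one autocovariance of the complex signal. The simulated signal and noise levels are illustrative assumptions.

```python
# Illustrative pulse-pair (lag-one covariance) mean-frequency estimator.
import numpy as np

def pulse_pair_frequency(z, dt):
    """Estimate mean frequency (Hz) from complex samples z spaced dt apart."""
    r1 = np.mean(np.conj(z[:-1]) * z[1:])        # lag-one autocovariance
    return np.angle(r1) / (2 * np.pi * dt)

dt, f_true = 1e-3, 120.0
n = np.arange(256)
z = np.exp(2j * np.pi * f_true * n * dt) + 0.3 * (
    np.random.randn(n.size) + 1j * np.random.randn(n.size))
print(pulse_pair_frequency(z, dt))   # should be close to 120 Hz
```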

  18. Effects of Emotionally Charged Auditory Stimulation on Gait Performance in the Elderly: A Preliminary Study

    PubMed Central

    Rizzo, John-Ross; Raghavan, Preeti; McCrery, J.R.; Oh-Park, Mooyeon; Verghese, Joe

    2015-01-01

    Objectives: To evaluate the effect of a novel divided attention task (walking under auditory constraints) on gait performance in older adults and to determine whether this effect was moderated by cognitive status. Design: Validation cohort. Setting: General community. Participants: Ambulatory older adults without dementia (N=104). Interventions: Not applicable. Main Outcome Measures: In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation with happy or sad sounds. Results: The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a 2-factorial analysis of variance with a 1-between factor (cognitively intact and minimal cognitive impairment groups) and a 1-within factor (type of auditory stimuli). In both happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68cm/s in gait velocity (F1.86,191.71=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F1.75,180.42=10.12; P<.001) as compared with trials without auditory stimulation. In contrast, older adults with minimal cognitive impairment (Blessed test score, 5–10; n=8) showed an average reduction of 5.45cm/s in gait velocity (F1.87,190.83=5.62; P=.005) and an average reduction of 3.88 steps/min in cadence (F1.79,183.10=8.21; P=.001) under both auditory stimulation conditions. Neither baseline fall history nor performance of activities of daily living accounted for these differences. Conclusions: Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait

  19. Performance vs. Paper-And-Pencil Estimates of Cognitive Abilities.

    ERIC Educational Resources Information Center

    Arima, James K.

    Arima's Discrimination Learning Test (DLT) was reconfigured, made into a self-paced mode, and administered to potential recruits in order to determine if: (1) a previous study indicating a lack of difference in learning performance between white and nonwhites would hold up; and (2) the correlations between scores attained on the DLT and scores…

  20. Evaluating Value-Added Methods of Estimating of Teacher Performance

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.

    2011-01-01

    Accurate indicators of educational effectiveness are needed to advance national policy goals of raising student achievement and closing social/cultural based achievement gaps. If constructed and used appropriately, such indicators for both program evaluation and the evaluation of teacher and school performance could have a transformative effect on…

  1. Estimated and observed performance of a neutron SNM portal monitor for vehicles

    SciTech Connect

    Fehlau, P.E.; Close, D.A.; Coop, K.L.; York, R.

    1996-11-01

    In July 1987, we completed our development of a neutron-detection-based vehicle SNM portal monitor with a conference paper presented at the annual meeting. The paper described the neutron vehicle portal (NVP), described source-response measurements made with it at Los Alamos, and gave our estimate of the monitor's potential performance. Later, in December 1988, we had a chance to do a performance test with the monitor in a plant environment. This paper discusses how our original performance estimate should vary in different circumstances, and it uses the information to make a comparison between the monitor's estimated and actual performance during the 1988 performance testing.

  2. Performance Assessment of a Gnss-Based Troposphere Path Delay Estimation Software

    NASA Astrophysics Data System (ADS)

    Mariotti, Gilles; Avanzi, Alessandro; Graziani, Alberto; Tortora, Paolo

    2013-04-01

    Error budgets of Deep Space Radio Science experiments are heavily affected by interplanetary and Earth transmission media, which, due to their non-unitary refraction index, corrupt the radiometric information of signals coming from the spacecraft. An effective removal of these noise sources is crucial to achieve the accuracy and signal stability levels required by radio science applications. Depending on the nature of the refraction, transmission media are divided into dispersive media (consisting of ionized particles, i.e. the solar wind and the ionosphere) and non-dispersive media (where the refraction is caused by neutral particles: the Earth's troposphere). While dispersive noises are successfully removed by multifrequency combinations (as for GPS with the well-known ionofree combination), the most accurate estimation of tropospheric noise is obtained using microwave radiometers (MWR). As the use of MWRs suffers from strong operational limitations (rain and heavy cloud conditions), GNSS-based processing is still widely adopted to provide a cost-effective, all-weather estimation of the troposphere path delay. This work describes the development process and reports the results of a GNSS analysis code specifically aimed at the estimation of the path delays introduced by the troposphere above deep space complexes, to be used for the calibration of Range and Doppler radiometric data. The code has been developed by the Radio Science Laboratory of the University of Bologna in Forlì and is currently in the testing phase. To this aim, the preliminary output is compared to MWR measurements and IGS TropoSINEX products in order to assess the reliability of the estimate. The software works using ionofree carrier-phase observables and is based upon a double-difference approach, in which the GNSS receiver placed nearby the Deep Space receiver acts as the rover station. Several baselines are then created with various IGS and EUREF stations (master or reference stations) in order to

  3. Preliminary performance assessment of biotoxin detection for UWS applications using a MicroChemLab device.

    SciTech Connect

    VanderNoot, Victoria A.; Haroldsen, Brent L.; Renzi, Ronald F.; Shokair, Isaac R.

    2010-03-01

    In a multiyear research agreement with Tenix Investments Pty. Ltd., Sandia has been developing field-deployable technologies for detection of biotoxins in water supply systems. The unattended water sensor or UWS employs microfluidic chip-based gel electrophoresis for monitoring biological analytes in a small integrated sensor platform. This instrument collects, prepares, and analyzes water samples in an automated manner. Sample analysis is done using the µChemLab™ analysis module. This report uses analysis results of two datasets collected using the UWS to estimate performance of the device. The first dataset is made up of samples containing ricin at varying concentrations and is used for assessing instrument response and detection probability. The second dataset is comprised of analyses of water samples collected at a water utility, which are used to assess the false positive probability. The analyses of the two sets are used to estimate the Receiver Operating Characteristic or ROC curves for the device at one set of operational and detection algorithm parameters. For these parameters, and based on a statistical estimate, the ricin probability of detection is about 0.9 at a concentration of 5 nM for a false positive probability of 1 x 10^-6.
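
    A minimal sketch of how a ROC point like the one quoted above can be estimated from scored analyses: a detection threshold is swept across the scores, and the probabilities of detection and false alarm are the fractions of spiked and blank samples exceeding it. The score distributions below are synthetic placeholders, not µChemLab data.

```python
# Illustrative empirical ROC curve from synthetic detector scores.
import numpy as np

def roc_curve(pos_scores, neg_scores, thresholds):
    """Return (P_false_alarm, P_detection) arrays for each threshold."""
    pd = np.array([(pos_scores >= t).mean() for t in thresholds])
    pfa = np.array([(neg_scores >= t).mean() for t in thresholds])
    return pfa, pd

rng = np.random.default_rng(1)
pos = rng.normal(5.0, 1.0, 200)     # scores for toxin-spiked samples (assumed)
neg = rng.normal(1.0, 1.0, 2000)    # scores for clean utility water (assumed)

pfa, pd = roc_curve(pos, neg, thresholds=np.linspace(0.0, 8.0, 33))
print(list(zip(np.round(pfa, 4), np.round(pd, 3)))[:5])
```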

  4. [Congenital transmission of Trypanosoma cruzi in Brazil: estimation of prevalence based on preliminary data of national serological surveys in children under 5 years old and other sources].

    PubMed

    Luquetti, Alejandro O; Ferreira, António Walter; Oliveira, Rosângela A; Tavares, Suelene B N; Rassi, Anis; Dias, João Carlos P; Prata, Aluizio

    2005-01-01

    A prevalence estimate of congenital transmission of Trypanosoma cruzi in Brazil is presented, based on several sources of recent data. In a serological survey currently being conducted in Brazil among children under 5 years old, preliminary data from the state of Minas Gerais show that only 19 of 9,556 children had antibodies against Trypanosoma cruzi. All 19 mothers were infected, but only one child still had antibodies at a second blood collection and was therefore diagnosed as congenitally infected; the other cases reflected only passive transfer of maternal antibodies. In a recent publication, 278 children born to 145 infected mothers were studied, and two cases (0.7%) were congenital. In another source, of 1,348 blood donors, 35 were born in non-endemic areas; when 10 of them were recalled, 8 had been born to infected mothers and five may represent congenital infection. Finally, no infection was detected in 93 children born to 78 infected mothers. The reasons why this prevalence is lower than in other Southern Cone countries, which also harbor T. cruzi 2, are discussed; such congenital cases have gone unrecognized up to now.

  5. Bubble fusion: Preliminary estimates

    SciTech Connect

    Krakowski, R.A.

    1995-02-01

    The collapse of a gas-filled bubble in disequilibrium (i.e., internal pressure ≪ external pressure) can occur with a significant focusing of energy onto the entrapped gas in the form of pressure-volume work and/or acoustical shocks; the resulting heating can be sufficient to cause ionization and the emission of atomic radiations. The suggestion that the extreme conditions necessary for thermonuclear fusion may be possible has been examined parametrically in terms of the ratio of initial bubble pressure relative to that required for equilibrium. In this sense, the disequilibrium bubble is viewed as a three-dimensional "sling shot" that is "loaded" to an extent allowed by the maximum level of disequilibrium that can stably be achieved. Values of this disequilibrium ratio in the range 10^-5 to 10^-6 are predicted by an idealized bubble-dynamics model as necessary to achieve conditions where nuclear fusion of deuterium-tritium might be observed. Harmonic and anharmonic pressurizations/decompressions are examined as means to achieve the levels of disequilibrium required to create fusion conditions. A number of phenomena not included in the analysis reported herein could enhance or reduce the small levels of nuclear fusions predicted.

  6. A Graphical Method for Estimating Ion-Rocket Performance

    NASA Technical Reports Server (NTRS)

    Reynolds, Thaine W.; Childs, J. Howard

    1960-01-01

    Equations relating the critical temperature and ion current density for surface ionization of cesium on tungsten are derived for the cases of zero and finite electric fields at the ion-emitting surface. These equations are used to obtain a series of graphs that can be used to solve many problems relating to ion-rocket theoretical performance. The effect of operation at less than space-charge-limited current density and the effect of nonuniform propellant flux onto the ion-emitting surface are also treated.

  7. Computational Estimation Performance on Whole-Number Multiplication by Third- and Fifth-Grade Chinese Students

    ERIC Educational Resources Information Center

    Liu, Fuchang

    2009-01-01

    Four hundred and three 3rd- and 5th-grade Chinese students took the Multiplication Estimation Test or participated in the interview on it, designed to assess their computational estimation performance on whole-number multiplication. Students perform better when tasks are presented visually than orally. Third graders tend to use rounding based…

  8. Performance of different detrending methods in turbulent flux estimation

    NASA Astrophysics Data System (ADS)

    Donateo, Antonio; Cava, Daniela; Contini, Daniele

    2015-04-01

    Eddy covariance is the most direct, efficient and reliable method to measure the turbulent flux of a scalar (Baldocchi, 2003). Required conditions for high-quality eddy covariance measurements are, among others, stationarity of the measured data and a fully developed turbulence. The simplest method for obtaining the fluctuating components for covariance calculation according to Reynolds averaging rules under ideal stationary conditions is the so-called mean removal method. However, steady-state conditions rarely exist in the atmosphere, because of the diurnal cycle, changes in meteorological conditions, or sensor drift. All these phenomena produce trends or low-frequency changes superimposed on the turbulent signal. Different methods for trend removal have been proposed in the literature; however, a general agreement on how to separate low-frequency perturbations from turbulence has not yet been reached. The most commonly applied methods are linear detrending (Gash and Culf, 1996) and the high-pass filter, namely the moving average (Moncrieff et al., 2004). Moreover, Vickers and Mahrt (2003) proposed a multi-resolution decomposition method in order to select an appropriate time scale for mean removal as a function of atmospheric stability conditions. The present work investigates the performance of these different detrending methods in removing the low-frequency contribution to the turbulent flux calculation, including also a spectral filter based on a Fourier decomposition of the time series. The different methods have been applied to the calculation of the turbulent fluxes for different scalars (temperature, ultrafine particle number concentration, carbon dioxide and water vapour concentration). A comparison of the detrending methods is also performed for different measurement sites, namely an urban site, a suburban area, and a remote area in Antarctica. Moreover, the performance of the moving average in detrending time series has been analyzed as a function of the
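
    A minimal sketch of three of the detrending options discussed above (mean removal, linear detrending, and a moving-average high-pass filter) applied before an eddy-covariance flux calculation; the synthetic vertical wind and scalar series, and the 5-minute averaging window, are illustrative assumptions rather than data or settings from the study.

```python
# Illustrative comparison of detrending methods before a <w'c'> flux estimate.
import numpy as np

def mean_removal(x):
    return x - x.mean()

def linear_detrend(x):
    t = np.arange(x.size)
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def moving_average_detrend(x, window):
    kernel = np.ones(window) / window
    return x - np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
n = 18000                                        # 30 min of 10 Hz data
w = rng.normal(0.0, 0.3, n)                      # vertical wind fluctuations
trend = np.linspace(0.0, 2.0, n)                 # slow drift in the scalar
c = trend + 0.8 * w + rng.normal(0.0, 0.2, n)    # scalar concentration signal

for name, detrend in [("mean removal", mean_removal),
                      ("linear detrend", linear_detrend),
                      ("moving average", lambda x: moving_average_detrend(x, 3000))]:
    flux = np.mean(mean_removal(w) * detrend(c))  # covariance <w'c'>
    print(f"{name:15s} flux = {flux:+.4f}")
```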

  9. LOD 1 VS. LOD 2 - Preliminary Investigations Into Differences in Mobile Rendering Performance

    NASA Astrophysics Data System (ADS)

    Ellul, C.; Altenbuchner, J.

    2013-09-01

    The increasing availability, size and detail of 3D City Model datasets has led to a challenge when rendering such data on mobile devices. Understanding the limitations to the usability of such models on these devices is particularly important given the broadening range of applications - such as pollution or noise modelling, tourism, planning, solar potential - for which these datasets and resulting visualisations can be utilized. Much 3D City Model data is created by extrusion of 2D topographic datasets, resulting in what is known as Level of Detail (LoD) 1 buildings - with flat roofs. However, in the UK the National Mapping Agency (the Ordnance Survey, OS) is now releasing test datasets to Level of Detail (LoD) 2 - i.e. including roof structures. These datasets are designed to integrate with the LoD 1 datasets provided by the OS, and provide additional detail in particular on larger buildings and in town centres. The availability of such integrated datasets at two different Levels of Detail permits investigation into the impact of the additional roof structures (and hence the display of a more realistic 3D City Model) on rendering performance on a mobile device. This paper describes preliminary work carried out to investigate this issue, for the test area of the city of Sheffield (in the UK Midlands). The data is stored in a 3D spatial database as triangles and then extracted and served as a web-based data stream which is queried by an App developed on the mobile device (using the Android environment, Java and OpenGL for graphics). Initial tests have been carried out on two dataset sizes, for the city centre and a larger area, rendering the data onto a tablet to compare results. Results of 52 seconds for rendering LoD 1 data, and 72 seconds for LoD 1 mixed with LoD 2 data, show that the impact of LoD 2 is significant.

  10. Rainfall estimation with a commercial tool for satellite internet in Ka band: concept and preliminary data analysis

    NASA Astrophysics Data System (ADS)

    Mugnai, Clio; Cuccoli, Fabrizio; Sermi, Francesco

    2014-10-01

    This work presents a real-time method for rainfall estimation based on attenuation data acquired via a Ka-band satellite link and discusses some results of its application. The data to be processed are recorded with a commercial satellite internet kit supplied by a European provider and operating above the urban area of Florence (Italy). Since the system automatically performs a continuous adjustment of the transmitted power as a function of the intensity of the received signal, this information is exploited to estimate the intensity of the precipitation within the area. The model adopted for the attenuation of a microwave link due to hydrometeors is the one suggested by Olsen and Hodge and recommended by the ITU. The results are interpreted together with rain-rate measurements recorded by three rain gauges located within the area.
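
    A minimal sketch of the attenuation-to-rain-rate inversion implied by the power-law model referenced above, A = k * R**alpha * L; the k and alpha coefficients and the path length used below are illustrative Ka-band-like values, not those adopted in the study.

```python
# Illustrative inversion of the power-law rain attenuation model.
def rain_rate_from_attenuation(excess_att_db, k, alpha, path_km):
    """Invert A = k * R**alpha * L to obtain the rain rate R in mm/h."""
    specific_att = excess_att_db / path_km          # dB/km along the rainy path
    return (specific_att / k) ** (1.0 / alpha)

# Example: 6 dB of rain-induced excess attenuation over an assumed 4 km path.
print(rain_rate_from_attenuation(6.0, k=0.075, alpha=1.1, path_km=4.0))
```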

  11. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter

  12. The Spacecraft Materials Selector: An Artificial Intelligence System for Preliminary Design Trade Studies, Materials Assessments, and Estimates of Environments Present

    NASA Technical Reports Server (NTRS)

    Pippin, H. G.; Woll, S. L. B.

    2000-01-01

    Institutions need ways to retain valuable information even as experienced individuals leave an organization. Modern electronic systems have enough capacity to retain large quantities of information that can mitigate the loss of experience. Performance information for long-term space applications is relatively scarce and specific information (typically held by a few individuals within a single project) is often rather narrowly distributed. Spacecraft operate under severe conditions and the consequences of hardware and/or system failures, in terms of cost, loss of information, and time required to replace the loss, are extreme. These risk factors place a premium on appropriate choice of materials and components for space applications. An expert system is a very cost-effective method for sharing valuable and scarce information about spacecraft performance. Boeing has an artificial intelligence software package, called the Boeing Expert System Tool (BEST), to construct and operate knowledge bases to selectively recall and distribute information about specific subjects. A specific knowledge base to evaluate the on-orbit performance of selected materials on spacecraft has been developed under contract to the NASA SEE program. The performance capabilities of the Spacecraft Materials Selector (SMS) knowledge base are described. The knowledge base is a backward-chaining, rule-based system. The user answers a sequence of questions, and the expert system provides estimates of optical and mechanical performance of selected materials under specific environmental conditions. The initial operating capability of the system will include data for Kapton, silverized Teflon, selected paints, silicone-based materials, and certain metals. For situations where a mission profile (launch date, orbital parameters, mission duration, spacecraft orientation) is not precisely defined, the knowledge base still attempts to provide qualitative observations about materials performance and likely

  13. West Village Community: Quality Management Processes and Preliminary Heat Pump Water Heater Performance

    SciTech Connect

    Dakin, B.; Backman, C.; Hoeschele, M.; German, A.

    2012-11-01

    West Village, a multi-use project underway at the University of California Davis, represents a ground-breaking sustainable community incorporating energy efficiency measures and on-site renewable generation to achieve community-level Zero Net Energy (ZNE) goals. The project, when complete, will provide housing for students, faculty, and staff with a vision to minimize the community's impact on energy use by reducing building energy use, providing on-site generation, and encouraging alternative forms of transportation. The focus of this research is on the 192 student apartments that were completed in 2011 under Phase I of the West Village multi-year project. The numerous aggressive energy efficiency measures implemented result in estimated source energy savings of 37% over the B10 Benchmark. There are two primary objectives of this research. The first is to evaluate performance and efficiency of the central heat pump water heaters as a strategy to provide efficient electric water heating for net-zero all-electric buildings and where natural gas is not available on site. In addition, effectiveness of the quality assurance and quality control processes implemented to ensure proper system commissioning and to meet program participation requirements is evaluated. Recommendations for improvements that could facilitate successful implementation for large-scale, high performance communities are identified.

  14. West Village Community. Quality Management Processes and Preliminary Heat Pump Water Heater Performance

    SciTech Connect

    Dakin, B.; Backman, C.; Hoeschele, M.; German, A.

    2012-11-01

    West Village, a multi-use project underway at the University of California Davis, represents a ground-breaking sustainable community incorporating energy efficiency measures and on-site renewable generation to achieve community-level Zero Net Energy (ZNE) goals. When complete, the project will provide housing for students, faculty, and staff with a vision to minimize the community’s impact on energy use by reducing building energy use, providing on-site generation, and encouraging alternative forms of transportation. The focus of this research is on the 192 student apartments that were completed in 2011 under Phase I of the West Village multi-year project. The numerous aggressive energy efficiency measures implemented result in estimated source energy savings of 37% over the B10 Benchmark. There are two primary objectives of this research. The first is to evaluate performance and efficiency of the central heat pump water heaters as a strategy to provide efficient electric water heating for net-zero all-electric buildings and for sites where natural gas is not available. In addition, effectiveness of the quality assurance and quality control processes implemented to ensure proper system commissioning and to meet program participation requirements is evaluated. Recommendations that could improve successful implementation for large-scale, high-performance communities are identified.

  15. Computer code for estimating installed performance of aircraft gas turbine engines. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Kowalski, E. J.

    1979-01-01

    A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. A user oriented description of the program input requirements, program output, deck setup, and operating instructions is presented.

  16. Preliminary Investigation of a Translating Cowl Technique for Improving Take-off Performance of a Sharp-lip Supersonic Diffuser

    NASA Technical Reports Server (NTRS)

    Cortright, Edgar M., Jr.

    1951-01-01

    A preliminary investigation was conducted in quiescent air on a translating cowl technique for improving the take-off performance of a sharp-lip supersonic diffuser. The technique consists of cutting the cowling in a plane normal to its axis and then translating the forepart of the cowling in the forward direction. The leading edge of the fixed portion of the cowling is rounded. Appreciable improved inlet performance was obtained with a cowling translation corresponding to a gap of only 1/4 inlet radius.

  17. Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2011-01-01

    An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation.
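
    As a toy illustration of the underdetermined-estimation issue described above, the sketch below picks the subset of health parameters (the "tuners") whose least-squares estimate gives the smallest mean-squared error over simulated degradations. It is a linear, steady-state stand-in under assumed sensitivities, noise levels, and an exhaustive search, not the Kalman-filter machinery or the actual search routine of the paper.

      # Hedged sketch: choose a tuner subset that minimizes estimation MSE when
      # there are fewer sensors than health parameters. All matrices and noise
      # levels below are illustrative assumptions.
      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      n_health, n_sensors, n_trials = 6, 4, 2000
      H = rng.normal(size=(n_sensors, n_health))   # assumed sensor sensitivity matrix
      sigma_noise = 0.05

      def subset_mse(idx):
          """MSE of estimating the full health vector via a tuner subset `idx`."""
          Hs = H[:, idx]
          err = 0.0
          for _ in range(n_trials):
              x = rng.normal(size=n_health)                         # true degradation
              y = H @ x + sigma_noise * rng.normal(size=n_sensors)  # sensed outputs
              q = np.linalg.lstsq(Hs, y, rcond=None)[0]             # tuner estimate
              x_hat = np.zeros(n_health)
              x_hat[list(idx)] = q                                  # map tuners back
              err += np.mean((x_hat - x) ** 2)
          return err / n_trials

      best = min(itertools.combinations(range(n_health), n_sensors), key=subset_mse)
      print("selected tuner subset:", best, "MSE:", round(subset_mse(best), 4))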

  18. NPP Clouds and the Earth's Radiant Energy System (CERES) Predicted Sensor Performance Calibration and Preliminary Data Product Performance

    NASA Technical Reports Server (NTRS)

    Priestly, Kory; Smith, George L.; Thomas, Susan; Maddock, Suzanne L.

    2009-01-01

    characterization program benefited from the 30-year operational experience of the CERES EOS sensors, as well as a stronger emphasis of radiometric characterization in the Statement of Work with the sensor provider. Improvements to the pre-flight program included increased spectral, spatial, and temporal sampling under vacuum conditions as well as additional tests to characterize the primary and transfer standards in the calibration facility. Future work will include collaboration with NIST to further enhance the understanding of the radiometric performance of this equipment prior to flight. The current effort summarizes these improvements to the CERES FM-5 pre-flight sensor characterization program, as well as modifications to inflight calibration procedures and operational tasking. In addition, an estimate of the impacts to the system level accuracy and traceability is presented.

  19. Performance analysis of optical coherence tomography in the context of a thickness estimation task

    NASA Astrophysics Data System (ADS)

    Huang, Jinxin; Yao, Jianing; Cirucci, Nick; Ivanov, Trevor; Rolland, Jannick P.

    2015-12-01

    Thickness estimation is a common task in optical coherence tomography (OCT). This study discusses and quantifies the intensity noise of three commonly used broadband sources: a supercontinuum source, a superluminescent diode (SLD), and a swept source. The performance of the three optical sources was evaluated for a thickness estimation task using both the fast Fourier transform (FFT) and maximum-likelihood (ML) estimators. We find that the source intensity noise has less impact on a thickness estimation task compared to the width of the axial point-spread function (PSF) and the trigger jittering noise of a swept source. Findings further show that the FFT estimator yields biased estimates, which can be as large as 10% of the thickness under test in the worst case. The ML estimator is by construction asymptotically unbiased and displays a 10× improvement in precision for both the supercontinuum and SLD sources. The ML estimator also shows the ability to estimate thickness that is at least 10× thinner compared to the FFT estimator. Finally, findings show that a supercontinuum source combined with the ML estimator enables unbiased nanometer-class thickness estimation with nanometer-scale precision.

  20. Performance analysis of optical coherence tomography in the context of a thickness estimation task.

    PubMed

    Huang, Jinxin; Yao, Jianing; Cirucci, Nick; Ivanov, Trevor; Rolland, Jannick P

    2015-12-01

    Thickness estimation is a common task in optical coherence tomography (OCT). This study discusses and quantifies the intensity noise of three commonly used broadband sources: a supercontinuum source, a superluminescent diode (SLD), and a swept source. The performance of the three optical sources was evaluated for a thickness estimation task using both the fast Fourier transform (FFT) and maximum-likelihood (ML) estimators. We find that the source intensity noise has less impact on a thickness estimation task compared to the width of the axial point-spread function (PSF) and the trigger jittering noise of a swept source. Findings further show that the FFT estimator yields biased estimates, which can be as large as 10% of the thickness under test in the worst case. The ML estimator is by construction asymptotically unbiased and displays a 10× improvement in precision for both the supercontinuum and SLD sources. The ML estimator also shows the ability to estimate thickness that is at least 10× thinner compared to the FFT estimator. Finally, findings show that a supercontinuum source combined with the ML estimator enables unbiased nanometer-class thickness estimation with nanometer-scale precision. PMID:26378988
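
    The FFT estimator referred to in both records above can be illustrated on a simulated single-layer fringe: the fringe frequency over wavenumber encodes the round-trip optical path and thus the thickness. The layer index, thickness, and spectral band below are assumptions, and the maximum-likelihood estimator of the paper is not shown.

      # Minimal sketch of FFT-based thickness estimation from a simulated
      # Fourier-domain OCT fringe; all sample parameters are hypothetical.
      import numpy as np

      n, d_true = 1.5, 20e-6                 # refractive index, thickness (assumed)
      opd = 2 * n * d_true                   # round-trip optical path difference
      k = np.linspace(6.0e6, 8.0e6, 4096)    # wavenumber samples, rad/m (assumed band)
      fringe = np.cos(opd * k)               # ideal two-surface interference fringe

      spectrum = np.abs(np.fft.rfft(fringe * np.hanning(k.size)))
      freq = np.fft.rfftfreq(k.size, d=(k[1] - k[0]))   # cycles per rad/m
      peak = freq[np.argmax(spectrum[1:]) + 1]          # skip the DC bin
      opd_est = 2 * np.pi * peak                        # back to optical path, m
      print(f"estimated thickness: {opd_est / (2 * n) * 1e6:.2f} um")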

  1. Estimating the Extreme Behaviors of Students Performance Using Quantile Regression--Evidences from Taiwan

    ERIC Educational Resources Information Center

    Chen, Sheng-Tung; Kuo, Hsiao-I.; Chen, Chi-Chung

    2012-01-01

    The two-stage least squares approach together with quantile regression analysis is adopted here to estimate the educational production function. Such a methodology is able to capture the extreme behaviors of the two tails of students' performance and the estimation outcomes have important policy implications. Our empirical study is applied to the…
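
    A minimal sketch of quantile regression at the tails of a performance distribution is given below using statsmodels' QuantReg on synthetic data; the paper's two-stage least-squares (instrumenting) step is omitted, and the variables are invented for illustration.

      # Hedged sketch: quantile regression at the lower tail, median, and upper
      # tail of a heteroscedastic synthetic score distribution.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      study_hours = rng.uniform(0, 20, 500)
      score = 40 + 2.0 * study_hours + rng.normal(0, 5 + 0.5 * study_hours)

      X = sm.add_constant(study_hours)
      for q in (0.10, 0.50, 0.90):           # the two tails and the median
          res = sm.QuantReg(score, X).fit(q=q)
          print(f"q={q:.2f}: intercept={res.params[0]:.2f}, slope={res.params[1]:.2f}")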

  2. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.

  3. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 1; Validation

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng S.

    2009-01-01

    For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of the modern high performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases of the design and off-design predictions using TD2-2 and AXOD, conducted on two existing high efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled), are provided in support of the analysis and discussion presented in this paper.

  4. Impact of Crosstalk Channel Estimation on the DSM Performance for DSL Networks

    NASA Astrophysics Data System (ADS)

    Lindqvist, Neiva; Lindqvist, Fredrik; Monteiro, Marcio; Dortschy, Boris; Pelaes, Evaldo; Klautau, Aldebaro

    2010-12-01

    The development and assessment of spectrum management methods for the copper access network are usually conducted under the assumption of accurate channel information. Acquiring such information implies, in practice, estimation of the crosstalk coupling functions between the twisted-pair lines in the access network. This type of estimation is not supported or required by current digital subscriber line (DSL) standards. In this work, we investigate the impact of the inaccuracies in crosstalk estimation on the performance of dynamic spectrum management (DSM) algorithms. A recently proposed crosstalk channel estimator is considered and a statistical sensitivity analysis is conducted to investigate the effects of the crosstalk estimation error on the bitloading and on the achievable data rate for a transmission line. The DSM performance is then evaluated based on the achievable data rates obtained through experiments with DSL setups and computer simulations. Since these experiments assume network scenarios consisting of real twisted-pair cables, both crosstalk channel estimates and measurements (for a reference comparison) are considered. The results indicate that the error introduced by the adopted estimation procedure does not compromise the performance of the DSM techniques, that is, the considered crosstalk channel estimator provides enough means for a practical implementation of DSM.

  5. Real-time Strehl and image quality performance estimator at Paranal Observatory

    NASA Astrophysics Data System (ADS)

    Mawet, Dimitri; Smette, Alain; Sarazin, Marc S.; Kuntschner, Harald; Girard, Julien H.

    2014-08-01

    Here we describe a prototype Strehl and image quality performance estimator and its integration into Paranal operations, starting with UT4 and its suite of three infrared instruments: adaptive optics-fed imager/spectrograph NACO (temporarily out of operations) and integral field unit SINFONI, as well as wide-field imager HAWK-I. The real-time estimator processes the ambient conditions (seeing, coherence time, airmass, etc.) from the DIMM and the telescope Shack-Hartmann image analyzer to produce estimates of image quality and Strehl ratio every ~30 seconds. The estimates use ad hoc instrumental models, based in part on the PAOLA adaptive optics simulator. We discuss the current performance of the estimator versus real image quality and Strehl measurements, its impact on service mode efficiency, prospects for full deployment at other UTs, its use for the adaptive optics facility (AOF), and inclusion of the SLODAR-measured fine turbulence characteristics.

  6. Preliminary Estimates of Loss of Juvenile Anadromous Salmonids to Predators in John Day Reservoir and Development of a Predation Model : Interim Report, 1986.

    SciTech Connect

    Rieman, Bruce E.

    1986-03-01

    We made preliminary estimates of the loss of juvenile salmonids to predation by walleye, Stizostedion v. vitreum, and northern squawfish, Ptychocheilus oregonensis, in John Day Reservoir in 1984 and 1985 using estimates of predator abundance and daily prey consumption rates. Preliminary estimates may be biased and may be adjusted as much as 30%, but indications are that predation could account for the majority of unexplained loss of juvenile salmonids in John Day Reservoir. Total loss was estimated at 4.1 million in 1984 and 3.3 million in 1985. Northern squawfish consumed 76% and 92% of these totals, respectively. The majority of loss occurred in mid-reservoir areas, but loss in a small area, the boat-restricted zone immediately below McNary Dam, was disproportionately large. Peaks in loss in May and July corresponded with peaks in availability of salmonids. Estimated mortality from predation for April through June in 1984 and 1985 was 9% and 7%, respectively, for chinook salmon, Oncorhynchus tshawytscha, and 10% and 15% for steelhead, Salmo gairdneri. Mortality was variable with time but tended to increase over the period of migration. Mortality of chinook was estimated at 26% to 55% during July and August. A model of predation in John Day Reservoir is outlined. The model includes a predation submodel that can calculate loss from predator number and consumption rate; a population submodel that can relate predator abundance and population structure to recruitment, exploitation, natural mortality and growth; and a distribution submodel that can apportion predators among areas of the reservoir over time. Applications of the model are discussed for projecting expected changes in predation over time and identifying management alternatives that might limit the impact of predation.
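
    The predation submodel described above computes loss as predator abundance times per-predator consumption accumulated over time. A back-of-the-envelope sketch with purely hypothetical numbers (not the report's estimates) is:

      # Illustrative arithmetic only; the abundance, consumption rate, and
      # migration window below are assumed placeholders.
      predators = 50_000            # northern squawfish in a reservoir reach (assumed)
      salmonids_per_day = 0.5       # mean daily consumption per predator (assumed)
      days = 60                     # April-June migration window (assumed)

      loss = predators * salmonids_per_day * days
      print(f"estimated loss: {loss:,.0f} juvenile salmonids")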

  7. New Measurements of the Densities of Copper- and Nickel-Sulfide Liquids and Preliminary Estimates of the Partial Molar Volumes of Cu, Ni, S and O

    NASA Astrophysics Data System (ADS)

    Kress, V. C.; Ghiorso, M. S.

    2001-12-01

    We present the results of density measurements in Ni- and Cu-sulfide liquids. Density measurements were performed in-situ at 1250 °C under controlled-atmosphere conditions using the modified single-bob (MSB) Archimedean method. The MSB consists of a ~2 mm diameter rod with a ~6 mm long ~7 mm diameter cylindrical bob attached ~7 mm from the base of the rod. The bob and crucible were constructed from yttria-stabilized zirconia to minimize reaction with the corrosive sulfide liquid. Zirconia density at temperature was calibrated against the known density of molten Cu metal (Drotning 1981, High Temp-High Press 13: 441-458). Density was determined by measuring buoyancy as a function of immersed volume. Buoyancy was measured with a 0.1 mg resolution analytical balance interfaced with a computer. The crucible is mounted on a micrometer "elevator" allowing regulation of immersion with 0.005 mm resolution. Temperature was measured with an S-type thermocouple in contact with the bottom of the crucible. We explored log(fO2) from -8.2 to -12.6 and log(fS2) from -1.9 to -3.3. Five measurements have been made so far. Cu-sulfide densities range from 6.32 to 6.36 g/cc and were reproducible to +/-0.7%. Measured Ni-sulfide densities were lower, ranging from 5.27 to 5.79 g/cc. Wetting problems in Ni-sulfide compositions made these measurements more difficult. Reproducibility in Ni-sulfide melts was roughly +/-5%. Measured density values were used to regress preliminary partial molar volumes of sulfide liquids in the Cu-Ni-S-O system. A linear least squares fit was derived from the five density measurements along with the densities of pure molten Cu (Drotning 1981, ibid.) and Ni (Nasch 1995, Phys Chem Liq 29: 43-58) at 1250 °C. Melt compositions under experimental conditions were estimated using the thermodynamic model of Kress (submitted). The molar volume of the system (V) can be expressed as V = 8.18 X_Cu + 7.38 X_Ni + 30.33 X_S, where X_i is the mole fraction of component i. Oxygen
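
    The reported fit can be applied directly to a melt composition; the mole fractions in the short example below are hypothetical, and the oxygen term truncated from the abstract is not included.

      # Worked example of the reported partial-molar-volume fit
      # V = 8.18*X_Cu + 7.38*X_Ni + 30.33*X_S (cc/mol); the composition is assumed.
      x_cu, x_ni, x_s = 0.45, 0.20, 0.35          # hypothetical mole fractions (sum to 1)
      v_molar = 8.18 * x_cu + 7.38 * x_ni + 30.33 * x_s
      print(f"molar volume: {v_molar:.2f} cc/mol")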

  8. Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.

  9. Further result on guaranteed H∞ performance state estimation of delayed static neural networks.

    PubMed

    Huang, He; Huang, Tingwen; Chen, Xiaoping

    2015-06-01

    This brief considers the guaranteed H∞ performance state estimation problem of delayed static neural networks. An Arcak-type state estimator, which is more general than the widely adopted Luenberger-type one, is chosen to tackle this issue. A delay-dependent criterion is derived under which the estimation error system is globally asymptotically stable with a prescribed H∞ performance. It is shown that the design of suitable gain matrices and the optimal performance index are accomplished by solving a convex optimization problem subject to two linear matrix inequalities. Compared with some previous results, much better performance is achieved by our approach, which is greatly benefited from introducing an additional gain matrix in the domain of activation function. An example is finally given to demonstrate the advantage of the developed result.
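
    The design described above reduces to a convex problem subject to linear matrix inequalities (LMIs). The sketch below only illustrates that solution pattern with a generic Lyapunov-type LMI in cvxpy for an assumed stable matrix; it is not the paper's delay-dependent H∞ criterion or estimator gain synthesis.

      # Generic LMI-feasibility sketch (illustrative only): find P > 0 with
      # A'P + PA < 0 for an assumed stable A, solved as a convex SDP.
      import cvxpy as cp
      import numpy as np

      A = np.array([[-2.0, 1.0],
                    [0.0, -3.0]])                  # assumed stable system matrix
      n = A.shape[0]

      P = cp.Variable((n, n), symmetric=True)
      constraints = [P >> np.eye(n),                  # P positive definite (scaled)
                     A.T @ P + P @ A << -np.eye(n)]   # Lyapunov LMI
      prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
      prob.solve()
      print("status:", prob.status)
      print("P =", np.round(P.value, 3))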

  10. Time estimation as a secondary task to measure workload. [attention sharing effect on operator performance

    NASA Technical Reports Server (NTRS)

    Hart, S. G.

    1975-01-01

    Variation in the length of time productions and verbal estimates of duration was investigated to determine the influence of concurrent activity on operator time perception. The length of 10-, 20-, and 30-sec intervals produced while performing six different compensatory tracking tasks was significantly longer, 23% on the average, than those produced while performing no other task. Verbal estimates of session duration, taken at the end of each of 27 experimental sessions, reflected a parallel increase in subjective underestimation of the passage of time as the difficulty of the task performed increased. These data suggest that estimates of duration made while performing a manual control task provide stable and sensitive measures of the workload imposed by the primary task, with minimal interference.

  11. Several small-scale vector array performance analysis and simulation of DOA estimation

    NASA Astrophysics Data System (ADS)

    Mei, Yinzhen

    2011-10-01

    To investigate the application and estimation performance of traditional direction-of-arrival (DOA) estimation on small-scale vector-sensor arrays, we derive the time-delay expressions for four small-scale non-uniform vector-sensor arrays and give the array direction vectors. The MUSIC algorithm is applied successfully to the non-uniform vector arrays for DOA estimation, the better-performing element arrangement for each array is selected, and beamforming, the probability of success, and the mean square error are compared. The results show that the performance of the linear array is best, followed by the L-array and the circular array, while the performance of the cross-array is worst.
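
    For reference, a minimal MUSIC direction-of-arrival sketch for a plain uniform linear array is shown below (numpy only); the geometry, source angles, SNR, and snapshot count are assumptions, and the non-uniform vector-sensor arrays studied in the record would need their own steering model.

      # Hedged MUSIC sketch on a uniform linear array with two assumed sources.
      import numpy as np

      rng = np.random.default_rng(2)
      m, d = 8, 0.5                      # sensors, spacing in wavelengths (assumed)
      true_deg = np.array([-20.0, 35.0]) # assumed source directions
      snapshots, snr_db = 400, 10

      def steering(theta_deg):
          theta = np.deg2rad(theta_deg)
          return np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta))

      A = steering(true_deg)                                   # m x 2
      S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
      noise = rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots))
      X = A @ S + 10 ** (-snr_db / 20) * noise                 # array snapshots

      R = X @ X.conj().T / snapshots                           # sample covariance
      w, V = np.linalg.eigh(R)                                 # ascending eigenvalues
      En = V[:, : m - 2]                                       # noise subspace (m - #sources)

      grid = np.arange(-90.0, 90.1, 0.5)
      a = steering(grid)
      p = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)   # MUSIC pseudospectrum
      is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])
      cand = np.where(is_peak)[0] + 1
      top2 = cand[np.argsort(p[cand])[-2:]]
      print("estimated DOAs (deg):", np.sort(np.round(grid[top2], 1)))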

  12. Heavy oil recovery process: Conceptual engineering of a downhole methanator and preliminary estimate of facilities cost for application to North Slope Alaska

    SciTech Connect

    Not Available

    1990-01-01

    Results from Tasks 8 and 9 are presented. Task 8 addressed the cost of materials and manufacturing of the Downhole Methanator and the cost of drilling and completing the vertical cased well and two horizontal drain holes in the West Sak reservoir. Task 9 addressed the preliminary design of surface facilities to support the enhanced recovery of heavy oil. Auxiliary facilities include steam reformers for carbon dioxide-rich natural gas reforming, emergency electric generators, nitrogen gas generators, and an ammonia synthesis unit. The ammonia is needed to stabilize the swelling of clays in the reservoir. Cost estimations and a description of how they were obtained are given.

  13. Performance of a low-cost methane sensor for ambient concentration measurements in preliminary studies

    NASA Astrophysics Data System (ADS)

    Eugster, W.; Kling, G. W.

    2012-08-01

    Methane is the second most important greenhouse gas after CO2 and contributes to global warming. Its sources are not uniformly distributed across terrestrial and aquatic ecosystems, and most of the methane flux is expected to stem from hotspots, which often occupy a very small fraction of the total landscape area. Continuous time-series measurements of CH4 concentrations can help identify and locate these methane hotspots. Newer, low-cost trace gas sensors such as the Figaro TGS 2600 can detect CH4 even at ambient concentrations. Hence, in this paper we tested this sensor under real-world conditions over Toolik Lake, Alaska, to determine its suitability for preliminary studies before placing more expensive and service-intensive equipment at a given locality. A reasonably good agreement with parallel measurements made using a Los Gatos Research FMA 100 methane analyzer was found after removal of the strong sensitivities to temperature and relative humidity. Correcting for this sensitivity increased the absolute accuracy required for in-depth studies, and the reproducibility between two TGS 2600 sensors run in parallel is very good. We conclude that the relative CH4 concentrations derived from such sensors are sufficient for preliminary investigations in the search for potential methane hotspots.

  14. Rocket experiments for spectral estimation of electron density fine structure in the auroral and equatorial ionosphere and preliminary results

    NASA Technical Reports Server (NTRS)

    Tomei, B. A.; Smith, L. G.

    1986-01-01

    Sounding rockets equipped to monitor electron density and its fine structure were launched into the auroral and equatorial ionosphere in 1980 and 1983, respectively. The measurement electronics are based on the Langmuir probe and are described in detail. An approach to the spectral analysis of the density irregularities is addressed and a software algorithm implementing the approach is given. Preliminary results of the analysis are presented.

  15. The Thermo Scientific HELIX-SFT noble gas mass spectrometer: (preliminary) performance for 40Ar/39Ar geochronology

    NASA Astrophysics Data System (ADS)

    Barfod, D. N.; Mark, D. F.; Morgan, L. E.; Tomkinson, T.; Stuart, F.; Imlach, J.; Hamilton, D.

    2011-12-01

    The Thermo Scientific HELIX-platform Split Flight Tube (HELIX-SFT) noble gas mass spectrometer is specifically designed for simultaneous collection of helium isotopes. The high mass spur houses a switchable 10^11 - 10^12 Ω resistor Faraday cup and the low mass spur a digital pulse-counting secondary electron multiplier (SEM). We have acquired the HELIX-SFT with the specific intention to measure argon isotopes for 40Ar/39Ar geochronology. This contribution will discuss preliminary performance (resolution, reproducibility, precision, etc.) with respect to measuring argon isotope ratios for 40Ar/39Ar dating of geological materials. We anticipate the greatest impact for 40Ar/39Ar dating will be increased accuracy and precision, especially as we approach the technique's younger limit. Working with Thermo Scientific we have subtly modified the source, alpha and collector slits of the HELIX-SFT mass spectrometer to improve its resolution for resolving isobaric interferences at masses 36 to 40. The enhanced performance will allow for accurate and precise measurement of argon isotopes. Preliminary investigations show that we can obtain a valley resolution of >700 and >1300 (compared to standard HELIX-SFT specifications of >400 and >700) for the high and low mass spurs, respectively. The improvement allows for full resolution of hydrocarbons (C3+) at masses 37 - 40 and almost full resolution at mass 36. The HELIX-SFT will collect data in dual collection mode with 40Ar+ ion beams measured using the switchable 10^11 - 10^12 Ω resistor Faraday cup and 39Ar through 36Ar measured using the SEM. The HELIX-SFT requires Faraday-SEM inter-calibration but negates the necessity to inter-calibrate multiple electron multipliers. We will further present preliminary data from the dating of mineral standards: Alder Creek sanidine, Fish Canyon sanidine and Mount Dromedary biotite (GA1550).

  16. Time estimation and performance on reproduction tasks in subtypes of children with attention deficit hyperactivity disorder.

    PubMed

    Bauermeister, Jose J; Barkley, Russell A; Martinez, Jose V; Cumba, Eduardo; Ramirez, Rafael R; Reina, Graciela; Matos, Maribel; Salas, Carmen C

    2005-03-01

    This study compared Hispanic children (ages 7 to 11) with combined type (CT, n=33) and inattentive type (IT, n=21) attention deficit hyperactivity disorder (ADHD) and a control group (n=25) on time-estimation and time-reproduction tasks. The ADHD groups showed larger errors in time reproduction but not in time estimation than the control group, and the groups did not differ from each other on their performance on this task. Individual differences could not be accounted for by oppositional-defiance ratings and low math or reading scores. Although various measures of executive functioning did not make significant unique contributions to time estimation performance, those of interference control and nonverbal working memory did so to the time-reproduction task. Findings suggest that ADHD is associated with a specific impairment in the capacity to reproduce rather than estimate time durations and that this may be related to the children's deficits in inhibition and working memory.

  17. Beyond Neglect: Preliminary Evidence of Retrospective Time Estimation Abnormalities in Non-Neglect Stroke and Transient Ischemic Attack Patients

    PubMed Central

    Low, Essie; Crewther, Sheila G.; Perre, Diana L.; Ben Ong; Laycock, Robin; Tu, Hans; Wijeratne, Tissa

    2016-01-01

    Perception of the passage of time is essential for safe planning and navigation of everyday activities. Findings from the literature have demonstrated a gross underestimation of time interval in right-hemisphere damaged neglect patients, but not in non-neglect unilaterally-damaged patients, compared to controls. This study aimed to investigate retrospective estimation of the duration of a target detection task over two occasions, in 30 stroke patients (12 left-side stroke, 15 right-side stroke, and 3 right-side stroke with neglect) and 10 transient ischemic attack patients, relative to 31 age-matched controls. Performances on visual short-term and working memory tasks were also examined to investigate the associations between timing abilities and residual cognitive functioning. Initial results revealed evidence of perceptual time underestimation, not just in neglect patients, but also in non-neglect unilaterally-damaged stroke patients and transient ischemic attack patients. Three months later, underestimation of time persisted only in left-side stroke and right-side stroke with neglect patients, who also demonstrated reduced short-term and working memory abilities. Findings from this study suggest a predictive role of residual cognitive impairments in determining the prognosis of perceptual timing abnormalities. PMID:26940859

  18. Computer code for estimating installed performance of aircraft gas turbine engines. Volume 3: Library of maps

    NASA Technical Reports Server (NTRS)

    Kowalski, E. J.

    1979-01-01

    A computerized method which utilizes the engine performance data and estimates the installed performance of aircraft gas turbine engines is presented. This installation includes: engine weight and dimensions, inlet and nozzle internal performance and drag, inlet and nacelle weight, and nacelle drag. The use of two data base files to represent the engine and the inlet/nozzle/aftbody performance characteristics is discussed. The existing library of performance characteristics for inlets and nozzle/aftbodies and an example of the 1000 series of engine data tables is presented.

  19. Estimated effects of ionizing radiation upon military task performance: individual combat crewmember assessment

    SciTech Connect

    Anno, G.H.; Wilson, D.B.

    1984-04-01

    Quantitative estimates are developed of the performance levels for selected individual Army combat crewmembers exposed to prompt ionizing radiation from nuclear weapons. The performance levels, expressed in percent of normal (baseline) task performance, provide information for military operations planning, combat training, and computer simulation modeling of combat crew and unit effectiveness. The methodology is described, in which data from two separate bodies of information (acute radiation sickness symptomatology and judgments of task performance time from Army combat crew questionnaires) are integrated to compute performance levels as a function of dose (free-in-air) and post-exposure time.

  20. The performance of different propensity score methods for estimating marginal hazard ratios.

    PubMed

    Austin, Peter C

    2013-07-20

    Propensity score methods are increasingly being used to reduce or minimize the effects of confounding when estimating the effects of treatments, exposures, or interventions when using observational or non-randomized data. Under the assumption of no unmeasured confounders, previous research has shown that propensity score methods allow for unbiased estimation of linear treatment effects (e.g., differences in means or proportions). However, in biomedical research, time-to-event outcomes occur frequently. There is a paucity of research into the performance of different propensity score methods for estimating the effect of treatment on time-to-event outcomes. Furthermore, propensity score methods allow for the estimation of marginal or population-average treatment effects. We conducted an extensive series of Monte Carlo simulations to examine the performance of propensity score matching (1:1 greedy nearest-neighbor matching within propensity score calipers), stratification on the propensity score, inverse probability of treatment weighting (IPTW) using the propensity score, and covariate adjustment using the propensity score to estimate marginal hazard ratios. We found that both propensity score matching and IPTW using the propensity score allow for the estimation of marginal hazard ratios with minimal bias. Of these two approaches, IPTW using the propensity score resulted in estimates with lower mean squared error when estimating the effect of treatment in the treated. Stratification on the propensity score and covariate adjustment using the propensity score result in biased estimation of both marginal and conditional hazard ratios. Applied researchers are encouraged to use propensity score matching and IPTW using the propensity score when estimating the relative effect of treatment on time-to-event outcomes.
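
    The inverse probability of treatment weighting (IPTW) approach favored in the record can be sketched as follows: fit a propensity model, form inverse-probability weights, and compare weighted outcomes. The simulated covariates and linear outcome are illustrative assumptions; estimating a marginal hazard ratio would additionally require a weighted survival model on time-to-event data.

      # Hedged IPTW sketch on simulated data (continuous outcome for simplicity).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 5000
      x = rng.normal(size=(n, 3))                                  # confounders
      p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))
      z = rng.binomial(1, p_treat)                                 # treatment
      y = 1.0 * z + x @ np.array([0.8, -0.5, 0.2]) + rng.normal(size=n)

      ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]   # propensity score
      w = np.where(z == 1, 1 / ps, 1 / (1 - ps))                   # ATE-style weights

      ate = (np.average(y[z == 1], weights=w[z == 1])
             - np.average(y[z == 0], weights=w[z == 0]))
      print(f"IPTW-weighted treatment effect: {ate:.3f} (true effect 1.0)")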

  1. Preliminary Analysis of the CASES GPS Receiver Performance during Simulated Seismic Displacements

    NASA Astrophysics Data System (ADS)

    De la Rosa-Perkins, A.; Reynolds, A.; Crowley, G.; Azeem, I.

    2014-12-01

    We explore the ability of a new GPS software receiver, called CASES (Connected Autonomous Space Environment Sensor), to measure seismic displacements in real time. Improvements in GPS technology over the last 20 years allow for precise measurement of ground motion during seismic events. For example, GPS data has been used to calculate displacement histories at an earthquake's epicenter and fault slip estimations with great accuracy. This is supported by the ability to measure displacements directly using GPS, bypassing the double integration that accelerometers require, and by higher clipping limits than seismometers. The CASES receiver developed by ASTRA in collaboration with Cornell University and the University of Texas, Austin represents a new geodetic-quality software-based GPS receiver that measures ionospheric space weather in addition to the usual navigation solution. To demonstrate, in a controlled environment, the ability of the CASES receiver to measure seismic displacements, we simulated ground motions similar to those generated during earthquakes, using a shake box instrumented with an accelerometer and a GPS antenna. The accelerometer measured the box's actual displacement. The box moved on a manually controlled axis that underwent varied one-dimensional motions (from mm to cm) at different frequencies and amplitudes. The CASES receiver was configured to optimize the accuracy of the position solution. We quantified the CASES GPS receiver performance by comparing the GPS solutions against the accelerometer data using various statistical analysis methods. The results of these tests will be presented. The CASES receiver is designed with multiple methods of accessing the data in real time, ranging from internet connection and Bluetooth to cell-phone modem and Iridium modem. Because the CASES receiver measures ionospheric space weather in addition to the usual navigation solution, CASES provides not only the seismic signal, but also the ionospheric space weather

  2. Site selection for a countrywide temporary network in Austria: noise analysis and preliminary performance

    NASA Astrophysics Data System (ADS)

    Fuchs, F.; Kolínský, P.; Gröschl, G.; Apoloner, M.-T.; Qorbani, E.; Schneider, F.; Bokelmann, G.

    2015-10-01

    Site selection is a crucial part of the work flow for installing seismic stations. Here, we report the preparations for a countrywide temporary seismic network in Austria. We describe the specific requirements for a multi-purpose seismic array with 40 km station spacing that will be operative for approximately three years. Reftek 151 60 s sensors and Reftek 130/130S digitizers form the core instrumentation of our seismic stations, which are mostly installed inside abandoned or occasionally used basements or cellars. We present probabilistic power spectral density analysis to assess noise conditions at selected sites and show exemplary seismic events that were recorded by the preliminary network by the end of July 2015.

  3. An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1996-01-01

    Autocorrelation based spectral moment estimators are typically derived using the Fourier transform relationship between the power spectrum and the autocorrelation function along with using either an assumed form of the autocorrelation function, e.g., Gaussian, or a generic complex form and applying properties of the characteristic function. Passarelli has used a series expansion of the general complex autocorrelation function and has expressed the coefficients in terms of central moments of the power spectrum. A truncation of this series will produce a closed system of equations which can be solved for the central moments of interest. The autocorrelation function at various lags is estimated from samples of the random process under observation. These estimates themselves are random variables and exhibit a bias and variance that is a function of the number of samples used in the estimates and the operational signal-to-noise ratio. This contributes to a degradation in performance of the moment estimators. This dissertation investigates the use of autocorrelation function estimates at higher order lags to reduce the bias and standard deviation in spectral moment estimates. In particular, Passarelli's series expansion is cast in terms of an overdetermined system to form a framework under which the application of additional autocorrelation function estimates at higher order lags can be defined and assessed. The solution of the overdetermined system is the least squares solution. Furthermore, an overdetermined system can be solved for any moment or moments of interest and is not tied to a particular form of the power spectrum or corresponding autocorrelation function. As an application of this approach, autocorrelation based variance estimators are defined by a truncation of Passarelli's series expansion and applied to simulated Doppler weather radar returns which are characterized by a Gaussian shaped power spectrum. The performance of the variance estimators determined
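
    The core idea, using autocorrelation estimates at several lags in an overdetermined least-squares fit, can be sketched for a Gaussian-shaped spectrum, where ln|R(l)| is linear in l^2. The signal model, spectral width, and number of lags below are assumptions rather than the dissertation's estimators.

      # Hedged sketch: ln|R(l)| = ln|R(0)| - 2*pi^2*sigma_f^2*l^2 for a Gaussian
      # spectrum, so a least-squares fit over several lags yields the width.
      import numpy as np

      rng = np.random.default_rng(4)
      N, sigma_f = 4096, 0.03                       # samples, width in cycles/sample
      f = np.fft.fftfreq(N)
      amp = np.sqrt(np.exp(-f ** 2 / (2 * sigma_f ** 2)))
      x = np.fft.ifft(amp * (rng.normal(size=N) + 1j * rng.normal(size=N)))

      lags = np.arange(1, 9)                        # use 8 lags -> overdetermined fit
      R0 = np.mean(np.abs(x) ** 2)
      Rl = np.array([np.mean(x[l:] * np.conj(x[:-l])) for l in lags])

      A = np.column_stack([np.ones_like(lags, dtype=float), lags.astype(float) ** 2])
      b = np.log(np.abs(Rl) / R0)
      coef, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solution
      sigma_est = np.sqrt(-coef[1] / (2 * np.pi ** 2))
      print(f"true width {sigma_f:.3f}, estimated {sigma_est:.3f} cycles/sample")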

  4. On the estimation algorithm used in adaptive performance optimization of turbofan engines

    NASA Technical Reports Server (NTRS)

    Espana, Martin D.; Gilyard, Glenn B.

    1993-01-01

    The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The performance seeking control algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine deviations with respect to nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This factor has a necessary impact on the overall performance seeking control scheme exacerbated by the open-loop character of the algorithm. The effects produced by unknown measurement biases over the estimation algorithm are evaluated. This evaluation allows for identification of the most critical measurements for application of the performance seeking control algorithm to an F100 engine. An equivalence relation between the biases and engine deviation parameters stems from an observability study; therefore, it is undecided whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. When compared with previous Kalman filter schemes, based on local engine dynamic models, the new algorithm is easier to design and tune and it reduces the computational burden of the onboard computer.

  5. Technical note: Estimating unbiased transfer-function performances in spatially structured environments

    NASA Astrophysics Data System (ADS)

    Trachsel, Mathias; Telford, Richard J.

    2016-05-01

    Conventional cross validation schemes for assessing transfer-function performance assume that observations are independent. In spatially structured environments this assumption is violated, resulting in over-optimistic estimates of transfer-function performance. H-block cross validation, where all samples within h kilometres of the test samples are omitted, is a method for obtaining unbiased transfer-function performance estimates. In this study, we assess three methods for determining the optimal h. Using simulated data, we find that all three methods result in comparable values of h. Applying the three methods to published transfer functions, we find they yield similar values for h. Some transfer functions perform notably worse when h-block cross validation is used.

  6. Technical Note: Estimating unbiased transfer-function performances in spatially structured environments

    NASA Astrophysics Data System (ADS)

    Trachsel, M.; Telford, R. J.

    2015-10-01

    Conventional cross-validation schemes for assessing transfer-function performance assume that observations are independent. In spatially structured environments this assumption is violated, resulting in over-optimistic estimates of transfer-function performance. H-block cross-validation, where all samples within h km of the test samples are omitted, is a method for obtaining unbiased transfer-function performance estimates. In this study, we assess three methods for determining the optimal h. Using simulated data, we find that all three methods result in comparable values of h. Applying the three methods to published transfer functions, we find they yield similar values for h. Some transfer functions perform notably worse when h-block cross-validation is used.
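
    The leave-out rule of h-block cross-validation described in both records can be sketched directly: for each test sample, every sample within h km is removed from the training set before fitting. The toy spatial data and linear model below are assumptions used only to show the mechanics.

      # Hedged h-block cross-validation sketch on toy spatial data.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      n = 300
      coords = rng.uniform(0, 500, size=(n, 2))          # site locations, km (assumed)
      x = rng.normal(size=(n, 1))
      y = 2.0 * x[:, 0] + rng.normal(0, 0.5, size=n)

      def h_block_cv_mse(h_km):
          errs = []
          for i in range(n):
              dist = np.hypot(*(coords - coords[i]).T)    # distance to test site i
              train = dist > h_km                         # omit everything within h
              train[i] = False                            # always omit the test sample
              model = LinearRegression().fit(x[train], y[train])
              errs.append((model.predict(x[i:i + 1])[0] - y[i]) ** 2)
          return float(np.mean(errs))

      for h in (0, 50, 100):
          print(f"h = {h:3d} km  ->  CV MSE = {h_block_cv_mse(h):.3f}")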

  7. LIFE ESTIMATION OF HIGH LEVEL WASTE TANK STEEL FOR F-TANK FARM CLOSURE PERFORMANCE ASSESSMENT

    SciTech Connect

    Subramanian, K

    2007-10-01

    High level radioactive waste (HLW) is stored in underground storage tanks at the Savannah River Site. The SRS is proceeding with closure of the 22 tanks located in F-Area. Closure consists of removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations and severing/sealing external penetrations. A performance assessment is being performed in support of closure of the F-Tank Farm. Initially, the carbon steel construction materials of the high level waste tanks will provide a barrier to the leaching of radionuclides into the soil. However, the carbon steel liners will degrade over time, most likely due to corrosion, and no longer provide a barrier. The tank life estimation in support of the F-Tank Farm closure performance assessment has been completed. The estimation considered general and localized corrosion mechanisms of the tank steel exposed to the contamination zone, grouted, and soil conditions, and was completed for Type I, Type III, and Type IV tanks in the F-Tank Farm. Consumption of the tank steel encased in grouted conditions was determined to occur either due to carbonation of the concrete leading to low pH conditions, or the chloride-induced de-passivation of the steel leading to accelerated corrosion. A deterministic approach was initially followed to estimate the life of the tank liner in grouted conditions or in soil conditions. The results of this life estimation are shown in Table 1 and Table 2 for grouted and soil conditions respectively. The tank life has been estimated under conservative assumptions of diffusion rates. However, the same process of

  8. Preliminary evidence of salivary cortisol predicting performance in a controlled setting.

    PubMed

    Lautenbach, Franziska; Laborde, Sylvain; Achtzehn, Silvia; Raab, Markus

    2014-04-01

    The aims of this study were to examine the influence of salivary cortisol on tennis serve performance in a controlled setting and to investigate if cortisol predicts unique variance in performance beyond a subjective anxiety measure (i.e., Competitive State Anxiety Inventory-2 [CSAI-2]). Twenty-three tennis players performed two series of second tennis serves separated by an anxiety induction (i.e., arithmetic task). Cortisol was assessed six times during the experiment. Results show that cortisol response and a drop in serving performance are positively correlated (r=.68, p<.001). Cortisol also explains unique variance in performance (i.e., 19%) beyond CSAI-2 measures. Thus, considering cortisol measurements seems warranted in future research aimed at understanding performance.

  9. Model-Based MR Parameter Mapping with Sparsity Constraints: Parameter Estimation and Performance Bounds

    PubMed Central

    Zhao, Bo; Lam, Fan; Liang, Zhi-Pei

    2014-01-01

    MR parameter mapping (e.g., T1 mapping, T2 mapping, T2∗ mapping) is a valuable tool for tissue characterization. However, its practical utility has been limited due to long data acquisition times. This paper addresses this problem with a new model-based parameter mapping method. The proposed method utilizes a formulation that integrates the explicit signal model with sparsity constraints on the model parameters, enabling direct estimation of the parameters of interest from highly undersampled, noisy k-space data. An efficient greedy-pursuit algorithm is described to solve the resulting constrained parameter estimation problem. Estimation-theoretic bounds are also derived to analyze the benefits of incorporating sparsity constraints and benchmark the performance of the proposed method. The theoretical properties and empirical performance of the proposed method are illustrated in a T2 mapping application example using computer simulations. PMID:24833520

  10. A Functional Look at Goal Orientations: Their Role for Self-Estimates of Intelligence and Performance

    ERIC Educational Resources Information Center

    Bipp, Tanja; Steinmayr, Ricarda; Spinath, Birgit

    2012-01-01

    Building on the notion that motivation energizes and directs resources in achievement situations, we argue that goal orientations affect perceptions of own intelligence and that the effect of goals on performance is partly mediated by self-estimates of intelligence. Studies 1 (n = 89) and 2 (n = 165) investigated the association of goal…

  11. Subject-specific estimation of central aortic blood pressure using an individualized transfer function: a preliminary feasibility study.

    PubMed

    Hahn, Jin-Oh; Reisner, Andrew T; Jaffer, Farouc A; Asada, H Harry

    2012-03-01

    This paper presents a new approach to the estimation of unknown central aortic blood pressure waveform from a directly measured peripheral blood pressure waveform, in which a physics-based model is employed to solve for a subject- and state-specific individualized transfer function (ITF). The ITF provides the means to estimate the unknown central aortic blood pressure from the peripheral blood pressure. Initial proof-of-principle for the ITF is demonstrated experimentally through an in vivo protocol. In swine subjects taken through a wide range of physiologic conditions, the ITF was on average able to provide central aortic blood pressure waveforms more accurately than a nonindividualized transfer function. Its usefulness was most evident when the subject's pulse transit time deviated from normative values. In these circumstances, the ITF yielded statistically significant reductions over a nonindividualized transfer function in the following three parameters: 1) 30% reduction in the root-mean-squared error between estimated versus actual central aortic blood pressure waveform (p < 10^-4), 2) >50% reduction in the error between estimated versus actual systolic and pulse pressures (p < 10^-4), and 3) a reduction in the overall breakdown rate (i.e., the frequency of estimation errors >3 mmHg, p < 10^-4). In conclusion, the ITF may offer an attractive alternative to existing methods that estimate the central aortic blood pressure waveform, and may be particularly useful in nonnormative physiologic conditions.

  12. PRELIMINARY RESULTS OF EPA'S PERFORMANCE EVALUATION OF FEDERAL REFERENCE METHODS AND FEDERAL EQUIVALENT METHODS FOR COARSE PARTICULATE MATTER

    EPA Science Inventory

    The main objective of this study is to evaluate the performance of sampling methods for potential use as a Federal Reference Method (FRM) capable of providing an estimate of coarse particle (PMc: particulate matter with an aerodynamic diameter between 2.5 µm and 10 µm) ...

  13. Relationships between Handwriting Performance and Organizational Abilities among Children with and without Dysgraphia: A Preliminary Study

    ERIC Educational Resources Information Center

    Rosenblum, Sara; Aloni, Tsipi; Josman, Naomi

    2010-01-01

    Organizational ability constitutes one executive function (EF) component essential for common everyday performance. The study aim was to explore the relationship between handwriting performance and organizational ability in school-aged children. Participants were 58 males, aged 7-8 years, 30 with dysgraphia and 28 with proficient handwriting.…

  14. Competitive Performance Correlates of Mental Toughness in Tennis: A Preliminary Analysis.

    PubMed

    Cowden, Richard G

    2016-08-01

    This study investigated relationships between mental toughness and measures of competitive performance in tennis. Forty-three male (N = 25) and female (N = 18) players (M age = 13.6 years, SD = 2.4) completed the mental toughness inventory, and the point-by-point outcomes recorded during a competitive tennis match (singles) were used to generate performance indices for each athlete. The results indicated that mental toughness was associated with several, but not all, macro, micro, and critical moment performance indices. The findings suggest mental toughness may contribute to successful performance during tennis competition, although the importance of the construct appears to depend on specific match situations. Future mental toughness research should consider a range of factors related to sport performance, including athletes' and opponents' physical, technical, and tactical abilities. PMID:27502244

  15. Competitive Performance Correlates of Mental Toughness in Tennis: A Preliminary Analysis.

    PubMed

    Cowden, Richard G

    2016-08-01

    This study investigated relationships between mental toughness and measures of competitive performance in tennis. Forty-three male (N = 25) and female (N = 18) players (M age = 13.6 years, SD = 2.4) completed the mental toughness inventory, and the point-by-point outcomes recorded during a competitive tennis match (singles) were used to generate performance indices for each athlete. The results indicated that mental toughness was associated with several, but not all, macro, micro, and critical moment performance indices. The findings suggest mental toughness may contribute to successful performance during tennis competition, although the importance of the construct appears to depend on specific match situations. Future mental toughness research should consider a range of factors related to sport performance, including athletes' and opponents' physical, technical, and tactical abilities.

  16. LIFE ESTIMATION OF TRANSFER LINES FOR TANK FARM CLOSURE PERFORMANCE ASSESSMENT

    SciTech Connect

    Subramanian, K

    2007-10-01

    A performance assessment is being performed in support of closure of the F-Tank Farm. The performance assessment includes the life estimation of the transfer lines that are used to transport waste between tanks both within a facility (''intra-area'' transfer) and to other facilities (''inter-area'' transfers). The transfer line materials of construction will initially provide a barrier to contaminant escape. However, the materials will degrade over time, most likely due to corrosion, and will no longer provide a barrier to contaminant escape. The life estimation considered the corrosion of the core pipe under exposure to soil, estimated the thickness loss due to general corrosion, and the percentage of wall area breached due to localized corrosion mechanisms. There are three types of transfer lines that are to be addressed within the performance assessment: Type I, Type II/IIA and Type III. The life of the transfer lines was estimated for exposure to soil. Localized and general corrosion of the transfer lines exposed to soil was estimated to provide input to the fate and transport modeling of the performance assessment. Pitting corrosion was found to be the controlling mechanism for the degradation of the transfer lines and their consequent ability to maintain confinement of contaminants. It is assumed that 75% of the transfer line is needed intact to provide this confinement function, i.e., once 25% of the line wall is breached, the lines are considered incapable of confining contaminants. It is recommended that the percentage breached curves be utilized for each transfer line as shown in Figure 1 for the various stainless steel transfer lines.

  17. A Preliminary Exploration of On-Line Study Question Performance and Response Certitude as Predictors of Future Examination Performance

    ERIC Educational Resources Information Center

    Grabe, Mark; Flannery, Kathryn

    2010-01-01

    This research evaluated an online study task intended to improve the study metacognition and examination performance of inexperienced college students. Existing research has commonly operationalized metacognition as the accuracy of examination score predictions. This research made use of the average discrepancy between rated confidence in…

  18. Performance of Spanish/English bilingual children on a spanish-language neuropsychological battery: preliminary normative data.

    PubMed

    Rosselli, Mónica; Ardila, Alfredo; Navarrete, M Gina; Matute, Esmeralda

    2010-05-01

    Despite a population of close to 40 million Hispanics/Latinos in the USA who have at least some level of Spanish/English bilingualism, there are few neuropsychological tests and norms available for this group, especially when assessing Spanish/English bilingual children. The purpose of the present research was to provide preliminary normative data for a bilingual population on a comprehensive neuropsychological battery developed for Spanish-speaking children (Evaluación Neuropsicológica Infantil). Norms by age are presented on the performance of 108 Spanish/English bilingual children (ages 5-14 years) and are expected to be useful when testing other Spanish/English bilingual children in the USA.

  19. Preliminary performance analysis of an interplanetary navigation system using asteroid based beacons

    NASA Technical Reports Server (NTRS)

    Jee, J. Rodney; Khatib, Ahmad R.; Muellerschoen, Ronald J.; Williams, Bobby G.; Vincent, Mark A.

    1988-01-01

    A futuristic interplanetary navigation system using transmitters placed on selected asteroids is introduced. This network of space beacons is seen as a needed alternative to the overly burdened Deep Space Network. Covariance analyses on the potential performance of these space beacons located on a candidate constellation of eight real asteroids are initiated. Simplified analytic calculations are performed to determine limiting accuracies attainable with the network for geometric positioning. More sophisticated computer simulations are also performed to determine potential accuracies using long arcs of range and Doppler data from the beacons. The results from these computations show promise for this navigation system.

  20. Performance validation of commercially available mobile waste-assay systems: Preliminary report

    SciTech Connect

    Schanfein, M.; Bonner, C.; Maez, R.

    1997-11-01

    Prior to disposal, nuclear waste must be accurately characterized to identify and quantify the radioactive content to reduce the radioactive hazard to the public. Validation of the waste-assay systems' performance is critical for establishing the credibility of the assay results for storage and disposal purposes. Canberra Nuclear has evaluated regulations worldwide and identified standard, modular, neutron- and gamma-waste-assay systems that can be used to characterize a large portion of existing and newly generated transuranic (TRU) and low-level waste. Before making claims of guaranteeing any system's performance for specific waste types, the standardized systems' performance must be evaluated. 7 figs., 11 tabs.

  1. Performances estimation of a rotary traveling wave ultrasonic motor based on two-dimension analytical model.

    PubMed

    Ming, Y; Peiwen, Q

    2001-03-01

    The understanding of ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency, and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With the contact model of a distributed spring-rigid body between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The performances of steady rotation speed and stall torque are then deduced. Using MATLAB and an iterative algorithm, we estimate the rotation speed and stall torque as functions of the input parameters. The same experiments are completed with an optoelectronic tachometer and standard weights. Both estimation and experiment results reveal the pattern of performance variation as a function of the input parameters.

  2. Preliminary performance of a vertical-attitude takeoff and landing, supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system

    NASA Technical Reports Server (NTRS)

    Robins, A. W.; Beissner, F. L., Jr.; Domack, C. S.; Swanson, E. E.

    1985-01-01

    A performance study was made of a vertical attitude takeoff and landing (VATOL), supersonic cruise aircraft concept having thrust vectoring integrated into the flight control system. Those characteristics considered were aerodynamics, weight, balance, and performance. Preliminary results indicate that high levels of supersonic aerodynamic performance can be achieved. Further, with the assumption of an advanced (1985 technology readiness) low bypass ratio turbofan engine and advanced structures, excellent mission performance capability is indicated.

  3. Compressive sensing based Bayesian sparse channel estimation for OFDM communication systems: high performance and low complexity.

    PubMed

    Gui, Guan; Xu, Li; Shan, Lin; Adachi, Fumiyuki

    2014-01-01

    In orthogonal frequency division modulation (OFDM) communication systems, channel state information (CSI) is required at the receiver because the frequency-selective fading channel leads to severe intersymbol interference (ISI) over data transmission. A broadband channel is often described by very few dominant channel taps, and these can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example the orthogonal matching pursuit algorithm, which can effectively exploit the sparse structure of the channel as prior information. However, these methods are vulnerable to both noise interference and column coherence of the training signal matrix. In other words, the primary objective of these conventional methods is to capture the dominant channel taps without reporting posterior channel uncertainty. To improve the estimation performance, we propose a compressive sensing based Bayesian sparse channel estimation (BSCE) method which not only exploits the channel sparsity but also mitigates the unexpected channel uncertainty without sacrificing computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that the proposed method improves the estimation performance compared with conventional SCE methods.
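
    The abstract names orthogonal matching pursuit (OMP) as the baseline compressive-sensing estimator. Purely as orientation, the sketch below is a minimal OMP recovery of a few dominant channel taps from a training matrix; the dimensions, names, and noise level are hypothetical, and this is not the proposed BSCE method.

      import numpy as np

      def omp_channel_estimate(X, y, n_taps, sparsity):
          """Minimal orthogonal matching pursuit sketch for sparse channel estimation.
          X: (n_obs, n_taps) training/measurement matrix, y: received samples,
          sparsity: assumed number of dominant channel taps."""
          residual = y.copy()
          support = []
          h_hat = np.zeros(n_taps, dtype=complex)
          for _ in range(sparsity):
              # pick the column most correlated with the current residual
              idx = int(np.argmax(np.abs(X.conj().T @ residual)))
              if idx not in support:
                  support.append(idx)
              # least-squares fit on the currently selected support
              coeffs, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
              residual = y - X[:, support] @ coeffs
          h_hat[support] = coeffs
          return h_hat

      # toy usage: 4 dominant taps out of 64, random complex training matrix
      rng = np.random.default_rng(0)
      X = rng.standard_normal((128, 64)) + 1j * rng.standard_normal((128, 64))
      h = np.zeros(64, dtype=complex); h[[3, 17, 30, 55]] = [1, 0.5j, -0.3, 0.8]
      y = X @ h + 0.01 * rng.standard_normal(128)
      print(np.flatnonzero(np.abs(omp_channel_estimate(X, y, 64, 4)) > 0.1))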

  4. Dual Arm Work Package performance estimates and telerobot task network simulation

    SciTech Connect

    Draper, J.V.; Blair, L.M.

    1997-02-01

    This paper describes the methodology and results of a network simulation study of the Dual Arm Work Package (DAWP), to be employed for dismantling the Argonne National Laboratory CP-5 reactor. The development of the simulation model was based upon the results of a task analysis for the same system. This study was performed by the Oak Ridge National Laboratory (ORNL), in the Robotics and Process Systems Division. Funding was provided by the US Department of Energy's Office of Technology Development, Robotics Technology Development Program (RTDP). The RTDP is developing methods of computer simulation to estimate telerobotic system performance. Data were collected to provide point estimates to be used in a task network simulation model. Three skilled operators performed six repetitions of a pipe cutting task representative of typical teleoperation cutting operations.

  5. A Preliminary Axial Fan Design Method with the Consideration of Performance and Noise Characteristics

    NASA Astrophysics Data System (ADS)

    Lee, Chan; Kil, Hyun Gwon

    2010-06-01

    Presented in this paper are an aero-acoustic performance prediction method for fans and its computation procedure, which combine the aerodynamic flow field data, performance, and noise levels of a fan. The internal flow field and the performance of the fan are analyzed by through-flow modeling, an inviscid pitch-averaged quasi-3D flow analysis combined with flow deviation and pressure loss distribution models. Based on the internal flow field data predicted by the through-flow modeling, fan noise is predicted by two models: one for the discrete-frequency noise due to rotating steady aerodynamic thrust and blade interaction, and one for the broadband noise due to the turbulent boundary layer and wake vortex shedding. The present predictions of the flow distribution, performance, and noise level of the fan agree well with actual test results.

  6. Preliminary Performance Evaluation of a Near Zero Energy Home in Callaway, Florida

    SciTech Connect

    Martin, Eric; Parker, Danny; Sherwin, John; Colon, Carlos

    2009-02-20

    This case study reports on a near zero energy home in Callaway, FL. This paper briefly reviews the design and then focuses on the first four months of energy performance during the second half of 2008.

  7. Preliminary results from the evaluation of Cockpit Resource Management training - Performance ratings of flightcrews

    NASA Technical Reports Server (NTRS)

    Helmreich, Robert L.; Wilhelm, John A.; Gregorich, Steven E.; Chidester, Thomas R.

    1990-01-01

    The first data from the NASA/University of Texas Crew Performance project on the behavior of flightcrews with and without formal training in Cockpit Resource Management (CRM) are reported. Expert observers made detailed ratings of 15 components of crew behavior in both line operations and in full mission simulations. The results indicate that such training in crew coordination concepts increases the percentage of crews rated as above average in performance and decreases the percentage rated as below average. The data also show high and unexpected degrees of variation in rated performance among crews flying different aircraft within the same organization. It was also found that the specific behaviors that triggered observer ratings of above or below average performance differed markedly between organizations. Characteristics of experts' ratings and future research needs are also discussed.

  8. Preliminary Performance Analyses of the Constellation Program ARES 1 Crew Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Phillips, Mark; Hanson, John; Shmitt, Terri; Dukemand, Greg; Hays, Jim; Hill, Ashley; Garcia, Jessica

    2007-01-01

    By the time NASA's Exploration Systems Architecture Study (ESAS) report had been released to the public in December 2005, engineers at NASA's Marshall Space Flight Center had already initiated the first of a series of detailed design analysis cycles (DACs) for the Constellation Program Crew Launch Vehicle (CLV), which has been given the name Ares I. As a major component of the Constellation Architecture, the CLV's initial role will be to deliver crew and cargo aboard the newly conceived Crew Exploration Vehicle (CEV) to a staging orbit for eventual rendezvous with the International Space Station (ISS). However, the long-term goal and design focus of the CLV will be to provide launch services for a crewed CEV in support of lunar exploration missions. Key to the success of the CLV design effort and an integral part of each DAC is a detailed performance analysis tailored to assess nominal and dispersed performance of the vehicle, to determine performance sensitivities, and to generate design-driving dispersed trajectories. Results of these analyses provide valuable design information to the program for the current design as well as provide feedback to engineers on how to adjust the current design in order to maintain program goals. This paper presents a condensed subset of the CLV performance analyses performed during the CLV DAC-1 cycle. Deterministic studies include development of the CLV DAC-1 reference trajectories, identification of vehicle stage impact footprints, an assessment of launch window impacts to payload performance, and the computation of select CLV payload partials. Dispersion studies include definition of input uncertainties, Monte Carlo analysis of trajectory performance parameters based on input dispersions, assessment of CLV flight performance reserve (FPR), assessment of orbital insertion accuracy, and an assessment of bending load indicators due to dispersions in vehicle angle of attack and side slip angle. A short discussion of the various

  9. A comparison of the performance of time-delay estimators in medical ultrasound.

    PubMed

    Viola, Francesco; Walker, William F

    2003-04-01

    Time-delay estimation (TDE) is a common operation in ultrasound signal processing. In applications such as blood flow estimation, elastography, phase aberration correction, and many more, the quality of final results is heavily dependent upon the performance of the time-delay estimator implemented. In past years, several algorithms have been developed and applied in medical ultrasound, sonar, radar, and other fields. In this paper we analyze the performance of the widely used normalized and non-normalized correlations, along with normalized covariance, sum absolute differences (SAD), sum squared differences (SSD), hybrid-sign correlation, polarity-coincidence correlation, and the Meyr-Spies method. These techniques have been applied to simulated ultrasound radio frequency (RF) data under a variety of conditions. We show how parameters, which include center frequency, fractional bandwidth, kernel window size, signal decorrelation, and signal-to-noise ratio (SNR), affect the quality of the delay estimate. Simulation results also are compared with a theoretical performance limit set by the Cramér-Rao lower bound (CRLB). Results show that, for high SNR, high signal correlation, and large kernel size, all of the algorithms closely match the theoretical bound, with relative performances that vary by as much as 20%. As conditions degrade, the performances of various algorithms differ more significantly. For signals with a correlation level of 0.98, SNR of 30 dB, center frequency of 5 MHz with a fractional bandwidth of 0.5, and kernel size of 2 µs, the standard deviation of the jitter error is on the order of a few nanoseconds. Normalized correlation, normalized covariance, and SSD have an approximately equal jitter error of 2.23 ns (the value predicted by the CRLB is 2.073 ns), whereas the polarity-coincidence correlation performs less well with a jitter error of 2.74 ns.
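
    To make the kind of estimator being compared concrete, here is a minimal normalized cross-correlation delay estimator over integer lags; the signals, lag range, and noise level are hypothetical, and the sketch does not reproduce the simulation conditions or sub-sample interpolation used in such studies.

      import numpy as np

      def ncc_delay(ref, sig, max_lag):
          """Estimate the delay (in samples) of `sig` relative to `ref` by
          maximizing the normalized cross-correlation over integer lags."""
          best_lag, best_ncc = 0, -np.inf
          n = len(ref)
          for lag in range(-max_lag, max_lag + 1):
              if lag >= 0:
                  a, b = ref[: n - lag], sig[lag:]
              else:
                  a, b = ref[-lag:], sig[: n + lag]
              ncc = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
              if ncc > best_ncc:
                  best_lag, best_ncc = lag, ncc
          return best_lag

      # toy usage: a modulated pulse delayed by 7 samples plus noise
      rng = np.random.default_rng(1)
      t = np.arange(256)
      ref = np.exp(-((t - 100) / 10.0) ** 2) * np.cos(0.5 * t)
      sig = np.roll(ref, 7) + 0.05 * rng.standard_normal(256)
      print(ncc_delay(ref, sig, 20))  # expected to print a lag near 7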

  10. Global Bare Ground Gain in the First Decade of 21st Century from Landsat Data: the Preliminary Results on Estimation and Distribution

    NASA Astrophysics Data System (ADS)

    Ying, Q.; Potapov, P.; Wang, L.; Hansen, M.

    2015-12-01

    Bare ground gain (BGG) is one of the most intensive land surface transformations because of the complete alteration of ecosystem functioning, the complex nature of its temporal nonlinearity and spatial heterogeneity, and its fast-growing trend along with the global population boom and urbanization. However, it is not yet clear what the global dynamics of BGG are or how it is spatially distributed. It is therefore important to monitor BGG as an essential component of land cover change on a locally relevant and globally consistent basis. In this study, we try to answer these questions using over a decade of Landsat satellite observations. Recent developments in optical remote sensing hold tremendous promise for global BGG detection. One data source is the legacy of annual Landsat mosaics from the research of Hansen et al. on global forest dynamics. Following previous research by Hansen et al., BGG observed by Landsat data is defined as a process of land cover change featuring permanent or semi-permanent clearing of vegetation cover by human land use or natural disturbances at the 30-m Landsat pixel scale. A sophisticated method has been developed to capture the change signal from high-dimensional metrics derived from time series of Landsat spectral bands and the continuous bare ground field. By examining the contribution of each metric to the effectiveness of BGG detection, 140 metrics were selected and put into a supervised machine learning algorithm, the bagged classification tree. A recursive strategy was adopted to complete the training data and improve the result. A global BGG training data set totaling around 27.5 million pixels was produced. Additional training was obtained from regional sources such as the bare ground gain layer of Web-enabled Landsat data (WELD) and the impervious surface layer of the National Land Cover Database (NLCD). Independent validation was performed by interpreting stratified samples on Google Earth high resolution images. The preliminary results of the

  11. A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance

    NASA Technical Reports Server (NTRS)

    Woolley, Ryan C.

    2014-01-01

    The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
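
    The analytic relationships referred to rest on the ideal rocket equation applied stage by stage. A minimal sketch, assuming a hypothetical even delta-v split, specific impulse, and structural fraction (none of these are the paper's values), sizes a two-stage vehicle from the payload up:

      import math

      G0 = 9.80665  # standard gravity, m/s^2

      def stage_mass(payload_kg, dv_m_s, isp_s, struct_frac):
          """Gross mass of one stage from the ideal rocket equation.
          struct_frac = inert (dry) stage mass / total stage (propellant + inert) mass."""
          mr = math.exp(dv_m_s / (isp_s * G0))          # mass ratio m0/mf
          denom = 1.0 - struct_frac * mr
          if denom <= 0:
              raise ValueError("delta-v not achievable with this structural fraction")
          return payload_kg * (mr - 1.0) / denom

      # hypothetical two-stage MAV: 14 kg payload to orbit, 4000 m/s total
      # delta-v split evenly, solid-like Isp and structural fraction
      payload = 14.0
      m_stage2 = stage_mass(payload, 2000.0, 290.0, 0.12)
      m_stage1 = stage_mass(payload + m_stage2, 2000.0, 290.0, 0.12)
      print("lift-off mass (kg):", round(payload + m_stage1 + m_stage2, 1))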

  12. Classification Systems for Individual Differences in Multiple-task Performance and Subjective Estimates of Workload

    NASA Technical Reports Server (NTRS)

    Damos, D. L.

    1984-01-01

    Human factors practitioners often are concerned with mental workload in multiple-task situations. Investigations of these situations have demonstrated repeatedly that individuals differ in their subjective estimates of workload. These differences may be attributed in part to individual differences in definitions of workload. However, after allowing for differences in the definition of workload, there are still unexplained individual differences in workload ratings. The relations between individual differences in multiple-task performance, subjective estimates of workload, information processing abilities, and the Type A personality trait were examined.

  13. Influence of the management strategy model on estimating water system performance under climate change

    NASA Astrophysics Data System (ADS)

    Francois, Baptiste; Hingray, Benoit; Creutin, Jean-Dominique; Hendrickx, Frederic

    2015-04-01

    The performance of water systems used worldwide for the management of water resources is expected to be influenced by future changes in regional climates and water uses. Anticipating possible performance changes of a given system requires a modeling chain simulating its management. Operational management is usually not trivial, especially when several conflicting objectives have to be accounted for. Management models are therefore often a crude representation of the real system and they only approximate its performance. Estimated performance changes are expected to depend on the management model used, but this is often not assessed. This communication analyzes the influence of the management strategy representation on the performance of an Alpine reservoir (Serre-Ponçon, South-East of France) for which irrigation supply, hydropower generation and recreational activities are the main objectives. We consider three ways to construct the strategy, referred to as clear-, short- and far-sighted management. They are based on different forecastability degrees of seasonal inflows into the reservoir. The strategies are optimized using a Dynamic Programming algorithm (deterministic for clear-sighted and implicit stochastic for short- and far-sighted). System performance is estimated for an ensemble of future hydro-meteorological projections obtained in the RIWER2030 research project (http://www.lthe.fr/RIWER2030/) from a suite of climate experiments from the EU - ENSEMBLES research project. Our results show that changes in system performance are much more influenced by changes in hydro-meteorological variables than by the choice of strategy modeling. They also show that a simple strategy representation (i.e. clear-sighted management) leads to estimates of performance modifications similar to those obtained with a representation supposedly closer to the real world (i.e. the far-sighted management). The short-sighted management approach led to significantly different results, especially

  14. Heavy oil recovery process: Conceptual engineering of a downhole methanator and preliminary estimate of facilities cost for application to North Slope Alaska

    SciTech Connect

    Gondouin, M.

    1991-10-31

    The West Sak (Upper Cretaceous) sands, overlying the Kuparuk field, would rank among the largest known oil fields in the US, but technical difficulties have so far prevented its commercial exploitation. Steam injection is the most successful and most commonly used method of heavy oil recovery, but its application to the West Sak presents major problems. Such difficulties may be overcome by using a novel approach, in which steam is generated downhole in a catalytic Methanator, from Syngas made at the surface from endothermic reactions (Table 1). The Methanator effluent, containing steam and soluble gases resulting from exothermic reactions (Table 1), is cyclically injected into the reservoir by means of a horizontal drainhole while hot produced fluids flow from a second drainhole into a central production tubing. The downhole reactor feed and BFW flow downward through two concentric tubings. The large-diameter casing required to house the downhole reactor assembly is filled above it with Arctic Pack mud, or crude oil, to further reduce heat leaks. A quantitative analysis of this production scheme for the West Sak required a preliminary engineering of the downhole and surface facilities and a tentative forecast of well production rates. The results, based on published information on the West Sak, have been used to estimate the cost of these facilities, per daily barrel of oil produced. A preliminary economic analysis and conclusions are presented together with an outline of future work. Economic and regulatory conditions which would make this approach viable are discussed. 28 figs.

  15. An appraisal of the 1992 preliminary performance assessment for the Waste Isolation Pilot Plant

    SciTech Connect

    Lee, W.W.L.; Chaturvedi, L.; Silva, M.K.; Weiner, R.; Neill, R.H.

    1994-09-01

    The purpose of the New Mexico Environmental Evaluation Group is to conduct an independent technical evaluation of the Waste Isolation Pilot Plant (WIPP) Project to ensure the protection of the public health and safety and the environment. The WIPP Project, located in southeastern New Mexico, is being constructed as a repository for the disposal of transuranic (TRU) radioactive wastes generated by the national defense programs. The Environmental Evaluation Group (EEG) has reviewed the WIPP 1992 Performance Assessment (Sandia WIPP Performance Assessment Department, 1992). Although this performance assessment was released after the October 1992 passage of the WIPP Land Withdrawal Act (PL 102-579), the work preceded the Act. For individual and ground-water protection, calculations have been done for 1000 years post closure, whereas the US Environmental Protection Agency's Standards (40 CFR 191) issued in 1993 require calculations for 10,000 years. The 1992 Performance Assessment continues to assimilate improved understanding of the geology and hydrogeology of the site, and evolving conceptual models of natural barriers. Progress has been made towards assessing WIPP's compliance with the US Environmental Protection Agency's Standards (40 CFR 191). The 1992 Performance Assessment has addressed several items of major concern to EEG, outlined in the July 1992 review of the 1991 performance assessment (Neill et al., 1992). In particular, the authors are pleased that some key results in this performance assessment deal with the sensitivity of the calculated complementary cumulative distribution functions (CCDF) to alternative conceptual models proposed by EEG -- that flow in the Culebra be treated as single-porosity fracture-flow, with no sorption retardation unless substantiated by experimental data.

  16. Preliminary Findings of Serum Creatinine and Estimated Glomerular Filtration Rate (eGFR) in Adolescents with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Lin, Jin-Ding; Lin, Lan-Ping; Hsieh, Molly; Lin, Pei-Ying

    2010-01-01

    The present study aimed to describe the kidney function profile--serum creatinine and estimated glomerular filtration rate (eGFR)--and to examine the relationships of predisposing factors to abnormal serum creatinine in people with intellectual disabilities (ID). Data were collected in a cross-sectional study of 827 adolescents aged 15-18 years…

  17. Preliminary Performance of Lithium-ion Cell Designs for Ares I Upper Stage Applications

    NASA Technical Reports Server (NTRS)

    Miller, Thomas B.; Reid, Concha M.; Kussmaul, Michael T.

    2011-01-01

    NASA's Ares I Crew Launch Vehicle (CLV) baselined lithium-ion technology for the Upper Stage (US). Under this effort, the NASA Glenn Research Center investigated three different aerospace lithium-ion cell suppliers to assess the performance of the various lithium-ion cell designs under acceptance and characterization testing. This paper describes the overall testing approaches associated with lithium-ion cells, their ampere-hour capacity as a function of temperature and discharge rates, as well as their performance limitations for use on the Ares I US vehicle.

  18. Preliminary analysis of effects of air cooling turbine blades on turbojet-engine performance

    NASA Technical Reports Server (NTRS)

    Schramm, Wilson B; Nachtigall, Alfred J; Arne, Vernon L

    1950-01-01

    The effects of turbine-blade cooling on engine performance were analytically investigated for a turbojet engine in which cooling air is bled from the engine compressor. The analysis was made for a constant turbine-inlet temperature and a range of altitudes to determine the minimum cooling requirements to permit substitution of nonstrategic materials in turbine blading. The results indicate that, for a constant inlet temperature, air cooling of the turbine blades increases the specific fuel consumption and decreases the thrust of the engine. The highest possible cooling effectiveness is desirable to minimize coolant weight flow and its effects on engine performance.

  19. Preliminary Clinical Evaluation of a 4D-CBCT Estimation Technique using Prior Information and Limited-angle Projections

    PubMed Central

    Zhang, You; Yin, Fang-Fang; Pan, Tinsu; Vergalasova, Irina; Ren, Lei

    2015-01-01

    Background and Purpose: A new technique has been previously reported to estimate high-quality 4D-CBCT using prior information and limited-angle projections. This study is to investigate its clinical feasibility through both phantom and patient studies. Materials and Methods: The new technique used to estimate 4D-CBCT is called MMFD-NCC. It is based on the previously reported motion-modeling and free-form deformation (MMFD) method, with the introduction of normalized cross-correlation (NCC) as a new similarity metric. The clinical feasibility of this technique was evaluated by assessing the accuracy of estimated anatomical structures in comparison to those in the ‘ground-truth’ reference 4D-CBCT, using data obtained from a physical phantom and three lung cancer patients. Both volume percentage error (VPE) and center-of-mass error (COME) of the estimated tumor volume were used as the evaluation metrics. Results: The average VPE/COME of the tumor in the prior image was 257.1%/10.1 mm for the phantom study and 55.6%/3.8 mm for the patient study. Using only orthogonal-view 30° projections, the MMFD-NCC has reduced the corresponding values to 7.7%/1.2 mm and 9.6%/1.1 mm, respectively. Conclusions: The MMFD-NCC technique is able to estimate 4D-CBCT images with geometrical accuracy of the tumor within 10% VPE and 2 mm COME, which can be used to improve the localization accuracy of radiotherapy. PMID:25818396
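
    The two evaluation metrics are straightforward to compute from binary tumor masks. The sketch below uses one plausible reading of VPE (non-overlapping volume relative to the reference) and the Euclidean distance between centers of mass for COME; the exact definitions and the voxel spacing are assumptions, not taken from the paper.

      import numpy as np

      def vpe_and_come(est_mask, ref_mask, voxel_mm=(1.0, 1.0, 1.0)):
          """VPE here = non-overlapping volume relative to the reference (assumed
          definition); COME = distance between centers of mass, in mm."""
          est, ref = est_mask.astype(bool), ref_mask.astype(bool)
          vpe = 100.0 * np.logical_xor(est, ref).sum() / ref.sum()
          spacing = np.asarray(voxel_mm, dtype=float)
          com_est = np.array(np.nonzero(est)).mean(axis=1) * spacing
          com_ref = np.array(np.nonzero(ref)).mean(axis=1) * spacing
          return vpe, float(np.linalg.norm(com_est - com_ref))

      # toy usage: two 10x10x10 "tumors" offset by 2 voxels along one axis
      ref = np.zeros((64, 64, 64), dtype=bool); ref[20:30, 20:30, 20:30] = True
      est = np.zeros_like(ref); est[22:32, 20:30, 20:30] = True
      print(vpe_and_come(est, ref))   # expected roughly (40.0, 2.0)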

  20. Academic Performance of Howard Community College Students in Transfer Institutions: Preliminary Findings. Research Report Number 37.

    ERIC Educational Resources Information Center

    Radcliffe, Susan K.

    A study was conducted at Howard Community College (HCC) to determine the performance of HCC students at transfer institutions. Four factors related to transfer success were examined: earning an associate degree at HCC; enrolling in a community college transfer program; length of time spent at HCC; and academic preparation and achievement at the…

  1. Learning Strategies and Self-Efficacy as Predictors of Academic Performance: A Preliminary Study

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2012-01-01

    Empirical research supports the idea that differences in academic performance among students are largely due to their different learning and study strategies. The strategies, in turn, affect the self-efficacy of the students. Two hundred university students were recruited to participate in this study by completing a revised Chinese version of the…

  2. Improving International-Level Chess Players' Performance with an Acceptance-Based Protocol: Preliminary Findings

    ERIC Educational Resources Information Center

    Ruiz, Francisco J.; Luciano, Carmen

    2012-01-01

    This study compared an individual, 4-hr intervention based on acceptance and commitment therapy (ACT) versus a no-contact control condition in improving the performance of international-level chess players. Five participants received the brief ACT protocol, with each matched to another chess player with similar characteristics in the control…

  3. Relationships between handwriting performance and organizational abilities among children with and without dysgraphia: a preliminary study.

    PubMed

    Rosenblum, Sara; Aloni, Tsipi; Josman, Naomi

    2010-01-01

    Organizational ability constitutes one executive function (EF) component essential for common everyday performance. The study aim was to explore the relationship between handwriting performance and organizational ability in school-aged children. Participants were 58 males, aged 7-8 years, 30 with dysgraphia and 28 with proficient handwriting. Group allocation was based on children's scores in the Handwriting Proficiency Screening Questionnaire (HPSQ). They performed the Hebrew Handwriting Evaluation (HHE), and their parents completed the Questionnaire for Assessing Students' Organizational Abilities-for Parents (QASOA-P). Significant differences were found between the groups for handwriting performance (HHE) and organizational abilities (QASOA-P). Significant correlations were found in the dysgraphic group between handwriting spatial arrangement and the QASOA-P mean score. Linear regression indicated that the QASOA-P mean score explained 42% of variance of handwriting proficiency (HPSQ). Based on one discriminant function, 81% of all participants were correctly classified into groups. Study results strongly recommend assessing organizational difficulties in children referred for therapy due to handwriting deficiency.

  4. A Preliminary Investigation of the Effect of Rules on Employee Performance

    ERIC Educational Resources Information Center

    Squires, James; Wilder, David A.

    2010-01-01

    The way in which rules impact workplace performance has been a topic of discussion in the Organizational Behavior Management literature for some time. Despite this interest, there is a dearth of empirical research on the topic. The purpose of this study was to examine the effect of rules and goal setting in the workplace. Participants included two…

  5. Relationship between ELPT™ and TOEFL Performance: A Preliminary Analysis. Research Summary RS-03

    ERIC Educational Resources Information Center

    College Entrance Examination Board, 1998

    1998-01-01

    The College Board conducted a survey of students who took the ELPT test in 1996 and 1997. The survey yielded responses from 141 students with self-reported TOEFL scores. The results suggest a strong, near linear relationship between students' performance levels on these two English language proficiency instruments.

  6. The preliminary development of computer-assisted assessment of Chinese handwriting performance.

    PubMed

    Chang, Shao-Hsia; Yu, Nan-Ying; Shie, Jung-Jiun

    2009-06-01

    This paper describes a pilot study investigating an assessment of Chinese handwriting performance. In an attempt to computerize the existing Tseng Handwriting Problem Checklist (Tseng Checklist), this study employed MATLAB to develop a computer program entitled the Chinese Handwriting Assessment Program (CHAP) to be used for the evaluation of handwriting performance. Through a template-matching approach, the program processed each character by using size-adjustable standard models to calculate the two-dimensional cross-correlation coefficient of a template and a superimposed handwritten character. The program measured size control, spacing, alignment, and the average resemblance between standard models and handwritten characters. The results of the CHAP's test-retest reliability showed that the high correlation coefficients (from .81 to .94) were statistically significant. Correlations between each CHAP and Tseng Checklist item were statistically significant. As assessment tools for handwriting performance need to cover both quantitative and qualitative aspects, the integration of the two tools is a promising means of accomplishing a handwriting performance assessment.

  7. Preliminary estimates of nucleon fluxes in a water target exposed to solar-flare protons: BRYNTRN versus Monte Carlo code

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Lone, M. A.; Wong, P. Y.; Costen, Robert C.

    1994-01-01

    A baryon transport code (BRYNTRN) has previously been verified using available Monte Carlo results for a solar-flare spectrum as the reference. Excellent results were obtained, but the comparisons were limited to the available data on dose and dose equivalent for moderate penetration studies that involve minor contributions from secondary neutrons. To further verify the code, the secondary energy spectra of protons and neutrons are calculated using BRYNTRN and LAHET (Los Alamos High-Energy Transport code, which is a Monte Carlo code). These calculations are compared for three locations within a water slab exposed to the February 1956 solar-proton spectrum. Reasonable agreement was obtained when various considerations related to the calculational techniques and their limitations were taken into account. Although the Monte Carlo results are preliminary, it appears that the neutron albedo, which is not currently treated in BRYNTRN, might be a cause for the large discrepancy seen at small penetration depths. It also appears that the nonelastic neutron production cross sections in BRYNTRN may underestimate the number of neutrons produced in proton collisions with energies below 200 MeV. The notion that the poor energy resolution in BRYNTRN may cause a large truncation error in neutron elastic scattering requires further study.

  8. Preliminary estimates of residence times and apparent ages of ground water in the Chesapeake Bay watershed, and water-quality data from a survey of springs

    USGS Publications Warehouse

    Focazio, Michael J.; Plummer, L. Neil; Bohlke, John K.; Busenberg, Eurybiades; Bachman, L. Joseph; Powars, David S.

    1998-01-01

    Knowledge of the residence times of the ground-water systems in Chesapeake Bay watershed helps resource managers anticipate potential delays between implementation of land-management practices and any improvements in river and estuary water quality. This report presents preliminary estimates of ground-water residence times and apparent ages of water in the shallow aquifers of the Chesapeake Bay watershed. A simple reservoir model, published data, and analyses of spring water were used to estimate residence times and apparent ages of ground-water discharge. Ranges of aquifer hydraulic characteristics throughout the Bay watershed were derived from published literature and were used to estimate ground-water residence times on the basis of a simple reservoir model. Simple combinations of rock type and physiographic province were used to delineate hydrogeomorphic regions (HGMRs) for the study area. The HGMRs are used to facilitate organization and display of the data and analyses. Illustrations depicting the relation of aquifer characteristics and associated residence times as a continuum for each HGMR were developed. In this way, the natural variation of aquifer characteristics can be seen graphically by use of data from selected representative studies. Water samples collected in September and November 1996, from 46 springs throughout the watershed were analyzed for chlorofluorocarbons (CFCs) to estimate the apparent age of ground water. For comparison purposes, apparent ages of water from springs were calculated assuming piston flow. Additional data are given to estimate apparent ages assuming an exponential distribution of ages in spring discharge. Additionally, results from previous studies of CFC-dating of ground water from other springs and wells in the watershed were compiled. The CFC data, and the data on major ions, nutrients, and nitrogen isotopes in the water collected from the 46 springs are included in this report. The apparent ages of water
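
    A simple reservoir model of the kind mentioned reduces to dividing the volume of mobile ground water per unit area by the recharge rate. A minimal sketch with hypothetical aquifer values (not the published ranges used in the report):

      def mean_residence_time_years(sat_thickness_m, porosity, recharge_m_per_yr):
          """Simple reservoir (well-mixed) estimate of mean ground-water residence
          time: mobile water volume per unit area divided by recharge per unit area."""
          return sat_thickness_m * porosity / recharge_m_per_yr

      # hypothetical values: 20 m saturated thickness, 30% porosity,
      # 0.3 m/yr recharge  ->  about a 20-year mean residence time
      print(mean_residence_time_years(20.0, 0.30, 0.30))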

  9. Preliminary Assessment of Suomi-NPP VIIRS On-orbit Radiometric Performance

    NASA Technical Reports Server (NTRS)

    Oudrari, Hassan; DeLuccia, Frank; McIntire, Jeff; Moyer, David; Chiang, Vincent; Xiong, Xiao-xiong; Butler, James

    2012-01-01

    The Visible-Infrared Imaging Radiometer Suite (VIIRS) is a key instrument on-board the Suomi National Polar-orbiting Partnership (NPP) spacecraft that was launched on October 28th 2011. VIIRS was designed to provide moderate- and imaging-resolution coverage of most of the globe twice daily. It is a wide-swath (3,040 km) cross-track scanning radiometer with spatial resolutions of 370 and 740 m at nadir for imaging and moderate bands, respectively. It has 22 spectral bands covering the spectrum between 0.412 µm and 12.01 µm, including 14 reflective solar bands (RSB), 7 thermal emissive bands (TEB), and 1 day-night band (DNB). VIIRS observations are used to generate 22 environmental data records (EDRs). This paper will briefly describe NPP VIIRS calibration strategies performed by the independent government team for the initial on-orbit Intensive Calibration and Validation (ICV) activities. In addition, this paper will provide an early assessment of the sensor on-orbit radiometric performance, such as the sensor signal-to-noise ratios (SNRs), dual gain transition verification, dynamic range and linearity, reflective bands calibration based on the solar diffuser (SD) and solar diffuser stability monitor (SDSM), and emissive bands calibration based on the on-board blackbody calibration (OBC). A comprehensive set of performance metrics generated during the pre-launch testing program will be compared to VIIRS on-orbit early performance, and a plan for future cal/val activities and performance enhancements will be presented.

  10. Performance comparison of rigid and affine models for motion estimation using ultrasound radio-frequency signals.

    PubMed

    Pan, Xiaochang; Liu, Ke; Shao, Jinghua; Gao, Jing; Huang, Lingyun; Bai, Jing; Luo, Jianwen

    2015-11-01

    Tissue motion estimation is widely used in many ultrasound techniques. Rigid-model-based and nonrigid-model-based methods are two main groups of space-domain methods of tissue motion estimation. The affine model is one of the commonly used nonrigid models. The performances of the rigid model and affine model have not been compared on ultrasound RF signals, which have been demonstrated to obtain higher accuracy, precision, and resolution in motion estimation compared with B-mode images. In this study, three methods, i.e., the normalized cross-correlation method with rigid model (NCC), the optical flow method with rigid model (OFRM), and the optical flow method with affine model (OFAM), are compared using ultrasound RF signals, rather than the B-mode images used in previous studies. Simulations, phantom, and in vivo experiments are conducted to make the comparison. In the simulations, the root-mean-square errors (RMSEs) of axial and lateral displacements and strains are used to assess the accuracy of motion estimation, and the elastographic signal-to-noise ratio (SNRe) and contrast-to-noise ratio (CNRe) are used to evaluate the quality of axial strain images. In the phantom experiments, the registration error between the pre- and postdeformation RF signals, as well as the SNRe and CNRe of axial strain images, are utilized as the evaluation criteria. In the in vivo experiments, the registration error is used to evaluate the estimation performance. The results show that the affine-model-based method (i.e., OFAM) obtains the lowest RMSE or registration error and the highest SNRe and CNRe among all the methods. The affine model is demonstrated to be superior to the rigid model in motion estimation based on RF signals.

  11. Differential Shift Estimation in the Absence of Coherence: Performance Analysis and Benefits of Polarimetry

    NASA Astrophysics Data System (ADS)

    Villano, Michelangelo; Papathanassiou, Konstantinos P.

    2011-03-01

    The estimation of the local differential shift between synthetic aperture radar (SAR) images has proven to be an effective technique for monitoring glacier surface motion. As images acquired over glaciers by short wavelength SAR systems, such as TerraSAR-X, often suffer from a lack of coherence, image features have to be exploited for the shift estimation (feature-tracking). The present paper addresses feature-tracking with special attention to the feasibility requirements and the achievable accuracy of the shift estimation. In particular, the dependence of the performance on image characteristics, such as texture parameters, signal-to-noise ratio (SNR) and resolution, as well as on processing techniques (despeckling, normalised cross-correlation versus maximum likelihood estimation) is analysed by means of Monte-Carlo simulations. TerraSAR-X data acquired over the Helheim glacier, Greenland, and the Aletsch glacier, Switzerland, have been processed to validate the simulation results. Feature-tracking can benefit from the availability of fully-polarimetric data. As some image characteristics, in fact, are polarisation-dependent, the selection of an optimum polarisation leads to improved performance. Furthermore, fully-polarimetric SAR images can be despeckled without degrading the resolution, so that additional (smaller-scale) features can be exploited.

  12. Partitioning tracer test for detection, estimation, and remediation performance assessment of subsurface nonaqueous phase liquids

    SciTech Connect

    Jin, M.; Delshad, M.; Dwarakanath, V.; McKinney, D.C.; Pope, G.A.; Sepehrnoori, K.; Tilburg, C.E.; Jackson, R.E.

    1995-05-01

    In this paper we present a partitioning interwell tracer test (PITT) technique for the detection, estimation, and remediation performance assessment of the subsurface contaminated by nonaqueous phase liquids (NAPLs). We demonstrate the effectiveness of this technique by examples of experimental and simulation results. The experimental results are from partitioning tracer experiments in columns packed with Ottawa sand. Both the method of moments and inverse modeling techniques for estimating NAPL saturation in the sand packs are demonstrated. In the simulation examples we use UTCHEM, a comprehensive three-dimensional, chemical flood compositional simulator developed at the University of Texas, to simulate a hypothetical two-dimensional aquifer with properties similar to the Borden site contaminated by tetrachloroethylene (PCE), and we show how partitioning interwell tracer tests can be used to estimate the amount of PCE contaminant before remedial action and as the remediation process proceeds. Tracer test results from different stages of remediation are compared to determine the quantity of PCE removed and the amount remaining. Both the experimental (small-scale) and simulation (large-scale) results demonstrate that PITT can be used as an innovative and effective technique to detect and estimate the amount of residual NAPL and for remediation performance assessment in subsurface formations. 43 refs., 10 figs., 1 tab.
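
    The method-of-moments step rests on the retardation of the partitioning tracer relative to the conservative tracer. A sketch of the standard first-moment estimate of NAPL saturation follows; the functional form S_N = (R - 1)/(R - 1 + K) is the usual one for this test, but the breakthrough curves and partition coefficient below are made up for illustration.

      import numpy as np

      def first_temporal_moment(times, conc):
          """Mean (first-moment) arrival time of a tracer breakthrough curve,
          assuming uniformly spaced samples."""
          times = np.asarray(times, dtype=float)
          conc = np.asarray(conc, dtype=float)
          return float((times * conc).sum() / conc.sum())

      def napl_saturation(t_partitioning, t_conservative, partition_coeff):
          """Method-of-moments NAPL saturation from tracer retardation:
          S_N = (R - 1) / (R - 1 + K), with R = t_partitioning / t_conservative."""
          R = t_partitioning / t_conservative
          return (R - 1.0) / (R - 1.0 + partition_coeff)

      # hypothetical breakthrough curves and a partition coefficient K = 50
      t = np.linspace(0.0, 10.0, 200)                  # days
      c_cons = np.exp(-((t - 3.0) / 0.8) ** 2)         # conservative tracer
      c_part = np.exp(-((t - 4.5) / 1.0) ** 2)         # partitioning (retarded) tracer
      tn = first_temporal_moment(t, c_cons)
      tp = first_temporal_moment(t, c_part)
      print(round(napl_saturation(tp, tn, 50.0), 4))   # small residual saturation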

  13. A preliminary study of nasal airway patency and its potential effect on speech performance.

    PubMed

    Dalston, R M; Warren, D W; Dalston, E T

    1992-07-01

    The relationship between nasal airway size and articulatory performance was studied in a group of cleft palate patients. Articulation analysis revealed that children with bilateral cleft lip and palate were nearly twice as likely to manifest compensatory articulations as children with unilateral cleft lip and palate or with cleft palate only. When subjects were grouped according to speech performance, aerodynamic assessment indicated that children with compensatory articulations had significantly larger nasal cross-sectional areas than children without compensatory articulations. The findings suggest that children with comparatively large nasal airways may be at increased risk for developing abnormal speech patterns. If these findings are confirmed by further research, such children may be candidates for relatively early palate repair.

  14. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    SciTech Connect

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode and codes are optimized for single core computing only. However, the increasing complexity in the power grid models requires more intensive computation. The traditional simulation tools will soon not be able to meet the grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large size state estimation problems within one second and achieve a near-linear speedup of 9,800 with 10,000 cores for contingency analysis application. The performance evaluation is presented to show its effectiveness.
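
    Contingency analysis is naturally parallel because each contingency case can be solved independently, which is the property a high performance computing implementation exploits. The sketch below only illustrates that distribution pattern with Python multiprocessing; the power-flow solve is a hypothetical stand-in, not the software described in the paper.

      from multiprocessing import Pool

      def solve_contingency(outage_id):
          """Hypothetical stand-in for a power-flow solve with one element removed.
          A real implementation would rebuild the network model and run AC power flow."""
          # ... solve power flow for the network with `outage_id` out of service ...
          return outage_id, "converged"

      def run_contingency_analysis(outage_ids, n_workers=8):
          """Distribute independent contingency cases across worker processes."""
          with Pool(processes=n_workers) as pool:
              return dict(pool.map(solve_contingency, outage_ids))

      if __name__ == "__main__":
          results = run_contingency_analysis(range(100))
          print(len(results), "contingency cases evaluated")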

  15. Effect of noble gas mixtures on the performance of regenerative-type cryocoolers analytical estimate

    NASA Astrophysics Data System (ADS)

    Daney, D. E.

    1990-09-01

    The performance of regenerators that use noble gas mixtures is compared to the performance of those that use pure helium gas. Both helium-argon and helium-krypton mixtures are investigated. For some heat transfer surfaces, a modest gain in heat transfer can be achieved with these mixtures. The concomitant increase in pressure drop, however, more than offsets the heat transfer gain, so the net regenerator loss increases for all evaluated cases. The dependence of heat transfer on Prandtl number (Pr) has not been measured for the range associated with noble gas mixtures, 0.2 < Pr < 0.5, and it is estimated that the uncertainty from this source can exceed 20 percent. Estimates for the transport properties (Prandtl number, viscosity, and thermal conductivity) of helium-argon and helium-krypton mixtures are given because of the absence of experimental data at low temperature.

  16. Performance of default risk model with barrier option framework and maximum likelihood estimation: Evidence from Taiwan

    NASA Astrophysics Data System (ADS)

    Chou, Heng-Chih; Wang, David

    2007-11-01

    We investigate the performance of a default risk model based on the barrier option framework with maximum likelihood estimation. We provide empirical validation of the model by showing that implied default barriers are statistically significant for a sample of construction firms in Taiwan over the period 1994-2004. We find that our model dominates the commonly adopted models, Merton model, Z-score model and ZETA model. Moreover, we test the n-year-ahead prediction performance of the model and find evidence that the prediction accuracy of the model improves as the forecast horizon decreases. Finally, we assess the effect of estimated default risk on equity returns and find that default risk is able to explain equity returns and that default risk is a variable worth considering in asset-pricing tests, above and beyond size and book-to-market.
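
    For orientation, the Merton benchmark that the barrier-option model is compared against maps asset value, volatility, and the face value of debt into a distance to default. A sketch of the textbook Merton-style default probability (not the barrier-option model with maximum likelihood estimation studied here), with hypothetical firm values:

      from math import log, sqrt
      from statistics import NormalDist

      def merton_default_probability(asset_value, debt_face, asset_vol, mu, horizon_yr):
          """Textbook Merton-style default probability P(V_T < D) under a
          lognormal asset process with drift mu and volatility asset_vol."""
          d2 = (log(asset_value / debt_face)
                + (mu - 0.5 * asset_vol ** 2) * horizon_yr) / (asset_vol * sqrt(horizon_yr))
          return NormalDist().cdf(-d2)

      # hypothetical firm: assets 120, debt 100, 25% asset volatility, 5% drift, 1 year
      print(round(merton_default_probability(120.0, 100.0, 0.25, 0.05, 1.0), 4))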

  17. Performance Bounds on Micro-Doppler Estimation and Adaptive Waveform Design Using OFDM Signals

    SciTech Connect

    Sen, Satyabrata; Barhen, Jacob; Glover, Charles Wayne

    2014-01-01

    We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies. This is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitted OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for larger numbers of OFDM subcarriers.

  18. Definition and preliminary design of the Laser Atmospheric Wind Sounder (LAWS) phase 1. Volume 3: Program cost estimates

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Cost estimates for phase C/D of the laser atmospheric wind sounder (LAWS) program are presented. This information provides a framework for cost, budget, and program planning estimates for LAWS. Volume 3 is divided into three sections. Section 1 details the approach taken to produce the cost figures, including the assumptions regarding the schedule for phase C/D and the methodology and rationale for costing the various work breakdown structure (WBS) elements. Section 2 shows a breakdown of the cost by WBS element, with the cost divided into non-recurring and recurring expenditures. Note that throughout this volume the cost is given in 1990 dollars, with bottom line totals also expressed in 1988 dollars (1 dollar(88) = 0.931 dollar(90)). Section 3 shows a breakdown of the cost by year. The WBS and WBS dictionary are included as an attachment to this report.

  19. The area-time-integral technique to estimate convective rain volumes over areas applied to satellite data - A preliminary investigation

    NASA Technical Reports Server (NTRS)

    Doneaud, Andre A.; Miller, James R., Jr.; Johnson, L. Ronald; Vonder Haar, Thomas H.; Laybe, Patrick

    1987-01-01

    The use of the area-time-integral (ATI) technique, based only on satellite data, to estimate convective rain volume over a moving target is examined. The technique is based on the correlation between the radar echo area coverage integrated over the lifetime of the storm and the radar estimated rain volume. The processing of the GOES and radar data collected in 1981 is described. The radar and satellite parameters for six convective clusters from storm events occurring on June 12 and July 2, 1981 are analyzed and compared in terms of time steps and cluster lifetimes. Rain volume is calculated by first using the regression analysis to generate the regression equation used to obtain the ATI; the ATI versus rain volume relation is then employed to compute rain volume. The data reveal that the ATI technique using satellite data is applicable to the calculation of rain volume.
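
    The ATI relation is essentially linear: integrate the echo (or cold-cloud) area over the cluster lifetime and scale it by a regression coefficient. A minimal sketch with a hypothetical area time series and coefficient (not the regression fit reported in the study):

      def area_time_integral(areas_km2, dt_hours):
          """Area-time integral: echo area summed over the storm lifetime (km^2 h)."""
          return sum(areas_km2) * dt_hours

      def rain_volume_km2_mm(ati_km2_h, coeff_mm_per_h=3.7):
          """Rain volume from the near-linear ATI relation V ~= coeff * ATI.
          The coefficient is a hypothetical regression value, not the paper's fit;
          1 km^2 mm of rain volume equals 1000 m^3 of water."""
          return coeff_mm_per_h * ati_km2_h

      # hypothetical convective cluster sampled every half hour
      areas = [120, 450, 800, 950, 700, 300, 90]      # km^2 per satellite image
      ati = area_time_integral(areas, 0.5)            # km^2 h
      print(ati, rain_volume_km2_mm(ati))             # ~1705 km^2 h -> ~6300 km^2 mm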

  20. Comparison of performance of some common Hartmann-Shack centroid estimation methods

    NASA Astrophysics Data System (ADS)

    Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.

    2016-03-01

    The accuracy of the estimation of optical aberrations by measuring the distorted wave front using a Hartmann-Shack wave front sensor (HSWS) is mainly dependent upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest spot centroid, first moment centroid, weighted center of gravity, and intensity weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these processes of centroid estimation are sensitive to the influence of reflections, scattered light, and noise, especially in the case where the signal spot area is small compared to the whole sub-aperture area. In this paper, we give a comparison of the performance of the commonly used centroiding methods for the estimation of optical aberrations, with and without the use of some pre-processing steps (thresholding, Gaussian smoothing and adaptive windowing). As an example we use the aberrations of a human eye model. This is done using the raw data collected from a custom-made ophthalmic aberrometer and a model eye to emulate myopic and hyper-metropic defocus values up to 2 Diopters. We show that the use of any simple centroiding algorithm is sufficient in the case of ophthalmic applications for estimating aberrations within the typical clinically acceptable limit of a quarter-Diopter margin, when certain pre-processing steps to reduce the impact of external factors are used.
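
    The centroiding methods compared are variants of the intensity-weighted first moment. As a simple illustration, the sketch below computes a thresholded first-moment (center-of-gravity) centroid for one sub-aperture; the threshold fraction and the synthetic spot are assumptions for demonstration only.

      import numpy as np

      def thresholded_centroid(sub_aperture, threshold_frac=0.2):
          """First-moment (center-of-gravity) centroid of a Hartmann-Shack spot,
          after subtracting a simple threshold to suppress background and straylight."""
          img = sub_aperture.astype(float)
          img = img - threshold_frac * img.max()
          img[img < 0] = 0.0
          total = img.sum()
          if total == 0:
              raise ValueError("no signal above threshold in this sub-aperture")
          rows, cols = np.indices(img.shape)
          return (rows * img).sum() / total, (cols * img).sum() / total

      # toy spot centred near (12.3, 7.8) on a 24x24 sub-aperture with noise
      rng = np.random.default_rng(2)
      r, c = np.indices((24, 24))
      spot = np.exp(-(((r - 12.3) ** 2 + (c - 7.8) ** 2) / (2 * 2.0 ** 2)))
      spot += 0.02 * rng.random((24, 24))
      print(thresholded_centroid(spot))  # expected close to (12.3, 7.8)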

  1. A Preliminary Exploration of Concussion and Strength Performance in Youth Ice Hockey Players.

    PubMed

    Reed, N; Taha, T; Monette, G; Keightley, M

    2016-08-01

    The objective of this study was to describe the effect of concussion on upper and lower body strength in children and youth athletes. The participant group was made up of 178 unique male and female ice hockey players (ages 8-14 years). Using a 3-year prospective longitudinal research design, baseline and post-concussion data on hand grip strength, jump tests, and leg maximal voluntary contraction were collected. Using a linear mixed-effects model, no significant differences were found when comparing the baseline strength performance of individuals who went on to experience a concussion and those who did not. When accounting for sex, multiple concussions, and ongoing changes in strength associated with age, weaker hand grip scores were found following concussion while participants were still symptomatic. Lower squat jump heights were achieved while participants were symptomatic as well as when they were no longer self-reporting symptoms associated with concussion. This study represents an initial step towards better understanding strength performance following concussion, which may limit the on- and off-ice performance of youth ice hockey players as well as predispose youth to subsequent injuries. PMID:27191209

  2. Investigation on the high efficiency volume Bragg gratings performances for spectrometry in space environment: preliminary results

    NASA Astrophysics Data System (ADS)

    Loicq, Jérôme; Gaspar Venancio, Luis Miguel; Georges, Marc

    2012-09-01

    The special properties of Volume Bragg Gratings (VBGs) make them good candidates for spectrometry applications where high spectral resolution, a low level of straylight and low polarisation sensitivity are required. It is therefore of interest to assess the maturity and suitability of VBGs as an enabling technology for future ESA missions with demanding requirements for spectrometry. The suitability of VBGs for space application is being investigated in the frame of a project led by CSL and funded by the European Space Agency. The goal of this work is twofold: first, the theoretical advantages and drawbacks of VBGs with respect to other technologies with identical functionalities are assessed, and second, the performances of VBG samples in a representative space environment are experimentally evaluated. The performances of samples of two VBG technologies, Photo-Thermo-Refractive (PTR) glass and DiChromated Gelatine (DCG), are assessed and compared in the Hα, O2-B and NIR bands. The tests are performed under vacuum combined with temperature cycling in the range of 200 K to 300 K. A dedicated test bench experiment is designed to evaluate the impact of temperature on the spectral efficiency and to determine the optical wavefront error of the diffracted beam. Furthermore, the degradation of diffraction efficiency under gamma irradiation is assessed. Finally, the straylight, the diffraction efficiency under conical incidence and the polarisation sensitivity are evaluated.

  3. Word memory test performance in Canadian adolescents with learning disabilities: a preliminary study.

    PubMed

    Larochette, Anne-Claire; Harrison, Allyson G

    2012-01-01

    The purpose of this study was to evaluate Word Memory Test (WMT) performance in students with identified learning disabilities (LDs) who were providing good effort, in order to examine the influence of severe reading or learning problems on WMT performance. Participants were 63 students with LDs aged 11 to 14 years old (M = 12.19 years), who completed psychoeducational assessments as part of a transition program to secondary school. Participants were administered a battery of psychodiagnostic tests including the WMT. Results indicated that 9.5% of students with LDs met Criterion A on the WMT (i.e., performed below cut-offs on any of the first three subtests of the WMT), but less than 1% met both criteria necessary for identification of low effort. Failure on the first three subtests of the WMT was associated with word reading at or below the 1st percentile and severely impaired phonetic decoding and phonological awareness skills. These results indicate that the majority of students with a history of LD are capable of passing the WMT, and that use of profile analysis reduces the false-positive rate to below 1%. PMID:23428276

  4. Word memory test performance in Canadian adolescents with learning disabilities: a preliminary study.

    PubMed

    Larochette, Anne-Claire; Harrison, Allyson G

    2012-01-01

    The purpose of this study was to evaluate Word Memory Test (WMT) performance in students with identified learning disabilities (LDs) who were providing good effort, in order to examine the influence of severe reading or learning problems on WMT performance. Participants were 63 students with LDs aged 11 to 14 years old (M = 12.19 years), who completed psychoeducational assessments as part of a transition program to secondary school. Participants were administered a battery of psychodiagnostic tests including the WMT. Results indicated that 9.5% of students with LDs met Criterion A on the WMT (i.e., performed below cut-offs on any of the first three subtests of the WMT), but less than 1% met both criteria necessary for identification of low effort. Failure on the first three subtests of the WMT was associated with word reading at or below the 1st percentile and severely impaired phonetic decoding and phonological awareness skills. These results indicate that the majority of students with a history of LD are capable of passing the WMT, and that use of profile analysis reduces the false-positive rate to below 1%.

  5. A Preliminary Exploration of Concussion and Strength Performance in Youth Ice Hockey Players.

    PubMed

    Reed, N; Taha, T; Monette, G; Keightley, M

    2016-08-01

    The objective of this study was to describe the effect of concussion on upper and lower body strength in children and youth athletes. The participant group was made up of 178 unique male and female ice hockey players (ages 8-14 years). Using a 3-year prospective longitudinal research design, baseline and post-concussion data on hand grip strength, jump tests, and leg maximal voluntary contraction were collected. Using a linear mixed-effects model, no significant differences were found when comparing the baseline strength performance of individuals who went on to experience a concussion and those who did not. When accounting for sex, multiple concussions, and ongoing changes in strength associated with age, weaker hand grip scores were found following concussion while participants were still symptomatic. Lower squat jump heights were achieved while participants were symptomatic as well as when they were no longer self-reporting symptoms associated with concussion. This study represents an initial step towards a better understanding of strength performance following concussion, which may limit the on- and off-ice performance of youth ice hockey players and predispose them to subsequent injuries.

  6. A preliminary study suggests that nicotine and prefrontal dopamine affect cortico-striatal areas in smokers with performance feedback

    PubMed Central

    Lee, M. R.; Gallen, C.L.; Ross, T.J.; Kurup, P.; Salmeron, B.J.; Hodgkinson, C.A.; Goldman, D.; Stein, E. A.; Enoch, M.A.

    2014-01-01

    Nicotine and tonic DA levels (as inferred by COMT Val158Met genotype) interact to affect prefrontal processing. Prefrontal cortical areas are involved in the response to performance feedback, which is impaired in smokers. We investigated whether there is a nicotine × COMT genotype interaction in brain circuitry during performance feedback of a reward task. We scanned 23 healthy smokers (10 Val/Val homozygotes, 13 Met allele carriers) during two fMRI sessions while subjects were wearing a nicotine or placebo patch. A significant nicotine × COMT genotype interaction for BOLD signal during performance feedback in corticostriatal areas was seen. Activation in these areas during the nicotine patch condition was greater in Val/Val homozygotes and reduced in Met allele carriers. During negative performance feedback, the change in activation in error detection areas such as the anterior cingulate cortex (ACC)/superior frontal gyrus on nicotine compared to placebo was greater in Val/Val homozygotes compared to Met allele carriers. With transdermal nicotine administration, Val/Val homozygotes showed greater activation with performance feedback in the dorsal striatum, an area associated with habitual responding. In response to negative feedback, Val/Val homozygotes had greater activation in error detection areas, including the ACC, suggesting increased sensitivity to loss with nicotine exposure. Although these results are preliminary due to the small sample size, they suggest a possible neurobiological mechanism underlying the clinical observation that Val/Val homozygotes, presumably with elevated COMT activity compared to Met allele carriers and therefore reduced prefrontal DA levels, have poorer outcomes with nicotine replacement therapy. PMID:23433232

  7. Estimate of Technical Potential for Minimum Efficiency Performance Standards in 13 Major World Economies

    SciTech Connect

    Letschert, Virginie; Desroches, Louis-Benoit; Ke, Jing; McNeil, Michael

    2012-07-01

    As part of the ongoing effort to estimate the foreseeable impacts of aggressive minimum efficiency performance standards (MEPS) programs in the world’s major economies, Lawrence Berkeley National Laboratory (LBNL) has developed a scenario to analyze the technical potential of MEPS in 13 major economies around the world. The “best available technology” (BAT) scenario seeks to determine the maximum potential savings that would result from diffusion of the most efficient available technologies in these major economies.

  8. The performance of phylogenetic algorithms in estimating haplotype genealogies with migration.

    PubMed

    Salzburger, Walter; Ewing, Greg B; Von Haeseler, Arndt

    2011-05-01

    Genealogies estimated from haplotypic genetic data play a prominent role in various biological disciplines in general and in phylogenetics, population genetics and phylogeography in particular. Several software packages have specifically been developed for the purpose of reconstructing genealogies from closely related, and hence, highly similar haplotype sequence data. Here, we use simulated data sets to test the performance of traditional phylogenetic algorithms, neighbour-joining, maximum parsimony and maximum likelihood in estimating genealogies from nonrecombining haplotypic genetic data. We demonstrate that these methods are suitable for constructing genealogies from sets of closely related DNA sequences with or without migration. As genealogies based on phylogenetic reconstructions are fully resolved, but not necessarily bifurcating, and without reticulations, these approaches outperform widespread 'network' constructing methods. In our simulations of coalescent scenarios involving panmictic, symmetric and asymmetric migration, we found that phylogenetic reconstruction methods performed well, while the statistical parsimony approach as implemented in TCS performed poorly. Overall, parsimony as implemented in the PHYLIP package performed slightly better than other methods. We further point out that we are not making the case that widespread 'network' constructing methods are bad, but that traditional phylogenetic tree finding methods are applicable to haplotypic data and exhibit reasonable performance with respect to accuracy and robustness. We also discuss some of the problems of converting a tree to a haplotype genealogy, in particular that it is nonunique. PMID:21457168

  9. Quantifying performance limitations of Kalman filters in state vector estimation problems

    NASA Astrophysics Data System (ADS)

    Bageshwar, Vibhor Lal

    In certain applications, the performance objectives of a Kalman filter (KF) are to compute unbiased, minimum variance estimates of a state mean vector governed by a stochastic system. The KF can be considered as a model based algorithm used to recursively estimate the state mean vector and state covariance matrix. The general objective of this thesis is to investigate the performance limitations of the KF in three state vector estimation applications. Stochastic observability is a property of a system and refers to the existence of a filter for which the errors of the estimated state mean vector have bounded variance. In the first application, we derive a test to assess the stochastic observability of a KF implemented for discrete linear time-varying systems consisting of known, deterministic parameters. This class of system includes discrete nonlinear systems linearized about the true state vector trajectory. We demonstrate the utility of the stochastic observability test using an aided INS problem. Attitude determination systems consist of a sensor set, a stochastic system, and a filter to estimate attitude. In the second application, we design an inertially aided (IA) vector matching algorithm (VMA) architecture for estimating a spacecraft's attitude. The sensor set includes rate gyros and a three-axis magnetometer (TAM). The VMA is a filtering algorithm that solves Wahba's problem. The VMA is then extended by incorporating dynamic and sensor models to formulate the IA VMA architecture. We evaluate the performance of the IA VMA architectures by using an extended KF to blend post-processed spaceflight data. Model predictive control (MPC) algorithms achieve offset-free control by augmenting the nominal system model with a disturbance model. In the third application, we consider an offset-free MPC framework that includes an output integrator disturbance model and a KF to estimate the state and disturbance vectors. Using root locus techniques, we identify sufficient

  10. Tonal cues modulate line bisection performance: preliminary evidence for a new rehabilitation prospect?

    PubMed

    Ishihara, Masami; Revol, Patrice; Jacquin-Courtois, Sophie; Mayet, Romaine; Rode, Gilles; Boisson, Dominique; Farnè, Alessandro; Rossetti, Yves

    2013-01-01

    The effect of the presentation of two different auditory pitches (high and low) on manual line-bisection performance was studied to investigate the relationship between the space and magnitude representations underlying motor acts. Participants were asked to mark the midpoint of a given line with a pen while listening to a pitch via headphones. In healthy participants, the effect of the presentation order of the auditory stimuli (blocked or alternating) was tested (Experiment 1). The results showed no biasing effect of pitch with blocked-order presentation, whereas the alternating presentation modulated line bisection. Lower pitch produced leftward or downward bisection biases whereas higher pitch produced rightward or upward biases, suggesting that visuomotor processing can be spatially modulated by irrelevant auditory cues. In Experiment 2, the effect of such alternating stimulation on line bisection was tested in right-brain-damaged patients with and without unilateral neglect. Similar biasing effects caused by auditory cues were observed, although the white-noise presentation also affected the patient's performance. Additionally, the effect of pitch difference was larger for the neglect patient than for the no-neglect patient as well as for healthy participants. The neglect patient's bisection performance gradually improved during the experiment and was maintained even after 1 week. It is therefore concluded that auditory cues, characterized by both the pitch difference and the dynamic alternation, influence spatial representations. The larger biasing effect seen in the neglect patient compared to the no-neglect patient and healthy participants suggests that auditory cues could modulate the direction of the attentional bias that is characteristic of neglect patients. Thus, the alternating presentation of auditory cues could be used as a rehabilitation tool for neglect patients. The space-pitch associations are discussed in terms of a generalized

  11. A preliminary study of the performance and characteristics of a supersonic executive aircraft

    NASA Technical Reports Server (NTRS)

    Mascitti, V. R.

    1977-01-01

    The impact of advanced supersonic technologies on the performance and characteristics of a supersonic executive aircraft was studied in four configurations with different engine locations and wing/body blending, using an advanced nonafterburning turbojet or a variable cycle engine. A Mach 2.2 design Douglas scaled arrow-wing was used with Learjet 35 accommodations. All four configurations with turbojet engines meet the performance goals of 5926 km (3200 n.mi.) range, 1981 meters (6500 feet) takeoff field length, and 77 meters per second (150 knots) approach speed. The noise levels of the turbojet configurations studied are excessive; however, a turbojet with a mechanical suppressor was not studied. The variable cycle engine configuration is deficient in range by 555 km (300 n.mi.) but nearly meets subsonic noise rules (FAR 36, 1977 edition) if coannular noise relief is assumed. All configurations are in the 33566 to 36287 kg (74,000 to 80,000 lbm) takeoff gross weight class when incorporating current titanium manufacturing technology.

  12. Evaluation of Exercise Tolerance in Dialysis Patients Performing Tai Chi Training: Preliminary Study

    PubMed Central

    Bulińska, Katarzyna; Kusztal, Mariusz; Kowalska, Joanna; Rogowski, Łukasz; Zembroń-Łacny, Agnieszka; Gołębiowski, Tomasz; Ochmann, Bartosz; Pawlaczyk, Weronika; Woźniewski, Marek

    2016-01-01

    Introduction. Patients with end-stage renal disease (ESRD) have poor physical performance and exercise capacity due to frequent dialysis treatments. Tai Chi exercises can be very useful in the rehabilitation of people with ESRD. Objectives. The aim of the study was to assess exercise capacity in ESRD patients participating in 6-month Tai Chi training. Patients and Methods. Twenty dialysis patients from Wroclaw took part in the training; at the end of the project, 14 patients remained (age 69.2 ± 8.6 years). A 6-minute walk test (6MWT) and spiroergometry were performed at the beginning and after 6 months of training. Results. After 6 months of Tai Chi, significant improvements were recorded in mean distance in the 6MWT (387.89 versus 436.36 m), rate of perceived exertion (7.4 versus 4.7), and spiroergometry (8.71 versus 10.08 min). Conclusions. In the ESRD patients taking part in Tai Chi training, a definite improvement in exercise tolerance was recorded after the 6-month training. Tai Chi exercises conducted on days without dialysis can be an effective and interesting form of rehabilitation for patients, offering them a chance for a better quality of life and fewer of the falls and hospitalisations that result from them. PMID:27547228

  13. Preliminary performance analysis of the Multi-Conjugate AO system of the EST

    NASA Astrophysics Data System (ADS)

    Montilla, Icíar; Béchet, Clémentine; Langlois, Maud; Tallon, Michel; Collados, Manuel

    2013-12-01

    The European Solar Telescope (EST), a 4-meter diameter world-class facility, has been designed to measure the properties of the solar magnetic field with great accuracy and high spatial resolution. For that reason, it incorporates an innovative built-in Multi-Conjugate Adaptive Optics (MCAO) system featuring 4 high-altitude DMs. It combines a narrow-field, high-order wavefront sensor, providing the information to correct the ground layer, and a wide-field, lower-order sensor to control the higher-altitude mirrors. Using sensors that collect wide-field information has several implications; for example, it averages wavefront information from different sky directions, causing the Strehl ratio to drop for low-elevation observations. So far these effects have not been studied in MCAO. We analyze this effect using the Fractal Iterative Method (FrIM), which incorporates a wide-field Shack-Hartmann sensor, and we perform end-to-end simulations of the EST MCAO system to analyze its performance for a large range of elevations, as required in solar observations, and as a function of the asterism geometry and the number and height of DMs, in order to find the best system configuration.

  14. Performance evaluation of MCT arrays developed for SWIR and hyperspectral applications: test bench and preliminary results

    NASA Astrophysics Data System (ADS)

    Duvet, L.; Martin, E.; Nelms, N.

    2009-07-01

    We report here the first results of a performance evaluation program for two MCT arrays developed under ESA (European Space Agency) contracts by SOFRADIR. The program will first repeat some of the electro-optical tests performed by the manufacturer and then focus on dark current and quantum efficiency. The first detector has a cut-off at 2.5 um and a format of 1000 x 256 (so-called "SHOWMA") with 30 um pitch. A customized dewar has been manufactured to allow, in particular, proper dark current measurements over a wide range of temperatures. The second detector has a cut-off at 2.2 um with a format of 500 x 256 (so-called "BEPI") and a 30 um pitch. The particularity of the second detector is that its CdZnTe substrate has been removed, extending sensitivity down to 0.4 um, as required for the targeted application (the hyperspectral imager on board BepiColombo). It was delivered in a sealed, dedicated, miniaturized dewar with a cooler. A complete electro-optical test bench has been developed and its commissioning will also be detailed. The test bench allows, in particular, quantum efficiency measurements over the full 0.3 to 2.5 um wavelength range.

  15. A preliminary assessment of financial stability, efficiency, health systems and health outcomes using performance-based contracts in Belize.

    PubMed

    Bowser, Diana M; Figueroa, Ramon; Natiq, Laila; Okunogbe, Adeyemi

    2013-01-01

    Over the last 10 years, Belize has implemented a National Health Insurance (NHI) program that uses performance-based contracts with both public and private facilities to improve financial sustainability, efficiency and service provision. Data were collected at the facility, district and national levels in order to assess trends in financial sustainability, efficiency payments, year-end bonuses, and health system and health outcomes. A difference-in-difference approach was used to assess the difference in technical efficiency between private and public facilities. The results show that per capita spending on services provided by the NHI program decreased over the period 2006-2009 from BZ$177 to BZ$136. The private sector has achieved higher levels of technical efficiency, but lower percentages of efficiency and year-end bonus payments. Districts with contracts through the NHI program showed greater improvements in facility births, nurse density, reducing maternal mortality, diabetes deaths and morbidity from bronchitis, emphysema and asthma than districts without contracts over the period 2006-2010. This preliminary assessment of Belize's pay-for-performance system provides some positive results; however, further research is needed to use the lessons learned from Belize to implement similar reforms in other systems.
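    A toy sketch of a difference-in-difference regression of the kind mentioned above, comparing technical efficiency between private (contracted) and public facilities before and after the program; the panel data and variable names are hypothetical and are not the Belize data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical facility-year panel: efficiency scores for private (treated)
# and public (comparison) facilities before and after NHI contracting.
df = pd.DataFrame({
    "efficiency": [0.62, 0.60, 0.71, 0.70, 0.58, 0.59, 0.61, 0.63],
    "private":    [1,    1,    1,    1,    0,    0,    0,    0],
    "post":       [0,    0,    1,    1,    0,    0,    1,    1],
})

# The coefficient on private:post is the difference-in-difference estimate:
# the change in efficiency among private facilities beyond the change seen
# in public facilities over the same period.
did = smf.ols("efficiency ~ private * post", data=df).fit()
print(did.params["private:post"])
```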

  16. A Preliminary Model for Spacecraft Propulsion Performance Analysis Based on Nuclear Gain and Subsystem Mass-Power Balances

    NASA Technical Reports Server (NTRS)

    Chakrabarti, Suman; Schmidt, George R.; Thio, Y. C.; Hurst, Chantelle M.

    1999-01-01

    A preliminary model for spacecraft propulsion performance analysis based on nuclear gain and subsystem mass-power balances is presented in viewgraph form. For very fast missions with straight-line trajectories, it has been shown that mission trip time is proportional to the cube root of alpha. Analysis of spacecraft power systems via a power balance and examination of gain vs. mass-power ratio has shown: 1) a minimum gain is needed to have enough power for thruster and driver operation; and 2) increases in gain result in decreases in the overall mass-power ratio, which in turn lead to greater achievable accelerations. However, subsystem mass-power ratios and efficiencies are crucial: less efficient values for these can partially offset the effect of nuclear gain. Therefore, it is of interest to monitor the progress of gain-limited subsystem technologies, and it is also possible that power-limited systems with sufficiently low alpha may be competitive for such ambitious missions. Topics include: space flight requirements; spacecraft energy gain; control theory for performance; mission assumptions; round trips: time and distance; trip times; vehicle acceleration; and minimizing trip times.
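    The cube-root dependence on alpha (the power-system specific mass) can be illustrated with a simple scaling function. The sketch below assumes an idealized straight-line accelerate-then-decelerate profile with a fixed propellant fraction, for which a power balance gives trip time proportional to (alpha * distance^2)^(1/3); the lumped constant k and the example distance are placeholders, not values from the viewgraph model.

```python
# Idealized scaling check for a power-limited vehicle on a straight-line
# accelerate/decelerate profile with a fixed propellant fraction:
#   t_trip ~ (k * alpha * D**2) ** (1/3)
def trip_time(alpha_kg_per_kw, distance_m, k=1.0):
    """Relative trip time; k lumps the propellant-fraction and unit constants."""
    return (k * alpha_kg_per_kw * distance_m ** 2) ** (1.0 / 3.0)

d = 7.5e11  # roughly 5 AU, an arbitrary example distance
t1 = trip_time(alpha_kg_per_kw=1.0, distance_m=d)
t8 = trip_time(alpha_kg_per_kw=8.0, distance_m=d)

# An 8x heavier power system (per kW) only doubles the trip time: 8**(1/3) = 2.
print(t8 / t1)
```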

  17. Design and Preliminary Thermal Performance of the Mars Science Laboratory Rover Heat Exchangers

    NASA Technical Reports Server (NTRS)

    Mastropietro, A. J.; Beatty, John; Kelly, Frank; Birur, Gajanana; Bhandari, Pradeep; Pauken, Michael; Illsley, Peter; Liu, Yuanming; Bame, David; Miller, Jennifer

    2010-01-01

    The challenging range of proposed landing sites for the Mars Science Laboratory Rover requires a rover thermal management system that is capable of keeping temperatures controlled across a wide variety of environmental conditions. On the Martian surface, where temperatures can be as cold as -123 degrees Centigrade and as warm as 38 degrees Centigrade, the Rover relies upon a Mechanically Pumped Fluid Loop (MPFL) and external radiators to maintain the temperature of sensitive electronics and science instruments within a -40 degrees Centigrade to 50 degrees Centigrade range. The MPFL also manages significant waste heat generated from the Rover power source, known as the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). The MMRTG produces 110 Watts of electrical power while generating waste heat equivalent to approximately 2000 Watts. Two similar Heat Exchanger (HX) assemblies were designed to both acquire the heat from the MMRTG and radiate waste heat from the onboard electronics to the surrounding Martian environment. Heat acquisition is accomplished on the interior surface of each HX while heat rejection is accomplished on the exterior surface of each HX. Since these two surfaces need to be at very different temperatures in order for the MPFL to perform efficiently, they need to be thermally isolated from one another. The HXs were therefore designed for high in-plane thermal conductivity and extremely low through-thickness thermal conductivity by using aerogel as an insulator inside composite honeycomb sandwich panels. A complex assembly of hand-welded and uniquely bent aluminum tubes is bonded onto the HX panels and was specifically designed to be easily mated and demated to the rest of the Rover Heat Recovery and Rejection System (RHRS) in order to ease the integration effort. During the cruise phase to Mars, the HX assemblies serve the additional function of transferring heat from the Rover MPFL to the separate Cruise Stage MPFL so that heat

  18. Preliminary results on performance testing of a turbocharged rotary combustion engine

    NASA Technical Reports Server (NTRS)

    Meng, P. R.; Rice, W. J.; Schock, H. J.; Pringle, D. P.

    1982-01-01

    The performance of a turbocharged rotary engine at power levels above 75 kW (100 hp) was studied. A twin-rotor turbocharged Mazda engine was tested at speeds of 3000 to 6000 rpm and boost pressures to 7 psi. The NASA-developed combustion diagnostic instrumentation was used to quantify indicated and pumping mean effective pressures, peak pressure, and face-to-face variability on a cycle-by-cycle basis. Results of this testing showed that at 5900 rpm a 36 percent increase in power was obtained by operating the engine in the turbocharged configuration. When operating with lean carburetor jets at 105 hp (78.3 kW) and 4000 rpm, a brake specific fuel consumption of 0.45 lbm/hp-hr was measured.

  19. RTD fluxgate performance for application in magnetic label-based bioassay: preliminary results.

    PubMed

    Ando, B; Ascia, A; Baglio, S; Bulsara, A R; Trigona, C; In, V

    2006-01-01

    Magnetic bioassay is becoming of great interest in several applications, including magnetic separation, drug delivery, hyperthermia treatments, magnetic resonance imaging (MRI) and magnetic labelling. The latter can be used to localize bio-entities (e.g. cancer tissues) by using magnetic markers and highly sensitive detectors. To this aim, SQUIDs can be adopted; however, this results in a rather sophisticated and complex method involving high cost and a complex set-up. In this paper, the possibility of adopting RTD fluxgate magnetometers as an alternative, low-cost solution for magnetic bio-sensing is investigated. Some experimental results are shown that encourage pursuing this approach in order to obtain simple devices that can detect a certain number of magnetic particles accumulated on a small surface, so as to be useful for diagnostic purposes. PMID:17946280

  20. Preliminary assessment of the basic navigation and precise positioning performance of BDS

    NASA Astrophysics Data System (ADS)

    Zhao, Qile; Hu, Zhigang; Li, Min; Guo, Jing; Shi, Chuang; Liu, Jingnan

    2014-05-01

    Following the general guideline of starting with regional services and then expanding to global services, the BeiDou Navigation Satellite System (BDS) is steadily accelerating its construction. By the end of 2012, the BDS already consisted of fourteen networked satellites, including five GEO satellites, five IGSO satellites, and four MEO satellites, and had full operational capability for China and its surrounding areas. Both the basic navigation and the precise positioning performance of the current BDS (with 5 GEO + 5 IGSO + 4 MEO satellites) during January to December of 2013 are evaluated in this presentation. In China and its surrounding area, the positioning accuracy using the BDS open service is about 10 meters in both the horizontal and vertical directions. Users can obtain high-precision service using BDS alone, and both BDS and GPS users can benefit from the combination of the two systems.

  1. A two-colored chewing gum test for assessing masticatory performance: a preliminary study.

    PubMed

    Endo, Toshiya; Komatsuzaki, Akira; Kurokawa, Hiroomi; Tanaka, Satoshi; Kobayashi, Yoshiki; Kojima, Koji

    2014-01-01

    This study was conducted to compare subjective and objective assessment methods for a two-colored chewing gum test and to find out whether these methods are capable of discriminating masticatory performance between sexes. Thirty-one adults, 16 males and 15 females, participated in this study. Each subject chewed five samples of two-colored chewing gum sticks for 5, 10, 20, 30 and 50 chewing strokes, respectively. The subjective color-mixing and shape indices for the gum bolus (SCMI-B, SSI-B) and the subjective color-mixing index and objective color-mixing ratio for the gum wafer (SCMI-W, OCMR-W) were evaluated by two independent examiners and, on a different day, re-evaluated by one of the examiners. The SCMI-B and SCMI-W assessments showed reliable inter- and intra-examiner agreement at 20 or more chewing strokes. The OCMR-W measurement demonstrated high accuracy and low reproducibility between and within the examiners. There were significant gender differences in the distribution of SCMI-W scores (P = 0.044) and in the mean OCMR-W (P = 0.007). The SCMI-B and SCMI-W assessments and the OCMR-W measurement were reliable and valid at 20 and 30 chewing strokes in this two-colored chewing gum test. The subjective color-mixing index (SCMI-W) and the objective color-mixing ratio (OCMR-W) for the chewing gum wafer are capable of discriminating masticatory performance between sexes in this two-colored chewing gum test, and the OCMR-W measurement discriminates better than the SCMI-W assessment. PMID:23076496

  2. A two-colored chewing gum test for assessing masticatory performance: a preliminary study.

    PubMed

    Endo, Toshiya; Komatsuzaki, Akira; Kurokawa, Hiroomi; Tanaka, Satoshi; Kobayashi, Yoshiki; Kojima, Koji

    2014-01-01

    This study was conducted to compare subjective and objective assessment methods for a two-colored chewing gum test and to find out whether these methods are capable of discriminating masticatory performance between sexes. Thirty-one adults, 16 males and 15 females, participated in this study. Each subject chewed five samples of two-colored chewing gum sticks for 5, 10, 20, 30 and 50 chewing strokes, respectively. The subjective color-mixing and shape indices for the gum bolus (SCMI-B, SSI-B) and the subjective color-mixing index and objective color-mixing ratio for the gum wafer (SCMI-W, OCMR-W) were evaluated by two independent examiners and, on a different day, re-evaluated by one of the examiners. The SCMI-B and SCMI-W assessments showed reliable inter- and intra-examiner agreement at 20 or more chewing strokes. The OCMR-W measurement demonstrated high accuracy and low reproducibility between and within the examiners. There were significant gender differences in the distribution of SCMI-W scores (P = 0.044) and in the mean OCMR-W (P = 0.007). The SCMI-B and SCMI-W assessments and the OCMR-W measurement were reliable and valid at 20 and 30 chewing strokes in this two-colored chewing gum test. The subjective color-mixing index (SCMI-W) and the objective color-mixing ratio (OCMR-W) for the chewing gum wafer are capable of discriminating masticatory performance between sexes in this two-colored chewing gum test, and the OCMR-W measurement discriminates better than the SCMI-W assessment.

  3. How accurately can students estimate their performance on an exam and how does this relate to their actual performance on the exam?

    NASA Astrophysics Data System (ADS)

    Rebello, N. Sanjay

    2012-02-01

    Research has shown that students' beliefs regarding their own abilities in math and science can influence their performance in these disciplines. I investigated the relationship between students' estimated performance and actual performance on five exams in a second-semester calculus-based physics class. After the completion of each of the five exams, students were given about 72 hours to estimate their individual score and the class mean score on that exam. Students were given extra credit worth 1% of the exam points for estimating their own score to within 2% of the actual score and another 1% of extra credit for estimating the class mean score to within 2% of the correct value. I compared students' individual and class-mean score estimates with the actual scores to investigate the relationship between estimation accuracy and exam performance, as well as trends over the semester.

  4. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations.

    PubMed

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task.

  5. ESTIMATING HIGH LEVEL WASTE MIXING PERFORMANCE IN HANFORD DOUBLE SHELL TANKS

    SciTech Connect

    THIEN MG; GREER DA; TOWNSON P

    2011-01-13

    The ability to effectively mix, sample, certify, and deliver consistent batches of high level waste (HLW) feed from the Hanford double shell tanks (DSTs) to the Waste Treatment and Immobilization Plant (WTP) presents a significant mission risk, with potential to impact mission length and the quantity of HLW glass produced. The Department of Energy's (DOE's) Tank Operations Contractor (TOC), Washington River Protection Solutions (WRPS), is currently demonstrating mixing, sampling, and batch transfer performance in two different sizes of small-scale DSTs. The results of these demonstrations will be used to estimate full-scale DST mixing performance and provide the key input to a programmatic decision on the need to build a dedicated feed certification facility. This paper discusses the results from initial mixing demonstration activities and presents data evaluation techniques that allow insight into the performance relationships between the two small tanks. The next steps of the small-scale demonstration activities, sampling and batch transfers, are introduced. A discussion of the integration of results from the mixing, sampling, and batch transfer tests to allow estimation of full-scale DST performance is presented.

  6. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations.

    PubMed

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task. PMID:25784886

  7. Simplified design guide for estimating photovoltaic flat array and system performance

    SciTech Connect

    Evans, D.L.; Facinelli, W.A.; Koehler, L.P.

    1981-03-01

    Simplified, non-computer-based methods are presented for predicting photovoltaic array and system performance. The array performance prediction methods are useful for calculating the potential output of passively cooled, flat, south-facing, maximum-power-tracked arrays. A solar/weather data base for 97 different US and US-affiliated stations is provided to aid in these calculations. Performance estimates can also be made for photovoltaic systems (array, battery, power conditioner) that are backed up by non-solar reserves capable of meeting the load when the solar system cannot. Such estimates can be made for a total of 41 different sinusoidal, unimodal, and bimodal diurnal load profiles using the graphs included. These allow easy determination of the fraction of the load met by the solar photovoltaic system as a function of array size and (dedicated) battery storage capacity. These performance graphs may also be used for systems without battery storage. Use of array manufacturers' specification sheet data is discussed. Step-by-step procedures, along with suggested worksheets, are provided for carrying out the necessary calculations.

  8. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations

    PubMed Central

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M.; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task. PMID:25784886

  9. Preliminary Performance Assessment for the Waste Management Area C at the Hanford Site in Southeast Washington

    SciTech Connect

    Bergeron, Marcel P.; Singleton, Kristin M.; Eberlein, Susan J.

    2015-01-07

    A performance assessment (PA) of Single-Shell Tank (SST) Waste Management Area C (WMA C), located at the U.S. Department of Energy's (DOE) Hanford Site in southeastern Washington, is being conducted to satisfy the requirements of the Hanford Federal Facility Agreement and Consent Order (HFFACO), as well as other Federal requirements and State-approved closure plans and permits. The WMA C PA assesses the fate, transport, and impacts of radionuclides and hazardous chemicals within residual wastes left in tanks and ancillary equipment and facilities in their assumed closed configuration, and the subsequent risks to humans into the far future. The part of the PA focused on radiological impacts is being developed to meet the requirements for a closure authorization under DOE Order 435.1, which includes a waste incidental to reprocessing determination for residual wastes remaining in tanks, ancillary equipment, and facilities. An additional part of the PA will evaluate human health and environmental impacts from hazardous chemical inventories in residual wastes remaining in WMA C tanks, ancillary equipment, and facilities needed to meet the requirements for permitted closure under RCRA.

  10. Preliminary investigation on CAD system update: effect of selection of new cases on classifier performance

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Nishimura, Kohei; Hara, Takeshi; Fujita, Hiroshi

    2013-02-01

    When a computer-aided diagnosis (CAD) system is used in clinical practice, it is desirable that the system be constantly and automatically updated with newly obtained cases for performance improvement. In this study, the effect of different case selection methods for such system updates was investigated. For the simulation, data for classification of benign and malignant masses on mammograms were used. Six image features were used for training three classifiers: linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbors (kNN). Three datasets, including dataset I for initial training of the classifiers, dataset T for intermediate testing and retraining, and dataset E for evaluating the classifiers, were randomly sampled from the database. As a result of intermediate testing, some cases from dataset T were selected to be added to the previous training set in the classifier updates. In each update, cases were selected using one of four methods: selection of (a) correctly classified samples, (b) incorrectly classified samples, (c) marginally classified samples, and (d) random samples. For comparison, system updates using all samples in dataset T were also evaluated. In general, the average areas under the receiver operating characteristic curves (AUCs) were almost unchanged with method (a), whereas AUCs generally degraded with method (b). The AUCs were improved with methods (c) and (d), although use of all available cases generally provided the best or nearly best AUCs. In conclusion, CAD systems may be improved by retraining with new cases accumulated during practice.
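    A hedged sketch of the marginal-sample update strategy (method (c)) using an LDA classifier and AUC evaluation; the synthetic six-feature data, the dataset sizes, and the margin band used to define "marginally classified" are assumptions for illustration, not the study's mammography data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_cases(n):
    """Hypothetical 6-feature mass descriptors; label 1 = malignant."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, 6)) + y[:, None] * 0.8
    return X, y

X_i, y_i = make_cases(200)   # dataset I: initial training
X_t, y_t = make_cases(300)   # dataset T: intermediate testing / retraining pool
X_e, y_e = make_cases(500)   # dataset E: evaluation

clf = LinearDiscriminantAnalysis().fit(X_i, y_i)
print("initial AUC:", roc_auc_score(y_e, clf.decision_function(X_e)))

# Method (c): add marginally classified cases, i.e. those whose decision
# scores fall near the LDA boundary, to the training set and retrain.
scores = clf.decision_function(X_t)
marginal = np.abs(scores) < np.quantile(np.abs(scores), 0.25)   # assumed margin band
X_new = np.vstack([X_i, X_t[marginal]])
y_new = np.concatenate([y_i, y_t[marginal]])

clf_updated = LinearDiscriminantAnalysis().fit(X_new, y_new)
print("updated AUC:", roc_auc_score(y_e, clf_updated.decision_function(X_e)))
```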

  11. Preliminary Investigation of Performance and Starting Characteristics of Liquid Fluorine : Liquid Oxygen Mixtures with Jet Fuel

    NASA Technical Reports Server (NTRS)

    Rothenberg, Edward A; Ordin, Paul M

    1954-01-01

    The performance of jet fuel with an oxidant mixture containing 70 percent liquid fluorine and 30 percent liquid oxygen by weight was investigated in a 500-pound-thrust engine operating at a chamber pressure of 300 pounds per square inch absolute. A one-oxidant-on-one-fuel skewed-hole impinging-jet injector was evaluated in a chamber of characteristic length equal to 50 inches. A maximum experimental specific impulse of 268 pound-seconds per pound was obtained at 25 percent fuel, which corresponds to 96 percent of the maximum theoretical specific impulse based on frozen-composition expansion. The maximum characteristic velocity obtained was 6050 feet per second at 23 percent fuel, or 94 percent of the theoretical maximum. The average thrust coefficient was 1.38 for the 500-pound-thrust combustion-chamber nozzle used, which was 99 percent of the theoretical (frozen) maximum. Mixtures of fluorine and oxygen were found to be self-igniting with jet fuel at fluorine concentrations as low as 4 percent when low starting propellant flow rates were used.

  12. Preparation and Preliminary Dialysis Performance Research of Polyvinylidene Fluoride Hollow Fiber Membranes

    PubMed Central

    Zhang, Qinglei; Lu, Xiaolong; Liu, Juanjuan; Zhao, Lihua

    2015-01-01

    In this study, the separation properties of polyvinylidene fluoride (PVDF) hollow fiber hemodialysis membranes were improved by optimizing membrane morphology and structure. The results showed that the PVDF membrane had better mechanical and separation properties than the Fresenius Polysulfone High-Flux (F60S) membrane. The PVDF membrane tensile stress at break, tensile elongation and bursting pressure were 11.3 MPa, 395% and 0.625 MPa, respectively. Ultrafiltration (UF) flux of pure water reached 108.2 L∙h−1∙m−2 and rejection of bovine serum albumin (BSA) was 82.3%. The PVDF dialyzers were prepared by centrifugal casting. The influences of membrane area and simulated fluid flow rate on dialysis performance were investigated. The results showed that the clearance rates of urea and lysozyme (LZM) improved with increasing membrane area and fluid flow rate, while the rejection of BSA was little affected. The high-flux PVDF dialyzer UF coefficient reached 62.6 mL/h/mmHg. The PVDF dialyzer with a membrane area of 0.69 m2 had the highest clearance rates of LZM and urea. The clearance rate of LZM was 66.8% and that of urea was 87.7%. PMID:25807890

  13. Preliminary analysis of effect of random segment errors on coronagraph performance

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 1010 of the host star's light with a 10-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion that an aperture with fewer or more segments.

  14. Early effects of traumatic brain injury on young children's language performance: a preliminary linguistic analysis.

    PubMed

    Morse, S; Haritou, F; Ong, K; Anderson, V; Catroppa, C; Rosenfeld, J

    1999-01-01

    Language skills undergo rapid development during the early childhood years, so that by the time children start school they are competent communicators with well-established syntactic, semantic and pragmatic abilities for their age. Little is known about the effects of traumatic brain injury (TBI) on the acquisition of these language skills during the early childhood years. This study used a prospective, cross-sectional design to compare the language abilities of young children following head injury. Fifteen brain-injured children, aged 4-6 years, were divided into three injury groups depending on severity of injury (mild, moderate and severe) and compared with a matched community control group. They were assessed within 3 months of sustaining their injury on a range of expressive and receptive language tests, and free-speech conversation samples were analysed pragmatically and syntactically. Results indicated that the severe group performed most poorly on language tasks. It is suggested that linguistic evaluation is an important component of follow-up, at least for the severely head-injured population. PMID:10819426

  15. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is

  16. Estimating Bedrock Topography beneath Ice and Sediment Fillings in High Mountain Valleys: Preliminary Results from a Method Comparison Study

    NASA Astrophysics Data System (ADS)

    Mey, J.; Scherler, D.; Strecker, M. R.; Zeilinger, G.

    2012-12-01

    Knowledge about the thickness distribution of ice and sediment fillings in high mountain valleys is important for many applications in the fields of hydrology, geology, glaciology, geohazards and geomorphology. However, direct geophysical measurements of ice/sediment thickness are laborious and require infrastructure and logistics that are often not available, particularly in remote mountain regions. In the past years, several methods have been developed to approximate valley-fill thickness, primarily based on digital elevation data. In the case of sediment fillings, the thickness estimates are mostly based on simple morphometric considerations, whereas in the case of ice, more complex methods have been established using glacier mass balance and ice-flow dynamics. In this study we compare three of these methods that have been frequently applied in the past. These include a physically based approach for estimating the ice-thickness distribution of valley glaciers using mass fluxes and flow mechanics. Further, we adopt a method that uses the prediction capability of artificial neural networks (ANN), and we investigate a method that is based on the extrapolation of the slopes of the valley walls into the subsurface. We set up a test series in which all methods are applied to four glaciers and two sediment-filled valleys in the European Alps. The resulting bedrock topography derived from each method is checked against available ground truth data, comprising ground-penetrating radar, seismic reflection and borehole measurements. Obviously, the method developed for estimation of ice thickness is applicable only to the cases where valleys are occupied by ice, whereas the ANN approach and the slope extrapolation method are independent of the sort of valley fill. Thus a direct comparison is restricted to glacier settings. First results show that all methods can qualitatively reconstruct bedrock topography with typical overdeepenings and trough-shaped cross-profiles. Due to

  17. Realizations and performances of least-squares estimation and Kalman filtering by systolic arrays

    SciTech Connect

    Chen, M.J.

    1987-01-01

    Fast least-squares (LS) estimation and Kalman-filtering algorithms utilizing systolic-array implementation are studied. Based on a generalized systolic QR algorithm, a modified LS method is proposed and shown to have superior computational and inter-cell connection complexities, and to be more practical for systolic-array implementation. After whitening processing, the Kalman filter can be formulated as a SRIF data-processing problem followed by a simple LS operation. This approach simplifies the computational structure, and is more reliable when the system has a singular or near-singular coefficient matrix. To improve the throughput rate of the systolic Kalman filter, a topology for stripe QR processing is also proposed. By skewing the order of the input matrices, a fully pipelined systolic Kalman-filtering operation can be achieved. With O(n^2) processing units, the system throughput rate becomes O(n). The numerical properties of the systolic LS estimation and Kalman-filtering algorithms under finite word-length effects are studied via analysis and computer simulations, and are compared with those of conventional approaches. Fault tolerance of the LS estimation algorithm is also discussed. It is shown that by using a simple bypass register, reasonable estimation performance is still possible with a transient defective processing unit.
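    Independent of the systolic-array mapping, the QR formulation of least squares can be sketched in a few lines of NumPy; the example below simply contrasts the QR route with the normal equations on a nearly rank-deficient coefficient matrix, which is the situation the abstract flags as problematic. The matrix and perturbation sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Overdetermined system A x = b with two nearly collinear columns.
A = rng.normal(size=(100, 3))
A[:, 2] = A[:, 1] + 1e-8 * rng.normal(size=100)
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

# QR route: A = QR, then solve the triangular system R x = Q^T b.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# Normal-equations route: forming A^T A squares the condition number,
# so the recovered coefficients are typically far less accurate here.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

print("cond(A)               =", np.linalg.cond(A))
print("QR solution error     =", np.linalg.norm(x_qr - x_true))
print("normal-equation error =", np.linalg.norm(x_ne - x_true))
```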

  18. Performance of statistical methods to correct food intake distribution: comparison between observed and estimated usual intake.

    PubMed

    Verly-Jr, Eliseu; Oliveira, Dayan C R S; Fisberg, Regina M; Marchioni, Dirce Maria L

    2016-09-01

    There are statistical methods that remove the within-person random error and estimate the usual intake when there is a second 24-h recall (24HR) for at least a subsample of the study population. We aimed to compare the distribution of usual food intake estimated by statistical models with the distribution of observed usual intake. A total of 302 individuals from Rio de Janeiro (Brazil) answered twenty non-consecutive 24HR; the average length of follow-up was 3 months. The usual food intake was taken as the average of the 20 collection days of food intake. Using data sets containing a pair of collection days, usual percentiles of intake of the selected foods were estimated with two methods: the National Cancer Institute (NCI) method and the Multiple Source Method (MSM). These estimates were compared with the percentiles of the observed usual intake. Selected foods covered a range of distribution parameters: skewness, percentage of zero intakes, and within- and between-person variation. Both methods performed well but failed in some situations. In most cases, NCI and MSM produced similar percentiles and values very close to the true intake, and they represented the usual intake better than the 2-d mean. The lowest precision was observed in the upper tail of the distribution. In spite of the underestimation and overestimation of percentiles of intake, from a public health standpoint, these biases appear not to be of major concern. PMID:27523187
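    The following toy sketch illustrates the general idea behind such corrections (it is not the NCI or MSM procedure itself): estimate the within- and between-person variance components from two recalls per person, then pull each individual's two-day mean toward the group mean so the adjusted distribution has roughly the between-person spread, before reading off percentiles. All data are simulated and the distributional choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated intakes: n people, 2 non-consecutive 24-h recalls each.
n = 1000
usual = rng.gamma(shape=4.0, scale=50.0, size=n)               # true usual intake (g/day)
recalls = usual[:, None] + rng.normal(0, 80.0, size=(n, 2))    # day-to-day (within-person) error

person_mean = recalls.mean(axis=1)
grand_mean = person_mean.mean()

# Variance components from the paired recalls (one-way ANOVA identities):
# Var(r1 - r2) = 2 * within_var, Var(person_mean) = between_var + within_var / 2.
within_var = 0.5 * np.var(recalls[:, 0] - recalls[:, 1], ddof=1)
between_var = max(np.var(person_mean, ddof=1) - within_var / 2, 0.0)

# Scale factor chosen so the adjusted distribution has the between-person
# variance, i.e. the day-to-day noise is removed from the spread.
scale = np.sqrt(between_var / (between_var + within_var / 2))
usual_hat = grand_mean + scale * (person_mean - grand_mean)

# Adjusted percentiles should typically sit closer to the true usual-intake
# percentiles than the raw 2-day means, especially in the tails.
for q in (5, 50, 95):
    print(q, np.percentile(usual, q), np.percentile(person_mean, q),
          np.percentile(usual_hat, q))
```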

  19. Alignment estimation performances of merit function regression with differential wavefront sampling in multiple design configuration optimization

    NASA Astrophysics Data System (ADS)

    Oh, Eunsong; Kim, Sug-Whan; Cho, Seongick; Ryu, Joo-Hyung

    2011-10-01

    In our earlier study [12], we suggested a new alignment algorithm called the Multiple Design Configuration Optimization (MDCO hereafter) method, combining the merit function regression (MFR) computation with the differential wavefront sampling method (DWS). In this study, we report alignment state estimation performances of the method for three target optical systems (i.e. i) a two-mirror Cassegrain telescope of 58 mm in diameter for deep-space Earth observation, ii) a three-mirror anastigmat of 210 mm in aperture for ocean monitoring from geostationary orbit, and iii) on-axis/off-axis pairs of an extremely large telescope of 27.4 m in aperture). First we introduced known amounts of alignment state disturbances to the target optical system elements. Example alignment parameter ranges include, but are not limited to, 800 microns to 10 mm in decenter and 0.1 to 1.0 degree in tilt. We then ran alignment state estimation simulations using MDCO, MFR and DWS. The simulation results show that MDCO yields much better estimation performance than MFR and DWS over alignment disturbance levels up to 150 times larger than the required tolerances. In particular, with its simple single-field measurement, MDCO exhibits greater practicality and application potential for shop-floor optical testing environments than MFR and DWS.

  20. Estimation of Crop Gross Primary Production (GPP). 2; Do Scaled (MODIS) Vegetation Indices Improve Performance?

    NASA Technical Reports Server (NTRS)

    Zhang, Qingyuan; Cheng, Yen-Ben; Lyapustin, Alexei I.; Wang, Yujie; Zhang, Xiaoyang; Suyker, Andrew; Verma, Shashi; Shuai, Yanmin; Middleton, Elizabeth M.

    2015-01-01

    Satellite remote sensing estimates of Gross Primary Production (GPP) have routinely been made using spectral Vegetation Indices (VIs) over the past two decades. The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), the green band Wide Dynamic Range Vegetation Index (WDRVIgreen), and the green band Chlorophyll Index (CIgreen) have been employed to estimate GPP under the assumption that GPP is proportional to the product of VI and photosynthetically active radiation (PAR) (where VI is one of four VIs: NDVI, EVI, WDRVIgreen, or CIgreen). However, the empirical regressions between VI*PAR and GPP measured locally at flux towers do not pass through the origin (i.e., the zero X-Y value for regressions). Therefore they are somewhat difficult to interpret and apply. This study investigates (1) what are the scaling factors and offsets (i.e., regression slopes and intercepts) between the fraction of PAR absorbed by chlorophyll of a canopy (fAPARchl) and the VIs, and (2) whether the scaled VIs developed in (1) can eliminate the deficiency and improve the accuracy of GPP estimates. Three AmeriFlux maize and soybean fields were selected for this study, two of which are irrigated and one is rainfed. The four VIs and fAPARchl of the fields were computed with the MODerate resolution Imaging Spectroradiometer (MODIS) satellite images. The GPP estimation performance for the scaled VIs was compared to results obtained with the original VIs and evaluated with standard statistics: the coefficient of determination (R2), the root mean square error (RMSE), and the coefficient of variation (CV). Overall, the scaled EVI obtained the best performance. The performance of the scaled NDVI, EVI and WDRVIgreen was improved across sites, crop types and soil/background wetness conditions. The scaled CIgreen did not improve results, compared to the original CIgreen. The scaled green band indices (WDRVIgreen, CIgreen) did not exhibit superior performance to either the
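
    The regression structure discussed above (GPP proportional to VI*PAR, with a problematic non-zero intercept) is easy to make concrete. The Python sketch below fits GPP against VI*PAR with and without an intercept on synthetic data and reports the record's three statistics (R2, RMSE, CV); all numbers and variable names are illustrative assumptions, not MODIS or flux-tower values.

        # Illustrative GPP ~ VI*PAR regressions on synthetic data; numbers are
        # placeholders, not the study's values.
        import numpy as np

        rng = np.random.default_rng(1)
        par = rng.uniform(5, 60, 200)                    # PAR, synthetic
        vi = rng.uniform(0.2, 0.9, 200)                  # a vegetation index, synthetic
        gpp = 0.9 * vi * par + rng.normal(0, 1.5, 200)   # "true" GPP with noise

        def metrics(obs, pred):
            resid = obs - pred
            rmse = np.sqrt(np.mean(resid ** 2))
            r2 = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
            cv = rmse / obs.mean()                       # coefficient of variation of the error
            return r2, rmse, cv

        x = vi * par
        # Regression with intercept (the case the record calls hard to interpret).
        slope, intercept = np.polyfit(x, gpp, 1)
        print("with intercept ", metrics(gpp, slope * x + intercept))
        # Regression forced through the origin, as a well-scaled VI should permit.
        slope0 = np.sum(x * gpp) / np.sum(x * x)
        print("through origin ", metrics(gpp, slope0 * x))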

  1. Performance of pond-wetland complexes as a preliminary processor of drinking water sources.

    PubMed

    Wang, Weidong; Zheng, Jun; Wang, Zhongqiong; Zhang, Rongbin; Chen, Qinghua; Yu, Xinfeng; Yin, Chengqing

    2016-01-01

    Shijiuyang Constructed Wetland (110 hm²) is a drinking water source treatment wetland with primary structural units of ponds and plant-bed/ditch systems. The wetland can process about 250,000 tonnes of source water in the Xincheng River every day and supplies raw water for Shijiuyang Drinking Water Plant. Daily data for 28 months indicated that the major water quality indexes of source water had been improved by one grade. The percentage increase for dissolved oxygen and the removal rates of ammonia nitrogen, iron and manganese were 73.63%, 38.86%, 35.64%, and 22.14%, respectively. The treatment performance weight of ponds and plant-bed/ditch systems was roughly equal but they treated different pollutants preferentially. Most water quality indexes had better treatment efficacy with increasing temperature and inlet concentrations. These results revealed that the pond-wetland complexes exhibited strong buffering capacity for source water quality improvement. The treatment cost of Shijiuyang Drinking Water Plant was reduced by about 30.3%. Regional rainfall significantly determined the external river water levels and adversely deteriorated the inlet water quality, thus suggesting that the "hidden" diffuse pollution in the multitudinous stream branches as well as their catchments should be the controlling emphases for river source water protection in the future. The combination of pond and plant-bed/ditch systems provides a successful paradigm for drinking water source pretreatment. Three other drinking water source treatment wetlands with ponds and plant-bed/ditch systems are in operation or construction in the stream networks of the Yangtze River Delta and more people will be benefited. PMID:26899651

  2. Performance of pond-wetland complexes as a preliminary processor of drinking water sources.

    PubMed

    Wang, Weidong; Zheng, Jun; Wang, Zhongqiong; Zhang, Rongbin; Chen, Qinghua; Yu, Xinfeng; Yin, Chengqing

    2016-01-01

    Shijiuyang Constructed Wetland (110 hm²) is a drinking water source treatment wetland with primary structural units of ponds and plant-bed/ditch systems. The wetland can process about 250,000 tonnes of source water in the Xincheng River every day and supplies raw water for Shijiuyang Drinking Water Plant. Daily data for 28 months indicated that the major water quality indexes of source water had been improved by one grade. The percentage increase for dissolved oxygen and the removal rates of ammonia nitrogen, iron and manganese were 73.63%, 38.86%, 35.64%, and 22.14%, respectively. The treatment performance weight of ponds and plant-bed/ditch systems was roughly equal but they treated different pollutants preferentially. Most water quality indexes had better treatment efficacy with increasing temperature and inlet concentrations. These results revealed that the pond-wetland complexes exhibited strong buffering capacity for source water quality improvement. The treatment cost of Shijiuyang Drinking Water Plant was reduced by about 30.3%. Regional rainfall significantly determined the external river water levels and adversely deteriorated the inlet water quality, thus suggesting that the "hidden" diffuse pollution in the multitudinous stream branches as well as their catchments should be the controlling emphases for river source water protection in the future. The combination of pond and plant-bed/ditch systems provides a successful paradigm for drinking water source pretreatment. Three other drinking water source treatment wetlands with ponds and plant-bed/ditch systems are in operation or construction in the stream networks of the Yangtze River Delta and more people will be benefited.

  3. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 2; Applications

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng, S.

    2009-01-01

    In this paper, preliminary studies on two turbine engine applications relevant to the tilt-rotor rotary wing aircraft are performed. The first case study is the application of a variable-pitch turbine to improve turbine performance when operating at a substantially lower shaft speed. The calculations are made at 75 percent and 50 percent of the operating speed. Our results indicate that with the use of variable-pitch turbines, a nominal (3 percent (probable) to 5 percent (hypothetical)) efficiency improvement at 75 percent speed, and a notable (6 percent (probable) to 12 percent (hypothetical)) efficiency improvement at 50 percent speed, are achievable without sacrificing turbine power production, if the technical difficulty of turning the turbine vanes and blades can be circumvented. The second case study is contingency turbine power generation for the tilt-rotor aircraft in the One Engine Inoperative (OEI) scenario. For this study, calculations are performed on two promising methods: throttle push and steam injection. By isolating the power turbine and limiting its air mass flow rate to no more than the air flow intake of the take-off operation, while increasing the turbine inlet total temperature (simulating the throttle push) or increasing the air-steam mixture flow rate (simulating the steam injection condition), our results show that 30 to 45 percent extra power, relative to the nominal take-off power, can be generated by either of the two methods. The methods of approach, the results, and discussions of these studies are presented in this paper.

  4. Traceable calibration, performance metrics, and uncertainty estimates of minirhizotron digital imagery for fine-root measurements.

    PubMed

    Roberti, Joshua A; SanClements, Michael D; Loescher, Henry W; Ayres, Edward

    2014-01-01

    Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified.

  5. Traceable calibration, performance metrics, and uncertainty estimates of minirhizotron digital imagery for fine-root measurements.

    PubMed

    Roberti, Joshua A; SanClements, Michael D; Loescher, Henry W; Ayres, Edward

    2014-01-01

    Even though fine-root turnover is a highly studied topic, it is often poorly understood as a result of uncertainties inherent in its sampling, e.g., quantifying spatial and temporal variability. While many methods exist to quantify fine-root turnover, use of minirhizotrons has increased over the last two decades, making sensor errors another source of uncertainty. Currently, no standardized methodology exists to test and compare minirhizotron camera capability, imagery, and performance. This paper presents a reproducible, laboratory-based method by which minirhizotron cameras can be tested and validated in a traceable manner. The performance of camera characteristics was identified and test criteria were developed: we quantified the precision of camera location for successive images, estimated the trueness and precision of each camera's ability to quantify root diameter and root color, and also assessed the influence of heat dissipation introduced by the minirhizotron cameras and electrical components. We report detailed and defensible metrology analyses that examine the performance of two commercially available minirhizotron cameras. These cameras performed differently with regard to the various test criteria and uncertainty analyses. We recommend a defensible metrology approach to quantify the performance of minirhizotron camera characteristics and determine sensor-related measurement uncertainties prior to field use. This approach is also extensible to other digital imagery technologies. In turn, these approaches facilitate a greater understanding of measurement uncertainties (signal-to-noise ratio) inherent in the camera performance and allow such uncertainties to be quantified and mitigated so that estimates of fine-root turnover can be more confidently quantified. PMID:25391023

  6. Provesicular granisetron hydrochloride buccal formulations: in vitro evaluation and preliminary investigation of in vivo performance.

    PubMed

    Ahmed, Sami; El-Setouhy, Doaa Ahmed; El-Latif Badawi, Alia Abd; El-Nabarawi, Mohamed Ahmed

    2014-08-18

    Granisetron hydrochloride (granisetron) is a potent antiemetic that has been proven to be effective in acute and delayed emesis in cancer chemotherapy. Granisetron suffers from reduced oral bioavailability (≈60%) due to hepatic metabolism. In this study, the combined advantage of provesicular carriers and buccal drug delivery has been explored, aiming to sustain the effect and improve the bioavailability of granisetron via development of granisetron provesicular buccoadhesive tablets with suitable quality characteristics (hardness, drug content, in vitro release pattern, ex vivo bioadhesion and in vivo bioadhesion behavior). Composition of the reconstituted niosomes from different prepared provesicular carriers, regarding the type of surfactant used and the cholesterol concentration, significantly affected both entrapment efficiency (%EE) and vesicle size. Span 80 proniosome-derived niosomes exhibited higher encapsulation efficiency and smaller particle size than those derived from Span 20. Also, the effect of %EE and bioadhesive polymer type on in vitro drug release and in vivo performance of buccoadhesive tablets was investigated. Based on achievement of the required in vitro release pattern (20-30% at 2 h, 40-65% at 6 h and 80-95% at 12 h), in vivo swelling behavior, and in vivo adhesion time (>14 h), granisetron formulation (F19, 1.4 mg) comprising HPMC:carbopol 974P (7:3) and maltodextrin coated with the vesicular precursors Span 80 and cholesterol (9:1) was chosen for the in vivo study. The in vivo pharmacokinetic study revealed higher bioavailability of the buccal formulation relative to the conventional oral formulation of granisetron (AUC0-∞ of 89.97 and 38.18 ng h/ml for the buccal and oral formulations, respectively). A significantly lower and delayed Cmax (12.09±4.47 ng/ml, at 8 h) was observed after buccal application compared to the conventional oral tablet (31.66±10.15 ng/ml, at 0.5 h). The prepared provesicular buccoadhesive tablet of granisetron (F19) might help bypass hepatic first

  7. Assessment of myocardial elastography performance in phantoms under combined physiologic motion configurations with preliminary in vivo feasibility

    NASA Astrophysics Data System (ADS)

    Okrasinski, S. J.; Ramachandran, B.; Konofagou, E. E.

    2012-09-01

    Myocardial elastography (ME) is a non-invasive, ultrasound-based strain imaging technique, which can detect and localize abnormalities in myocardial function. By acquiring radio-frequency (RF) frames at high frame rates, the deformation of the myocardium can be estimated, and used to identify regions of abnormal deformation indicative of cardiovascular disease. In this study, the primary objective is to evaluate the effect of torsion on the performance of ME, while the secondary objective is to image inclusions during different motion schemes. Finally, the phantom findings are validated with an in vivo human case. Phantoms of homogeneous stiffness, or containing harder inclusions, were fixed to a pump and motors, and imaged. Incremental displacements were estimated from the RF signals, and accumulated over a motion cycle, and rotation angle, radial strain and circumferential strain were estimated. Phantoms were subjected to four motion schemes: rotation, torsion, deformation, and a combination of torsion and deformation. Sonomicrometry was used as a gold standard during deformation and combined motion schemes. In the rotation scheme, the input and estimated rotation angle agree in both the homogeneous and inclusion phantoms. In the torsion scheme, the estimated rotation angle was found to be highest, closest to the source of torsion and lowest farthest from the source of torsion. In the deformation scheme, if an inclusion was not present, the estimated strain patterns accurately depicted homogeneity, while if an inclusion was present, abnormalities were observed which enabled detection of the inclusion. In addition, no significant rotation was detected. In the combined scheme, if an inclusion was not present, the estimated strain patterns accurately depicted homogeneity, while, if an inclusion was present, abnormalities were observed which enabled detection of the inclusion. Also, torsion was separated from the combined scheme and was found to be similar to the

  8. Experimental Demonstration and Performance Estimation of a new Relaxation Oscillator Using a Superconducting Schmitt Trigger Inverter

    NASA Astrophysics Data System (ADS)

    Onomi, Takeshi

    An experimental demonstration and performance estimation of a superconducting relaxation oscillator using the Schmitt trigger inverter are reported. The superconducting Schmitt trigger inverter is composed of a threshold gate that uses coupled superconducting quantum interference devices. The oscillator is based on the general concept of using the Schmitt trigger inverter and a delayed feedback loop. The oscillation frequency is characterized by the circuit parameters of the delayed feedback loop and the hysteresis structure of the Schmitt trigger. The circuit parameter dependence of the oscillation frequency is estimated by numerical simulations. In order to confirm the circuit operation, the proposed relaxation oscillator is fabricated by a Nb/AlOx/Nb standard process and tested. The operation of the oscillator is demonstrated successfully.

  9. Aerodynamic Parameters of High Performance Aircraft Estimated from Wind Tunnel and Flight Test Data

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Murphy, Patrick C.

    1998-01-01

    A concept of system identification applied to high performance aircraft is introduced followed by a discussion on the identification methodology. Special emphasis is given to model postulation using time invariant and time dependent aerodynamic parameters, model structure determination and parameter estimation using ordinary least squares and mixed estimation methods. At the same time, problems of data collinearity detection and its assessment are discussed. These parts of methodology are demonstrated in examples using flight data of the X-29A and X-31A aircraft. In the third example wind tunnel oscillatory data of the F-16XL model are used. A strong dependence of these data on frequency led to the development of models with unsteady aerodynamic terms in the form of indicial functions. The paper is completed by concluding remarks.

  10. Aerodynamic Parameters of High Performance Aircraft Estimated from Wind Tunnel and Flight Test Data

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Murphy, Patrick C.

    1999-01-01

    A concept of system identification applied to high performance aircraft is introduced followed by a discussion on the identification methodology. Special emphasis is given to model postulation using time invariant and time dependent aerodynamic parameters, model structure determination and parameter estimation using ordinary least squares and mixed estimation methods. At the same time problems of data collinearity detection and its assessment are discussed. These parts of methodology are demonstrated in examples using flight data of the X-29A and X-31A aircraft. In the third example wind tunnel oscillatory data of the F-16XL model are used. A strong dependence of these data on frequency led to the development of models with unsteady aerodynamic terms in the form of indicial functions. The paper is completed by concluding remarks.

  11. Semi-Supervised Multimodal Relevance Vector Regression Improves Cognitive Performance Estimation from Imaging and Biological Biomarkers

    PubMed Central

    Cheng, Bo; Chen, Songcan; Kaufer, Daniel I.

    2013-01-01

    Accurate estimation of cognitive scores for patients can help track the progress of neurological diseases. In this paper, we present a novel semi-supervised multimodal relevance vector regression (SM-RVR) method for predicting clinical scores of neurological diseases from multimodal imaging and biological biomarkers, to help evaluate pathological stage and predict progression of diseases, e.g., Alzheimer’s disease (AD). Unlike most existing methods, we predict clinical scores from multimodal (imaging and biological) biomarkers, including MRI, FDG-PET, and CSF. Considering that the clinical scores of mild cognitive impairment (MCI) subjects are often less stable compared to those of AD and normal control (NC) subjects due to the heterogeneity of MCI, we use only the multimodal data of MCI subjects, but no corresponding clinical scores, to train a semi-supervised model for enhancing the estimation of clinical scores for AD and NC subjects. We also develop a new strategy for selecting the most informative MCI subjects. We evaluate the performance of our approach on 202 subjects with all three modalities of data (MRI, FDG-PET and CSF) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our SM-RVR method achieves a root-mean-square error (RMSE) of 1.91 and a correlation coefficient (CORR) of 0.80 for estimating the MMSE scores, and an RMSE of 4.45 and a CORR of 0.78 for estimating the ADAS-Cog scores, demonstrating very promising performance in AD studies. PMID:23504659
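
    For reference, the two evaluation statistics quoted above (RMSE and CORR) are computed as in the short Python sketch below; the observed and estimated score vectors are synthetic placeholders, not ADNI data.

        # RMSE and Pearson correlation between observed and estimated scores.
        import numpy as np

        observed  = np.array([26.0, 24.0, 29.0, 21.0, 27.0, 23.0])   # e.g. MMSE, synthetic
        estimated = np.array([25.1, 25.0, 28.2, 22.4, 26.5, 24.1])

        rmse = np.sqrt(np.mean((observed - estimated) ** 2))
        corr = np.corrcoef(observed, estimated)[0, 1]
        print(f"RMSE = {rmse:.2f}, CORR = {corr:.2f}")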

  12. Assessing the Potential of Using Value-Added Estimates of Teacher Job Performance for Making Tenure Decisions. Working Paper 31

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Hansen, Michael

    2010-01-01

    Reforming teacher tenure is an idea that appears to be gaining traction with the underlying assumption being that one can infer to a reasonable degree how well a teacher will perform over her career based on estimates of her early-career effectiveness. Here we explore the potential for using value-added models to estimate performance and inform…

  13. Bayesian methodology to estimate and update safety performance functions under limited data conditions: a sensitivity analysis.

    PubMed

    Heydari, Shahram; Miranda-Moreno, Luis F; Lord, Dominique; Fu, Liping

    2014-03-01

    In road safety studies, decision makers must often cope with limited data conditions. In such circumstances, the maximum likelihood estimation (MLE), which relies on asymptotic theory, is unreliable and prone to bias. Moreover, it has been reported in the literature that (a) Bayesian estimates might be significantly biased when using non-informative prior distributions under limited data conditions, and that (b) the calibration of limited data is plausible when existing evidence in the form of proper priors is introduced into analyses. Although the Highway Safety Manual (2010) (HSM) and other research studies provide calibration and updating procedures, the data requirements can be very taxing. This paper presents a practical and sound Bayesian method to estimate and/or update safety performance function (SPF) parameters combining the information available from limited data with the SPF parameters reported in the HSM. The proposed Bayesian updating approach has the advantage of requiring fewer observations to get reliable estimates. This paper documents this procedure. The adopted technique is validated by conducting a sensitivity analysis through an extensive simulation study with 15 different models, which include various prior combinations. This sensitivity analysis contributes to our understanding of the comparative aspects of a large number of prior distributions. Furthermore, the proposed method contributes to unification of the Bayesian updating process for SPFs. The results demonstrate the accuracy of the developed methodology. Therefore, the suggested approach offers considerable promise as a methodological tool to estimate and/or update baseline SPFs and to evaluate the efficacy of road safety countermeasures under limited data conditions.
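
    A heavily simplified sketch of the kind of Bayesian updating described above: a normal prior, standing in for an HSM-type estimate of a single SPF coefficient, is combined with a Poisson likelihood from a handful of local sites on a coarse grid. The one-parameter SPF form, the prior, and the crash counts are illustrative assumptions, not the paper's model.

        # Hedged sketch: grid posterior for one SPF coefficient under limited data.
        import numpy as np

        aadt = np.array([4000., 6500., 9000.])          # three sites only (limited data)
        crashes = np.array([2, 4, 5])                    # observed crash counts, synthetic

        beta_grid = np.linspace(-9.0, -6.0, 601)         # candidate values of beta0
        prior = np.exp(-0.5 * ((beta_grid + 7.5) / 0.5) ** 2)   # prior ~ N(-7.5, 0.5^2)

        # Assumed SPF form: mu_i = exp(beta0) * AADT_i (single unknown parameter).
        log_lik = np.array([
            np.sum(crashes * (b + np.log(aadt)) - np.exp(b) * aadt)
            for b in beta_grid
        ])
        post = prior * np.exp(log_lik - log_lik.max())   # unnormalized posterior
        post /= post.sum()                               # normalize on the uniform grid

        print(f"posterior mean of beta0: {np.sum(beta_grid * post):.2f}")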

  14. A robust and accurate center-frequency estimation (RACE) algorithm for improving motion estimation performance of SinMod on tagged cardiac MR images without known tagging parameters.

    PubMed

    Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei

    2014-11-01

    A robust and accurate center-frequency (CF) estimation (RACE) algorithm for improving the performance of the local sine-wave modeling (SinMod) method, which is a good motion estimation method for tagged cardiac magnetic resonance (MR) images, is proposed in this study. The RACE algorithm can automatically, effectively and efficiently produce a very appropriate CF estimate for the SinMod method, under the circumstance that the specified tagging parameters are unknown, on account of the following two key techniques: (1) the well-known mean-shift algorithm, which can provide accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which can further enhance the accuracy and robustness of CF estimation. Some other available CF estimation algorithms are brought out for comparison. Several validation approaches that can work on the real data without ground truths are specially designed. Experimental results on human body in vivo cardiac data demonstrate the significance of accurate CF estimation for SinMod, and validate the effectiveness of RACE in facilitating the motion estimation performance of SinMod.
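
    Mean shift, the first of the two key techniques above, can be sketched in one dimension on a magnitude spectrum: starting from an initial guess, the estimate is repeatedly moved to the kernel-weighted mean of the frequency axis, with spectral magnitude as the weight. The synthetic signal, kernel bandwidth and starting point below are assumptions, and the two-direction-combination strategy is not reproduced.

        # Hedged 1D mean-shift sketch for locating a dominant spectral frequency.
        import numpy as np

        fs = 256.0                                        # samples per unit length
        x = np.arange(0, 4, 1 / fs)
        signal = np.cos(2 * np.pi * 12.0 * x) + 0.1 * np.random.default_rng(2).normal(size=x.size)

        spec = np.abs(np.fft.rfft(signal))                # magnitude spectrum
        freqs = np.fft.rfftfreq(x.size, 1 / fs)

        def mean_shift_cf(freqs, weights, start, bandwidth=3.0, iters=50):
            cf = start
            for _ in range(iters):
                k = np.exp(-0.5 * ((freqs - cf) / bandwidth) ** 2)   # Gaussian kernel
                cf_new = np.sum(k * weights * freqs) / np.sum(k * weights)
                if abs(cf_new - cf) < 1e-6:
                    break
                cf = cf_new
            return cf

        print(f"estimated center frequency: {mean_shift_cf(freqs, spec, start=8.0):.2f}")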

  15. Eigenstructure-Based Performance Analysis and Toeplitz Approximation for Direction-Of-Arrival Estimators.

    NASA Astrophysics Data System (ADS)

    Wang, Huili

    1990-01-01

    Two problems are considered in this dissertation: (1) Direction-of-arrival estimation analysis, and (2) Incorporating a priori knowledge about covariance matrices, including Toeplitz matrix structure, matrix definiteness, and matrix rank. The first problem is studied based on Wilkinson eigenstructure perturbation analysis for covariance matrices. The second problem is formulated as an optimal Toeplitz approximation problem for observed covariance matrices with constraints including matrix definiteness and matrix rank. For performance analysis, compact formulas for the expectation of products of linear functionals of a complex Gaussian vector and the expectation of products of bilinear forms of a complex Wishart matrix are derived. Applications of the formulas to multivariate statistics are demonstrated, including the calculation of the moments of complex Wishart distributions as well as the moments of Blackman-Tukey's power spectrum estimates. The performance analysis is conducted for small-sample conditions. The analysis predicts that the bias absolute value and the variance of direction-of-arrival estimation for MUSIC are inversely proportional to signal-to-noise ratio and the number of snapshots. The prediction errors are shown to be within 3 dB based on Monte Carlo simulations. For optimal Toeplitzation, it is shown that the usual unbiased estimates of autocorrelation lags in spectrum estimation coincide with the entries of unconstrained least-squares Toeplitz approximations of observed covariance matrices. For matrix positive definiteness, the constrained optimal Toeplitz approximation problem is reduced to an unconstrained problem with a follow-up linear test using Fast Fourier Transform (FFT) for the unconstrained solution. For nonnegative definiteness, an optimization problem constrained by a set of linear inequalities is proposed, whose solution, if it exists, is a suboptimal solution of the nonnegative definite optimal Toeplitz approximation problem. Two
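
    The observation that the unconstrained least-squares Toeplitz approximation reduces to averaging the diagonals of the observed covariance matrix can be verified directly. The Python sketch below builds a sample covariance and its Frobenius-nearest symmetric Toeplitz matrix; the data are synthetic, and the definiteness- and rank-constrained variants are not reproduced.

        # Unconstrained LS Toeplitz approximation = diagonal averaging.
        import numpy as np
        from scipy.linalg import toeplitz

        rng = np.random.default_rng(3)
        X = rng.normal(size=(200, 5))
        R = X.T @ X / X.shape[0]                   # sample covariance (not Toeplitz in general)

        lags = [np.mean(np.diagonal(R, offset=k)) for k in range(R.shape[0])]
        R_toep = toeplitz(lags)                    # closest symmetric Toeplitz matrix (Frobenius norm)

        print(np.round(R_toep, 3))
        print("approximation error:", np.linalg.norm(R - R_toep))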

  16. Combined correlation estimation of axial displacement in optical coherence elastography: assessment of axial displacement sensitivity performance relative to existing methods

    NASA Astrophysics Data System (ADS)

    Grimwood, A.; Messa, A.; Bamber, J. C.

    2015-03-01

    A combined correlation method is introduced to optical coherence elastography for axial displacement estimation. Its performance is compared with that of amplitude correlation tracking and phase shift estimation. Relative sensitivities to small (sub-micron), and large (pixel-scale) axial displacements are analysed for a Perspex test object and gelatine phantom. The combined correlation method exhibited good overall performance, with a larger dynamic range than phase shift estimation and higher sensitivity than amplitude correlation tracking.
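
    Of the three estimators compared above, plain amplitude correlation tracking is the simplest to sketch: the axial shift is taken as the lag of the cross-correlation peak between two A-lines. The speckle-like signals and the 7-sample shift below are synthetic, and the combined correlation and phase-shift estimators are not reproduced.

        # Amplitude correlation tracking of an axial shift between two A-lines.
        import numpy as np

        n = 512
        rng = np.random.default_rng(4)
        pre = rng.normal(size=n)                   # reference A-line (speckle-like), synthetic
        post = np.roll(pre, 7)                     # same speckle shifted axially by 7 samples

        xc = np.correlate(post - post.mean(), pre - pre.mean(), mode="full")
        shift = int(np.argmax(xc)) - (n - 1)       # lag of the correlation peak
        print(f"estimated axial displacement: {shift} samples")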

  17. Fitting multilevel models with ordinal outcomes: performance of alternative specifications and methods of estimation.

    PubMed

    Bauer, Daniel J; Sterba, Sonya K

    2011-12-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ when instead fitting multilevel cumulative logit models to ordinal data: maximum likelihood (ML) or penalized quasi-likelihood (PQL). ML and PQL are compared across variations in sample size, magnitude of variance components, number of outcome categories, and distribution shape. Fitting a multilevel linear model to ordinal outcomes is shown to be inferior in virtually all circumstances. PQL performance improves markedly with the number of ordinal categories, regardless of distribution shape. In contrast to binary data, PQL often performs as well as ML when used with ordinal data. Further, the performance of PQL is typically superior to that of ML when the data include a small to moderate number of clusters (i.e., ≤ 50 clusters).

  18. Effects of instrument settings on radiofrequency ultrasound local estimator images: a preliminary study in a gallbladder model.

    PubMed

    Wang, Jian; Kang, Chunsong; Feng, Tinghua; Xue, Jiping; Shi, Kailing; Li, Tingting; Liu, Xiaofang; Wang, Yu

    2013-10-01

    The aim of the present study was to evaluate the changes in radiofrequency ultrasound local estimator (RULES) images with different instrument settings. An Esaote Technos MPX Color Doppler Ultrasound Machine and RULES were used to capture images of a gallbladder model. The percentages of various colored areas (color filling rates) within the area of interest were calculated using different instrument gains, transducer frequencies and scan depths. Blue was predominant in the lumen of the model gallbladder, while red and green were primarily located near the inner edge of the lumen. When the depth was set at 62 mm and the gain at 105, the total color filling rates did not vary with different transducer frequencies. The blue color filling rate was greatest with a transducer frequency of 8.0 MHz, and the red and green color filling rates were greatest with a frequency of 12.5 MHz. Color variety was greatest when the transducer frequency was 12.5 MHz. When the transducer frequency was 12.5 MHz and the depth was 62 mm, the blue color filling rate was greatest with gains of 105 and 110, the red color filling rate was greatest with gains of 95 and 100 and the green color filling rate was greatest when the gain was 100. The total color filling rate was greatest at gains of 100 and 105. In conclusion, images obtained using RULES may be affected by the instrument gain and, to a certain extent, by transducer frequency.

  19. Landfill mining: Development of a theoretical method for a preliminary estimate of the raw material potential of landfill sites.

    PubMed

    Wolfsberger, Tanja; Nispel, Jörg; Sarc, Renato; Aldrian, Alexia; Hermann, Robert; Höllen, Daniel; Pomberger, Roland; Budischowsky, Andreas; Ragossnig, Arne

    2015-07-01

    In recent years, the rising need for raw materials by emerging economies (e.g. China) has led to a change in the availability of certain primary raw materials, such as ores or coal. The accompanying rising demand for secondary raw materials as possible substitutes for primary resources, the soaring prices and the global lack of specific (e.g. metallic) raw materials pique the interest of science and economy to consider landfills as possible secondary sources of raw materials. These sites often contain substantial amounts of materials that can be potentially utilised materially or energetically. To investigate the raw material potential of a landfill, boreholes and excavations, as well as subsequent hand sorting have proven quite successful. These procedures, however, are expensive and time consuming as they frequently require extensive construction measures on the landfill body or waste mass. For this reason, this article introduces a newly developed, affordable, theoretical method for the estimation of landfill contents. The article summarises the individual calculation steps of the method and demonstrates this using the example of a selected Austrian sanitary landfill. To assess the practicality and plausibility, the mathematically determined raw material potential is compared with the actual results from experimental studies of excavated waste from the same landfill (actual raw material potential). PMID:26185166

  20. Landfill mining: Development of a theoretical method for a preliminary estimate of the raw material potential of landfill sites.

    PubMed

    Wolfsberger, Tanja; Nispel, Jörg; Sarc, Renato; Aldrian, Alexia; Hermann, Robert; Höllen, Daniel; Pomberger, Roland; Budischowsky, Andreas; Ragossnig, Arne

    2015-07-01

    In recent years, the rising need for raw materials by emerging economies (e.g. China) has led to a change in the availability of certain primary raw materials, such as ores or coal. The accompanying rising demand for secondary raw materials as possible substitutes for primary resources, the soaring prices and the global lack of specific (e.g. metallic) raw materials pique the interest of science and economy to consider landfills as possible secondary sources of raw materials. These sites often contain substantial amounts of materials that can be potentially utilised materially or energetically. To investigate the raw material potential of a landfill, boreholes and excavations, as well as subsequent hand sorting have proven quite successful. These procedures, however, are expensive and time consuming as they frequently require extensive construction measures on the landfill body or waste mass. For this reason, this article introduces a newly developed, affordable, theoretical method for the estimation of landfill contents. The article summarises the individual calculation steps of the method and demonstrates this using the example of a selected Austrian sanitary landfill. To assess the practicality and plausibility, the mathematically determined raw material potential is compared with the actual results from experimental studies of excavated waste from the same landfill (actual raw material potential).

  1. Study on the performance evaluation of quantitative precipitation estimation and quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yang, H.; Chang, K.; Suk, M.; cha, J.; Choi, Y.

    2011-12-01

    Rainfall estimation and short-term (several hours) quantitative prediction of precipitation based on meteorological radar data are intensely studied topics. The Korean Peninsula has a narrow land area and complex, mountainous topography, so rainfall systems frequently change character. Quantitative precipitation estimation (QPE) and quantitative precipitation forecasts (QPF) provide crucial information for severe-weather response and water management. We have conducted a performance evaluation of the QPE/QPF products of the Korea Meteorological Administration (KMA), which is the first step toward optimizing the QPE/QPF system in South Korea. The real-time adjusted RAR (Radar-AWS-Rainrate) system gives better agreement with the observed rain rate than the fixed Z-R relation does, and an additional bias correction of RAR yields slightly better results. An R2 of 0.84 is obtained between the daily accumulated observed and RAR-estimated rainfall. RAR will be usable for hydrological applications such as water-budget studies. The VSRF (Very Short Range Forecast) shows better performance than MAPLE (McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation) within 40 minutes, but MAPLE performs better than VSRF after 40 minutes. For hourly forecasts, MAPLE shows better performance than VSRF. QPE and radar-based QPF are most meaningful for nowcasting (1-2 hours), whereas model forecasts longer than 3 hours are especially meaningful for applications such as water management.
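
    The fixed Z-R conversion mentioned above is simple enough to sketch. The Python below converts reflectivity in dBZ to rain rate with a power-law Z-R relation and applies a mean-field bias correction against gauges; the coefficients a = 200 and b = 1.6 are the classic Marshall-Palmer values and, like the reflectivity and gauge numbers, are illustrative assumptions rather than KMA's operational settings.

        # Rain rate from a fixed Z-R relation (Z = a * R**b) plus a gauge bias factor.
        import numpy as np

        a, b = 200.0, 1.6
        dbz = np.array([25.0, 32.0, 38.0, 45.0])        # radar reflectivity (dBZ), synthetic
        Z = 10.0 ** (dbz / 10.0)                        # linear reflectivity (mm^6 m^-3)
        R = (Z / a) ** (1.0 / b)                        # rain rate (mm/h)

        gauge = np.array([1.5, 4.0, 9.5, 30.0])         # co-located gauge rates (mm/h), synthetic
        bias = gauge.sum() / R.sum()                    # mean-field bias correction factor
        print(np.round(R, 2), " bias factor:", round(bias, 2))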

  2. Large-scale Advanced Propfan (LAP) performance, acoustic and weight estimation, January, 1984

    NASA Technical Reports Server (NTRS)

    Parzych, D.; Shenkman, A.; Cohen, S.

    1985-01-01

    In comparison to turbo-prop applications, the Prop-Fan is designed to operate in a significantly higher range of aircraft flight speeds. Two concerns arise regarding operation at very high speeds: aerodynamic performance and noise generation. This data package covers both topics over a broad range of operating conditions for the eight (8) bladed SR-7L Prop-Fan. Operating conditions covered are: flight Mach number 0-0.85; blade tip speed 600-800 ft/sec; and cruise power loading 20-40 SHP/D². Prop-Fan weight and weight scaling estimates are also included.

  3. Aerodynamic design guidelines and computer program for estimation of subsonic wind tunnel performance

    NASA Technical Reports Server (NTRS)

    Eckert, W. T.; Mort, K. W.; Jope, J.

    1976-01-01

    General guidelines are given for the design of diffusers, contractions, corners, and the inlets and exits of non-return tunnels. A system of equations, reflecting the current technology, has been compiled and assembled into a computer program (a user's manual for this program is included) for determining the total pressure losses. The formulation presented is applicable to compressible flow through most closed- or open-throat, single-, double-, or non-return wind tunnels. A comparison of estimated performance with that actually achieved by several existing facilities produced generally good agreement.

  4. Food Provisioning and Parental Status in Songbirds: Can Occupancy Models Be Used to Estimate Nesting Performance?

    PubMed Central

    Corbani, Aude Catherine; Hachey, Marie-Hélène; Desrochers, André

    2014-01-01

    Indirect methods to estimate parental status, such as the observation of parental provisioning, have been problematic due to potential biases associated with imperfect detection. We developed a method to evaluate parental status based on a novel combination of parental provisioning observations and hierarchical modeling. In the summers of 2009 to 2011, we surveyed 393 sites, each on three to four consecutive days at Forêt Montmorency, Québec, Canada. We assessed parental status of 2331 adult songbirds based on parental food provisioning. To account for imperfect detection of parental status, we applied MacKenzie et al.'s (2002) two-state hierarchical model to obtain unbiased estimates of the proportion of sites with successfully nesting birds, and the proportion of adults with offspring. To obtain an independent evaluation of detection probability, we monitored 16 active nests in 2010 and conducted parental provisioning observations away from them. The probability of detecting food provisioning was 0.31 when using nest monitoring, a value within the 0.11 to 0.38 range that was estimated by two-state models. The proportion of adults or sites with broods approached 0.90 and varied depending on date during the sampling season and year, exemplifying the role of eastern boreal forests as highly productive nesting grounds for songbirds. This study offers a simple and effective sampling design for studying avian reproductive performance that could be implemented in national surveys such as breeding bird atlases. PMID:24999969
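
    A minimal sketch of the two-state (single-season) occupancy likelihood referenced above may help: psi is the probability that a site (or adult) has a brood being provisioned, p the probability that provisioning is detected on a single visit, and sites never detected contribute a mixture of "occupied but missed" and "not occupied". The detection histories, parameter values and three-visit design below are synthetic placeholders, not the study's data.

        # Hedged sketch of a MacKenzie-style single-season occupancy likelihood.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        psi_true, p_true, n_sites, n_visits = 0.9, 0.3, 400, 3
        occupied = rng.random(n_sites) < psi_true
        hist = (rng.random((n_sites, n_visits)) < p_true) & occupied[:, None]

        def neg_log_lik(theta):
            psi, p = 1 / (1 + np.exp(-theta))            # parameters on the logit scale
            d = hist.sum(axis=1)                         # detections per site
            lik_if_occ = p ** d * (1 - p) ** (n_visits - d)
            lik = psi * lik_if_occ + (1 - psi) * (d == 0)   # never-detected sites may be empty
            return -np.sum(np.log(lik))

        fit = minimize(neg_log_lik, x0=np.zeros(2), method="Nelder-Mead")
        psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
        print(f"psi estimate {psi_hat:.2f}, detection probability estimate {p_hat:.2f}")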

  5. The Longview/Lakeview Barite Deposits, Southern National Petroleum Reserve, Alaska (NPRA) - Potential-Field Models and Preliminary Size Estimates

    USGS Publications Warehouse

    Schmidt, Jeanine M.; Glen, Jonathan M.G.; Morin, Robert L.

    2009-01-01

    Longview and Lakeview are two of the larger stratiform barite deposits hosted in Mississippian Akmalik Chert in the Cutaway Basin area (Howard Pass C-3 quadrangle) of the southern National Petroleum Reserve, Alaska (NPRA). Geologic studies for the South NPRA Integrated Activity Plan and Environmental Impact Statement process included an attempt to evaluate the possible size of barite resources at Longview and Lakeview by using potential-field geophysical methods (gravity and magnetics). Gravity data from 227 new stations measured by the U.S. Geological Survey, sparse regional gravity data, and new, high-resolution aeromagnetic data were forward modeled simultaneously along seven profiles perpendicular to strike and two profiles along strike of the Longview and Lakeview deposits. These models indicate details of the size and shape of the barite deposits and suggest thicknesses of 15 to 24 m and 9 to 24 m for the Longview and Lakeview deposits, respectively. Two groups of outcrops span 1.8 km of strike length and are likely connected below the surface by barite as much as 10 m thick. Barite of significant thickness (>5 m) is unlikely to occur north of the presently known exposures of the Longview deposit. The barite bodies have irregular (nonplanar) bases suggestive of folding; northwest-trending structures of small apparent offset cross strike at several locations. Dip of the barite is 10 to 25 degrees to the southeast. True width of the bodies (the least certain dimension) is estimated to be 160 to 200 m for Longview and 220 to 260 m for Lakeview. The two bodies contain a minimum of 4.5 million metric tons of barite and more than 38 million metric tons are possible. Grades of the barite are relatively high, with high specific gravities and low impurities. The potential for the Cutaway Basin to host economically minable quantities of barite is uncertain. Heavy-mineral concentrate samples from streams in the area, trace-element analyses, and physical-property

  6. Integrated gasification-combined-cycle power plants - Performance and cost estimates

    SciTech Connect

    Tsatsaronis, G.; Tawfik, T.; Lin, L. )

    1990-04-01

    Several studies of Integrated Gasification-combined-cycle (IGCC) power plants have indicated that these plants have the potential for providing performance and cost improvements over conventional coal-fired steam power plants with flue gas desulfurization. Generally, IGCC power plants have a higher energy-conversion efficiency, require less water, conform with existing environmental standards at lower cost, and are expected to convert coal to electricity at lower costs than coal-fired steam plants. This study compares estimated costs and performance of various IGCC plant design configurations. A second-law analysis identifies the real energy waste in each design configuration. In addition, a thermoeconomic analysis reveals the potential for reducing the cost of electricity generated by an IGCC power plant.

  7. Estimating maximum bite performance in Tyrannosaurus rex using multi-body dynamics.

    PubMed

    Bates, K T; Falkingham, P L

    2012-08-23

    Bite mechanics and feeding behaviour in Tyrannosaurus rex are controversial. Some contend that a modest bite mechanically limited T. rex to scavenging, while others argue that high bite forces facilitated a predatory mode of life. We use dynamic musculoskeletal models to simulate maximal biting in T. rex. Models predict that adult T. rex generated sustained bite forces of 35 000-57 000 N at a single posterior tooth, by far the highest bite forces estimated for any terrestrial animal. Scaling analyses suggest that adult T. rex had a strong bite for its body size, and that bite performance increased allometrically during ontogeny. Positive allometry in bite performance during growth may have facilitated an ontogenetic change in feeding behaviour in T. rex, associated with an expansion of prey range in adults to include the largest contemporaneous animals.

  8. Effects of exercise on perceptual estimation and short-term recall of shooting performance in a biathlon.

    PubMed

    Grebot, Christelle; Groslambert, Alain; Pernin, Jean-Noel; Burtheret, Alain; Rouillon, Jean-Denis

    2003-12-01

    Little is known about the effects of exercise on cognitive function, but in a biathlon it is known that intense skiing exercise decreases shooting performance. The present study was therefore designed to assess the cognitive origin of this decrease by examining the influence of skiing exercise on perceptual estimation and short-term verbal recall of shooting performance in a biathlon. 10 elite biathletes (6 men, 4 women) performed five trials of five shots in the standing position under two conditions, at rest and after a standardised skiing exercise. At the end of each trial, both the actual shooting performance and the perceptual estimation of the shooting performance were measured. A two-way analysis of variance and the effect size indicated a significant decrease in shooting performance after skiing, but no significant difference between the actual and estimated shooting performance. At rest .4% of the shots were not estimated (1 out of 250), whereas after exercise the biathletes were not able to estimate 4.8% of the shots (12 out of 250). Further, only .01% of the nonestimated shots after exercise missed the target, i.e., 3 out of 250. The results suggest that the perceptual estimation of the shooting is not significantly affected by skiing exercise and does not explain the decrease in shooting performance observed after intense exercise. However, intense exercise could increase the difficulty of recalling shooting performance and may force biathletes to use their memory selectively.

  9. Influence of time scale on performance of a psychrometric energy balance method to estimate precipitation phase

    NASA Astrophysics Data System (ADS)

    Harder, P.; Pomeroy, J. W.

    2012-12-01

    Precipitation phase determination is fundamental to estimating catchment hydrological response to precipitation in cold regions, and phase is especially variable over time and space in mountains. Hydrological methods to estimate phase are predominantly calibrated, depend on air temperature and use daily time steps. Air temperature is not physically related to phase, and precipitation events are very dynamic, adding significant uncertainty to the use of daily air temperature indices to estimate phase. Data for this study come from high-quality, high-temporal-resolution precipitation phase and meteorological observations at multiple elevations in a small Canadian Rockies catchment, the Marmot Creek Research Basin, from 2005 to 2012. The psychrometric energy balance of a falling hydrometeor, requiring air temperature and humidity observations, was employed to examine precipitation phase with respect to meteorological conditions via calculation of a hydrometeor temperature. The hydrometeor temperature-precipitation phase relationship was used to quantify temporal scaling in phase observations and to develop a method to estimate precipitation phase. Temporal scaling results show that the transition range of the distribution of hydrometeor temperatures associated with mixed rainfall and snowfall decreases with decreasing time interval. The amount of precipitation also has an influence, as larger events lead to smaller transition ranges across all time scales. The uncertainty of the relationship between the hydrometeor temperature and phase was quantified and degrades significantly with an increase in time interval. The errors associated with the 15-minute and hourly intervals are small. Comparisons with other methods indicate that the psychrometric energy balance method performs much better than air temperature methods and that this improvement increases with decreasing time interval. These findings suggest that the physically based psychrometric method, employed on sub

  10. The VELOPT code for estimating performance of a Fabry-Perot velocimeter

    SciTech Connect

    Goosman, D.R.

    1992-04-09

    The VELOPT code calculates an estimate of the performance of a Fabry-Perot (FP) velocimeter. The code is a macro-driven, Symphony spreadsheet written for an IBM PC. VELOPT is designed to be used in conjunction with the POWER codes, which estimate the amount of light entering a collection fiber and the ratio of collected light to light leaving the laser fiber. In this model of a velocimeter system, single-frequency laser output illuminates a moving test surface through a lens. Reflected light from the test surface is concentrated by a lens into an optical collection fiber. The collected light is presented to a mode scrambler, a cylinder lens, a filter, and then to a striped Fabry-Perot interferometer (FPI). Light leaving the FPI is imaged via spherical lenses and one mirror onto the slit of an electronic streak camera. The image is intensified within the camera, and then is recorded on film. VELOPT takes 47 user inputs that describe the FP velocimeter system. The primary outputs from the code include the following estimates for each of the first four fringes: energy per unit area reaching the film; optical density expected on both Polaroid 667 and TMAX3200 films; velocity and time resolution; and statistical smoothness of the streak records. Twenty-six other secondary output quantities for each fringe are also calculated. The finesse limitation due to the finite size of the mirrors is calculated in detail by the routine WALKOFF, which is internal to VELOPT. An estimate of the reduction in effective fill time of the FPI due to the finite spatial resolution of the streak camera is also calculated by VELOPT.

  11. HPC Usage Behavior Analysis and Performance Estimation with Machine Learning Techniques

    SciTech Connect

    Zhang, Hao; You, Haihang; Hadri, Bilel; Fahey, Mark R

    2012-01-01

    Most researchers with little high performance computing (HPC) experience have difficulties productively using supercomputing resources. To address this issue, we investigated usage behaviors of the world's fastest academic supercomputer, Kraken, and built a knowledge-based recommendation system to improve user productivity. Six clustering techniques, along with three cluster validation measures, were implemented to investigate the underlying patterns of usage behaviors. Besides manually defining a category for very large job submissions, six behavior categories were identified, which cleanly separated the data-intensive jobs from the computation-intensive jobs. Then, job statistics of each behavior category were used to develop a knowledge-based recommendation system that can provide users with instructions about choosing appropriate software packages, setting job parameter values, and estimating job queuing time and runtime. Experiments were conducted to evaluate the performance of the proposed recommendation system, which included 127 job submissions by users from different research fields. Positive feedback indicated the usefulness of the provided information. An average runtime estimation accuracy of 64.2%, with a 28.9% job termination rate, was achieved in the experiments, almost double the average accuracy in the Kraken dataset.

  12. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    SciTech Connect

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  13. Hip fracture risk estimation based on bone mineral density of a biomechanically guided region of interest: a preliminary study

    NASA Astrophysics Data System (ADS)

    Li, Wenjun; Kornak, John; Li, Caixia; Koyama, Alain; Saeed, Isra; Lu, Ying; Lang, Thomas

    2008-03-01

    We aim to define a biomechanically-guided region of interest inside the proximal femur for improving fracture risk prediction based on bone density measurements. The central hypothesis is that by identifying and focusing on the proximal femoral tissues strongly associated with hip fracture risk, we can provide a better densitometric evaluation of fracture risk compared to current evaluations based on anatomically defined regions of interest using DXA or CT. To achieve this, we have constructed a hip statistical atlas of quantitative computed tomography (QCT) images by applying rigid and non-rigid inter-subject image registration to transform hip QCT scans of 15 fractured patients and 15 controls into a common reference space, and performed voxel-by-voxel t-tests between the two groups to identify bone tissues that showed the strongest relevance to hip fracture. Based on identification of this fracture-relevant tissue volume, we have generated a biomechanically-guided region of interest (B-ROI). We have applied BMD measured from this new region of interest to discriminate the fractured patients and controls, and compared it to BMD measured in the total proximal femur. For the femur ROI approach, the BMD values of the fractured patients and the controls had an overlap of 60 mg/cm³, and only 1 out of 15 fractured patients had BMD below the overlap region; for the B-ROI approach, a much narrower BMD overlap region of 28 mg/cm³ was observed, and 11 out of 15 fractured patients had BMDs below the overlap region.
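
    The voxel-wise group comparison that defines the fracture-relevant tissue can be illustrated compactly. The Python below runs a two-sample t-test at every voxel of synthetic registered BMD maps, thresholds the p-values to form a stand-in "B-ROI", and compares mean BMD inside it. Array sizes, the threshold and the effect size are illustrative assumptions, and in practice the ROI would be defined and validated on separate subjects to avoid circularity.

        # Voxel-wise two-sample t-tests on synthetic, registered BMD maps.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(6)
        n_voxels = 5000
        controls = rng.normal(250, 40, size=(15, n_voxels))     # BMD (mg/cm^3), synthetic
        fractured = rng.normal(250, 40, size=(15, n_voxels))
        fractured[:, :500] -= 60                                 # pretend 500 voxels carry the signal

        t, p = ttest_ind(fractured, controls, axis=0)
        roi = p < 0.001                                          # stand-in "biomechanically guided" ROI

        bmd_ctrl = controls[:, roi].mean(axis=1)                 # per-subject mean BMD inside the ROI
        bmd_frac = fractured[:, roi].mean(axis=1)
        print(f"ROI voxels: {roi.sum()}, "
              f"control mean {bmd_ctrl.mean():.0f} vs fractured mean {bmd_frac.mean():.0f} mg/cm^3")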

  14. Preliminary estimates of the quantities of rare-earth elements contained in selected products and in imports of semimanufactured products to the United States, 2010

    USGS Publications Warehouse

    Bleiwas, Donald I.; Gambogi, Joseph

    2013-01-01

    Rare-earth elements (REEs) are contained in a wide range of products of economic and strategic importance to the Nation. The REEs may or may not represent a significant component of that product by mass, value, or volume; however, in many cases, the embedded REEs are critical for the device’s function. Domestic sources of primary supply and the manufacturing facilities to produce products are inadequate to meet U.S. requirements; therefore, a significant percentage of the supply of REEs and the products that contain them are imported to the United States. In 2011, mines in China produced roughly 97 percent of the world’s supply of REEs, and the country’s production of these elements will likely dominate global supply until at least 2020. Preliminary estimates of the types and amount of rare-earth elements, reported as oxides, in semimanufactured form and the amounts used for electric vehicle batteries, catalytic converters, computers, and other applications were developed to provide a perspective on the Nation’s use of these elements. The amount of rare-earth metals recovered from recycling, remanufacturing, and reuse is negligible when the tonnage of products that contain REEs deposited in landfills and retained in storage is considered. Under favorable market conditions, the recovery of REEs from obsolete products could potentially displace a portion of the supply from primary sources.

  15. Artificial Intelligence Techniques for the Estimation of Direct Methanol Fuel Cell Performance

    NASA Astrophysics Data System (ADS)

    Hasiloglu, Abdulsamet; Aras, Ömür; Bayramoglu, Mahmut

    2016-04-01

    Artificial neural networks and neuro-fuzzy inference systems are well-known artificial intelligence techniques used for black-box modelling of complex systems. In this study, feed-forward artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are used for modelling the performance of a direct methanol fuel cell (DMFC). Current density (I), fuel cell temperature (T), methanol concentration (C), liquid flow-rate (q) and air flow-rate (Q) are selected as input variables to predict the cell voltage. Polarization curves are obtained for 35 different operating conditions according to a statistically designed experimental plan. In the modelling study, various subsets of input variables and various types of membership function are considered. A feed-forward architecture with one hidden layer is used in ANN modelling. The optimum performance is obtained with the input set (I, T, C, q) using twelve hidden neurons and a sigmoidal activation function. On the other hand, a first-order Sugeno inference system is applied in ANFIS modelling and the optimum performance is obtained with the input set (I, T, C, q) using sixteen fuzzy rules and triangular membership functions. The test results show that the ANN model estimates the polarization curve of the DMFC more accurately than the ANFIS model.
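    A minimal sketch of the kind of feed-forward network described above — four inputs (I, T, C, q), one hidden layer of twelve sigmoidal neurons, one output (cell voltage) — built with scikit-learn rather than the authors' tooling; the synthetic training data and every hyperparameter not quoted in the abstract are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for the designed experiments: columns are current density I,
    # cell temperature T, methanol concentration C and liquid flow rate q.
    rng = np.random.default_rng(1)
    X = rng.uniform([10, 40, 0.5, 1], [300, 90, 2.0, 5], size=(200, 4))
    # Invented monotone trend standing in for the measured cell voltage (V).
    y = 0.8 - 0.0015 * X[:, 0] + 0.002 * (X[:, 1] - 60) + rng.normal(0, 0.01, 200)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(12,),  # twelve hidden neurons, as quoted
                     activation="logistic",     # sigmoidal activation
                     max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    print("predicted cell voltage:", model.predict([[150.0, 70.0, 1.0, 2.5]])[0])
    ```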

  16. Predictive performance estimation for a dual-battery system in mild-hybrid vehicles

    NASA Astrophysics Data System (ADS)

    Renner, Daniel; Jansen, Patrick; Vergossen, David; John, Werner; Frei, Stephan

    2016-09-01

    Continuously increasing requirements for on-board system performance lead to new topologies for energy distribution in vehicles. One promising concept is the use of a dual-battery system instead of the conventional lead-acid "starting, lighting, ignition" battery. As this system is not able to control the current share between the two batteries, its performance depends on the actual battery-specific operating points. The initial conditions of state of charge, voltage level and temperature influence the current share and lead to a different voltage drop of the system. This paper aims to provide a basic understanding of the current share between the two batteries. The conventional performance estimation method for standalone lead-acid batteries can no longer be applied to this system. Therefore, a new algorithm for the voltage drop calculation of the dual-battery system is proposed. Measurements at different temperatures, states of charge and voltage levels show the system behavior and prove the functionality of the algorithm.

  17. Estimating Gaia's performance for O stars in the Outer Galactic plane using Herschel data

    NASA Astrophysics Data System (ADS)

    Rygl, K. L. J.; Molinari, S.; Prusti, T.; Antoja, T.; Elia, D.; de Bruijne, J.

    2014-07-01

    It is in the less dense Outer Galaxy where Gaia can contribute much to stellar studies of the Galactic Plane. As O stars are by definition young objects, their positions and kinematics can still be related to their formation site and history. O star astrometry will not only be important for studies of high-mass star formation, such as triggered star-formation in shells, but also an interesting complement to the radio maser astrometry of star-forming regions and the structure of spiral arms. With the TLUSTY (Lanz & Hubeny 2013) model atmospheres and the nominal Gaia parallax uncertainty, we estimate the parallax uncertainty for all subtypes of main sequence O stars given a visual extinction. The expected extinction is an important limitation for Gaia's astrometric performance and we estimate the extinction from the column density maps calculated from the Herschel Infrared Galactic Plane survey (Molinari et al. 2010), a thermal cold dust emission survey of unprecedented angular resolution and sensitivity. In the 10° strip, taken to represent the first estimate of the average extinction in the Outer Galaxy, we find that most of the visual extinction is less than 10 mag. Only the densest parts of the clouds have A_V > 10 mag. Given these extinctions toward the Outer Galaxy, Gaia will provide accurate (5σ) astrometry for O stars in the Outer Galaxy up to distances of at least 4-6 kpc, which means that Gaia's O star astrometry will be able to transgress the Perseus arm and reach the less-known Outer Arm of the Milky Way (Rygl et al., https://gaia.ub.edu/Twiki/pub/GREATITNFC/ProgramFinalconference/Poster_Rygl%2cK.pdf).

  18. Brush Seals for Cryogenic Applications: Performance, Stage Effects, and Preliminary Wear Results in LN2 and LH2

    NASA Technical Reports Server (NTRS)

    Proctor, Margaret P.; Walker, James F.; Perkins, H. Douglas; Hoopes, Joan F.; Williamson, G. Scott

    1996-01-01

    Brush seals are compliant contacting seals and have significantly lower leakage than labyrinth seals in gas turbine applications. Their long life and low leakage make them candidates for use in rocket engine turbopumps. Brush seals, 50.8 mm (2 in.) in diameter with a nominal 127-micron (0.005-in.) radial interference, were tested in liquid nitrogen (LN2) and liquid hydrogen (LH2) at shaft speeds up to 35,000 and 65,000 rpm, respectively, and at pressure drops up to 1.21 MPa (175 psid) per brush. A labyrinth seal was also tested in liquid nitrogen to provide a baseline. The LN2 leakage rate of a single brush seal with an initial radial shaft interference of 127 micron (0.005 in.) measured one-half to one-third the leakage rate of a 12-tooth labyrinth seal with a radial clearance of 127 micron (0.005 in.). Two brushes spaced 6.3 mm (0.248 in.) apart leaked about one-half as much as a single brush, and two brushes tightly packed together leaked about three-fourths as much as a single brush. The maximum measured groove depth on the Inconel 718 rotor with a surface finish of 0.81 micron (32 microinch) was 25 micron (0.0010 in.) after 4.3 hr of shaft rotation in liquid nitrogen. The Haynes-25 bristles wore approximately 25 to 76 micron (0.001 to 0.003 in.) under the same conditions. Wear results in liquid hydrogen were significantly different. In liquid hydrogen the rotor did not wear, but the bristle material transferred onto the rotor and the initial 127 micron (0.005 in.) radial interference was consumed. Relatively high leakage rates were measured in liquid hydrogen. More testing is required to verify the leakage performance, to validate and calibrate analysis techniques, and to determine the wear mechanisms. Performance, staging effects, and preliminary wear results are presented.

  19. A simulation environment for estimation of the performance of RSA cages.

    PubMed

    Gammuto, M; Martelli, S; Trozzi, C; Bragonzoni, L; Russo, A

    2008-09-01

    Roentgen stereophotogrammetric analysis (RSA) is an important technique for in vivo evaluation of joint kinematics and surgical outcome. However, its accuracy is highly affected by the experimental set-up. In this paper we present a new software environment for assessing the impact of calibration cage design on the accuracy of the reconstruction of 3D points, which can easily be used for preliminary evaluations, even by non-expert users. The paper presents the methods of the simulator and preliminary results in standard clinical and custom environments. The software was realized using MATLAB and developed for the PC/Windows operating system. It is freeware, available on request from the authors.

  20. A computer program for wing subsonic aerodynamic performance estimates including attainable thrust and vortex lift effects

    NASA Technical Reports Server (NTRS)

    Carlson, H. W.; Walkley, K. B.

    1982-01-01

    Numerical methods incorporated into a computer program to provide estimates of the subsonic aerodynamic performance of twisted and cambered wings of arbitrary planform with attainable thrust and vortex lift considerations are described. The computational system is based on a linearized theory lifting surface solution which provides a spanwise distribution of theoretical leading edge thrust in addition to the surface distribution of perturbation velocities. The approach used relies on a solution by iteration. The method also features a superposition of independent solutions for a cambered and twisted wing and a flat wing of the same planform to provide, at little additional expense, results for a large number of angles of attack or lift coefficients. A previously developed method is employed to assess the portion of the theoretical thrust actually attainable and the portion that is felt as a vortex normal force.

  1. Can the fluctuations of motion be used to estimate the performance of kayak paddlers?

    NASA Astrophysics Data System (ADS)

    Vadai, Gergely; Gingl, Zoltán

    2016-05-01

    Today many compact and efficient on-water data acquisition units help modern coaching by measuring and analyzing various inertial signals during kayaking. One of the most challenging problems is how these signals can be used to estimate performance and to develop the technique. Recently we have introduced indicators based on the fluctuations of the inertial signals as promising additions to the existing parameters. In this work we report on our more detailed analysis, compare new indicators, and discuss the possible advantages of the applied methods. Our primary aim is to draw attention to several exciting and inspiring open problems and to initiate further research even in several related multidisciplinary fields. More detailed information can be found on the dedicated webpage, www.noise.inf.u-szeged.hu/kayak.

  2. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of various populations in data was defined as the primary performance index. Because the multispectral data are also multiclass in nature, a Bayes error estimation procedure dependent on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.
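    The abstract expresses the classification error as an N-dimensional integral over the feature space. One standard way to evaluate such an integral numerically from class statistics alone is Monte Carlo sampling from the class-conditional distributions; the sketch below does this for Gaussian classes, with invented means, covariances, and priors.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def bayes_error_mc(means, covs, priors, n_samples=200_000, seed=0):
        """Monte Carlo estimate of the multiclass Bayes error from class statistics alone."""
        rng = np.random.default_rng(seed)
        dists = [multivariate_normal(m, c) for m, c in zip(means, covs)]
        errors = 0
        for k, (dist, prior) in enumerate(zip(dists, priors)):
            n_k = int(round(prior * n_samples))
            x = dist.rvs(size=n_k, random_state=rng)
            # Score each sample by p(x | class j) * P(class j) and pick the best class
            scores = np.column_stack([p * d.pdf(x) for d, p in zip(dists, priors)])
            errors += np.count_nonzero(scores.argmax(axis=1) != k)
        return errors / n_samples

    # Two-class, two-band illustration (statistics are invented)
    means = [np.zeros(2), np.array([1.5, 1.0])]
    covs = [np.eye(2), np.diag([1.0, 2.0])]
    priors = [0.6, 0.4]
    print("estimated Bayes error:", bayes_error_mc(means, covs, priors))
    ```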

  3. The role of interior watershed processes in improving parameter estimation and performance of watershed models.

    PubMed

    Yen, Haw; Bailey, Ryan T; Arabi, Mazdak; Ahmadi, Mehdi; White, Michael J; Arnold, Jeffrey G

    2014-09-01

    Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the large number of parameters at the disposal of these models, circumstances may arise in which excellent global results are achieved using inaccurate magnitudes of these "intra-watershed" responses. When used for scenario analysis, a given model hence may inaccurately predict the global, in-stream effect of implementing land-use practices at the interior of the watershed. In this study, data regarding internal watershed behavior are used to constrain parameter estimation to maintain realistic intra-watershed responses while also matching available in-stream monitoring data. The methodology is demonstrated for the Eagle Creek Watershed in central Indiana. Streamflow and nitrate (NO3) loading are used as global in-stream comparisons, with two process responses, the annual mass of denitrification and the ratio of NO3 losses from subsurface and surface flow, used to constrain parameter estimation. Results show that imposing these constraints not only yields realistic internal watershed behavior but also provides good in-stream comparisons. Results further demonstrate that in the absence of incorporating intra-watershed constraints, evaluation of nutrient abatement strategies could be misleading, even though typical performance criteria are satisfied. Incorporating intra-watershed responses yields a watershed model that more accurately represents the observed behavior of the system and hence a tool that can be used with confidence in scenario evaluation.

  4. Simultaneous Truth and Performance Level Estimation Through Fusion of Probabilistic Segmentations

    PubMed Central

    Akhondi-Asl, Alireza; Warfield, Simon K.

    2013-01-01

    Recent research has demonstrated that improved image segmentation can be achieved by multiple template fusion utilizing both label and intensity information. However, intensity weighted fusion approaches use local intensity similarity as a surrogate measure of local template quality for predicting target segmentation and do not seek to characterize template performance. This limits both the usefulness and accuracy of these techniques. Our work here was motivated by the observation that the local intensity similarity is a poor surrogate measure for direct comparison of the template image with the true image target segmentation. Although the true image target segmentation is not available, a high quality estimate can be inferred, and this in turn allows a principled estimate to be made of the local quality of each template at contributing to the target segmentation. We developed a fusion algorithm that uses probabilistic segmentations of the target image to simultaneously infer a reference standard segmentation of the target image and the local quality of each probabilistic segmentation. The concept of comparing templates to a hidden reference standard segmentation enables accurate assessments of the contribution of each template to inferring the target image segmentation to be made, and in practice leads to excellent target image segmentation. We have used the new algorithm for the multiple-template-based segmentation and parcellation of magnetic resonance (MR) images of the brain. Intensity and label map images of each one of the aligned templates are used to train a local Gaussian mixture model based classifier. Then, each classifier is used to compute the probabilistic segmentations of the target image. Finally, the generated probabilistic segmentations are fused together using the new fusion algorithm to obtain the segmentation of the target image. We evaluated our method in comparison to other state-of-the-art segmentation methods. We demonstrated that our new
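    A deliberately simplified sketch of the central idea — fusing probabilistic segmentations while simultaneously estimating how much each template can be trusted — is given below. It alternates between a weighted consensus and per-template agreement weights; this is only in the spirit of the approach described, not the authors' algorithm, and the synthetic data are invented.

    ```python
    import numpy as np

    def fuse_probabilistic_segmentations(prob_maps, n_iter=10):
        """prob_maps: (n_templates, n_voxels) foreground probabilities in [0, 1].
        Returns a fused probability map and per-template quality weights."""
        prob_maps = np.asarray(prob_maps, dtype=float)
        weights = np.full(prob_maps.shape[0], 1.0 / prob_maps.shape[0])
        for _ in range(n_iter):
            # Consensus (hidden reference standard) as a performance-weighted average
            consensus = weights @ prob_maps
            # Template quality: agreement with the current consensus estimate
            agreement = 1.0 - np.mean(np.abs(prob_maps - consensus), axis=1)
            weights = agreement / agreement.sum()
        return consensus, weights

    rng = np.random.default_rng(2)
    truth = (rng.random(1000) > 0.7).astype(float)
    noise_sd = np.array([[0.10], [0.20], [0.40]])            # one deliberately noisy template
    templates = np.clip(truth + rng.normal(0.0, noise_sd, size=(3, 1000)), 0.0, 1.0)
    consensus, w = fuse_probabilistic_segmentations(templates)
    print("estimated template weights:", np.round(w, 3))      # noisiest template gets the least weight
    ```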

  5. An Evaluation of Empirical Bayes's Estimation of Value-Added Teacher Performance Measures

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Maxfield, Michelle; Reckase, Mark D.; Thompson, Paul N.; Wooldridge, Jeffrey M.

    2015-01-01

    Empirical Bayes's (EB) estimation has become a popular procedure used to calculate teacher value added, often as a way to make imprecise estimates more reliable. In this article, we review the theory of EB estimation and use simulated and real student achievement data to study the ability of EB estimators to properly rank teachers. We compare the…
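    At its core, EB estimation of a teacher effect shrinks the raw (noisy) estimate toward the grand mean in proportion to its reliability. A minimal sketch under a standard normal-normal model follows; the variance components and example data are illustrative, not from the article.

    ```python
    import numpy as np

    def eb_shrink(raw_effects, n_students, var_teacher, var_student):
        """Empirical Bayes shrinkage of raw teacher value-added estimates.

        Reliability lambda_j = var_teacher / (var_teacher + var_student / n_j);
        shrunken effect = grand_mean + lambda_j * (raw_j - grand_mean).
        """
        raw = np.asarray(raw_effects, dtype=float)
        n = np.asarray(n_students, dtype=float)
        lam = var_teacher / (var_teacher + var_student / n)
        grand_mean = raw.mean()
        return grand_mean + lam * (raw - grand_mean)

    # Teachers with few students are shrunk more heavily toward the mean
    raw = np.array([0.30, -0.25, 0.10, 0.00])   # mean residualized student scores per teacher
    n = np.array([8, 60, 25, 40])               # students per teacher
    print(np.round(eb_shrink(raw, n, var_teacher=0.04, var_student=0.50), 3))
    ```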

  6. Is It Just a Bad Class? Assessing the Long-Term Stability of Estimated Teacher Performance. Working Paper 73

    ERIC Educational Resources Information Center

    Goldhaber, Dan; Hansen, Michael

    2012-01-01

    In this paper we report on work estimating the stability of value-added estimates of teacher effects, an important area of investigation given public interest in workforce policies that implicitly assume effectiveness is a stable attribute within teachers. The results strongly reject the hypothesis that teacher performance is completely stable…

  7. Nuclear Air-Brayton Combined Cycle Power Conversion Design, Physical Performance Estimation and Economic Assessment

    NASA Astrophysics Data System (ADS)

    Andreades, Charalampos

    The combination of an increased demand for electricity for economic development in parallel with the widespread push for adoption of renewable energy sources and the trend toward liberalized markets has placed a tremendous amount of stress on generators, system operators, and consumers. Non-guaranteed cost recovery, intermittent capacity, and highly volatile market prices are all part of new electricity grids. In order to try and remediate some of these effects, this dissertation proposes and studies the design and performance, both physical and economic, of a novel power conversion system, the Nuclear Air-Brayton Combined Cycle (NACC). The NACC is a power conversion system that takes a conventional industrial frame type gas turbine, modifies it to accept external nuclear heat at 670°C, while also maintaining its ability to co-fire with natural gas to increase temperature and power output at a very quick ramp rate. The NACC addresses the above issues by allowing the generator to gain extra revenue through the provision of ancillary services in addition to energy payments, the grid operator to have a highly flexible source of capacity to back up intermittent renewable energy sources, and the consumer to possibly see less volatile electricity prices and a reduced probability of black/brown outs. This dissertation is split into six sections that delve into specific design and economic issues related to the NACC. The first section describes the basic design and modifications necessary to create a functional externally heated gas turbine, sets a baseline design based upon the GE 7FB, and estimates its physical performance under nominal conditions. The second section explores the off-nominal performance of the NACC and characterizes its startup and shutdown sequences, along with some of its safety measures. The third section deals with the power ramp rate estimation of the NACC, a key performance parameter in a renewable-heavy grid that needs flexible capacity. The

  8. Dark Matter Burners: Preliminary Estimate

    SciTech Connect

    Moskalenko, Igor V.; Wai, L.; /SLAC

    2006-09-11

    We show that a star orbiting close enough to an adiabatically grown supermassive black hole can capture a large number of weakly interacting massive particles (WIMPs) during its lifetime. WIMP annihilation energy release in low- to medium-mass stars is comparable with or even exceeds the luminosity of such stars due to thermonuclear burning. The excessive energy release in the stellar core may result in an evolution scenario different from what is expected for a regular star. The model thus predicts the existence of unusual stars within the central parsec of galactic nuclei. If found, such stars would provide evidence for the existence of particle dark matter. The excess luminosity of such stars attributed to WIMP "burning" can be used to infer the local WIMP matter density. A white dwarf with a highly eccentric orbit around the central black hole may exhibit variations in brightness correlated with the orbital phase. On the other hand, white dwarfs shown to lack such orbital brightness variations can be used to provide constraints on WIMP matter density, WIMP-nucleus scattering and pair annihilation cross sections.

  9. Performance of the split-symbol moments SNR estimator in the presence of inter-symbol interference

    NASA Technical Reports Server (NTRS)

    Shah, B.; Hinedi, S.

    1989-01-01

    The Split-Symbol Moments Estimator (SSME) is an algorithm that is designed to estimate symbol signal-to-noise ratio (SNR) in the presence of additive white Gaussian noise (AWGN). The performance of the SSME algorithm in band-limited channels is examined. The effects of the resulting inter-symbol interference (ISI) are quantified. All results obtained are in closed form and can be easily evaluated numerically for performance prediction purposes. Furthermore, they are validated through digital simulations.
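    A hedged sketch of the split-symbol idea follows: each symbol is split into two half-symbol sums, the product of the halves estimates signal power (the independent noise terms average out), and the difference of the halves estimates noise power. This is a simplified variant for real-valued BPSK samples in AWGN with an ideal channel, not the exact estimator analyzed in the report.

    ```python
    import numpy as np

    def split_symbol_snr(samples, samples_per_symbol):
        """Estimate symbol SNR for +/-A BPSK samples in AWGN via split-symbol moments."""
        sym = samples.reshape(-1, samples_per_symbol)
        half = samples_per_symbol // 2
        y1 = sym[:, :half].sum(axis=1)   # first half-symbol accumulation
        y2 = sym[:, half:].sum(axis=1)   # second half-symbol accumulation
        # E[Y1*Y2] isolates signal power; E[(Y1 - Y2)^2] isolates noise power.
        return 4.0 * np.mean(y1 * y2) / np.mean((y1 - y2) ** 2)

    # Quick check at a known symbol SNR
    rng = np.random.default_rng(3)
    n_sym, sps, amp, sigma = 20_000, 8, 1.0, 1.0
    bits = rng.choice([-1.0, 1.0], size=n_sym)
    samples = np.repeat(bits, sps) * amp + rng.normal(0.0, sigma, n_sym * sps)
    true_snr = sps * amp**2 / sigma**2        # = 8, i.e. about 9 dB
    print("true:", true_snr, "estimated:", round(split_symbol_snr(samples, sps), 2))
    ```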

  10. Evaluating the predictive performance of empirical estimators of natural mortality rate using information on over 200 fish species

    USGS Publications Warehouse

    Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.

    2015-01-01

    Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson–Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature and methods based just on K. It is possible to combine two independent methods by computing a weighted mean but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^-0.916, prediction error = 0.32) when possible and a growth-based method (M = 4.118 K^0.73 L∞^-0.33, prediction error = 0.6) otherwise.
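    The two recommended estimators quoted above are one-line power functions of maximum age or of the von Bertalanffy growth parameters, so they are easy to apply directly; the example inputs below are illustrative, not taken from the evaluated data set.

    ```python
    def m_from_tmax(t_max):
        """Longevity-based estimator recommended above: M = 4.899 * t_max ** -0.916."""
        return 4.899 * t_max ** -0.916

    def m_from_growth(k, l_inf):
        """Growth-based alternative: M = 4.118 * K ** 0.73 * L_inf ** -0.33."""
        return 4.118 * k ** 0.73 * l_inf ** -0.33

    # Illustrative stock: maximum observed age 20 yr, K = 0.2 per yr, L_inf = 60 cm
    print(round(m_from_tmax(20), 3), round(m_from_growth(0.2, 60), 3))
    ```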

  11. Preliminary MIPCC Enhanced F-4 and F-15 Performance Characteristics for a First Stage Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Kloesel, Kurt J.

    2013-01-01

    Performance increases in turbojet engines can theoretically be achieved through Mass Injection Pre-Compressor Cooling (MIPCC), a process involving injecting water or oxidizer or both into an afterburning turbojet engine. The injection of water results in pre-compressor cooling, allowing the propulsion system to operate at high altitudes and Mach numbers. In this way, a MIPCC-enhanced turbojet engine could be used to power the first stage of a reusable launch vehicle or be integrated into an existing aircraft that could launch a 100-lbm payload to a reference 100-nm altitude orbit at 28 deg inclination. The two possible candidates for MIPCC flight demonstration that are evaluated in this study are the F-4 Phantom II airplane and the F-15 Eagle airplane (both of McDonnell Douglas, now The Boeing Company, Chicago, Illinois), powered by two General Electric Company (Fairfield, Connecticut) J79 engines and two Pratt & Whitney (East Hartford, Connecticut) F100-PW-100 engines, respectively. This paper presents a conceptual discussion of the theoretical performance of each of these aircraft using MIPCC propulsion techniques. Trajectory studies were completed with the Optimal Trajectories by Implicit Simulation (OTIS) software (NASA Glenn Research Center, Cleveland, Ohio) for a standard F-4 airplane and a standard F-15 airplane. Standard aircraft simulation models were constructed, and the thrust in each was altered in accordance with estimated MIPCC performance characteristics. The MIPCC and production aircraft model results were then reviewed to assess the feasibility of a MIPCC-enhanced propulsion system for use as a first-stage reusable launch vehicle; it was determined that the MIPCC-enhanced F-15 model showed a significant performance advantage over the MIPCC-enhanced F-4 model.

  12. A Long-Pulse Spallation Source at Los Alamos: Facility description and preliminary neutronic performance for cold neutrons

    SciTech Connect

    Russell, G.J.; Weinacht, D.J.; Pitcher, E.J.; Ferguson, P.D.

    1998-03-01

    The Los Alamos National Laboratory has discussed installing a new 1-MW spallation neutron target station in an existing building at the end of its 800-MeV proton linear accelerator. Because the accelerator provides pulses of protons each about 1 msec in duration, the new source would be a Long Pulse Spallation Source (LPSS). The facility would employ vertical extraction of moderators and reflectors, and horizontal extraction of the spallation target. An LPSS uses coupled moderators rather than decoupled ones. There are potential gains of about a factor of 6 to 7 in the time-averaged neutron brightness for cold-neutron production from a coupled liquid H2 moderator compared to a decoupled one. However, these gains come at the expense of putting "tails" on the neutron pulses. The particulars of the neutron pulses from a moderator (e.g., energy-dependent rise times, peak intensities, pulse widths, and decay constant(s) of the tails) are crucial parameters for designing instruments and estimating their performance at an LPSS. Tungsten is the reference target material. Inconel 718 is the reference target canister and proton beam window material, with Al-6061 being the choice for the liquid H2 moderator canister and vacuum container. A 1-MW LPSS would have world-class neutronic performance. The authors describe the proposed Los Alamos LPSS facility, and show that, for cold neutrons, the calculated time-averaged neutronic performance of a liquid H2 moderator at the 1-MW LPSS is equivalent to about 1/4th the calculated neutronic performance of the best liquid D2 moderator at the Institut Laue-Langevin reactor. They show that the time-averaged moderator neutronic brightness increases as the size of the moderator gets smaller.

  13. Multiple loading conditions analysis can improve the association between finite element bone strength estimates and proximal femur fractures: a preliminary study in elderly women.

    PubMed

    Falcinelli, Cristina; Schileo, Enrico; Balistreri, Luca; Baruffaldi, Fabio; Bordini, Barbara; Viceconti, Marco; Albisinni, Ugo; Ceccarelli, Francesco; Milandri, Luigi; Toni, Aldo; Taddei, Fulvia

    2014-10-01

    This is a preliminary case-control study on osteopenic/osteoporotic elderly women, testing the association of proximal femur fracture with minimum femoral strength, as derived from finite element (FE) analysis in multiple loading conditions. Fracture cases (n=22) in acute conditions were enrolled among low-trauma fractures admitted in various hospitals in the Emilia Romagna Region, Italy. Women with no history of low-trauma fractures were enrolled as controls (n=33). Patients were imaged with DXA to obtain aBMD, and with a bilateral full femur CT scan. FE-strength was derived in stance and fall configurations: (i) as the minimum strength among those obtained for multiple loading conditions spanning a domain of plausible force directions, and (ii) as the strength associated with the most commonly used single loading conditions. The association of FE-strength and aBMD with fractures was tested with logistic regression models, deriving odds ratios (ORs) and area under the receiver operating characteristic curve (AUC). FE-strength from multiple loading conditions better discriminated fracture cases from controls (OR per SD change=9.6, 95% CI=3.0-31.3, AUC=0.87 in stance; OR=9.5, 95% CI=2.9-31.2, AUC=0.88 in fall) compared to aBMD (OR=3.6, 95% CI=1.6-8.2, AUC=0.79 for total femur aBMD), while FE-strength results from the most commonly used single loading conditions were similar to aBMD. Only FE-strength from multiple loading conditions remained significant in age- and aBMD-adjusted models (OR=10.5, 95% CI=1.8-61.3, AUC=0.95). In summary, we highlighted the importance of considering different loading conditions to identify bone weakness, and confirmed that femoral FE-strength estimates may add value to aBMD predictions in elderly osteopenic/osteoporotic women.

  14. High-performance object tracking and fixation with an online neural estimator.

    PubMed

    Kumarawadu, Sisil; Watanabe, Keigo; Lee, Tsu-Tian

    2007-02-01

    Vision-based target tracking and fixation to keep objects that move in three dimensions in view is important for many tasks in several fields including intelligent transportation systems and robotics. Much of the visual control literature has focused on the kinematics of visual control and ignored a number of significant dynamic control issues that limit performance. In line with this, this paper presents a neural network (NN)-based binocular tracking scheme for high-performance target tracking and fixation with minimum sensory information. The procedure allows the designer to take into account the physical (Lagrangian dynamics) properties of the vision system in the control law. The design objective is to synthesize a binocular tracking controller that explicitly takes the system dynamics into account, yet needs no knowledge of dynamic nonlinearities and joint velocity sensory information. The combined neurocontroller-observer scheme can guarantee the uniform ultimate bounds of the tracking, observer, and NN weight estimation errors under fairly general conditions on the controller-observer gains. The controller is tested and verified via simulation tests in the presence of severe target motion changes.

  15. Performance of a microenviromental model for estimating personal NO2 exposure in children

    NASA Astrophysics Data System (ADS)

    Mölter, Anna; Lindley, Sarah; de Vocht, Frank; Agius, Raymond; Kerry, Gina; Johnson, Katy; Ashmore, Mike; Terry, Andrew; Dimitroulopoulou, Sani; Simpson, Angela

    2012-05-01

    A common problem in epidemiological studies on air pollution is exposure misclassification, because investigators often assume exposure is equivalent to outdoor concentrations at participants' homes or at the nearest urban monitor. The aims of this study were: (1) to develop a new microenvironmental exposure model (MEEM), combining time-activity data with modelled outdoor and indoor NO2 concentrations; (2) to evaluate MEEM against data collected with Ogawa™ personal samplers (OPS); (3) to compare its performance against datasets typically used in epidemiological studies. Schoolchildren wore a personal NO2 sampler, kept a time-activity diary and completed a questionnaire. This information was used by MEEM to estimate individuals' exposures. These were then compared against concentrations measured by OPS, modelled outdoor concentrations at the children's home (HOME) and concentrations measured at the nearest urban monitoring station (NUM). The mean exposure predicted by MEEM (mean = 19.6 μg m⁻³) was slightly lower than the mean exposure measured by OPS (mean = 20.4 μg m⁻³). The normalised mean bias factor (0.01) and normalised mean absolute error factor (0.25) suggested good agreement. In contrast, the HOME (mean = 31.2 μg m⁻³) and NUM (mean = 28.6 μg m⁻³) methods overpredicted exposure and showed systematic errors. The results indicate that personal exposure can be modelled by MEEM with an acceptable level of agreement, while methods such as HOME and NUM show a poorer performance.
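    The agreement statistics quoted above can be computed directly from paired modelled and observed exposures. The sketch below uses one common definition of the normalised mean bias factor and normalised mean absolute error factor (normalising by whichever of the modelled or observed totals is smaller); this definition and the example concentrations are assumptions, not taken from the study.

    ```python
    import numpy as np

    def nmbf(modelled, observed):
        """Normalised mean bias factor (one common, sign-symmetric definition)."""
        m, o = np.asarray(modelled, float), np.asarray(observed, float)
        return m.sum() / o.sum() - 1.0 if m.mean() >= o.mean() else 1.0 - o.sum() / m.sum()

    def nmaef(modelled, observed):
        """Normalised mean absolute error factor, normalised like nmbf."""
        m, o = np.asarray(modelled, float), np.asarray(observed, float)
        denom = o.sum() if m.mean() >= o.mean() else m.sum()
        return np.abs(m - o).sum() / denom

    # Illustrative paired personal NO2 exposures in ug/m3 (not the study data)
    observed = np.array([20.4, 18.0, 22.5, 19.1])
    modelled = np.array([19.6, 18.5, 21.0, 19.9])
    print(round(nmbf(modelled, observed), 3), round(nmaef(modelled, observed), 3))
    ```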

  16. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 1, Third comparison with 40 CFR 191, Subpart B

    SciTech Connect

    Not Available

    1992-12-01

    Before disposing of transuranic radioactive wastes in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This volume contains an overview of WIPP performance assessment and a preliminary comparison with the long-term requirements of the Environmental Radiation Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B).

  17. Performance evaluation of diverse T-wave alternans estimators under variety of noise characterizations and alternans distributions.

    PubMed

    Bakhshi, Asim Dilawer; Bashir, Sajid; Shafi, Imran; Maud, Mohammad Ali

    2012-12-01

    Prognostic significance of microvolt T-wave alternans (TWA) has been established since its inclusion among important risk stratifiers for sudden cardiac death. Signal processing schemes employed for TWA estimation have their peculiar theoretical assumptions and reported statistics. An unbiased comparison of all these techniques is still a challenge. Choosing three classical schemes, this study aims to achieve holistic performance evaluation of diverse TWA estimators from a three-dimensional standpoint, i.e., estimation statistics, alternans distribution and ECG signal quality. Three performance indices called average deviation (ϑ(L)), moment of deviation (ϑ(m)) and coefficient of deviation ([Formula: see text]) are devised to quantify estimator performance and consistency. Both synthetic and real physiological noises, as well as a variety of temporal distributions of alternan waveforms, are simulated to evaluate estimators' responses. Results show that modification of original estimation statistics, consideration of relevant noise models and a priori knowledge of alternan distribution is necessary for an unbiased performance comparison. The spectral method proves to be the most accurate for stationary TWA, even at SNRs as low as 5 dB. The correlation method's strength lies in accurately detecting temporal origins of multiple alternan episodes within a single analysis window. The modified moving average method gives the best estimation at lower noise levels (SNR >25 dB) for non-stationary TWA. Estimation of both MMAM and CM is adversely affected by even small baseline drifts due to respiration, although CM gives considerably higher deviation levels than MMAM. Performance of SM is only affected when the fundamental frequency of baseline drift due to respiration falls within the estimation band around 0.5 cpb. PMID:23225303

  18. Orion Exploration Flight Test-1 Post-Flight Navigation Performance Assessment Relative to the Best Estimated Trajectory

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.; Holt, Greg N.; Zanetti, Renato

    2016-01-01

    This paper details the post-flight navigation performance assessment of the Orion Exploration Flight Test-1 (EFT-1). Results of each flight phase are presented: Ground Align, Ascent, Orbit, and Entry Descent and Landing. This study examines the on-board Kalman Filter uncertainty along with state deviations relative to the Best Estimated Trajectory (BET). Overall the results show that the Orion Navigation System performed as well or better than expected. Specifically, the Global Positioning System (GPS) measurement availability was significantly better than anticipated at high altitudes. In addition, attitude estimation via processing GPS measurements along with Inertial Measurement Unit (IMU) data performed very well and maintained good attitude throughout the mission.

  19. Label fusion in atlas-based segmentation using a selective and iterative method for performance level estimation (SIMPLE).

    PubMed

    Langerak, Thomas Robin; van der Heide, Uulke A; Kotte, Alexis N T J; Viergever, Max A; van Vulpen, Marco; Pluim, Josien P W

    2010-12-01

    In a multi-atlas based segmentation procedure, propagated atlas segmentations must be combined in a label fusion process. Some current methods deal with this problem by using atlas selection to construct an atlas set either prior to or after registration. Other methods estimate the performance of propagated segmentations and use this performance as a weight in the label fusion process. This paper proposes a selective and iterative method for performance level estimation (SIMPLE), which combines both strategies in an iterative procedure. In subsequent iterations the method refines both the estimated performance and the set of selected atlases. For a dataset of 100 MR images of prostate cancer patients, we show that the results of SIMPLE are significantly better than those of several existing methods, including the STAPLE method and variants of weighted majority voting. PMID:20667809
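    A compact sketch of the selective-and-iterative idea described above: estimate a consensus from the currently selected atlases, score each atlas against it, drop poor performers, and repeat. The Dice scoring and the mean-minus-one-standard-deviation drop rule below are illustrative simplifications rather than the published parameterization, and the data are synthetic.

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two binary masks."""
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def simple_fusion(label_maps, n_iter=5, drop_sd=1.0):
        """label_maps: (n_atlases, n_voxels) binary propagated segmentations."""
        label_maps = np.asarray(label_maps, dtype=bool)
        selected = np.ones(label_maps.shape[0], dtype=bool)
        for _ in range(n_iter):
            consensus = label_maps[selected].mean(axis=0) >= 0.5      # majority vote of selected atlases
            scores = np.array([dice(m, consensus) for m in label_maps])
            threshold = scores[selected].mean() - drop_sd * scores[selected].std()
            selected &= scores >= threshold                           # discard poor performers
        return consensus, selected

    rng = np.random.default_rng(4)
    truth = rng.random(2000) > 0.6
    error_rates = (0.02, 0.05, 0.05, 0.30)                            # one deliberately poor atlas
    atlases = np.array([np.where(rng.random(2000) < e, ~truth, truth) for e in error_rates])
    fused, kept = simple_fusion(atlases)
    print("atlases kept:", kept, "fusion vs truth Dice:", round(dice(fused, truth), 3))
    ```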

  20. Investigating the performance of neural network backpropagation algorithms for TEC estimations using South African GPS data

    NASA Astrophysics Data System (ADS)

    Habarulema, J. B.; McKinnell, L.-A.

    2012-05-01

    In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimations using identical datasets in model development and verification processes. Investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC) and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS) and the main objective was to find out the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MATLAB-based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000-2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide comparable results statistically, but differ significantly in terms of time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide to neural network users for choosing appropriate algorithms based on the availability of computation capabilities used for research.

  1. Performance and operational economics estimates for a coal gasification combined-cycle cogeneration powerplant

    NASA Technical Reports Server (NTRS)

    Nainiger, J. J.; Burns, R. K.; Easley, A. J.

    1982-01-01

    A performance and operational economics analysis is presented for an integrated-gasifier, combined-cycle (IGCC) system to meet the steam and baseload electrical requirements. The effect of time variations in steam and electrical requirements is included. The amount and timing of electricity purchases from and sales to the electric utility are determined. The resulting expenses for purchased electricity and revenues from electricity sales are estimated by using an assumed utility rate structure model. Cogeneration results for a range of potential IGCC cogeneration system sizes are compared with the fuel consumption and costs of natural gas and electricity to meet requirements without cogeneration. The results indicate that an IGCC cogeneration system could save about 10 percent of the total fuel energy presently required to supply steam and electrical requirements without cogeneration. Also, for the assumed future fuel and electricity prices, an annual operating cost savings of 21 percent to 26 percent could be achieved with such a cogeneration system. An analysis of the effects of electricity price, fuel price, and system availability indicates that the IGCC cogeneration system has a good potential for economical operation over a wide range in these assumptions.

  2. High-Performance Motion Estimation for Image Sensors with Video Compression

    PubMed Central

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME. PMID:26307996

  3. Rapid estimation of concentration of aromatic classes in middistillate fuels by high-performance liquid chromatography

    NASA Technical Reports Server (NTRS)

    Otterson, D. A.; Seng, G. T.

    1985-01-01

    A high-performance liquid chromatography (HPLC) method to estimate four aromatic classes in middistillate fuels is presented. Average refractive indices are used in a correlation to obtain the concentrations of each of the aromatic classes from HPLC data. The aromatic class concentrations can be obtained in about 15 min when the concentration of the aromatic group is known. Seven fuels with a wide range of compositions were used to test the method. Relative errors in the concentration of the two major aromatic classes were not over 10 percent. Absolute errors of the minor classes were all less than 0.3 percent. The data show that errors in group-type analyses using sulfuric acid-derived standards are greater for fuels containing high concentrations of polycyclic aromatics. Corrections are based on the change in refractive index of the aromatic fraction which can occur when sulfuric acid and the fuel react. These corrections improved both the precision and the accuracy of the group-type results.

  4. High-Performance Motion Estimation for Image Sensors with Video Compression.

    PubMed

    Xu, Weizhi; Yin, Shouyi; Liu, Leibo; Liu, Zhiyong; Wei, Shaojun

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They give different choices with tradeoff between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Comparing the new inter-frame data reuse scheme with the traditional intra-frame data reuse scheme, the memory traffic can be reduced by 50% for VC-ME.

  5. Time estimates in a long-term time-free environment. [human performance

    NASA Technical Reports Server (NTRS)

    Lavie, P.; Webb, W. B.

    1975-01-01

    Subjects in a time-free environment for 14 days estimated the hour and day several times a day. Half of the subjects were under a heavy exercise regime. During the waking hours, the no-exercise group showed no difference between estimated and real time, whereas the exercise group showed significantly shorter estimated than real time. Neither group showed a difference after the sleeping periods. However, the mean accumulated error for the two groups was 48.73 hours and was strongly related to the displacements of sleep/waking behavior. It is concluded that behavioral cues are the primary determinants of time estimates in time-free environments.

  6. Monitoring the Microgravity Environment Quality On-board the International Space Station Using Soft Computing Techniques. Part 2; Preliminary System Performance Results

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Lin, Paul P.; Weiss, Daniel S.

    2002-01-01

    This paper presents the preliminary performance results of the artificial intelligence monitoring system in full operational mode using near real time acceleration data downlinked from the International Space Station. Preliminary microgravity environment characterization analysis results for the International Space Station (Increment-2), obtained using the monitoring system, are presented. Also, comparison between the system predicted performance based on ground test data for the US laboratory "Destiny" module and actual on-orbit performance, using measured acceleration data from the U.S. laboratory module of the International Space Station is presented. Finally, preliminary on-orbit disturbance magnitude levels are presented for the Experiment of Physics of Colloids in Space, which are compared with ground test data. The ground test data for the Experiment of Physics of Colloids in Space were acquired from the Microgravity Emission Laboratory, located at the NASA Glenn Research Center, Cleveland, Ohio. The artificial intelligence monitoring system was developed by the NASA Glenn Principal Investigator Microgravity Services Project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment of time, on-board the International Space Station, which might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor via a dynamic graphical display, implemented in Java, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressor, crew exercise, structural modes, etc., and decide whether or not to run their experiments, whenever that is an option, based on the acceleration magnitude and frequency sensitivity associated with that experiment. This monitoring system detects primarily the vibratory disturbance sources. The system has built-in capability to detect both known

  7. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
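    Given the information-criterion values for the alternative models, the averaging weights follow the usual exponential rescaling of criterion differences. The sketch below shows this mechanic and how smaller criterion gaps (as one would expect after accounting for correlated total errors) spread the weights across models; the criterion values are invented for illustration.

    ```python
    import numpy as np

    def model_averaging_weights(ic_values):
        """Weights w_k proportional to exp(-0.5 * (IC_k - IC_min)), as used with AIC/AICc/BIC/KIC."""
        ic = np.asarray(ic_values, dtype=float)
        w = np.exp(-0.5 * (ic - ic.min()))
        return w / w.sum()

    # Large criterion gaps: essentially 100% weight on the "best" model
    print(np.round(model_averaging_weights([210.0, 245.0, 260.0]), 4))
    # Smaller gaps: weight is shared across the alternative conceptual models
    print(np.round(model_averaging_weights([212.0, 214.5, 215.0]), 4))
    ```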

  8. Fitting Multilevel Models with Ordinal Outcomes: Performance of Alternative Specifications and Methods of Estimation

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Sterba, Sonya K.

    2011-01-01

    Previous research has compared methods of estimation for fitting multilevel models to binary data, but there are reasons to believe that the results will not always generalize to the ordinal case. This article thus evaluates (a) whether and when fitting multilevel linear models to ordinal outcome data is justified and (b) which estimator to employ…

  9. Time Estimation and Performance on Reproduction Tasks in Subtypes of Children with Attention Deficit Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Salas, Carmen C.; Bauermeister, Jose J.; Barkley, Russell A.; Martinez, Jose V.; Cumba, Eduardo; Ramirez, Rafael R.; Reina, Graciela; Matos, Maribel

    2005-01-01

    This study compared Hispanic children (ages 7 to 11) with combined type (CT, n = 33) and inattentive type (IT, n = 21) attention deficit hyperactivity disorder (ADHD) and a control group (n = 25) on time-estimation and time-reproduction tasks. The ADHD groups showed larger errors in time reproduction but not in time estimation than the control…

  10. Security-reliability performance of cognitive AF relay-based wireless communication system with channel estimation error

    NASA Astrophysics Data System (ADS)

    Gu, Qi; Wang, Gongpu; Gao, Li; Peng, Mugen

    2014-12-01

    In this paper, both the security and the reliability performance of the cognitive amplify-and-forward (AF) relay system are analyzed in the presence of channel estimation error. The reliability and the security performance are represented by the outage probability and the intercept probability, respectively. Instead of perfect channel state information (CSI) predominantly assumed in the literature, a certain channel estimation algorithm and the influence of the corresponding channel estimation error are considered in this study. Specifically, linear minimum mean square error estimation (LMMSE) is utilized by the destination node and the eavesdropper node to obtain the CSI, and the closed form for the outage probability and that for the intercept probability are derived with the channel estimation error. It is shown that the transmission security (reliability) can be improved by loosening the reliability (security) requirement. Moreover, we compare the security and reliability performance of this relay-based cognitive radio system with those of the direct communication system without relay. Interestingly, it is found that the AF relay-based system has poorer reliability performance than the direct cognitive radio system; however, it achieves a lower sum of the outage probability and the intercept probability than the direct communication system. It is also found that there exists an optimal training number to minimize the sum of the outage probability and the intercept probability.

  11. Preliminary parametric performance assessment of potential final waste forms for alpha low-level waste at the Idaho National Engineering Laboratory. Revision 1

    SciTech Connect

    Smith, T.H.; Sussman, M.E.; Myers, J.; Djordjevic, S.M.; DeBiase, T.A.; Goodrich, M.T.; DeWitt, D.

    1995-08-01

    This report presents a preliminary parametric performance assessment (PA) of potential waste disposal systems for alpha-contaminated, mixed, low-level waste (ALLW) currently stored at the Transuranic Storage Area of INEL. The ALLW, which contains from 10 to 100 nCi/g of transuranic (TRU) radionuclides, is awaiting treatment and disposal. The purpose of this study was to examine the effects of several parameters on the radiological-confinement performance of potential disposal systems for the ALLW. The principal emphasis was on the performance of final waste forms (FWFs). Three categories of FWF (cement, glass, and ceramic) were addressed by evaluating the performance of two limiting FWFs for each category. Performance at five conceptual disposal sites was evaluated to illustrate the effects of site characteristics on the performance of the total disposal system. Other parameters investigated for effects on receptor dose included inventory assumptions, TRU radionuclide concentration, FWF fracture, disposal depth, water infiltration rates, subsurface-transport modeling assumptions, receptor well location, intrusion scenario assumptions, and the absence of waste immobilization. These and other factors were varied singly and in some combinations. The results indicate that compliance of the treated and disposed ALLW with the performance objectives depends on the assumptions made, as well as on the FWF and the disposal site. Some combinations result in compliance, while others do not. The implications of these results for decision making relative to treatment and disposal of the INEL ALLW are discussed. The report compares the degree of conservatism in this preliminary parametric PA against that in four other PAs and one risk assessment. All of the assessments addressed the same disposal site, but different wastes. The report also presents a qualitative evaluation of the uncertainties in the PA and makes recommendations for further study.

  12. Using artificial neural networks to predict the performance of a liquid metal reflux solar receiver: Preliminary results

    SciTech Connect

    Fowler, M.M.

    1995-12-31

    Three and four-layer backpropagation artificial neural networks have been used to predict the power output of a liquid metal reflux solar receiver. The networks were trained using on-sun test data recorded at Sandia National Laboratories in Albuquerque, New Mexico. The preliminary results presented in this paper are a comparison of how different size networks train on this particular data. The results give encouragement that it will be possible to predict output power of a liquid metal receiver under a variety of operating conditions using artificial neural networks.
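
    For readers unfamiliar with the network type used in this record, the sketch below trains a small three-layer (single hidden layer) backpropagation network on synthetic data. The input features and targets are invented stand-ins, not the Sandia on-sun test records.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder training data: inputs could be, e.g., direct normal insolation
# and ambient temperature; the target is receiver output power (all synthetic).
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (0.8 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.standard_normal(200)).reshape(-1, 1)

n_in, n_hidden, n_out = 2, 6, 1
W1 = rng.standard_normal((n_in, n_hidden)) * 0.5
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out)) * 0.5
b2 = np.zeros(n_out)
lr = 0.05

for epoch in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    # backward pass (mean squared error loss)
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)
    # gradient-descent update
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```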

  13. A CT-ultrasound-coregistered augmented reality enhanced image-guided surgery system and its preliminary study on brain-shift estimation

    NASA Astrophysics Data System (ADS)

    Huang, C. H.; Hsieh, C. H.; Lee, J. D.; Huang, W. C.; Lee, S. T.; Wu, C. T.; Sun, Y. N.; Wu, Y. T.

    2012-08-01

    By combining a view of the physical space with the medical imaging data, augmented reality (AR) visualization can provide perceptual advantages during image-guided surgery (IGS). However, the imaging data are usually captured before surgery and may differ from the current anatomy because of the natural shift of soft tissues. This study presents an AR-enhanced IGS system capable of correcting for soft-tissue movement in the pre-operative CT images by using intra-operative ultrasound images. First, after reconstructing 2-D free-hand ultrasound images into a 3-D volume, the system applies a mutual-information-based registration algorithm to estimate the deformation between pre-operative and intra-operative ultrasound images. The estimated deformation transform describes the movement of soft tissues and is then applied to the pre-operative CT images, which provide high-resolution anatomical information. The system then displays the fusion of the corrected CT images or the real-time 2-D ultrasound images with the patient in the physical space through a head-mounted display device, providing an immersive augmented-reality environment. For performance validation, a brain phantom was used to simulate a brain-shift scenario. Experimental results show that when an artificial tumor is shifted by 5 mm to 12 mm, the correction rate improves from 32%-45% to 87%-95% with the proposed system.
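
    The similarity score that drives this kind of intensity-based registration can be sketched as below: a histogram-based mutual-information measure that a registration routine would maximize over candidate transforms. The binning and the toy images are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy example: a registration search would keep the transform that maximizes
# this score between the fixed volume and the warped moving volume.
rng = np.random.default_rng(2)
fixed  = rng.random((64, 64))
moving = fixed + 0.05 * rng.standard_normal((64, 64))     # roughly aligned copy
print("MI (aligned)    =", mutual_information(fixed, moving))
print("MI (misaligned) =", mutual_information(fixed, np.roll(moving, 10, axis=0)))
```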

  14. Computer programs for estimation of STOL takeoff, landing, and static performance

    NASA Technical Reports Server (NTRS)

    Post, S. E.

    1972-01-01

    A set of computer programs has been developed for evaluating the performance of powered-lift STOL aircraft. Included are a static performance summary and dynamic calculations of takeoff and landing performance. The input, output, options, and calculations for each program are described. The programs are written in FORTRAN IV and are currently available on TSS 360. Three independent sections are presented corresponding to the three programs: (1) static performance, (2) takeoff performance, and (3) landing performance.

  15. Do trout swim better than eels? Challenges for estimating performance based on the wake of self-propelled bodies

    NASA Astrophysics Data System (ADS)

    Tytell, Eric D.

    Engineers and biologists have long desired to compare propulsive performance for fishes and underwater vehicles of different sizes, shapes, and modes of propulsion. Ideally, such a comparison would be made on the basis of either propulsive efficiency, total power output or both. However, estimating the efficiency and power output of self-propelled bodies, and particularly fishes, is methodologically challenging because it requires an estimate of thrust. For such systems traveling at a constant velocity, thrust and drag are equal, and can rarely be separated on the basis of flow measured in the wake. This problem is demonstrated using flow fields from swimming American eels, Anguilla rostrata, measured using particle image velocimetry (PIV) and high-speed video. Eels balance thrust and drag quite evenly, resulting in virtually no wake momentum in the swimming (axial) direction. On average, their wakes resemble those of self-propelled jet propulsors, which have been studied extensively. Theoretical studies of such wakes may provide methods for the estimation of thrust separately from drag. These flow fields are compared with those measured in the wakes of rainbow trout, Oncorhynchus mykiss, and bluegill sunfish, Lepomis macrochirus. In contrast to eels, these fishes produce wakes with axial momentum. Although the net momentum flux must be zero on average, it is neither spatially nor temporally homogeneous; the heterogeneity may provide an alternative route for estimating thrust. This review shows examples of wakes and velocity profiles from the three fishes, indicating challenges in estimating efficiency and power output and suggesting several routes for further experiments. Because these estimates will be complicated, a much simpler method for comparing performance is outlined, using as a point of comparison the power lost producing the wake. This wake power, a component of the efficiency and total power, can be estimated in a straightforward way from the flow

  16. Do trout swim better than eels? Challenges for estimating performance based on the wake of self-propelled bodies

    NASA Astrophysics Data System (ADS)

    Tytell, Eric D.

    2007-11-01

    Engineers and biologists have long desired to compare propulsive performance for fishes and underwater vehicles of different sizes, shapes, and modes of propulsion. Ideally, such a comparison would be made on the basis of either propulsive efficiency, total power output or both. However, estimating the efficiency and power output of self-propelled bodies, and particularly fishes, is methodologically challenging because it requires an estimate of thrust. For such systems traveling at a constant velocity, thrust and drag are equal, and can rarely be separated on the basis of flow measured in the wake. This problem is demonstrated using flow fields from swimming American eels, Anguilla rostrata, measured using particle image velocimetry (PIV) and high-speed video. Eels balance thrust and drag quite evenly, resulting in virtually no wake momentum in the swimming (axial) direction. On average, their wakes resemble those of self-propelled jet propulsors, which have been studied extensively. Theoretical studies of such wakes may provide methods for the estimation of thrust separately from drag. These flow fields are compared with those measured in the wakes of rainbow trout, Oncorhynchus mykiss, and bluegill sunfish, Lepomis macrochirus. In contrast to eels, these fishes produce wakes with axial momentum. Although the net momentum flux must be zero on average, it is neither spatially nor temporally homogeneous; the heterogeneity may provide an alternative route for estimating thrust. This review shows examples of wakes and velocity profiles from the three fishes, indicating challenges in estimating efficiency and power output and suggesting several routes for further experiments. Because these estimates will be complicated, a much simpler method for comparing performance is outlined, using as a point of comparison the power lost producing the wake. This wake power, a component of the efficiency and total power, can be estimated in a straightforward way from the flow

  17. Application of a Constant Gain Extended Kalman Filter for In-Flight Estimation of Aircraft Engine Performance Parameters

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.; Litt, Jonathan S.

    2005-01-01

    An approach based on the Constant Gain Extended Kalman Filter (CGEKF) technique is investigated for the in-flight estimation of non-measurable performance parameters of aircraft engines. Performance parameters, such as thrust and stall margins, provide crucial information for operating an aircraft engine in a safe and efficient manner, but they cannot be directly measured during flight. A technique to accurately estimate these parameters is, therefore, essential for further enhancement of engine operation. In this paper, a CGEKF is developed by combining an on-board engine model and a single Kalman gain matrix. In order to make the on-board engine model adaptive to the real engine's performance variations due to degradation or anomalies, the CGEKF is designed with the ability to adjust its performance through the adjustment of artificial parameters called tuning parameters. With this design approach, the CGEKF can maintain accurate estimation performance when it is applied to aircraft engines at off-nominal conditions. The performance of the CGEKF is evaluated in a simulation environment using numerous component degradation and fault scenarios at multiple operating conditions.
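
    The essence of a constant-gain filter is that the correction gain is computed once offline and then reused at every step, avoiding online covariance propagation. The sketch below applies a fixed gain to track a slowly drifting, unmeasured parameter from a noisy sensor residual; the scalar toy model and the gain value are illustrative assumptions, not the NASA engine model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system (illustrative, not an engine model): a slowly drifting health
# parameter x shifts a measured sensor output y = h(x) + noise.
def h(x):
    return 2.0 * x + 1.0

K = 0.15                       # constant Kalman gain, pre-computed offline
x_true, x_hat = 0.0, 0.0
history = []

for k in range(200):
    x_true += 0.005            # slow degradation of the true parameter
    y = h(x_true) + 0.05 * rng.standard_normal()

    # prediction: the on-board model assumes the parameter stays constant
    x_pred = x_hat
    # correction with the fixed gain (the "constant gain" in CGEKF)
    x_hat = x_pred + K * (y - h(x_pred))
    history.append((x_true, x_hat))

print("final true vs. estimated parameter:", history[-1])
```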

  18. DESIGN OF AQUIFER REMEDIATION SYSTEMS: (2) Estimating site-specific performance and benefits of partial source removal

    EPA Science Inventory

    A Lagrangian stochastic model is proposed as a tool that can be utilized in forecasting remedial performance and estimating the benefits (in terms of flux and mass reduction) derived from a source zone remedial effort. The stochastic functional relationships that describe the hyd...

  19. Preliminary Performance Data on Westinghouse Electronic Power Regulator Operating on J34-WE-32 Turbojet Engine in Altitude Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Ketchum, James R.; Blivas, Darnold; Pack, George J.

    1950-01-01

    The behavior of the Westinghouse electronic power regulator operating on a J34-WE-32 turbojet engine was investigated in the NACA Lewis altitude wind tunnel at the request of the Bureau of Aeronautics, Department of the Navy. The object of the program was to determine the steady-state stability and transient characteristics of the engine under control at various altitudes and ram pressure ratios, without afterburning. Recordings of the response of the following parameters to step changes in power lever position throughout the available operating range of the engine were obtained: ram pressure ratio, compressor-discharge pressure, exhaust-nozzle area, engine speed, turbine-outlet temperature, fuel-valve position, jet thrust, air flow, turbine-discharge pressure, fuel flow, throttle position, and boost-pump pressure. Representative preliminary data showing the actual time response of these variables are presented. These data are presented in the form of reproductions of oscillographic traces.

  20. Clarifying the Blurred Image: Estimating the Inter-Rater Reliability of Performance Assessments.

    ERIC Educational Resources Information Center

    Moore, Alan D.; Young, Suzanne

    As schools move toward performance assessment, there is increasing discussion of using these assessments for accountability purposes. When used for making decisions, performance assessments must meet high standards of validity and reliability. One major source of unreliability in performance assessments is interrater disagreement. In this paper,…

  1. Adaptive feedforward of estimated ripple improves the closed loop system performance significantly

    SciTech Connect

    Kwon, S.; Regan, A.; Wang, Y.M.; Rohlev, T.

    1998-12-31

    The Low Energy Demonstration Accelerator (LEDA) being constructed at Los Alamos National Laboratory will serve as the prototype for the low-energy section of the Accelerator Production of Tritium (APT) accelerator. This paper addresses the low-level RF (LLRF) control system for LEDA. The authors propose an estimator of the ripple and its time derivative, together with a control law based on PID control and adaptive feedforward of the estimated ripple. The control law reduces the effect of the deterministic cathode ripple caused by the high-voltage power supply and achieves tracking of the desired set points.
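
    The control idea can be illustrated with a very small simulation: a PID loop regulates a first-order plant while a feedforward term subtracts an estimated periodic ripple from the drive. The plant model, gains, ripple frequency, and the assumption of a perfectly estimated ripple are all illustrative, not the LEDA LLRF design.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, T = 1e-4, 0.05
t = np.arange(0.0, T, dt)
ripple_freq = 360.0                       # Hz, illustrative power-supply ripple
ripple = 0.2 * np.sin(2 * np.pi * ripple_freq * t)

# Very simple first-order plant: x' = -a*x + b*(u + ripple)
a, b = 50.0, 50.0
setpoint = 1.0
Kp, Ki = 2.0, 200.0

def run(use_feedforward):
    x, integ = 0.0, 0.0
    errs = []
    for k in range(len(t)):
        err = setpoint - x
        integ += err * dt
        u = Kp * err + Ki * integ
        if use_feedforward:
            # feedforward of the (here perfectly) estimated ripple
            u -= ripple[k]
        x += dt * (-a * x + b * (u + ripple[k]))
        errs.append(err)
    # steady-state RMS tracking error (second half of the run)
    return float(np.sqrt(np.mean(np.square(errs[len(errs) // 2:]))))

print("RMS error, PID only         :", run(False))
print("RMS error, PID + feedforward:", run(True))
```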

  2. Estimation of the probability of error without ground truth and known a priori probabilities. [remote sensor performance

    NASA Technical Reports Server (NTRS)

    Havens, K. A.; Minster, T. C.; Thadani, S. G.

    1976-01-01

    The probability of error or, alternatively, the probability of correct classification (PCC) is an important criterion in analyzing the performance of a classifier. Labeled samples (those with ground truth) are usually employed to evaluate the performance of a classifier. Occasionally, the numbers of labeled samples are inadequate, or no labeled samples are available to evaluate a classifier's performance; for example, when crop signatures from one area from which ground truth is available are used to classify another area from which no ground truth is available. This paper reports the results of an experiment to estimate the probability of error using unlabeled test samples (i.e., without the aid of ground truth).

  3. Performance assessment of different day-of-the-year-based models for estimating global solar radiation - Case study: Egypt

    NASA Astrophysics Data System (ADS)

    Hassan, Gasser E.; Youssef, M. Elsayed; Ali, Mohamed A.; Mohamed, Zahraa E.; Shehata, Ali I.

    2016-11-01

    Many models have been introduced to predict daily global solar radiation at different locations, but no model based simply on the day of the year has been proposed for many locations around the world. In this study, more than 20 years of measured data for daily global solar radiation on a horizontal surface are used to develop and validate seven day-of-the-year models for ten cities around Egypt as a case study. The generalization capability of the best models is then examined over the whole country. Regression analysis is employed to calculate the coefficients of the candidate models, and the statistical indicators RMSE, MABE, MAPE, r and R2 are calculated to evaluate their performance. Validation against the available data shows that the hybrid sine-and-cosine wave model and the 4th-order polynomial model perform best among the suggested models. Consequently, these two models, with suitable coefficients, can be used to estimate the daily global solar radiation on a horizontal surface for each city, and also for other locations across the studied region. The established models therefore provide a quick and reasonably accurate estimate of the average daily global solar radiation on a horizontal surface, and the generated values can be used in the design and performance estimation of various solar applications.
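
    The two best-performing functional forms named above can both be fitted by ordinary least squares, as sketched below. The synthetic data are a stand-in for the measured Egyptian records, and the exact model coefficients are of course site-specific.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for measured daily global solar radiation (MJ/m^2/day)
day = np.arange(1, 366)
H_obs = 22 + 8 * np.sin(2 * np.pi * (day - 80) / 365) + rng.standard_normal(365)

# Model 1: hybrid sine-and-cosine wave, H = a0 + a1*sin(2*pi*n/365) + a2*cos(2*pi*n/365)
A_trig = np.column_stack([np.ones_like(day, dtype=float),
                          np.sin(2 * np.pi * day / 365),
                          np.cos(2 * np.pi * day / 365)])
coef_trig, *_ = np.linalg.lstsq(A_trig, H_obs, rcond=None)

# Model 2: 4th-order polynomial in the (normalized) day of the year
d = day / 365.0                              # normalize to avoid ill-conditioning
coef_poly = np.polyfit(d, H_obs, deg=4)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - H_obs) ** 2)))

print("RMSE, sine-and-cosine model :", rmse(A_trig @ coef_trig))
print("RMSE, 4th-order polynomial  :", rmse(np.polyval(coef_poly, d)))
```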

  4. Temperature profiles from Salt Valley, Utah, thermal conductivity of 10 samples from drill hole DOE 3, and preliminary estimates of heat flow

    SciTech Connect

    Sass, J.H.; Lachenbruch, A.H.; Smith, E.P.

    1983-01-01

    As part of a thermal study of the Salt Valley anticline, Paradox Basin, Utah, temperature profiles were obtained in nine wells drilled by the Department of Energy. Thermal conductivities were also measured on ten samples judged to be representative of the rocks encountered in the deepest hole (DOE 3) (R. J. Hite, personal communication, November 21, 1980). In this interim report, the temperature profiles and thermal conductivities are presented, together with some preliminary interpretive remarks and some suggestions for additional work.

  5. Alcohol and Student Performance: Estimating the Effect of Legal Access. NBER Working Paper No. 17637

    ERIC Educational Resources Information Center

    Lindo, Jason M.; Swensen, Isaac D.; Waddell, Glen R.

    2011-01-01

    We consider the effect of legal access to alcohol on student achievement. We first estimate the effect using an RD design but argue that this approach is not well suited to the research question in our setting. Our preferred approach instead exploits the longitudinal nature of the data, identifying the effect by measuring the extent to which a…

  6. A review and preliminary evaluation of methodological factors in performance assessments of time-varying aircraft noise effects

    NASA Technical Reports Server (NTRS)

    Coates, G. D.; Alluisi, E. A.

    1975-01-01

    The effects of aircraft noise on human performance is considered. Progress is reported in the following areas: (1) review of the literature to identify the methodological and stimulus parameters involved in the study of noise effects on human performance; (2) development of a theoretical framework to provide working hypotheses as to the effects of noise on complex human performance; and (3) data collection on the first of several experimental investigations designed to provide tests of the hypotheses.

  7. Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model

    SciTech Connect

    Hansen, Clifford

    2015-10-01

    We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst(TM) version 6. Our method operates on current-voltage (I-V) measurements taken over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst(TM) version 6 model.

  8. Computer programs for estimating aircraft takeoff performance in three dimensional space

    NASA Technical Reports Server (NTRS)

    Bowles, J. V.

    1974-01-01

    A set of computer programs has been developed to estimate the takeoff and initial climb-out maneuver of a given aircraft in three-dimensional space. The program is applicable to conventional, vectored lift and power-lift concept aircraft. The aircraft is treated as a point mass flying over a flat earth with no side slip, and the rotational dynamics have been neglected. The required input is described and a sample case presented.

  9. Subjective Estimates of Job Performance after Job Preview: Determinants of Anticipated Learning Curves

    ERIC Educational Resources Information Center

    Ackerman, Phillip L.; Shapiro, Stacey; Beier, Margaret E.

    2011-01-01

    When people choose a particular occupation, they presumably make an implicit judgment that they will perform well on a job at some point in the future, typically after extensive education and/or on-the-job experience. Research on learning and skill acquisition has pointed to a power law of practice, where large gains in performance come early in…

  10. Performance of Chronic Kidney Disease Epidemiology Collaboration Creatinine-Cystatin C Equation for Estimating Kidney Function in Cirrhosis

    PubMed Central

    Mindikoglu, Ayse L.; Dowling, Thomas C.; Weir, Matthew R.; Seliger, Stephen L.; Christenson, Robert H.; Magder, Laurence S.

    2013-01-01

    Conventional creatinine-based glomerular filtration rate (GFR) equations are insufficiently accurate for estimating GFR in cirrhosis. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) recently proposed an equation to estimate GFR in subjects without cirrhosis using both serum creatinine and cystatin C levels. Performance of the new CKD-EPI creatinine-cystatin C equation (2012) was superior to previous creatinine- or cystatin C-based GFR equations. To evaluate the performance of the CKD-EPI creatinine-cystatin C equation in subjects with cirrhosis, we compared it to GFR measured by non-radiolabeled iothalamate plasma clearance (mGFR) in 72 subjects with cirrhosis. We compared the “bias”, “precision” and “accuracy” of the new CKD-EPI creatinine-cystatin C equation to that of 24-hour urinary creatinine clearance (CrCl), Cockcroft-Gault (CG) and previously reported creatinine- and/or cystatin C-based GFR-estimating equations. Accuracy of CKD-EPI creatinine-cystatin C equation as quantified by root mean squared error of difference scores [differences between mGFR and estimated GFR (eGFR) or between mGFR and CrCl, or between mGFR and CG equation for each subject] (RMSE=23.56) was significantly better than that of CrCl (37.69, P=0.001), CG (RMSE=36.12, P=0.002) and GFR-estimating equations based on cystatin C only. Its accuracy as quantified by percentage of eGFRs that differed by greater than 30% with respect to mGFR was significantly better compared to CrCl (P=0.024), CG (P=0.0001), 4-variable MDRD (P=0.027) and CKD-EPI creatinine 2009 (P=0.012) equations. However, for 23.61% of the subjects, GFR estimated by CKD-EPI creatinine-cystatin C equation differed from the mGFR by more than 30%. CONCLUSIONS The diagnostic performance of CKD-EPI creatinine-cystatin C equation (2012) in patients with cirrhosis was superior to conventional equations in clinical practice for estimating GFR. However, its diagnostic performance was substantially worse than

  11. The performance of different propensity score methods for estimating absolute effects of treatments on survival outcomes: A simulation study

    PubMed Central

    Schuster, Tibor

    2014-01-01

    Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. PMID:24463885

  12. A comparison of three adsorption equations and sensitivity study of parameter uncertainty effects on adsorption refrigeration thermal performance estimation

    NASA Astrophysics Data System (ADS)

    Zhao, Yongling; Hu, Eric; Blazewicz, Antoni

    2012-02-01

    This paper presents isosteric-based adsorption equilibrium tests of three activated carbon samples with methanol as the adsorbate. The experimental data were fitted to the Langmuir, Freundlich and Dubinin-Astakhov (D-A) equations, and the fitted equations were compared in terms of agreement with the experimental data. Moreover, the impact of the chosen equation form on the calculated coefficient of performance (COP) and refrigeration capacity of an adsorption refrigeration system was analyzed. In addition, the sensitivity of the estimated COP and refrigeration capacity to errors in each parameter of each adsorption equation was investigated. It was found that the D-A equation is the best form for representing the adsorptive properties of a carbon-methanol working pair. The D-A equation is recommended for estimating the thermal performance of an adsorption refrigeration system because simulation results obtained with it are less sensitive to errors in its experimentally determined parameters.
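
    For reference, commonly quoted forms of the three isotherms compared in this record are given below. Parameterizations vary between authors, so these are representative forms rather than the exact expressions used in the paper; here x is the adsorbed mass per unit adsorbent mass, P the pressure, and P_s the saturation pressure at temperature T.

```latex
% Langmuir isotherm (x_0, b fitted)
x = x_0 \,\frac{bP}{1 + bP}
% Freundlich isotherm (k, n fitted)
x = k \, P^{1/n}
% Dubinin-Astakhov (D-A) equation (x_0, D, n fitted)
x = x_0 \exp\!\left[-D\left(T \ln \frac{P_s}{P}\right)^{n}\right]
```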

  13. The performance of ML, DWLS, and ULS estimation with robust corrections in structural equation models with ordinal variables.

    PubMed

    Li, Cheng-Hsien

    2016-09-01

    Three estimation methods with robust corrections-maximum likelihood (ML) using the sample covariance matrix, unweighted least squares (ULS) using a polychoric correlation matrix, and diagonally weighted least squares (DWLS) using a polychoric correlation matrix-have been proposed in the literature, and are considered to be superior to normal theory-based maximum likelihood when observed variables in latent variable models are ordinal. A Monte Carlo simulation study was carried out to compare the performance of ML, DWLS, and ULS in estimating model parameters, and their robust corrections to standard errors, and chi-square statistics in a structural equation model with ordinal observed variables. Eighty-four conditions, characterized by different ordinal observed distribution shapes, numbers of response categories, and sample sizes were investigated. Results reveal that (a) DWLS and ULS yield more accurate factor loading estimates than ML across all conditions; (b) DWLS and ULS produce more accurate interfactor correlation estimates than ML in almost every condition; (c) structural coefficient estimates from DWLS and ULS outperform ML estimates in nearly all asymmetric data conditions; (d) robust standard errors of parameter estimates obtained with robust ML are more accurate than those produced by DWLS and ULS across most conditions; and (e) regarding robust chi-square statistics, robust ML is inferior to DWLS and ULS in controlling for Type I error in almost every condition, unless a large sample is used (N = 1,000). Finally, implications of the findings are discussed, as are the limitations of this study as well as potential directions for future research. (PsycINFO Database Record PMID:27571021

  14. Preliminary estimate of coal resources in the Gillette coalfield affected by the location of the Burlington Northern/Union Pacific joint mainline railroad

    USGS Publications Warehouse

    Rohrbacher, Timothy J.; Haacke, Jon E.; Scott, David C.; Osmonson, Lee M.; Luppens, James A.

    2006-01-01

    This publication, primarily in graphic form, presents a preliminary resource assessment related to a major, near-term restriction to mining in that portion of the Gillette coalfield, Wyoming, that is traversed by the Burlington Northern/Union Pacific joint mainline railroad. This assessment is part of a current Powder River Basin regional coal assessment, including both resources and reserves, being conducted by the U.S. Geological Survey. The slides were used to illustrate a presentation of study results at a meeting of the Bureau of Land Management's Regional Coal Team in Casper, Wyoming on April 19, 2006 by the senior author.

  15. An Improved Performance Frequency Estimation Algorithm for Passive Wireless SAW Resonant Sensors

    PubMed Central

    Liu, Boquan; Zhang, Chenrui; Ji, Xiaojun; Chen, Jing; Han, Tao

    2014-01-01

    Passive wireless surface acoustic wave (SAW) resonant sensors are suitable for applications in harsh environments. The traditional SAW resonant sensor system, however, relies on the Fourier transform (FT), whose limited resolution reduces the measurement accuracy. To improve the accuracy and resolution of the measurement, a singular value decomposition (SVD)-based frequency estimation algorithm is applied to the wireless SAW resonant sensor response, which is a combination of an undamped and a damped single-tone sinusoid at the same frequency. Compared with the FT algorithm, the accuracy and the resolution of the method used in the self-developed wireless SAW resonant sensor system are validated. PMID:25429410
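
    The abstract does not spell out the algorithm, so the sketch below uses a generic SVD-based subspace estimator (ESPRIT-style) applied to a synthetic response that, like the sensor return described above, mixes an undamped and a damped sinusoid at the same frequency. It is an illustration of the class of method, not the authors' algorithm, and the sample rate, frequency, and model order are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic sensor-like response: undamped + damped sinusoid, same frequency
fs, f0, tau = 10e6, 433.9e3, 2e-5          # sample rate, frequency, decay time
t = np.arange(1024) / fs
x = np.cos(2 * np.pi * f0 * t) + 0.8 * np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t + 0.3)
x += 0.01 * rng.standard_normal(x.size)

# SVD-based subspace estimate: model the signal as a sum of complex
# exponentials; four poles cover the two real sinusoidal components.
p, L = 4, 200                              # model order, Hankel window length
H = np.column_stack([x[i:i + L] for i in range(x.size - L)])
U, s, Vt = np.linalg.svd(H, full_matrices=False)
Us = U[:, :p]                              # signal subspace
Phi = np.linalg.pinv(Us[:-1, :]) @ Us[1:, :]
poles = np.linalg.eigvals(Phi)
freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)
print("subspace frequency estimates (Hz):", np.sort(freqs))

# FFT reference: resolution limited to roughly fs / N
spec = np.abs(np.fft.rfft(x))
print("FFT peak (Hz):", np.fft.rfftfreq(x.size, 1 / fs)[np.argmax(spec)])
```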

  16. Performance study of a new time-delay estimation algorithm in ultrasonic echo signals and ultrasound elastography.

    PubMed

    Shaswary, Elyas; Xu, Yuan; Tavakkoli, Jahan

    2016-07-01

    Time-delay estimation has countless applications in ultrasound medical imaging. Previously, we proposed a new time-delay estimation algorithm, which was based on the summation of the sign function to compute the time-delay estimate (Shaswary et al., 2015). We reported that the proposed algorithm performs similar to normalized cross-correlation (NCC) and sum squared differences (SSD) algorithms, even though it was significantly more computationally efficient. In this paper, we study the performance of the proposed algorithm using statistical analysis and image quality analysis in ultrasound elastography imaging. Field II simulation software was used for generation of ultrasound radio frequency (RF) echo signals for statistical analysis, and a clinical ultrasound scanner (Sonix® RP scanner, Ultrasonix Medical Corp., Richmond, BC, Canada) was used to scan a commercial ultrasound elastography tissue-mimicking phantom for image quality analysis. The statistical analysis results confirmed that, in overall, the proposed algorithm has similar performance compared to NCC and SSD algorithms. The image quality analysis results indicated that the proposed algorithm produces strain images with marginally higher signal-to-noise and contrast-to-noise ratios compared to NCC and SSD algorithms. PMID:27010697
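
    The proposed sign-summation estimator is not described in enough detail here to reproduce, so the sketch below instead shows the normalized cross-correlation (NCC) benchmark it is compared against, applied to a synthetic RF-like echo and a delayed copy. Signal shape and delay are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def ncc_delay(ref, shifted, max_lag):
    """Return the integer lag (in samples) that maximizes normalized cross-correlation."""
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        a = ref[max_lag:-max_lag]
        b = shifted[max_lag + lag: len(shifted) - max_lag + lag]
        score = np.dot(a - a.mean(), b - b.mean()) / (np.std(a) * np.std(b) * len(a))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic RF-like echo and a delayed copy of it
n = np.arange(2000)
rf = np.sin(2 * np.pi * 0.05 * n) * np.exp(-((n - 1000) / 300.0) ** 2)
rf += 0.02 * rng.standard_normal(n.size)
true_delay = 7
delayed = np.roll(rf, true_delay)

print("estimated delay:", ncc_delay(rf, delayed, max_lag=20), "samples (true:", true_delay, ")")
```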

  17. Addressing Cultural Context in the Development of Performance-based Assessments and Computer-adaptive Testing: Preliminary Validity Considerations.

    ERIC Educational Resources Information Center

    Boodoo, Gwyneth M.

    1998-01-01

    Discusses the research and steps needed to develop performance-based and computer-adaptive assessments that are culturally responsive. Supports the development of a new conceptual framework and more explicit guidelines for designing culturally responsive assessments. (SLD)

  18. Design of aquifer remediation systems: (2) estimating site-specific performance and benefits of partial source removal.

    PubMed

    Wood, A Lynn; Enfield, Carl G; Espinoza, Felipe P; Annable, Michael; Brooks, Michael C; Rao, P S C; Sabatini, David; Knox, Robert

    2005-12-01

    A Lagrangian stochastic model is proposed as a tool that can be utilized in forecasting remedial performance and estimating the benefits (in terms of flux and mass reduction) derived from a source zone remedial effort. The stochastic functional relationships that describe the hydraulic "structure" and non-aqueous phase liquid (NAPL) "architecture" have been described in a companion paper (Enfield, C.G., Wood, A.L., Espinoza, F.P., Brooks, M.C., Annable, M., Rao, P.S.C., this issue. Design of aquifer remediation systems: (1) describing hydraulic structure and NAPL architecture using tracers. J. Contam. Hydrol.). The previously defined functions were used along with the properties of the remedial fluids to describe remedial performance. There are two objectives for this paper. First, is to show that a simple analytic element model can be used to give a reasonable estimate of system performance. This is accomplished by comparing forecast performance to observed performance. The second objective is to display the model output in terms of change in mass flux and mass removal as a function of pore volumes of remedial fluid injected. The modelling results suggest that short term benefits are obtained and related to mass reduction at the sites where the model was tested.

  19. Preliminary evaluation of the Environmental Research Institute of Michigan crop calendar shift algorithm for estimation of spring wheat development stage. [North Dakota, South Dakota, Montana, and Minnesota

    NASA Technical Reports Server (NTRS)

    Phinney, D. E. (Principal Investigator)

    1980-01-01

    An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
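
    The core idea above, shifting a reference greenness profile across candidate lags and keeping the lag that maximizes the correlation with the observed multitemporal profile, can be sketched as follows. The profile shapes, dates, and shift are illustrative placeholders, not the AgRISTARS data.

```python
import numpy as np

# Illustrative reference greenness profile (Kauth-Thomas greenness vs. day of year)
days = np.arange(120, 241)                          # observation window
peak_ref = 190                                      # reference peak-greenness date
reference = np.exp(-((days - peak_ref) / 20.0) ** 2)

# "Observed" field profile: same shape, peak shifted by an unknown number of days
true_shift = 12
observed = np.exp(-((days - (peak_ref + true_shift)) / 20.0) ** 2)
observed += 0.05 * np.random.default_rng(8).standard_normal(days.size)

def best_shift(ref, obs, max_shift=30):
    """Lag (in days) maximizing the correlation between the shifted reference and the observation."""
    scores = []
    for s in range(-max_shift, max_shift + 1):
        shifted = np.interp(days, days + s, ref)    # reference profile moved by s days
        scores.append((np.corrcoef(shifted, obs)[0, 1], s))
    return max(scores)[1]

shift = best_shift(reference, observed)
print("estimated peak date:", peak_ref + shift, "(true:", peak_ref + true_shift, ")")
```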

  20. On improving low-cost IMU performance for online trajectory estimation

    NASA Astrophysics Data System (ADS)

    Yudanto, Risang; Ompusunggu, Agusmian P.; Bey-Temsamani, Abdellatif

    2015-05-01

    We have developed an automatic mitigation method for compensating drifts occurring in low-cost Inertial Measurement Units (IMU), using MEMS (Microelectromechanical systems) accelerometers and gyros, and applied the method for online trajectory estimation of a moving robot arm. The method is based on an automatic detection of system's states which triggers an online (i.e. automatic) recalibration of the sensors parameters. Stationary tests have proven an absolute reduction of drift, mainly due to random walk noise at ambient conditions, up to ~50% by using the recalibrated sensor parameters instead of using the nominal parameters obtained from sensor's datasheet. The proposed calibration methodology works online without needing manual interventions and adaptively compensates drifts under different working conditions. Notably, the proposed method requires neither any information from an aiding sensor nor a priori knowledge about system's model and/or constraints. It is experimentally shown in this paper that the method improves online trajectory estimations of the robot using a low-cost IMU consisting of MEMS-based accelerometer and gyroscope. Applications of the proposed method cover automotive, machinery and robotics industries.
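
    A minimal sketch of the general idea (re-estimate the gyro bias automatically whenever a stationary state is detected, then integrate with the updated bias) is given below. The stationarity thresholds, noise levels, and motion profile are illustrative assumptions, not the authors' detection logic.

```python
import numpy as np

rng = np.random.default_rng(9)

dt = 0.01
true_rate = np.concatenate([np.zeros(500),              # stationary
                            0.5 * np.ones(500),         # moving at constant rate
                            np.zeros(500)])             # stationary again
bias_true = 0.02                                        # rad/s, constant here for simplicity
gyro = true_rate + bias_true + 0.005 * rng.standard_normal(true_rate.size)

bias_hat = 0.0
window = []
angle_raw, angle_corr = 0.0, 0.0

for w in gyro:
    window.append(w)
    if len(window) > 100:
        window.pop(0)
    if len(window) == 100:
        arr = np.array(window)
        # crude stationarity detector: low variance and near-zero mean rate
        if arr.std() < 0.01 and abs(arr.mean()) < 0.05:
            bias_hat = float(arr.mean())                # online recalibration of the bias
    angle_raw  += w * dt
    angle_corr += (w - bias_hat) * dt

true_angle = 0.5 * 500 * dt
print("angle drift, raw vs. bias-corrected:",
      round(angle_raw - true_angle, 3), "vs.", round(angle_corr - true_angle, 3))
```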

  1. Estimation and optimization of thermal performance of evacuated tube solar collector system

    NASA Astrophysics Data System (ADS)

    Dikmen, Erkan; Ayaz, Mahir; Ezen, H. Hüseyin; Küçüksille, Ecir U.; Şahin, Arzu Şencan

    2014-05-01

    In this study, artificial neural networks (ANNs) and an adaptive neuro-fuzzy inference system (ANFIS) have been used to predict the thermal performance of an evacuated tube solar collector system. Experimental data were used for training and testing of the networks, and the ANN results are compared with those of ANFIS on the same data sets. The R2-value for the predicted thermal performance of the collector is 0.811914, which can be considered satisfactory. The results obtained when unknown data were presented to the networks indicate that the proposed method can successfully be used to predict the thermal performance of evacuated tube solar collectors. In addition, new formulations derived from the ANN are presented for calculating the thermal performance. The advantages of these approaches over conventional methods are speed, simplicity, and the capacity of the network to learn from examples. Finally, a genetic algorithm (GA) was used to maximize the thermal performance of the system and to determine its optimum working conditions.

  2. Measurement of driver calibration and the impact of feedback on drivers' estimates of performance.

    PubMed

    Roberts, Shannon C; Horrey, William J; Liang, Yulan

    2016-03-01

    Recent studies focused on driver calibration show that drivers are often miscalibrated, either over confident or under confident, and the magnitude of this miscalibration changes under different conditions. Previous work has demonstrated behavioral and performance benefits of feedback, yet these studies have not explicitly examined the issue of calibration. The objective of this study was to examine driver calibration, i.e., the degree to which drivers are accurately aware of their performance, and determine whether feedback alters driver calibration. Twenty-four drivers completed a series of driving tasks (pace clocks, traffic light, speed maintenance, and traffic cones) on a test track. Drivers drove three different blocks around the test track: (1) baseline block, where no participants received feedback; (2) feedback block, where half of the participants received performance feedback while the other half received no feedback; (3) a no feedback block, where no participants received feedback. Results indicated that across two different calibration measures, drivers were sufficiently calibrated to the pace clocks, traffic light, and traffic cone tasks. Drivers were not accurately aware of their performance regarding speed maintenance, though receiving feedback on this task improved calibration. Proper and accurate measurements of driver calibration are needed before designing performance feedback to improve calibration as these feedback systems may not always yield the intended results.

  3. The estimation and use of predictions for the assessment of model performance using large samples with multiply imputed data

    PubMed Central

    Wood, Angela M; Royston, Patrick; White, Ian R

    2015-01-01

    Multiple imputation can be used as a tool in the process of constructing prediction models in medical and epidemiological studies with missing covariate values. Such models can be used to make predictions for model performance assessment, but the task is made more complicated by the multiple imputation structure. We summarize various predictions constructed from covariates, including multiply imputed covariates, and either the set of imputation-specific prediction model coefficients or the pooled prediction model coefficients. We further describe approaches for using the predictions to assess model performance. We distinguish between ideal model performance and pragmatic model performance, where the former refers to the model's performance in an ideal clinical setting where all individuals have fully observed predictors and the latter refers to the model's performance in a real-world clinical setting where some individuals have missing predictors. The approaches are compared through an extensive simulation study based on the UK700 trial. We determine that measures of ideal model performance can be estimated within imputed datasets and subsequently pooled to give an overall measure of model performance. Alternative methods to evaluate pragmatic model performance are required and we propose constructing predictions either from a second set of covariate imputations which make no use of observed outcomes, or from a set of partial prediction models constructed for each potential observed pattern of covariate. Pragmatic model performance is generally lower than ideal model performance. We focus on model performance within the derivation data, but describe how to extend all the methods to a validation dataset. PMID:25630926

  4. Estimation of the relative severity of floods in small ungauged catchments for preliminary observations on flash flood preparedness: a case study in Korea.

    PubMed

    Kim, Eung Seok; Choi, Hyun Il

    2012-04-01

    An increase in the occurrence of sudden local flooding of great volume and short duration has caused significant danger and loss of life and property in Korea as well as in many other parts of the world. Because such floods, usually accompanied by rapid runoff and debris flow, rise quite quickly and leave little or no time for warning, this study presents a new flash flood indexing methodology to promptly provide preliminary observations for emergency preparedness and response to flash flood disasters in small ungauged catchments. Flood runoff hydrographs are generated from a rainfall-runoff model for the annual maximum rainfall series of long-term observed data in two selected small ungauged catchments. The relative flood severity factors quantifying characteristics of the flood runoff hydrographs are standardized by the highest recorded maximum value and then averaged to obtain the flash flood index for flash flood events in each study catchment. The regression equations relating the proposed flash flood index to rainfall characteristics are expected to provide a basis of preliminary information for forecasting local flood severity and thereby facilitate flash flood preparedness in small ungauged catchments.

  5. A preliminary investigation into the relationship between functional movement screen scores and athletic physical performance in female team sport athletes

    PubMed Central

    Schultz, AB; Callaghan, SJ; Jordan, CA; Luczo, TM; Jeffriess, MD

    2014-01-01

    There is little research investigating relationships between the Functional Movement Screen (FMS) and athletic performance in female athletes. This study analyzed the relationships between FMS (deep squat; hurdle step [HS]; in-line lunge [ILL]; shoulder mobility; active straight-leg raise [ASLR]; trunk stability push-up; rotary stability) scores, and performance tests (bilateral and unilateral sit-and-reach [flexibility]; 20-m sprint [linear speed]; 505 with turns from each leg; modified T-test with movement to left and right [change-of-direction speed]; bilateral and unilateral vertical and standing broad jumps; lateral jumps [leg power]). Nine healthy female recreational team sport athletes (age = 22.67 ± 5.12 years; height = 1.66 ± 0.05 m; body mass = 64.22 ± 4.44 kilograms) were screened in the FMS and completed the aforementioned tests. Percentage between-leg differences in unilateral sit-and-reach, 505 turns and the jumps, and difference between the T-test conditions, were also calculated. Spearman's correlations (p ≤ 0.05) examined relationships between the FMS and performance tests. Stepwise multiple regressions (p ≤ 0.05) were conducted for the performance tests to determine FMS predictors. Unilateral sit-and-reach positively correlated with the left-leg ASLR (r = 0.704-0.725). However, higher-scoring HS, ILL, and ASLR related to poorer 505 and T-test performance (r = 0.722-0.829). A higher-scored left-leg ASLR related to a poorer unilateral vertical and standing broad jump, which were the only significant relationships for jump performance. Predictive data tended to confirm the correlations. The results suggest limitations in using the FMS to identify movement deficiencies that could negatively impact athletic performance in female team sport athletes. PMID:25729149

  6. A preliminary investigation into the relationship between functional movement screen scores and athletic physical performance in female team sport athletes.

    PubMed

    Lockie, RG; Schultz, AB; Callaghan, SJ; Jordan, CA; Luczo, TM; Jeffriess, MD

    2015-03-01

    There is little research investigating relationships between the Functional Movement Screen (FMS) and athletic performance in female athletes. This study analyzed the relationships between FMS (deep squat; hurdle step [HS]; in-line lunge [ILL]; shoulder mobility; active straight-leg raise [ASLR]; trunk stability push-up; rotary stability) scores, and performance tests (bilateral and unilateral sit-and-reach [flexibility]; 20-m sprint [linear speed]; 505 with turns from each leg; modified T-test with movement to left and right [change-of-direction speed]; bilateral and unilateral vertical and standing broad jumps; lateral jumps [leg power]). Nine healthy female recreational team sport athletes (age = 22.67 ± 5.12 years; height = 1.66 ± 0.05 m; body mass = 64.22 ± 4.44 kilograms) were screened in the FMS and completed the aforementioned tests. Percentage between-leg differences in unilateral sit-and-reach, 505 turns and the jumps, and difference between the T-test conditions, were also calculated. Spearman's correlations (p ≤ 0.05) examined relationships between the FMS and performance tests. Stepwise multiple regressions (p ≤ 0.05) were conducted for the performance tests to determine FMS predictors. Unilateral sit-and-reach positively correlated with the left-leg ASLR (r = 0.704-0.725). However, higher-scoring HS, ILL, and ASLR related to poorer 505 and T-test performance (r = 0.722-0.829). A higher-scored left-leg ASLR related to a poorer unilateral vertical and standing broad jump, which were the only significant relationships for jump performance. Predictive data tended to confirm the correlations. The results suggest limitations in using the FMS to identify movement deficiencies that could negatively impact athletic performance in female team sport athletes.

  7. The estimation of airplane performance from wind tunnel tests on conventional airplane models

    NASA Technical Reports Server (NTRS)

    Warner, Edward P; Ober, Shatswell

    1925-01-01

    Calculations of the magnitude of the correction factors and the range of their variations for wind tunnel models used in making aircraft performance predictions were made for 23 wind tunnel models. Calculated performances were compared with those actually determined for such airplanes as have been built and put through flight test. Except as otherwise noted, all the models have interplane struts and diagonal struts formed to streamwise shape. Wires were omitted in all cases. All the models were about 18 inches in span and were tested in a 4-foot wind tunnel. Results are given in tabular form.

  8. A simple method to retrospectively estimate patient dose-area product for chest tomosynthesis examinations performed using VolumeRAD

    SciTech Connect

    Båth, Magnus; Svalkvist, Angelica; Söderman, Christina

    2014-10-15

    Purpose: The purpose of the present work was to develop and validate a method of retrospectively estimating the dose-area product (DAP) of a chest tomosynthesis examination performed using the VolumeRAD system (GE Healthcare, Chalfont St. Giles, UK) from digital imaging and communications in medicine (DICOM) data available in the scout image. Methods: DICOM data were retrieved for 20 patients undergoing chest tomosynthesis using VolumeRAD. Using information about how the exposure parameters for the tomosynthesis examination are determined by the scout image, a correction factor for the adjustment in field size with projection angle was determined. The correction factor was used to estimate the DAP for 20 additional chest tomosynthesis examinations from DICOM data available in the scout images, which was compared with the actual DAP registered for the projection radiographs acquired during the tomosynthesis examination. Results: A field size correction factor of 0.935 was determined. Applying the developed method using this factor, the average difference between the estimated DAP and the actual DAP was 0.2%, with a standard deviation of 0.8%. However, the difference was not normally distributed and the maximum error was only 1.0%. The validity and reliability of the presented method were thus very high. Conclusions: A method to estimate the DAP of a chest tomosynthesis examination performed using the VolumeRAD system from DICOM data in the scout image was developed and validated. As the scout image normally is the only image connected to the tomosynthesis examination stored in the picture archiving and communication system (PACS) containing dose data, the method may be of value for retrospectively estimating patient dose in clinical use of chest tomosynthesis.

  9. Hypersonic research engine project. Phase 2: Preliminary report on the performance of the HRE/AIM at Mach 6

    NASA Technical Reports Server (NTRS)

    Sun, Y. H.; Sainio, W. C.

    1975-01-01

    Test results of the Aerothermodynamic Integration Model are presented. A program was initiated to develop a hydrogen-fueled research-oriented scramjet for operation between Mach 3 and 8. The primary objectives were to investigate the internal aerothermodynamic characteristics of the engine, to provide realistic design parameters for future hypersonic engine development as well as to evaluate the ground test facility and testing techniques. The engine was tested at the NASA hypersonic tunnel facility with synthetic air at Mach 5, 6, and 7. The hydrogen fuel was heated up to 1500 R prior to injection to simulate a regeneratively cooled system. The engine and component performance at Mach 6 is reported. Inlet performance compared very well both with theory and with subscale model tests. Combustor efficiencies up to 95 percent were attained at an equivalence ratio of unity. Nozzle performance was lower than expected. The overall engine performance was computed using two different methods. The performance was also compared with test data from other sources.

  10. Correlation Between Geometric Similarity of Ice Shapes and the Resulting Aerodynamic Performance Degradation: A Preliminary Investigation Using WIND

    NASA Technical Reports Server (NTRS)

    Wright, William B.; Chung, James

    1999-01-01

    Aerodynamic performance calculations were performed using WIND on ten experimental ice shapes and the corresponding ten ice shapes predicted by LEWICE 2.0. The resulting data for lift coefficient and drag coefficient are presented. The difference in aerodynamic results between the experimental ice shapes and the LEWICE ice shapes were compared to the quantitative difference in ice shape geometry presented in an earlier report. Correlations were generated to determine the geometric features which have the most effect on performance degradation. Results show that maximum lift and stall angle can be correlated to the upper horn angle and the leading edge minimum thickness. Drag coefficient can be correlated to the upper horn angle and the frequency-weighted average of the Fourier coefficients. Pitching moment correlated with the upper horn angle and to a much lesser extent to the upper and lower horn thicknesses.

  11. A preliminary examination of neurocognitive performance and symptoms following a bout of soccer heading in athletes wearing protective soccer headbands.

    PubMed

    Elbin, R J; Beatty, Amanda; Covassin, Tracey; Schatz, Philip; Hydeman, Ana; Kontos, Anthony P

    2015-01-01

    This study compared changes in neurocognitive performance and symptom reports following an acute bout of soccer heading among athletes with and without protective soccer headgear. A total of 25 participants headed a soccer ball 15 times over a 15-minute period, using a proper linear heading technique. Participants in the experimental group completed the heading exercise while wearing a protective soccer headband and controls performed the heading exercise without wearing the soccer headband. Neurocognitive performance and symptom reports were assessed before and after the acute bout of heading. Participants wearing the headband showed significant decreases on verbal memory (p = 0.02) compared with the no headband group, while the no headband group demonstrated significantly faster reaction time (p = 0.03) than the headband group following the heading exercise. These findings suggest that protective soccer headgear likely does not mitigate the subtle neurocognitive effects of acute soccer heading.

  12. Improving runoff estimates from regional climate models: a performance analysis in Spain

    NASA Astrophysics Data System (ADS)

    González-Zeas, D.; Garrote, L.; Iglesias, A.; Sordo-Ward, A.

    2012-06-01

    An important step to assess water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961-1990). The methodology compares different interpolation methods and alternatives to generate annual times series that minimise the bias with respect to observed values. The objective is to identify the best alternative to obtain bias-corrected, monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of natural runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared with emphasis on the ability of each method to reproduce the observed behaviour of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evapotranspiration and precipitation. In addition, the comparison with respect to the global runoff reference of the UNH/GRDC dataset is evaluated, as a contrast of the "best estimator" of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method and the best alternative for bias correction of the monthly direct runoff time series of RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber (1904) also gives good results.
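
    The Schreiber (1904) relation referred to above is commonly written as follows (a representative form; notation varies between sources), with P the mean annual precipitation, PET the potential evapotranspiration, E the actual evapotranspiration and R the resulting mean annual runoff:

```latex
\frac{E}{P} = 1 - e^{-PET/P}
\qquad\Longrightarrow\qquad
R = P - E = P\, e^{-PET/P}
```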

  13. Improving runoff estimates from regional climate models: a performance analysis in Spain

    NASA Astrophysics Data System (ADS)

    González-Zeas, D.; Garrote, L.; Iglesias, A.; Sordo-Ward, A.

    2012-01-01

    An important aspect to assess the impact of climate change on water availability is to have monthly time series representative of the current situation. In this context, a simple methodology is presented for application in large-scale studies in regions where a properly calibrated hydrologic model is not available, using the output variables simulated by regional climate models (RCMs) of the European project PRUDENCE under current climate conditions (period 1961-1990). The methodology compares different interpolation methods and alternatives to generate annual times series that minimize the bias with respect to observed values. The objective is to identify the best alternative to obtain bias-corrected, monthly runoff time series from the output of RCM simulations. This study uses information from 338 basins in Spain that cover the entire mainland territory and whose observed values of naturalised runoff have been estimated by the distributed hydrological model SIMPA. Four interpolation methods for downscaling runoff to the basin scale from 10 RCMs are compared with emphasis on the ability of each method to reproduce the observed behavior of this variable. The alternatives consider the use of the direct runoff of the RCMs and the mean annual runoff calculated using five functional forms of the aridity index, defined as the ratio between potential evaporation and precipitation. In addition, the comparison with respect to the global runoff reference of the UNH/GRDC dataset is evaluated, as a contrast of the "best estimator" of current runoff on a large scale. Results show that the bias is minimised using the direct original interpolation method and the best alternative for bias correction of the monthly direct runoff time series of RCMs is the UNH/GRDC dataset, although the formula proposed by Schreiber also gives good results.

  14. Performance/price estimates for cortex-scale hardware: a design space exploration.

    PubMed

    Zaveri, Mazad S; Hammerstrom, Dan

    2011-04-01

    In this paper, we revisit the concept of virtualization. Virtualization is useful for understanding and investigating the performance/price and other trade-offs related to the hardware design space. Moreover, it is perhaps the most important aspect of a hardware design space exploration. Such a design space exploration is a necessary part of the study of hardware architectures for large-scale computational models for intelligent computing, including AI, Bayesian, bio-inspired and neural models. A methodical exploration is needed to identify potentially interesting regions in the design space, and to assess the relative performance/price points of these implementations. As an example, in this paper we investigate the performance/price of (digital and mixed-signal) CMOS and hypothetical CMOL (nanogrid) technology based hardware implementations of human cortex-scale spiking neural systems. Through this analysis, and the resulting performance/price points, we demonstrate, in general, the importance of virtualization, and of doing these kinds of design space explorations. The specific results suggest that hybrid nanotechnology such as CMOL is a promising candidate to implement very large-scale spiking neural systems, providing a more efficient utilization of the density and storage benefits of emerging nano-scale technologies. In general, we believe that the study of such hypothetical designs/architectures will guide the neuromorphic hardware community towards building large-scale systems, and help guide research trends in intelligent computing, and computer engineering.

  16. Success Estimations and Performance in Children as Influenced by Age, Gender, and Task.

    ERIC Educational Resources Information Center

    Lee, Amelia M.; And Others

    1988-01-01

    Randomly assigned 80 boys and girls in grades 3 and 7 to either a football-related or a dance-related group. Performance expectancies obtained prior to engaging in a novel motor task can be affected by the way a task is presented. Boys were more affected than girls by labels of sex appropriateness. (Author/BJV)

  17. College Student Performance, Satisfaction and Retention: Specification and Estimation of a Structural Model.

    ERIC Educational Resources Information Center

    Aitken, Norman D.

    1982-01-01

    A comprehensive theoretical model designed to explain the academic satisfaction, residential living satisfaction, academic performance, and retention of college students is presented. The model is tested against data obtained from a state university. Use of the model to test the effect of institutional policy measures on retention is described.…

  18. Comparison of force, power, and striking efficiency for a Kung Fu strike performed by novice and experienced practitioners: preliminary analysis.

    PubMed

    Neto, Osmar Pinto; Magini, Marcio; Saba, Marcelo M F; Pacheco, Marcos Tadeu Tavares

    2008-02-01

    This paper presents a comparison of force, power, and efficiency values calculated from Kung Fu Yau-Man palm strikes, when performed by 7 experienced and 6 novice men. They performed 5 palm strikes to a freestanding basketball, recorded by high-speed camera at 1000 Hz. Nonparametric comparisons and correlations showed experienced practitioners presented larger values of mean muscle force, mean impact force, mean muscle power, mean impact power, and mean striking efficiency, as is noted in evidence obtained for other martial arts. Also, an interesting result was that for experienced Kung Fu practitioners, muscle power was linearly correlated with impact power (p = .98) but not for the novice practitioners (p = .46).

  19. Preliminary assessment of accident-tolerant fuels on LWR performance during normal operation and under DB and BDB accident conditions

    NASA Astrophysics Data System (ADS)

    Ott, L. J.; Robb, K. R.; Wang, D.

    2014-05-01

    Following the severe accidents at the Japanese Fukushima Daiichi Nuclear Power Station in 2011, the US Department of Energy initiated research and development on the enhancement of the accident tolerance of light water reactors by the development of fuels/cladding that, in comparison with the standard UO2/Zircaloy (Zr) system, can tolerate loss of active cooling in the core for a considerably longer time period while maintaining or improving the fuel performance during normal operations. Analyses are presented that illustrate the impact of these new candidate fuel/cladding materials on the fuel performance at normal operating conditions and on the reactor system under design-basis (DB) and beyond-design-basis (BDB) accident conditions.

  20. Water budgets and groundwater volumes for abandoned underground mines in the Western Middle Anthracite Coalfield, Schuylkill, Columbia, and Northumberland Counties, Pennsylvania-Preliminary estimates with identification of data needs

    USGS Publications Warehouse

    Goode, Daniel J.; Cravotta, Charles A.; Hornberger, Roger J.; Hewitt, Michael A.; Hughes, Robert E.; Koury, Daniel J.; Eicholtz, Lee W.

    2011-01-01

    This report, prepared in cooperation with the Pennsylvania Department of Environmental Protection (PaDEP), the Eastern Pennsylvania Coalition for Abandoned Mine Reclamation, and the Dauphin County Conservation District, provides estimates of water budgets and groundwater volumes stored in abandoned underground mines in the Western Middle Anthracite Coalfield, which encompasses an area of 120 square miles in eastern Pennsylvania. The estimates are based on preliminary simulations using a groundwater-flow model and an associated geographic information system that integrates data on the mining features, hydrogeology, and streamflow in the study area. The Mahanoy and Shamokin Creek Basins were the focus of the study because these basins exhibit extensive hydrologic effects and water-quality degradation from the abandoned mines in their headwaters in the Western Middle Anthracite Coalfield. Proposed groundwater withdrawals from the flooded parts of the mines and stream-channel modifications in selected areas have the potential for altering the distribution of groundwater and the interaction between the groundwater and streams in the area. Preliminary three-dimensional, steady-state simulations of groundwater flow by the use of MODFLOW are presented to summarize information on the exchange of groundwater among adjacent mines and to help guide the management of ongoing data collection, reclamation activities, and water-use planning. The conceptual model includes high-permeability mine voids that are connected vertically and horizontally within multicolliery units (MCUs). MCUs were identified on the basis of mine maps, locations of mine discharges, and groundwater levels in the mines measured by PaDEP. The locations and integrity of mine barriers were determined from mine maps and groundwater levels. The permeability of intact barriers is low, reflecting the hydraulic characteristics of unmined host rock and coal. A steady-state model was calibrated to measured groundwater

  1. Preliminary Results of Altitude-Wind-Tunnel Investigation of X24C-4B Turbojet Engine. IV - Performance of Modified Compressor. Part 4; Performance of Modified Compressor

    NASA Technical Reports Server (NTRS)

    Thorman, H. Carl; Dupree, David T.

    1947-01-01

    The performance of the 11-stage axial-flow compressor, modified to improve the compressor-outlet velocity, in a revised X24C-4B turbojet engine is presented and compared with the performance of the compressor in the original engine. Performance data were obtained from an investigation of the revised engine in the NACA Cleveland altitude wind tunnel. Compressor performance data were obtained for engine operation with four exhaust nozzles of different outlet area at simulated altitudes from 15,000 to 45,000 feet, simulated flight Mach numbers from 0.24 to 1.07, and engine speeds from 4000 to 12,500 rpm. The data cover a range of corrected engine speeds from 4100 to 13,500 rpm, which correspond to compressor Mach numbers from 0.30 to 1.00.
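
    For readers unfamiliar with the corrected-speed convention used above, the short sketch below (Python; the flight-condition values are hypothetical) converts a mechanical engine speed to corrected speed by referring the inlet total temperature to sea-level standard conditions, N_c = N / sqrt(T_t / 288.15 K). This is the standard definition, not a procedure specific to this report.

    ```python
    import math

    T_STD = 288.15  # sea-level standard temperature, K

    def corrected_speed(rpm, inlet_total_temp_k):
        """Corrected engine speed N/sqrt(theta), with theta = T_t / 288.15 K."""
        theta = inlet_total_temp_k / T_STD
        return rpm / math.sqrt(theta)

    # Hypothetical point: 12,500 rpm mechanical at a cold, high-altitude inlet total temperature of 240 K
    print(f"{corrected_speed(12500, 240.0):.0f} rpm corrected")  # roughly 13,700 rpm
    ```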

  2. QbD-Driven Development and Validation of a HPLC Method for Estimation of Tamoxifen Citrate with Improved Performance.

    PubMed

    Sandhu, Premjeet Singh; Beg, Sarwar; Katare, O P; Singh, Bhupinder

    2016-09-01

    The current studies entail Quality by Design (QbD)-enabled development of a simple, rapid, sensitive and cost-effective high-performance liquid chromatographic method for estimation of tamoxifen citrate (TMx). The factor screening studies were performed using a 7-factor 8-run Taguchi design. Systematic optimization was performed employing a Box-Behnken design by selecting the mobile phase ratio, buffer pH and oven temperature as the critical method parameters (CMPs) identified from screening studies, thus evaluating the critical analytical attributes (CAAs), namely, peak area, retention time, theoretical plates and peak tailing as the parameters of method robustness. The optimal chromatographic separation was achieved using acetonitrile and phosphate buffer (pH 3.5) 52:48 v/v as the mobile phase with a flow rate of 0.7 mL/min, an oven temperature of 40°C and UV detection at 256 nm. The method was validated as per the ICH recommended conditions, which revealed a high degree of linearity, accuracy, precision, sensitivity and robustness over the existing liquid chromatographic methods of the drug. The method was also applied for the estimation of TMx in nanostructured formulations, which indicated no significant change in the retention time. In a nutshell, the studies demonstrated successful development of the HPLC method of TMx with improved understanding of the relationship among the influential variables for enhancing the method performance. PMID:27226463
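
    As context for the Box-Behnken optimization step described above, the sketch below builds the standard three-factor Box-Behnken run matrix (12 edge midpoints plus center points) in coded units and maps it onto the three CMPs. The factor ranges shown are hypothetical placeholders, not the levels used in the study.

    ```python
    from itertools import combinations, product

    def box_behnken_3(center_points=3):
        """Three-factor Box-Behnken design in coded units (-1, 0, +1)."""
        runs = []
        for i, j in combinations(range(3), 2):       # vary two factors, hold the third at 0
            for a, b in product((-1, 1), repeat=2):
                run = [0, 0, 0]
                run[i], run[j] = a, b
                runs.append(run)
        runs.extend([[0, 0, 0]] * center_points)     # replicated center runs
        return runs

    # Hypothetical factor ranges: mobile phase ratio (% organic), buffer pH, oven temperature (deg C)
    lows, highs = (40, 3.0, 30), (60, 4.0, 50)
    for coded in box_behnken_3():
        actual = [lo + (c + 1) / 2 * (hi - lo) for c, lo, hi in zip(coded, lows, highs)]
        print(coded, "->", [round(x, 2) for x in actual])
    ```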

  3. A preliminary study on the effect of methylphenidate on motor performance in children with comorbid DCD and ADHD.

    PubMed

    Bart, Orit; Podoly, Tamar; Bar-Haim, Yair

    2010-01-01

    Attention Deficit Hyperactivity Disorder (ADHD) and Developmental Coordination Disorder (DCD) are two developmental disorders with considerable comorbidity. The impact of Methylphenidate (MPH) on ADHD symptoms is well documented. However, the effects of MPH on motor coordination are less studied. We assessed the influence of MPH on motor performance of children with comorbid DCD and ADHD. Participants were 18 children (13 boys, mean age 8.3 years) diagnosed with comorbid DCD and ADHD. A structured clinical interview (K-SADS-PL) was used to determine psychopathology, and the Movement Assessment Battery for Children-Checklist was used to determine the criterion for motor deficits. The Movement Assessment Battery for Children (M-ABC) was administered to all participants once under the influence of MPH and once under a placebo pill condition. The motor tests were administered on two separate days in a double-blinded design. Participants' motor performance with MPH was significantly superior to their performance in the placebo condition. Significant improvement was observed in all the M-ABC sub-tasks except for static balance performance. The findings suggest that MPH improves motor coordination in children with comorbid DCD and ADHD, but clinically significant improvement was found in only 33% of the children.

  4. The Analysis of the Relation between Eight-Grade Students' Estimation Performance in Triangles with Their Teaching Style Expectations and Sources of Motivation

    ERIC Educational Resources Information Center

    Altunkaya, Bülent; Aytekin, Cahit; Doruk, Bekir Kürsat; Özçakir, Bilal

    2014-01-01

    In this study, eight-grade students' estimation achievements in triangles were analysed according to motivation types and knowledge type expectations. Three hundred and thirty-seven students from three different elementary schools attended in this study. In order to determine the students' estimation performances, an estimation test in…

  5. Performance estimation of dual-comb spectroscopy in different frequency-control schemes.

    PubMed

    Yang, Honglei; Wei, Haoyun; Zhang, Hongyuan; Chen, Kun; Li, Yan; Smolski, Viktor O; Vodopyanov, Konstantin L

    2016-08-10

    Dual-comb spectroscopy (DCS) has shown unparalleled advantages, but at the cost of requiring high mutual coherence between the comb lasers. Here, we investigate the spectral degradation induced by laser frequency instabilities and the improvement afforded by active laser stabilization. Mathematical models of DCS for direct radio-frequency (RF) locking and for optical phase stabilization were first established separately. Numerical simulations were then used to study the impact of laser intrinsic stability and of the different locking strategies on spectral performance. Finally, both simulations were confirmed by corresponding experiments. The results show that an optically phase-stabilized system has better immunity to laser frequency fluctuations than a directly RF-stabilized one. Furthermore, the performance improvement provided by the feedback servos is also more effective in the optically phase-stabilized system. In addition, the simulations can guide optimal design and system improvement. PMID:27534474

  6. Males Under-Estimate Academic Performance of Their Female Peers in Undergraduate Biology Classrooms.

    PubMed

    Grunspan, Daniel Z; Eddy, Sarah L; Brownell, Sara E; Wiggins, Benjamin L; Crowe, Alison J; Goodreau, Steven M

    2016-01-01

    Women who start college in one of the natural or physical sciences leave in greater proportions than their male peers. The reasons for this difference are complex, and one possible contributing factor is the social environment women experience in the classroom. Using social network analysis, we explore how gender influences the confidence that college-level biology students have in each other's mastery of biology. Results reveal that males are more likely than females to be named by peers as being knowledgeable about the course content. This effect increases as the term progresses, and persists even after controlling for class performance and outspokenness. The bias in nominations is specifically due to males over-nominating their male peers relative to their performance. The over-nomination of male peers is commensurate with an overestimation of male grades by 0.57 points on a 4 point grade scale, indicating a strong male bias among males when assessing their classmates. Females, in contrast, nominated equitably based on student performance rather than gender, suggesting they lacked gender biases in filling out these surveys. These trends persist across eleven surveys taken in three different iterations of the same Biology course. In every class, the most renowned students are always male. This favoring of males by peers could influence student self-confidence, and thus persistence in this STEM discipline.

  7. Manual performance deterioration in the cold estimated using the wind chill equivalent temperature.

    PubMed

    Daanen, Hein A M

    2009-07-01

    Manual performance during work in cold and windy climates is severely hampered by decreased dexterity, but valid dexterity decrease predictors based on climatic factors are scarce. Therefore, this study investigated the decrease in finger- and hand dexterity and grip force for nine combinations of ambient temperature (-20, -10 and 0 degrees C) and wind speed (0.2, 4 and 8 m/s), controlled in a climatic chamber. Finger dexterity was determined by the Purdue pegboard test, hand dexterity by the Minnesota manual dexterity test and grip force by a hand dynamometer. Twelve subjects with average to low fat percentage were exposed to cold air for one hour with and without extra insulation by a parka. The subjects were clothed in standard work clothing of the Royal Netherlands Air Force for cold conditions. Extra insulation did affect cold sensation but not manual performance. The deterioration in manual performance appeared to be strongly dependent upon Wind Chill Equivalent Temperature (WCET) and the square root of exposure time (r=0.93 for group average). These simple models may be valuable to assess problems with work in the cold, but more work should be done to determine critical values in dexterity for a wide variety of operational tasks. PMID:19531912
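
    For reference, WCET in studies like this is normally computed from the JAG/TI wind chill relation adopted by the North American weather services; the sketch below (Python) evaluates it for the nine chamber conditions, converting the reported wind speeds from m/s to the km/h the formula expects. The regression against the square root of exposure time reported above is not reproduced here.

    ```python
    def wind_chill_c(temp_c, wind_ms):
        """JAG/TI wind chill equivalent temperature (degrees C).

        The relation is intended for air temperature <= 10 C and wind speed above
        roughly 1.34 m/s (4.8 km/h); below that threshold the air temperature itself
        is usually reported instead.
        """
        v_kmh = wind_ms * 3.6
        if v_kmh < 4.8:
            return temp_c
        v16 = v_kmh ** 0.16
        return 13.12 + 0.6215 * temp_c - 11.37 * v16 + 0.3965 * temp_c * v16

    for t in (0, -10, -20):
        for v in (0.2, 4, 8):
            print(f"T={t:4d} C, wind={v:3.1f} m/s -> WCET={wind_chill_c(t, v):6.1f} C")
    ```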

  8. Males Under-Estimate Academic Performance of Their Female Peers in Undergraduate Biology Classrooms

    PubMed Central

    Brownell, Sara E.; Wiggins, Benjamin L.; Crowe, Alison J.; Goodreau, Steven M.

    2016-01-01

    Women who start college in one of the natural or physical sciences leave in greater proportions than their male peers. The reasons for this difference are complex, and one possible contributing factor is the social environment women experience in the classroom. Using social network analysis, we explore how gender influences the confidence that college-level biology students have in each other’s mastery of biology. Results reveal that males are more likely than females to be named by peers as being knowledgeable about the course content. This effect increases as the term progresses, and persists even after controlling for class performance and outspokenness. The bias in nominations is specifically due to males over-nominating their male peers relative to their performance. The over-nomination of male peers is commensurate with an overestimation of male grades by 0.57 points on a 4 point grade scale, indicating a strong male bias among males when assessing their classmates. Females, in contrast, nominated equitably based on student performance rather than gender, suggesting they lacked gender biases in filling out these surveys. These trends persist across eleven surveys taken in three different iterations of the same Biology course. In every class, the most renowned students are always male. This favoring of males by peers could influence student self-confidence, and thus persistence in this STEM discipline. PMID:26863320

  9. Preliminary Results of Simulations and Field Investigations of the Performance of the WISDOM GPR of the ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Ciarletti, V.; Corbel, C.; Cais, P.; Plettemeier, D.; Hamran, S. E.; Oyan, M.; Clifford, S.; Reineix, A.

    2009-04-01

    WISDOM (Water Ice and Subsurface Deposit Observations on Mars) is a ground penetrating radar (GPR) that was selected as one of three survey instruments on the ExoMars Rover Pasteur Payload. Its purpose is to characterize the nature of the shallow subsurface (including geological structure, electromagnetic properties, and potential hydrological state) and identify the most promising locations for investigation and sampling by the Rover's onboard drill - providing information down to a depth of 2 or 3 meters with a vertical resolution of a few centimeters (performance characteristics that will vary, depending on the local permittivity and conductivity of the subsurface). WISDOM is a polarimetric, step-frequency GPR operating over the frequency range of 0.5 - 3 GHz. The polarimetric capability of WISDOM is particularly useful for identifying and characterizing oriented structures like faults, fractures and stratigraphic interface roughness. To achieve this objective, special care has been dedicated to the design of the antenna system, which consists of a pair of Vivaldi antennas to conduct both co- and cross-polar measurements. WISDOM will perform its scientific investigations at each of the sites visited by the Rover and during the intervening traverses. During a traverse between two successive experiment cycles of the mission (drilling and sample analysis), WISDOM soundings will be performed to provide a coarse survey of the structure and nature of the underground and its large-scale variations. This information is required to understand the overall geological context and the properties of the subsurface. When a particular location has been selected for potential investigation by the drill, WISDOM will obtain subsurface profiles on a 2D grid, in order to synthesize a 3D map of subsurface soil characteristics and spatial variability. Full polarimetric soundings will be performed at 10 cm intervals along each parallel grid line, which will have a line-to-line spacing
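
    The depth resolution quoted above follows directly from the radar bandwidth and the subsurface permittivity. The sketch below (Python) evaluates the usual stepped-frequency resolution relation dR = c / (2 * B * sqrt(eps_r)) for the 0.5-3 GHz sweep and a few assumed permittivities; the permittivity values are illustrative, not ExoMars site measurements.

    ```python
    C = 299_792_458.0          # speed of light, m/s
    BANDWIDTH = 3.0e9 - 0.5e9  # WISDOM sweep, Hz

    def range_resolution_cm(rel_permittivity):
        """Theoretical vertical resolution of a stepped-frequency GPR, in cm."""
        return 100.0 * C / (2.0 * BANDWIDTH * rel_permittivity ** 0.5)

    for eps in (3.0, 5.0, 8.0):   # illustrative dry-regolith to icy-soil values
        print(f"eps_r = {eps}: resolution ~ {range_resolution_cm(eps):.1f} cm")
    ```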

  10. Study to perform preliminary experiments to evaluate particle generation and characterization techniques for zero-gravity cloud physics experiments

    NASA Technical Reports Server (NTRS)

    Katz, U.

    1982-01-01

    Methods of particle generation and characterization with regard to their applicability for experiments requiring cloud condensation nuclei (CCN) of specified properties were investigated. Since aerosol characterization is a prerequisite to assessing performance of particle generation equipment, techniques for characterizing aerosol were evaluated. Aerosol generation is discussed, and atomizer and photolytic generators including preparation of hydrosols (used with atomizers) and the evaluation of a flight version of an atomizer are studied.

  11. Obstacle Avoidance, Visual Detection Performance, and Eye-Scanning Behavior of Glaucoma Patients in a Driving Simulator: A Preliminary Study

    PubMed Central

    Prado Vega, Rocío; van Leeuwen, Peter M.; Rendón Vélez, Elizabeth; Lemij, Hans G.; de Winter, Joost C. F.

    2013-01-01

    The objective of this study was to evaluate differences in driving performance, visual detection performance, and eye-scanning behavior between glaucoma patients and control participants without glaucoma. Glaucoma patients (n = 23) and control participants (n = 12) completed four 5-min driving sessions in a simulator. The participants were instructed to maintain the car in the right lane of a two-lane highway while their speed was automatically maintained at 100 km/h. Additional tasks per session were: Session 1: none, Session 2: verbalization of projected letters, Session 3: avoidance of static obstacles, and Session 4: combined letter verbalization and avoidance of static obstacles. Eye-scanning behavior was recorded with an eye-tracker. Results showed no statistically significant differences between patients and control participants for lane keeping, obstacle avoidance, and eye-scanning behavior. Steering activity, number of missed letters, and letter reaction time were significantly higher for glaucoma patients than for control participants. In conclusion, glaucoma patients were able to avoid objects and maintain a nominal lane keeping performance, but applied more steering input than control participants, and were more likely than control participants to miss peripherally projected stimuli. The eye-tracking results suggest that glaucoma patients did not use extra visual search to compensate for their visual field loss. Limitations of the study, such as small sample size, are discussed. PMID:24146975

  12. Efficacy of Brain Gym Training on the Cognitive Performance and Fitness Level of Active Older Adults: A Preliminary Study.

    PubMed

    Cancela, José M; Vila Suárez, Ma Helena; Vasconcelos, Jamine; Lima, Ana; Ayán, Carlos

    2015-10-01

    This study evaluates the impact of Brain Gym (BG) training in active older adults. Eighty-five participants were assigned to four training groups: BG (n = 18), BG plus water-based exercise (n = 18), land-based exercise (n = 30), and land plus water-based exercise (n = 19). The effects of the programs on the attention and memory functions were assessed by means of the Symbol Digit Modalities Test. The two-min step and the eight-foot up-and-go tests were used to evaluate their impact on fitness level. No program had a significant influence on the participants' cognitive performance, while different effects on the sample's fitness levels were observed. These findings suggest that the effects of BG on the cognitive performance and fitness level of active older adults are similar to those obtained after the practice of a traditional exercise program. Whether BG is performed in isolation or combined with other exercise programs seems to have no influence on such effects.

  13. LIFE ESTIMATION OF HIGH LEVEL WASTE TANK STEEL FOR F-TANK FARM CLOSURE PERFORMANCE ASSESSMENT - 9310

    SciTech Connect

    Subramanian, K; Bruce Wiersma, B; Stephen Harris, S

    2009-01-12

    High level radioactive waste (HLW) is stored in underground carbon steel storage tanks at the Savannah River Site. The underground tanks will be closed by removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations, and severing/sealing external penetrations. An estimation of the life of the carbon steel materials of construction has been completed in support of the performance assessment. The estimation considered general and localized corrosion mechanisms of the tank steel exposed to grouted conditions. A stochastic approach was followed to estimate the distributions of failures based upon mechanisms of corrosion, accounting for variances in each of the independent variables. The methodology and results for one type of tank are presented.
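
    A minimal sketch of the kind of stochastic life estimate described above is given below (Python): it samples a general-corrosion rate from a lognormal distribution and converts wall thickness into a distribution of perforation times. All parameter values are hypothetical placeholders, not the Savannah River tank data, and only a single corrosion mechanism is represented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    wall_mm = 12.7              # hypothetical steel wall thickness
    median_rate_um_yr = 2.0     # hypothetical median general-corrosion rate
    sigma_log = 0.5             # hypothetical lognormal spread of the rate

    rates = rng.lognormal(mean=np.log(median_rate_um_yr), sigma=sigma_log, size=100_000)
    life_yr = (wall_mm * 1000.0) / rates   # years to consume the full wall thickness

    print(f"median life ~ {np.median(life_yr):,.0f} yr")
    print(f"5th-percentile (early-failure) life ~ {np.percentile(life_yr, 5):,.0f} yr")
    ```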

  14. Evaluation of Blade-Strike Models for Estimating the Biological Performance of Large Kaplan Hydro Turbines

    SciTech Connect

    Deng, Zhiqun; Carlson, Thomas J.; Ploskey, Gene R.; Richmond, Marshall C.

    2005-11-30

    BioIndex testing of hydro-turbines is sought as an analog to the hydraulic index testing conducted on hydro-turbines to optimize their power production efficiency. In BioIndex testing the goal is to identify those operations within the range identified by Index testing where the survival of fish passing through the turbine is maximized. BioIndex testing includes the immediate tailrace region as well as the turbine environment between a turbine's intake trashracks and the exit of its draft tube. The US Army Corps of Engineers and the Department of Energy have been evaluating a variety of means, such as numerical and physical turbine models, to investigate the quality of flow through a hydro-turbine and other aspects of the turbine environment that determine its safety for fish. The goal is to use these tools to develop hypotheses identifying turbine operations and predictions of their biological performance that can be tested at prototype scales. Acceptance of hypotheses would be the means for validation of new operating rules for the turbine tested that would be in place when fish were passing through the turbines. The overall goal of this project is to evaluate the performance of numerical blade strike models as a tool to aid development of testable hypotheses for bioIndexing. Evaluation of the performance of numerical blade strike models is accomplished by comparing predictions of fish mortality resulting from strike by turbine runner blades with observations made using live test fish at mainstem Columbia River Dams and with other predictions of blade strike made using observations of beads passing through a 1:25 scale physical turbine model.
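
    The deterministic blade-strike models referred to above are typically variants of the von Raben relation, in which the probability that a fish of length L passing axially at velocity V through an n-bladed runner turning at N rpm is struck scales with the number of blade passages that occur during its transit. The sketch below (Python) implements that simple relation with hypothetical numbers; it is not the Corps/DOE model itself.

    ```python
    def blade_strike_probability(fish_length_m, axial_velocity_ms, n_blades, rpm):
        """Simple von Raben-type estimate: blade passages during the fish's transit, capped at 1."""
        blade_passages_per_s = n_blades * rpm / 60.0
        transit_time_s = fish_length_m / axial_velocity_ms
        return min(1.0, blade_passages_per_s * transit_time_s)

    # Hypothetical Kaplan-like numbers: 15 cm smolt, 6 m/s axial flow, 6 blades, 90 rpm
    print(f"P(strike) ~ {blade_strike_probability(0.15, 6.0, 6, 90.0):.3f}")
    ```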

  15. Preliminary results of real-time PPP-RTK positioning algorithm development for moving platforms and its performance validation

    NASA Astrophysics Data System (ADS)

    Won, Jihye; Park, Kwan-Dong

    2015-04-01

    Real-time PPP-RTK positioning algorithms were developed to obtain precise coordinates of moving platforms. In this implementation, corrections for the satellite orbit and satellite clock were taken from the IGS-RTS products, while the ionospheric delay was removed through the ionosphere-free combination and the tropospheric delay was either handled using the Global Pressure and Temperature (GPT) model or estimated as a stochastic parameter. To improve the convergence speed, all the available GPS and GLONASS measurements were used and the Extended Kalman Filter parameters were optimized. To validate our algorithms, we collected GPS and GLONASS data from a geodetic-quality receiver installed on the roof of a moving vehicle in an open-sky environment and used IGS final products of satellite orbits and clock offsets. The horizontal positioning error dropped below 10 cm within 5 minutes, and the error stayed below 10 cm even after the vehicle started moving. When the IGS-RTS product and the GPT model were used instead of the IGS precise product, the positioning accuracy of the moving vehicle was maintained at better than 20 cm once convergence was achieved at around 6 minutes.
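
    The ionosphere-free combination mentioned above is the standard dual-frequency observable; the sketch below (Python) forms it from GPS L1/L2 pseudoranges. The pseudorange values are made up for illustration, while the frequencies are the nominal GPS values.

    ```python
    F1 = 1575.42e6  # GPS L1, Hz
    F2 = 1227.60e6  # GPS L2, Hz

    def ionosphere_free(p1_m, p2_m):
        """First-order ionosphere-free pseudorange combination (meters)."""
        return (F1**2 * p1_m - F2**2 * p2_m) / (F1**2 - F2**2)

    # Hypothetical pseudoranges differing by a few meters of ionospheric delay on L2 vs L1
    p1, p2 = 22_345_678.20, 22_345_681.50
    print(f"IF pseudorange: {ionosphere_free(p1, p2):.2f} m")
    ```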

  16. Preliminary performance assessment of the engineered barriers for a low- and intermediate-level radioactive waste repository

    SciTech Connect

    Cho, W.J.; Lee, J.O.; Hahn, P.S.; Chun, K.S.

    1996-10-01

    Radionuclide release from an engineered barrier in a low- and intermediate-level waste repository is evaluated. The results of experimental studies conducted to determine the radionuclide diffusion coefficients and the hydraulic conductivities of calcium bentonite and crushed granite mixtures are presented. The hydraulic conductivity of the mixture is relatively low even at low dry density and clay content, and the principal mechanism of radionuclide migration through the mixture is diffusion. The measured values of apparent diffusion coefficients in calcium bentonite with a dry density of 1.4 Mg/m³ are of the order of 10⁻¹³ to 10⁻¹² m²/s for cations and 10⁻¹¹ m²/s for iodine. These values are similar to those in sodium bentonite. The radionuclide release rates from the engineered barrier composed of the concrete structure and the clay-based backfill were calculated. Carbon-14 and ⁹⁹Tc are the important nuclides; however, their maximum release rates are <10⁻⁵ GBq/yr. To quantify the effect of uncertainties of input parameters on the radionuclide release rates, Latin Hypercube sampling was used, and the ranges of release rates were estimated statistically with a confidence level of 95%. The uncertainties of the assessment results of the radionuclide release rate are larger in the case of sorbing nuclides such as ¹³⁷Cs. Finally, the sensitivity of the release rate to the input parameters is also evaluated.

  17. Characterizing shallow secondary clarifier performance where conventional flux theory over-estimates allowable solids loading rate.

    PubMed

    Daigger, Glen T; Siczka, John S; Smith, Thomas F; Frank, David A; McCorquodale, J A

    2016-01-01

    The performance characteristics of relatively shallow (3.3 and 3.7 m sidewater depth in 30.5 m diameter) activated sludge secondary clarifiers were extensively evaluated during a 2-year testing program at the City of Akron Water Reclamation Facility (WRF), Ohio, USA. Testing included hydraulic and solids loading stress tests, and measurement of sludge characteristics (zone settling velocity (ZSV), dispersed and flocculated total suspended solids), and the results were used to calibrate computational fluid dynamic (CFD) models of the various clarifiers tested. The results demonstrated that good performance could be sustained at surface overflow rates in excess of 3 m/h, as long as the clarifier influent mixed liquor suspended solids (MLSS) concentration was controlled to below critical values. The limiting solids loading rate (SLR) was significantly lower than the value predicted by conventional solids flux analysis based on the measured ZSV/MLSS relationship. CFD analysis suggested that this resulted because mixed liquor entering the clarifier was being directed into the settled sludge blanket, diluting it and also creating a 'thin' concentration sludge blanket that overlays the thicker concentration sludge blanket typically expected. These results indicate the need to determine the allowable SLR for shallow clarifiers using approaches other than traditional solids flux analysis. A combination of actual testing and CFD analyses are demonstrated here to be effective in doing so.
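
    For readers unfamiliar with the conventional flux analysis this paper argues against, the sketch below (Python) performs a simplified limiting-flux calculation: a Vesilind settling relation v = v0 * exp(-k * X) is combined with the underflow-driven bulk flux, and the limiting solids loading rate is taken as the minimum of the total flux curve above the feed concentration. The settling parameters and underflow velocity are hypothetical, not the Akron WRF values.

    ```python
    import numpy as np

    v0, k = 7.0, 0.4        # hypothetical Vesilind parameters: m/h, L/g
    underflow_v = 0.5       # hypothetical underflow bulk velocity Qu/A, m/h
    mlss_feed = 3.0         # hypothetical clarifier feed MLSS, g/L

    X = np.linspace(mlss_feed, 15.0, 2000)                  # g/L
    total_flux = X * (v0 * np.exp(-k * X) + underflow_v)    # kg/(m2*h), since g/L == kg/m3

    i = np.argmin(total_flux)
    print(f"limiting SLR ~ {total_flux[i]:.1f} kg/(m2*h) at X ~ {X[i]:.1f} g/L")
    ```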

  18. Effects of simplifying assumptions on optimal trajectory estimation for a high-performance aircraft

    NASA Technical Reports Server (NTRS)

    Kern, Lura E.; Belle, Steve D.; Duke, Eugene L.

    1990-01-01

    When analyzing the performance of an aircraft, certain simplifying assumptions, which decrease the complexity of the problem, can often be made. The degree of accuracy required in the solution may determine the extent to which these simplifying assumptions are incorporated. A complex model may yield more accurate results if it describes the real situation more thoroughly. However, a complex model usually involves more computation time, makes the analysis more difficult, and often requires more information to do the analysis. Therefore, to choose the simplifying assumptions intelligently, it is important to know what effects the assumptions may have on the calculated performance of a vehicle. Several simplifying assumptions are examined, the effects of simplified models are compared to those of the more complex ones, and conclusions are drawn about the impact of these assumptions on flight envelope generation and optimal trajectory calculation. Models which affect an aircraft are analyzed, but the implications of simplifying the model of the aircraft itself are not studied. The examples are atmospheric models, gravitational models, different models for equations of motion, and constraint conditions.

  20. A Preliminary Investigation into the Effect of Standards-Based Grading on the Academic Performance of African-American Students

    NASA Astrophysics Data System (ADS)

    Bradbury-Bailey, Mary

    With the implementation of No Child Left Behind came a wave of educational reform intended for those working with student populations whose academic performance seemed to indicate an alienation from the educational process. Central to these reforms was the implementation of standards-based instruction and its accompanying standardized assessments; however, in one area reform seemed nonexistent: the teacher's gradebook (Erickson, 2010; Marzano, 2006; Scriffiny, 2008). Given the link between the grading process and achievement motivation, Ames (1992) suggested the use of practices that promote mastery goal orientation. The purpose of this study was to examine the impact of a standards-based grading system, as a factor contributing to mastery goal orientation, on the academic performance of urban African American students. To determine the degree of impact, this study first compared the course content averages and End-of-Course Test (EOCT) scores for science classes using a traditional grading system to those using a standards-based grading system by employing an Analysis of Covariance (ANCOVA). While there was an increase in all grading areas, two showed a significant difference: the Physical Science course content average (p = 0.024) and the Biology EOCT scores (p = 0.0876). These gains suggest that standards-based grading can have a positive impact on the academic performance of African American students. Secondly, this study examined the correlation between the course content averages and the EOCT scores for both the traditional and standards-based grading system; for both Physical Science and Biology, there was a stronger correlation between these two scores for the standards-based grading system.

  1. A preliminary study into performing routine tube output and automatic exposure control quality assurance using radiology information system data.

    PubMed

    Charnock, P; Jones, R; Fazakerley, J; Wilde, R; Dunn, A F

    2011-09-01

    Data are currently being collected from hospital radiology information systems in the North West of the UK for the purposes of both clinical audit and patient dose audit. Could these data also be used to satisfy quality assurance (QA) requirements according to UK guidance? From 2008 to 2009, 731 653 records were submitted from 8 hospitals in North West England. For automatic exposure control QA, the protocol from Institute of Physics and Engineering in Medicine (IPEM) report 91 recommends that milliampere-seconds (mAs) can be monitored for repeatability and reproducibility using a suitable phantom, at 70-81 kV. Abdomen AP and chest PA examinations were analysed to find the most common kilovoltage used; these records were then used to plot average monthly mAs with time. IPEM report 91 also recommends that a range of commonly used clinical settings is used to check output reproducibility and repeatability. For each tube, the dose area product values were plotted over time for the two most common exposure factor sets. Results show that it is possible to do performance checks of AEC systems; however, more work is required to be able to monitor tube output performance. Procedurally, the management system requires work and the benefits to the workflow would need to be demonstrated.
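
    As an illustration of the kind of trend monitoring described above, the sketch below (Python/pandas, with a fabricated toy table standing in for RIS exports) filters one examination and tube-potential band and charts the monthly mean mAs and its coefficient of variation. The column names are assumptions, not the actual RIS schema.

    ```python
    import pandas as pd

    # Toy stand-in for RIS export rows: date, exam, kV, mAs, DAP
    records = pd.DataFrame({
        "date": pd.to_datetime(["2008-01-10", "2008-01-22", "2008-02-05", "2008-02-19", "2008-03-07"]),
        "exam": ["Chest PA"] * 5,
        "kv":   [75, 75, 81, 75, 75],
        "mas":  [2.1, 2.3, 2.0, 2.2, 2.4],
        "dap":  [0.11, 0.12, 0.10, 0.12, 0.13],
    })

    aec = records[(records["exam"] == "Chest PA") & records["kv"].between(70, 81)]
    monthly = aec.set_index("date").resample("MS")["mas"].agg(["mean", "std", "count"])
    monthly["cv_percent"] = 100 * monthly["std"] / monthly["mean"]
    print(monthly)
    ```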

  2. Effect of the application of an electric field on the performance of a two-phase loop device: preliminary results

    NASA Astrophysics Data System (ADS)

    Creatini, F.; Di Marco, P.; Filippeschi, S.; Fioriti, D.; Mameli, M.

    2015-11-01

    In the last decade, the continuous development of electronics has highlighted the need for a change of approach to thermal management. In this scenario, Pulsating Heat Pipes (PHPs) are promising novel two-phase passive heat transport devices that seem to meet present and future thermal requirements. Nevertheless, the phenomena governing PHPs are quite unique and not completely understood. In particular, single closed-loop PHPs exhibit several drawbacks, mostly related to reduced thermal performance and reliability, i.e. the occurrence of multiple operational quasi-steady states. The present research work proposes the application of an electric field as a technique to promote circulation of the working fluid in a preferential direction and to stabilize device operation. The tested single closed-loop PHP is made of a copper tube with an inner diameter of 2.00 mm and is filled with pure ethanol (60% filling ratio). The electric field is generated by a pair of wire-shaped electrodes powered with DC voltage up to 20 kV and laid parallel to the longitudinal axis of the glass tube constituting the adiabatic section. Although the electric field intensity in the working fluid region is weakened both by polarization of the working fluid and by the interposition of the glass tube, the experimental results highlight the influence of the electric field on the device thermal performance and encourage continuation of the research in this direction.

  3. The usefulness of performance matrix tests in locomotor system evaluation of girls attending a ballet school - preliminary observation.

    PubMed

    Wójcik, Małgorzata; Siatkowski, Idzi

    2014-01-01

    [Purpose] Learning ballet is connected with continuous use of the locomotor system while subjecting it to high loads. Therefore, we conducted some research defining the appearance of weak links in the motor system, in order to eliminate the risk of injury. [Methods] Fifty-two female students of a ballet school were examined. To identify weak links, low-threshold Performance Matrix tests were performed. An analysis of weak link occurrence in the locomotor system was carried out using two-way analysis of variance (ANOVA) with Tukey's HSD test, clustering methods and Principal Component Analysis (PCA). [Results] The average age of the subjects was 11.64±0.53 years (mean ± standard deviation), their average body height was 151.1±7.5 cm, their average body weight was 35.92±5.41 kg, and their average time of learning at ballet school was 2.17±0.65 years. We found that there were significant differences in weak link occurrence in the motor system of every girl examined. [Conclusions] Weak links were found in every location of the motor system. Our results show that the influence of weak link location is essentially different from their occurrence, and that learning ballet has a significantly different impact on the number of weak links in different locations. PMID:24567673

  4. The Usefulness of Performance Matrix Tests in Locomotor System Evaluation of Girls Attending a Ballet School — Preliminary Observation

    PubMed Central

    Wójcik, Małgorzata; Siatkowski, Idzi

    2014-01-01

    [Purpose] Learning ballet is connected with continuous use of the locomotor system while subjecting it to high loads. Therefore, we conducted some research defining the appearance of weak links in the motor system, in order to eliminate the risk of injury. [Methods] Fifty-two female students of a ballet school were examined. To identify weak links, low-threshold Performance Matrix tests were performed. An analysis of weak link occurrence in the locomotor system was carried out using two-way analysis of variance (ANOVA) with Tukey’s HSD test, clustering methods and Principal Component Analysis (PCA). [Results] The average age of the subjects was 11.64±0.53 years (mean ± standard deviation), their average body height was 151.1±7.5 cm, their average body weight was 35.92±5.41 kg, and their average time of learning at ballet school was 2.17±0.65 years. We found that there were significant differences in weak link occurrence in the motor system of every girl examined. [Conclusions] Weak links were found in every location of the motor system. Our results show that the influence of weak link location is essentially different from their occurrence, and that learning ballet has a significantly different impact on the number of weak links in different locations. PMID:24567673

  5. Preliminary Evaluation on the Effects of Feeds on the Growth and Early Reproductive Performance of Zebrafish (Danio rerio)

    PubMed Central

    2012-01-01

    This study evaluated the effects of several commercially available feeds and different feeding regimes on the growth and early reproductive performance of zebrafish (Danio rerio). Juvenile zebrafish (n = 20; 5.06 ± 0.69 mg) were stocked into each of 24 tanks (volume, 2 L); 3 tanks were assigned to each of 8 feeding combinations for a period of 60 d. At the end of 60 d, 2 male and 2 female fish from each tank were pooled by dietary treatment (n = 6) and used to evaluate the effects of feeding combinations on early reproductive performance. Zebrafish fed dietary treatments 3 and 7 had significantly greater weight gain than zebrafish fed diet 5. Mean spawning success was significantly greater in zebrafish fed the control diet (Artemia only) than in those fed diet 1. Mean hatch rates were greater in zebrafish fed the control feed and diets 1, 2, 3, 5, and 6 than in zebrafish fed diet 4. Additional results suggest that female zebrafish are sexually mature after 90 d post-fertilization and that fertilization rates are the limiting factor in early reproduction. PMID:23043806

  6. A preliminary study of functional brain activation among marijuana users during performance of a virtual water maze task.

    PubMed

    Sneider, Jennifer Tropp; Gruber, Staci A; Rogowska, Jadwiga; Silveri, Marisa M; Yurgelun-Todd, Deborah A

    2013-01-01

    Numerous studies have reported neurocognitive impairments associated with chronic marijuana use. Given that the hippocampus contains a high density of cannabinoid receptors, hippocampal-mediated cognitive functions, including visuospatial memory, may have increased vulnerability to chronic marijuana use. Thus, the current study examined brain activation during the performance of a virtual analogue of the classic Morris water maze task in 10 chronic marijuana (MJ) users compared to 18 non-using (NU) comparison subjects. Imaging data were acquired using blood oxygen-level dependent (BOLD) functional MRI at 3.0 Tesla during retrieval (hidden platform) and motor control (visible platform) conditions. While task performance on learning trials was similar between groups, MJ users demonstrated a deficit in memory retrieval. For BOLD fMRI data, NU subjects exhibited greater activation in the right parahippocampal gyrus and cingulate gyrus compared to the MJ group for the Retrieval - Motor control contrast (NU > MJ). These findings suggest that hypoactivation in MJ users may be due to differences in the efficient utilization of neuronal resources during the retrieval of memory. Given the paucity of data on visuospatial memory function in MJ users, these findings may help elucidate the neurobiological effects of marijuana on brain activation during memory retrieval. PMID:23951549

  7. A Direct Performance Test for Assessing Activities of Daily Living in Patients with Mild Degenerative Dementia: The Development of the ETAM and Preliminary Results

    PubMed Central

    Schmiedeberg-Sohn, Anke; Graessel, Elmar; Luttenberger, Katharina

    2015-01-01

    Background There are currently only a few performance tests that assess the capacity to perform activities of daily living. These measures frequently require a long time to administer, are strongly cognition oriented, or have not been adequately validated. Methods The Erlangen Test of Activities of Daily Living in Mild Dementia (ETAM) was developed in a 4-phase process that was based on the International Classification of Functioning, Disability, and Health (ICF). A pilot study was conducted on 30 subjects with mild dementia with a mean age of 80 years. The subjects' mean score on the MMSE was 21.5. Twenty-one of the 30 subjects were women. Results Ten items were developed and tested in the pilot study. The mean time required to complete the test was 26 min. The item analysis showed difficulties that ranged primarily from r = 0.28 to r = 0.79. The ETAM had a moderate correlation with the MMSE (r = 0.310) and a low correlation with the Geriatric Depression Scale-15 (GDS-15; r = 0.149). Conclusion The preliminary version of the ETAM is quick and easy to use and has predominantly satisfactory item characteristics. There is still a need to revise the items ‘giving directions’ and ‘making tea’ with regard to standardisation. PMID:25873929

  8. Solid oxide fuel cell/gas turbine power plant cycles and performance estimates

    SciTech Connect

    Lundberg, W.L.

    1996-12-31

    SOFC pressurization enhances SOFC efficiency and power performance. It enables the direct integration of the SOFC and gas turbine technologies, which can form the basis for very efficient combined-cycle power plants. Pressurized SOFC/gas turbine (PSOFC/GT) cogeneration systems, producing steam and/or hot water in addition to electric power, can be designed to achieve high fuel effectiveness values. A wide range of steam pressures and temperatures are possible owing to system component arrangement flexibility. It is anticipated that Westinghouse will offer small PSOFC/GT power plants for sale early in the next decade. These plants will have capacities less than 10 MW net ac, and they will operate with efficiencies in the 60-65% (net ac/LHV) range.

  9. Estimation of the Performance of Multiple Active Neutron Interrogation Signatures for Detecting Shielded HEU

    SciTech Connect

    David L. Chichester; Scott J. Thompson; Scott M. Watson; James T. Johnson; Edward H. Seabury

    2012-10-01

    A comprehensive modeling study has been carried out to evaluate the utility of multiple active neutron interrogation signatures for detecting shielded highly enriched uranium (HEU). The modeling effort focused on varying HEU masses from 1 kg to 20 kg; varying types of shields including wood, steel, cement, polyethylene, and borated polyethylene; varying depths of the HEU in the shields; and varying engineered shields immediately surrounding the HEU, including steel, tungsten, and cadmium. Neutron and gamma-ray signatures were the focus of the study, and false negative detection probabilities versus measurement time were used as a performance metric. To facilitate comparisons among different approaches, an automated method was developed to generate receiver operating characteristic (ROC) curves for different sets of model variables for multiple background count rate conditions. This paper summarizes results of the analysis, including laboratory benchmark comparisons between simulations and experiments. The important impact engineered shields can have on degrading detectability, and methods for mitigating it, are also discussed.
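
    As a simplified picture of the false-negative-versus-time metric used above, the sketch below (Python/SciPy) treats the detector signature as Poisson counting: an alarm threshold is set from the background-only distribution at a fixed false-alarm probability, and the false-negative probability for a given source-plus-background rate is then computed as a function of counting time. The count rates and the 5% false-alarm level are hypothetical, not values from the study.

    ```python
    from scipy.stats import poisson

    background_cps = 20.0   # hypothetical background count rate
    source_cps = 2.0        # hypothetical net signature rate from shielded HEU
    false_alarm = 0.05      # allowed false-positive probability

    for t in (10, 30, 60, 120, 300):
        mu_b = background_cps * t
        threshold = poisson.ppf(1.0 - false_alarm, mu_b)          # alarm if counts exceed this
        p_false_negative = poisson.cdf(threshold, mu_b + source_cps * t)
        print(f"t = {t:4d} s: P(false negative) = {p_false_negative:.3f}")
    ```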

  10. Evaluation of blade-strike models for estimating the biological performance of large Kaplan hydro turbines

    SciTech Connect

    Deng, Z.; Carlson, T. J.; Ploskey, G. R.; Richmond, M. C.

    2005-11-01

    Bio-indexing of hydro turbines has been identified as an important means to optimize passage conditions for fish by identifying operations for existing and new design turbines that minimize the probability of injury. Cost-effective implementation of bio-indexing requires the use of tools such as numerical and physical turbine models to generate hypotheses for turbine operations that can be tested at prototype scales using live fish. Blade strike has been proposed as an index variable for the biological performance of turbines. This report reviews an evaluation of the use of numerical blade-strike models as a means to predict the probability of blade strike and injury for juvenile salmon smolts passing through large Kaplan turbines on the mainstem Columbia River.

  11. Transient performance estimation of charge plasma based negative capacitance junctionless tunnel FET

    NASA Astrophysics Data System (ADS)

    Singh, Sangeeta; Kondekar, P. N.; Pal, Pawan

    2016-02-01

    We investigate the transient behavior of an n-type double gate negative capacitance junctionless tunnel field effect transistor (NC-JLTFET). The structure is realized by using the work-function engineering of metal electrodes over a heavily doped n+ silicon channel and a ferroelectric gate stack to obtain negative capacitance behavior. The positive feedback in the electric dipoles of ferroelectric materials results in boosting of the applied gate bias. Various device transient parameters, namely transconductance, output resistance, output conductance, intrinsic gain, intrinsic gate delay, transconductance generation factor and unity-gain frequency, are analyzed using ac analysis of the device. To study the impact of the work-function variation of the control and source gates on device performance, a sensitivity analysis of the device has been carried out by varying these parameters. The simulation study reveals that the device preserves the inherent advantages of the charge-plasma junctionless structure and exhibits improved transient behavior as well.
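
    The small-signal figures of merit listed above can be extracted numerically from simulated transfer and output characteristics. The sketch below (Python/NumPy) shows a generic extraction of gm, transconductance generation factor, intrinsic gain and unity-gain frequency on a fabricated toy transfer curve; the current model, capacitance and output conductance are placeholders, not NC-JLTFET simulation output.

    ```python
    import numpy as np

    # Toy transfer curve Id(Vg) at fixed Vd, plus a gate capacitance, standing in for TCAD output
    vg = np.linspace(0.0, 1.0, 101)
    id_a = 1e-6 * np.exp(8.0 * vg)      # hypothetical drain current, A
    cgg_f = 1e-15                        # hypothetical total gate capacitance, F
    gds_s = 5e-8                         # hypothetical output conductance, S

    gm = np.gradient(id_a, vg)           # transconductance, S
    tgf = gm / id_a                       # transconductance generation factor, 1/V
    gain = gm / gds_s                     # intrinsic gain gm/gds
    ft_hz = gm / (2 * np.pi * cgg_f)      # unity-gain frequency estimate

    i = 60                                # report one bias point
    print(f"Vg={vg[i]:.2f} V: gm={gm[i]:.3e} S, TGF={tgf[i]:.1f} /V, "
          f"gain={gain[i]:.1f}, fT={ft_hz[i]/1e9:.1f} GHz")
    ```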

  12. Performance of a proportion-based approach to meta-analytic moderator estimation: results from Monte Carlo simulations.

    PubMed

    Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying

    2012-03-01

    This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5-1.8% range; the average percentage bias of the difference between correlations is also minimal, in the -0.1-1.2% range. Further analysis also suggests both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were the smallest (80 and 36, respectively). A sensitivity analysis that examines its performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.
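
    A rough Monte Carlo illustration of the idea, assuming each primary study's pooled correlation is approximately the proportion-weighted average of the two subgroup correlations (a simplification that ignores mixture distortions), is sketched below: regressing study correlations on the proportion of the focal group recovers the reference-group correlation (intercept) and the between-group difference (slope). This is not the authors' exact estimator, and all numbers are hypothetical apart from the smallest simulated condition (36 studies of 80 subjects) echoed from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    r_ref, r_diff = 0.20, 0.15          # true reference-group correlation and group difference
    n_studies, n_per_study = 36, 80     # matches the smallest condition mentioned above

    props = rng.uniform(0.2, 0.8, n_studies)                  # proportion of the focal group per study
    true_r = r_ref + r_diff * props                           # proportion-weighted pooled correlation
    sampling_sd = (1 - true_r**2) / np.sqrt(n_per_study - 1)  # approximate sampling SD of r
    observed_r = true_r + rng.normal(0.0, sampling_sd)

    slope, intercept = np.polyfit(props, observed_r, 1)
    print(f"estimated reference-group r ~ {intercept:.3f} (true {r_ref})")
    print(f"estimated between-group difference ~ {slope:.3f} (true {r_diff})")
    ```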

  13. Associating optical measurements and estimating orbits of geocentric objects with a Genetic Algorithm: performance limitations.

    NASA Astrophysics Data System (ADS)

    Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent

    2016-07-01

    Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem. The reason for this is that for S = 2 the problem is of polynomial complexity. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g. satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations. This means that the algorithm is restricted to orbits that are described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.

  14. Improved Performances in Subsonic Flows of an SPH Scheme with Gradients Estimated Using an Integral Approach

    NASA Astrophysics Data System (ADS)

    Valdarnini, R.

    2016-11-01

    In this paper, we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics. In particular, we consider the Gresho–Chan vortex problem, the growth of Kelvin–Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disk problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods. We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy, we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.

  15. Performance Estimates of Focal Plane Detectors for Focusing Gamma-Ray Telescopes

    NASA Astrophysics Data System (ADS)

    Zoglauer, Andreas C.; Wunderer, C. B.; Weidenspointner, G.; Boggs, S. E.; von Ballmoos, P.; Barriere, N.; Caroli, E.; Knoedlseder, J.

    2006-09-01

    Laue lenses for focusing gamma rays in the energy range from roughly 100 keV up to a few MeV have been successfully demonstrated in the laboratory and on balloon flights. These telescopes concentrate gamma rays onto a small, distant focal plane detector by utilizing Bragg reflection within the volume of suitably arranged crystals. Collection and detection areas are thus decoupled, enabling a significant reduction of background, which roughly scales with detector volume and constitutes the sensitivity limit of today's low- to medium-energy gamma-ray telescopes. In order to achieve the best possible sensitivity, the focal plane instrument needs excellent background rejection capabilities, high photo-peak efficiency, and good energy resolution. Different options exist to achieve these goals, and they have to be carefully balanced: for example, an active shield can be used to reduce the cosmic-photon background at the expense of higher background due to activation; Compton scattering can be used for background rejection, but requiring resolvable Compton scatters also reduces the photo-peak efficiency; the detector could, for example, consist of germanium, with excellent energy resolution but higher cooling requirements, or of CZT, with less stringent cooling requirements but worse energy resolution. We have started to explore some of the possible detector configurations, and will compare different focal plane instruments based on their narrow-line, continuum, and polarization sensitivities. For this purpose, simulations of the expected space radiation environment (including activation) have been performed with MGGPOD. The MEGAlib package was used to determine the achievable performance of these detectors, in an evaluation that includes each detector's individual properties such as energy and position resolution, thresholds, etc. CBW thanks the Townes Fellowship at UC Berkeley for support.
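
    To make the sensitivity trade-offs concrete, the following toy calculation estimates the narrow-line sensitivity of a background-dominated focal-plane detector from its effective collection area, photo-peak efficiency and background rate. All numbers are illustrative placeholders, and the formula is the textbook background-dominated limit, not a result of the MGGPOD/MEGAlib evaluation described above.

```python
# Toy narrow-line sensitivity estimate for a background-dominated focal-plane
# detector. Numbers are illustrative placeholders, not results from the
# MGGPOD/MEGAlib simulations described above.
import math

def narrow_line_sensitivity(a_eff_cm2, efficiency, bkg_rate_per_kev_s,
                            delta_e_kev, t_obs_s, n_sigma=3.0):
    """Minimum detectable line flux (ph/cm^2/s) for an n_sigma detection,
    assuming the background-dominated limit (source counts << background)."""
    bkg_counts = bkg_rate_per_kev_s * delta_e_kev * t_obs_s
    return n_sigma * math.sqrt(bkg_counts) / (efficiency * a_eff_cm2 * t_obs_s)

# Example: 300 cm^2 lens collection area, 30% photo-peak efficiency,
# 1e-3 counts/keV/s background in a 3 keV line window, 1 Ms observation.
print(f"{narrow_line_sensitivity(300.0, 0.3, 1e-3, 3.0, 1e6):.2e} ph/cm^2/s")
```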

  16. Preliminary evaluation and comparison of atmospheric turbulence rejection performance for infinite and receding horizon control in adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; De Dona, Jose

    2014-07-01

    Model-based optimal control such as Linear Quadratic Gaussian (LQG) control has been attracting considerable attention for adaptive optics systems. The ability of LQG to handle the complex dynamics of deformable mirrors and its relatively simple implementation make LQG attractive for large adaptive optics systems. However, LQG has its own share of drawbacks, such as suboptimal handling of constraints on actuator movements and possible numerical problems in the case of fast-sampling-rate discretization of the corresponding matrices. Unlike LQG, the Receding Horizon Control (RHC) technique provides control signals for a deformable mirror that are optimal within the prescribed constraints. This is achieved by reformulating the control problem as an online optimization problem that is solved at each sampling instance. In the unconstrained case, RHC produces the same control signals as LQG. However, when the control signals reach the constraints on the actuators' allowable movement in a deformable mirror, RHC finds the control signals that are optimal within those constraints, rather than just clipping the unconstrained optimum as is commonly done in LQG control. The article discusses the consequences of high-gain LQG control operation in the case when the constraints on the actuators' movement are reached. It is shown that clipping/saturating the control signals is not only suboptimal, but may be hazardous to the surface of a deformable mirror. The results of numerical simulations indicate that high-gain LQG control can lead to abrupt changes and spikes in the control signal when saturation occurs. The article further discusses a possible link between high-gain LQG and the waffle mode in the closed-loop operation of astronomical adaptive optics systems. Performance evaluation of Receding Horizon Control in terms of atmospheric disturbance rejection and a comparison with Linear Quadratic Gaussian control are performed. The results of the numerical simulations suggest that the
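
    The distinction between clipping an unconstrained optimum and optimizing within the constraints can be illustrated on a toy problem. The sketch below compares a clipped LQR law (a stand-in for saturated high-gain LQG) with a receding horizon controller that enforces an actuator bound explicitly, on a scalar plant; it contains no deformable-mirror dynamics, turbulence model or Kalman filter, so it demonstrates only the mechanics, not the paper's simulations.

```python
# Toy comparison of clipped LQR (a proxy for saturated high-gain LQG) versus
# receding horizon control (RHC) with an explicit actuator bound, on a scalar
# plant. Conceptual sketch only; no adaptive-optics model is included.
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

a, b, q, r = 1.2, 1.0, 1.0, 0.1        # unstable scalar plant, stage-cost weights
u_max, horizon, n_steps = 0.5, 10, 30

# Infinite-horizon LQR gain for the unconstrained problem.
P = solve_discrete_are(np.array([[a]]), np.array([[b]]),
                       np.array([[q]]), np.array([[r]]))[0, 0]
K = (b * P * a) / (r + b * P * b)

def rhc_control(x0):
    """Solve the finite-horizon constrained problem and apply the first move."""
    def cost(u_seq):
        x, J = x0, 0.0
        for u in u_seq:
            J += q * x * x + r * u * u
            x = a * x + b * u
        return J + q * x * x
    res = minimize(cost, np.zeros(horizon), bounds=[(-u_max, u_max)] * horizon)
    return res.x[0]

x_clip = x_rhc = 2.0
for _ in range(n_steps):
    u_clip = float(np.clip(-K * x_clip, -u_max, u_max))   # saturate the LQR command
    x_clip = a * x_clip + b * u_clip
    u_rhc = rhc_control(x_rhc)                            # optimize within the bound
    x_rhc = a * x_rhc + b * u_rhc
print(f"final |x|, clipped LQR: {abs(x_clip):.3f}   RHC: {abs(x_rhc):.3f}")
```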

  17. A Preliminary Model for Spacecraft Propulsion Performance Analysis Based on Nuclear Gain and Subsystem Mass-Power Balances

    NASA Technical Reports Server (NTRS)

    Chakrabarti, S.; Schmidt, G. R.; Thio, Y. C.; Hurst, C. M.

    1999-01-01

    Rapid transportation of human crews to destinations throughout the solar system will require propulsion systems having not only very high exhaust velocities (i.e., I_sp >= 10^4 to 10^5 s) but also extremely low mass-power ratios (i.e., alpha <= 10^-2 kg/kW). These criteria are difficult to meet with electric propulsion and other power-limited systems, but may be achievable with propulsion concepts that use onboard power to produce a net gain in energy via fusion or some other nuclear process. This paper compares the fundamental performance of these gain-limited systems with that of power-limited systems, and determines from a generic power balance the gains required for ambitious planetary missions ranging up to 100 AU. Results show that energy gain reduces the required effective mass-power ratio of the system, thus enabling shorter trip times than those of power-limited concepts.
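
    A toy calculation, not the paper's power-balance model, can show why an energy gain G shortens trip times: dividing the subsystem mass-power ratio by G raises the jet power per unit mass and hence the achievable acceleration. The sketch below assumes constant acceleration (accelerate halfway, decelerate halfway), ignores propellant mass change, and takes jet power as thrust times exhaust velocity over two; the I_sp, alpha and gain values are illustrative.

```python
# Toy illustration (not the paper's power-balance model) of how an energy
# gain G reduces the effective mass-power ratio and hence the trip time.
# Assumptions: constant acceleration, accelerate half way / decelerate half
# way, vehicle mass change ignored, jet power P_jet = thrust * v_e / 2.
import math

AU = 1.496e11          # m
G0 = 9.81              # m/s^2

def trip_time_days(distance_au, isp_s, alpha_kg_per_kw, gain):
    alpha_eff = alpha_kg_per_kw / gain        # kg per kW of jet power
    specific_power = 1.0e3 / alpha_eff        # W of jet power per kg of vehicle
    v_e = isp_s * G0                          # exhaust velocity, m/s
    accel = 2.0 * specific_power / v_e        # a = 2 (P/m) / v_e
    distance = distance_au * AU
    t = 2.0 * math.sqrt(distance / accel)     # s, from s = a t^2 / 4 per leg
    return t / 86400.0

# Example: I_sp = 1e5 s, subsystem alpha = 1 kg/kW, 100 AU mission.
for gain in (1, 10, 100):
    print(f"G = {gain:4d}:  {trip_time_days(100, 1e5, 1.0, gain):8.0f} days")
```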

  18. Preliminary validation of high performance liquid chromatography method for detection of methyl-testosterone residue in carp muscle

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Lin, Hong; Fu, Xiaoting; Li, Mingming

    2005-07-01

    The use of the synthetic anabolic steroid methyltestosterone (MT) as a growth promoter is prohibited in China. Validation of analytical methods for MT residues in food, and the results obtained with them, have therefore become indispensable. A high performance liquid chromatography (HPLC) method for the detection of MT in carp muscle tissue, with liquid-liquid extraction by trichloromethane-methanol, was preliminarily validated with reference to the following parameters: recovery (accuracy) at the 1, 5 and 10 mg/kg levels, between-run and within-run CV values (repeatability, also called relative standard deviation, RSD), and limit of detection. The recoveries were above 80% and the between-run and within-run CV values below 10% for muscle tissue. The limit of detection was 0.05 mg/kg.
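
    The validation statistics named above (recovery and CV/RSD) reduce to simple descriptive statistics over replicate measurements of spiked samples. The sketch below computes them for a hypothetical set of replicates at the 1 mg/kg level; the numbers are made up, not data from the study.

```python
# Minimal sketch of the validation statistics named above (recovery and
# within-run CV/RSD), using made-up replicate measurements at the 1 mg/kg
# spike level; these are not data from the study.
import statistics

spiked_level = 1.0                                   # mg/kg added to blank muscle
measured = [0.86, 0.91, 0.84, 0.88, 0.90, 0.87]      # hypothetical replicates, mg/kg

recovery_pct = 100.0 * statistics.mean(measured) / spiked_level
cv_pct = 100.0 * statistics.stdev(measured) / statistics.mean(measured)

print(f"recovery: {recovery_pct:.1f} %   within-run CV (RSD): {cv_pct:.1f} %")
```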

  19. Individual differences in visual information processing rate and the prediction of performance differences in team sports: a preliminary investigation.

    PubMed

    Adam, J J; Wilberg, R B

    1992-06-01

    This study used a backward-masking paradigm to examine individual differences in the rate of visual information processing among university basketball, ice hockey and Canadian football players. Displays containing four letters were presented for stimulus durations ranging from 25 to 300 ms. Following stimulus offset, a masking stimulus was presented for 200 ms. The subjects were instructed to write down as many letters as possible from the briefly presented stimulus display on a specially prepared response grid. The results indicated consistent individual differences in the rate of visual information processing. More importantly, it was found that the rate of visual information processing, as indexed by the backward-masking technique, has promising validity for predicting general performance excellence in university ice hockey and basketball players. Individual differences in the rate of visual information processing were interpreted as reflecting the operation of attentional factors.

  20. A two-dimensional locally regularized strain estimation technique: preliminary clinical results for the assessment of benign and malignant breast lesions

    NASA Astrophysics Data System (ADS)

    Brusseau, Elisabeth; Detti, Valérie; Coulon, Agnès; Maissiat, Emmanuèle; Boublay, Nawèle; Berthezène, Yves; Fromageau, Jérémie; Bush, Nigel; Bamber, Jeffrey

    2011-03-01

    We previously developed a 2D locally regularized strain estimation technique that was validated with ex vivo tissues. In this study, our technique is assessed with in vivo data, by examining breast abnormalities in clinical conditions. Method reliability is analyzed, as are tissue strain fields, according to the benign or malignant character of the lesion. Ultrasound RF data were acquired in two centers on ten lesions, five classified as fibroadenomas and the other five as malignant tumors, mainly ductal carcinomas of grades I to III. The estimation procedure we developed involves maximizing a similarity criterion (the normalized correlation coefficient, or NCC) between pre- and post-compression images, with the deformation effects taken into account. The closer this coefficient is to 1, the higher the probability of correct strain estimation. Results demonstrated the ability of our technique to provide good-quality strain images with clinical data. For all lesions, movies of tissue strain during compression were obtained, with strains reaching up to 15%. The NCC averaged over each movie was computed, yielding over the ten cases a mean value of 0.93, a minimum of 0.87 and a maximum of 0.98. These high NCC values confirm the reliability of the strain estimation. Moreover, lesions were clearly identified in all ten cases investigated. Finally, we observed that for malignant lesions, strain images can reveal a larger lesion extent than the ultrasound data and can help in evaluating the invasive character of the lesion.
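
    The core of the estimation procedure is the maximization of the NCC between corresponding blocks of the pre- and post-compression RF images. The sketch below implements a plain translation-only block-matching search as an illustration; the paper's method additionally accounts for local deformation of the blocks, which is not modeled here.

```python
# Minimal sketch of block matching by maximizing the normalized correlation
# coefficient (NCC) between a pre-compression block and candidate blocks in
# the post-compression image. Translation search only; the paper's method
# additionally models local deformation of the block.
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two same-sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_match(pre, post, top_left, block=(16, 16), search=8):
    """Return the displacement (dy, dx) maximizing the NCC, and the NCC value."""
    y0, x0 = top_left
    ref = pre[y0:y0 + block[0], x0:x0 + block[1]]
    best = (0, 0, -1.0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block[0] > post.shape[0] or x + block[1] > post.shape[1]:
                continue
            score = ncc(ref, post[y:y + block[0], x:x + block[1]])
            if score > best[2]:
                best = (dy, dx, score)
    return best

rng = np.random.default_rng(1)
pre = rng.normal(size=(128, 128))
post = np.roll(pre, shift=(3, -2), axis=(0, 1))   # synthetic 3 px / -2 px shift
print(best_match(pre, post, top_left=(50, 50)))   # expect (3, -2, ~1.0)
```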