A function approximation approach to anomaly detection in propulsion system test data
NASA Technical Reports Server (NTRS)
Whitehead, Bruce A.; Hoyt, W. A.
1993-01-01
Ground test data from propulsion systems such as the Space Shuttle Main Engine (SSME) can be automatically screened for anomalies by a neural network. The neural network screens data after being trained with nominal data only. Given the values of 14 measurements reflecting external influences on the SSME at a given time, the neural network predicts the expected nominal value of a desired engine parameter at that time. We compared the ability of three different function-approximation techniques to perform this nominal value prediction: a novel neural network architecture based on Gaussian bar basis functions, a conventional back propagation neural network, and linear regression. These three techniques were tested with real data from six SSME ground tests containing two anomalies. The basis function network trained more rapidly than back propagation. It yielded nominal predictions with a tight enough confidence interval to distinguish anomalous deviations from the nominal fluctuations in an engine parameter. Since the function-approximation approach requires nominal training data only, it is capable of detecting unknown classes of anomalies for which training data is not available.
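The screening scheme above can be sketched with the simplest of the three approximators compared in the paper, linear regression. The data here are synthetic stand-ins (the 14-measurement layout is taken from the abstract; everything else is an assumption), and the 4-sigma band is an illustrative choice of confidence interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 14 external-influence measurements (nominal data).
X_train = rng.normal(size=(500, 14))
true_w = rng.normal(size=14)
y_train = X_train @ true_w + rng.normal(scale=0.1, size=500)

# Fit a nominal-value predictor (linear regression with an intercept column).
X1 = np.hstack([X_train, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(X1, y_train, rcond=None)

# Residual spread on nominal data defines the confidence band.
resid = y_train - X1 @ w
sigma = resid.std()

def is_anomalous(x, y, n_sigma=4.0):
    """Flag a reading whose deviation from the nominal prediction
    exceeds the nominal fluctuation band."""
    y_hat = np.append(x, 1.0) @ w
    return abs(y - y_hat) > n_sigma * sigma

x_new = rng.normal(size=14)
y_nominal = np.append(x_new, 1.0) @ w   # on the nominal surface
y_faulty = y_nominal + 20 * sigma       # injected anomaly
```

Because only nominal data enter the fit, any anomaly class that perturbs the monitored parameter, known or not, is detectable in the residual.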
K-ε Turbulence Model Parameter Estimates Using an Approximate Self-similar Jet-in-Crossflow Solution
DeChant, Lawrence; Ray, Jaideep; Lefantzi, Sophia; ...
2017-06-09
The k-ε turbulence model has been described as perhaps “the most widely used complete turbulence model.” This family of heuristic Reynolds Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by requiring agreement with well-established canonical flows such as homogeneous shear flow, log-law behavior, etc. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model that is capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method utilizes a near field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytically derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
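For concreteness, the nominal parameter set referred to above is the widely quoted Launder-Spalding one, and a simplified jet trajectory of the kind mentioned is a power law in scaled coordinates. The trajectory coefficients A and m below are illustrative placeholders, not the values the paper derives:

```python
# Widely quoted nominal k-epsilon closure coefficients (Launder-Spalding).
NOMINAL = {"C_mu": 0.09, "C_eps1": 1.44, "C_eps2": 1.92,
           "sigma_k": 1.0, "sigma_eps": 1.3}

def jet_trajectory(x_over_rd, A=1.6, m=1.0 / 3.0):
    """Simplified jet-in-crossflow centerline, y/(r d) = A * (x/(r d))**m.
    A and m are hypothetical here; the paper relates such coefficients
    analytically to the turbulence-model parameters."""
    return A * x_over_rd ** m
```

Calibration then amounts to replacing entries of `NOMINAL` (and hence A, m) by inferred or analytically estimated values.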
NASA Astrophysics Data System (ADS)
Harmanec, Petr; Prša, Andrej
2011-08-01
The increasing precision of astronomical observations of stars and stellar systems is gradually getting to a level where the use of slightly different values of the solar mass, radius, and luminosity, as well as different values of fundamental physical constants, can lead to measurable systematic differences in the determination of basic physical properties. An equivalent issue with an inconsistent value of the speed of light was resolved by adopting a nominal value that is constant and has no error associated with it. Analogously, we suggest that the systematic error in stellar parameters may be eliminated by (1) replacing the solar radius R⊙ and luminosity L⊙ by nominal values that are by definition exact and expressed in SI units; (2) computing stellar masses in terms of M⊙ by noting that the measurement error of the product GM⊙ is 5 orders of magnitude smaller than the error in G; (3) computing stellar masses and temperatures in SI units by using the derived values; and (4) clearly stating the reference for the values of the fundamental physical constants used. We discuss the need and demonstrate the advantages of such a paradigm shift.
Nominal Values for Selected Solar and Planetary Quantities: IAU 2015 Resolution B3
NASA Astrophysics Data System (ADS)
Prša, Andrej; Harmanec, Petr; Torres, Guillermo; Mamajek, Eric; Asplund, Martin; Capitaine, Nicole; Christensen-Dalsgaard, Jørgen; Depagne, Éric; Haberreiter, Margit; Hekker, Saskia; Hilton, James; Kopp, Greg; Kostov, Veselin; Kurtz, Donald W.; Laskar, Jacques; Mason, Brian D.; Milone, Eugene F.; Montgomery, Michele; Richards, Mercedes; Schmutz, Werner; Schou, Jesper; Stewart, Susan G.
2016-08-01
In this brief communication we provide the rationale for and the outcome of the International Astronomical Union (IAU) resolution vote at the XXIXth General Assembly in Honolulu, Hawaii, in 2015, on recommended nominal conversion constants for selected solar and planetary properties. The problem addressed by the resolution is a lack of established conversion constants between solar and planetary values and SI units: a missing standard has caused a proliferation of solar values (e.g., solar radius, solar irradiance, solar luminosity, solar effective temperature, and solar mass parameter) in the literature, with cited solar values typically based on best estimates at the time of paper writing. As precision of observations increases, a set of consistent values becomes increasingly important. To address this, an IAU Working Group on Nominal Units for Stellar and Planetary Astronomy formed in 2011, uniting experts from the solar, stellar, planetary, exoplanetary, and fundamental astronomy, as well as from general standards fields to converge on optimal values for nominal conversion constants. The effort resulted in the IAU 2015 Resolution B3, passed at the IAU General Assembly by a large majority. The resolution recommends the use of nominal solar and planetary values, which are by definition exact and are expressed in SI units. These nominal values should be understood as conversion factors only, not as the true solar/planetary properties or current best estimates. Authors and journal editors are urged to join in using the standard values set forth by this resolution in future work and publications to help minimize further confusion.
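The resolution's nominal constants are exact by definition, so conversions built on them carry no uncertainty of their own; only quantities like G reintroduce error at the last step. A minimal sketch using the IAU 2015 B3 values (the helper names and the illustrative value of G are my own):

```python
# IAU 2015 Resolution B3 nominal conversion constants (exact by definition).
R_SUN_N = 6.957e8          # nominal solar radius, m
L_SUN_N = 3.828e26         # nominal solar luminosity, W
TEFF_SUN_N = 5772.0        # nominal solar effective temperature, K
GM_SUN_N = 1.3271244e20    # nominal solar mass parameter, m^3 s^-2

def to_si_radius(r_in_solar_units):
    """Convert a stellar radius quoted in solar units to metres; the
    conversion factor itself is exact."""
    return r_in_solar_units * R_SUN_N

def mass_si(gm_ratio, G=6.674e-11):
    """Stellar mass in kg from a measured (GM)/(GM_sun) ratio; only this
    final division inherits the comparatively large uncertainty in G."""
    return gm_ratio * GM_SUN_N / G
```

Quoting results as multiples of the nominal constants (rather than in kg or m directly) keeps published values immune to future revisions of G.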
Review of probabilistic analysis of dynamic response of systems with random parameters
NASA Technical Reports Server (NTRS)
Kozin, F.; Klosner, J. M.
1989-01-01
The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities will appear as coefficients, determining the exact distributions will be difficult at best. Thus, certain approximations will have to be made. A number of available techniques are discussed, including the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems with approximation by linear systems.
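The perturbation method in the list above can be illustrated on the simplest structural example, a single-degree-of-freedom oscillator with random stiffness: expand the natural frequency to first order about the nominal stiffness and compare against Monte Carlo (all numbers below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-DOF oscillator with a random stiffness about its nominal value.
k_nom, m = 100.0, 1.0
sigma_k = 5.0                      # modest variation about nominal

omega_nom = np.sqrt(k_nom / m)     # nominal natural frequency

# First-order perturbation: omega(k) ~ omega_nom + (d omega/d k)(k - k_nom),
# so sigma_omega ~ |d omega/d k| * sigma_k = omega_nom/(2 k_nom) * sigma_k.
domega_dk = omega_nom / (2.0 * k_nom)
sigma_omega_pert = domega_dk * sigma_k

# Monte Carlo reference for the same statistic.
k_samples = rng.normal(k_nom, sigma_k, size=200_000)
sigma_omega_mc = np.sqrt(k_samples / m).std()
```

For small parameter scatter the two estimates agree closely; the perturbation approach breaks down as the variations grow, which is exactly the regime the review is concerned with.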
Cycle 24 HST+COS Target Acquisition Monitor Summary
NASA Astrophysics Data System (ADS)
Penton, Steven V.; White, James
2018-06-01
HST/COS calibration program 14847 (P14857) was designed to verify that all three COS Target Acquisition (TA) modes were performing nominally during Cycle 24. The program was designed not only to determine if any of the COS TA flight software (FSW) patchable constants need updating but also to determine the values of any required parameter updates. All TA modes were determined to be performing nominally during the Cycle 24 calendar period of October 1, 2016 - October 1, 2017. No COS SIAF, TA subarray, or FSW parameter updates were required as a result of this program.
NASA Technical Reports Server (NTRS)
Iverson, David L. (Inventor)
2008-01-01
The present invention relates to an Inductive Monitoring System (IMS), its software implementations, hardware embodiments and applications. Training data is received, typically nominal system data acquired from sensors in normally operating systems or from detailed system simulations. The training data is formed into vectors that are used to generate a knowledge database having clusters of nominal operating regions therein. IMS monitors a system's performance or health by comparing cluster parameters in the knowledge database with incoming sensor data from the monitored system, formed into vectors. Nominal performance is concluded when a monitored-system vector is determined to lie within a nominal operating region cluster, or to lie sufficiently close to such a cluster as determined by a threshold value and a distance metric. Some embodiments of IMS include cluster indexing and retrieval methods that increase the execution speed of IMS.
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring Rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during the deformation in order to ensure the correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of main roll) on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of output parameters in the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on Genetic Algorithms has been applied. At the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of standard deviations of the dimensions obtained for the final ring.
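The RSM-plus-genetic-algorithm pipeline above can be sketched in miniature: sample an expensive simulator at control points, fit a quadratic response surface, then run a crude evolutionary search on the cheap surrogate. The "FEM" function, its optimum at (0.3, 0.7), and the GA settings are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for an FEM result: dimensional error (%) versus
# mandrel feed rate f and main-roll speed w, both scaled to [0, 1].
def fem_error(f, w):
    return (f - 0.3) ** 2 + 2.0 * (w - 0.7) ** 2 + 0.05

# Sample control points and fit a quadratic response surface (RSM step).
pts = rng.uniform(0, 1, size=(40, 2))

def features(f, w):
    return np.stack([np.ones_like(f), f, w, f * w, f**2, w**2], axis=-1)

y = fem_error(pts[:, 0], pts[:, 1])
coef, *_ = np.linalg.lstsq(features(pts[:, 0], pts[:, 1]), y, rcond=None)

def surrogate(f, w):
    return features(f, w) @ coef

# Crude genetic search on the surrogate: mutate, keep the fittest, repeat.
pop = rng.uniform(0, 1, size=(60, 2))
for _ in range(80):
    children = np.clip(pop + rng.normal(scale=0.05, size=pop.shape), 0, 1)
    both = np.vstack([pop, children])
    fitness = surrogate(both[:, 0], both[:, 1])
    pop = both[np.argsort(fitness)[:60]]
best_f, best_w = pop[0]
```

The surrogate is what makes the GA affordable: every fitness evaluation is a dot product rather than a full FEM run.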
Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John
2012-01-01
Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
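A robust (expectation-type) D-optimality criterion of the kind motivating this work can be sketched for a one-compartment model: compute the log-determinant of the Fisher information at a candidate set of sampling times, then average it over a log-normal prior on the parameters instead of fixing one nominal set. The model parameterization, prior hyperparameters, and sampling times below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def conc(t, cl, v, dose=100.0):
    """One-compartment bolus model, C(t) = (dose/V) * exp(-(CL/V) t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

def log_det_fim(times, cl, v, sigma=0.1):
    """D-optimality core: log det of the Fisher information for (CL, V),
    built from finite-difference sensitivities of the model."""
    eps = 1e-6
    s_cl = (conc(times, cl + eps, v) - conc(times, cl - eps, v)) / (2 * eps)
    s_v = (conc(times, cl, v + eps) - conc(times, cl, v - eps)) / (2 * eps)
    J = np.stack([s_cl, s_v], axis=1) / sigma
    return np.linalg.slogdet(J.T @ J)[1]

# Robust criterion: average over a log-normal prior on (CL, V) rather than
# evaluating at a single nominal parameter set.
cl_s = np.exp(rng.normal(np.log(5.0), 0.2, size=200))
v_s = np.exp(rng.normal(np.log(50.0), 0.2, size=200))

def ed_score(times):
    return np.mean([log_det_fim(times, c, w) for c, w in zip(cl_s, v_s)])

spread_design = np.array([0.5, 2.0, 8.0, 24.0])      # hours, illustrative
clustered_design = np.array([0.5, 0.6, 0.7, 0.8])    # poorly spread times
```

Execution error can be layered on by perturbing the requested times before scoring, which is the adaptive element the paper handles with MCMC.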
Padhi, Radhakant; Bhardhwaj, Jayender R
2009-06-01
An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of concentration of pathogens, plasma cells, antibodies and a numerical value that indicates the relative characteristic of a damaged organ due to the disease under the influence of external drugs. From a system theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs) in general. Hence, to make the drug delivery dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise in killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are minimal and all associated computations (including the training of neural networks) can be carried out online. However, it assumes that the required diagnosis process can be carried out at a sufficiently fast rate so that all the states are available for control computation.
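Dynamic inversion, the nominal design step above, is easiest to see on a scalar toy model: cancel the known dynamics and impose a desired stable error dynamic. The pathogen model, gains, and drug-effect coefficient below are hypothetical, not the paper's coupled four-state model:

```python
import numpy as np

# Toy scalar pathogen model: dx/dt = a*x - b*u, with drug input u.
# Dynamic inversion: choose u so the closed loop obeys dx/dt = -lam*(x - x_ref).
a, b, lam, x_ref = 0.5, 1.0, 2.0, 0.0

def u_inverse(x):
    """Inversion control law: u = (a*x + lam*(x - x_ref)) / b."""
    return (a * x + lam * (x - x_ref)) / b

# Euler simulation from an infected initial state.
dt, x = 0.01, 5.0
for _ in range(1000):
    x += dt * (a * x - b * u_inverse(x))
```

The catch, which motivates the adaptive augmentation, is that `a` and `b` must be known; with off-nominal values the cancellation is imperfect, and the online-trained network compensates for the residual.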
Tradeoff studies in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1983-01-01
A computer aided design method for multiobjective parameter-insensitive design of airplane control systems is described. Methods are presented for trading off nominal values of design objectives against sensitivities of the design objectives to parameter uncertainties, together with guidelines for designer utilization of the methods. The methods are illustrated by application to the design of a lateral stability augmentation system for two supersonic flight conditions of the Shuttle Orbiter. Objective functions are conventional handling quality measures and peak magnitudes of control deflections and rates. The uncertain parameters are assumed Gaussian, and numerical approximations of the stochastic behavior of the objectives are described. Results of applying the tradeoff methods to this example show that stochastic-insensitive designs are distinctly different from deterministic multiobjective designs. The main penalty for achieving significant decrease in sensitivity is decreased speed of response for the nominal system.
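The nominal-versus-sensitivity trade-off described above can be caricatured with one Gaussian uncertain parameter and one scalar objective: an aggressive gain gives a better nominal objective but its performance spreads widely as the uncertain parameter approaches the stability boundary. Every function and number below is an illustrative stand-in for the paper's stochastic handling-quality objectives:

```python
import numpy as np

rng = np.random.default_rng(4)

# Uncertain Gaussian stability parameter p; speed-of-response objective 1/g;
# large penalty when the loop gain g*p leaves its stable range (hypothetical).
p_nom, p_sigma = 0.8, 0.1

def objective(g, p):
    return 1.0 / g + 100.0 * np.maximum(0.0, g * p - 1.0) ** 2

def nominal_and_sensitivity(g, n=100_000):
    """Nominal objective value vs. the spread induced by the Gaussian
    parameter, approximated by Monte Carlo."""
    p = rng.normal(p_nom, p_sigma, size=n)
    return objective(g, p_nom), objective(g, p).std()

J_fast, sens_fast = nominal_and_sensitivity(1.1)   # fast but sensitive
J_slow, sens_slow = nominal_and_sensitivity(0.7)   # detuned, insensitive
```

The detuned design pays in nominal performance (slower response) to buy insensitivity, which mirrors the paper's finding for the stochastic-insensitive Orbiter designs.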
NASA Technical Reports Server (NTRS)
Cruz, Juan R.; Way, David W.; Shidner, Jeremy D.; Davis, Jody L.; Adams, Douglas S.; Kipp, Devin M.
2013-01-01
The Mars Science Laboratory used a single mortar-deployed disk-gap-band parachute of 21.35 m nominal diameter to assist in the landing of the Curiosity rover on the surface of Mars. The parachute system's performance on Mars has been reconstructed using data from the on-board inertial measurement unit, atmospheric models, and terrestrial measurements of the parachute system. In addition, the parachute performance results were compared against the end-to-end entry, descent, and landing (EDL) simulation created to design, develop, and operate the EDL system. Mortar performance was nominal. The time from mortar fire to suspension lines stretch (deployment) was 1.135 s, and the time from suspension lines stretch to first peak force (inflation) was 0.635 s. These times were slightly shorter than those used in the simulation. The reconstructed aerodynamic portion of the first peak force was 153.8 kN; the median value for this parameter from an 8,000-trial Monte Carlo simulation yielded a value of 175.4 kN, 14% higher than the reconstructed value. Aeroshell dynamics during the parachute phase of EDL were evaluated by examining the aeroshell rotation rate and rotational acceleration. The peak values of these parameters were 69.4 deg/s and 625 deg/s^2, respectively, which were well within the acceptable range. The EDL simulation was successful in predicting the aeroshell dynamics within reasonable bounds. The average total parachute force coefficient for Mach numbers below 0.6 was 0.624, which is close to the pre-flight model nominal drag coefficient of 0.615.
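The force coefficients quoted above are normalized by the area implied by the nominal diameter, C = F / (q * S0) with S0 = pi * D0^2 / 4. The dynamic pressure in the example call is a hypothetical number chosen only to exercise the function; it is not a reconstructed MSL value:

```python
import math

D0 = 21.35                      # parachute nominal diameter, m (from the paper)
S0 = math.pi * D0**2 / 4.0      # nominal area used to normalize forces, m^2

def force_coefficient(force_n, q_pa):
    """Total parachute force coefficient C = F / (q * S0)."""
    return force_n / (q_pa * S0)

# Illustrative only: map the reconstructed 153.8 kN peak force onto a
# coefficient at an assumed dynamic pressure of 450 Pa.
c_peak = force_coefficient(153.8e3, 450.0)
```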
INDUCTIVE SYSTEM HEALTH MONITORING WITH STATISTICAL METRICS
NASA Technical Reports Server (NTRS)
Iverson, David L.
2005-01-01
Model-based reasoning is a powerful method for performing system monitoring and diagnosis. Building models for model-based reasoning is often a difficult and time consuming process. The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS processes nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. In particular, a clustering algorithm forms groups of nominal values for sets of related parameters. This establishes constraints on those parameter values that should hold during nominal operation. During monitoring, IMS provides a statistically weighted measure of the deviation of current system behavior from the established normal baseline. If the deviation increases beyond the expected level, an anomaly is suspected, prompting further investigation by an operator or automated system. IMS has shown potential to be an effective, low cost technique to produce system monitoring capability for a variety of applications. We describe the training and system health monitoring techniques of IMS. We also present the application of IMS to a data set from the Space Shuttle Columbia STS-107 flight. IMS was able to detect an anomaly in the launch telemetry shortly after a foam impact damaged Columbia's thermal protection system.
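The clustering-based monitoring idea above can be sketched minimally: learn cluster centers and radii from nominal vectors, then report how far a monitored vector lies beyond its nearest nominal region. Real IMS builds parameter-range boxes with incremental clustering and a statistically weighted deviation; the two-region data, the `slack` factor, and the distance metric here are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Nominal training vectors from two distinct operating regions (hypothetical).
region_a = rng.normal([0.0, 0.0], 0.1, size=(300, 2))
region_b = rng.normal([5.0, 5.0], 0.1, size=(300, 2))

# Minimal stand-in for the knowledge base: a center plus a radius covering
# the nominal data for each operating region.
centers = np.array([region_a.mean(axis=0), region_b.mean(axis=0)])
radii = np.array([
    np.linalg.norm(region_a - centers[0], axis=1).max(),
    np.linalg.norm(region_b - centers[1], axis=1).max(),
])

def deviation(x, slack=1.5):
    """Distance of a monitored vector beyond its nearest nominal cluster;
    zero means the vector lies inside a nominal operating region."""
    d = np.linalg.norm(centers - x, axis=1)
    i = d.argmin()
    return max(0.0, d[i] - slack * radii[i])

nominal_reading = np.array([0.05, -0.02])
anomalous_reading = np.array([2.5, 2.5])
```

A rising `deviation` trend plays the role of the statistically weighted measure that prompts further investigation.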
An Adaptive Control Technology for Safety of a GTM-like Aircraft
NASA Technical Reports Server (NTRS)
Matsutani, Megumi; Crespo, Luis G.; Annaswamy, Anuradha; Jang, Jinho
2010-01-01
An adaptive control architecture for safe performance of a transport aircraft subject to various adverse conditions is proposed and verified in this report. This architecture combines a nominal controller based on a Linear Quadratic Regulator with integral action, and an adaptive controller that accommodates actuator saturation and bounded disturbances. The effectiveness of the baseline controller and its adaptive augmentation are evaluated using a stand-alone control verification methodology. Case studies that pair individual parameter uncertainties with critical flight maneuvers are studied. The resilience of the controllers is determined by evaluating the degradation in closed-loop performance resulting from increasingly larger deviations in the uncertain parameters from their nominal values. Symmetric and asymmetric actuator failures, flight upsets, and center of gravity displacements are some of the uncertainties considered.
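The resilience-evaluation idea, growing a parameter's deviation from nominal until a closed-loop requirement fails, can be sketched on a scalar system. The plant, gain, and settling requirement below are hypothetical, far simpler than the GTM-like aircraft model:

```python
import numpy as np

# Scalar stand-in for the verification sweep: grow the deviation of an
# uncertain pole 'a' above its nominal value until performance degrades.
a_nom = -1.0     # nominal (stable) plant pole
k = 0.5          # fixed baseline feedback gain, u = -k*x

def settles(a, t_req=8.0):
    """Closed loop dx/dt = (a - k) x; require |x| to decay 100x by t_req."""
    pole = a - k
    return pole < 0 and np.exp(pole * t_req) < 0.01

def max_tolerated_deviation(step=0.01):
    """Largest increase of 'a' above nominal still meeting the requirement."""
    dev = 0.0
    while settles(a_nom + dev + step):
        dev += step
    return dev
```

The returned deviation is the scalar analogue of the parametric safety margin the report computes per uncertainty/maneuver pair.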
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.
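A loose analogue of the calibration idea, not Lerch's actual subset-solution algorithm, is to iterate a data set's weight until its scatter about the combined solution agrees with the error implied by that weight. Here system B's nominal accuracy is optimistic and the iteration discovers a more honest weight (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two tracking "systems" observing the same parameter; system B's quoted
# accuracy (its nominal weight) is optimistic by an unknown factor.
truth = 10.0
a = rng.normal(truth, 1.0, size=200)    # system A, true sigma 1.0
b = rng.normal(truth, 3.0, size=200)    # system B, true sigma 3.0
sigma_a, sigma_b = 1.0, 1.0             # nominal sigmas (B's is wrong)

# Iterate: demand that B's scatter about the weighted combined solution
# agree with the error estimate implied by its weight.
for _ in range(20):
    w_a, w_b = 1 / sigma_a**2, 1 / sigma_b**2
    est = (w_a * a.sum() + w_b * b.sum()) / (w_a * len(a) + w_b * len(b))
    sigma_b = np.sqrt(np.mean((b - est) ** 2))   # rescale B's sigma

calibrated_weight_b = 1 / sigma_b**2
```

As in the abstract, the calibrated weight comes out much smaller than the one implied by the nominal observation accuracy.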
Mapping an operator's perception of a parameter space
NASA Technical Reports Server (NTRS)
Pew, R. W.; Jagacinski, R. J.
1972-01-01
Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.
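The matching task can be reproduced numerically: drive two crossover models, Y/E = k * exp(-tau*s) / s in a unity feedback loop, with a common random input and compare their outputs for matched versus mismatched (k, tau). The integration scheme, input statistics, and parameter values below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def crossover_response(k, tau, u, dt=0.01):
    """Unity-feedback crossover model, Y/E = k * exp(-tau*s) / s,
    simulated by Euler integration with a delayed error signal."""
    delay = int(round(tau / dt))
    y = np.zeros_like(u)
    e = np.zeros_like(u)
    for i in range(1, len(u)):
        e[i - 1] = u[i - 1] - y[i - 1]
        e_delayed = e[i - 1 - delay] if i - 1 - delay >= 0 else 0.0
        y[i] = y[i - 1] + dt * k * e_delayed
    return y

# Common random input driving both the fixed and the adjustable model,
# as in the operators' convergence task.
t = np.arange(0, 20, 0.01)
u = np.cumsum(rng.normal(scale=0.02, size=t.size))    # random-walk input

y_ref = crossover_response(k=2.0, tau=0.2, u=u)       # fixed model
y_off = crossover_response(k=1.0, tau=0.4, u=u)       # mismatched parameters

mismatch = np.mean((y_ref - y_off) ** 2)
matched = np.mean((y_ref - crossover_response(2.0, 0.2, u)) ** 2)
```

Mapping the sign of the adjustment that reduces `mismatch` over a grid of (tau, k) offsets reproduces the kind of parameter-space plot the study obtained from human operators.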
Puncher, M; Zhang, W; Harrison, J D; Wakeford, R
2017-06-26
Assessments of risk to a specific population group resulting from internal exposure to a particular radionuclide can be used to assess the reliability of the appropriate International Commission on Radiological Protection (ICRP) dose coefficients used as a radiation protection device for the specified exposure pathway. An estimate of the uncertainty on the associated risk is important for informing judgments on reliability; a derived uncertainty factor, UF, is an estimate of the 95% probable geometric difference between the best risk estimate and the nominal risk and is a useful tool for making this assessment. This paper describes the application of parameter uncertainty analysis to quantify uncertainties resulting from internal exposures to radioiodine by members of the public, specifically 1-, 10- and 20-year-old females from the population of England and Wales. Best estimates of thyroid cancer incidence risk (lifetime attributable risk) are calculated for ingestion or inhalation of 129I and 131I, accounting for uncertainties in biokinetic model and cancer risk model parameter values. These estimates are compared with the equivalent ICRP derived nominal age-, sex- and population-averaged estimates of excess thyroid cancer incidence to obtain UFs. Derived UF values for ingestion or inhalation of 131I for 1-year, 10-year and 20-year olds are around 28, 12 and 6, respectively, when compared with ICRP Publication 103 nominal values, and 9, 7 and 14, respectively, when compared with ICRP Publication 60 values. Broadly similar results were obtained for 129I. The uncertainties on risk estimates are largely determined by uncertainties on risk model parameters rather than uncertainties on biokinetic model parameters. An examination of the sensitivity of the results to the risk models and populations used in the calculations shows variations in the central estimates of risk of a factor of around 2-3.
It is assumed that the direct proportionality of excess thyroid cancer risk and dose observed at low to moderate acute doses and incorporated in the risk models also applies to very small doses received at very low dose rates; the uncertainty in this assumption is considerable, but largely unquantifiable. The UF values illustrate the need for an informed approach to the use of ICRP dose and risk coefficients.
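One plausible reading of the derived UF, sketched with hypothetical lognormal risk samples: take the 95th percentile of the geometric (absolute log-ratio) deviation of the Monte Carlo risk distribution from the nominal value. Both the sample distribution and the nominal risk below are illustrative, and this paraphrase of the UF definition is an assumption, not the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical Monte Carlo sample of lifetime attributable risk, combining
# biokinetic and risk-model parameter uncertainty (lognormal stand-in).
risk_samples = rng.lognormal(mean=np.log(2e-4), sigma=0.9, size=100_000)
nominal_risk = 1e-5        # illustrative ICRP-style nominal value

def uncertainty_factor(samples, nominal):
    """95th percentile of the geometric deviation of risk from nominal."""
    log_ratio = np.abs(np.log(samples / nominal))
    return float(np.exp(np.percentile(log_ratio, 95)))

uf = uncertainty_factor(risk_samples, nominal_risk)
```

A UF near 1 would indicate the nominal coefficient is a reliable surrogate for the population-specific risk; large values, as found for young children here, flag pathways where it is not.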
Ascent trajectory dispersion analysis for WTR heads-up space shuttle trajectory
NASA Technical Reports Server (NTRS)
1986-01-01
The results of a Space Transportation System ascent trajectory dispersion analysis are discussed. The purpose is to provide critical trajectory parameter values for assessing the Space Shuttle in a heads-up configuration launched from the Western Test Range (WTR). This analysis was conducted using a trajectory profile based on a launch from the WTR in December. The analysis consisted of the following steps: (1) nominal trajectories were simulated under the conditions as specified by baseline reference mission guidelines; (2) dispersion trajectories were simulated using predetermined parametric variations; (3) requirements for a system-related composite trajectory were determined by a root-sum-square (RSS) analysis of the positive deviations between values of the aerodynamic heating indicator (AHI) generated by the dispersion and nominal trajectories; (4) using the RSS assessment as a guideline, the system-related composite trajectory was simulated by combinations of dispersion parameters which represented major contributors; (5) an assessment of environmental perturbations via a RSS analysis was made by the combination of plus or minus 2 sigma atmospheric density variation and 95% directional design wind dispersions; (6) maximum aerodynamic heating trajectories were simulated by variation of dispersion parameters which would emulate the summation of the system-related RSS and environmental RSS values of AHI. The maximum aerodynamic heating trajectories were simulated consistent with the directional winds used in the environmental analysis.
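Step (3), the root-sum-square of positive AHI deviations, reduces to a few lines; the dispersed values below are illustrative placeholders, not trajectory data:

```python
import numpy as np

# Positive deviations of the aerodynamic heating indicator (AHI) for each
# dispersed trajectory relative to nominal (illustrative values).
ahi_nominal = 100.0
ahi_dispersed = np.array([104.0, 101.5, 107.0, 99.0, 103.0])

# Root-sum-square of the positive deviations gives the composite requirement.
pos_dev = np.maximum(0.0, ahi_dispersed - ahi_nominal)
ahi_rss = np.sqrt(np.sum(pos_dev**2))
ahi_composite = ahi_nominal + ahi_rss
```

RSS combination assumes the individual dispersions are independent, which is why steps (4) and (6) go back and verify the composite with explicitly combined-parameter simulations.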
Orbital Signature Analyzer (OSA): A spacecraft health/safety monitoring and analysis tool
NASA Technical Reports Server (NTRS)
Weaver, Steven; Degeorges, Charles; Bush, Joy; Shendock, Robert; Mandl, Daniel
1993-01-01
Fixed or static limit sensing is employed in control centers to ensure that spacecraft parameters remain within a nominal range. However, many critical parameters, such as power system telemetry, are time-varying and, as such, their 'nominal' range is necessarily time-varying as well. Predicted data, manual limits checking, and widened limit-checking ranges are often employed in an attempt to monitor these parameters without generating excessive limits violations. Generating predicted data and manual limits checking are both resource intensive, while broadening limit ranges for time-varying parameters is clearly inadequate to detect all but catastrophic problems. OSA provides a low-cost solution by using analytically selected data as a reference upon which to base its limits. These limits are always defined relative to the time-varying reference data, rather than as fixed upper and lower limits. In effect, OSA provides individual limits tailored to each value throughout all the data. A side benefit of using relative limits is that they automatically adjust to new reference data. In addition, OSA provides a wealth of analytical by-products in its execution.
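The relative-limit idea reduces to checking telemetry against a band that tracks the time-varying reference rather than fixed bounds. The sinusoidal reference, tolerance, and injected excursion below are hypothetical:

```python
import numpy as np

# Time-varying reference telemetry (e.g., orbital power-system data) and
# limits defined relative to it rather than as fixed upper/lower bounds.
t = np.linspace(0, 2 * np.pi, 500)
reference = 50.0 + 10.0 * np.sin(t)        # analytically selected reference

def violations(telemetry, tolerance=2.0):
    """Indices where telemetry leaves the band reference +/- tolerance."""
    return np.flatnonzero(np.abs(telemetry - reference) > tolerance)

nominal_pass = reference + 0.5             # small offset: stays inside the band
fault = reference.copy()
fault[300:310] += 5.0                      # brief excursion past the band
```

Swapping in new reference data re-tailors every limit automatically, which is the side benefit the abstract notes.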
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in Bond Graph (BG) modeling framework, by exploiting its structural and causal properties. The system in feedback control loop is considered uncertain globally. Parametric uncertainty is modeled in interval form. The system parameter is undergoing degradation (prognostic candidate) and its degradation model is assumed to be known a priori. The detection of degradation commencement is done in a passive manner which involves interval valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval valued analytical redundancy relations (I-ARRs). The latter forms an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the estimation of state of health (state of prognostic candidate) and associated hidden time-varying degradation progression parameters is achieved in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties which arise out of noisy measurements, parametric degradation process, environmental conditions etc. are effectively managed by PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
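The joint state-parameter estimation step can be sketched as a bootstrap particle filter over an augmented state (health value, degradation rate), with a random-walk on the rate standing in for the variance-adaptation idea. The linear degradation model, noise levels, and priors are all illustrative assumptions, not the paper's BG-derived observation equation:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hidden degradation of a parameter with an uncertain rate theta:
# x[k+1] = x[k] - theta * dt, observed through a noisy residual.
dt, n_steps, true_theta = 1.0, 60, 0.02
truth = 1.0 - true_theta * dt * np.arange(n_steps)
obs = truth + rng.normal(scale=0.05, size=n_steps)

# Bootstrap particle filter over the joint state (x, theta).
n = 2000
x = rng.normal(1.0, 0.05, size=n)
theta = rng.uniform(0.0, 0.05, size=n)

for z in obs:
    # Propagate the state; let theta random-walk (crude variance adaptation).
    x = x - theta * dt + rng.normal(scale=0.01, size=n)
    theta = np.abs(theta + rng.normal(scale=0.001, size=n))
    # Weight by the Gaussian observation likelihood and resample.
    w = np.exp(-0.5 * ((z - x) / 0.05) ** 2)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)
    x, theta = x[idx], theta[idx]

theta_est = theta.mean()
```

Once theta is pinned down, remaining useful life follows by extrapolating each particle to its failure threshold, giving the confidence-bounded RUL prediction the paper targets.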
Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin
2017-10-01
To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): ∆t = T_Log − T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), the average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in the x-direction and 19.3 m/s in the y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of the 602 beam deliveries (∆t = -0.49 ± 1.44 s), significantly more accurately than BDTs calculated using nominal timing parameters (∆t = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput.
The model may provide guidance on how to effectively reduce BDT and may be used to identify deteriorating machine performance. © 2017 American Association of Physicists in Medicine.
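The BDT bookkeeping the model performs can be sketched as follows. The timing constants are the experimentally determined averages quoted above; the two-layer plan, the spot weights, and the omission of the scan-speed and spill-charge-limit terms are simplifying assumptions.

```python
# Simplified BDT sketch: BDT = layer-switch + spot-switch + spot-delivery
# times. Omits the scanning-speed and maximum-spill-charge terms of the
# full model; the plan below is hypothetical.

LAYER_SWITCH_S = 1.91       # average layer switch time (s)
SPOT_SWITCH_S = 1.93e-3     # magnet preparation and verification time (s)
SPILL_RATE_MU_PER_S = 8.7   # proton spill rate (MU/s)

def beam_delivery_time(layers):
    """layers: list of energy layers, each a list of spot weights in MU."""
    t = 0.0
    for i, spots in enumerate(layers):
        if i > 0:
            t += LAYER_SWITCH_S                 # switch to the next energy layer
        t += (len(spots) - 1) * SPOT_SWITCH_S   # moves between spots within a layer
        t += sum(spots) / SPILL_RATE_MU_PER_S   # time to deliver each spot's MU
    return t

plan = [[0.5] * 100, [0.4] * 80]  # two layers, hypothetical spot weights
print(round(beam_delivery_time(plan), 2))
```

Even this reduced sum shows why the layer switch dominates: one extra layer costs about as much time as delivering ~17 MU.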
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. In rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Real-Time Minimization of Tracking Error for Aircraft Systems
NASA Technical Reports Server (NTRS)
Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John
2013-01-01
This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. When errors are not present, the original control design is tuned for optimal performance. Adaptive control works towards recovering nominal performance whenever the design has modeling uncertainties/errors or when the vehicle undergoes a substantial flight-configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) modeling of aircraft operation may be changed.
Design of an optical PPM communication link in the presence of component tolerances
NASA Technical Reports Server (NTRS)
Chen, C.-C.
1988-01-01
A systematic approach is described for estimating the performance of an optical direct detection pulse position modulation (PPM) communication link in the presence of parameter tolerances. This approach was incorporated into the JPL optical link analysis program to provide a useful tool for optical link design. Given a set of system parameters and their tolerance specifications, the program will calculate the nominal performance margin and its standard deviation. Through use of these values, the optical link can be designed to perform adequately even under adverse operating conditions.
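The tolerance-analysis idea can be sketched in a few lines: with link terms expressed in dB, the nominal margin is their sum, and, assuming independent parameter tolerances, the margin standard deviation is the root-sum-square of the individual standard deviations. The terms and numbers below are hypothetical, not JPL link-budget values.

```python
import math

# Hedged sketch of link-margin tolerance analysis. Each entry is
# (nominal dB contribution, 1-sigma tolerance in dB); all values are
# illustrative assumptions.
terms = {
    "transmit_power": (20.0, 0.5),
    "pointing_loss":  (-1.5, 0.3),
    "space_loss_rel": (-10.0, 0.0),   # treated as exactly known here
    "detector_gain":  (5.0, 0.4),
}

nominal_margin = sum(v for v, _ in terms.values())             # sum of dB terms
margin_sigma = math.sqrt(sum(s ** 2 for _, s in terms.values()))  # RSS of tolerances
print(nominal_margin, round(margin_sigma, 3))
```

Designing to nominal_margin minus a few margin_sigma is the "perform adequately even under adverse operating conditions" criterion the abstract describes.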
Terminal altitude maximization for Mars entry considering uncertainties
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Zhao, Zeduan; Yu, Zhengshi; Dai, Juan
2018-04-01
Uncertainties present in the Mars atmospheric entry process may cause state deviations from the nominal designed values, which will lead to unexpected performance degradation if the trajectory is designed merely on the basis of a deterministic dynamic model. In this paper, a linear-covariance-based entry trajectory optimization method is proposed that accounts for the uncertainties present in the initial states and parameters. By extending the elements of the state covariance matrix as augmented states, the statistical behavior of the trajectory is captured to reformulate the performance metrics and path constraints. The optimization problem is solved with the GPOPS-II toolbox in the MATLAB environment. Monte Carlo simulations are also conducted to demonstrate the capability of the proposed method. The primary trade between the nominal deployment altitude and its dispersion can be observed by modulating the weights on the dispersion penalty, and a compromise result, maximizing the 3σ lower bound of the terminal altitude, is achieved. The resulting path constraints also show better satisfaction in a disturbed environment compared with the nominal situation.
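The covariance-propagation idea behind the augmented states can be illustrated on a scalar toy model (not the entry dynamics): propagate the variance alongside the nominal state and evaluate the 3σ lower bound that the compromise solution maximizes. All numbers are illustrative.

```python
import math

# Toy linear-covariance propagation: x_{k+1} = a*x_k + w, w ~ N(0, q).
# The nominal state and its variance are propagated together, and the
# robust metric is the 3-sigma lower bound of the terminal value.

def terminal_3sigma_lower(x0, var0, a, q, steps):
    x, var = x0, var0
    for _ in range(steps):
        x = a * x                  # nominal propagation
        var = a * a * var + q      # covariance propagation
    return x - 3.0 * math.sqrt(var)

print(round(terminal_3sigma_lower(x0=10.0, var0=0.1, a=0.99, q=0.01, steps=50), 3))
```

Maximizing this lower bound rather than the nominal terminal value is exactly the trade the weights on the dispersion penalty control.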
ERIC Educational Resources Information Center
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun
2002-01-01
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Impact of selected troposphere models on Precise Point Positioning convergence
NASA Astrophysics Data System (ADS)
Kalita, Jakub; Rzepecka, Zofia
2016-04-01
The Precise Point Positioning (PPP) absolute method is currently intensively investigated in order to reach fast convergence time. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final value to the estimation process. Here we present a comparison of several PPP result sets, each based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to make it possible to compare the impact of the applied nominal values. The worst case initiates the zenith wet delay with a zero value (ZERO-WET). The impact of all possible models for tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region, from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each of the stations. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%.
In most of the cases all solutions converge to the same values during first hour of processing. Finally, the results have been compared against results obtained during calm tropospheric conditions.
Lacey, Ronald E; Faulkner, William Brock
2015-07-01
This work applied a propagation of uncertainty method to typical total suspended particulate (TSP) sampling apparatus in order to estimate the overall measurement uncertainty. The objectives of this study were to estimate the uncertainty for three TSP samplers, develop an uncertainty budget, and determine the sensitivity of the total uncertainty to environmental parameters. The samplers evaluated were the TAMU High Volume TSP Sampler at a nominal volumetric flow rate of 1.42 m3 min(-1) (50 CFM), the TAMU Low Volume TSP Sampler at a nominal volumetric flow rate of 17 L min(-1) (0.6 CFM) and the EPA TSP Sampler at the nominal volumetric flow rates of 1.1 and 1.7 m3 min(-1) (39 and 60 CFM). Under nominal operating conditions the overall measurement uncertainty was found to vary from 6.1x10(-6) g m(-3) to 18.0x10(-6) g m(-3), which represented an uncertainty of 1.7% to 5.2% of the measurement. Analysis of the uncertainty budget determined that three of the instrument parameters contributed significantly to the overall uncertainty: the uncertainty in the pressure drop measurement across the orifice meter during both calibration and testing and the uncertainty of the airflow standard used during calibration of the orifice meter. Five environmental parameters occurring during field measurements were considered for their effect on overall uncertainty: ambient TSP concentration, volumetric airflow rate, ambient temperature, ambient pressure, and ambient relative humidity. Of these, only ambient TSP concentration and volumetric airflow rate were found to have a strong effect on the overall uncertainty. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically. This work addresses measurement uncertainty of TSP samplers used in ambient conditions. 
Estimation of uncertainty in gravimetric measurements is of particular interest, since as ambient particulate matter (PM) concentrations approach regulatory limits, the uncertainty of the measurement is essential in determining the sample size and the probability of type II errors in hypothesis testing. This is an important factor in determining if ambient PM concentrations exceed regulatory limits.
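The propagation-of-uncertainty method can be sketched for a gravimetric concentration C = m/(Q·t), combining the standard uncertainties of mass, flow rate, and sampling time through the partial derivatives. The input numbers are illustrative, not the paper's uncertainty budget.

```python
import math

# GUM-style propagation of uncertainty for C = m / (Q * t).
# u_C^2 = (dC/dm * u_m)^2 + (dC/dQ * u_Q)^2 + (dC/dt * u_t)^2
# All uncertainty values below are hypothetical.

def tsp_concentration_uncertainty(m, u_m, Q, u_Q, t, u_t):
    C = m / (Q * t)
    dC_dm = 1.0 / (Q * t)          # partial derivative w.r.t. collected mass
    dC_dQ = -m / (Q ** 2 * t)      # w.r.t. volumetric flow rate
    dC_dt = -m / (Q * t ** 2)      # w.r.t. sampling time
    u_C = math.sqrt((dC_dm * u_m) ** 2 + (dC_dQ * u_Q) ** 2 + (dC_dt * u_t) ** 2)
    return C, u_C

# 5 mg collected over 24 h at 1.42 m^3/min (the TAMU high-volume nominal flow)
C, u = tsp_concentration_uncertainty(m=5e-3, u_m=1e-4,
                                     Q=1.42, u_Q=0.02,
                                     t=1440.0, u_t=1.0)
print(C, u)  # g/m^3 and its combined standard uncertainty
```

With these assumed inputs the relative uncertainty is about 2.4%, in the same range as the 1.7-5.2% the paper reports for its samplers.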
Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.
Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa
2010-01-21
Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
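The CRN idea can be illustrated on a toy birth-death network (0 → X at rate k, X → 0 at rate g·X) with Gillespie's direct method: running the nominal and perturbed parameters on the same random stream correlates the two paths and reduces the variance of the finite-difference estimate. This is a sketch of CRN only, not the paper's CRP coupling.

```python
import random

def ssa_final_count(k, g, T, seed):
    """Gillespie direct method for 0 -> X (rate k), X -> 0 (rate g*x)."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        birth, death = k, g * x
        total = birth + death
        t += rng.expovariate(total)          # time to next reaction
        if t > T:
            return x
        if rng.random() * total < birth:     # pick which reaction fired
            x += 1
        else:
            x -= 1

def crn_sensitivity(k, dk, g, T, runs):
    # Same seed for nominal and perturbed runs: common random numbers
    # correlate the paths and shrink the finite-difference variance.
    diff = sum(ssa_final_count(k + dk, g, T, seed) - ssa_final_count(k, g, T, seed)
               for seed in range(runs))
    return diff / (runs * dk)

sens = crn_sensitivity(k=10.0, dk=1.0, g=1.0, T=5.0, runs=400)
print(sens)  # theory: d E[X(T)]/dk = (1 - exp(-g*T))/g, about 0.99 here
```

Repeating the estimate with independent seeds for nominal and perturbed runs (the naive implementation) gives visibly noisier values at the same cost, which is the effect the paper quantifies.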
Detection and Modeling of High-Dimensional Thresholds for Fault Detection and Diagnosis
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Many Fault Detection and Diagnosis (FDD) systems use discrete models for detection and reasoning. To obtain categorical values like "oil pressure too high", analog sensor values need to be discretized using a suitable threshold. Time series of analog and discrete sensor readings are processed and discretized as they come in. This task is usually performed by the "wrapper code" of the FDD system, together with signal preprocessing and filtering. In practice, selecting the right threshold is very difficult, because it heavily influences the quality of diagnosis. If a threshold causes the alarm to trigger even in nominal situations, false alarms will be the consequence. On the other hand, if the threshold does not trigger in case of an off-nominal condition, important alarms might be missed, potentially causing hazardous situations. In this paper, we describe in detail the underlying statistical modeling techniques and algorithm, as well as the Bayesian method for selecting the most likely shape and its parameters. Our approach is illustrated by several examples from the aerospace domain.
Application of Statistically Derived CPAS Parachute Parameters
NASA Technical Reports Server (NTRS)
Romero, Leah M.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of the steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C_k), and ramp-down time (t_k) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes.
This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
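The fitting step can be sketched as follows: fit candidate normal and log-normal distributions to a parameter sample and compare log-likelihoods (the paper relies on engineering judgment; a likelihood score stands in for it here). The sample is synthetic; no CPAS flight data are reproduced.

```python
import math, random, statistics

random.seed(1)
# Synthetic stand-in for a reconstructed inflation parameter sample
sample = [math.exp(random.gauss(1.0, 0.7)) for _ in range(200)]

def normal_loglik(data):
    mu, sd = statistics.fmean(data), statistics.stdev(data)
    return sum(-0.5 * math.log(2 * math.pi * sd * sd) - (x - mu) ** 2 / (2 * sd * sd)
               for x in data)

def lognormal_loglik(data):
    logs = [math.log(x) for x in data]
    mu, sd = statistics.fmean(logs), statistics.stdev(logs)
    return sum(-math.log(x) - 0.5 * math.log(2 * math.pi * sd * sd)
               - (math.log(x) - mu) ** 2 / (2 * sd * sd) for x in data)

fits = {"normal": normal_loglik(sample), "lognormal": lognormal_loglik(sample)}
best = max(fits, key=fits.get)
print(best)
```

The fitted mu and sigma of the winning family are what a model memo would publish, and what downstream Monte Carlo analyses would sample from instead of a uniform interval.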
Estimation of Geodetic and Geodynamical Parameters with VieVS
NASA Technical Reports Server (NTRS)
Spicakova, Hana; Bohm, Johannes; Bohm, Sigrid; Nilsson, Tobias; Pany, Andrea; Plank, Lucia; Teke, Kamil; Schuh, Harald
2010-01-01
Since 2008 the VLBI group at the Institute of Geodesy and Geophysics at TU Vienna has focused on the development of a new VLBI data analysis software called VieVS (Vienna VLBI Software). One part of the program, currently under development, is a unit for parameter estimation in so-called global solutions, where the connection of the single sessions is done by stacking at the normal equation level. We can determine time-independent geodynamical parameters such as the Love and Shida numbers of the solid Earth tides. Apart from the estimation of the constant nominal values of the Love and Shida numbers for the second degree of the tidal potential, it is possible to determine frequency-dependent values in the diurnal band, together with the resonance frequency of the Free Core Nutation. In this paper we show first results obtained from the 24-hour IVS R1 and R4 sessions.
Preliminary Investigation of Ice Shape Sensitivity to Parameter Variations
NASA Technical Reports Server (NTRS)
Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.
2005-01-01
A parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this preliminary work was to investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD icing conditions. It was desired to identify the minimum change (threshold) in a parameter value, which yielded an observable change in the ice shape. Liquid Water Content (LWC), drop size distribution (MVD), and tunnel static temperature were varied about a nominal value, and the effects of these parameter changes on the resulting ice shapes were documented. The resulting differences in ice shapes were compared on the basis of qualitative and quantitative criteria (e.g., mass, ice horn thickness, ice horn angle, icing limits, and iced area). This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results, followed by a discussion of recommendations for future research.
Mach 10 Stage Separation Analysis for the X-43A
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Bose, David M.; Thornblom, Mark N.; Lien, J. P.; Martin, John G.
2007-01-01
This paper describes the pre-flight stage separation analysis that was conducted in support of the final flight of the X-43A. In that flight, which occurred less than eight months after the successful Mach 7 flight, the X-43A Research Vehicle attained a peak speed of Mach 9.6. Details are provided on how the lessons learned from the Mach 7 flight affected separation modeling and how adjustments were made to account for the increased flight Mach number. Also, the procedure for defining the feedback loop closure and feed-forward parameters employed in the separation control logic is described, and their effect on separation performance is explained. In addition, the ranges and nominal values of these parameters, which were included in the Mission Data Load, are presented. Once updates were made, the nominal pre-flight trajectory and Monte Carlo statistical results were determined, and stress tests were performed to ensure system robustness. During flight the vehicle performed within the uncertainty bounds predicted in the pre-flight analysis and ultimately set the world record for airbreathing powered flight.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into a non-derivative optimization search algorithm to predict parameter combinations that result in reduced objective function values.
36 CFR 60.10 - Concurrent State and Federal nominations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... cultural value. Federal agencies may nominate properties where a portion of the property is not under Federal ownership or control. (b) When a portion of the area included in a Federal nomination is not... cultural resource, the completed nomination form shall be sent to the State Historic Preservation Officer...
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
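The two sampling schemes can be simulated to show why the paper's exact tools are needed: a naive Wald-type z-test of H0: p1 = p2 (our illustration, not the tests the paper derives) is badly miscalibrated when one estimate comes from negative binomial sampling.

```python
import math, random

random.seed(2)

def geometric_trials(p):
    """Bernoulli trials up to and including the first success."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

def naive_wald_rejects(p1, p2, n2):
    # Delta-method variance for p1 = 1/N plus the usual binomial variance
    # for p2; deliberately naive, for illustration only.
    se = math.sqrt(p1 * p1 * (1.0 - p1) + p2 * (1.0 - p2) / n2)
    if se == 0.0:
        return p1 != p2
    return abs(p1 - p2) / se > 1.96

def empirical_type_one(p, n2, reps):
    rejections = 0
    for _ in range(reps):
        p1 = 1.0 / geometric_trials(p)                          # before first success
        p2 = sum(random.random() < p for _ in range(n2)) / n2   # after first success
        rejections += naive_wald_rejects(p1, p2, n2)
    return rejections / reps

rate = empirical_type_one(p=0.2, n2=50, reps=2000)
print(rate)  # well above the nominal 0.05 for this naive test
```

The inflation comes largely from the heavy bias of 1/N as an estimator of p; the abstract's point is that properly derived small-sample tests keep the nominal Type-I rate where this naive one does not.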
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuess, Peter, E-mail: Peter.kuess@meduniwien.ac.at
Purpose: For commercially available linear accelerators (Linacs), the electron energies of flattening filter free (FFF) and flattened (FF) beams are either identical or the electron energy of the FFF beam is increased to match the percentage depth dose curve (PDD) of the FF beam (in reference geometry). This study focuses on the primary dose components of FFF beams for both kinds of settings, studied on the same Linac. Methods: The measurements were conducted on a VersaHD Linac (Elekta, Crawley, UK) for both FF and FFF beams with nominal energies of 6 and 10 MV. In the clinical setting of the VersaHD, the energy of FFF_M (matched) beams is set to match the PDDs of the FF beams. In contrast, the incident electron beam of the FFF_U beam was set to the same energy as for the FF beam. Half value layers (HVLs) and a dual-parameter beam quality specifier (DPBQS) were determined. Results: For the 6 MV FFF_M beam, HVL and DPBQS values were very similar compared to those of the 6 MV FF beam, while for the 10 MV FFF_M and FF beams, only %dd(10)_x and HVL values were comparable (differences below 1.5%). This shows that matching the PDD at one depth does not guarantee other beam-quality-dependent parameters to be matched. For FFF_U beams, all investigated beam quality specifiers were significantly different compared to those for FF beams of the same nominal accelerator potential. The DPBQS of the 6 MV FF and FFF_M beams was equal within the measurement uncertainty and was comparable to published data of a machine with similar TPR_20,10 and %dd(10)_x. In contrast to that, the two parameters of the DPBQS for the 10 MV FFF_M beam were substantially higher compared to those for the 10 MV FF beam. Conclusions: PDD-matched FF and FFF beams of both nominal accelerator potentials were observed to have similar HVL values, indicating similarity of their primary dose components.
Using the DPBQS revealed that the mean attenuation coefficient was the same within the uncertainty of 0.8% for the 6 MV FF and 6 MV FFF_M beams, while for the 10 MV beams they differed by 6.4%. This shows that the DPBQS can provide a differentiation of photon beam characteristics that would remain hidden by the use of a single beam quality specifier, such as %dd(10)_x or HVL.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-17
... health-based) air quality standards for oxides of nitrogen (NO X ). DATES: Nominations should be..., medicine, public health, biostatistics and risk assessment. Process and Deadline for Submitting Nominations... later than November 7, 2012. EPA values and welcomes diversity. In an effort to obtain nominations of...
Statistical Significance and Baseline Monitoring.
1984-07-01
… Observed versus nominal α levels for multivariate tests of data sets (50 runs of 4 groups each) … cumulative proportion of the observations found for each nominal level. The results of the comparisons of the observed versus nominal α levels for the … α values are always higher than nominal levels. Virtually all nominal α levels are below 0.20. In other words, the discriminant analysis models …
Tofts, Paul S; Cutajar, Marica; Mendichovszky, Iosif A; Peters, A Michael; Gordon, Isky
2012-06-01
To model the uptake phase of T1-weighted DCE-MRI data in normal kidneys and to demonstrate that the fitted physiological parameters correlate with published normal values. The model incorporates delay and broadening of the arterial vascular peak as it appears in the capillary bed, two distinct compartments for renal intravascular and extravascular Gd tracer, and uses a small-vessel haematocrit value of 24%. Four physiological parameters can be estimated: regional filtration K_trans (ml min(-1) [ml tissue](-1)), perfusion F (ml min(-1) [100 ml tissue](-1)), blood volume v_b (%) and mean residence time MRT (s). From these are found the filtration fraction (FF; %) and total GFR (ml min(-1)). Fifteen healthy volunteers were imaged twice using oblique coronal slices every 2.5 s to determine the reproducibility. Using parenchymal ROIs, group mean values for renal biomarkers all agreed with published values: K_trans: 0.25; F: 219; v_b: 34; MRT: 5.5; FF: 15; GFR: 115. Nominally cortical ROIs consistently underestimated total filtration (by ~50%). Reproducibility was 7-18%. Sensitivity analysis showed that these fitted parameters are most vulnerable to errors in the fixed parameters kidney T1, flip angle, haematocrit and relaxivity. These renal biomarkers can potentially measure renal physiology in diagnosis and treatment. • Dynamic contrast-enhanced magnetic resonance imaging can measure renal function. • Filtration and perfusion values in healthy volunteers agree with published normal values. • Precision measured in healthy volunteers is between 7 and 15%.
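The abstract's group-mean values can be checked back-of-envelope: the filtration fraction is regional filtration over plasma flow using the 24% small-vessel haematocrit, and total GFR is K_trans times parenchymal volume. The 460 ml volume is an inferred assumption, not a value given in the abstract.

```python
# Consistency check of the quoted biomarkers (sketch; volume is assumed).
K_trans = 0.25    # regional filtration, ml min^-1 (ml tissue)^-1
F = 219.0         # perfusion, ml min^-1 (100 ml tissue)^-1, whole blood
hct_small = 0.24  # small-vessel haematocrit

plasma_flow = (F / 100.0) * (1.0 - hct_small)  # ml min^-1 (ml tissue)^-1
FF = 100.0 * K_trans / plasma_flow             # filtration fraction, per cent
parenchymal_volume_ml = 460.0                  # assumed total volume, both kidneys
GFR = K_trans * parenchymal_volume_ml          # ml min^-1
print(round(FF, 1), round(GFR, 1))
```

The computed FF of about 15% matches the quoted value only when the plasma (not whole-blood) flow is used, which is why the small-vessel haematocrit appears in the model.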
On the formulation of a minimal uncertainty model for robust control with structured uncertainty
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1991-01-01
In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainty from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop.
The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.
Robust linear parameter-varying control of blood pressure using vasoactive drugs
NASA Astrophysics Data System (ADS)
Luspay, Tamas; Grigoriadis, Karolos
2015-10-01
Resuscitation of emergency care patients requires fast restoration of blood pressure to a target value to achieve hemodynamic stability and vital organ perfusion. A robust control design methodology is presented in this paper for regulating the blood pressure of hypotensive patients by means of the closed-loop administration of vasoactive drugs. To this end, a dynamic first-order delay model is utilised to describe the vasoactive drug response with varying parameters that represent intra-patient and inter-patient variability. The proposed framework consists of two components: first, an online model parameter estimation is carried out using a multiple-model extended Kalman-filter. Second, the estimated model parameters are used for continuously scheduling a robust linear parameter-varying (LPV) controller. The closed-loop behaviour is characterised by parameter-varying dynamic weights designed to regulate the mean arterial pressure to a target value. Experimental data of blood pressure response of anesthetised pigs to phenylephrine injection are used for validating the LPV blood pressure models. Simulation studies are provided to validate the online model estimation and the LPV blood pressure control using phenylephrine drug injection models representing patients showing sensitive, nominal and insensitive response to the drug.
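The first-order-delay drug-response model can be sketched as below, with a crude steady-state estimate of the drug gain standing in for the multiple-model EKF; the gain, time constant, delay, and noise level are all illustrative assumptions, not identified patient parameters.

```python
import random

random.seed(3)
dt, tau, K, delay_steps = 1.0, 40.0, 15.0, 3  # step (s), time constant (s), gain, input delay

def simulate(u_seq):
    """First-order-delay pressure response to an infusion profile, with noise."""
    x, out = 0.0, []
    for k in range(len(u_seq)):
        u_delayed = u_seq[k - delay_steps] if k >= delay_steps else 0.0
        x += dt * (-x + K * u_delayed) / tau     # first-order lag dynamics
        out.append(x + random.gauss(0.0, 0.2))   # noisy pressure-change measurement
    return out

y = simulate([1.0] * 300)      # constant unit infusion
K_hat = sum(y[-50:]) / 50      # crude steady-state gain estimate for unit input
print(round(K_hat, 1))
```

In the paper's framework this estimated parameter would be updated online and used to schedule the LPV controller, rather than read off at steady state as done here.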
The application of neural networks to the SSME startup transient
NASA Technical Reports Server (NTRS)
Meyer, Claudia M.; Maul, William A.
1991-01-01
Feedforward neural networks were used to model three parameters during the Space Shuttle Main Engine startup transient. The three parameters were the main combustion chamber pressure, a controlled parameter, the high pressure oxidizer turbine discharge temperature, a redlined parameter, and the high pressure fuel pump discharge pressure, a failure-indicating performance parameter. Network inputs consisted of time windows of data from engine measurements that correlated highly to the modeled parameter. A standard backpropagation algorithm was used to train the feedforward networks on two nominal firings. Each trained network was validated with four additional nominal firings. For all three parameters, the neural networks were able to accurately predict the data in the validation sets as well as the training set.
NASA Astrophysics Data System (ADS)
Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.
2006-12-01
We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.
NASA Astrophysics Data System (ADS)
Chang, H.; Lee, J.
2017-12-01
The Ground-Based Augmentation System of the global positioning system (GBAS) provides the user with an integrity parameter, the standard deviation of the vertical ionospheric gradient (σvig), to ensure integrity. The σvig value currently used in CAT I GBAS is derived from data collected at reference stations on the US mainland and has a value of 4 mm/km. However, since the ionosphere in the equatorial region near the geomagnetic equator is more active than in the mid-latitude region, applying the mid-latitude σvig to the equatorial region has limits. Also, since daytime and nighttime ionospheric phenomena in the equatorial region differ significantly, σvig needs to be applied separately for each time period. This study presents a method for obtaining the standard deviation of the vertical ionospheric gradient in the equatorial region on nominal days, taking the equatorial ionosphere environment into account. We used data collected on nominal days from the Brazilian region near the geomagnetic equator. One feature distinguishing the equatorial ionosphere environment from the mid-latitude environment is that scintillation events occur frequently. Therefore, the days used for the analysis were selected not only by the geomagnetic indexes Kp (planetary K index) and Dst (disturbance storm index) but also by S4 (scintillation index), which indicates scintillation events. In addition, unlike the ionospheric delay bias elimination method used in the mid-latitude region, the Long-term Ionospheric Anomaly Monitor (LTIAM) used in this study applies different bias removal standards according to IPP (ionospheric pierce point) distance, in consideration of ionospheric activity. As a result, σvig values conservative enough to bound ionospheric spatial decorrelation for the equatorial region on nominal days are 8 mm/km for daytime and 19 mm/km for nighttime. Therefore, for CAT I GBAS operation in the equatorial region, a σvig value twice as large as that provided in the mid-latitude region needs to be applied in daytime, and a σvig value about two times greater than the daytime value needs to be applied at nighttime.
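The core of such a computation reduces to station-pair statistics. The sketch below uses invented inputs and is not the LTIAM pipeline (which also applies IPP-distance-dependent bias removal and obliquity handling); it only shows the gradient-then-sigma step.

```python
import statistics

def sigma_vig_mm_per_km(delays_a_mm, delays_b_mm, baseline_km, inflation=1.0):
    """Sigma of the spatial ionospheric gradient between two receivers:
    per-epoch gradient = delay difference / baseline, with the mean
    (receiver bias) removed before taking the standard deviation. An
    inflation factor > 1 can be applied to overbound the distribution."""
    grads = [(da - db) / baseline_km
             for da, db in zip(delays_a_mm, delays_b_mm)]
    bias = statistics.fmean(grads)            # crude single-bias removal
    return inflation * statistics.pstdev([g - bias for g in grads])

# synthetic slant ionospheric delays (mm) at two stations 5 km apart
sigma = sigma_vig_mm_per_km([10.0, 12.0, 11.0, 13.0],
                            [8.0, 9.0, 9.0, 10.0], baseline_km=5.0)
```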
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment not having replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2 to the 4th power experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... High-High, Nominal Trip Setpoint (NTSP) and Allowable Value. The Steam Generator Water Level High-High... previously evaluated is not increased. The Steam Generator Water Level High-High function revised values..., Steam Generator Water Level High-High, Nominal Trip Setpoint (NTSP) and Allowable Value. Function 5c...
Hepburn, Susan L.; DiGuiseppi, Carolyn; Rosenberg, Steven; Kaparich, Kristina; Robinson, Cordelia; Miller, Lisa
2015-01-01
Given a rising prevalence of autism spectrum disorders (ASD), this project aimed to develop and pilot test various teacher nomination strategies to identify children at risk for ASD in a timely, reliable, cost-effective manner. Sixty participating elementary school teachers evaluated 1323 children in total. Each teacher nominated students who most fit a description of ASD-associated characteristics, and completed the Autism Syndrome Screening Questionnaire (ASSQ) on every child in the classroom. The proportion of overall agreement between teacher nomination and ASSQ was 93–95%, depending upon the nomination parameters. Nomination required 15 min per class versus 3.5–5.5 h per class for the ASSQ. These results support the need for further study of teacher nomination strategies to identify children at risk for ASD. PMID:17661165
Miner, Grace L; Bauerle, William L
2017-09-01
The Ball-Berry (BB) model of stomatal conductance (gs) is frequently coupled with a model of assimilation to estimate water and carbon exchanges in plant canopies. The empirical slope (m) and 'residual' conductance (g0) parameters of the BB model influence transpiration estimates, but the time-intensive nature of measurement limits species-specific data on seasonal and stress responses. We measured m and g0 seasonally and under different water availability for maize and sunflower. The statistical method used to estimate parameters impacted values nominally when inter-plant variability was low, but had substantial impact with larger inter-plant variability. Values for maize (m = 4.53 ± 0.65; g0 = 0.017 ± 0.016 mol m−2 s−1) were 40% higher than other published values. In maize, we found no seasonal changes in m or g0, supporting the use of constant seasonal values, but water stress reduced both parameters. In sunflower, inter-plant variability of m and g0 was large (m = 8.84 ± 3.77; g0 = 0.354 ± 0.226 mol m−2 s−1), presenting a challenge to clear interpretation of seasonal and water stress responses: m values were stable seasonally, even as g0 values trended downward, and m values trended downward with water stress while g0 values declined substantially. © 2017 John Wiley & Sons Ltd.
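For reference, the BB model itself is a one-line relation, gs = g0 + m*(A*hs/Cs), so m and g0 can be recovered by ordinary least squares on the Ball-Berry index. A minimal sketch with made-up leaf-level numbers (not the authors' statistical procedure, which compared several estimators):

```python
def fit_ball_berry(A, hs, Cs, gs):
    """OLS slope m and intercept g0 for gs = g0 + m * (A * hs / Cs), where
    A is net assimilation, hs relative humidity at the leaf surface, and
    Cs the CO2 mole fraction at the leaf surface."""
    x = [a * h / c for a, h, c in zip(A, hs, Cs)]      # Ball-Berry index
    n = len(x)
    xbar, ybar = sum(x) / n, sum(gs) / n
    m = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, gs))
         / sum((xi - xbar) ** 2 for xi in x))
    return m, ybar - m * xbar

# synthetic gas-exchange data generated with m = 4.5, g0 = 0.017 mol m-2 s-1
A = [5.0, 15.0, 25.0, 35.0]
hs = [0.45, 0.55, 0.65, 0.75]
Cs = [380.0] * 4
gs = [0.017 + 4.5 * a * h / c for a, h, c in zip(A, hs, Cs)]
m_hat, g0_hat = fit_ball_berry(A, hs, Cs, gs)
```

With noisy replicate plants, the choice among OLS on pooled data, per-plant fits, or robust regression is exactly where the abstract reports the estimates diverging.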
Low Light Diagnostics in Thin-Film Photovoltaics
NASA Astrophysics Data System (ADS)
Shvydka, Diana; Karpov, Victor; Compaan, Alvin
2003-03-01
We study statistics of the major photovoltaic (PV) parameters, such as open circuit voltage, short circuit current, and fill factor, vs. light intensity on a set of nominally identical CdTe/CdS solar cells. We found the most probable parameter values to change with the light intensity as predicted by the standard diode model, while their relative fluctuations increase dramatically under low light. A crossover light intensity is found, below which the relative fluctuations of the PV parameters diverge inversely proportional to the square root of the light intensity. We propose a model in which the observed fluctuations are due to lateral nonuniformities in the device structure. In particular, the crossover is attributed to the lateral nonuniformity screening length exceeding the device size. From the practical standpoint, our study introduces a simple uniformity diagnostic technique.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1978-01-01
A description is presented of six simulation cases investigating the effect of the variation of static-dynamic Coulomb friction on servo system stability and performance. The upper and lower levels of dynamic Coulomb friction that allowed operation within requirements were determined to be roughly three times and 50%, respectively, of the nominal values considered in a table. A useful application of the nonlinear time-response simulation is the sensitivity analysis of a final hardware design with respect to system parameters that cannot be varied realistically or easily in the actual hardware. The parameters of static/dynamic Coulomb friction fall in this category.
NASA Astrophysics Data System (ADS)
Lin, Zhuosheng; Yu, Simin; Lü, Jinhu
2017-06-01
In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix, following the order of left to right and top to bottom, which is used as a control matrix for switching between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixing the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash value is constructed from the low 8 bits of multiple hyperchaotic iterative outputs after rounding down, and security analysis results are given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is that it has a large number of key parameters with an avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
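The absorb-iterate-extract pattern behind such constructions can be illustrated with a deliberately simplified stand-in: a 1D logistic map instead of the paper's 8D hyperchaotic system, keeping the low 8 bits of the scaled iterated state as output. This toy is for illustration only and carries no security claim.

```python
def chaotic_hash(message: bytes, digest_len=16, rounds=16):
    """Toy sponge-like hash driven by the logistic map x -> r*x*(1-x).
    Absorb each message byte into the state, iterate the map to diffuse,
    then squeeze digest bytes from the low 8 bits of the scaled state."""
    x, r = 0.4142135623, 3.99
    for b in message:
        # absorb a byte; the `or` guard keeps x strictly inside (0, 1)
        x = (x + (b + 1) / 257.0) % 1.0 or 0.3333
        for _ in range(rounds):
            x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(digest_len):
        for _ in range(rounds):
            x = r * x * (1.0 - x)
        out.append(int(x * 256 ** 3) & 0xFF)   # low 8 bits after rounding down
    return bytes(out)

h1 = chaotic_hash(b"abc")
h2 = chaotic_hash(b"abd")   # one-byte change should scatter the digest
```

The avalanche property the abstract emphasizes corresponds here to the divergence of nearby chaotic trajectories: a single flipped input byte perturbs x, and the subsequent iterations amplify the difference across every digest byte.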
Inference of missing data and chemical model parameters using experimental statistics
NASA Astrophysics Data System (ADS)
Casey, Tiernan; Najm, Habib
2017-11-01
A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, for the US DOE National Nuclear Security Administration under contract DE-NA-0003525.
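The accept/reject idea, propose hypothetical data sets and keep those consistent with reported summary statistics, can be shown with a rejection-ABC toy. This is a schematic only, not the authors' maximum-entropy formulation; the Gaussian target, tolerances, and sample sizes are invented.

```python
import random
import statistics

def abc_from_reported_stats(mean0, sd0, n, prior, simulate, tol=0.1,
                            trials=5000, seed=1):
    """Rejection ABC: draw a parameter from the prior, propose a hypothetical
    data set, and keep the draw only if the data set reproduces the reported
    summary statistics (mean0, sd0) within tolerance tol."""
    rng = random.Random(seed)
    kept = []
    for _ in range(trials):
        theta = prior(rng)
        data = simulate(theta, n, rng)
        if (abs(statistics.fmean(data) - mean0) < tol
                and abs(statistics.stdev(data) - sd0) < tol):
            kept.append(theta)
    return kept

# toy "experiment": 20 observations ~ Normal(theta, 0.5); report: mean 1.0, sd 0.5
posterior = abc_from_reported_stats(
    1.0, 0.5, n=20,
    prior=lambda rng: rng.uniform(0.0, 2.0),
    simulate=lambda th, n, rng: [rng.gauss(th, 0.5) for _ in range(n)],
)
```

In the paper's setting theta would be the Arrhenius parameters and the reported statistics the published nominal values and uncertainties, with the accepted (data, parameter) pairs defining the joint density.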
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed in applying this technique to gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method employs subset solutions of the data associated with the complete solution and adjusts the weights so that the subset solutions agree with the complete solution to within their error estimates. With the adjusted weights, the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.
78 FR 4591 - Bank Secrecy Act Advisory Group; Solicitation of Application for Membership
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-22
.... ACTION: Notice and request for nominations. SUMMARY: FinCEN is inviting the public to nominate financial... FURTHER INFORMATION CONTACT: Ina Boston, Senior Advisor, Office of Outreach, Regulatory Policy and... organization's participation on the BSAAG will bring value to the group Organizations may nominate themselves...
Avalanche weak layer shear fracture parameters from the cohesive crack model
NASA Astrophysics Data System (ADS)
McClung, David
2014-05-01
Dry slab avalanches release by mode II shear fracture within thin weak layers under cohesive snow slabs. The important fracture parameters include the nominal shear strength, the mode II fracture toughness, and the mode II fracture energy. Alpine snow is not an elastic material unless the rate of deformation is very high. For natural avalanche release, the fracture parameters therefore cannot be treated as classical fracture mechanics parameters within an elastic framework. The strong rate dependence of alpine snow implies that it is a quasi-brittle material (Bažant et al., 2003) with an important size effect on nominal shear strength. Further, the rate of deformation at release of an avalanche is unknown, so the fracture parameters for avalanche release cannot be calculated from any model that requires the effective elastic modulus. The cohesive crack model does not require the modulus to be known to estimate the fracture energy. In this paper, the cohesive crack model was used to calculate the mode II fracture energy as a function of a brittleness number and of nominal shear strength values calculated from slab avalanche fracture line data (60 with natural triggers; 191 with a mix of triggers). The brittleness number models the ratio of the approximate peak value of shear strength to the nominal shear strength. A high brittleness number (> 10) represents large size relative to the fracture process zone (FPZ) size and implies LEFM (linear elastic fracture mechanics). A low brittleness number (e.g. 0.1) represents small sample size and primarily plastic response. An intermediate value (e.g. 5) implies non-linear fracture mechanics with intermediate relative size. The calculations also implied effective values for the modulus and the critical shear fracture toughness as functions of the brittleness number.
The results showed that the effective mode II fracture energy may vary by two orders of magnitude for alpine snow, with median values ranging from 0.08 N/m (non-linear) to 0.18 N/m (LEFM) for median slab density around 200 kg/m3. Schulson and Duval (2009) estimated the fracture energy of solid ice (mode I) to be about 0.22-1 N/m, which yields rough theoretical limits of about 0.05-0.2 N/m for density 200 kg/m3 when the ice volume fraction is accounted for. Mode I results from lab tests (Sigrist, 2006) gave 0.1 N/m (200 kg/m3). The median effective mode II shear fracture toughness was calculated to be between 0.31 and 0.35 kPa m^1/2 for the avalanche data. All the fracture energy results are much lower than previously calculated from propagation saw test (PST) results for a weak layer collapse model (1.3 N/m) (Schweizer et al., 2011). The differences are related to model assumptions and estimates of the effective slab modulus. The calculations in this paper apply to quasi-static deformation and mode II weak layer fracture, whereas the weak layer collapse model is more appropriate for the dynamic conditions which follow fracture initiation (McClung and Borstad, 2012). References: Bažant, Z.P. et al. (2003) Size effect law and fracture mechanics of the triggering of dry snow slab avalanches, J. Geophys. Res. 108(B2): 2119, doi:10.1029/2002JB001884. McClung, D.M. and C.P. Borstad (2012) Deformation and energy of dry snow slabs prior to fracture propagation, J. Glaciol. 58(209), doi:10.3189/2012JoG11J009. Schulson, E.M. and P. Duval (2009) Creep and fracture of ice, Cambridge University Press, 401 pp. Schweizer, J. et al. (2011) Measurements of weak layer fracture energy, Cold Reg. Sci. and Tech. 69: 139-144. Sigrist, C. (2006) Measurement of fracture mechanical properties of snow and application to dry snow slab avalanche release, Ph.D. thesis 16736, ETH, Zurich, 139 pp.
45 CFR 73.735-502 - Permissible acceptance of gifts, entertainment, and favors.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the motivating factor. (b) Loans from banks or other financial institutions may be accepted on... similar items of nominal intrinsic value may be accepted. (d) An employee may accept food or refreshment... employee may not accept. (e) An employee may also accept food or refreshment of nominal value on infrequent...
Substitution determination of Fmoc‐substituted resins at different wavelengths
Kley, Markus; Bächle, Dirk; Loidl, Günther; Meier, Thomas; Samson, Daniel
2017-01-01
In solid‐phase peptide synthesis, the nominal batch size is calculated using the starting resin substitution and the mass of the starting resin. The starting resin substitution constitutes the basis for the calculation of a whole set of important process parameters, such as the number of amino acid derivative equivalents. For Fmoc‐substituted resins, substitution determination is often performed by suspending the Fmoc‐protected starting resin in 20% (v/v) piperidine in DMF to generate the dibenzofulvene–piperidine adduct that is quantified by ultraviolet–visible spectroscopy. The spectrometric measurement is performed at the maximum absorption wavelength of the dibenzofulvene–piperidine adduct, that is, at 301.0 nm. The recorded absorption value, the resin weight and the volume are entered into an equation derived from Lambert–Beer's law, together with the substance‐specific molar absorption coefficient at 301.0 nm, in order to calculate the nominal substitution. To our knowledge, molar absorption coefficients between 7100 l mol−1 cm−1 and 8100 l mol−1 cm−1 have been reported for the dibenzofulvene–piperidine adduct at 301.0 nm. Depending on the applied value, the nominal batch size may differ up to 14%. In this publication, a determination of the molar absorption coefficients at 301.0 and 289.8 nm is reported. Furthermore, proof is given that by measuring the absorption at 289.8 nm the impact of wavelength accuracy is reduced. © 2017 The Authors Journal of Peptide Science published by European Peptide Society and John Wiley & Sons Ltd. PMID:28635051
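The calculation this abstract describes is a direct rearrangement of Lambert-Beer's law, and the 14% figure follows from the ratio of the two published coefficients (8100/7100 ≈ 1.14). A minimal sketch, where the assay numbers (absorbance, dilution volume, resin mass) are invented for illustration:

```python
def fmoc_substitution_mmol_g(absorbance, volume_l, path_cm, resin_g, eps):
    """Substitution from Lambert-Beer: c = A/(eps*l) gives mol/L of the
    dibenzofulvene-piperidine adduct; scale by solution volume and resin
    mass. eps in l mol^-1 cm^-1, path in cm, result in mmol/g."""
    conc_mol_per_l = absorbance / (eps * path_cm)
    return conc_mol_per_l * volume_l / resin_g * 1000.0

# hypothetical assay: 5 mg resin in 50 ml, A = 0.56 in a 1 cm cuvette
low = fmoc_substitution_mmol_g(0.56, 0.050, 1.0, 0.005, 8100.0)
high = fmoc_substitution_mmol_g(0.56, 0.050, 1.0, 0.005, 7100.0)
spread = high / low - 1.0   # relative effect of the coefficient choice
```

Since the coefficient enters only as a divisor, the spread equals 8100/7100 − 1 ≈ 0.14 regardless of the assay numbers, which is the batch-size discrepancy the abstract quantifies.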
NASA Astrophysics Data System (ADS)
Ohara, Masaki; Noguchi, Toshihiko
This paper describes a new method for rotor-position sensorless control of a surface permanent magnet synchronous motor based on a model reference adaptive system (MRAS). The method features an MRAS in the current control loop to estimate the rotor speed and position using only current sensors. Like almost all conventional methods, it incorporates a mathematical model of the motor, which consists of parameters such as winding resistances, inductances, and an induced voltage constant. Hence, it is important to investigate how deviations of these parameters affect the estimated rotor position. First, this paper proposes a structure for the sensorless control applied in the current control loop. Next, it proves the stability of the proposed method when motor parameters deviate from their nominal values, and derives the relationship between the estimated position and the parameter deviations in steady state. Finally, experimental results are presented to show the performance and effectiveness of the proposed method.
Validating an Air Traffic Management Concept of Operation Using Statistical Modeling
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2013-01-01
Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They can also be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical value of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and is therefore valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as radar track data, into the analysis.
A Database Design for the Brazilian Air Force Military Personnel Control System.
1987-06-01
GIVEN A RECNum GET MOVING HISTORICAL". 77 SEL4 PlC X(70) VALUE ". 4. GIVEN A RECNUM GET NOMINATION HISTORICAL". 77 SEL5 PIC X(70) VALUE it 5. GIVEN A...WHERE - "°RECNUM = :RECNUM". 77 SQL-SEL3-LENGTH PIC S9999 VALUE 150 COMP. 77 SQL- SEL4 PIC X(150) VALUE "SELECT ABBREV,DTNOM,DTEXO,SITN FROM...NOMINATION WHERE RECNUM 77 SQL- SEL4 -LENGTH PIC S9999 VALUE 150 COMP. 77 SQL-SEL5 PIC X(150) VALUE "SELECT ABBREVDTDES,DTWAIVER,SITD FROM DESIG WHERE RECNUM It
NASA Astrophysics Data System (ADS)
Pujari, P. K.; Datta, T.; Manohar, S. B.; Prakash, Satya; Sastry, P. V. P. S. S.; Yakhmi, J. V.; Iyer, R. M.
1990-03-01
Doppler broadened annihilation radiation (DBAR) spectral parameters are reported, for the first time, between 77 K and 300 K for several Bi-based oxide superconductors, viz. A: single-phase (2122) Bi2CaSr2Cu2Ox with Tc = 85 K (R = 0); B: a mixed-phase lead-doped sample containing both 2122 and 2223 with a nominal composition Bi1.6Pb0.4Ca2Sr2Cu3Oy; and C: another 2122+2223 sample with the same nominal composition as B but synthesised under a different heat-treatment schedule so as to yield Tc = 85 K (R = 0). Analyses of these spectra using the PAACFIT program yielded two components, of which the intensity of the narrow component, I_N, and the width of the broad component, T_B, were the only temperature-dependent parameters. At the onset of the superconducting transition, both T_B and I_N were seen to increase to a maximum value and then decrease on further cooling. A double-peak structure in the T_B vs. temperature profile was observed in samples B and C, similar to one reported by us in Tl-Ca-Ba-Cu-O systems. In addition, the presence of a magnetic field (1 kG) yielded no significant change in the DBAR spectral parameters. The results are discussed.
Fire, ice, water, and dirt: A simple climate model
NASA Astrophysics Data System (ADS)
Kroll, John
2017-07-01
A simple paleoclimate model was developed as a modeling exercise. The model is a lumped-parameter system consisting of an ocean (water), land (dirt), glacier, and sea ice (ice), driven by the sun (fire). In comparison with other such models, its uniqueness lies in its relative simplicity while still yielding good results. For nominal values of the parameters, the system is very sensitive to small changes in the parameters, yielding equilibrium, steady oscillations, or catastrophes such as freezing or boiling oceans. However, stable solutions can be found, especially naturally oscillating solutions. For nominally realistic conditions, natural periods of order 100 kyr are obtained, and chaos ensues if the Milankovitch orbital forcing is applied. An analysis of a truncated system shows that the naturally oscillating solution is a limit cycle with the characteristics of a relaxation oscillation in the two major dependent variables, the ocean temperature and the glacier ice extent. The key to getting oscillations is having the effective emissivity decrease with temperature while, at the same time, the effective ocean albedo decreases with increasing glacier extent. Results of the original model compare favorably to the proxy data for ice mass variation, but not for temperature variation. However, modifications to the effective emissivity and albedo can be made to yield much more realistic results. The primary conclusion is that the opinion of Saltzman [Clim. Dyn. 5, 67-78 (1990)] is plausible: the external Milankovitch orbital forcing is not sufficient to explain the dominant 100 kyr period in the data.
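The fast-slow mechanism described here can be caricatured in a dimensionless two-variable system: ocean temperature T relaxes quickly under absorbed sunlight minus emission, glacier extent G responds slowly, emissivity decreases with T, and albedo decreases with G. The parameter values below are invented for illustration and are not the paper's calibration.

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def run_climate_toy(T0=1.0, G0=1.0, dt=0.01, steps=20000,
                    Q=1.0, T_eq=1.0, k=0.05):
    """Euler integration of a lumped ocean-temperature / glacier-extent toy:
    dT/dt = Q*(1 - albedo(G)) - emissivity(T)*T,  dG/dt = k*(T_eq - T).
    Emissivity falls with T and albedo falls with G, the abstract's two
    conditions for oscillation; both are clamped to physical ranges."""
    T, G = T0, G0
    Ts, Gs = [T], [G]
    for _ in range(steps):
        albedo = clamp(0.5 - 0.25 * G, 0.0, 0.9)     # decreases with G
        emissivity = clamp(0.6 - 0.1 * T, 0.2, 1.0)  # decreases with T
        T += dt * (Q * (1.0 - albedo) - emissivity * T)
        G = max(0.0, G + dt * k * (T_eq - T))        # slow and non-negative
        Ts.append(T)
        Gs.append(G)
    return Ts, Gs

Ts, Gs = run_climate_toy()
```

Whether sustained oscillations appear depends on the parameter choices; the point is the structure: a fast T equation with a destabilizing emissivity feedback, closed by a slow negative loop through G and albedo, which is the relaxation-oscillation skeleton the truncated-system analysis identifies.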
An algorithm for control system design via parameter optimization. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sinha, P. K.
1972-01-01
An algorithm for design via parameter optimization has been developed for linear time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to the nominal response; in general it involves the error between the system and nominal responses, its derivatives, and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for the stability of linear time-invariant systems.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
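The procedure described, a log-linear fit for the initial nominal estimates followed by iterated Taylor-series (Gauss-Newton) corrections, can be sketched for a single-exponential model y ≈ a·exp(b·t). The single-term model and the data below are illustrative; the report's technique covers more general exponential decay models.

```python
import math

def fit_exponential(ts, ys, iters=20, tol=1e-12):
    """Fit y = a*exp(b*t) by least squares: a log-linear fit supplies the
    initial nominal estimates, then Gauss-Newton corrections (least squares
    on the Taylor-linearized model) refine them until convergence."""
    n = len(ts)
    # nominal estimates from the linearization ln(y) = ln(a) + b*t
    L = [math.log(y) for y in ys]
    tbar, Lbar = sum(ts) / n, sum(L) / n
    b = (sum((t - tbar) * (l - Lbar) for t, l in zip(ts, L))
         / sum((t - tbar) ** 2 for t in ts))
    a = math.exp(Lbar - b * tbar)
    for _ in range(iters):
        # Jacobian of the model: d/da = exp(b*t), d/db = a*t*exp(b*t)
        J = [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]
        r = [y - a * math.exp(b * t) for t, y in zip(ts, ys)]
        # solve the 2x2 normal equations (J^T J) delta = J^T r
        s11 = sum(u * u for u, _ in J)
        s12 = sum(u * v for u, v in J)
        s22 = sum(v * v for _, v in J)
        g1 = sum(u * ri for (u, _), ri in zip(J, r))
        g2 = sum(v * ri for (_, v), ri in zip(J, r))
        det = s11 * s22 - s12 * s12
        da = (s22 * g1 - s12 * g2) / det
        db = (s11 * g2 - s12 * g1) / det
        a, b = a + da, b + db
        if abs(da) + abs(db) < tol:   # predetermined convergence criterion
            break
    return a, b

# decay-type data generated from a = 2.0, b = -0.5
ts = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
ys = [2.0 * math.exp(-0.5 * t) for t in ts]
a_hat, b_hat = fit_exponential(ts, ys)
```

The correction vector (da, db) plays the role of the report's correction matrix applied to the nominal estimate; the loop repeats the solution cycle until the predetermined criterion is met.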
NASA Technical Reports Server (NTRS)
Porter, J. A.; Gibson, J. S.; Kroll, Q. D.; Loh, Y. C.
1981-01-01
The RF communications capabilities and nominally expected performance for the ascent phase of the second orbital flight of the shuttle are provided. Predicted performance is given mainly in the form of plots of signal strength versus elapsed mission time for the STDN (downlink) and shuttle orbiter (uplink) receivers for the S-band PM and FM, and UHF systems. Performance of the NAV and landing RF systems is treated for RTLS abort, since in this case the spacecraft will loop around and return to the launch site. NAV and landing RF systems include TACAN, MSBLS, and C-band altimeter. Signal strength plots were produced by a computer program which combines the spacecraft trajectory, antenna patterns, transmit and receive performance characteristics, and system mathematical models. When available, measured spacecraft parameters were used in the predictions; otherwise, specified values were used. Specified ground station parameter values were also used. Thresholds and other criteria on the graphs are explained.
NASA Astrophysics Data System (ADS)
Swastika, Windra
2017-03-01
A recognition system for the nominal value of banknotes has been developed using an artificial neural network (ANN). An ANN with back propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the numbers of iterations, weights, and samples are large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error E is a parabolic function of each weight, so that the update can jump toward the parabola's minimum, where the error gradient E' vanishes. In our system, we use 5 denominations, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each denomination was scanned and digitally processed. There are 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy in predicting nominal values from the input. Our results show that the Quickprop method substantially shortens learning compared to the back propagation method. For 40 input patterns, the Quickprop method reached an error below 0.1 in only 20 iterations, while the back propagation method required 2000 iterations. The prediction accuracy of both methods is higher than 90%.
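Quickprop's update rule itself is compact: fit a parabola through the two most recent gradient observations for a weight and jump toward its minimum, with a growth limit and a plain gradient-descent fallback. A minimal sketch on a 1D quadratic (illustrative only; Fahlman's full algorithm adds further safeguards, and the paper applies it per weight across the whole network):

```python
def quickprop_step(grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop update for a single weight: the secant/parabola jump
    grad/(prev_grad - grad) * prev_step, limited by the growth factor mu,
    with a plain gradient-descent step as fallback."""
    if prev_step == 0.0 or prev_grad == grad:
        return -lr * grad                          # fallback: gradient descent
    step = grad / (prev_grad - grad) * prev_step   # jump toward parabola minimum
    if abs(step) > mu * abs(prev_step):            # limit step growth
        step = mu * abs(prev_step) * (1.0 if step > 0 else -1.0)
    return step

# minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3)
w, prev_g, prev_s = 0.0, 0.0, 0.0
for _ in range(30):
    g = 2.0 * (w - 3.0)
    s = quickprop_step(g, prev_g, prev_s)
    w, prev_g, prev_s = w + s, g, s
```

On a quadratic error surface the parabolic assumption is exact, so the secant jump lands on the minimum after a handful of steps, which is the mechanism behind the 20-versus-2000-iteration comparison reported above.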
Substitution determination of Fmoc-substituted resins at different wavelengths.
Eissler, Stefan; Kley, Markus; Bächle, Dirk; Loidl, Günther; Meier, Thomas; Samson, Daniel
2017-10-01
In solid-phase peptide synthesis, the nominal batch size is calculated from the starting resin substitution and the mass of the starting resin. The starting resin substitution constitutes the basis for the calculation of a whole set of important process parameters, such as the number of amino acid derivative equivalents. For Fmoc-substituted resins, substitution determination is often performed by suspending the Fmoc-protected starting resin in 20% (v/v) piperidine in DMF to generate the dibenzofulvene-piperidine adduct, which is quantified by ultraviolet-visible spectroscopy. The spectrometric measurement is performed at the maximum absorption wavelength of the dibenzofulvene-piperidine adduct, that is, at 301.0 nm. The recorded absorption value, the resin weight and the volume are entered into an equation derived from the Lambert-Beer law, together with the substance-specific molar absorption coefficient at 301.0 nm, in order to calculate the nominal substitution. To our knowledge, molar absorption coefficients between 7100 L mol⁻¹ cm⁻¹ and 8100 L mol⁻¹ cm⁻¹ have been reported for the dibenzofulvene-piperidine adduct at 301.0 nm. Depending on the applied value, the nominal batch size may differ by up to 14%. In this publication, a determination of the molar absorption coefficients at 301.0 and 289.8 nm is reported. Furthermore, proof is given that measuring the absorption at 289.8 nm reduces the impact of wavelength accuracy. © 2017 The Authors Journal of Peptide Science published by European Peptide Society and John Wiley & Sons Ltd.
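The substitution calculation from the Lambert-Beer law, and its sensitivity to the choice of molar absorption coefficient, can be illustrated as follows; the absorbance, volume, and resin mass are hypothetical, while the two coefficients are the literature extremes quoted above:

```python
def fmoc_substitution(absorbance, volume_ml, resin_mg, epsilon, path_cm=1.0):
    """Nominal resin substitution (mmol/g) from the UV absorbance of the
    dibenzofulvene-piperidine adduct, via the Lambert-Beer law."""
    conc_mol_per_l = absorbance / (epsilon * path_cm)
    mol_adduct = conc_mol_per_l * volume_ml / 1000.0
    return mol_adduct / (resin_mg / 1000.0) * 1000.0   # mmol per g resin

# the same (hypothetical) measurement evaluated with the two extreme
# literature coefficients, 7100 vs 8100 L mol^-1 cm^-1
sub_lo_eps = fmoc_substitution(0.60, volume_ml=50.0, resin_mg=10.0,
                               epsilon=7100.0)
sub_hi_eps = fmoc_substitution(0.60, volume_ml=50.0, resin_mg=10.0,
                               epsilon=8100.0)
print(round(sub_lo_eps, 3), round(sub_hi_eps, 3),
      round((sub_lo_eps - sub_hi_eps) / sub_hi_eps * 100, 1))
```

The relative spread between the two results is 8100/7100 − 1 ≈ 14%, which reproduces the batch-size discrepancy the abstract points out.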
ERIC Educational Resources Information Center
Preston, Kathleen; Reise, Steven; Cai, Li; Hays, Ron D.
2011-01-01
The authors used a nominal response item response theory model to estimate category boundary discrimination (CBD) parameters for items drawn from the Emotional Distress item pools (Depression, Anxiety, and Anger) developed in the Patient-Reported Outcomes Measurement Information Systems (PROMIS) project. For polytomous items with ordered response…
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
Finite element simulation is a very useful tool in the deep-drawing industry, used in particular for developing and validating new stamping tools, since it reduces the cost and time of tooling design and set-up. One of the main difficulties in obtaining good agreement between the simulation and the real process, however, lies in the definition of the numerical conditions (mesh, punch travel speed, boundary conditions, ...) and of the parameters that model the material behavior. Indeed, in the press shop, a change of sheet batch often produces a variation in the geometry of the formed part, owing to the variability of material properties between batches. This variability is probably one of the main sources of process deviation once the process is set up, which is why it is important to study the influence of material data variation on the geometry of a classical stamped part. An omega-shaped part was chosen for its simplicity and because it is representative of automotive applications (car-body reinforcement); moreover, it exhibits significant springback deviations. An isotropic behavior law is assumed. The impact of statistical deviations of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested. A Gaussian distribution is assumed, and the resulting geometry variation is studied by FE simulation. A second approach is also considered: the process variability is represented by a mathematical model, and an analytical model is defined that, given the variability of the input parameters, yields the variability of the part geometry around the nominal shape. These two approaches make it possible to predict the process capability as a function of material parameter variability.
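The Gaussian sampling of material and friction coefficients around their nominal values can be sketched as below; the response function is a stand-in linearized surrogate with invented coefficients, not the FE model or the paper's analytical model:

```python
import random, statistics

random.seed(1)

# nominal hardening-law coefficients and friction (hypothetical values)
nominal = {"K": 550.0, "n": 0.22, "sigma_y": 180.0, "mu": 0.12}
spread  = {"K": 15.0,  "n": 0.01, "sigma_y": 8.0,   "mu": 0.01}

def springback_angle(p):
    """Stand-in response surface: a linearization around the nominal
    point with invented coefficients, NOT the FE simulation."""
    return (2.0 + 0.004 * (p["K"] - 550.0) - 12.0 * (p["n"] - 0.22)
            + 0.006 * (p["sigma_y"] - 180.0) - 5.0 * (p["mu"] - 0.12))

samples = [springback_angle({k: random.gauss(nominal[k], spread[k])
                             for k in nominal})
           for _ in range(5000)]
print(round(statistics.mean(samples), 3), round(statistics.stdev(samples), 3))
```

The spread of the sampled springback angles around the nominal value is exactly the kind of geometry variability the two approaches in the paper set out to predict.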
Optical properties of pre-colored dental monolithic zirconia ceramics.
Kim, Hee-Kyung; Kim, Sung-Hun
2016-12-01
The purposes of this study were to evaluate the optical properties of recently marketed pre-colored monolithic zirconia ceramics and to compare them with those of veneered zirconia and lithium disilicate glass ceramics. Various shades of pre-colored monolithic zirconia, veneered zirconia, and lithium disilicate glass ceramic specimens were tested (17.0 × 17.0 × 1.5 mm, n=5). CIELab color coordinates were obtained against white, black, and grey backgrounds with a spectrophotometer. Color differences of the specimen pairs were calculated using the CIEDE2000 (ΔE00) formula. The translucency parameter (TP) was derived from the ΔE00 of the specimen against a white and a black background. X-ray diffraction was used to determine the crystalline phases of the monolithic zirconia specimens. Data were analyzed with 1-way ANOVA, Scheffé post hoc, and Pearson correlation testing (α=0.05). For different shades of the same ceramic brand, there were significant differences in L*, a*, b*, and TP values in most ceramic brands. With the same nominal shade (A2), statistically significant differences were observed in L*, a*, b*, and TP values among different ceramic brands and systems (P<0.001). The color differences between pre-colored monolithic zirconia and veneered zirconia or lithium disilicate glass ceramics of the corresponding nominal shades ranged beyond the acceptability threshold. Due to their high L* values and low a* and b* values, pre-colored monolithic zirconia ceramics can be used with additional staining to match neighboring restorations or natural teeth. Due to their high value and low chroma, an unacceptable color mismatch with adjacent ceramic restorations might be expected. Copyright © 2016 Elsevier Ltd. All rights reserved.
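The translucency parameter computation can be sketched as follows; for brevity this sketch uses the simpler CIE76 color-difference formula rather than CIEDE2000, and the CIELab readings are hypothetical:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference; the study used the more elaborate
    CIEDE2000 formula, but CIE76 keeps this sketch short."""
    return math.dist(lab1, lab2)

def translucency_parameter(lab_over_black, lab_over_white):
    """TP = color difference of the same specimen measured over a
    black and over a white backing."""
    return delta_e76(lab_over_black, lab_over_white)

# hypothetical CIELab readings for an A2-shade monolithic zirconia disc
over_black = (64.1, 1.2, 12.0)
over_white = (68.3, 0.9, 14.9)
print(round(translucency_parameter(over_black, over_white), 2))
```

A perfectly opaque specimen would give TP = 0 (the backing cannot show through), so larger TP values indicate greater translucency.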
NASA Technical Reports Server (NTRS)
Vajingortin, L. D.; Roisman, W. P.
1991-01-01
The problem of ensuring the required quality of products and/or technological processes is often made more difficult by the absence of a general theory for determining the optimal sets of values of the primary factors, i.e., of the output parameters of the parts and units comprising an object, that ensure the correspondence of the object's parameters to the quality requirements. This is the main reason for the amount of time needed to finish complex, vital articles. To create such a theory, a number of difficulties must be overcome and the following tasks solved: the creation of reliable and stable mathematical models describing the influence of the primary factors on the output parameters; the development of a new technique for assigning tolerances to the primary factors with regard to economic, technological, and other criteria, based on the solution of the main problem; and the well-reasoned assignment of nominal values for the primary factors, which serve as the basis for setting tolerances. Each of these tasks is of independent importance. An attempt is made here to solve this problem, which, in its mathematically formalized aspect, is called the multiple inverse problem.
Matsumoto, Mariko; Inaba, Yohei; Yamaguchi, Ichiro; Endo, Osamu; Hammond, David; Uchiyama, Shigehisa; Suzuki, Gen
2013-03-01
Although the relative risk of lung cancer due to smoking is reported to be lower in Japan than in other countries, few studies have examined the characteristics of Japanese cigarettes or potential differences in smoking patterns among Japanese smokers. To examine tar, nicotine and carbon monoxide (TNCO) emissions from ten leading cigarettes in Japan, machine smoking tests were conducted using the International Organization for Standardization (ISO) protocol and the Health Canada Intense (HCI) protocol. Smoking topography and tobacco-related biomarkers were collected from 101 Japanese smokers to examine measures of exposure. The findings indicate considerable variability in the smoking behavior of Japanese smokers. On average, puffing behaviors observed among smokers were more similar to the parameters of the HCI protocol, and brands with greater ventilation that yielded lower machine values using the ISO protocol were smoked more intensely than brands with lower levels of ventilation. Smokers of "ultra-low/low" nicotine-yield cigarettes smoked 2.7-fold more intensively than smokers of "medium/high" nicotine-yield cigarettes to achieve the same level of salivary cotinine (p = 0.024). CO levels in expiratory breath samples were associated with puff volume and self-reported smoking intensity, but not with the nominal nicotine-yield values reported on cigarette packages. Japanese smokers engaged in "compensatory smoking" to achieve their desired nicotine intake, and levels of exposure were greater than those suggested by the nominal nicotine and tar yields reported on cigarette packages.
McKisson, John E.; Barbosa, Fernando
2015-09-01
A method for designing a completely passive bias-compensation circuit to stabilize the gain of multi-pixel avalanche photodetector devices. The method includes determining the circuit design and component values needed to achieve a desired precision of gain stability, and can be used with any temperature-sensitive device whose voltage-dependent parameter has a nominally linear temperature coefficient that must be stabilized. The circuit design includes a negative-temperature-coefficient resistor in thermal contact with the photodetector device to provide a varying resistance, and a second, fixed resistor to form a voltage divider that can be chosen to set the desired slope and intercept of the characteristic for a specific voltage-source value. The addition of a third resistor to the divider network provides a solution set for a set of SiPM devices that requires only a single stabilized voltage-source value.
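The divider behavior can be illustrated with the usual beta model for an NTC thermistor; the resistor values, beta, and source voltage below are assumptions for illustration, not the patented component values:

```python
import math

def ntc_resistance(t_c, r25=10e3, beta=3950.0):
    """NTC thermistor resistance from the beta model (assumed values)."""
    t_k, t25_k = t_c + 273.15, 298.15
    return r25 * math.exp(beta * (1.0 / t_k - 1.0 / t25_k))

def bias_divider_output(t_c, v_src=5.0, r_fixed=10e3):
    """Two-resistor divider: the NTC (in thermal contact with the
    detector) forms the upper leg and the fixed resistor the lower
    leg, so the output voltage rises with temperature."""
    return v_src * r_fixed / (ntc_resistance(t_c) + r_fixed)

for t in (15.0, 25.0, 35.0):
    print(t, round(bias_divider_output(t), 3))
```

Choosing the fixed resistor and source voltage sets the slope and intercept of this output-versus-temperature characteristic, which is the degree of freedom the abstract describes for matching the detector's linear temperature coefficient.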
On the estimation algorithm used in adaptive performance optimization of turbofan engines
NASA Technical Reports Server (NTRS)
Espana, Martin D.; Gilyard, Glenn B.
1993-01-01
The performance seeking control algorithm is designed to continuously optimize the performance of propulsion systems. The algorithm uses a nominal model of the propulsion system and estimates, in flight, the engine deviation parameters characterizing the engine's deviations from nominal conditions. In practice, because of measurement biases and/or model uncertainties, the estimated engine deviation parameters may not reflect the engine's actual off-nominal condition. This unavoidably affects the overall performance seeking control scheme, and the effect is exacerbated by the open-loop character of the algorithm. The effects of unknown measurement biases on the estimation algorithm are evaluated. This evaluation allows identification of the measurements most critical for application of the performance seeking control algorithm to an F100 engine. An observability study reveals an equivalence between the biases and the engine deviation parameters; consequently, it cannot be determined whether the estimated engine deviation parameters represent the actual engine deviation or whether they simply reflect the measurement biases. A new algorithm, based on the engine's (steady-state) optimization model, is proposed and tested with flight data. Compared with previous Kalman filter schemes based on local engine dynamic models, the new algorithm is easier to design and tune, and it reduces the computational burden on the onboard computer.
The human as a detector of changes in variance and bandwidth
NASA Technical Reports Server (NTRS)
Curry, R. E.; Govindaraj, T.
1977-01-01
The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second-order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal, except for the (3 s, 0.2) process, which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal, except for the (3 s, 0.2) process, in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g., level-crossing counters, were also examined and are found to be descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
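A Wald sequential test for a change in residual variance, of the kind the model above applies to the Kalman filter residual, can be sketched as follows; the thresholds and the two variance hypotheses are illustrative, not the paper's values:

```python
import math, random

def sprt_variance(residuals, var0, var1, alpha=0.001, beta=0.001):
    """Wald sequential probability ratio test between the nominal
    residual variance var0 and a changed variance var1, assuming
    zero-mean Gaussian residuals."""
    upper = math.log((1.0 - beta) / alpha)    # accept "changed"
    lower = math.log(beta / (1.0 - alpha))    # accept "nominal"
    llr = 0.0
    for n, r in enumerate(residuals, start=1):
        llr += (0.5 * math.log(var0 / var1)
                + r * r * (0.5 / var0 - 0.5 / var1))
        if llr >= upper:
            return "changed", n
        if llr <= lower:
            return "nominal", n
    return "undecided", len(residuals)

random.seed(7)
nominal = [random.gauss(0.0, 1.0) for _ in range(400)]
changed = [random.gauss(0.0, 1.3) for _ in range(400)]   # +69 % variance
print(sprt_variance(nominal, 1.0, 1.69))
print(sprt_variance(changed, 1.0, 1.69))
```

The test accumulates a log-likelihood ratio sample by sample and stops as soon as either threshold is crossed, which is what makes sequential tests attractive as models of human detection time.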
Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions
NASA Technical Reports Server (NTRS)
Teubert, Christopher; Daigle, Matthew J.
2014-01-01
Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
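The lookup-table wear estimate followed by a regression on the wear-propagation rate can be sketched as below; the table entries, times, and wear limit are hypothetical, not the I/P transducer characterization from the paper:

```python
# steady-state output -> fault-parameter lookup (hypothetical table),
# followed by a least-squares fit of the wear-propagation rate
table = [(4.00, 0.000), (3.90, 0.005), (3.78, 0.010),
         (3.65, 0.015), (3.50, 0.020)]   # (steady output [mA], wear)

def wear_from_output(y):
    """Linear interpolation between the two bracketing table entries."""
    pts = sorted(table)                   # ascending in steady output
    if y <= pts[0][0]:
        return pts[0][1]
    if y >= pts[-1][0]:
        return pts[-1][1]
    for (y0, w0), (y1, w1) in zip(pts, pts[1:]):
        if y0 <= y <= y1:
            return w0 + (y - y0) / (y1 - y0) * (w1 - w0)

# wear estimates over time -> slope of a least-squares line gives the
# wear-propagation parameter used to extrapolate remaining useful life
times = [0.0, 100.0, 200.0, 300.0]       # hypothetical operating hours
wears = [wear_from_output(y) for y in (4.00, 3.90, 3.78, 3.65)]
n = len(times)
tbar, wbar = sum(times) / n, sum(wears) / n
slope = (sum((t - tbar) * (w - wbar) for t, w in zip(times, wears))
         / sum((t - tbar) ** 2 for t in times))
rul_hours = (0.020 - wears[-1]) / slope  # hours until the wear limit
print(wears, slope, round(rul_hours, 1))
```

The lookup maps each steady-state measurement to a wear estimate, and the fitted slope plays the role of the wear-propagation parameter that drives the remaining-useful-life prediction.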
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. It is often difficult to determine which input parameters have the greatest impact on the model's prediction, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the greatest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only the input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
Model for economic evaluation of high energy gas fracturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engi, D.
1984-05-01
The HEGF/NPV model has been developed and adapted for interactive microcomputer calculations of the economic consequences of reservoir stimulation by high energy gas fracturing (HEGF) in naturally fractured formations. This model makes use of three individual models: a model of the stimulated reservoir, a model of the gas flow in this reservoir, and a model of the discounted expected net cash flow (net present value, or NPV) associated with the enhanced gas production. Nominal values of the input parameters, based on observed data and reasonable estimates, are used to calculate the initial expected increase in the average daily rate of production resulting from the Meigs County HEGF stimulation experiment. Agreement with the observed initial increase in rate is good. On the basis of this calculation, production from the Meigs County Well is not expected to be profitable, but the HEGF/NPV model probably provides conservative results. Furthermore, analyses of the sensitivity of the expected NPV to variations in the values of certain reservoir parameters suggest that the use of HEGF stimulation in somewhat more favorable formations is potentially profitable. 6 references, 4 figures, 3 tables.
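The NPV part of such a model is a straightforward discounted cash-flow sum; the cash flows and discount rates below are hypothetical and are not the Meigs County inputs:

```python
def npv(cash_flows, discount_rate):
    """Discounted expected net cash flow; cash_flows[0] is the up-front
    stimulation cost (negative), then one entry per year."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# hypothetical numbers: a fracturing-treatment cost followed by
# declining incremental production revenue over five years
flows = [-250_000.0, 90_000.0, 70_000.0, 55_000.0, 45_000.0, 35_000.0]
for rate in (0.05, 0.10, 0.20):
    print(rate, round(npv(flows, rate), 2))
```

Even though the undiscounted flows sum to a positive number, the NPV turns negative as the discount rate rises, which is how a stimulation treatment can fail the profitability test despite increasing production.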
Uncertainty-enabled design of electromagnetic reflectors with integrated shape control
NASA Astrophysics Data System (ADS)
Haque, Samiul; Kindrat, Laszlo P.; Zhang, Li; Mikheev, Vikenty; Kim, Daewa; Liu, Sijing; Chung, Jooyeon; Kuian, Mykhailo; Massad, Jordan E.; Smith, Ralph C.
2018-03-01
We implemented a computationally efficient model for a corner-supported, thin, rectangular, orthotropic polyvinylidene fluoride (PVDF) laminate membrane, actuated by a two-dimensional array of segmented electrodes. The laminate can be used as shape-controlled electromagnetic reflector and the model estimates the reflector's shape given an array of control voltages. In this paper, we describe a model to determine the shape of the laminate for a given distribution of control voltages. Then, we investigate the surface shape error and its sensitivity to the model parameters. Subsequently, we analyze the simulated deflection of the actuated bimorph using a Zernike polynomial decomposition. Finally, we provide a probabilistic description of reflector performance using statistical methods to quantify uncertainty. We make design recommendations for nominal parameter values and their tolerances based on optimization under uncertainty using multiple methods.
FY2014 Parameters for Helions and Gold Ions in Booster, AGS, and RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, C. J.
The nominal parameters for helions (a helion is the bound state of two protons and one neutron, the nucleus of a helium-3 atom) and gold ions in Booster, AGS, and RHIC are given for the FY2014 running period. The parameters are found using various formulas to derive the mass, helion anomalous g-factor, kinetic parameters, RF parameters, ring parameters, etc.
Optimization of an electromagnetic linear actuator using a network and a finite element model
NASA Astrophysics Data System (ADS)
Neubert, Holger; Kamusella, Alfred; Lienig, Jens
2011-03-01
Model-based design optimization leads to robust solutions only if the statistical deviations of design, load, and ambient parameters from their nominal values are considered. We describe an optimization methodology that treats these deviations as stochastic variables, applied to an electromagnetic actuator used to drive a Braille printer. A combined model simulates the dynamic behavior of the actuator and its non-linear load. It consists of a dynamic network model and a stationary magnetic finite element (FE) model. The network model utilizes lookup tables of the magnetic force and the flux linkage computed by the FE model. After a sensitivity analysis using design-of-experiments (DoE) methods and a nominal optimization based on gradient methods, a robust design optimization is performed. Selected design variables are included in the form of their density functions. In order to reduce the computational effort, we use response surfaces instead of the combined system model in all stochastic analysis steps, so that Monte Carlo simulations can be applied. As a result, we found an optimal system design meeting our requirements with regard to function and reliability.
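The response-surface Monte Carlo step can be sketched as follows; the quadratic surface, tolerances, and stroke requirement are invented for illustration and are not the actuator model from the paper:

```python
import random

random.seed(42)

def stroke_mm(air_gap, coil_turns):
    """Quadratic response surface standing in for the combined
    network/FE model; coefficients are invented for illustration."""
    return (1.8 - 3.0 * (air_gap - 0.50) ** 2
            + 0.002 * (coil_turns - 900.0)
            - 0.004 * (air_gap - 0.50) * (coil_turns - 900.0))

def failure_probability(gap_nom, turns_nom, n=20000, required=1.78):
    """Monte-Carlo estimate of P(stroke < requirement), with the design
    tolerances modelled as Gaussian scatter around the nominal values."""
    fails = 0
    for _ in range(n):
        g = random.gauss(gap_nom, 0.02)    # air-gap tolerance
        t = random.gauss(turns_nom, 10.0)  # winding-count tolerance
        if stroke_mm(g, t) < required:
            fails += 1
    return fails / n

print(failure_probability(0.50, 900.0))
print(failure_probability(0.50, 950.0))   # more turns: extra margin
```

Because each sample only evaluates the cheap response surface rather than the FE model, tens of thousands of Monte Carlo draws are affordable, which is exactly the computational argument made in the abstract.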
NASA Astrophysics Data System (ADS)
Köhler, Ulf; Nevas, Saulius; McConville, Glen; Evans, Robert; Smid, Marek; Stanek, Martin; Redondas, Alberto; Schönenborn, Fritz
2018-04-01
Three reference Dobsons (the regional standard Dobsons No. 064, Germany, and No. 074, Czech Republic, as well as the world standard No. 083, USA) were optically characterized at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig in 2015 and at the Czech Metrology Institute (CMI) in Prague in 2016 within the EMRP ENV 059 project "Traceability for atmospheric total column ozone". Slit functions and the related parameters of the instruments were measured and compared with G. M. B. Dobson's specifications in his handbook. All Dobsons show a predominantly good match of the slit functions and the peak (centroid) wavelengths, with deviations between -0.11 and +0.12 nm and differences of the full width at half maximum (FWHM) between 0.13 and 0.37 nm compared to the nominal values at the shorter wavelengths. Slightly larger deviations of the FWHMs from the nominal Dobson data, up to 1.22 nm, can be seen at the longer wavelengths, especially for the slit function of the long D wavelength. However, the differences between the effective absorption coefficients (EACs) for ozone derived using Dobson's nominal values of the optical parameters on the one hand and these measured values on the other are not too large for both the old Bass-Paur (BP) and the new IUP (Institut für Umweltphysik, University of Bremen) absorption cross sections. Their inclusion in the calculation of the total ozone column (TOC) leads to improvements of significantly less than ±1 % at the AD wavelengths and between -1 and -2 % at the CD wavelength pairs in the BP scale. The effect on the TOC in the IUP scale is somewhat larger at the AD wavelengths, up to +1 % (D074), and smaller at the CD wavelength pair, from -0.44 to -1.5 %. Besides this positive effect gained from data of higher metrological quality, which is needed for trend analyses and satellite validation, it will also be possible to explain uncommon behaviours of field Dobsons during calibration services, especially once the newly developed transportable device TuPS (tuneable portable radiation source) from CMI proves its capability to provide results similar to those of the stationary setups in the laboratories of National Metrology Institutes. The field Dobsons could then be optically characterized during regular calibration campaigns as well. A corresponding publication will be prepared using the results of TuPS-based measurements of more than 10 Dobsons in field campaigns in 2017.
Bain, Paul G; Kashima, Yoshihisa; Haslam, Nick
2006-08-01
Beliefs that may underlie the importance of human values were investigated in 4 studies, drawing on research that distinguishes natural-kind (natural), nominal-kind (conventional), and artifact (functional) beliefs. Values were best characterized by artifact and nominal-kind beliefs, as well as a natural-kind belief specific to the social domain, "human nature" (Studies 1 and 2). The extent to which values were considered central to human nature was associated with value importance in both Australia and Japan (Study 2), and experimentally manipulating human nature beliefs influenced value importance (Study 3). Beyond their association with importance, human nature beliefs predicted participants' reactions to value trade-offs (Study 1) and to value-laden rhetorical statements (Study 4). Human nature beliefs therefore play a central role in the psychology of values.
Experimental research of flow parameters on the last stage of the steam turbine 1090 MW
NASA Astrophysics Data System (ADS)
Sedlák, Kamil; Hoznedl, Michal; Bednář, Lukáš; Mrózek, Lukáš; Kalista, Robert
2016-06-01
This article briefly describes the measurement and evaluation of flow parameters at the outlet of the last stage of the low-pressure casing of a 1090 MW saturated-steam turbine. The measurement was carried out using a seven-hole pneumatic probe traversed along the length of the blade at several peripheral positions, under the nominal mode and selected partial-load modes. The result is knowledge of the distribution of the static, dynamic, and total pressure along the length of the blade, and of the velocity distribution including its components. This information provides the input data for determining the efficiency of the last stage, the loss coefficient of the diffuser, and other significant parameters describing the efficiency of selected parts of the steam turbine.
Power Peaking Effect of OTTO Fuel Scheme Pebble Bed Reactor
NASA Astrophysics Data System (ADS)
Setiadipura, T.; Suwoto; Zuhair; Bakhri, S.; Sunaryo, G. R.
2018-02-01
The Pebble Bed Reactor (PBR) type of High Temperature Gas-cooled Reactor (HTGR) is a very interesting nuclear reactor design for meeting growing electricity and heat demand with superior passive safety features. The effort to introduce the PBR design to the market can be strengthened by simplifying its system with the once-through-then-out (OTTO) cycle PBR, in which the pebble fuel passes through the core only once. An important challenge in the OTTO fuel scheme is the power peaking effect, which limits the maximum nominal power or burnup of the design. A parametric survey is performed in this study to investigate the contribution of different design parameters to the power peaking effect of the OTTO cycle PBR. The PEBBED code is utilized to perform the equilibrium PBR core analysis for different design parameters and fuel schemes. The parameters include the core diameter, height-to-diameter ratio (H/D), power density, and nominal core power. The results of this study show that the effects of core diameter and H/D are stronger than those of power density and nominal core power. These results may serve as important guidance for the design optimization of OTTO fuel scheme PBRs.
75 FR 38771 - Notice of the Peanut Standards Board
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-06
...] Notice of the Peanut Standards Board AGENCY: Agricultural Marketing Service, USDA. ACTION: Notice... Board was appointed by the Secretary and announced on December 5, 2002. USDA seeks nominations for..., 2012 and June 30, 2013. USDA values diversity. In an effort to obtain nominations of diverse [[Page...
Lomnitz, Jason G.; Savageau, Michael A.
2016-01-01
Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. 
In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
The realization of temperature controller for small resistance measurement system
NASA Astrophysics Data System (ADS)
Sobecki, Jakub; Walendziuk, Wojciech; Idzkowski, Adam
2017-08-01
This paper concerns the construction and experimental testing of a temperature stabilization system for circuits that measure small resistance increments. After the system is switched on, the PCB heats up, and the long-term temperature drift alters the measurement result. The aim of this work is to reduce the time the measurement system needs to reach a constant nominal temperature, which would shorten the time required for steady-state measurements. Moreover, the influence of temperatures higher than nominal on the measurement results was tested, and the heating curve was obtained. During operation, the circuit spontaneously heats up to about 32 °C and reaches steady state after about 1200 s. Implementing a USART terminal on the PC and an NI USB-6341 data acquisition card makes it easier to record the temperature and resistance data in digital form and to process it further; it also enables changing the values of the regulator settings. This paper presents sample measurement results for several temperature values and the characteristics of the temperature and resistance changes over time, as well as their comparison with the output values. The plant is identified using the Ziegler-Nichols method. The algorithm for determining the step-response parameters and example computations of the regulator settings are included, together with example regulation characteristics of the plant.
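The Ziegler-Nichols open-loop (step-response) tuning rules referenced above map an identified first-order-plus-dead-time model to PID settings; the process gain, dead time, and time constant below are hypothetical values for a heater of this kind, not the paper's identified parameters:

```python
def zn_pid_from_step(gain, dead_time, time_const):
    """Classic Ziegler-Nichols open-loop (step-response) rules for a
    first-order-plus-dead-time process model."""
    kp = 1.2 * time_const / (gain * dead_time)   # proportional gain
    ti = 2.0 * dead_time                         # integral time
    td = 0.5 * dead_time                         # derivative time
    return kp, ti, td

# hypothetical heater identification: static gain 0.8 degC per % of
# heater power, 30 s dead time, 240 s time constant, as would be read
# off a recorded heating curve
kp, ti, td = zn_pid_from_step(0.8, 30.0, 240.0)
print(kp, ti, td)
```

The dead time and time constant are exactly the step-characteristic parameters the paper extracts from the heating curve, so the rules above turn the identification result directly into a first set of regulator settings.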
Results of an integrated structure/control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem-formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
Robust time and frequency domain estimation methods in adaptive control
NASA Technical Reports Server (NTRS)
Lamaire, Richard Orville
1987-01-01
A robust identification method was developed for use in an adaptive control system. The estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by the need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, both a nominal model and a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates and, hence, a nominal model. The first is based on the well-developed field of time-domain parameter estimation. The second finds parameter estimates by a weighted least-squares fit to a frequency-domain estimate of the model. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through simulations.
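The frequency-domain estimator can be illustrated by a weighted least-squares fit of a parametric model to frequency-response data. The first-order model structure, grid ranges, and synthetic data below are illustrative assumptions, not the thesis's actual model set:

```python
import numpy as np

def fit_first_order(w, G, weights):
    """Weighted least-squares fit of G(jw) = K / (1 + jw*tau) to
    frequency-response data: grid search over tau, with the optimal real
    gain K solved in closed form at each candidate tau."""
    best = None
    for tau in np.geomspace(1e-3, 1e3, 2000):
        H = 1.0 / (1.0 + 1j * w * tau)
        K = np.sum(weights * (np.conj(H) * G).real) / np.sum(weights * np.abs(H) ** 2)
        err = np.sum(weights * np.abs(G - K * H) ** 2)
        if best is None or err < best[0]:
            best = (err, K, tau)
    return best[1], best[2]

# Synthetic frequency-response data with true K = 2, tau = 0.5 (illustrative).
w = np.geomspace(0.01, 100, 50)
G_data = 2.0 / (1.0 + 1j * w * 0.5)
K_hat, tau_hat = fit_first_order(w, G_data, np.ones_like(w))
```

The weights would, in the spirit of the abstract, be chosen to de-emphasize frequency bands where the uncertainty bound is large.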
ACOSS Eight (Active Control of Space Structures), Phase 2
1981-09-01
A-2 Nominal Model - Equipment Section and Solar Panels; A-3 Nominal Model - Upper Support Truss. [List-of-figures and matrix entries garbled in extraction.] A sensitivity analysis technique for selecting critical system parameters is applied to the Draper tetrahedral truss structure (see Section 4-2); the equipment section and solar panels are omitted. The precision section is mounted on isolators to an inertially fixed rigid support. The mode frequencies of this
NASA Astrophysics Data System (ADS)
Rutkowski, Lucile; Masłowski, Piotr; Johansson, Alexandra C.; Khodabakhsh, Amir; Foltynowicz, Aleksandra
2018-01-01
Broadband precision spectroscopy is indispensable for providing high fidelity molecular parameters for spectroscopic databases. We have recently shown that mechanical Fourier transform spectrometers based on optical frequency combs can measure broadband high-resolution molecular spectra undistorted by the instrumental line shape (ILS) and with a highly precise frequency scale provided by the comb. The accurate measurement of the power of the comb modes interacting with the molecular sample was achieved by acquiring single-burst interferograms with nominal resolution matched to the comb mode spacing. Here we describe in detail the experimental and numerical steps needed to achieve sub-nominal resolution and retrieve ILS-free molecular spectra, i.e. with ILS-induced distortion below the noise level. We investigate the accuracy of the transition line centers retrieved by fitting to the absorption lines measured using this method. We verify the performance by measuring an ILS-free cavity-enhanced low-pressure spectrum of the 3ν1 + ν3 band of CO2 around 1575 nm with line widths narrower than the nominal resolution. We observe and quantify collisional narrowing of absorption line shape, for the first time with a comb-based spectroscopic technique. Thus retrieval of line shape parameters with accuracy not limited by the Voigt profile is now possible for entire absorption bands acquired simultaneously.
Resonance of relativistic electrons with electromagnetic ion cyclotron waves
Denton, R. E.; Jordanova, V. K.; Bortnik, J.
2015-06-29
Relativistic electrons have been thought to resonate more easily with electromagnetic ion cyclotron (EMIC) waves when the total density is large. We show that, for a particular EMIC mode, this dependence is weak because of the dependence of the wave frequency and wave vector on the density. A significant increase in the relativistic electron minimum resonant energy might occur for the H band EMIC mode only at small density, but no changes in parameters significantly decrease the minimum resonant energy from its nominal value. The minimum resonant energy depends most strongly on the thermal velocity associated with the field line motion of the hot ring current protons that drive the instability. High density due to a plasmasphere or plasmaspheric plume could possibly lead to lower minimum resonance energy by causing the He band EMIC mode to become dominant. We demonstrate these points using parameters from a ring current simulation.
NASA Technical Reports Server (NTRS)
Tolson, Robert H.; Lugo, Rafael A.; Baird, Darren T.; Cianciolo, Alicia D.; Bougher, Stephen W.; Zurek, Richard M.
2017-01-01
The Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft is a NASA orbiter designed to explore the Mars upper atmosphere, typically from 140 to 160 km altitude. In addition to the nominal science mission, MAVEN has performed several Deep Dip campaigns in which the orbit's closest point of approach, also called periapsis, was lowered to an altitude range of 115 to 135 km. MAVEN accelerometer data were used during mission operations to estimate atmospheric parameters such as density, scale height, along-track gradients, and wave structures. Density and scale height estimates were compared against those obtained from the Mars Global Reference Atmospheric Model and used to aid the MAVEN navigation team in planning maneuvers to raise and lower periapsis during Deep Dip operations. This paper describes the processes used to reconstruct atmospheric parameters from accelerometer data and presents the results of their comparison to model and navigation-derived values.
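The density and scale-height reconstruction from accelerometer data rests on the standard drag relation and an exponential-atmosphere fit. A minimal sketch with synthetic numbers; the MAVEN-specific calibration, bias removal, and filtering steps are omitted:

```python
import numpy as np

def density_from_drag(a_drag, v, m, Cd, A):
    """Freestream density from measured aerodynamic deceleration:
    a = 0.5 * rho * v^2 * Cd * A / m  =>  rho = 2*m*a / (Cd*A*v^2)."""
    return 2.0 * m * a_drag / (Cd * A * v ** 2)

def scale_height(alt_km, rho):
    """Fit rho = rho0 * exp(-(h - h0)/H) by log-linear least squares;
    returns the scale height H in km."""
    slope, _ = np.polyfit(alt_km, np.log(rho), 1)
    return -1.0 / slope

# Synthetic exponential atmosphere with H = 9 km (illustrative values only).
h = np.linspace(120.0, 160.0, 40)
rho = 1e-8 * np.exp(-(h - 120.0) / 9.0)
H = scale_height(h, rho)
```

In practice the density samples themselves would come from `density_from_drag` applied to the along-track accelerometer readings near periapsis.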
A guidance law for hypersonic descent to a point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisler, G.R.; Hull, D.G.
1992-05-01
A neighboring extremal control problem is formulated for a hypersonic glider executing a maximum-terminal-velocity descent to a stationary target. The resulting two-part feedback control scheme first solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Second, a neighboring optimal path computation about the nominal provides the lift and side-force perturbations necessary to achieve the target downrange and crossrange. On-line feedback simulations of the proposed scheme and of a form of proportional navigation are compared with an off-line parameter optimization method. The neighboring optimal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. 8 refs.
McGrory, Sarah; Taylor, Adele M; Kirin, Mirna; Corley, Janie; Pattie, Alison; Cox, Simon R; Dhillon, Baljean; Wardlaw, Joanna M; Doubal, Fergus N; Starr, John M; Trucco, Emanuele; MacGillivray, Thomas J; Deary, Ian J
2017-01-01
Aim: To examine the relationship between retinal vascular morphology and cognitive abilities in a narrow-age cohort of community-dwelling older people. Methods: Digital retinal images taken at age ∼73 years from 683 participants of the Lothian Birth Cohort 1936 (LBC1936) were analysed with Singapore I Vessel Assessment (SIVA) software. Multiple regression models were applied to determine cross-sectional associations between retinal vascular parameters and general cognitive ability (g), memory, processing speed, visuospatial ability, crystallised cognitive ability and change in IQ from childhood to older age. Results: After adjustment for cognitive ability at age 11 years and cardiovascular risk factors, venular length-to-diameter ratio was nominally significantly associated with processing speed (β=−0.116, p=0.01) and g (β=−0.079, p=0.04). Arteriolar length-to-diameter ratio was associated with visuospatial ability (β=0.092, p=0.04). Decreased arteriolar junctional exponent deviation and increased arteriolar branching coefficient values were associated with less relative decline in IQ between childhood and older age (arteriolar junctional exponent deviation: β=−0.101, p=0.02; arteriolar branching coefficient: β=0.089, p=0.04). Data are presented as standardised β coefficients (β) reflecting change in cognitive domain score associated with an increase of 1 SD unit in retinal parameter. None of these nominally significant associations remained significant after correction for multiple statistical testing. Conclusions: Retinal parameters contributed <1% of the variance in the majority of associations observed. Whereas retinal analysis may have potential for early detection of some types of age-related cognitive decline and dementia, our results present little evidence that retinal vascular features are associated with non-pathological cognitive ageing. PMID:28400371
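The pattern reported above, nominally significant p-values that do not survive correction for multiple testing, can be illustrated with the Benjamini-Hochberg false-discovery-rate procedure (one common choice; the study's exact correction method is not specified here, and the p-values below are illustrative values echoing those in the abstract):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up FDR procedure: returns a boolean list
    marking which hypotheses are rejected at false-discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:  # compare p_(r) with q * r / m
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject

# Nominal p-values of the same order as those reported, plus larger ones.
flags = benjamini_hochberg([0.01, 0.04, 0.04, 0.02, 0.04, 0.30, 0.50, 0.80], q=0.05)
```

With these inputs no hypothesis is rejected, mirroring the abstract's conclusion that none of the nominally significant associations survived correction.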
Flight Motor Set 360T010 (STS-31R). Volume 1: System Overview
NASA Technical Reports Server (NTRS)
Garecht, Diane
1990-01-01
Flight motor set 360T010 was launched at approximately 7:34 a.m. CST (090:114:12:33:50.990 GMT) on 24 Apr. 1990 after one prior launch attempt (the attempt on 10 Apr. 1990 was scrubbed following an indication of erratic operation of the Orbiter Auxiliary Power Unit No. 1). There were no problems with the solid rocket motor launches; overall motor performance was excellent. There were no debris concerns from either motor. Nearly all ballistic contract end item specification parameters were verified, with the exceptions of ignition interval, pressure rise rate, and ignition time thrust imbalance. These could not be verified due to the elimination of developmental flight instrumentation on 360L004 (STS-30R) and subsequent flights, but the low-sample-rate data that were available showed nominal propulsion performance. All ballistic and mass property parameters that could be assessed closely matched the predicted values and were well within the required contract end item specification levels. All field joint heaters and igniter joint heaters performed without anomalies. Evaluation of the ground environment instrumentation measurements again verified thermal model analysis data and showed agreement with predicted environmental effects. No launch commit criteria violations occurred. Postflight inspection again verified nominal performance of the insulation, phenolics, metal parts, and seals. Postflight evaluation indicated that both nozzles performed as expected during flight. All combustion gas was contained by insulation in the field and case-to-nozzle joints.
Likelihoods for fixed rank nomination networks
HOFF, PETER; FOSDICK, BAILEY; VOLFOVSKY, ALEX; STOVEL, KATHERINE
2014-01-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, the thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
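For the power factor PF = S²/ρ (Seebeck coefficient S, electrical resistivity ρ), the component uncertainties combine by standard first-order propagation. A sketch with illustrative magnitudes; the ZEM-3-specific bias terms are not modeled, and the inputs u_S and u_rho are assumed to already merge systematic and statistical parts:

```python
import math

def power_factor_uncertainty(S, u_S, rho, u_rho):
    """First-order (Taylor) uncertainty propagation for PF = S^2 / rho:
    (u_PF/PF)^2 = (2 u_S/S)^2 + (u_rho/rho)^2."""
    pf = S ** 2 / rho
    rel = math.sqrt((2.0 * u_S / S) ** 2 + (u_rho / rho) ** 2)
    return pf, pf * rel

# Illustrative thermoelectric values: S = 200 uV/K, rho = 1e-5 ohm*m.
pf, u_pf = power_factor_uncertainty(200e-6, 5e-6, 1.0e-5, 0.3e-6)
```

Note the factor of 2 on the Seebeck term: because S enters squared, its relative uncertainty counts twice.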
Modelling Experimental Procedures for Manipulator Calibration
1991-12-01
[Equations (12)-(21): the elementary rotation matrices about the z, y, and x axes and their composition into the roll-pitch-yaw orientation RPY(φ, θ, ψ); the matrix entries are garbled in extraction.] With the orientation now specified by RPY(φ, θ, ψ), it is only necessary to specify the translations... computed based on the nominal parameters and the i-th set of joint angles and stored in T_E^C, where C refers to the calculated value. Recall that
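The rotation-matrix equations excerpted above can be written out in code. The composition order Rz·Ry·Rx for RPY is an assumption based on the visible fragments of the original equations:

```python
import numpy as np

def rot_z(phi):
    """Elementary rotation about the z axis (eq. (14) pattern)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(theta):
    """Elementary rotation about the y axis (eq. (13) pattern)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(psi):
    """Elementary rotation about the x axis (eq. (12) pattern)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rpy(phi, theta, psi):
    """Roll-pitch-yaw orientation RPY(phi, theta, psi) = Rz(phi) Ry(theta) Rx(psi)."""
    return rot_z(phi) @ rot_y(theta) @ rot_x(psi)
```

Any such composition is orthonormal with unit determinant, which is a convenient self-check when validating a calibrated kinematic model.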
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-22
... hoc panel to provide advice through the chartered CASAC on primary (human health-based) air quality..., epidemiology, medicine, public health, biostatistics and risk assessment. Process and Deadline for Submitting.... Nominations should be submitted in time to arrive no later than August 12, 2013. EPA values and welcomes...
Epidemiologic Evaluation of Measurement Data in the Presence of Detection Limits
Lubin, Jay H.; Colt, Joanne S.; Camann, David; Davis, Scott; Cerhan, James R.; Severson, Richard K.; Bernstein, Leslie; Hartge, Patricia
2004-01-01
Quantitative measurements of environmental factors greatly improve the quality of epidemiologic studies but can pose challenges because of the presence of upper or lower detection limits or interfering compounds, which do not allow for precise measured values. We consider the regression of an environmental measurement (dependent variable) on several covariates (independent variables). Various strategies are commonly employed to impute values for interval-measured data, including assignment of one-half the detection limit to nondetected values or of “fill-in” values randomly selected from an appropriate distribution. On the basis of a limited simulation study, we found that the former approach can be biased unless the percentage of measurements below detection limits is small (5–10%). The fill-in approach generally produces unbiased parameter estimates but may produce biased variance estimates and thereby distort inference when 30% or more of the data are below detection limits. Truncated data methods (e.g., Tobit regression) and multiple imputation offer two unbiased approaches for analyzing measurement data with detection limits. If interest resides solely on regression parameters, then Tobit regression can be used. If individualized values for measurements below detection limits are needed for additional analysis, such as relative risk regression or graphical display, then multiple imputation produces unbiased estimates and nominal confidence intervals unless the proportion of missing data is extreme. We illustrate various approaches using measurements of pesticide residues in carpet dust in control subjects from a case–control study of non-Hodgkin lymphoma. PMID:15579415
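The contrast drawn above between the half-detection-limit substitution and the "fill-in" imputation can be demonstrated with a small simulation. The lognormal exposure model, the 40% censoring fraction, and the sample size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def half_dl_mean(x, dl):
    """Mean estimate after substituting DL/2 for every non-detect."""
    return np.where(x < dl, dl / 2.0, x).mean()

def fill_in_mean(x, dl, rng):
    """Mean after replacing non-detects with random draws from the assumed
    lognormal(0, 1) truncated below the detection limit (rejection sampling)."""
    n_cens = int((x < dl).sum())
    draws = np.empty(0)
    while draws.size < n_cens:
        cand = rng.lognormal(0.0, 1.0, size=4 * n_cens)
        draws = np.concatenate([draws, cand[cand < dl]])
    out = x.copy()
    out[x < dl] = draws[:n_cens]
    return out.mean()

# Fully observed "exposures", then a detection limit at the 40th percentile.
x = rng.lognormal(0.0, 1.0, size=200_000)
dl = np.quantile(x, 0.4)
m_true, m_half, m_fill = x.mean(), half_dl_mean(x, dl), fill_in_mean(x, dl, rng)
```

At this censoring level the DL/2 substitution is visibly biased while the fill-in estimate tracks the true mean, matching the behavior the abstract reports once censoring exceeds 5-10%.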
NASA Technical Reports Server (NTRS)
Ardema, M. D.
1974-01-01
Sensitivity data for advanced technology transports has been systematically collected. This data has been generated in two separate studies. In the first of these, three nominal, or base point, vehicles designed to cruise at Mach numbers .85, .93, and .98, respectively, were defined. The effects on performance and economics of perturbations to basic parameters in the areas of structures, aerodynamics, and propulsion were then determined. In all cases, aircraft were sized to meet the same payload and range as the nominals. This sensitivity data may be used to assess the relative effects of technology changes. The second study was an assessment of the effect of cruise Mach number. Three families of aircraft were investigated in the Mach number range 0.70 to 0.98: straight wing aircraft from 0.70 to 0.80; sweptwing, non-area ruled aircraft from 0.80 to 0.95; and area ruled aircraft from 0.90 to 0.98. At each Mach number, the values of wing loading, aspect ratio, and bypass ratio which resulted in minimum gross takeoff weight were used. As part of the Mach number study, an assessment of the effect of increased fuel costs was made.
Geological nominations at UNESCO World Heritage, an upstream struggle
NASA Astrophysics Data System (ADS)
Olive-Garcia, Cécile; van Wyk de Vries, Benjamin
2017-04-01
Drawing on my 10 years' experience in setting up and defending a UNESCO World Heritage geological nomination, this presentation aims to give a personal insight into this international process and the differing roles of science, subjective perception (aesthetics and 'naturality'), and politics. At this point in the process, new protocols have been tested in order to improve the dialogue, accountability and transparency between the different stakeholders: the States Parties, the IUCN, the scientific community, and UNESCO itself. Our proposal is the Chaîne des Puys-Limagne fault ensemble, which combines tectonics, geomorphological evolution and volcanology. The project's essence is a conjunction of inseparable geological features and processes, set in the context of plate tectonics. This very unicity of diverse forms and processes creates the value of the site. However, it is just this that has caused a problem, as the advisory body takes a categorical approach to nominations that separates items in order to assess them in an unconnected manner. From the start we proposed a combined approach, in which a property is seen in its entirety and the constituent elements are seen as interlinked expressions of the joint underlying phenomena. At this point, our project has received the first ever open review by an independent technical mission (jointly set up by IUCN, UNESCO and the State Party). The subsequent report was broadly supportive of the project's approach and of the value of the ensemble of features. The UNESCO committee in 2016 re-referred the nomination, acknowledging the potential Outstanding Universal Value of the site and requesting the parties to continue the upstream process (e.g. collaborative work), notably on the recommendations and conclusions of the independent technical mission report. Meetings are continuing, and I shall provide the hot-off-the-press news as this ground-breaking nomination progresses.
Solvent Hold Tank Sample Results for MCU-15-661-662-663: April 2015 Monthly Sample
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fondeur, F.; Taylor-Pashow, K.
2015-07-08
The Savannah River National Laboratory (SRNL) received one set of Solvent Hold Tank (SHT) samples (MCU-15-661, MCU-15-662, and MCU-15-663, pulled on April 2, 2015) for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-15-661-662-663 indicated a low concentration (~63% of nominal) of the suppressor (TiDG) and a slightly below-nominal concentration (~10% below nominal) of the extractant (MaxCalix). The modifier (CS-7SB) level was also 10% below its nominal value, while the Isopar™ L level was slightly above its nominal value. This analysis confirms the addition of Isopar™ L to the solvent on March 6, 2015. Although these values are below target component levels, the current levels of TiDG, CS-7SB, and MaxCalix are sufficient for continued operation without a trim addition at this time, until the next monthly sample. No impurities above the 1000 ppm level were found in this solvent. However, the sample was found to contain approximately 18.4 ug/g mercury. The gamma level increased to 8E5 dpm/mL solvent, an order-of-magnitude increase relative to previous solvent samples. The increase means less cesium is being stripped from the solvent. Further analysis is needed to determine whether the recent spike in the gamma measurement is due to external factors such as algae or other material that may impede stripping. The laboratory will continue to monitor the quality of the solvent, in particular for any new impurity or degradation of the solvent components.
Parameter study of simplified dragonfly airfoil geometry at Reynolds number of 6000.
Levy, David-Elie; Seifert, Avraham
2010-10-21
Aerodynamic study of a simplified dragonfly airfoil in gliding flight at Reynolds numbers below 10,000 is motivated by both pure scientific interest and technological applications. At these Reynolds numbers, natural insect flight can provide inspiration for the development of micro UAVs and related technologies. Insect wings are typically characterized by corrugated airfoils. The present study follows a fundamental flow physics study (Levy and Seifert, 2009), which revealed the importance of flow separation from the first corrugation, the roll-up of the separated shear layer into discrete vortices, and their role in promoting flow reattachment to the aft arc, as the leading mechanism enabling the high-lift, low-drag performance of dragonfly gliding flight. This paper describes the effect of systematic airfoil geometry variations on the aerodynamic properties of a simplified dragonfly airfoil at a Reynolds number of 6000. The parameter study includes a detailed analysis of small variations of the nominal geometry, such as corrugation placement or height, rear arc and trailing edge shape. Numerical simulations using the 2D laminar Navier-Stokes equations revealed that the flow accelerating over the first corrugation slope is followed by an unsteady pressure recovery combined with vortex shedding. The latter allows the reattachment of the flow over the rear arc. Also, the drag values are directly linked to the vortices' magnitude. This parametric study shows that geometric variations which reduce the vortices' amplitude, such as reducing the rear cavity depth or the curvature of the rear arc and trailing edge, reduce the drag values. Other changes, such as a negative deflection of the forward flat plate, extend flow reattachment over the rear arc to a larger range of mean lift coefficients and consequently reduce the drag values at higher mean lift coefficients.
The detailed geometry study enabled the definition of a corrugated airfoil geometry with enhanced aerodynamic properties, such as range and endurance factors, as compared to the nominal airfoil studied in the literature. Copyright © 2010 Elsevier Ltd. All rights reserved.
Active State Model for Autonomous Systems
NASA Technical Reports Server (NTRS)
Park, Han; Chien, Steve; Zak, Michail; James, Mark; Mackey, Ryan; Fisher, Forest
2003-01-01
The concept of the active state model (ASM) is an architecture for the development of advanced integrated fault-detection-and-isolation (FDI) systems for robotic land vehicles, pilotless aircraft, exploratory spacecraft, or other complex engineering systems that will be capable of autonomous operation. An FDI system based on the ASM concept would not only provide traditional diagnostic capabilities, but would also integrate the FDI system under a unified framework and provide a mechanism for sharing information between FDI subsystems to fully assess the overall health of the system. The ASM concept begins with definitions borrowed from psychology, wherein a system is regarded as active when it possesses self-image, self-awareness, and an ability to make decisions itself, such that it is able to perform purposeful motions and other transitions with some degree of autonomy from the environment. For an engineering system, self-image would manifest itself as the ability to determine nominal values of sensor data by use of a mathematical model of itself, and self-awareness would manifest itself as the ability to relate sensor data to their nominal values. The ASM for such a system may start with the closed-loop control dynamics that describe the evolution of state variables. As soon as this model was supplemented with nominal values of sensor data, it would possess self-image. The ability to process the current sensor data and compare them with the nominal values would represent self-awareness. On the basis of self-image and self-awareness, the ASM provides the capability for self-identification, detection of abnormalities, and self-diagnosis.
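The self-image/self-awareness loop described above reduces, at its simplest, to comparing live sensor data against a nominal model's prediction and flagging large residuals. The model, names, and threshold below are illustrative, not part of the NASA architecture:

```python
def nominal_model(u):
    """Self-image: predicted sensor reading for commanded input u
    (a hypothetical static map standing in for the closed-loop dynamics)."""
    return 2.0 * u + 1.0

def self_aware_check(u, y_measured, tol=0.5):
    """Self-awareness: relate measured data to its nominal value and
    flag an abnormality when the residual exceeds the tolerance."""
    residual = y_measured - nominal_model(u)
    return abs(residual) <= tol, residual

ok, r = self_aware_check(3.0, 7.1)    # nominal value is 7.0: healthy
bad, r2 = self_aware_check(3.0, 9.0)  # large deviation: abnormality detected
```

A full ASM would replace the static map with the closed-loop state dynamics and share such residuals across FDI subsystems.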
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
ERIC Educational Resources Information Center
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
An optimal control strategy for collision avoidance of mobile robots in non-stationary environments
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An optimal control formulation of the problem of collision avoidance of mobile robots in environments containing moving obstacles is presented. Collision avoidance is guaranteed if the minimum distance between the robot and the objects is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Furthermore, time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. Simulation results verify the value of the proposed strategy.
Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.
2005-01-01
This report presents the Hydrologic Simulation Program-FORTRAN Model (HSPF) parameters for eight basins in the coal-mining region of West Virginia. The magnitude and characteristics of model parameters from this study will assist users of HSPF in simulating streamflow at other basins in the coal-mining region of West Virginia. The parameter for nominal capacity of the upper-zone storage, UZSN, increased from south to north. The increase in UZSN with the increase in basin latitude could be due to decreasing slopes, decreasing rockiness of the soils, and increasing soil depths from south to north. A special action was given to the parameter for fraction of ground-water inflow that flows to inactive ground water, DEEPFR. The basis for this special action was related to the seasonal movement of the water table and transpiration from trees. The models were most sensitive to DEEPFR and the parameter for interception storage capacity, CEPSC. The models were also fairly sensitive to the parameter for an index representing the infiltration capacity of the soil, INFILT; the parameter for indicating the behavior of the ground-water recession flow, KVARY; the parameter for the basic ground-water recession rate, AGWRC; the parameter for nominal capacity of the upper zone storage, UZSN; the parameter for the interflow inflow, INTFW; the parameter for the interflow recession constant, IRC; and the parameter for lower zone evapotranspiration, LZETP.
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. The available published data are in the form of summary statistics: nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward-model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong nonlinearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty.
The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
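The core idea of conditioning on summary statistics can be sketched with a bare-bones ABC rejection sampler for Arrhenius parameters: draw (ln A, Ea), keep draws whose rate curve ln k(T) = ln A − Ea/(R T) lies within the published error bars at every temperature. The nominal values, error-bar widths, and priors below are illustrative, not the published H + O2 → OH + O data:

```python
import math
import random

random.seed(1)
R = 8.314  # J/(mol K)

# Hypothetical summary statistics: nominal ln k and symmetric error bars
# at a few shock-tube temperatures (illustrative numbers only).
T_data = [1000.0, 1500.0, 2000.0]
A_true, Ea_true = 1.0e11, 60_000.0
lnk_nom = [math.log(A_true) - Ea_true / (R * T) for T in T_data]
half_width = 0.15  # half-width of the error bar on ln k

def consistent(lnA, Ea):
    """Accept a draw iff its rate curve lies within every error bar."""
    return all(abs((lnA - Ea / (R * T)) - nom) <= half_width
               for T, nom in zip(T_data, lnk_nom))

# ABC rejection sampling from uniform priors on ln A and Ea.
posterior = []
for _ in range(20_000):
    lnA = random.uniform(math.log(1e10), math.log(1e12))
    Ea = random.uniform(40_000.0, 80_000.0)
    if consistent(lnA, Ea):
        posterior.append((lnA, Ea))
```

The accepted draws exhibit the strong ln A-Ea correlation that the paper stresses must be carried through to auto-ignition predictions.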
Inference of reaction rate parameters based on summary statistics from experiments
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin; ...
2016-10-15
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate para meters of the rate coefficient of the H 2/O 2-mechanism chain branching reaction H + O 2 → OH + O. Available published data is in the form of summary statistics in terms of nominal values and error bars of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the givenmore » summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Pad'e and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation resulting in orders of magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all res ult in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty. 
The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
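The rejection-ABC step described above can be sketched with a toy Arrhenius forward model: draw parameters from a prior, simulate the rate coefficient at the published temperatures, and accept draws whose summary statistics fall within the reported error bars. The priors, tolerance, and synthetic "published" statistics below are invented for illustration, not the paper's actual setup.

```python
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius(T, logA, Ea):
    """Toy forward model: rate coefficient k(T) = A * exp(-Ea / (R*T))."""
    return np.exp(logA) * np.exp(-Ea / (R * T))

def abc_rejection(T_obs, k_nominal, k_err, n_draws=20000, tol=2.0, seed=0):
    """Accept prior draws whose predicted rate curve matches the published
    nominal values to within the reported (log-scale) error bars, scaled
    by `tol` standard deviations."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        logA = rng.uniform(20.0, 30.0)   # broad, illustrative priors
        Ea = rng.uniform(5e4, 1e5)
        k_sim = arrhenius(T_obs, logA, Ea)
        # summary-statistic distance: maximum standardized deviation
        dist = np.max(np.abs(np.log(k_sim) - np.log(k_nominal)) / k_err)
        if dist < tol:
            accepted.append((logA, Ea))
    return np.array(accepted)

# Synthetic "published" summary statistics at a few temperatures
T_obs = np.array([1000.0, 1500.0, 2000.0])
true_logA, true_Ea = 25.0, 7e4
k_nominal = arrhenius(T_obs, true_logA, true_Ea)
k_err = np.full(3, 0.2)  # ~20% relative error bars on k

post = abc_rejection(T_obs, k_nominal, k_err)
```

The accepted sample approximates the posterior consistent with the summary statistics; the paper's full procedure adds MCMC, surrogates, and pooling across consistent data sets on top of this basic idea.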
Spread of status value: Rewards and the creation of status characteristics.
Harkness, Sarah K
2017-01-01
Rewards have social significance and are highly esteemed objects, but what does their ownership signify to others? Prior work has demonstrated it may be possible for these rewards to spread their status to those who possess them, such that individuals gain or lose status and influence by virtue of the rewards they display. Yet, is this spread enough to produce entirely new status characteristics by virtue of their association with rewards? I propose a theoretical extension of the spread of status value theory and offer an experimental test considering whether the status value conveyed by rewards spreads to a new, nominal characteristic of those who come to possess these objects. The results indicate that states of a nominal characteristic do gain or lose status value and behavioral influence through their association with differentially valued rewards. Thus, rewards can create new status characteristics with resulting behavioral expectations.
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
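One of the interval-estimation varieties mentioned above, a case-resampling bootstrap for Hill parameters, can be sketched as follows. The model form, starting values, and synthetic data are illustrative stand-ins for the ToxCast fits, not the study's exact configuration.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    """Three-parameter Hill model (baseline response fixed at 0)."""
    return top * c**n / (ec50**n + c**n)

def bootstrap_interval(conc, resp, n_boot=300, level=0.95, seed=1):
    """Case-resampling bootstrap percentile intervals for the Hill
    parameters; residual resampling (the other common variety) differs
    only in what gets resampled."""
    rng = np.random.default_rng(seed)
    fits = []
    idx = np.arange(len(conc))
    for _ in range(n_boot):
        s = rng.choice(idx, size=len(idx), replace=True)
        try:
            p, _ = curve_fit(hill, conc[s], resp[s],
                             p0=(100.0, 1.0, 1.0), maxfev=5000)
            fits.append(p)
        except RuntimeError:
            continue  # drop resamples where the fit fails to converge
    fits = np.array(fits)
    a = (1.0 - level) / 2.0
    return np.percentile(fits, [100 * a, 100 * (1 - a)], axis=0)

# Synthetic concentration-response data (true top=100, EC50=1, n=1.5)
rng = np.random.default_rng(0)
conc = np.repeat(np.logspace(-2, 2, 9), 3)
resp = hill(conc, 100.0, 1.0, 1.5) + rng.normal(0, 5.0, conc.size)
lo, hi = bootstrap_interval(conc, resp)
```

Comparing the actual coverage of such intervals against the nominal 95% on simulated data is exactly the kind of check the abstract's simulation study performs.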
Evaluation of a load cell model for dynamic calibration of the rotor systems research aircraft
NASA Technical Reports Server (NTRS)
Duval, R. W.; Bahrami, H.; Wellman, B.
1985-01-01
The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. An analytical model of the relationship between applied rotor loads and the resulting load cell measurements is derived by applying a force-and-moment balance to the isolated rotor/transmission system. The model is then used to estimate the applied loads from measured load cell data, as obtained from a ground-based shake test. Using nominal design values for the parameters, the estimation errors, for the case of lateral forcing, were shown to be on the order of the sensor measurement noise in all but the roll axis. An unmodeled external load appears to be the source of the error in this axis.
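The load-recovery step, inverting a linear load-cell model to estimate applied loads from measurements, amounts to a least-squares solve. The matrix below is a random placeholder, not the RSRA force-and-moment balance.

```python
import numpy as np

def estimate_applied_loads(C, m):
    """Least-squares inversion of the linear load-cell model m = C f,
    recovering the applied force/moment vector f from measurements m."""
    f_hat, *_ = np.linalg.lstsq(C, m, rcond=None)
    return f_hat

rng = np.random.default_rng(0)
C = rng.normal(size=(7, 6))             # 7 load cells, 6 load components
f_true = np.array([100.0, -50.0, 200.0, 10.0, -5.0, 2.0])
m = C @ f_true + rng.normal(0, 0.1, 7)  # measurements with sensor noise
f_hat = estimate_applied_loads(C, m)
```

With sensor-level noise, the estimation error stays on the order of the measurement noise, which is the behavior the shake-test results above report for all but the roll axis.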
Hybrid adaptive ascent flight control for a flexible launch vehicle
NASA Astrophysics Data System (ADS)
Lefevre, Brian D.
For the purpose of maintaining dynamic stability and improving guidance command tracking performance under off-nominal flight conditions, a hybrid adaptive control scheme is selected and modified for use as a launch vehicle flight controller. This architecture merges a model reference adaptive approach, which utilizes both direct and indirect adaptive elements, with a classical dynamic inversion controller. This structure is chosen for a number of reasons: the properties of the reference model can be easily adjusted to tune the desired handling qualities of the spacecraft, the indirect adaptive element (which consists of an online parameter identification algorithm) continually refines the estimates of the evolving characteristic parameters utilized in the dynamic inversion, and the direct adaptive element (which consists of a neural network) augments the linear feedback signal to compensate for any nonlinearities in the vehicle dynamics. The combination of these elements enables the control system to retain the nonlinear capabilities of an adaptive network while relying heavily on the linear portion of the feedback signal to dictate the dynamic response under most operating conditions. To begin the analysis, the ascent dynamics of a launch vehicle with a single 1st stage rocket motor (typical of the Ares 1 spacecraft) are characterized. The dynamics are then linearized with assumptions that are appropriate for a launch vehicle, so that the resulting equations may be inverted by the flight controller in order to compute the control signals necessary to generate the desired response from the vehicle. Next, the development of the hybrid adaptive launch vehicle ascent flight control architecture is discussed in detail. 
Alterations of the generic hybrid adaptive control architecture include the incorporation of a command conversion operation which transforms guidance input from quaternion form (as provided by NASA) to the body-fixed angular rate commands needed by the hybrid adaptive flight controller, development of a Newton's method based online parameter update that is modified to include a step size which regulates the rate of change in the parameter estimates, comparison of the modified Newton's method and recursive least squares online parameter update algorithms, modification of the neural network's input structure to accommodate for the nature of the nonlinearities present in a launch vehicle's ascent flight, examination of both tracking error based and modeling error based neural network weight update laws, and integration of feedback filters for the purpose of preventing harmful interaction between the flight control system and flexible structural modes. To validate the hybrid adaptive controller, a high-fidelity Ares I ascent flight simulator and a classical gain-scheduled proportional-integral-derivative (PID) ascent flight controller were obtained from the NASA Marshall Space Flight Center. The classical PID flight controller is used as a benchmark when analyzing the performance of the hybrid adaptive flight controller. Simulations are conducted which model both nominal and off-nominal flight conditions with structural flexibility of the vehicle either enabled or disabled. First, rigid body ascent simulations are performed with the hybrid adaptive controller under nominal flight conditions for the purpose of selecting the update laws which drive the indirect and direct adaptive components. With the neural network disabled, the results revealed that the recursive least squares online parameter update caused high frequency oscillations to appear in the engine gimbal commands. 
This is highly undesirable for long and slender launch vehicles, such as the Ares I, because such oscillation of the rocket nozzle could excite unstable structural flex modes. In contrast, the modified Newton's method online parameter update produced smooth control signals and was thus selected for use in the hybrid adaptive launch vehicle flight controller. In the simulations where the online parameter identification algorithm was disabled, the tracking error based neural network weight update law forced the network's output to diverge despite repeated reductions of the adaptive learning rate. As a result, the modeling error based neural network weight update law (which generated bounded signals) is utilized by the hybrid adaptive controller in all subsequent simulations. Comparing the PID and hybrid adaptive flight controllers under nominal flight conditions in rigid body ascent simulations showed that their tracking error magnitudes are similar for a period of time during the middle of the ascent phase. Though the PID controller performs better for a short interval around the 20 second mark, the hybrid adaptive controller performs far better from roughly 70 to 120 seconds. Elevating the aerodynamic loads by increasing the force and moment coefficients produced results very similar to the nominal case. However, applying a 5% or 10% thrust reduction to the first stage rocket motor causes the tracking error magnitude observed by the PID controller to be significantly elevated and diverge rapidly as the simulation concludes. In contrast, the hybrid adaptive controller steadily maintains smaller errors (often less than 50% of the corresponding PID value). Under the same sets of flight conditions with flexibility enabled, the results exhibit similar trends with the hybrid adaptive controller performing even better in each case. 
Again, the reduction of the first stage rocket motor's thrust clearly illustrated the superior robustness of the hybrid adaptive flight controller.
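The "Newton's method with a step size" modification described above can be illustrated on a scalar online-identification toy problem: the raw Newton step is clipped so the parameter estimate cannot change faster than a set rate, which is what smooths the control signals. The plant, gains, and limits here are invented for illustration, not the Ares I controller.

```python
import numpy as np

def damped_newton_update(theta, grad, hess, max_step=0.05):
    """One Newton step with a step-size limit: the raw step -grad/hess is
    clipped so the estimate moves at most `max_step` per iteration."""
    step = -grad / hess
    return theta + np.clip(step, -max_step, max_step)

# Illustrative online identification of a scalar gain b in y = b*u + noise
rng = np.random.default_rng(0)
b_true, theta = 2.0, 0.0
for _ in range(400):
    u = rng.normal()
    y = b_true * u + 0.01 * rng.normal()
    err = theta * u - y           # prediction error
    grad = 2.0 * err * u          # d/dtheta of err^2
    hess = 2.0 * u * u + 1e-6     # second derivative (regularized)
    theta = damped_newton_update(theta, grad, hess)
```

Without the clip, a single noisy sample at small input amplitude can produce a huge Newton step; with it, the estimate converges smoothly, mirroring the smooth gimbal commands reported above.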
Hudson, James I.; Gasior, Maria; Herman, Barry K.; Radewonuk, Jana; Wilfley, Denise; Busner, Joan
2017-01-01
Abstract Objective This study examined the time course of efficacy‐related endpoints for lisdexamfetamine dimesylate (LDX) versus placebo in adults with protocol‐defined moderate to severe binge‐eating disorder (BED). Methods In two 12‐week, double‐blind, placebo‐controlled studies, adults meeting DSM‐IV‐TR BED criteria were randomized 1:1 to receive placebo or dose‐optimized LDX (50 or 70 mg). Analyses across visits used mixed‐effects models for repeated measures (binge eating days/week, binge eating episodes/week, Yale‐Brown Obsessive Compulsive Scale modified for Binge Eating [Y‐BOCS‐BE] scores, percentage body weight change) and chi‐square tests (Clinical Global Impressions—Improvement [CGI‐I; from the perspective of BED symptoms] scale dichotomized as improved or not improved). These analyses were not part of the prespecified testing strategy, so reported p values are nominal (unadjusted and descriptive only). Results Least squares mean treatment differences for change from baseline in both studies favored LDX over placebo (all nominal p values < .001) starting at Week 1 for binge eating days/week, binge‐eating episodes/week, and percentage weight change and at the first posttreatment assessment (Week 4) for Y‐BOCS‐BE total and domain scores. On the CGI‐I, more participants on LDX than placebo were categorized as improved starting at Week 1 in both studies (both nominal p values < .001). Across these efficacy‐related endpoints, the superiority of LDX over placebo was maintained at each posttreatment assessment in both studies (all nominal p values < .001). Discussion In adults with BED, LDX treatment appeared to be associated with improvement on efficacy measures as early as 1 week, which was maintained throughout the 12‐week studies. PMID:28481434
Determination of the measurement threshold in gamma-ray spectrometry.
Korun, M; Vodenik, B; Zorko, B
2017-03-01
In gamma-ray spectrometry, the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it presents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability of making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of the radon daughters 214Pb and 214Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold, the probability of detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak-area uncertainty, were determined for 54 peaks where the probability of detection varied between a few percent and about 95% and the relative peak-area uncertainty between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located even though the true value of the measurand exceeds the decision threshold.
System and Method for Outlier Detection via Estimating Clusters
NASA Technical Reports Server (NTRS)
Iverson, David J. (Inventor)
2016-01-01
An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out of family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined through analyzing a degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy to interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
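A minimal stand-in for the cluster-based modeling step described above: fit clusters to nominal training data, then score new samples by their distance to the nearest nominal cluster, so larger scores indicate more off-nominal behavior. Plain k-means is used here as an illustrative choice; the patented method's actual clustering and scoring differ in detail.

```python
import numpy as np

def fit_clusters(train, k=8, iters=50, seed=0):
    """Plain k-means on nominal training data."""
    rng = np.random.default_rng(seed)
    centers = train[rng.choice(len(train), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(train[:, None] - centers[None], axis=2)
        lab = d.argmin(axis=1)
        for j in range(k):
            if np.any(lab == j):          # keep old center if cluster empties
                centers[j] = train[lab == j].mean(axis=0)
    return centers

def deviation_score(x, centers):
    """Distance from a new sample to the closest nominal cluster."""
    return np.min(np.linalg.norm(centers - x, axis=1))

# Nominal 2-D sensor data; an off-nominal sample scores much higher
train = np.random.default_rng(1).normal(size=(500, 2))
centers = fit_clusters(train)
s_nom = deviation_score(np.zeros(2), centers)
s_out = deviation_score(np.array([8.0, 8.0]), centers)
```

The single numeric score per sample is the easy-to-interpret output the abstract describes; per-parameter contributions can be read off the componentwise distances to the nearest center.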
Evaluation and Validation of the Messinger Freezing Fraction
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2005-01-01
One of the most important non-dimensional parameters used in ice-accretion modeling and scaling studies is the freezing fraction defined by the heat-balance analysis of Messinger. For fifty years this parameter has been used to indicate how rapidly freezing takes place when super-cooled water strikes a solid body. The value ranges from 0 (no freezing) to 1 (water freezes immediately on impact), and the magnitude has been shown to play a major role in determining the physical appearance of the accreted ice. Because of its importance to ice shape, this parameter and the physics underlying the expressions used to calculate it have been questioned from time to time. Until now, there has been no strong evidence either validating or casting doubt on the current expressions. This paper presents experimental measurements of the leading-edge thickness of a number of ice shapes for a variety of test conditions with nominal freezing fractions from 0.3 to 1.0. From these thickness measurements, experimental freezing fractions were calculated and compared with values found from the Messinger analysis as applied by Ruff. Within the experimental uncertainty of measuring the leading-edge thickness, agreement of the experimental and analytical freezing fraction was very good. It is also shown that values of analytical freezing fraction were entirely consistent with observed ice shapes at and near rime conditions: At an analytical freezing fraction of unity, experimental ice shapes displayed the classic rime shape, while for conditions producing analytical freezing fractions slightly lower than unity, glaze features started to appear.
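On the thickness-to-freezing-fraction step: at the stagnation line, the frozen mass per unit area is the ice density times the measured leading-edge thickness, and the impinging water mass per unit area is the product of stagnation collection efficiency, liquid water content, airspeed, and spray time. Their ratio gives an experimental freezing fraction; the sample values below are invented, and this form is a sketch of the standard relation rather than the paper's exact working equations.

```python
def experimental_freezing_fraction(delta, rho_ice, beta0, lwc, v, tau):
    """n = (ice mass frozen per unit area) / (water mass impinging per
    unit area) at the stagnation line:
    rho_ice * delta / (beta0 * LWC * V * tau)."""
    return rho_ice * delta / (beta0 * lwc * v * tau)

# Illustrative values: 4.6 mm leading-edge thickness after a 7-minute spray
n = experimental_freezing_fraction(
    delta=0.0046,    # m, measured leading-edge ice thickness
    rho_ice=917.0,   # kg/m^3
    beta0=0.6,       # stagnation collection efficiency
    lwc=0.5e-3,      # kg/m^3, liquid water content
    v=67.0,          # m/s, airspeed
    tau=420.0)       # s, spray time
```

A value of 1 recovers the rime limit (all impinging water freezes), consistent with the classic rime shapes reported at analytical freezing fractions of unity.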
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
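The variance-based step, first-order Sobol' indices from Monte Carlo, can be sketched generically with the pick-and-freeze estimator. The test function and uniform inputs below are illustrative; the study applies this machinery to its analytical periodic-error model instead.

```python
import numpy as np

def first_order_sobol(f, n_params, n=20000, seed=0):
    """First-order Sobol' indices via the pick-and-freeze Monte Carlo
    estimator with independent uniform(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n_params))
    B = rng.random((n, n_params))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze input i from the B sample
        S[i] = np.mean(fB * (f(ABi) - fA)) / var   # Saltelli-type estimator
    return S

# Check on an additive test function with known variance shares 16:4:1
S = first_order_sobol(lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2], 3)
```

For an additive model the indices sum to one; interactions between setup imperfections would show up as total-order indices exceeding these first-order values.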
Automatic control design procedures for restructurable aircraft control
NASA Technical Reports Server (NTRS)
Looze, D. P.; Krolewski, S.; Weiss, J.; Barrett, N.; Eterno, J.
1985-01-01
A simple, reliable automatic redesign procedure for restructurable control is discussed. This procedure is based on Linear Quadratic (LQ) design methodologies. It employs a robust control system design for the unfailed aircraft to minimize the effects of failed surfaces and to extend the time available for restructuring the Flight Control System. The procedure uses the LQ design parameters for the unfailed system as a basis for choosing the design parameters of the failed system. This philosophy allows the engineering trade-offs that were present in the nominal design to be inherited by the restructurable design. In particular, it allows bandwidth limitations and performance trade-offs to be incorporated in the redesigned system. The procedure also has several other desirable features. It effectively redistributes authority among the available control effectors to maximize the system performance subject to actuator limitations and constraints. It provides a graceful performance degradation as the amount of control authority lessens. When given the parameters of the unfailed aircraft, the automatic redesign procedure reproduces the nominal control system design.
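The core "inherit the nominal trade-offs" idea can be sketched directly: solve the LQ problem for the failed plant while reusing the nominal weighting matrices Q and R. The double-integrator-like plant and the 50% effectiveness loss below are toy stand-ins for an aircraft model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQ regulator gain K = R^-1 B' P from the algebraic
    Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy nominal plant and LQ weights
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B_nom = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K_nom = lqr_gain(A, B_nom, Q, R)

# "Failure": 50% loss of control effectiveness; redesign with the SAME
# Q and R, so the nominal design trade-offs carry over automatically
B_fail = 0.5 * B_nom
K_fail = lqr_gain(A, B_fail, Q, R)
```

Both closed loops are stable; the redesigned gain simply works harder with the degraded effector, and when the plant is unfailed the procedure reproduces the nominal gain, as the abstract notes.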
Fine-scale patterns of population stratification confound rare variant association tests.
O'Connor, Timothy D; Kiezun, Adam; Bamshad, Michael; Rich, Stephen S; Smith, Joshua D; Turner, Emily; Leal, Suzanne M; Akey, Joshua M
2013-01-01
Advances in next-generation sequencing technology have enabled systematic exploration of the contribution of rare variation to Mendelian and complex diseases. Although it is well known that population stratification can generate spurious associations with common alleles, its impact on rare variant association methods remains poorly understood. Here, we performed exhaustive coalescent simulations with demographic parameters calibrated from exome sequence data to evaluate the performance of nine rare variant association methods in the presence of fine-scale population structure. We find that all methods have an inflated spurious association rate for parameter values that are consistent with levels of differentiation typical of European populations. For example, at a nominal significance level of 5%, some test statistics have a spurious association rate as high as 40%. Finally, we empirically assess the impact of population stratification in a large data set of 4,298 European American exomes. Our results have important implications for the design, analysis, and interpretation of rare variant genome-wide association studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahoo, N; Zhu, X; Zhang, X
Purpose: To quantify the impact of range and setup uncertainties on various dosimetric indices that are used to assess normal tissue toxicities of patients receiving passive scattering proton beam therapy (PSPBT). Methods: Robust analysis of sample treatment plans of six brain cancer patients treated with PSPBT at our facility, for whom the maximum brain stem dose exceeded 5800 CcGE, was performed. The DVH of each plan was calculated in an Eclipse treatment planning system (TPS) version 11 applying ±3.5% range uncertainty and ±3 mm shift of the isocenter in the x, y and z directions to account for setup uncertainties. Worst-case dose indices for the brain stem and whole brain were compared to their values in the nominal plan to determine the average change in their values. For the brain stem, maximum dose to 1 cc of volume, dose to 10%, 50%, 90% of volume (D10, D50, D90) and volume receiving 6000, 5400, 5000, 4500, 4000 CcGE (V60, V54, V50, V45, V40) were evaluated. For the whole brain, maximum dose to 1 cc of volume, and volume receiving 5400, 5000, 4500, 4000, 3000 CcGE (V54, V50, V45, V40 and V30) were assessed. Results: The average changes in the values of these indices in the worst-case scenarios relative to the nominal plan were as follows. Brain stem; maximum dose to 1 cc of volume: 1.1%, D10: 1.4%, D50: 8.0%, D90: 73.3%, V60: 116.9%, V54: 27.7%, V50: 21.2%, V45: 16.2%, V40: 13.6%. Whole brain; maximum dose to 1 cc of volume: 0.3%, V54: 11.4%, V50: 13.0%, V45: 13.6%, V40: 14.1%, V30: 13.5%. Conclusion: Large to modest changes in the dosimetric indices for the brain stem and whole brain compared to the nominal plan due to range and setup uncertainties were observed. Such potential changes should be taken into account while using any dosimetric parameters for outcome evaluation of patients receiving proton therapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D; MacDougall, R
2016-06-15
Purpose: Accurate values for Kerma-Area-Product (KAP) are needed for patient dosimetry and quality control for exams utilizing radiographic and/or fluoroscopic imaging. The KAP measured using a typical direct KAP meter built with a parallel-plate transmission ionization chamber is not precise and depends on the energy spectrum of diagnostic x-rays. This study compared the accuracy and reproducibility of KAP derived from system parameters with values measured with a direct KAP meter. Methods: The IEC tolerance for displayed KAP is specified up to ±35% above 2.5 Gy·cm², and manufacturers' specifications are typically ±25%. KAP values from the direct KAP meter drift with time, leading to replacement or re-calibration. More precise and consistent KAP is achievable utilizing a database of known radiation output for various system parameters. The integrated KAP meter was removed from a radiography system. A total of 48 measurements of air kerma were acquired at x-ray tube potentials from 40 to 150 kVp in 10 kVp increments using an ion-chamber-type external dosimeter in free-in-air geometry for four different types of filter combinations following the manufacturer's service procedure. These data were used to create updated correction factors that determine air kerma computationally for given system parameters. Results of calculated KAP were evaluated against results using a calibrated ion-chamber-based dosimeter and a computed radiography imaging plate to measure x-ray field size. Results: The calculated KAP from system parameters was accurate to within 4% deviation at all diagnostic x-ray tube potentials tested, from 50 to 140 kVp. In contrast, deviations of up to 25% were measured from the KAP displayed by the direct KAP meter. Conclusion: The “calculated KAP” approach provides the nominal advantage of improved accuracy and precision of displayed KAP as well as reduced cost of calibrating or replacing integrated KAP meters.
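The "calculated KAP" idea reduces to: look up the stored air-kerma output per mAs for the current technique, multiply by the mAs, and multiply by the measured x-ray field area. All numbers below are made-up placeholders, not the system's actual calibration table.

```python
def calculated_kap(kerma_table, kvp, mas, field_area_cm2):
    """'Calculated KAP' from stored system output data: air kerma per mAs
    at the calibrated geometry, times mAs, times the x-ray field area.
    Returns KAP in Gy*cm^2 (table values in mGy/mAs)."""
    kerma_mgy = kerma_table[kvp] * mas
    return kerma_mgy * field_area_cm2 / 1000.0

# Hypothetical calibration: air kerma (mGy/mAs) by tube potential, for one
# filter combination; a real table also indexes filtration and geometry
kerma_per_mas = {80: 0.060, 100: 0.095, 120: 0.135}
kap = calculated_kap(kerma_per_mas, kvp=100, mas=10.0, field_area_cm2=400.0)
```

Because the table is re-derived from dosimeter measurements rather than read from a drifting transmission chamber, the computed value stays consistent between calibrations.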
Guide to analyzing investment options using TWIGS.
Charles R Blinn; Dietmar W. Rose; Monique L. Belli
1988-01-01
Describes methods for analyzing economic return of simulated stand management alternatives in TWIGS. Defines and discusses net present value, equivalent annual income, soil expectation value, and real vs. nominal analyses. Discusses risk and sensitivity analysis when comparing alternatives.
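The decision criteria named above have compact definitions, sketched here with an invented stand and rates (this is generic forestry-economics arithmetic, not TWIGS output).

```python
def npv(cash_flows, rate):
    """Net present value of (year, amount) cash flows at a discount rate."""
    return sum(amt / (1.0 + rate) ** yr for yr, amt in cash_flows)

def equivalent_annual_income(npv_value, rate, years):
    """Level annual payment over `years` with the same present value."""
    return npv_value * rate / (1.0 - (1.0 + rate) ** -years)

def soil_expectation_value(rotation_npv, rate, years):
    """NPV of an infinite series of identical rotations (Faustmann)."""
    g = (1.0 + rate) ** years
    return rotation_npv * g / (g - 1.0)

def real_rate(nominal_rate, inflation):
    """Fisher relation: the real rate used when cash flows are stated in
    constant dollars (real analysis) rather than inflated (nominal)."""
    return (1.0 + nominal_rate) / (1.0 + inflation) - 1.0

# Invented stand: $400 planting now, $250 thinning at year 15,
# $3000 final harvest at year 40; constant-dollar (real) analysis
r = real_rate(0.08, 0.03)
flows = [(0, -400.0), (15, 250.0), (40, 3000.0)]
v = npv(flows, r)
eai = equivalent_annual_income(v, r, 40)
sev = soil_expectation_value(v, r, 40)
```

Mixing a nominal discount rate with constant-dollar cash flows (or vice versa) is the classic error the real-vs-nominal distinction guards against.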
NASA Astrophysics Data System (ADS)
Regonda, Satish Kumar; Seo, Dong-Jun; Lawrence, Bill; Brown, James D.; Demargne, Julie
2013-08-01
We present a statistical procedure for generating short-term ensemble streamflow forecasts from single-valued, or deterministic, streamflow forecasts produced operationally by the U.S. National Weather Service (NWS) River Forecast Centers (RFCs). The resulting ensemble streamflow forecast provides an estimate of the predictive uncertainty associated with the single-valued forecast to support risk-based decision making by the forecasters and by the users of the forecast products, such as emergency managers. Forced by single-valued quantitative precipitation and temperature forecasts (QPF, QTF), the single-valued streamflow forecasts are produced at a 6-h time step nominally out to 5 days into the future. The single-valued streamflow forecasts reflect various run-time modifications, or "manual data assimilation", applied by the human forecasters in an attempt to reduce error from various sources in the end-to-end forecast process. The proposed procedure generates ensemble traces of streamflow from a parsimonious approximation of the conditional multivariate probability distribution of future streamflow given the single-valued streamflow forecast, QPF, and the most recent streamflow observation. For parameter estimation and evaluation, we used a multiyear archive of the single-valued river stage forecast produced operationally by the NWS Arkansas-Red River Basin River Forecast Center (ABRFC) in Tulsa, Oklahoma. As a by-product of parameter estimation, the procedure provides a categorical assessment of the effective lead time of the operational hydrologic forecasts for different QPF and forecast flow conditions. To evaluate the procedure, we carried out hindcasting experiments in dependent and cross-validation modes. The results indicate that the short-term streamflow ensemble hindcasts generated from the procedure are generally reliable within the effective lead time of the single-valued forecasts and well capture the skill of the single-valued forecasts. 
For smaller basins, however, the effective lead time is significantly reduced by short basin memory and reduced skill in the single-valued QPF.
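The core sampling step, drawing ensemble traces from an approximate conditional multivariate normal given the single-valued forecast, looks like this in outline. The statistics below are invented; the operational procedure estimates them from the forecast archive and works in transformed (e.g., normal-quantile) space.

```python
import numpy as np

def conditional_ensemble(fcst, mu, cov, obs_idx, n_members=50, seed=0):
    """Sample from the conditional multivariate normal of the remaining
    variables given the conditioning variables at indices `obs_idx`."""
    obs_idx = np.asarray(obs_idx)
    pred_idx = np.setdiff1d(np.arange(len(mu)), obs_idx)
    S11 = cov[np.ix_(pred_idx, pred_idx)]
    S12 = cov[np.ix_(pred_idx, obs_idx)]
    S22 = cov[np.ix_(obs_idx, obs_idx)]
    gain = S12 @ np.linalg.inv(S22)
    cond_mu = mu[pred_idx] + gain @ (np.atleast_1d(fcst) - mu[obs_idx])
    cond_cov = S11 - gain @ S12.T
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(cond_mu, cond_cov, size=n_members)

# 3 variables: the single-valued forecast (index 0) and flow at two leads
mu = np.array([10.0, 10.0, 10.0])
cov = np.array([[4.0, 3.0, 2.0],
                [3.0, 4.0, 3.0],
                [2.0, 3.0, 4.0]])
ens = conditional_ensemble(fcst=14.0, mu=mu, cov=cov, obs_idx=[0])
```

An above-normal single-valued forecast shifts the ensemble mean upward while the conditional covariance supplies the predictive spread, which is the uncertainty estimate the abstract describes.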
Hysteresis and uncertainty in soil water-retention curve parameters
Likos, William J.; Lu, Ning; Godt, Jonathan W.
2014-01-01
Accurate estimates of soil hydraulic parameters representing wetting and drying paths are required for predicting hydraulic and mechanical responses in a large number of applications. A comprehensive suite of laboratory experiments was conducted to measure hysteretic soil-water characteristic curves (SWCCs) representing a wide range of soil types. Results were used to quantitatively assess differences and uncertainty in three simplifications frequently adopted to estimate wetting-path SWCC parameters from more easily measured drying curves. They are the following: (1) α^w = 2α^d, (2) n^w = n^d, and (3) θ_s^w = θ_s^d, where α, n, and θ_s are fitting parameters entering van Genuchten’s commonly adopted SWCC model, and the superscripts w and d indicate wetting and drying paths, respectively. The average ratio α^w/α^d for the data set was 2.24±1.25. Nominally cohesive soils had a lower α^w/α^d ratio (1.73±0.94) than nominally cohesionless soils (3.14±1.27). The average n^w/n^d ratio was 1.01±0.11 with no significant dependency on soil type, thus confirming the n^w = n^d simplification for a wider range of soil types than previously available. Water content at zero suction during wetting (θ_s^w) was consistently less than during drying (θ_s^d) owing to air entrapment. The θ_s^w/θ_s^d ratio averaged 0.85±0.10 and was comparable for nominally cohesive (0.87±0.11) and cohesionless (0.81±0.08) soils. Regression statistics are provided to quantitatively account for uncertainty in estimating hysteretic retention curves. Practical consequences are demonstrated for two case studies.
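A minimal sketch of using the study's average ratios to estimate a wetting-path curve from a drying-path fit. The sample drying parameters are invented (α in 1/kPa, suction in kPa), and the fixed residual water content is an illustrative simplification.

```python
def van_genuchten(psi, alpha, n, theta_s, theta_r=0.05):
    """van Genuchten (1980) water content at suction psi, with m = 1-1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** m

def wetting_from_drying(alpha_d, n_d, theta_s_d,
                        alpha_ratio=2.24, theta_ratio=0.85):
    """Estimate wetting-path parameters from a drying-path fit using the
    study's average ratios: alpha^w ~ 2.24 alpha^d, n^w = n^d,
    theta_s^w ~ 0.85 theta_s^d."""
    return alpha_ratio * alpha_d, n_d, theta_ratio * theta_s_d

# Invented drying-path fit: alpha^d = 0.05 1/kPa, n^d = 2.0, theta_s^d = 0.40
alpha_w, n_w, theta_s_w = wetting_from_drying(0.05, 2.0, 0.40)
theta_dry = van_genuchten(10.0, 0.05, 2.0, 0.40)
theta_wet = van_genuchten(10.0, alpha_w, n_w, theta_s_w)
```

The larger wetting-path α and smaller θ_s place the wetting curve below the drying curve at a given suction, which is the hysteresis loop the measurements document.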
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, Ernest; Hadgu, Teklu; Greenberg, Harris
This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity), the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).
Luan, Jian'an; Mihailov, Evelin; Metspalu, Andres; Forouhi, Nita G.; Magnusson, Patrik K. E.; Pedersen, Nancy L.; Hallmans, Göran; Chu, Audrey Y.; Justice, Anne E.; Graff, Mariaelisa; Rose, Lynda M.; Langenberg, Claudia; Cupples, L. Adrienne; Ridker, Paul M.; Ong, Ken K.; Loos, Ruth J. F.; Chasman, Daniel I.; Ingelsson, Erik; Kilpeläinen, Tuomas O.; Scott, Robert A.; Mägi, Reedik
2017-01-01
Phenotypic variance heterogeneity across genotypes at a single nucleotide polymorphism (SNP) may reflect underlying gene-environment (G×E) or gene-gene interactions. We modeled variance heterogeneity for blood lipids and BMI in up to 44,211 participants and investigated relationships between variance effects (Pv), G×E interaction effects (with smoking and physical activity), and marginal genetic effects (Pm). Correlations between Pv and Pm were stronger for SNPs with established marginal effects (Spearman’s ρ = 0.401 for triglycerides, and ρ = 0.236 for BMI) compared to all SNPs. When Pv and Pm were compared for all pruned SNPs, only BMI was statistically significant (Spearman’s ρ = 0.010). Overall, SNPs with established marginal effects were overrepresented in the nominally significant part of the Pv distribution (Pbinomial < 0.05). SNPs from the top 1% of the Pm distribution for BMI had more significant Pv values (PMann–Whitney = 1.46×10^-5), and the odds ratio of SNPs with nominally significant (<0.05) Pm and Pv was 1.33 (95% CI: 1.12, 1.57) for BMI. Moreover, BMI SNPs with nominally significant G×E interaction P-values (Pint < 0.05) were enriched with nominally significant Pv values (Pbinomial = 8.63×10^-9 and 8.52×10^-7 for SNP × smoking and SNP × physical activity, respectively). We conclude that some loci with strong marginal effects may be good candidates for G×E, and variance-based prioritization can be used to identify them. PMID:28614350
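One standard way to compute a variance-heterogeneity P-value (Pv) across genotype groups is a median-centered Levene (Brown-Forsythe) test, shown here as an illustration with a simulated variance-QTL; the paper's exact test may differ.

```python
import numpy as np
from scipy.stats import levene

def variance_pvalue(genotypes, phenotype):
    """Brown-Forsythe test of equal phenotypic variance across the 0/1/2
    genotype groups at a SNP; returns the Pv-style P-value."""
    groups = [phenotype[genotypes == g] for g in (0, 1, 2)]
    return levene(*[g for g in groups if len(g) > 1],
                  center='median').pvalue

rng = np.random.default_rng(0)
g = rng.integers(0, 3, 3000)
# A simulated variance-QTL: phenotypic spread grows with allele count,
# as would arise from an unmodeled G×E interaction
y = rng.normal(0.0, 1.0 + 0.4 * g)
p_var = variance_pvalue(g, y)
```

Ranking SNPs by such Pv values is the variance-based prioritization step the abstract proposes for nominating G×E candidates.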
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
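The inaccuracy of the mean-of-ratios estimator under additive technical errors can be illustrated with a small Monte Carlo sketch. The "true" hydration fraction, subject ranges, and error magnitudes below are invented for illustration, and the simple ratio-of-sums comparator is not the paper's instrumental variables estimator, only a stand-in that is less sensitive to additive zero-mean errors.

```python
import random

random.seed(42)
TRUE_HF = 0.73          # assumed "true" hydration fraction, for illustration
N = 20000

ratios, tbw_sum, ffm_sum = [], 0.0, 0.0
for _ in range(N):
    ffm_true = random.uniform(30.0, 70.0)                   # kg, hypothetical
    tbw_meas = TRUE_HF * ffm_true + random.gauss(0.0, 2.0)  # additive error
    ffm_meas = ffm_true + random.gauss(0.0, 5.0)            # additive error
    ratios.append(tbw_meas / ffm_meas)
    tbw_sum += tbw_meas
    ffm_sum += ffm_meas

mean_of_ratios = sum(ratios) / N      # biased upward by denominator noise
ratio_of_means = tbw_sum / ffm_sum    # errors largely average out here
```

The mean of individual TBW/FFM ratios lands systematically above the true value (Jensen's inequality applied to the noisy denominator), which is the kind of inaccuracy the abstract describes.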
NASA Astrophysics Data System (ADS)
Ahmad, Iqbal; Shah, Syed Mujtaba; Ashiq, Muhammad Naeem; Nawaz, Faisal; Shah, Afzal; Siddiq, Muhammad; Fahim, Iqra; Khan, Samiullah
2016-10-01
The microemulsion method has been used for the synthesis of highly resistive spinel nanoferrites with nominal composition Sr1- x Nd x Fe2- y Mn y O4 (0.0 ≤ x ≤ 0.1, 0.0 ≤ y ≤ 1.0) for high frequency device applications. X-ray diffraction (XRD) results confirmed that these ferrites have a cubic spinel structure with a mean crystallite size ranging from 34 nm to 47 nm. The co-substitution of Nd3+ and Mn2+ ions was performed, and its effect on electrical, dielectric and impedance properties was analyzed employing direct current (DC) resistivity measurements, dielectric measurements and electrochemical impedance spectroscopy (EIS). The DC resistivity ( ρ) value was the highest for the composition Sr0.90Nd0.1FeMnO4, but for the same composition, dielectric parameters and alternating current (AC) conductivity showed their minimum values. In the lower frequency range, the magnitudes of the dielectric parameters decrease with increasing frequency and become almost frequency-independent at higher frequencies. Dielectric polarization has been employed to explain these results. It was inferred from the EIS results that the conduction process in the studied ferrite materials is predominantly governed by the grain boundary volume.
A discrete decentralized variable structure robotic controller
NASA Technical Reports Server (NTRS)
Tumeh, Zuheir S.
1989-01-01
A decentralized trajectory controller for robotic manipulators is designed and tested using a multiprocessor architecture and a PUMA 560 robot arm. The controller is made up of a nominal model-based component and a correction component based on a variable structure suction control approach. The second control component is designed using bounds on the difference between the used and actual values of the model parameters. Since the continuous manipulator system is digitally controlled along a trajectory, a discretized equivalent model of the manipulator is used to derive the controller. The motivation for decentralized control is that the derived algorithms can be executed in parallel using a distributed, relatively inexpensive, architecture where each joint is assigned a microprocessor. Nonlinear interaction and coupling between joints is treated as a disturbance torque that is estimated and compensated for.
NASA Technical Reports Server (NTRS)
Juhasz, A.
1974-01-01
The performance of a short, highly asymmetric annular diffuser equipped with wall bleed (suction) capability was evaluated at nominal inlet Mach numbers of 0.188, 0.264, and 0.324 with the inlet pressure and temperature at near-ambient values. The diffuser had an area ratio of 2.75 and a length-to-inlet-height ratio of 1.6. Results show that the radial profiles of diffuser exit velocity could be controlled from a severely hub-peaked to a slightly tip-biased form by selective use of bleed. At the same time, other performance parameters were also improved. These results indicate the possible application of the diffuser bleed technique to control flow profiles to gas turbine combustors.
Method and system to perform energy-extraction based active noise control
NASA Technical Reports Server (NTRS)
Kelkar, Atul (Inventor); Joshi, Suresh M. (Inventor)
2009-01-01
A method to provide active noise control to reduce noise and vibration in reverberant acoustic enclosures such as aircraft, vehicles, appliances, instruments, industrial equipment and the like is presented. A continuous-time multi-input multi-output (MIMO) state space mathematical model of the plant is obtained via analytical modeling and system identification. Compensation is designed to render the mathematical model passive in the sense of mathematical system theory. The compensated system is checked to ensure robustness of the passive property of the plant. The check ensures that the passivity is preserved if the mathematical model parameters are perturbed from nominal values. A passivity-based controller is designed and verified using numerical simulations and then tested. The controller is designed so that the resulting closed-loop response shows the desired noise reduction.
Drifting oscillations in axion monodromy
Flauger, Raphael; McAllister, Liam; Silverstein, Eva; ...
2017-10-31
In this paper, we study the pattern of oscillations in the primordial power spectrum in axion monodromy inflation, accounting for drifts in the oscillation period that can be important for comparing to cosmological data. In these models the potential energy has a monomial form over a super-Planckian field range, with superimposed modulations whose size is model-dependent. The amplitude and frequency of the modulations are set by the expectation values of moduli fields. We show that during the course of inflation, the diminishing energy density can induce slow adjustments of the moduli, changing the modulations. We provide templates capturing the effects of drifting moduli, as well as drifts arising in effective field theory models based on softly broken discrete shift symmetries, and we estimate the precision required to detect a drifting period. A non-drifting template suffices over a wide range of parameters, but for the highest frequencies of interest, or for sufficiently strong drift, it is necessary to include parameters characterizing the change in frequency over the e-folds visible in the CMB. Finally, we use these templates to perform a preliminary search for drifting oscillations in a part of the parameter space in the Planck nominal mission data.
Flexible operation strategy for environment control system in abnormal supply power condition
NASA Astrophysics Data System (ADS)
Liping, Pang; Guoxiang, Li; Hongquan, Qu; Yufeng, Fang
2017-04-01
This paper establishes an optimization method for the flexible operation of an environment control system under abnormal supply power conditions. The proposed concept of lifespan is used to evaluate the depletion time of non-regenerative substances, and the optimization objective is to maximize these lifespans. The optimization variables are the allocated powers of the subsystems. The improved Non-dominated Sorting Genetic Algorithm is adopted to obtain the Pareto-optimal frontier subject to constraints on the cabin environmental parameters and the adjustable operating parameters of the subsystems. With the objective functions given equal importance, the preferred power allocation of the subsystems can be optimized, and the corresponding running parameters of the subsystems can then be determined to ensure maximum lifespans. A long-duration space station with three astronauts is used to demonstrate the implementation of the proposed optimization method, with three different CO2 partial pressure levels taken into consideration. The optimization results show that the proposed method can obtain the preferred power allocation for the subsystems when the supply power is below its nominal value. The method can be applied to autonomous control for the emergency response of the environment control system.
Jókay, Ágnes; Farkas, Árpád; Füri, Péter; Horváth, Alpár; Tomisa, Gábor; Balásházy, Imre
2016-06-10
Asthma is a serious global health problem with rising prevalence and treatment costs. Due to the growing number of different types of inhalation devices and aerosol drugs, physicians often face difficulties in choosing the right medication for their patients. The main objectives of this study are (i) to elucidate the possibility and the advantages of the application of numerical modeling techniques in aerosol drug and device selection, and (ii) to demonstrate the possibility of the optimization of inhalation modes in asthma therapy with a numerical lung model by simulating patient-specific drug deposition distributions. In this study we measured inhalation parameter values of 25 healthy adult volunteers when using Foster(®) NEXThaler(®) and Seretide(®) Diskus(®). Relationships between emitted doses and patient-specific inhalation flow rates were established. Furthermore, individualized emitted particle size distributions were determined by applying size distributions at measured flow rates. Based on the measured breathing parameter values, we calculated patient-specific drug deposition distributions for the active components (steroid and bronchodilator) of both drugs with the help of a validated aerosol lung deposition model adapted to therapeutic aerosols. Deposited dose fractions and deposition densities have been computed in the entire respiratory tract, in distinct anatomical regions of the airways and at the level of airway generations. We found that Foster(®) NEXThaler(®) deposits more efficiently in the lungs (average deposited steroid dose: 42.32±5.76% of the nominal emitted dose) than Seretide(®) Diskus(®) (average deposited steroid dose: 24.33±2.83% of the nominal emitted dose), but the variance of the deposition values of different individuals in the lung is significant. In addition, there are differences in the required minimal flow rates; therefore, for certain patients, Seretide(®) Diskus(®) or pMDIs could be a better choice.
Our results show that validated computer deposition models could be useful tools in providing valuable deposition data and assisting health professionals in the personalized drug selection and delivery optimization. Patient-specific modeling could open a new horizon in the treatment of asthma towards a more effective personalized medicine in the future. Copyright © 2016 Elsevier B.V. All rights reserved.
Heat Transfer in High-Temperature Fibrous Insulation
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran
2002-01-01
The combined radiation/conduction heat transfer in high-porosity, high-temperature fibrous insulations was investigated experimentally and numerically. The effective thermal conductivity of fibrous insulation samples was measured over the temperature range of 300-1300 K and environmental pressure range of 1.33 x 10(exp -5)-101.32 kPa. The fibrous insulation samples tested had nominal densities of 24, 48, and 72 kilograms per cubic meter and thicknesses of 13.3, 26.6 and 39.9 millimeters. Seven samples were tested such that the applied heat flux vector was aligned with the local gravity vector to eliminate natural convection as a mode of heat transfer. Two samples were tested with reverse orientation to investigate natural convection effects. It was determined that for the fibrous insulation densities and thicknesses investigated no heat transfer takes place through natural convection. A finite volume numerical model was developed to solve the governing combined radiation and conduction heat transfer equations. Various methods of modeling the gas/solid conduction interaction in fibrous insulations were investigated. The radiation heat transfer was modeled using the modified two-flux approximation assuming anisotropic scattering and gray medium. A genetic-algorithm based parameter estimation technique was utilized with this model to determine the relevant radiative properties of the fibrous insulation over the temperature range of 300-1300 K. The parameter estimation was performed by least square minimization of the difference between measured and predicted values of effective thermal conductivity at a density of 24 kilograms per cubic meter and at nominal pressures of 1.33 x 10(exp -4) and 99.98 kPa. The numerical model was validated by comparison with steady-state effective thermal conductivity measurements at other densities and pressures. 
The numerical model was also validated by comparison with a transient thermal test simulating reentry aerodynamic heating conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom; Su-Jong Yoon
2014-04-01
Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study was focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before propagation of these uncertainties is performed in the subsequent coupled neutronics/thermal fluids phases of the benchmark. In many of the previous studies for high temperature gas cooled reactors, the volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact were not modeled directly and an effective thermal conductivity was employed for the thermo-physical properties of the fuel compact. On the contrary, in the heterogeneous model, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled. The fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both homogeneous and heterogeneous models to compare the thermal characteristics. The nominal values of the input parameters are used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.
Fluency and reading comprehension in students with reading difficulties.
Nascimento, Tânia Augusto; Carvalho, Carolina Alves Ferreira de; Kida, Adriana de Souza Batista; Avila, Clara Regina Brandão de
2011-12-01
To characterize the performance of students with reading difficulties in decoding and reading comprehension tasks, and to investigate possible correlations between them. Sixty students (29 girls) from 3rd to 5th grades of public elementary schools were evaluated. Thirty students (Research Group - RG), ten from each grade, were nominated by their teachers as presenting evidence of learning disabilities. The other thirty students were indicated as good readers and were matched by gender, age and grade to the RG, composing the Comparison Group (CG). All subjects were assessed regarding the parameters of reading fluency (rate and accuracy in words, pseudowords and text reading) and reading comprehension (reading level, number and type of ideas identified, and correct responses on multiple-choice questions). The RG presented significantly lower scores than the CG in fluency and reading comprehension. Different patterns of positive and negative correlations, from weak to excellent, among the decoding and comprehension parameters were found in both groups. In the RG, low values of reading rate and accuracy were observed, which were correlated with low comprehension scores; decoding, but not comprehension, improved with increasing grade. In the CG, correlations were found among different fluency parameters, but none of them was correlated with the reading comprehension variables. Students with reading and writing difficulties show lower values of reading fluency and comprehension than good readers. Fluency and comprehension are correlated in the group with difficulties, showing that deficits in decoding influence reading comprehension, which does not improve with increasing age.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
An evaluative model of system performance in manned teleoperational systems
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1989-01-01
Manned teleoperational systems are used in aerospace operations in which humans must interact with machines remotely. Examples of teleoperations include manually guiding remotely piloted vehicles, controlling a wind tunnel, and carrying out a scientific procedure remotely. A four-input-parameter throughput (Tp) model is presented which can be used to evaluate complex, manned, teleoperations-based systems and make critical comparisons among candidate control systems. The first two parameters of this model deal with nominal (A) and off-nominal (B) predicted events, while the last two focus on measured events of two types: human performance (C) and system performance (D). Digital simulations showed that the expression A(1-B)/(C+D) produced the greatest homogeneity of variance and distribution symmetry. Results from a recently completed manned life science telescience experiment will be used to further validate the model. Complex, interacting teleoperational systems may be systematically evaluated using this expression, much as a computer benchmark is used.
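The figure-of-merit expression can be sketched directly in code. Only the formula Tp = A(1-B)/(C+D) comes from the abstract; the function name, the parameter interpretations in the comments, and the sample values are illustrative assumptions.

```python
def throughput(a_nominal, b_offnominal, c_human, d_system):
    """Illustrative throughput figure of merit Tp = A(1 - B) / (C + D).

    A: score for predicted nominal events, B: predicted off-nominal fraction,
    C: measured human-performance cost, D: measured system-performance cost.
    (Interpretations are assumptions for this sketch, not the paper's.)
    """
    if c_human + d_system == 0:
        raise ValueError("C + D must be nonzero")
    return a_nominal * (1.0 - b_offnominal) / (c_human + d_system)

# Comparing two hypothetical candidate control systems:
tp_sys1 = throughput(0.9, 0.1, 0.4, 0.4)   # 0.9 * 0.9 / 0.8 = 1.0125
tp_sys2 = throughput(0.9, 0.2, 0.5, 0.4)   # 0.9 * 0.8 / 0.9 = 0.8
```

A higher Tp would indicate a candidate system with better sustained throughput for the same predicted-event profile.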
The impact of integrated water management on the Space Station propulsion system
NASA Technical Reports Server (NTRS)
Schmidt, George R.
1987-01-01
The water usage of elements in the Space Station integrated water system (IWS) is discussed, and the parameters affecting the overall water balance and the water-electrolysis propulsion-system requirements are considered. With nominal IWS operating characteristics, extra logistic water resupply (LWR) is found to be unnecessary for satisfying the nominal propulsion requirements. When all possible operating characteristics are considered, LWR is not required in 65.5 percent of the cases, and in a further 17.9 percent of the cases LWR can be eliminated by controlling the stay time of the Shuttle Orbiter.
Modeling of the interest rate policy of the central bank of Russia
NASA Astrophysics Data System (ADS)
Shelomentsev, A. G.; Berg, D. B.; Detkov, A. A.; Rylova, A. P.
2017-11-01
This paper investigates interactions among money supply, exchange rates, inflation, and nominal interest rates, which are regulating parameters of the Central Bank's policy. The study is based on Russian data for 2002-2016. The major findings are: 1) the interest rate demonstrates almost no relation with inflation; 2) ties between money supply and the nominal interest rate are strong; 3) money supply and inflation show meaningful relations only when their growth rates are compared. We have developed a dynamic model that can be used in forecasting macroeconomic processes.
Creep of a Silicon Nitride Under Various Specimen/Loading Configurations
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Powers, Lynn M.; Holland, Frederic A.; Gyekenyesi, John P.; Holland, F. A. (Technical Monitor)
2000-01-01
Extensive creep testing of a hot-pressed silicon nitride (NC132) was performed at 1300 C in air using five different specimen/loading configurations: pure tension, pure compression, four-point uniaxial flexure, ball-on-ring biaxial flexure, and ring-on-ring biaxial flexure. Nominal creep strain and its rate for a given nominal applied stress were greatest in tension, least in compression, and intermediate in uniaxial and biaxial flexure. Except for the case of compressive loading, the nominal creep strain rate generally decreased with time, resulting in a less well-defined steady-state condition. Of the four creep formulations considered (power-law, hyperbolic sine, step, and redistribution models), the conventional power-law model still provides the most convenient and reasonable means to estimate simple, quantitative creep parameters for the material. Predictions of creep deformation for the multiaxial stress state (biaxial flexure) were made from the pure tension and compression creep data by using the design code CARES/Creep.
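The conventional power-law (Norton) creep model mentioned above, rate = A·σ^n, is typically fitted by least squares in log-log space. A minimal sketch follows; the stress values and constants are synthetic, and the actual NC132 creep parameters are not reproduced here.

```python
import math

def fit_power_law_creep(stresses, strain_rates):
    """Least-squares fit of Norton power-law creep, rate = A * stress**n,
    via linear regression in log-log coordinates. Illustrative sketch only."""
    xs = [math.log(s) for s in stresses]
    ys = [math.log(r) for r in strain_rates]
    n_pts = len(xs)
    xbar = sum(xs) / n_pts
    ybar = sum(ys) / n_pts
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope   # A, stress exponent n

# Synthetic check: data generated with A = 1e-12, n = 2 should be recovered
stresses = [50.0, 100.0, 150.0, 200.0]          # MPa, hypothetical levels
rates = [1e-12 * s ** 2.0 for s in stresses]    # 1/h, hypothetical rates
A_fit, n_fit = fit_power_law_creep(stresses, rates)
```

With real creep data the scatter about the fitted line indicates how well the power-law form describes the material.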
Optimal motion planning for collision avoidance of mobile robots in non-stationary environments
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An optimal control formulation of the problem of collision avoidance of mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the minimum distance between the robot and the object is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. A perturbation control type of approach is used to update the optimal plan. Simulation results verify the value of the proposed strategy.
A system performance throughput model applicable to advanced manned telescience systems
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1990-01-01
As automated space systems become more complex, autonomous, and opaque to the flight crew, it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues related to total system validation are addressed. An evaluative throughput model is presented which can be used to generate a human operator-related benchmark or figure of merit for a given system which involves humans at the input and output ends as well as other automated intelligent agents. The concept of sustained and accurate command/control data information transfer is introduced. The first two input parameters of the model involve nominal and off-nominal predicted events; the first calls for a detailed task analysis, while the second calls for a contingency event assessment. The last two input parameters involve actual (measured) events, namely human performance and continuous semi-automated system performance. An expression combining these four parameters was found, using digital simulations and identical, representative, random data, to yield the smallest variance.
42 CFR 495.348 - Procurement standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... (CONTINUED) STANDARDS AND CERTIFICATION STANDARDS FOR THE ELECTRONIC HEALTH RECORD TECHNOLOGY INCENTIVE... solicit nor accept gratuities, favors, or anything of monetary value from contractors, or parties to sub... or the gift is an unsolicited item of nominal value. (5) The standards of conduct provide for...
42 CFR 495.348 - Procurement standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (CONTINUED) STANDARDS AND CERTIFICATION STANDARDS FOR THE ELECTRONIC HEALTH RECORD TECHNOLOGY INCENTIVE... solicit nor accept gratuities, favors, or anything of monetary value from contractors, or parties to sub... or the gift is an unsolicited item of nominal value. (5) The standards of conduct provide for...
42 CFR 495.348 - Procurement standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (CONTINUED) STANDARDS AND CERTIFICATION STANDARDS FOR THE ELECTRONIC HEALTH RECORD TECHNOLOGY INCENTIVE... solicit nor accept gratuities, favors, or anything of monetary value from contractors, or parties to sub... or the gift is an unsolicited item of nominal value. (5) The standards of conduct provide for...
McElroy, Susan L; Hudson, James I; Gasior, Maria; Herman, Barry K; Radewonuk, Jana; Wilfley, Denise; Busner, Joan
2017-08-01
This study examined the time course of efficacy-related endpoints for lisdexamfetamine dimesylate (LDX) versus placebo in adults with protocol-defined moderate to severe binge-eating disorder (BED). In two 12-week, double-blind, placebo-controlled studies, adults meeting DSM-IV-TR BED criteria were randomized 1:1 to receive placebo or dose-optimized LDX (50 or 70 mg). Analyses across visits used mixed-effects models for repeated measures (binge eating days/week, binge eating episodes/week, Yale-Brown Obsessive Compulsive Scale modified for Binge Eating [Y-BOCS-BE] scores, percentage body weight change) and chi-square tests (Clinical Global Impressions-Improvement [CGI-I; from the perspective of BED symptoms] scale dichotomized as improved or not improved). These analyses were not part of the prespecified testing strategy, so reported p values are nominal (unadjusted and descriptive only). Least squares mean treatment differences for change from baseline in both studies favored LDX over placebo (all nominal p values < .001) starting at Week 1 for binge eating days/week, binge-eating episodes/week, and percentage weight change and at the first posttreatment assessment (Week 4) for Y-BOCS-BE total and domain scores. On the CGI-I, more participants on LDX than placebo were categorized as improved starting at Week 1 in both studies (both nominal p values < .001). Across these efficacy-related endpoints, the superiority of LDX over placebo was maintained at each posttreatment assessment in both studies (all nominal p values < .001). In adults with BED, LDX treatment appeared to be associated with improvement on efficacy measures as early as 1 week, which was maintained throughout the 12-week studies. © 2017 The Authors International Journal of Eating Disorders Published by Wiley Periodicals, Inc.
A self-adapting system for the automated detection of inter-ictal epileptiform discharges.
Lodder, Shaun S; van Putten, Michel J A M
2014-01-01
Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer-assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form "IED nominations", each with a corresponding certainty value based on the reliability of their contributing templates. The second phase takes the ten nominations with the highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update the certainty values of the remaining nominations, and another iteration is performed in which the ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations, respectively. Reviewing fifteen iterations for the 20-30 min recordings took approximately 5 min. The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents.
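The nomination-and-feedback loop can be sketched as follows. The certainty combination rule and the update rate here are invented placeholders, not the paper's actual scheme, which derives certainties from template reliabilities learned over its database.

```python
def nomination_certainty(template_hits, template_accuracy):
    """Certainty of an IED nomination from its contributing templates,
    taken here as 1 minus the probability that every contributing template
    fired falsely. This combination rule is an illustrative assumption."""
    p_all_false = 1.0
    for t in template_hits:
        p_all_false *= (1.0 - template_accuracy[t])
    return 1.0 - p_all_false

def update_accuracy(template_accuracy, template_hits, confirmed, lr=0.1):
    # Nudge each contributing template's accuracy toward the reviewer's
    # verdict; the learning rate lr is an arbitrary placeholder.
    target = 1.0 if confirmed else 0.0
    for t in template_hits:
        template_accuracy[t] += lr * (target - template_accuracy[t])

acc = {"t1": 0.6, "t2": 0.5}                       # hypothetical templates
c_before = nomination_certainty(["t1", "t2"], acc)  # 1 - 0.4*0.5 = 0.8
update_accuracy(acc, ["t1", "t2"], confirmed=True)  # reviewer confirms an IED
c_after = nomination_certainty(["t1", "t2"], acc)   # certainty increases
```

A confirmation raises the reliability of the contributing templates, so later nominations that rely on them are ranked higher, mirroring the iterative behavior the abstract describes.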
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local sensitivity, range sensitivity, and rank correlation analyses. Parameters that scored high on two of the three analyses were considered important contributors to PCB concentration prediction uncertainty and were treated probabilistically in simulations, as were parameters identified in the factorial analysis as interacting with important parameters. The term analysis was used to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of parameters modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
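The selection rule, keeping parameters that score high on at least two of the three analyses, can be sketched directly. The parameter names and flag sets below are hypothetical, not taken from the Hudson River PCB model.

```python
def select_important(scores, threshold=2):
    """Return parameters flagged as 'high' by at least `threshold` analyses.

    `scores` maps parameter name -> set of analyses that flagged it.
    Mirrors the two-of-three rule described in the abstract; the data
    used below are illustrative placeholders.
    """
    return sorted(p for p, flagged in scores.items() if len(flagged) >= threshold)

flags = {
    "k_volatilization": {"local", "range", "rank"},   # high on all three
    "k_sorption":       {"local", "rank"},            # high on two
    "burial_rate":      {"range"},                    # high on one only
    "k_degradation":    set(),                        # high on none
}
important = select_important(flags)
```

Only the first two hypothetical parameters would be carried forward for probabilistic treatment in the Monte Carlo simulations.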
Some Calculations for the RHIC Kicker
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claus, J.
1996-12-01
The bunches that arrive from the AGS are put onto RHIC's median plane by a string of four injection kickers in each ring. There are four short kickers rather than one long one in order to keep the kicker filling time acceptable, the filling time being defined as the time needed to increase the deflecting field in the kicker from zero to its nominal value. During the filling process the energy stored in the deflecting field is moved from outside the kicker into its aperture; since energy can only be displaced with finite velocity, the filling time is non-zero for kickers of non-zero length and tends to increase with increasing length. It is one of the more important parameters of the kicker because it sets a lower limit on the time interval between the last of the already circulating bunches and the newly injected one, and thus an upper limit on the total number of bunches that can be injected. RF gymnastics can be used to pack the bunches tighter than this limit indicates, but such gymnastics require radial aperture beyond what would otherwise be needed, as well as time, and probably special hardware. Minimization of the kicker's stored energy requires minimization of its aperture; the kicker therefore presents a major aperture restriction. Unless it is placed at a point where the dispersion is negligible, its aperture would have to be increased to provide the radial space needed for the gymnastics. Both the amount of extra space needed and the rate of longitudinal displacement increase with the maximum deviation in energy of the bunch to be displaced from the nominal value; thus taking more time for the exercise reduces the aperture requirements. This time is measured in synchrotron periods and is not small. It adds directly to the filling time of each ring and therefore decreases the time-average luminosity.
Evidently the maximization of the time-average luminosity is a complex issue in which the kicker filling time is a major parameter.
Solvent Hold Tank Sample Results for MCU-16-1247-1248-1249: August 2016 Monthly Sample
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fondeur, F. F.; Jones, D. H.
Savannah River National Laboratory (SRNL) received one set of Solvent Hold Tank (SHT) samples (MCU-16-1247-1248-1249), pulled on 08/22/2016, for analysis. The samples were combined and analyzed for composition. Analysis of the composite sample MCU-16-1247-1248-1249 indicated that the Isopar™L concentration is above its nominal level (101%). The extractant (MaxCalix) and the modifier (CS-7SB) are 7% and 9% below their nominal concentrations, respectively. The suppressor (TiDG) is 63% below its nominal concentration. This analysis confirms that the solvent may require the addition of TiDG, and possibly of modifier and MaxCalix, to restore them to nominal levels. Based on the current monthly sample, the levels of TiDG, Isopar™L, MaxCalix, and modifier are sufficient for continuing operation but are expected to decrease with time. Periodic characterization and trimming additions to the solvent are recommended. At the time of writing this report, a solvent trim batch containing TiDG, modifier, and MaxCalix had been added to the SHT (October 2016), and the concentrations of these components are expected to be at their nominal values.
Neutron resonance parameters of ⁶⁸Zn + n and statistical distributions of level spacings and widths
NASA Astrophysics Data System (ADS)
Garg, J. B.; Tikku, V. K.; Harvey, J. A.; Halperin, J.; Macklin, R. L.
1982-04-01
Discrete values of the parameters (E0, gΓn, Jπ, Γγ, etc.) of the resonances in the reaction ⁶⁸Zn + n have been determined from total cross section measurements from a few keV to 380 keV with a nominal resolution of 0.07 ns/m for the highest energy, and from capture cross section measurements up to 130 keV using the pulsed neutron time-of-flight technique with a neutron burst width of 5 ns. The cross section data were analyzed to determine the parameters of the resonances using R-matrix multilevel codes. These results have provided values of average quantities as follows: S0=(2.01±0.34), S1=(0.56±0.05), S2=(0.2±0.1) in units of 10⁻⁴, D0=(5.56±0.43) keV and D1=(1.63±0.14) keV. From these measurements we have also determined the following average radiation widths: (Γ̄γ)l=0=(302±60) meV and (Γ̄γ)l=1=(157±7) meV. The investigation of the statistical properties of neutron reduced widths and level spacings showed excellent agreement of the data with the Porter-Thomas distribution for s- and p-wave neutron widths, and with the Dyson-Mehta Δ3 statistic and the Wigner distribution for the s-wave level spacing distribution. In addition, a correlation coefficient of ρ=0.50±0.10 between Γ0n and Γγ has been observed for s-wave resonances. The value of <σnγ> at (30±10) keV is 19.2 mb. NUCLEAR REACTIONS ⁶⁸Zn(n,n), ⁶⁸Zn(n,γ), E=few keV to 380, 130 keV, respectively. Measured total and capture cross sections versus neutron energy; deduced resonance parameters E0, Jπ, gΓn, Γγ, S0, S1, S2, D0, D1.
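The statistical checks mentioned above can be illustrated with a short sketch: Porter-Thomas reduced widths follow a chi-square distribution with one degree of freedom, and the Wigner surmise gives the nearest-neighbor spacing law P(s) = (πs/2)exp(−πs²/4). The sample sizes and seed below are arbitrary; a real analysis would test the measured widths and spacings rather than simulated ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Porter-Thomas: reduced neutron widths (in units of their mean) follow a
# chi-square distribution with one degree of freedom. Simulated here.
widths = stats.chi2.rvs(df=1, size=500, random_state=rng)
ks_pt = stats.kstest(widths, stats.chi2(df=1).cdf)
print(f"Porter-Thomas KS p-value: {ks_pt.pvalue:.3f}")

# Wigner surmise for nearest-neighbor level spacings s (mean spacing 1):
#   P(s) = (pi*s/2) exp(-pi*s^2/4),  CDF  F(s) = 1 - exp(-pi*s^2/4).
def wigner_cdf(s):
    return 1.0 - np.exp(-np.pi * s**2 / 4.0)

# Sample spacings by inverse-transform sampling of the Wigner CDF.
u = rng.uniform(size=500)
spacings = np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)
ks_w = stats.kstest(spacings, wigner_cdf)
print(f"Wigner KS p-value: {ks_w.pvalue:.3f}")
```

A large Kolmogorov-Smirnov p-value is consistent with the hypothesized distribution, which is the kind of agreement the abstract reports for the measured widths and s-wave spacings.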
Derivation of nominal strength for wood utility poles
Ronald W. Wolfe; Jozsef Bodig; Patricia Lebow
2001-01-01
The designated fiber stress values published in the American National Standards Institute Standard for Poles, ANSI 05.1, no longer reflect the current state of knowledge. These values are based on a combination of test data from small clear wood samples and small poles (
Mikhaylov, Alexander; Uudsemaa, Merle; Trummal, Aleksander; Arias, Eduardo; Moggio, Ivana; Ziolo, Ronald; Cooper, Thomas M; Rebane, Aleksander
2018-04-19
Change of the permanent molecular electric dipole moment, Δμ, in a series of nominally centrosymmetric and noncentrosymmetric ferrocene-phenyleneethynylene oligomers was estimated by measuring the two-photon absorption cross-section spectra of the lower-energy metal-to-ligand charge-transfer transitions using a femtosecond nonlinear transmission method, and was found to vary in a range up to 12 D, with the highest value corresponding to the most nonsymmetric system. Calculations of Δμ performed by the TD-DFT method show quantitative agreement with the experimental values and reveal that facile rotation of the ferrocene moieties relative to the organic ligand breaks the ground-state inversion symmetry in the nominally symmetric structures.
NASA Technical Reports Server (NTRS)
Howell, W. E.
1974-01-01
The mechanical properties of a symmetrical, eight-step, titanium-boron-epoxy joint are discussed. A study of the effect of adhesive and matrix stiffnesses on the axial, normal, and shear stress distributions was made using the finite element method. The NASA Structural Analysis Program (NASTRAN) was used for the analysis. The elastic modulus of the adhesive was varied from 345 MPa to 3100 MPa with the nominal value of 1030 MPa as a standard. The nominal values were used to analyze the stability of the joint. The elastic moduli were varied to determine their effect on the stresses in the joint.
Electrical and optical performance of mid-wavelength infrared InAsSb heterostructure detectors
NASA Astrophysics Data System (ADS)
Gomółka, Emilia; Kopytko, Małgorzata; Michalczewski, Krystian; Kubiszyn, Łukasz; Kebłowski, Artur; Gawron, Waldemar; Martyniuk, Piotr; Piotrowski, Józef; Rutkowski, Jarosław
2017-10-01
In this work we investigate the high-operating-temperature performance of InAsSb/AlSb heterostructure detectors with cut-off wavelengths near 5 μm at 230 K. The devices have been fabricated with different types of absorbing layer: nominally undoped, n-type doped, and p-type doped. The results show that the device performance strongly depends on absorber layer doping. Generally, a p-type absorber provides higher values of current responsivity than an n-type absorber, but at the same time also higher values of dark current. The device with a nominally undoped absorbing layer shows moderate values of both current responsivity and dark current. The resulting detectivities D* of non-immersed devices vary from 2×10⁹ to 7×10⁹ cm·Hz^(1/2)/W at 230 K, which is easily achievable with a two-stage thermoelectric cooler.
NASA Astrophysics Data System (ADS)
Prakash, S.; Sinha, S. K.
2015-09-01
In this research work, a two-area hydro-thermal power system connected through tie-lines is considered. Perturbations of the area frequencies and the resulting tie-line power flows arise from unpredictable load variations that cause a mismatch between generated and demanded power. As power demand rises and falls, the real and reactive power balance is disturbed; hence frequency and voltage deviate from their nominal values. This necessitates the design of an accurate and fast controller to maintain the system parameters at their nominal values. The main purpose of system generation control is to balance the system generation against the load and losses so that the desired frequency and power interchange between neighboring systems are maintained. Intelligent controllers based on fuzzy logic, artificial neural network (ANN), and hybrid fuzzy neural network approaches are used for automatic generation control of the two-area interconnected power system. Area 1 consists of a thermal reheat power plant whereas area 2 consists of a hydro power plant with an electric governor. Performance evaluation is carried out using intelligent (ANFIS, ANN and fuzzy) control and conventional PI and PID control approaches. To enhance controller performance, a sliding surface, i.e., variable structure control, is included. The model of the interconnected power system has been developed with all five types of controller and simulated using the MATLAB/SIMULINK package. The performance of the intelligent controllers has been compared with that of the conventional PI and PID controllers for the interconnected power system. A comparison of the ANFIS, ANN, and fuzzy approaches with the PI and PID approaches shows the superiority of the proposed ANFIS over ANN, fuzzy, PI, and PID. Thus the hybrid fuzzy neural network controller has a better dynamic response, i.e., it is quick in operation, with reduced error magnitude and minimized frequency transients.
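The frequency-regulation problem can be sketched in miniature with a first-order one-area swing model under PI control. All constants and gains below are illustrative assumptions, not the paper's two-area hydro-thermal model or its intelligent controllers:

```python
# Minimal one-area frequency-response sketch with PI control.
# Hypothetical first-order swing model: 2H * d(df)/dt = -D*df + dPm - dPL,
# where df is the frequency deviation from nominal (per unit).
H, D = 5.0, 1.0          # inertia constant and load damping (illustrative)
Kp, Ki = 2.0, 4.0        # PI gains on the frequency error (illustrative)
dt, T = 0.01, 20.0       # Euler step and horizon (seconds)

df, integ = 0.0, 0.0
dPL = 0.1                # 0.1 pu step load disturbance
history = []
for _ in range(int(T / dt)):
    integ += -df * dt                # integral of the frequency error
    dPm = -Kp * df + Ki * integ      # PI governor response
    df += dt * (-D * df + dPm - dPL) / (2.0 * H)
    history.append(df)

print(f"final frequency deviation: {history[-1]:+.4f} pu")
```

The integral term drives the steady-state frequency deviation to zero after the load step, which is the basic objective the fuzzy, ANN, and ANFIS controllers in the paper pursue with faster, less oscillatory transients.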
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating 3 k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data.
Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to the structural errors in RANS.
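The surrogate-plus-MCMC workflow can be sketched in a few lines, with a one-parameter toy observable standing in for the RANS model and a quadratic surrogate fitted to a handful of "training runs". All functions and numbers below are hypothetical illustrations of the pattern, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the expensive RANS observable as a function of a single
# turbulence parameter (hypothetical; a real study would generate the
# surrogate's training data from actual RANS runs).
def expensive_model(c_mu):
    return 3.0 * c_mu**2 + 0.5 * c_mu

# Quick-running quadratic surrogate fitted to a few "training runs".
train_x = np.linspace(0.05, 0.15, 8)
coef = np.polyfit(train_x, expensive_model(train_x), deg=2)
surrogate = lambda c: np.polyval(coef, c)

# Synthetic experimental datum with a known noise level.
c_true, sigma = 0.09, 0.002
y_obs = expensive_model(c_true) + 0.001

def log_post(c):
    if not (0.05 <= c <= 0.15):      # uniform prior on a physical range
        return -np.inf
    return -0.5 * ((y_obs - surrogate(c)) / sigma) ** 2

# Metropolis-Hastings: each step costs one surrogate call, not a RANS run.
chain, c = [], 0.10
lp = log_post(c)
for _ in range(20000):
    prop = c + rng.normal(0.0, 0.005)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        c, lp = prop, lp_prop
    chain.append(c)

post = np.array(chain[5000:])        # discard burn-in
print(f"posterior c_mu = {post.mean():.3f} +/- {post.std():.3f}")
```

The bounded prior plays the role the paper assigns to the treed-linear-model classifier: it confines sampling to a physically realistic region where the surrogate is trustworthy.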
Composite Multilinearity, Epistemic Uncertainty and Risk Achievement Worth
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Borgonovo; C. L. Smith
2012-10-01
Risk Achievement Worth (RAW) is one of the most widely utilized importance measures. RAW is defined as the ratio of the risk metric value attained when a component has failed to the base case value of the risk metric. Traditionally, both the numerator and denominator are point estimates. The relevant literature has shown that inclusion of epistemic uncertainty (i) induces notable variability in the point estimate ranking and (ii) causes the expected value of the risk metric to differ from its nominal value. We obtain the conditions under which equality holds between the nominal and expected values of a reliability risk metric. Among these conditions, separability and state-of-knowledge independence emerge. We then study how the presence of epistemic uncertainty affects RAW and the associated ranking. We propose an extension of RAW (called ERAW) which allows one to obtain a ranking robust to epistemic uncertainty. We discuss the properties of ERAW and the conditions under which it coincides with RAW. We apply our findings to a probabilistic risk assessment model developed for the safety analysis of NASA lunar space missions.
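A minimal sketch of the RAW definition and of the nominal-versus-expected gap, assuming a toy two-component risk metric and lognormal state-of-knowledge distributions (all numbers illustrative, not from the NASA model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy risk metric: P(system failure) for two components in parallel,
# R = p1 * p2 (hypothetical; real PRA models are far larger).
def risk(p1, p2):
    return p1 * p2

p1_nom, p2_nom = 0.01, 0.02

# Point-estimate RAW for component 1: set its failure probability to 1.
raw_1 = risk(1.0, p2_nom) / risk(p1_nom, p2_nom)
print(f"RAW(component 1) = {raw_1:.1f}")

# Epistemic uncertainty: lognormal state-of-knowledge distribution on each
# failure probability, with the nominal value as the median.
p1 = rng.lognormal(np.log(p1_nom), 0.5, size=100_000)
p2 = rng.lognormal(np.log(p2_nom), 0.5, size=100_000)
nominal_R = risk(p1_nom, p2_nom)
expected_R = risk(p1, p2).mean()
# With independent inputs the metric factorizes, E[R] = E[p1] * E[p2]; it
# still differs from the nominal value because each lognormal mean exceeds
# its median (the nominal point) -- illustrating why the two can diverge.
print(f"nominal R = {nominal_R:.2e}, expected R = {expected_R:.2e}")
```

An ERAW-style measure would be built from the distribution of the risk metric rather than from the two point estimates in `raw_1`.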
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Edward; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
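The tornado-diagram step can be sketched as follows, with a toy three-branch loss model standing in for the UCERF3-TD logic tree (function, branch names, and ranges are hypothetical):

```python
import numpy as np

# Toy loss model over three logic-tree "branch" parameters (hypothetical;
# the UCERF3-TD tree has far more branches than this sketch).
def loss(m_max, gr_b, slip_rate):
    return 10.0 * slip_rate * np.exp(0.8 * (m_max - 8.0)) / gr_b

base = {"m_max": 8.0, "gr_b": 1.0, "slip_rate": 1.0}
lo_hi = {"m_max": (7.8, 8.3), "gr_b": (0.9, 1.1), "slip_rate": (0.8, 1.3)}

# Tornado diagram: swing of the loss as each parameter moves between its
# extremes while all others stay at their base-case values.
swings = {}
for name, (lo, hi) in lo_hi.items():
    swings[name] = abs(loss(**{**base, name: hi}) - loss(**{**base, name: lo}))

# Rank branches by swing; branches with small swings are candidates to be
# fixed at base values in a reduced-order model.
for name in sorted(swings, key=swings.get, reverse=True):
    print(f"{name:10s} swing = {swings[name]:.2f}")
```

Branches whose swings are negligible relative to the total can be frozen, which is how a tree of 57,600 leaves can collapse to a much smaller reduced-order model.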
Hard Constraints in Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2008-01-01
This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
Performance degradation of photovoltaic modules at different sites
NASA Astrophysics Data System (ADS)
Arab, A. Hadj; Mahammed, I. Hadj; Ould Amrouche, S.; Taghezouit, B.; Yassaa, N.
2018-05-01
This work presents the results of electrical performance measurements of 120 crystalline silicon PV modules following long-term outdoor exposure. A set of 90 PV modules represents the first grid-connected photovoltaic (PV) system in Algeria, installed at the "Centre de Développement des Energies Renouvelables" (CDER) site (Mediterranean coast), Bouzareah. The other 30 PV modules were deployed in an arid area of the desert region of Ghardaïa, about 600 km south of Algiers, with measurements collected from different applications. Following different characterization tests, we observed that all tested PV modules kept their power-generating capability, apart from a slight reduction. A mathematical model was then used to carry out PV module testing at different irradiance and temperature levels. Different PV module parameters were calculated from the recorded values of the open-circuit voltage, the short-circuit current, and the voltage and current at the maximum power point. The electrical measurements indicated different degrees of degradation of the current-voltage parameters. All the PV modules showed a decrease in nominal power, which varies from one module to another.
Stabilization of a three-dimensional limit cycle walking model through step-to-step ankle control.
Kim, Myunghee; Collins, Steven H
2013-06-01
Unilateral, below-knee amputation is associated with an increased risk of falls, which may be partially related to a loss of active ankle control. If ankle control can contribute significantly to maintaining balance, even in the presence of active foot placement, this might provide an opportunity to improve balance using robotic ankle-foot prostheses. We investigated ankle- and hip-based walking stabilization methods in a three-dimensional model of human gait that included ankle plantarflexion, ankle inversion-eversion, hip flexion-extension, and hip ad/abduction. We generated discrete feedback control laws (linear quadratic regulators) that altered nominal actuation parameters once per step. We used ankle push-off, lateral ankle stiffness and damping, fore-aft foot placement, lateral foot placement, or all of these as control inputs. We modeled environmental disturbances as random, bounded, unexpected changes in floor height, and defined balance performance as the maximum allowable disturbance value for which the model walked 500 steps without falling. Nominal walking motions were unstable, but were stabilized by all of the step-to-step control laws we tested. Surprisingly, step-by-step modulation of ankle push-off alone led to better balance performance (3.2% leg length) than lateral foot placement (1.2% leg length) for these control laws. These results suggest that appropriate control of robotic ankle-foot prosthesis push-off could make balancing during walking easier for individuals with amputation.
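The once-per-step control laws described above are discrete-time linear quadratic regulators acting on the linearized step-to-step map; a minimal sketch, with a hypothetical 2-state map standing in for the paper's 3D walking model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical linearized step-to-step (Poincare) map of a walker:
#   x[k+1] = A x[k] + B u[k]
# x = deviation of the step state from nominal, u = once-per-step change
# in an actuation parameter (e.g. ankle push-off work).
A = np.array([[1.3, 0.2],
              [0.1, 0.9]])   # open loop is unstable (spectral radius > 1)
B = np.array([[0.5],
              [0.1]])

Q = np.eye(2)                # penalty on state deviation
R = np.array([[0.1]])        # penalty on control effort

# Discrete-time LQR gain from the discrete algebraic Riccati equation.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
closed = A - B @ K

print("open-loop  |eig|:", np.abs(np.linalg.eigvals(A)))
print("closed-loop |eig|:", np.abs(np.linalg.eigvals(closed)))
```

Pulling every closed-loop eigenvalue of the step-to-step map inside the unit circle is exactly what stabilizes a nominally unstable limit-cycle gait with only once-per-step parameter updates.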
NASA Technical Reports Server (NTRS)
Glass, Christopher E.
1989-01-01
The effects of cylindrical leading-edge sweep on surface pressure and heat transfer rate for swept shock wave interference were investigated. Experimental tests were conducted in the Calspan 48-inch Hypersonic Shock Tunnel at a nominal Mach number of 8, a nominal unit Reynolds number of 1.5 × 10⁶ per foot, leading-edge and incident-shock-generator sweep angles of 0, 15, and 30 deg, and an incident shock generator angle of attack fixed at 12.5 deg. Detailed surface pressure and heat transfer rate on the cylindrical leading edge of a swept shock wave interference model were measured in the region of maximum surface pressure and heat transfer rate. Results show that pressure and heat transfer rate on the cylindrical leading edge of the shock wave interference model were reduced as the sweep was increased over the range of tested parameters. Peak surface pressure and heat transfer rate on the cylinder were about 10 and 30 times the undisturbed-flow stagnation point value, respectively, for the 0 deg sweep test. A comparison of the 15 and 30 deg swept results with the 0 deg swept results showed that peak pressure was reduced by about 13 percent and 44 percent, respectively, and peak heat transfer rate by about 7 percent and 27 percent, respectively.
Economic study of multipurpose advanced high-speed transport configurations
NASA Technical Reports Server (NTRS)
1979-01-01
A nondimensional economic examination of a parametrically derived set of supersonic transport aircraft was conducted. The measure of economic value was the surcharge relative to subsonic airplane tourist-class yield. Ten airplanes were defined according to size, payload, and speed. The price, range capability, fuel burned, and block time were determined for each configuration, then operating costs and surcharges were calculated. The parameter with the most noticeable influence on nominal surcharge was found to be a real (constant-dollar) fuel price increase. A change in SST design Mach number from 2.4 to 2.7 showed a very small surcharge advantage (on the order of 1 percent for the faster aircraft). Configuration design compromises required for an airplane to operate overland at supersonic speeds without causing sonic boom annoyance result in severe performance penalties and require high (more than 100 percent) surcharges.
Ambiguities in spaceborne synthetic aperture radar systems
NASA Technical Reports Server (NTRS)
Li, F. K.; Johnson, W. T. K.
1983-01-01
An examination of aspects of spaceborne SAR time delay and Doppler ambiguities has led to the formulation of an accurate method for the evaluation of the ratio of ambiguity intensities to that of the signal, which has been applied to the nominal SAR system on Seasat. After discussing the variation of this ratio as a function of orbital latitude and attitude control error, it is shown that the detailed range migration-azimuth phase history of an ambiguity is different from that of a signal, so that the images of ambiguities are dispersed. Seasat SAR dispersed images are presented, and their dispersions are eliminated through an adjustment of the processing parameters. A method is also presented which uses a set of multiple pulse repetition sequences to determine the Doppler centroid frequency absolute values for SARs with high carrier frequencies and poor attitude measurements.
Liu, Hao; Shao, Qi; Fang, Xuelin
2017-02-01
For the class-E amplifier in a wireless power transfer (WPT) system, the design parameters are usually determined from the nominal model. However, this model neglects the conduction loss and voltage stress of the MOSFET and cannot guarantee the highest efficiency in a WPT system for biomedical implants. To solve this problem, this paper proposes a novel circuit model of the subnominal class-E amplifier. On a WPT platform for a capsule endoscope, the proposed model was validated to be effective and the relationship between the amplifier's design parameters and its characteristics was analyzed. At a given duty ratio, the design parameters yielding the highest efficiency and safe voltage stress are derived, and this condition is called the 'optimal subnominal condition.' The amplifier's efficiency reaches its maximum of 99.3% at a duty ratio of 0.097. Furthermore, at a duty ratio of 0.5, the measured efficiency under the optimal subnominal condition reaches 90.8%, which is 15.2% higher than that of the nominal condition. A WPT experiment with a receiving unit was then carried out to validate the feasibility of the optimized amplifier. In general, the design parameters of a class-E amplifier in a WPT system for biomedical implants can be determined with the optimization method proposed in this paper.
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
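The second step of the methodology, comparing AIC across calibrations that free different parameter subsets, can be sketched with a toy nonlinear model. The model, parameter names, and data below are hypothetical stand-ins for the oilseed rape FSPM, and the candidate subsets are listed by hand rather than produced by a sensitivity ranking:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Synthetic observations from a model with two influential parameters
# (a, b) and one negligible one (c); a hypothetical stand-in for the FSPM.
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + 1.0 * x**2 + rng.normal(0.0, 0.05, x.size)

nominal = {"a": 1.5, "b": 0.5, "c": 0.0}   # nominal parameter values

def model(a, b, c):
    return a * x + b * x**2 + c * np.sin(20.0 * x)

def fit(free):
    """Calibrate only the selected parameters (others stay nominal); AIC."""
    names = list(free)
    def resid(theta):
        p = {**nominal, **dict(zip(names, theta))}
        return model(**p) - y
    sol = least_squares(resid, [nominal[n] for n in names])
    rss = float(np.sum(sol.fun**2))
    n, k = x.size, len(names)
    return 2 * k + n * np.log(rss / n)   # AIC up to an additive constant

for subset in [("a",), ("a", "b"), ("a", "b", "c")]:
    print(subset, f"AIC = {fit(subset):.1f}")
```

The subset with the lowest AIC balances fit quality against the number of re-estimated parameters, mirroring how the paper narrows 26 parameters to 11 and then recalibrates only three on a new data set.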
A Singular Perturbation Approach for Time-Domain Assessment of Phase Margin
NASA Technical Reports Server (NTRS)
Zhu, J. Jim; Yang, Xiaojing; Hodel, A Scottedward
2010-01-01
This paper considers the problem of time-domain assessment of the Phase Margin (PM) of a Single Input Single Output (SISO) Linear Time-Invariant (LTI) system using a singular perturbation approach, where a SISO LTI fast-loop system, whose phase lag increases monotonically with frequency, is introduced into the loop as a singular perturbation with a singular perturbation (time-scale separation) parameter Epsilon. First, a bijective relationship between the maximum Singular Perturbation Margin (SPM) and the PM of the nominal (slow) system is established with an approximation error on the order of Epsilon^2. In proving this result, relationships between the singular perturbation parameter Epsilon, the PM of the perturbed system, the PM and SPM of the nominal system, and the (monotonically increasing) phase of the fast system are also revealed. These results make it possible to assess the PM of the nominal system in the time domain for SISO LTI systems using the SPM with a standardized testing system called the "PM-gauge," as demonstrated by examples. PM is a widely used stability margin for LTI control system design and certification. Unfortunately, it is not applicable to Linear Time-Varying (LTV) and Nonlinear Time-Varying (NLTV) systems. The approach developed here can be used to establish a theoretical as well as practical metric of stability margin for LTV and NLTV systems using a standardized SPM that is backward compatible with PM.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology makes it possible (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Molecular Genetics of Successful Smoking Cessation: Convergent Genome-Wide Association Study Results
Uhl, George R.; Liu, Qing-Rong; Drgon, Tomas; Johnson, Catherine; Walther, Donna; Rose, Jed E.; David, Sean P.; Niaura, Ray; Lerman, Caryn
2008-01-01
Context Smoking remains a major public health problem. Twin studies indicate that the ability to quit smoking is substantially heritable, with genetics that overlap modestly with the genetics of vulnerability to dependence on addictive substances. Objectives To identify replicated genes that facilitate smokers’ abilities to achieve and sustain abstinence from smoking (hereinafter referred to as quit-success genes) found in more than 2 genome-wide association (GWA) studies of successful vs unsuccessful abstainers, and, secondarily, to nominate genes for selective involvement in smoking cessation success with bupropion hydrochloride vs nicotine replacement therapy (NRT). Design The GWA results in subjects from 3 centers, with secondary analyses of NRT vs bupropion responders. Setting Outpatient smoking cessation trial participants from 3 centers. Participants European American smokers who successfully vs unsuccessfully abstain from smoking with biochemical confirmation in a smoking cessation trial using NRT, bupropion, or placebo (N=550). Main Outcome Measures Quit-success genes, reproducibly identified by clustered nominally positive single-nucleotide polymorphisms (SNPs) in more than 2 independent samples with significant P values based on Monte Carlo simulation trials. The NRT-selective genes were nominated by clustered SNPs that display much larger t values for NRT vs placebo comparisons. The bupropion-selective genes were nominated by bupropion-selective results. Results Variants in quit-success genes are likely to alter cell adhesion, enzymatic, transcriptional, structural, and DNA, RNA, and/or protein-handling functions. Quit-success genes are identified by clustered nominally positive SNPs from more than 2 samples and are unlikely to represent chance observations (Monte Carlo P < .0003). These genes display modest overlap with genes identified in GWA studies of dependence on addictive substances and memory. 
Conclusions These results support polygenic genetics for success in abstaining from smoking, overlap with genetics of substance dependence and memory, and nominate gene variants for selective influences on therapeutic responses to bupropion vs NRT. Molecular genetics should help match the types and/or intensity of anti-smoking treatments with the smokers most likely to benefit from them. PMID:18519826
Generalized internal model robust control for active front steering intervention
NASA Astrophysics Data System (ADS)
Wu, Jian; Zhao, Youqun; Ji, Xuewu; Liu, Yahui; Zhang, Lipeng
2015-03-01
Because of tire nonlinearity and uncertainties in vehicle parameters, robust control methods designed for the worst case, such as H∞ and µ synthesis, have been widely used in active front steering (AFS) control. However, to guarantee the stability of the AFS controller, such robust designs trade performance for robustness, so the resulting controllers are conservative and deliver low performance. In this paper, a generalized internal model robust control (GIMC) scheme that can overcome the contradiction between performance and stability is applied to AFS control. GIMC uses the Youla parameterization in an improved way: the controller consists of two parts, a high-performance controller designed for the nominal vehicle model and a robust controller that compensates for vehicle parameter uncertainties and external disturbances. Simulations of a double lane change (DLC) maneuver and of braking on a split-µ road are conducted to compare the performance and stability of the GIMC controller, a nominal-performance PID controller, and an H∞ controller. The results show that the high-performance PID controller becomes unstable in some extreme situations because of large variations in vehicle parameters, that the H∞ controller is conservative and hence its performance is low, and that only the GIMC controller overcomes the contradiction between performance and robustness, both ensuring the stability of the AFS controller and delivering high performance. The proposed GIMC method therefore overcomes key disadvantages of the control methods used in current AFS systems: the instability of PID or LQR control and the low performance of the standard H∞ controller.
NASA Astrophysics Data System (ADS)
Levykin, S. V.; Chibilev, A. A.; Kazachkov, G. V.; Petrishchev, V. P.
2017-02-01
The evolution of Russian concepts concerning the assessment of soil suitability for cultivation in relation to several campaigns of large-scale plowing of virgin steppe soils is examined. The major problems of agricultural land use in steppe areas (preservation of rainfed farming in regions with increasing climatic risks, underestimation of the potential of arable lands in land cadaster assessments, and much lower actual yields in comparison with potential yields) are considered. It is suggested that assessments of arable lands should be performed on the basis of the soil-ecological index (SEI) developed by I. Karmanov, with further conversion of SEI values into nominal monetary values. Under conditions of land and economic reforms, it is important to determine the suitability of steppe chernozems for plowing and the economic feasibility of their use for crop growing depending on macroeconomic parameters. This should support decisions on optimization of land use in the steppe zone on the basis of the principles suggested by V. Dokuchaev. The developed approach for assessing soil suitability for cultivation was tested in the subzone of herbaceous-fescue-feather grass steppes in the Cis-Ural part of Orenburg oblast and used for the assessment of soil suitability for cultivation in the southern and southeastern regions of Orenburg oblast.
NASA Astrophysics Data System (ADS)
Li, Yuankai; Ding, Liang; Zheng, Zhizhong; Yang, Qizhi; Zhao, Xingang; Liu, Guangjun
2018-05-01
For motion control of wheeled planetary rovers traversing on deformable terrain, real-time terrain parameter estimation is critical in modeling the wheel-terrain interaction and compensating the effect of wheel slipping. A multi-mode real-time estimation method is proposed in this paper to achieve accurate terrain parameter estimation. The proposed method is composed of an inner layer for real-time filtering and an outer layer for online update. In the inner layer, sinkage exponent and internal frictional angle, which have higher sensitivity than that of the other terrain parameters to wheel-terrain interaction forces, are estimated in real time by using an adaptive robust extended Kalman filter (AREKF), whereas the other parameters are fixed with nominal values. The inner layer result can help synthesize the current wheel-terrain contact forces with adequate precision, but has limited prediction capability for time-variable wheel slipping. To improve estimation accuracy of the result from the inner layer, an outer layer based on recursive Gauss-Newton (RGN) algorithm is introduced to refine the result of real-time filtering according to the innovation contained in the history data. With the two-layer structure, the proposed method can work in three fundamental estimation modes: EKF, REKF and RGN, making the method applicable for flat, rough and non-uniform terrains. Simulations have demonstrated the effectiveness of the proposed method under three terrain types, showing the advantages of introducing the two-layer structure.
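The two-layer structure can be sketched, very loosely, as an inner recursive filter plus an outer batch refinement. The toy scalar measurement model and all numbers below are invented for illustration; they stand in for, and do not reproduce, the paper's wheel-terrain force model or its AREKF/RGN formulations.

```python
import numpy as np

def ekf_scalar(theta0, P0, zs, h, dh, R):
    """Inner layer: scalar EKF recursion for a static parameter theta,
    with measurement model z = h(theta) + noise."""
    theta, P = theta0, P0
    for z in zs:
        H = dh(theta)                # measurement Jacobian at current estimate
        S = H * P * H + R            # innovation covariance
        K = P * H / S                # Kalman gain
        theta += K * (z - h(theta))  # state (parameter) update
        P *= (1.0 - K * H)           # covariance update
    return theta

def gauss_newton_scalar(theta0, zs, h, dh, iters=20):
    """Outer layer: batch Gauss-Newton refinement over the full history."""
    theta = theta0
    for _ in range(iters):
        r = zs - h(theta)            # residuals against the whole record
        J = dh(theta)                # scalar Jacobian, equal for all samples
        theta += np.sum(J * r) / (len(zs) * J * J)
    return theta

rng = np.random.default_rng(0)
true_theta = 1.3                     # e.g. a sinkage-exponent-like quantity
h, dh = (lambda t: t ** 2), (lambda t: 2.0 * t)   # toy nonlinear model
zs = h(true_theta) + 0.05 * rng.standard_normal(200)

inner = ekf_scalar(1.0, 1.0, zs, h, dh, R=0.05 ** 2)
outer = gauss_newton_scalar(inner, zs, h, dh)
print(round(inner, 3), round(outer, 3))
```

The outer pass refines the filtered estimate using the whole data record, mirroring the inner/outer division described in the abstract.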
29 CFR 95.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-07-01
... JURISDICTION OF FOREIGN GOVERNMENTS, AND INTERNATIONAL ORGANIZATIONS Post-Award Requirements Procurement... interest is not substantial or the gift is an unsolicited item of nominal value. The standards of conduct...
Robust root clustering for linear uncertain systems using generalized Lyapunov theory
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1993-01-01
Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Liapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.
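The nominal (unperturbed) half of this machinery can be sketched numerically. A shifted half-plane is used here as the simplest clustering region: the eigenvalues of A all lie in Re(s) < -alpha exactly when the Lyapunov equation for A + alpha*I with positive definite Q has a symmetric positive definite solution. The matrix and region below are illustrative only; the paper's perturbation bounds are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def roots_left_of(A, alpha):
    """True iff every eigenvalue of A satisfies Re(lambda) < -alpha."""
    n = A.shape[0]
    As = A + alpha * np.eye(n)
    # Solve As^T P + P As = -I for P.
    P = solve_continuous_lyapunov(As.T, -np.eye(n))
    # Root clustering holds iff P is symmetric positive definite.
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > 0))

A = np.array([[-3.0, 1.0],
              [0.0, -2.0]])          # eigenvalues -3 and -2
print(roots_left_of(A, 1.0))         # both roots left of Re(s) = -1
print(roots_left_of(A, 2.2))         # -2 is not left of Re(s) = -2.2
```

The perturbed-matrix extension described in the abstract then asks how large a perturbation of A can be before this positive definiteness is lost.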
The medial prefrontal cortex exhibits money illusion
Weber, Bernd; Rangel, Antonio; Wibral, Matthias; Falk, Armin
2009-01-01
Behavioral economists have proposed that money illusion, which is a deviation from rationality in which individuals engage in nominal evaluation, can explain a wide range of important economic and social phenomena. This proposition stands in sharp contrast to the standard economic assumption of rationality that requires individuals to judge the value of money only on the basis of the bundle of goods that it can buy—its real value—and not on the basis of the actual amount of currency—its nominal value. We used fMRI to investigate whether the brain's reward circuitry exhibits money illusion. Subjects received prizes in 2 different experimental conditions that were identical in real economic terms, but differed in nominal terms. Thus, in the absence of money illusion there should be no differences in activation in reward-related brain areas. In contrast, we found that areas of the ventromedial prefrontal cortex (vmPFC), which have been previously associated with the processing of anticipatory and experienced rewards, and the valuation of goods, exhibited money illusion. We also found that the amount of money illusion exhibited by the vmPFC was correlated with the amount of money illusion exhibited in the evaluation of economic transactions. PMID:19307555
Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System
NASA Technical Reports Server (NTRS)
Williams-Hayes, Peggy S.
2004-01-01
The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate the benefits of neural network-based adaptive controllers, with the objective of developing and flight-testing control systems that use neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Examination of the flight data shows that adding flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.
Ruthrauff, Daniel R.; Dekinga, Anne; Gill, Robert E.; Piersma, Theunis
2013-01-01
Closely related species or subspecies can exhibit metabolic differences that reflect site-specific environmental conditions. Whether such differences represent fixed traits or flexible adjustments to local conditions, however, is difficult to predict across taxa. The nominate race of Rock Sandpiper (Calidris ptilocnemis) exhibits the most northerly nonbreeding distribution of any shorebird in the North Pacific, being common during winter in cold, dark locations as far north as upper Cook Inlet, Alaska (61°N). By contrast, the tschuktschorum subspecies migrates to sites ranging from about 59°N to more benign locations as far south as ~37°N. These distributional extremes exert contrasting energetic demands, and we measured common metabolic parameters in the two subspecies held under identical laboratory conditions to determine whether differences in these parameters are reflected by their nonbreeding life histories. Basal metabolic rate and thermal conductance did not differ between subspecies, and the subspecies had a similar metabolic response to temperatures below their thermoneutral zone. Relatively low thermal conductance values may, however, reflect intrinsic metabolic adaptations to northerly latitudes. In the absence of differences in basic metabolic parameters, the two subspecies’ nonbreeding distributions will likely be more strongly influenced by adaptations to regional variation in ecological factors such as prey density, prey quality, and foraging habitat.
Thirteenth International Workshop on Principles of Diagnosis (DX-2002)
2002-05-01
Aplicación de la red neuronal SOM para la detección de fallos desconocidos en un grupo hidroeléctrico. In proceedings of the I Jornadas de Trabajo sobre...into a discrete value such as “nominal,” or “off-nominal high” must be sophisticated enough to take all the environmental conditions into account...of the tanks. A boost pump in each of the feed tanks controls the supply of fuel from the tank to its respective engine. The transfer system moves
NASA Technical Reports Server (NTRS)
Rausch, J. R.
1977-01-01
The effect of interaction between the reaction control system (RCS) jets and the flow over the space shuttle orbiter in the atmosphere was investigated in the NASA Langley 31-inch continuous flow hypersonic tunnel at a nominal Mach number of 10.3 and in the AEDC continuous flow hypersonic tunnel B at a nominal Mach number of 6, using 0.01 and 0.0125 scale force models with aft RCS nozzles mounted both on the model and on the sting of the force model balance. The data show that RCS nozzle exit momentum ratio is the primary correlating parameter for effects where the plume impinges on an adjacent surface and mass flow ratio is the parameter when the plume interaction is primarily with the external stream. An analytic model of aft mounted RCS units was developed in which the total reaction control moments are the sum of thrust, impingement, interaction, and cross-coupling terms.
Parametric study of the lubrication of thrust loaded 120-mm bore ball bearings to 3 million DN
NASA Technical Reports Server (NTRS)
Signer, H.; Bamberger, E. N.; Zaretsky, E. V.
1973-01-01
A parametric study was performed with 120-mm bore angular-contact ball bearings under varying thrust loads, bearing and lubricant temperatures, and cooling and lubricant flow rates. Contact angles were nominally 20 and 24 deg with bearing speeds to 3 million DN. Endurance tests were run at 3 million DN and a temperature of 492 K (425 F) with 10 bearings having a nominal 24 deg contact angle at a thrust load of 22241 N (5000 lb). Bearing operating temperature, differences in temperatures between the inner and outer races, and bearing power consumption can be tuned to any desirable operating requirement by varying 4 parameters. These parameters are outer-race cooling, inner-race cooling, lubricant flow to the inner race, and oil inlet temperature. Preliminary endurance tests at 3 million DN and 492 K (425 F) indicate that long term bearing operation can be achieved with a high degree of reliability.
NASCAP simulation of PIX 2 experiments
NASA Technical Reports Server (NTRS)
Roche, J. C.; Mandell, M. J.
1985-01-01
The latest version of the NASCAP/LEO digital computer code used to simulate the PIX 2 experiment is discussed. NASCAP is a finite-element code and previous versions were restricted to a single fixed mesh size. As a consequence the resolution was dictated by the largest physical dimension to be modeled. The latest version of NASCAP/LEO can subdivide selected regions. This permitted the modeling of the overall Delta launch vehicle in the primary computational grid at a coarse resolution, with subdivided regions at finer resolution being used to pick up the details of the experiment module configuration. Langmuir probe data from the flight were used to estimate the space plasma density and temperature and the Delta ground potential relative to the space plasma. This information is needed for input to NASCAP. Because of the uncertainty or variability in the values of these parameters, it was necessary to explore a range around the nominal value in order to determine the variation in current collection. The flight data from PIX 2 were also compared with the results of the NASCAP simulation.
7 CFR 3019.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-01-01
... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Procurement Standards... interest is not substantial or the gift is an unsolicited item of nominal value. The standards of conduct...
45 CFR 74.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., AND COMMERCIAL ORGANIZATIONS Post-Award Requirements Procurement Standards § 74.42 Codes of conduct... the gift is an unsolicited item of nominal value. The standards of conduct shall provide for...
36 CFR 1210.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-07-01
... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Procurement Standards... interest is not substantial or the gift is an unsolicited item of nominal value. The standards of conduct...
Silicon Nitride Creep Under Various Specimen-Loading Configurations
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Holland, Frederic A.
2000-01-01
Extensive creep testing of a hot-pressed silicon nitride (NC 132) was performed at 1300 C in air using five different specimen-loading configurations: (1) pure tension, (2) pure compression, (3) four-point uniaxial flexure, (4) ball-on-ring biaxial flexure, and (5) ring-on-ring biaxial flexure. This paper reports experimental results as well as the test techniques developed in this work. Nominal creep strain and its rate for a given nominal applied stress were greatest in tension, least in compression, and intermediate in uniaxial and biaxial flexure. Except for the case of compression loading, the nominal creep strain rate generally decreased with time, resulting in a less well defined steady-state condition. Of the four creep formulations considered (power-law, hyperbolic sine, step, and redistribution), the conventional power-law formulation still provides the most convenient and reasonable estimation of the creep parameters of the NC 132 material. The resulting database will be used to validate the NASA Glenn-developed design code CARES/Creep (Ceramics Analysis and Reliability Evaluation of Structures and Creep).
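As a hedged aside, the power-law formulation favored above reduces parameter estimation to a straight-line fit in log-log space. The stress and strain-rate values below are synthetic (generated with n = 2, A = 1e-13), not NC 132 data.

```python
import numpy as np

# Conventional power-law creep: eps_dot = A * sigma**n.
sigma = np.array([50.0, 75.0, 100.0, 150.0])   # nominal applied stress, MPa
eps_dot = 1e-13 * sigma ** 2.0                 # synthetic creep rates, 1/s

# Taking logs gives log(eps_dot) = n*log(sigma) + log(A): a line whose
# slope is the stress exponent and whose intercept gives the prefactor.
n, logA = np.polyfit(np.log(sigma), np.log(eps_dot), 1)
print(round(n, 3), f"{np.exp(logA):.1e}")
```

With noisy experimental data the same fit yields least-squares estimates of n and A rather than exact recovery.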
Modern methodology of designing target reliability into rotating mechanical components
NASA Technical Reports Server (NTRS)
Kececioglu, D. B.; Chester, L. B.
1973-01-01
Experimentally determined distributional cycles-to-failure versus maximum alternating nominal strength (S-N) diagrams, and distributional mean nominal strength versus maximum alternating nominal strength (Goodman) diagrams, are presented. These distributional S-N and Goodman diagrams are for AISI 4340 steel, Rc 35/40 hardness, round cylindrical specimens 0.735 in. in diameter and 6 in. long with a circumferential groove of 0.145 in. radius for a theoretical stress concentration of 1.42 and of 0.034 in. radius for a stress concentration of 2.34. The specimens were subjected to reversed bending and steady torque in three specially built complex-fatigue research machines. Based on these results, the effects of superimposing steady torque on reversed bending on the distributional S-N and Goodman diagrams and on service life are established, as well as the effect of various stress concentrations. In addition, a computer program for determining the three-parameter Weibull distribution representing the cycles-to-failure data, and two methods for calculating the reliability of components subjected to cumulative fatigue loads, are given.
SU-E-T-98: An Analysis of TG-51 Electron Beam Calibration Correction Factor Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, P; Alvarez, P; Taylor, P
Purpose: To analyze the uncertainty of the TG-51 electron beam calibration correction factors for Farmer-type ion chambers currently used by institutions visited by IROC Houston. Methods: TG-51 calibration data were collected from 181 institutions visited by IROC Houston physicists, covering 1174 and 197 distinct electron beams from modern Varian and Elekta accelerators, respectively. Data collected and analyzed included ion chamber make and model, nominal energy, N_D,w, I_50, R_50, k′_R50, d_ref, P_gr, and pdd(d_ref). k′_R50 data for parallel-plate chambers were excluded from the analysis. Results: Unlike photon beams, electron nominal energy is a poor indicator of the actual energy, as evidenced by the range of R_50 values for each electron beam energy (6-22 MeV). The large range in R_50 values resulted in k′_R50 values with a small standard deviation but a large range (0.001-0.029) between the maximum and minimum values used for a specific Varian nominal energy. Varian data showed more variability in k′_R50 values than the Elekta data (0.001-0.014). Using the observed range of R_50 values, the maximum spread in k′_R50 values was determined by IROC Houston and compared to the spread of k′_R50 values used in the community. For Elekta linacs the spreads were equivalent, but for Varian energies of 6 to 16 MeV, the community spread was 2 to 6 times larger. Community P_gr values had a much larger range for 6 and 9 MeV than predicted. The range in Varian pdd(d_ref) used by the community for low energies was large (1.4-4.9 percent) when it should have been very close to unity. Exradin, PTW Roos, and PTW Farmer chamber N_D,w values showed the largest spread, ≥11 percent. Conclusion: While the vast majority of electron beam calibration correction factors used are accurate, there is a surprising spread in some of the values used.
NASA Astrophysics Data System (ADS)
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf
2015-05-01
All surveying instruments and their measurements suffer from errors. To refine the measurement results, it is necessary to use procedures that restrict the influence of instrument errors on the measured values or to apply numerical corrections. In precise engineering surveying, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the accuracy of the derived values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were carried out with the idea of suppressing the random error by averaging repeated measurements and reducing the influence of systematic errors by identifying their absolute size on an absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: by interpolation on the raw data, or by a correction function derived using an FFT. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm.
For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. For the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, well suited to increasing the accuracy of electronic distance measurement and allows common surveying instruments to achieve uncommonly high precision.
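The first of the two correction approaches (interpolation on the raw calibration data) can be sketched as follows. The pillar spacings and error values are made up, and the FFT-derived correction function is not reproduced here.

```python
import numpy as np

# Hypothetical absolute-baseline residuals (instrument minus reference)
# at known pillar spacings, in the spirit of the calibration above.
baseline_d = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # metres
errors_mm = np.array([0.8, -0.4, 1.1, -0.9, 0.5, -0.2])      # systematic error

def corrected(measured_m):
    """Subtract the interpolated systematic error from a measured distance."""
    err_mm = np.interp(measured_m, baseline_d, errors_mm)
    return measured_m - err_mm / 1000.0

print(corrected(12.5))   # error interpolated halfway between the 10 m and 15 m values
```

In practice the correction table would span the full 38.6 m baseline and be re-derived per instrument.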
26 CFR 1.165-4 - Decline in value of stock.
Code of Federal Regulations, 2012 CFR
2012-04-01
... authorities may require that stock owned by such organizations be charged off as worthless or written down to... deducting the loss under section 165(a) merely because, in obedience to the specific orders or general policy of such supervisory authorities, the value of the stock is written down to a nominal amount...
26 CFR 1.165-4 - Decline in value of stock.
Code of Federal Regulations, 2010 CFR
2010-04-01
... authorities may require that stock owned by such organizations be charged off as worthless or written down to... deducting the loss under section 165(a) merely because, in obedience to the specific orders or general policy of such supervisory authorities, the value of the stock is written down to a nominal amount...
26 CFR 1.165-4 - Decline in value of stock.
Code of Federal Regulations, 2011 CFR
2011-04-01
... authorities may require that stock owned by such organizations be charged off as worthless or written down to... deducting the loss under section 165(a) merely because, in obedience to the specific orders or general policy of such supervisory authorities, the value of the stock is written down to a nominal amount...
26 CFR 1.165-4 - Decline in value of stock.
Code of Federal Regulations, 2013 CFR
2013-04-01
... authorities may require that stock owned by such organizations be charged off as worthless or written down to... deducting the loss under section 165(a) merely because, in obedience to the specific orders or general policy of such supervisory authorities, the value of the stock is written down to a nominal amount...
26 CFR 1.165-4 - Decline in value of stock.
Code of Federal Regulations, 2014 CFR
2014-04-01
... authorities may require that stock owned by such organizations be charged off as worthless or written down to... deducting the loss under section 165(a) merely because, in obedience to the specific orders or general policy of such supervisory authorities, the value of the stock is written down to a nominal amount...
46 CFR 310.53 - Nominations and vacancies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Nebraska 2 Nevada 2 New Hampshire 2 New Jersey 6 New Mexico 2 New York 15 North Carolina 6 North Dakota 1...) qualified citizens who possess qualities deemed to be of special value to the Academy. In making these... and to recognizing individuals with qualities deemed to be of special value to the Academy. [47 FR...
46 CFR 310.53 - Nominations and vacancies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Nebraska 2 Nevada 2 New Hampshire 2 New Jersey 6 New Mexico 2 New York 15 North Carolina 6 North Dakota 1...) qualified citizens who possess qualities deemed to be of special value to the Academy. In making these... and to recognizing individuals with qualities deemed to be of special value to the Academy. [47 FR...
ERIC Educational Resources Information Center
Shuster, Michael M.; Li, Yan; Shi, Junqi
2012-01-01
Interrelations among cultural values, parenting practices, and adolescent aggression were examined using longitudinal data collected from Chinese adolescents and their mothers. Adolescents' overt and relational aggression were assessed using peer nominations at Time 1 (7th grade) and Time 2 (9th grade). Mothers reported endorsement of cultural…
Large Second-Harmonic Response of C60 Thin Films
1992-04-01
temperature; the largest value occurred at a nominal temperature of 140 °C, where the nonlinear susceptibility is ten times larger than the room-temperature value. The purity was examined by Raman, IR absorption, and high-performance liquid chromatography.
Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.
Munir, Mohammad
2018-06-01
Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.
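The notion of a generalized sensitivity function (GSF) can be illustrated on a toy one-parameter model, not the minimal model itself; k, the time grid, and the unit measurement variance below are all invented. The GSF cumulates Fisher-information-normalised squared output sensitivities, rising from 0 to 1, and flat stretches mark sampling times whose measurements carry no information about the parameter estimate, which is the kind of behavior reported above for the insulin data.

```python
import numpy as np

# Toy model y(t) = exp(-k * t) with a single parameter k.
t = np.linspace(0.1, 10.0, 100)       # sampling times
k = 0.8                               # nominal parameter value
sens = -t * np.exp(-k * t)            # output sensitivity dy/dk at nominal k
fisher = np.sum(sens ** 2)            # scalar Fisher information (sigma = 1)
gsf = np.cumsum(sens ** 2) / fisher   # generalized sensitivity function

late = gsf[t > 8.0]                   # by late times the GSF has flattened
print(round(float(gsf[-1]), 6), round(float(late[0]), 3))
```

The GSF ends at 1 by construction; where it has already flattened, further samples no longer inform the estimate.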
Spacecraft Thermal and Optical Modeling Impacts on Estimation of the GRAIL Lunar Gravity Field
NASA Technical Reports Server (NTRS)
Fahnestock, Eugene G.; Park, Ryan S.; Yuan, Dah-Ning; Konopliv, Alex S.
2012-01-01
We summarize work performed involving thermo-optical modeling of the two Gravity Recovery And Interior Laboratory (GRAIL) spacecraft. We derived several reconciled spacecraft thermo-optical models of varying detail. We used the simplest in calculating solar radiation pressure (SRP) acceleration and the most detailed to calculate acceleration due to thermal re-radiation. For the latter, we used both the output of pre-launch finite-element-based thermal simulations and downlinked temperature sensor telemetry. The estimation process to recover the lunar gravity field utilizes both a nominal thermal re-radiation acceleration history and an a priori error model derived from it plus an off-nominal history, which bounds parameter uncertainties as informed by sensitivity studies.
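The SRP half of the modeling can be sketched with the textbook flat-plate/cannonball formula; this is not the GRAIL team's reconciled model, and the area-to-mass ratio and reflectivity coefficient below are invented placeholders.

```python
# Cannonball SRP acceleration: a = (Phi / c) * (A / m) * Cr, where Phi is
# the solar flux, c the speed of light, A/m the area-to-mass ratio, and
# Cr a reflectivity coefficient. A/m and Cr here are assumed values.
SOLAR_FLUX = 1361.0           # W/m^2 at 1 AU
C = 299792458.0               # speed of light, m/s
A_over_m = 0.02               # area-to-mass ratio, m^2/kg (assumed)
Cr = 1.3                      # reflectivity coefficient (assumed)

a_srp = (SOLAR_FLUX / C) * A_over_m * Cr   # m/s^2
print(f"{a_srp:.3e}")                      # prints 1.180e-07
```

Accelerations of this order of magnitude are why multi-plate thermo-optical models, rather than a single coefficient, are needed for precise gravity recovery.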
Electrical and optical performance of midwave infrared InAsSb heterostructure detectors
NASA Astrophysics Data System (ADS)
Gomółka, Emilia; Kopytko, Małgorzata; Markowska, Olga; Michalczewski, Krystian; Kubiszyn, Łukasz; Kębłowski, Artur; Jureńczyk, Jarosław; Gawron, Waldemar; Martyniuk, Piotr Marcin; Piotrowski, Józef; Rutkowski, Jarosław; Rogalski, Antoni
2018-02-01
We investigate the high-operating-temperature performance of InAsSb/AlSb heterostructure detectors with cutoff wavelengths near 5 μm at 230 K. The devices have been fabricated with different types of absorbing layers: a nominally undoped absorber (with n-type conductivity) and both n- and p-type doped absorbers. The results show that the device performance strongly depends on the absorber layer type. Generally, the p-type absorber provides higher values of current responsivity than the n-type absorber, but at the same time also higher values of dark current. The device with the nominally undoped absorbing layer shows moderate values of both current responsivity and dark current. The resulting detectivities D* of nonimmersed devices vary from 2 × 10^9 to 5 × 10^9 cm Hz^1/2 W^-1 at 230 K, which is easily achievable with a two-stage thermoelectric cooler. Optical immersion increases the detectivity up to 5 × 10^10 cm Hz^1/2 W^-1.
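For context on the detectivity figures quoted above, here is a hedged sketch of the standard shot-noise-limited relation between current responsivity, device area, and dark current. All numbers below are invented for illustration and are not the measured values of the InAsSb/AlSb devices.

```python
import math

# Shot-noise-limited detectivity: D* = R_i * sqrt(A_opt) / sqrt(2 * q * I_dark).
q = 1.602e-19       # elementary charge, C
R_i = 2.0           # current responsivity, A/W (assumed)
A_opt = 1e-4        # optical area, cm^2 (assumed)
I_dark = 1e-5       # dark current, A (assumed)

D_star = R_i * math.sqrt(A_opt) / math.sqrt(2.0 * q * I_dark)
print(f"{D_star:.2e} cm Hz^1/2 W^-1")
```

The formula makes the trade-off in the abstract explicit: a p-type absorber's higher responsivity raises D* only insofar as it is not offset by the accompanying increase in dark current.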
Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael
2014-04-01
We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles and great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
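The practical consequence can be sketched numerically: if the statistic follows a scaled non-central $\chi^2$ rather than the central one, the realised type-I error at a nominal level is the tail of the scaled distribution beyond the nominal critical value. The scaling factor and non-centrality below are invented placeholders, not the paper's explicit formulae.

```python
from scipy.stats import chi2, ncx2

def realised_alpha(alpha, c, lam, df=1):
    """Realised type-I error when the statistic is c * X, X ~ ncx2(df, lam),
    but the critical value is taken from the central chi2(df)."""
    crit = chi2.ppf(1.0 - alpha, df)     # nominal critical value
    return ncx2.sf(crit / c, df, lam)    # P(c * X > crit)

nominal = 0.05
inflated = realised_alpha(nominal, c=1.1, lam=0.5)  # invented c and lam
print(inflated > nominal)                            # error inflates alpha
```

With c = 1 and lam = 0 the function recovers the nominal level exactly, so any excess is attributable to the error-induced scaling and non-centrality.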
A preliminary assessment of the Titan planetary boundary layer
NASA Technical Reports Server (NTRS)
Allison, Michael
1992-01-01
Results of a preliminary assessment of the characteristic features of the Titan planetary boundary layer are addressed. These were derived from the combined application of a patched Ekman surface layer model and Rossby number similarity theory. Both of these models, together with Obukhov scaling, surface speed limits, and saltation, are discussed. A characteristic Ekman depth of approximately 0.7 km is anticipated, with an eddy viscosity of approximately 1000 sq cm/s, an associated friction velocity of approximately 0.01 m/s, and a surface wind typically smaller than 0.6 m/s. Actual values of these parameters probably vary by as much as a factor of two or three in response to local temporal variations in surface roughness and stability. The saltation threshold for the windblown injection of approximately 50 micrometer particulates into the atmosphere is less than twice the nominal friction velocity, suggesting that dusty breezes might be an occasional feature of Titan meteorology.
Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das
2018-02-01
This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, an input-output pairing analysis is performed to select the most suitable input for controlling each output, with the aim of attenuating loop coupling. The plant uncertainty limits are then selected and expressed in interval form around the parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain desirable performance even in the presence of parametric uncertainties. Furthermore, performance indexes calculated from the experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
de la Cruz, Javier; Cano, Ulises; Romero, Tatiana
2016-10-01
The nominal clamping pressure is a critical parameter for electrical contact in a PEM fuel cell. Predicting the mechanical behavior of all components in a fuel cell stack is a very complex task due to the diversity of material properties. Prior to the integration of a 3 kW PEMFC power plant, a numerical simulation was performed to obtain the mechanical stress distribution for two of the most pressure-sensitive components of the stack: the membrane and the graphite plates. The stress distribution of these components was numerically simulated by finite element analysis, and the stress magnitude for the membrane was confirmed using pressure films. Stress values were found to lie within the elastic zone, which guarantees the mechanical integrity of the fuel cell components. These low stress levels, particularly for the membrane, should prolong the life and integrity of the fuel cell stack according to its design specifications.
Smith, Geoff; Jeeraruangrattana, Yowwares; Ermolina, Irina
2018-06-22
Through-vial impedance spectroscopy (TVIS) is a product-non-invasive process analytical technology which exploits the frequency dependence of the complex impedance spectrum of a composite object (i.e., the freeze-drying vial and its contents) in order to track the progression of the freeze-drying cycle. This work demonstrates the use of a dual electrode system, attached to the external surface of a type I glass tubing vial (nominal capacity 10 mL), in the prediction of (i) the ice interface temperatures at the sublimation front and at the base of the vial, and (ii) the primary drying rate. A value for the heat transfer coefficient (for a chamber pressure of 270 µbar) was then calculated from these parameters and shown to be comparable to that published by Tchessalov [1]. Copyright © 2018. Published by Elsevier B.V.
Internally heated mantle convection and the thermal and degassing history of the earth
NASA Technical Reports Server (NTRS)
Williams, David R.; Pan, Vivian
1992-01-01
An internally heated model of parameterized whole mantle convection with viscosity dependent on temperature and volatile content is examined. The model is run for 4.6 Gyr, and temperature, heat flow, degassing and regassing rates, stress, and viscosity are calculated. A nominal case is established which shows good agreement with accepted mantle values. The effects of changing various parameters are also tested. All cases show rapid cooling early in the planet's history and strong self-regulation of viscosity due to the temperature and volatile-content dependence. The effects of weakly stress-dependent viscosity are examined within the bounds of this model and are found to be small. Mantle water is typically outgassed rapidly to reach an equilibrium concentration on a time scale of less than 200 Myr for almost all models, the main exception being models which start out with temperatures well below the melting temperature.
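The self-regulation mechanism can be sketched with a toy energy balance: if convective heat loss rises steeply with temperature (a stand-in for the Rayleigh-number/viscosity feedback, where hotter mantle means lower viscosity and more vigorous convection), widely different initial temperatures converge to nearly the same equilibrium. All constants here are illustrative, not the paper's calibrated values.

```python
import math

def heat_flow(T, q0=1.0, c=0.02, t_ref=2000.0):
    # Convective heat loss that rises steeply with temperature: a stand-in
    # for the viscosity feedback (hotter -> less viscous -> more vigorous
    # convection -> faster cooling). All constants are illustrative.
    return q0 * math.exp(c * (T - t_ref))

def evolve(T0, heating=1.0, dt=0.05, steps=8000):
    # Forward-Euler integration of C dT/dt = H - Q(T), with C = 1 and
    # constant internal heating H, in arbitrary units.
    T = T0
    for _ in range(steps):
        T += dt * (heating - heat_flow(T))
    return T

# A hot start cools rapidly, a cold start warms slowly; both are pulled
# toward the same equilibrium temperature by the steep Q(T) feedback.
hot, cold = evolve(2300.0), evolve(1800.0)
print(f"hot start -> {hot:.0f} K, cold start -> {cold:.0f} K")
```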
NASA Astrophysics Data System (ADS)
Xu, Peiliang
2018-06-01
The numerical integration method has been routinely used by major institutions worldwide, for example, NASA Goddard Space Flight Center and the German Research Centre for Geosciences (GFZ), to produce global gravitational models from satellite tracking measurements of CHAMP and/or GRACE types. Such Earth gravitational products have found wide multidisciplinary application in the Earth sciences. The method is essentially implemented by solving the differential equations of the partial derivatives of the orbit of a satellite with respect to the unknown harmonic coefficients under the condition of zero initial values. From the mathematical and statistical point of view, satellite gravimetry from satellite tracking is essentially the problem of estimating unknown parameters in Newton's nonlinear differential equations from satellite tracking measurements. We prove that zero initial values for the partial derivatives are mathematically incorrect and physically not permitted. The numerical integration method, as currently implemented and used in mathematics and statistics, chemistry and physics, and satellite gravimetry, is groundless, mathematically and physically. Given Newton's nonlinear governing differential equations of satellite motion with unknown equation parameters and unknown initial conditions, we develop three methods to derive new local solutions around a nominal reference orbit, which are linked to measurements to estimate the unknown corrections to approximate values of the unknown parameters and the unknown initial conditions. Bearing in mind that satellite orbits can now be tracked almost continuously at unprecedented accuracy, we propose a measurement-based perturbation theory and derive global uniformly convergent solutions to Newton's nonlinear governing differential equations of satellite motion for the next generation of global gravitational models.
Since the solutions are globally uniformly convergent, they are, theoretically speaking, able to extract the smallest possible gravitational signals from modern and future satellite tracking measurements, leading to the production of global high-precision, high-resolution gravitational models. By directly turning the nonlinear differential equations of satellite motion into nonlinear integral equations, and recognizing the fact that satellite orbits are measured with random errors, we further reformulate the links between satellite tracking measurements and the globally uniformly convergent solutions to Newton's governing differential equations as a condition adjustment model with unknown parameters or, equivalently, the weighted least squares estimation of unknown differential equation parameters with equality constraints, for the reconstruction of global high-precision, high-resolution gravitational models from modern (and future) satellite tracking measurements.
34 CFR 74.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Procurement Standards § 74.42 Codes of conduct. The recipient shall maintain written standards of conduct... interest is not substantial or the gift is an unsolicited item of nominal value. The standards of conduct...
24 CFR 84.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Procurement Standards § 84.42 Codes of... substantial or the gift is an unsolicited item of nominal value. The standards of conduct shall provide for...
22 CFR 518.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Procurement Standards § 518.42 Codes of conduct. The recipient shall maintain written standards of conduct... financial interest is not substantial or the gift is an unsolicited item of nominal value. The standards of...
2 CFR 215.42 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-01-01
... NON-PROFIT ORGANIZATIONS (OMB CIRCULAR A-110) Post Award Requirements Procurement Standards § 215.42... interest is not substantial or the gift is an unsolicited item of nominal value. The standards of conduct...
22 CFR 145.42 - Code of conduct.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements Procurement Standards § 145.42 Code of... substantial or the gift is an unsolicited item of nominal value. The standards of conduct shall provide for...
41 CFR 105-72.502 - Codes of conduct.
Code of Federal Regulations, 2010 CFR
2010-07-01
... EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS 72.50-Post-Award Requirements/Procurement Standards... interest is not substantial or the gift is an unsolicited item of nominal value. The standards of conduct...
Remapping Nominal Features in the Second Language
ERIC Educational Resources Information Center
Cho, Ji-Hyeon Jacee
2012-01-01
This dissertation investigates second language (L2) development in the domains of morphosyntax and semantics. Specifically, it examines the acquisition of definiteness and specificity in Russian within the Feature Re-assembly framework (Lardiere, 2009), according to which the hardest L2 learning task is not to reset parameters but to reconfigure,…
Parametric excitation of tire-wheel assemblies by a stiffness non-uniformity
NASA Astrophysics Data System (ADS)
Stutts, D. S.; Krousgrill, C. M.; Soedel, W.
1995-01-01
A simple model of the effect of a concentrated radial stiffness non-uniformity in a passenger car tire is presented. The model treats the tread band of the tire as a rigid ring supported on a viscoelastic foundation. The distributed radial stiffness is lumped into equivalent horizontal (fore-and-aft) and vertical stiffnesses. The concentrated radial stiffness non-uniformity is modeled by treating the tread band as fixed, with the stiffness non-uniformity rotating around it at the nominal angular velocity of the wheel. Due to loading, the center of mass of the tread band ring model is displaced upward with respect to the wheel spindle, and the rotating stiffness non-uniformity is therefore alternately compressed and stretched through one complete rotation. This stretching and compressing of the stiffness non-uniformity results in force transmission to the wheel spindle at twice the nominal angular velocity, and would therefore excite a given resonance at one-half the wheel velocity at which a mass unbalance would. The forcing produced by the stiffness non-uniformity is parametric in nature, creating the possibility of parametric resonance. The basic theory of the parametric resonance is explained, and a parameter study using lumped parameters derived from a typical passenger car tire is performed. This study revealed that parametric resonance in passenger car tires, although possible, is unlikely at normal highway speeds as predicted by this model unless the tire is partially deflated.
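The frequency-doubling argument can be checked numerically: a rotating stiffness spot whose radial compression and vertical projection each vary as cos(Ωt) transmits a force proportional to cos²(Ωt) = ½ + ½cos(2Ωt), so its oscillatory content sits at 2Ω. A stdlib sketch with assumed parameter values:

```python
import math

def fourier_coeff(samples, dt, freq):
    # Magnitude of the Fourier component at angular frequency `freq`,
    # by direct correlation (stdlib only, no FFT needed).
    n = len(samples)
    c = sum(s * math.cos(freq * i * dt) for i, s in enumerate(samples))
    s_ = sum(s * math.sin(freq * i * dt) for i, s in enumerate(samples))
    return 2.0 / n * math.hypot(c, s_)

OMEGA = 10.0          # nominal wheel angular velocity (assumed value)
DK, D0 = 1.0, 1.0     # stiffness non-uniformity and static deflection (illustrative)
DT = 1e-3
N = 20_000            # many full wheel rotations

# Vertical force from the rotating stiffness spot: the spot's radial
# compression varies as cos(OMEGA t), and projecting that radial force onto
# the vertical adds another cos(OMEGA t), giving f(t) ~ DK*D0*cos^2(OMEGA t).
force = [DK * D0 * math.cos(OMEGA * i * DT) ** 2 for i in range(N)]

a1 = fourier_coeff(force, DT, OMEGA)        # component at wheel speed: ~0
a2 = fourier_coeff(force, DT, 2 * OMEGA)    # component at twice wheel speed: ~0.5
print(f"|F| at Omega: {a1:.3f}, at 2*Omega: {a2:.3f}")
```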
Kinematic sensitivity of robot manipulators
NASA Technical Reports Server (NTRS)
Vuskovic, Marko I.
1989-01-01
Kinematic sensitivity vectors and matrices for open-loop, n degrees-of-freedom manipulators are derived. First-order sensitivity vectors are defined as partial derivatives of the manipulator's position and orientation with respect to its geometrical parameters. The four-parameter kinematic model is considered, as well as the five-parameter model in case of nominally parallel joint axes. Sensitivity vectors are expressed in terms of coordinate axes of manipulator frames. Second-order sensitivity vectors, the partial derivatives of first-order sensitivity vectors, are also considered. It is shown that second-order sensitivity vectors can be expressed as vector products of the first-order sensitivity vectors.
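The first-order sensitivity vectors can be illustrated numerically on a toy two-link planar arm (an illustrative stand-in for the paper's n-DOF four- and five-parameter models): the partial derivative of tip position with respect to a geometric parameter, estimated by central differences, matches the analytic result.

```python
import math

def forward_kinematics(theta1, theta2, l1, l2):
    # Tip position of a two-link planar arm with joint angles theta1, theta2
    # and link lengths l1, l2 (a minimal geometric parameterization).
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def sensitivity_to_l1(theta1, theta2, l1, l2, eps=1e-6):
    # First-order kinematic sensitivity: partial derivative of the tip
    # position with respect to the geometric parameter l1 (central difference).
    xp, yp = forward_kinematics(theta1, theta2, l1 + eps, l2)
    xm, ym = forward_kinematics(theta1, theta2, l1 - eps, l2)
    return (xp - xm) / (2 * eps), (yp - ym) / (2 * eps)

t1, t2 = 0.3, 0.7
sx, sy = sensitivity_to_l1(t1, t2, 1.0, 0.8)
# Analytically d(tip)/d(l1) = (cos(theta1), sin(theta1)), independent of l2.
print(f"numeric ({sx:.4f}, {sy:.4f}) vs analytic "
      f"({math.cos(t1):.4f}, {math.sin(t1):.4f})")
```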
Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-01-01
Purpose: To propose a method to correct Optical Coherence Tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution, and to explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Methods: 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared to the performance of two other existing methods (simple division; refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley, and lens thickness shifts from the nominal data. Results: Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6 μm and 13 μm respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. Conclusions: The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique.
The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations. PMID:22466105
Distortion correction of OCT images of the crystalline lens: gradient index approach.
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-05-01
To propose a method to correct optical coherence tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution and explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Two-dimensional images of nine human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared with the performance of two other existing methods (simple division; refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley, and lens thickness shifts from the nominal data. Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface in terms of root mean square and peak values, with errors <6 and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in two dimensions, it is expected that three-dimensional imaging will fully exploit the potential of the technique.
The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations.
Computation of acoustic pressure fields produced in feline brain by high-intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Omidi, Nazanin
In 1975, Dunn et al. (JASA 58:512-514) showed that a simple relation describes the ultrasonic threshold for cavitation-induced changes in the mammalian brain. The thresholds for tissue damage were estimated for a variety of acoustic parameters in exposed feline brain. The goal of this study was to improve the estimates for acoustic pressures and intensities present in vivo during those experimental exposures by estimating them using nonlinear rather than linear theory. In our current project, the acoustic pressure waveforms produced in the brains of anesthetized felines were numerically simulated for a spherically focused, nominally f/1 transducer (focal length = 13 cm) at increasing values of the source pressure at frequencies of 1, 3, and 9 MHz. The corresponding focal intensities were correlated with the experimental data of Dunn et al. The focal pressure waveforms were also computed at the location of the true maximum. For low source pressures, the computed waveforms were the same as those determined using linear theory, and the focal intensities matched experimentally determined values. For higher source pressures, the focal pressure waveforms became increasingly distorted, with the compressional amplitude of the wave becoming greater, and the rarefactional amplitude becoming lower, than the values calculated using linear theory. The implications of these results for clinical exposures are discussed.
Spacecraft orbit/earth scan derivations, associated APL program, and application to IMP-6
NASA Technical Reports Server (NTRS)
Smith, G. A.
1971-01-01
The derivation of a time-shared, remote-site, demand-processed computer program is discussed. The computer program analyzes the effects of selected orbit, attitude, and spacecraft parameters on earth sensor detections of earth. For prelaunch analysis, the program may be used to simulate the effects of the nominal parameters used in preparing attitude data processing programs. After launch, comparison of results from the simulation and from satellite data will produce deviations helpful in isolating problems.
Analyzing Radio-Frequency Coverage for the ISS
NASA Technical Reports Server (NTRS)
Bolen, Steven M.; Sham, Catherine C.
2007-01-01
The Interactive Coverage Analysis Tool (iCAT) is an interactive desktop computer program serving to (1) support planning of coverage, and management of usage of frequencies, of current and proposed radio communication systems on and near the International Space Station (ISS) and (2) enable definition of requirements for development of future such systems. The iCAT can also be used in design trade studies for other (both outer-space and terrestrial) communication systems. A user can enter the parameters of a communication-system link budget in a table in a worksheet. The nominal (on-axis) link values for the bit-energy-to-noise-density ratio, received isotropic power (RIP), carrier-to-noise ratio (C/N), power flux density (PFD), and link margin of the system are calculated and displayed in the table. Plots of field gradients for the RIP, C/N, PFD, and link margin are constructed in an ISS coordinate system, at a specified link range, for both the forward and return link parameters, and are displayed in worksheets. The forward and return link antenna gain patterns are also constructed and displayed. Line-of-sight (LOS) obstructions can be both incorporated into the gradient plots and displayed on separate plots.
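The on-axis link-budget quantities named above can be sketched with standard free-space formulas: received isotropic power as EIRP minus free-space path loss, and power flux density as EIRP spread over a sphere. The numeric inputs below are assumed for illustration and are not taken from the iCAT tool.

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    # FSPL = 20 log10(4 * pi * d * f / c)
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def link_budget(eirp_dbw, distance_m, freq_hz, rx_gain_dbi,
                noise_floor_dbw, required_cn_db):
    # Received isotropic power (RIP): EIRP minus free-space path loss.
    rip_dbw = eirp_dbw - free_space_path_loss_db(distance_m, freq_hz)
    # Carrier-to-noise ratio and the resulting link margin.
    cn_db = rip_dbw + rx_gain_dbi - noise_floor_dbw
    margin_db = cn_db - required_cn_db
    # Power flux density at the receiver (dBW/m^2): EIRP over a sphere.
    pfd_dbw_m2 = eirp_dbw - 10.0 * math.log10(4.0 * math.pi * distance_m ** 2)
    return rip_dbw, cn_db, pfd_dbw_m2, margin_db

# Illustrative S-band numbers (assumed values, chosen only for the demo).
rip, cn, pfd, margin = link_budget(
    eirp_dbw=10.0, distance_m=1000.0, freq_hz=2.1e9,
    rx_gain_dbi=3.0, noise_floor_dbw=-140.0, required_cn_db=10.0)
print(f"RIP {rip:.1f} dBW, C/N {cn:.1f} dB, "
      f"PFD {pfd:.1f} dBW/m^2, margin {margin:.1f} dB")
```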
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next-generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem-formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for computing the equivalent sensitivity information.
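For a scalar plant, the idea of analytical versus finite-difference sensitivity of an optimal control law can be made concrete: the scalar algebraic Riccati equation has a closed-form solution, so the gain sensitivity dK/da can be differentiated directly and checked against finite differences. This is a minimal LQR sketch of the comparison, not the paper's full LQG formulation.

```python
import math

def lqr_gain(a, b, q, r):
    # Scalar LQR for xdot = a*x + b*u with cost integral of q*x^2 + r*u^2:
    # solve 2*a*P - (b^2/r)*P^2 + q = 0 for the stabilizing root P,
    # then K = b*P/r (closed loop xdot = (a - b*K)*x is stable).
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

def gain_sensitivity_fd(a, b, q, r, eps=1e-6):
    # Finite-difference sensitivity of the optimal gain to the plant
    # parameter a (the baseline the analytical method is compared against).
    return (lqr_gain(a + eps, b, q, r) - lqr_gain(a - eps, b, q, r)) / (2 * eps)

def gain_sensitivity_analytic(a, b, q, r):
    # Analytical sensitivity dK/da from differentiating the closed-form
    # Riccati solution: dK/da = (1 + a / sqrt(a^2 + q*b^2/r)) / b.
    return (1.0 + a / math.sqrt(a * a + q * b * b / r)) / b

a, b, q, r = 1.0, 1.0, 2.0, 1.0
s_an = gain_sensitivity_analytic(a, b, q, r)
s_fd = gain_sensitivity_fd(a, b, q, r)
print(f"dK/da: analytic {s_an:.6f}, finite difference {s_fd:.6f}")
```

The analytical expression is exact and costs one evaluation, while finite differences need two Riccati solves per parameter; this is the efficiency argument in miniature.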
Design and analysis considerations for deployment mechanisms in a space environment
NASA Technical Reports Server (NTRS)
Vorlicek, P. L.; Gore, J. V.; Plescia, C. T.
1982-01-01
On the second flight of the INTELSAT V spacecraft the time required for successful deployment of the north solar array was longer than originally predicted. The south solar array deployed as predicted. As a result of the difference in deployment times, a series of experiments was conducted to locate its cause. Deployment rate sensitivity to hinge friction and temperature levels was investigated. A digital computer simulation of the deployment was created to evaluate the effects of parameter changes on deployment. Hinge design was optimized for nominal solar array deployment time for future INTELSAT V satellites. The nominal deployment times of both solar arrays on the third flight of INTELSAT V confirm the validity of the simulation and design optimization.
Experimental study of modification mechanism at a wear-resistant surfacing
NASA Astrophysics Data System (ADS)
Dema, R. R.; Amirov, R. N.; Kalugina, O. B.
2018-01-01
In this study, the crystallization process during deposition of near-eutectic alloys in the presence of inoculants was simulated in order to reveal how the inoculants and the surfacing process parameters affect the structure of the crystallization front, the nucleation rate, and the growth kinetics of the equiaxed primary-phase crystallites forming in the melt volume. A technique for simulating the primary crystallization of near-eutectic alloys in the presence of modifiers is proposed. The possibility of obtaining a fully eutectic structure when surfacing nominally hypereutectic white-cast-iron-type alloys over a wide range of deviations from the nominal composition is demonstrated.
Orbit Stability of OSIRIS-REx in the Vicinity of Bennu Using a High-Fidelity Solar Radiation Model
NASA Technical Reports Server (NTRS)
Williams, Trevor W.; Hughes, Kyle M.; Mashiku, Alinda K.; Longuski, James M.
2015-01-01
Solar radiation pressure is one of the largest perturbing forces on the OSIRIS-REx trajectory as it orbits the asteroid Bennu. In this work, we investigate how forces due to solar radiation perturb the OSIRIS-REx trajectory in a high-fidelity model. The model accounts for Bennu's non-spherical gravity field, third-body gravity forces from the Sun and Jupiter, as well as solar radiation forces acting on a simplified spacecraft model. Such high-fidelity simulations indicate significant solar radiation pressure perturbations from the nominal orbit. Modifications to the initial design of the nominal orbit that reduce the perturbation in eccentricity by a factor of one-half are found using a variation-of-parameters approach.
NASA Astrophysics Data System (ADS)
Urban, Rolf-Dieter; Jones, Harold
1991-03-01
The infrared spectrum of the manganese deuteride radical has been observed in its ground electronic state (⁷Σ) using a diode-laser spectrometer. The hyperfine structure of a number of infrared transitions in the bands ν = 1←0, ν = 2←1 and ν = 3←2 was measured with a nominal accuracy of ±0.001 cm⁻¹. In all cases, the complete structure was easily resolved. Dunham parameters, spin-rotation and spin-spin coupling parameters were determined from the MnD data. A simultaneous fit of these data with those determined previously for MnH was carried out to determine mass-independent parameters and mass-scaling coefficients.
Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity
NASA Technical Reports Server (NTRS)
Lin, J. Y.; Mingori, D. L.
1992-01-01
We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.
NASA Astrophysics Data System (ADS)
da Silva, Marcus Fernandes; de Area Leão Pereira, Éder Johnson; da Silva Filho, Aloisio Machado; de Castro, Arleys Pereira Nunes; Miranda, José Garcia Vivas; Zebende, Gilney Figueira
2016-07-01
In this paper we quantify the cross-correlation between the adjusted closing indices of the G7 countries, ranked by their nominal Gross Domestic Product. For this purpose we consider the 2008 financial crisis, and we observe its impact by applying the DCCA cross-correlation coefficient ρDCCA to these countries' indices. As an immediate result we observe that there is a positive cross-correlation between the indices, and that this coefficient varies over time between weak, medium, and strong values. If we compare the pre-crisis period (before 2008) with the post-crisis period (after 2008), it is noticed that ρDCCA changes its value. From these facts, we propose to study the contagion (interdependence) effect of this change with a new variable, ΔρDCCA. Thus, we present new findings on the 2008 crisis among the members of the G7.
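A minimal stdlib sketch of the DCCA cross-correlation coefficient used above: ρDCCA is the detrended covariance of the two integrated series, normalized by their individual DFA fluctuation functions. It is run here on synthetic correlated random walks rather than market data, and the single fixed box size is a simplifying assumption.

```python
import math
import random

def _linfit_residuals(y):
    # Residuals of an ordinary least-squares line fit to y against 0..n-1.
    n = len(y)
    mx, my = (n - 1) / 2.0, sum(y) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (yi - my) for x, yi in enumerate(y))
    b = sxy / sxx
    a = my - b * mx
    return [yi - (a + b * x) for x, yi in enumerate(y)]

def dcca_coefficient(series1, series2, box):
    # rho_DCCA = F^2_xy / (F_x * F_y): detrended covariance of the two
    # integrated (profile) series over non-overlapping boxes, normalized
    # by the detrended variances of each series.
    m1 = sum(series1) / len(series1)
    m2 = sum(series2) / len(series2)
    prof1, prof2, s1, s2 = [], [], 0.0, 0.0
    for u, v in zip(series1, series2):
        s1 += u - m1
        s2 += v - m2
        prof1.append(s1)
        prof2.append(s2)
    f_xy = f_xx = f_yy = 0.0
    for start in range(0, len(prof1) - box + 1, box):
        r1 = _linfit_residuals(prof1[start:start + box])
        r2 = _linfit_residuals(prof2[start:start + box])
        f_xy += sum(a * b for a, b in zip(r1, r2))
        f_xx += sum(a * a for a in r1)
        f_yy += sum(b * b for b in r2)
    return f_xy / math.sqrt(f_xx * f_yy)

# Two series of increments sharing a common driver (illustrative data only);
# their underlying increment correlation is 1/1.25 = 0.8.
rng = random.Random(42)
common = [rng.gauss(0, 1) for _ in range(2000)]
x = [c + 0.5 * rng.gauss(0, 1) for c in common]
y = [c + 0.5 * rng.gauss(0, 1) for c in common]
rho = dcca_coefficient(x, y, box=20)
print(f"rho_DCCA ~ {rho:.2f}")
```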
A comparative robustness evaluation of feedforward neurofilters
NASA Technical Reports Server (NTRS)
Troudet, Terry; Merrill, Walter
1993-01-01
A comparative performance and robustness analysis is provided for feedforward neurofilters trained with back propagation to filter additive white noise. The signals used in this analysis are simulated pitch rate responses to typical pilot command inputs for a modern fighter aircraft model. Various configurations of nonlinear and linear neurofilters are trained to estimate exact signal values from input sequences of noisy sampled signal values. In this application, nonlinear neurofiltering is found to be more efficient than linear neurofiltering in removing the noise from responses of the nominal vehicle model, whereas linear neurofiltering is found to be more robust in the presence of changes in the vehicle dynamics. The possibility of enhancing neurofiltering through hybrid architectures based on linear and nonlinear neuroprocessing is therefore suggested as a way of taking advantage of the robustness of linear neurofiltering, while maintaining the nominal performance advantage of nonlinear neurofiltering.
Using Dispersed Modes During Model Correlation
NASA Technical Reports Server (NTRS)
Stewart, Eric C.; Hathcock, Megan L.
2017-01-01
The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and many computational resources. If the analyst only has weeks, or even days, of time in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease the overall cost of the process. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using a commercial model correlation software. In the three examples shown in this paper, this dispersion based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.
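One way to score candidate dispersions against test data, sketched below, combines relative frequency error with the Modal Assurance Criterion (MAC) for mode shapes. The equal weighting and the toy modes are assumptions for illustration, not the paper's actual error metric.

```python
def mac(phi1, phi2):
    # Modal Assurance Criterion: 1.0 for identical (or proportional) mode
    # shapes, near 0 for orthogonal ones.
    dot = sum(a * b for a, b in zip(phi1, phi2))
    n1 = sum(a * a for a in phi1)
    n2 = sum(b * b for b in phi2)
    return dot * dot / (n1 * n2)

def dispersion_error(disp_freqs, disp_shapes, test_freqs, test_shapes,
                     freq_weight=1.0, shape_weight=1.0):
    # Combined error metric over paired modes: relative frequency error
    # plus (1 - MAC). The weights are an assumed design choice.
    err = 0.0
    for fd, sd, ft, st in zip(disp_freqs, disp_shapes, test_freqs, test_shapes):
        err += freq_weight * abs(fd - ft) / ft
        err += shape_weight * (1.0 - mac(sd, st))
    return err

# Toy "test" modes and two candidate dispersions of a nominal model
# (hypothetical numbers chosen so disp_A is clearly the better match).
test_f = [4.9, 12.2]
test_phi = [[1.0, 0.8, 0.3], [1.0, -0.2, -0.9]]

dispersions = {
    "disp_A": ([5.0, 12.0], [[1.0, 0.79, 0.31], [1.0, -0.18, -0.92]]),
    "disp_B": ([6.5, 11.0], [[1.0, 0.5, 0.6], [1.0, -0.6, -0.4]]),
}
best = min(dispersions,
           key=lambda k: dispersion_error(*dispersions[k], test_f, test_phi))
print(f"best-matching dispersion: {best}")
```

In the full process this scoring would run over thousands of dispersions, with the winner handed to commercial correlation software for final refinement.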
Polarized Power Spectra from HERA-19 Commissioning Data: Instrument Stability
NASA Astrophysics Data System (ADS)
Fox Fortino, Austin; Chichura, Paul; Igarashi, Amy; Kohn, Saul; Aguirre, James; HERA Collaboration
2018-01-01
The Epoch of Reionization (EoR) is a key period in the universe's history, containing the formation of the first galaxies and large-scale structures. Foreground emission is the limiting factor in detecting the 21 cm emission from the EoR. The HERA-19 low-frequency radio interferometer aims to reduce contamination from the foreground emission with its dish-shaped antennas. We generate polarized 2D (cylindrically averaged) power spectra from seven days of observation from the HERA-19 2016 observing season in each of the four Stokes parameters I, Q, U, and V. These power spectra serve as a potent diagnostic tool that allows us to assess instrument stability by comparison between nominally redundant baselines, and between observations of nominally the same astrophysical sky on successive days. The power spectra are expected to vary among nominally redundant measurements due to ionospheric fluctuations and thermal changes in the electronics and instrument beam patterns, as well as other factors. In this work we investigate the stability over time of these polarized power spectra, and use them to quantify the variation due to these effects.
Re-Assembling Formal Features in Second Language Acquisition: Beyond Minimalism
ERIC Educational Resources Information Center
Carroll, Susanne E.
2009-01-01
In this commentary, Lardiere's discussion of features is compared with the use of features in constraint-based theories, and it is argued that constraint-based theories might offer a more elegant account of second language acquisition (SLA). Further evidence is reported to question the accuracy of Chierchia's (1998) Nominal Mapping Parameter.…
The "Ass" Camouflage Construction: Masks as Parasitic Heads
ERIC Educational Resources Information Center
Levine, Robert D.
2010-01-01
Collins et al. 2008 offers a principles-and-parameters-based analysis of an AAVE construction first described in Spears 1998, in which nominal phrases such as "John's ass" appear to have exactly the same denotation, and behavior with respect to familiar conditions on anaphora, as the possessor "John" (and similarly for pronominal possessors).…
An Architecture for Online Affordance-based Perception and Whole-body Planning
2014-03-16
…polyhedron defined relative to the pose of the prior step. This reachable polyhedron was computed offline, parameterized by min. width, nominal width, max. width, forward…; the next position of the left foot must be contained within the region defined by these parameters (min width, max width, etc.). In practice, this polyhedron is represented in four…
Hydrometer in the mantle: dln(Vs)/dln(Vp)
NASA Astrophysics Data System (ADS)
Li, L.; Weidner, D. J.
2010-12-01
The absorption of water into nominally non-hydrous phases is the probable storage mechanism of hydrogen throughout most of the mantle. Thus the water capacity of the mantle is greatest in the transition zone, owing to the large water solubility of ringwoodite and wadsleyite. However, the actual amount of water that is stored there is highly uncertain. Since water is probably brought down by subduction activity, its abundance is probably laterally variable. Thus, a metric that is sensitive to variations in water content is a good candidate for a hydrometer. Here we evaluate the parameter dln(Vs)/dln(Vp) as such a metric. It is useful for detecting lateral variations of water if the effects of hydration on the parameter differ from those of temperature or composition. We compare the value of dln(Vs)/dln(Vp) due to temperature with that due to water content as a function of depth for the upper mantle. We have calculated dln(Vs)/dln(Vp) due to both water and temperature using a density functional theory approach and available experimental data. Our results indicate that dln(Vs)/dln(Vp) due to water is distinguishable from dln(Vs)/dln(Vp) due to temperature or variations in iron content, particularly in ringwoodite. The difference increases with depth, making the lower part of the transition zone most identifiable as a water reservoir.
The Classification of Universes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bjorken, J
2004-04-09
We define a universe as the contents of a spacetime box with comoving walls, large enough to contain essentially all phenomena that can be conceivably measured. The initial time is taken as the epoch when the lowest CMB modes undergo horizon crossing, and the final time is taken when the wavelengths of CMB photons are comparable with the Hubble scale, i.e. with the nominal size of the universe. This allows the definition of a local ensemble of similarly constructed universes, using only modest extrapolations of the observed behavior of the cosmos. We then assume that further out in spacetime, similar universes can be constructed but containing different standard model parameters. Within this multiverse ensemble, it is assumed that the standard model parameters are strongly correlated with size, i.e. with the value of the inverse Hubble parameter at the final time, in a manner as previously suggested. This allows an estimate of the range of sizes which allow life as we know it, and invites a speculation regarding the most natural distribution of sizes. If small sizes are favored, this in turn allows some understanding of the hierarchy problems of particle physics. Subsequent sections of the paper explore other possible implications. In all cases, the approach is as bottom-up and as phenomenological as possible, and suggests that theories of the multiverse so constructed may in fact lay some claim to being scientific.
A simulation study of turbofan engine deterioration estimation using Kalman filtering techniques
NASA Technical Reports Server (NTRS)
Lambert, Heather H.
1991-01-01
Deterioration of engine components may cause off-normal engine operation. The result is an unnecessary loss of performance, because the fixed control schedules are designed to accommodate a wide range of engine health. These fixed control schedules may not be optimal for a deteriorated engine. This problem may be solved by including a measure of deterioration in determining the control variables. These engine deterioration parameters usually cannot be measured directly but can be estimated. A Kalman filter design is presented for estimating two performance parameters that account for engine deterioration: high and low pressure turbine delta efficiencies. The delta efficiency parameters model variations of the high and low pressure turbine efficiencies from nominal values. The filter has a design condition of Mach 0.90, 30,000 ft altitude, and 47 deg power lever angle (PLA). It was evaluated using a nonlinear simulation of the F100 engine model derivative (EMD) engine, at the design Mach number and altitude, over a PLA range of 43 to 55 deg. It was found that known high pressure turbine delta efficiencies of -2.5 percent and low pressure turbine delta efficiencies of -1.0 percent can be estimated with an accuracy of ±0.25 percent efficiency with a Kalman filter. If both the high and low pressure turbines are deteriorated, delta efficiencies of -2.5 percent applied to both turbines can be estimated with the same accuracy.
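A minimal sketch of the estimation step, assuming identity dynamics for the slowly varying deterioration states and a hypothetical linear sensitivity matrix H mapping delta efficiencies to sensor residuals (the actual filter is built from an F100 EMD linearization not reproduced here):

```python
import numpy as np

# Hypothetical sensitivity of 3 measured engine outputs to the two
# delta-efficiency states [dEff_HPT, dEff_LPT] (percent).
H = np.array([[1.2, 0.3],
              [0.4, 1.1],
              [0.8, 0.7]])
R = 0.01 * np.eye(3)        # measurement noise covariance (std 0.1 per sensor)
Q = 1e-6 * np.eye(2)        # slow random-walk process noise on deterioration
x = np.zeros(2)             # state estimate, starts at nominal (no deterioration)
P = np.eye(2)               # state covariance

true_x = np.array([-2.5, -1.0])   # "true" deteriorated delta efficiencies, percent
rng = np.random.default_rng(1)
for _ in range(500):
    z = H @ true_x + 0.1 * rng.standard_normal(3)   # noisy sensor residuals
    P = P + Q                                       # predict (identity dynamics)
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)                         # update estimate
    P = (np.eye(2) - K @ H) @ P                     # update covariance
```

With these illustrative settings the estimate converges to the true delta efficiencies well within the ±0.25 percent accuracy quoted in the abstract.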
Ramirez, Jorge L.; Birindelli, Jose L.; Carvalho, Daniel C.; Affonso, Paulo R. A. M.; Venere, Paulo C.; Ortega, Hernán; Carrillo-Avila, Mauricio; Rodríguez-Pulido, José A.; Galetti, Pedro M.
2017-01-01
Molecular studies have improved our knowledge on the neotropical ichthyofauna. DNA barcoding has successfully been used in fish species identification and in detecting cryptic diversity. Megaleporinus (Anostomidae) is a recently described freshwater fish genus within which taxonomic uncertainties remain. Here we assessed all nominal species of this genus using a DNA barcode approach (Cytochrome Oxidase subunit I) with a broad sampling to generate a reference library, characterize new molecular lineages, and test the hypothesis that some of the nominal species represent species complexes. The analyses identified 16 (ABGD and BIN) to 18 (ABGD, GMYC, and PTP) different molecular operational taxonomic units (MOTUs) within the 10 studied nominal species, indicating cryptic biodiversity and potential candidate species. Only Megaleporinus brinco, Megaleporinus garmani, and Megaleporinus elongatus showed correspondence between nominal species and MOTUs. Within six nominal species, a subdivision in two MOTUs was found, while Megaleporinus obtusidens was divided in three MOTUs, suggesting that DNA barcode is a very useful approach to identify the molecular lineages of Megaleporinus, even in the case of recent divergence (< 0.5 Ma). Our results thus provided molecular findings that can be used along with morphological traits to better define each species, including candidate new species. This is the most complete analysis of DNA barcode in this recently described genus, and considering its economic value, a precise species identification is quite desirable and fundamental for conservation of the whole biodiversity of this fish. PMID:29075287
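The delimitation methods cited (ABGD, GMYC, PTP, BIN) are full tools in their own right; purely as a toy illustration, a barcode-gap split can be mimicked by single-linkage clustering of pairwise p-distances under an assumed threshold (sequences and threshold below are invented):

```python
def p_distance(a, b):
    """Proportion of differing sites between two equal-length aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def count_motus(seqs, threshold=0.03):
    """Single-linkage clustering: sequences closer than the threshold share a MOTU."""
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if p_distance(seqs[i], seqs[j]) < threshold:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(seqs))})

# Toy aligned fragments: two tight clusters separated by a clear barcode gap.
base = "ACGT" * 10
seqs = [
    base,                          # lineage 1
    base[:-1] + "A",               # lineage 1, one substitution (p = 0.025)
    "TTTTTT" + base[6:],           # lineage 2, several substitutions from base
    "TTTTTT" + base[6:-1] + "A",   # lineage 2, one substitution within the cluster
]
n_motus = count_motus(seqs)        # two MOTUs despite four sequences
```

Real barcode-gap methods infer the threshold from the distance distribution rather than fixing it a priori; this sketch only shows why intraspecific versus interspecific distance separation yields discrete MOTUs.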
A dynamic system matching technique for improving the accuracy of MEMS gyroscopes
NASA Astrophysics Data System (ADS)
Stubberud, Peter A.; Stubberud, Stephen C.; Stubberud, Allen R.
2014-12-01
A classical MEMS gyro transforms angular rates into electrical values through Euler's equations of angular rotation. Production models of a MEMS gyroscope will have manufacturing errors in the coefficients of the differential equations. The output signal of a production gyroscope will be corrupted by noise, with a major component of the noise due to the manufacturing errors. As is the case of the components in an analog electronic circuit, one way of controlling the variability of a subsystem is to impose extremely tight control on the manufacturing process so that the coefficient values are within some specified bounds. This can be expensive and may even be impossible as is the case in certain applications of micro-electromechanical (MEMS) sensors. In a recent paper [2], the authors introduced a method for combining the measurements from several nominally equal MEMS gyroscopes using a technique based on a concept from electronic circuit design called dynamic element matching [1]. Because the method in this paper deals with systems rather than elements, it is called a dynamic system matching technique (DSMT). The DSMT generates a single output by randomly switching the outputs of several, nominally identical, MEMS gyros in and out of the switch output. This has the effect of 'spreading the spectrum' of the noise caused by the coefficient errors generated in the manufacture of the individual gyros. A filter can then be used to eliminate that part of the spread spectrum that is outside the pass band of the gyro. A heuristic analysis in that paper argues that the DSMT can be used to control the effects of the random coefficient variations. In a follow-on paper [4], a simulation of a DSMT indicated that the heuristics were consistent. In this paper, analytic expressions of the DSMT noise are developed which confirm that the earlier conclusions are valid. 
These expressions include the various DSMT design parameters and, therefore, can be used as design tools for DSMT systems.
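A toy simulation of the switching idea, with assumed scale-factor errors standing in for the manufacturing coefficient errors, illustrates how random selection spreads each gyro's fixed error into wideband noise that a low-pass filter can average away:

```python
import numpy as np

rng = np.random.default_rng(2)
n_gyros, n_samples = 8, 20000
true_rate = 1.0                                          # constant angular-rate input
scale_err = 1.0 + 0.02 * rng.standard_normal(n_gyros)    # per-gyro manufacturing errors
outputs = np.outer(scale_err, np.full(n_samples, true_rate))

# DSMT switch: at each sample, randomly select one gyro's output.
sel = rng.integers(0, n_gyros, n_samples)
switched = outputs[sel, np.arange(n_samples)]

# The random switching turns each gyro's fixed error into wideband noise;
# a simple moving-average low-pass filter then recovers the ensemble mean.
window = 500
filtered = np.convolve(switched, np.ones(window) / window, mode="valid")
```

The filtered output converges to the ensemble-mean scale factor rather than any single gyro's biased value, which is the spectrum-spreading effect the heuristic and analytic arguments in the paper address.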
Validation of the kinetic-turbulent-neoclassical theory for edge intrinsic rotation in DIII-D
NASA Astrophysics Data System (ADS)
Ashourvan, Arash; Grierson, B. A.; Battaglia, D. J.; Haskey, S. R.; Stoltzfus-Dueck, T.
2018-05-01
In a recent kinetic model of edge main-ion (deuterium) toroidal velocity, intrinsic rotation results from neoclassical orbits in an inhomogeneous turbulent field [T. Stoltzfus-Dueck, Phys. Rev. Lett. 108, 065002 (2012)]. This model predicts a toroidal velocity that is co-current for a typical inboard X-point plasma at the core-edge boundary (ρ ˜ 0.9). Using this model, the velocity prediction is tested on the DIII-D tokamak for a database of L-mode and H-mode plasmas with nominally low neutral beam torque, including both signs of plasma current. Values for the flux-surface-averaged main-ion rotation velocity in the database are obtained from the impurity carbon rotation by analytically calculating the main-ion to impurity neoclassical offset. The deuterium rotation obtained in this manner has been validated by direct main-ion measurements for a limited number of cases. Key theoretical parameters of ion temperature and turbulent scale length are varied across a wide range in an experimental database of discharges. Using a characteristic electron temperature scale length as a proxy for the turbulent scale length, the predicted main-ion rotation velocity shows general agreement with the experimental measurements for neutral beam injection (NBI) powers in the range P_NBI < 4 MW. At higher NBI power, the experimental rotation is observed to saturate and even degrade compared to theory. TRANSP-NUBEAM simulations performed for the database show that for discharges with nominally balanced, but high-powered, NBI, the net injected torque through the edge can exceed 1 N m in the counter-current direction. The theory model has been extended to compute the rotation degradation from this counter-current NBI torque by solving a reduced momentum evolution equation for the edge; the revised velocity prediction is found to be in agreement with experiment. Using the theory-modeled, and now tested, velocity to predict the bulk plasma rotation opens up a path to more confidently projecting the confinement and stability in ITER.
Abdelgaied, Abdellatif; Brockett, Claire L; Liu, Feng; Jennings, Louise M; Fisher, John; Jin, Zhongmin
2013-01-01
Polyethylene wear is a great concern in total joint replacement. It is now considered a major limiting factor for the long life of such prostheses. Cross-linking has been introduced to reduce the wear of ultra-high-molecular-weight polyethylene (UHMWPE). Computational models have been used extensively for wear prediction and optimization of artificial knee designs. However, in order to be independent and have general applicability and predictability, computational wear models should be based on inputs from independently determined experimental wear parameters (wear factors or wear coefficients). The objective of this study was to investigate moderately cross-linked UHMWPE, using a multidirectional pin-on-plate wear test machine, under a wide range of applied nominal contact pressures (from 1 to 11 MPa) and under five different kinematic inputs, varying from a purely linear track to a maximum rotation of ±55 degrees. A computational model, based on a direct simulation of the multidirectional pin-on-plate wear tester, was developed to quantify the degree of cross-shear (CS) of the polyethylene pins articulating against the metallic plates. The moderately cross-linked UHMWPE showed wear factors less than half of those reported in the literature for conventional UHMWPE under the same loading and kinematic inputs. In addition, under high applied nominal contact stress, moderately cross-linked UHMWPE wear showed lower dependence on the degree of CS than under low applied nominal contact stress. The calculated wear coefficients were found to be independent of the applied nominal contact stress, in contrast to the wear factors, which were shown to be highly pressure dependent. This study provided independent wear data as inputs for computational models of moderately cross-linked polyethylene and supported the application of wear-coefficient-based computational wear models.
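The study's exact definitions are not reproduced in the abstract; using the classical Archard forms as an assumption, the distinction between a (pressure-dependent) wear factor and a dimensionless wear coefficient can be sketched with hypothetical pin-on-plate numbers:

```python
# Hypothetical pin-on-plate values; the definitions below follow the classical
# Archard forms and may differ in detail from those used in the study.
V = 2.4        # measured wear volume, mm^3
W = 160.0      # applied load, N
S = 100e3      # sliding distance, m
A = 25.0       # nominal pin contact area, mm^2
H = 60.0       # assumed polyethylene hardness, MPa (N/mm^2)

p_nominal = W / A            # applied nominal contact pressure, MPa
k = V / (W * S)              # wear factor, mm^3/(N m) -- pressure dependent per the study
K = (V * H) / (W * S * 1e3)  # dimensionless wear coefficient (S converted m -> mm)
```

Because the load enters the wear factor only through W while real contact conditions scale with pressure, normalizing by a hardness-like term (as in the wear coefficient) can remove the pressure dependence, which is the study's argument for wear-coefficient-based models.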
Thrust Stand Characterization of the NASA Evolutionary Xenon Thruster (NEXT)
NASA Technical Reports Server (NTRS)
Diamant, Kevin D.; Pollard, James E.; Crofton, Mark W.; Patterson, Michael J.; Soulas, George C.
2010-01-01
Direct thrust measurements have been made on the NASA Evolutionary Xenon Thruster (NEXT) ion engine using a standard pendulum style thrust stand constructed specifically for this application. Values have been obtained for the full 40-level throttle table, as well as for a few off-nominal operating conditions. Measurements differ from the nominal NASA throttle table 10 (TT10) values by 3.1 percent at most, while at 30 throttle levels (TLs) the difference is less than 2.0 percent. When measurements are compared to TT10 values that have been corrected using ion beam current density and charge state data obtained at The Aerospace Corporation, they differ by 1.2 percent at most, and by 1.0 percent or less at 37 TLs. Thrust correction factors calculated from direct thrust measurements and from The Aerospace Corporation's plume data agree to within measurement error for all but one TL. Thrust due to cold flow and "discharge only" operation has been measured, and analytical expressions are presented which accurately predict thrust based on thermal thrust generation mechanisms.
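For orientation, the ideal gridded-ion-thruster thrust and the role of a plume-derived correction factor can be sketched with representative full-power numbers (the beam current, beam voltage, and correction factor below are typical magnitudes, not values quoted in the paper):

```python
import math

I_b = 3.52                      # beam current, A (representative full-power value)
V_b = 1800.0                    # beam voltage, V (representative)
m_xe = 131.293 * 1.66054e-27    # xenon ion mass, kg
e = 1.602177e-19                # elementary charge, C

# Ideal thrust for a monoenergetic, perfectly collimated singly charged beam.
T_ideal = I_b * math.sqrt(2.0 * m_xe * V_b / e)   # newtons

# Plume data (beam divergence plus doubly charged ions) supplies a thrust
# correction factor gamma < 1; 0.96 is an assumed, typical magnitude.
gamma = 0.96
T_corrected = gamma * T_ideal
```

Comparing direct pendulum measurements against gamma times the ideal thrust is what allows the paper's cross-check between the thrust stand and the plume diagnostics.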
Low-light divergence in photovoltaic parameter fluctuations
NASA Astrophysics Data System (ADS)
Shvydka, Diana; Karpov, V. G.; Compaan, A. D.
2003-03-01
We study statistics of the major photovoltaic (PV) parameters, such as open-circuit voltage, short-circuit current, etc., versus light intensity on a set of nominally identical thin-film CdTe/CdS solar cells. A crossover light intensity is found, below which the relative fluctuations of the PV parameters diverge inversely proportional to the square root of the light intensity. We propose a model in which the observed fluctuations are due to lateral nonuniformities in the device structure. The crossover is attributed to the lateral nonuniformity screening length exceeding the device size. From the practical standpoint, our study introduces a simple uniformity diagnostic technique.
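As an illustration of the inverse-square-root scaling only (the paper attributes the divergence to lateral nonuniformities, not shot noise), a Poisson toy model reproduces the same dependence of relative fluctuations on light intensity:

```python
import numpy as np

rng = np.random.default_rng(3)

def rel_fluct(intensity, n_cells=50000, base_rate=100.0):
    """Relative fluctuation of a photocurrent modeled as a Poisson count
    proportional to light intensity, across nominally identical cells."""
    counts = rng.poisson(base_rate * intensity, size=n_cells)
    return counts.std() / counts.mean()

r1 = rel_fluct(1.0)       # low light
r100 = rel_fluct(100.0)   # 100x higher light
ratio = r1 / r100         # expected near sqrt(100) = 10
```

Any mechanism whose fluctuation variance is additive over independent contributions produces this 1/sqrt(intensity) signature, which is why the crossover intensity, rather than the scaling alone, is the diagnostic of the nonuniformity screening length in the paper.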
NASA Astrophysics Data System (ADS)
Sugita, Satoshi; Yamaoka, Kazutaka; Ohno, Masanori; Tashiro, Makoto S.; Nakagawa, Yujin E.; Urata, Yuji; Pal'Shin, Valentin; Golenetskii, Sergei; Sakamoto, Takanori; Cummings, Jay; Krimm, Hans; Stamatikos, Michael; Parsons, Ann; Barthelmy, Scott; Gehrels, Neil
2009-06-01
We present the results of the high-redshift GRB 050904 at z = 6.295 from a joint spectral analysis among Swift-BAT, Konus-Wind, and Suzaku-WAM, covering a wide energy range of 15-5000 keV. The νFν spectrum peak energy, Epeak, was measured at 314 (+173/-89) keV, corresponding to 2291 (+1263/-634) keV in the source frame, and the isotropic equivalent radiated energy, Eiso, was estimated to be 1.04 (+0.25/-0.17) × 10^54 erg. Both are among the highest values that have ever been measured. GRBs with such a high Eiso (~10^54 erg) might be associated with prompt optical emission. The derived spectral and energetic parameters are consistent with the correlation between the rest-frame Ep,i and Eiso (Amati relation), but not with the correlation between the intrinsic peak energy Ep,i and the collimation-corrected energy Eγ (Ghirlanda relation), unless the density of the circumburst environment of this burst is much larger than the nominal value, as suggested by observations at other wavelengths. We also discuss the possibility that this burst is an outlier in the correlation between Ep,i and the peak luminosity Lp (Yonetoku relation).
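The quoted source-frame peak energy follows directly from the observed value via the cosmological redshift correction Ep,i = Epeak × (1 + z):

```python
z = 6.295                 # redshift of GRB 050904
E_peak_obs = 314.0        # observed nu-F-nu peak energy, keV
E_peak_rest = E_peak_obs * (1.0 + z)   # source-frame (intrinsic) peak energy, keV
# Consistent with the quoted source-frame central value of 2291 keV.
```

The same (1 + z) factor enters the Amati, Ghirlanda, and Yonetoku relations, all of which are defined in terms of the intrinsic Ep,i.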
Frank, Kenneth A.; Muller, Chandra; Mueller, Anna S.
2014-01-01
Although research on social embeddedness and social capital confirms the value of friendship networks, little has been written about how social relations form and are structured by social institutions. Using data from the Adolescent Health and Academic Achievement study and the National Longitudinal Study of Adolescent Health, the authors show that the odds of a new friendship nomination were 1.77 times greater within clusters of high school students taking courses together than between them. The estimated effect cannot be attributed to exposure to peers in similar grade levels, indirect friendship links, or pair-level course overlap, and the finding is robust to alternative model specifications. The authors also show how tendencies associated with status hierarchy inhering in triadic friendship nominations are neutralized within the clusters. These results have implications for the production and distribution of social capital within social systems such as schools, giving the clusters social salience as "local positions." PMID:25364011
Skipped Stage Modeling and Testing of the CPAS Main Parachutes
NASA Technical Reports Server (NTRS)
Varela, Jose G.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) has undergone the transition from modeling a skipped stage event using a simulation that treats a cluster of parachutes as a single composite canopy to the capability of simulating each parachute individually. This capability, along with data obtained from skipped-stage flight tests, has been crucial in modeling the behavior of a skipping canopy as well as the crowding effect on non-skipping ("lagging") neighbors. For the finite mass inflation of CPAS Main parachutes, the cluster is assumed to inflate nominally through the nominal fill time, at which point the skipping parachute continues inflating. This sub-phase modeling method was used to reconstruct three flight tests involving skipped stages. Best-fit inflation parameters were determined for both the skipping and lagging canopies.
International Space Station Major Constituent Analyzer On-Orbit Performance
NASA Technical Reports Server (NTRS)
Gardner, Ben D.; Erwin, Phillip M.; Thoresen, Souzan; Granahan, John; Matty, Chris
2012-01-01
The Major Constituent Analyzer is a mass spectrometer based system that measures the major atmospheric constituents on the International Space Station. A number of limited-life components require periodic changeout, including the ORU 02 analyzer and the ORU 08 Verification Gas Assembly. Over the past two years, two ORU 02 analyzer assemblies have operated nominally while two others have experienced premature on-orbit failures. These failures, as well as the nominal performance of the other units, demonstrate that ORU 02 performance remains a key determinant of MCA performance and logistical support. Monitoring several key parameters can maximize the ability to track ORU health and properly anticipate end of life. Improvements to ion pump operation and ion source tuning are expected to improve lifetime performance of the current ORU 02 design.
Orion Burn Management, Nominal and Response to Failures
NASA Technical Reports Server (NTRS)
Odegard, Ryan; Goodman, John L.; Barrett, Charles P.; Pohlkamp, Kara; Robinson, Shane
2016-01-01
An approach for managing Orion on-orbit burn execution is described for nominal and failure response scenarios. The burn management strategy for Orion takes into account per-burn variations in targeting, timing, and execution; crew and ground operator intervention and overrides; defined burn failure triggers and responses; and corresponding on-board software sequencing functionality. Burn-to-burn variations are managed through the identification of specific parameters that may be updated for each progressive burn. Failure triggers and automatic responses during the burn timeframe are defined to provide safety for the crew in the case of vehicle failures, along with override capabilities to ensure operational control of the vehicle. On-board sequencing software provides the timeline coordination for performing the required activities related to targeting, burn execution, and responding to burn failures.
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
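A minimal sketch of the Lyapunov machinery underlying matrix root clustering, specialized to the simplest D-region (the open left half-plane); the paper's conditions generalize this to other algebraically defined regions and to uncertain parameter ranges:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A is Hurwitz (all eigenvalues in the open left half-plane) if and only if
# A^T P + P A = -Q has a symmetric positive-definite solution P for some
# (equivalently any) positive-definite Q.
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])     # illustrative nominal system matrix
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)        # solves A^T P + P A = -Q
is_stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
```

Root clustering for a general D-region replaces this single Lyapunov equation with a generalized one, and the robustness bounds in the paper quantify how far the uncertain parameters may move before P loses positive definiteness.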
A preliminary evaluation of an F100 engine parameter estimation process using flight data
NASA Technical Reports Server (NTRS)
Maine, Trindel A.; Gilyard, Glenn B.; Lambert, Heather H.
1990-01-01
The parameter estimation algorithm developed for the F100 engine is described. The algorithm is a two-step process. The first step consists of a Kalman filter estimation of five deterioration parameters, which model the off-nominal behavior of the engine during flight. The second step is based on a simplified steady-state model of the compact engine model (CEM). In this step, the control vector in the CEM is augmented by the deterioration parameters estimated in the first step. The results of an evaluation made using flight data from the F-15 aircraft are presented, indicating that the algorithm can provide reasonable estimates of engine variables for an advanced propulsion control law development.
Testing the non-unity of rate ratio under inverse sampling.
Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing
2007-08-01
Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio, and conditional score statistics. Three methods (the asymptotic, conditional exact, and mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuation of sizes around the pre-chosen nominal level is allowed, then the mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
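A hedged sketch of the Monte Carlo size evaluation for one such combination: an asymptotic Wald test on the log rate ratio under inverse (negative binomial) sampling, with illustrative parameter values and a delta-method variance rather than the exact formulation used in the article:

```python
import numpy as np

rng = np.random.default_rng(4)

def wald_empirical_size(p=0.1, r=30, n_sim=20000, alpha=0.05):
    """Empirical type I error of an asymptotic Wald test of rate ratio = 1
    when both groups use inverse sampling: observe until r cases occur."""
    # Total trials needed to reach r cases in each group (failures + r cases).
    n1 = rng.negative_binomial(r, p, n_sim) + r
    n2 = rng.negative_binomial(r, p, n_sim) + r
    p1, p2 = r / n1, r / n2
    # Delta-method variance of log(p-hat) under inverse sampling: (1 - p)/r.
    se = np.sqrt((1 - p1) / r + (1 - p2) / r)
    z = np.log(p1 / p2) / se
    return float(np.mean(np.abs(z) > 1.959964))   # two-sided, alpha = 0.05

size = wald_empirical_size()
```

Repeating this over grids of p and r, and swapping in score or likelihood ratio statistics and exact or mid-P P-values, reproduces the style of size/power comparison the article reports.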
Tardif, Jean-Claude; Ballantyne, Christie M; Barter, Philip; Dasseux, Jean-Louis; Fayad, Zahi A; Guertin, Marie-Claude; Kastelein, John J P; Keyserling, Constance; Klepp, Heather; Koenig, Wolfgang; L'Allier, Philippe L; Lespérance, Jacques; Lüscher, Thomas F; Paolini, John F; Tawakol, Ahmed; Waters, David D
2014-12-07
High-density lipoproteins (HDLs) have several potentially protective vascular effects. Most clinical studies of therapies targeting HDL have failed to show benefits vs. placebo. To investigate the effects of an HDL-mimetic agent on atherosclerosis by intravascular ultrasonography (IVUS) and quantitative coronary angiography (QCA). A prospective, double-blinded, randomized trial was conducted at 51 centres in the USA, the Netherlands, Canada, and France. Intravascular ultrasonography and QCA were performed to assess coronary atherosclerosis at baseline and 3 (2-5) weeks after the last study infusion. Five hundred and seven patients were randomized; 417 and 461 had paired IVUS and QCA measurements, respectively. Patients were randomized to receive 6 weekly infusions of placebo, 3 mg/kg, 6 mg/kg, or 12 mg/kg CER-001. The primary efficacy parameter was the nominal change in the total atheroma volume. Nominal changes in per cent atheroma volume on IVUS and coronary scores on QCA were also pre-specified endpoints. The nominal change in the total atheroma volume (adjusted means) was -2.71, -3.13, -1.50, and -3.05 mm(3) with placebo, CER-001 3 mg/kg, 6 mg/kg, and 12 mg/kg, respectively (primary analysis of 12 mg/kg vs. placebo: P = 0.81). There was also no difference among groups for the nominal change in per cent atheroma volume (0.02, -0.02, 0.01, and 0.19%; nominal P = 0.53 for 12 mg/kg vs. placebo). Change in the coronary artery score was -0.022, -0.036, -0.022, and -0.015 mm (nominal P = 0.25, 0.99, 0.55), and change in the cumulative coronary stenosis score was -0.51, 2.65, 0.71, and -0.77% (compared with placebo, nominal P = 0.85 for 12 mg/kg and nominal P = 0.01 for 3 mg/kg). The number of patients with major cardiovascular events was 10 (8.3%), 16 (13.3%), 17 (13.7%), and 12 (9.8%) in the four groups. CER-001 infusions did not reduce coronary atherosclerosis on IVUS and QCA when compared with placebo. 
Whether CER-001 administered in other regimens or to other populations could favourably affect atherosclerosis must await further study. Name of the trial registry: Clinicaltrials.gov; Registry's URL: http://clinicaltrials.gov/ct2/show/NCT01201837?term=cer-001&rank=2; NCT01201837. © The Author 2014. Published by Oxford University Press on behalf of the European Society of Cardiology.
Savageau, M A
1998-01-01
Induction of gene expression can be accomplished either by removing a restraining element (negative mode of control) or by providing a stimulatory element (positive mode of control). According to the demand theory of gene regulation, which was first presented in qualitative form in the 1970s, the negative mode will be selected for the control of a gene whose function is in low demand in the organism's natural environment, whereas the positive mode will be selected for the control of a gene whose function is in high demand. This theory has now been further developed in a quantitative form that reveals the importance of two key parameters: cycle time C, which is the average time for a gene to complete an ON/OFF cycle, and demand D, which is the fraction of the cycle time that the gene is ON. Here we estimate nominal values for the relevant mutation rates and growth rates and apply the quantitative demand theory to the lactose and maltose operons of Escherichia coli. The results define regions of the C vs. D plot within which selection for the wild-type regulatory mechanisms is realizable, and these in turn provide the first estimates for the minimum and maximum values of demand that are required for selection of the positive and negative modes of gene control found in these systems. The ratio of mutation rate to selection coefficient is the most relevant determinant of the realizable region for selection, and the most influential parameter is the selection coefficient that reflects the reduction in growth rate when there is superfluous expression of a gene. The quantitative theory predicts the rate and extent of selection for each mode of control. It also predicts three critical values for the cycle time. The predicted maximum value for the cycle time C is consistent with the lifetime of the host. The predicted minimum value for C is consistent with the time for transit through the intestinal tract without colonization. 
Finally, the theory predicts an optimum value of C that is in agreement with the observed frequency for E. coli colonizing the human intestinal tract. PMID:9691028
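The two key parameters of the quantitative demand theory follow directly from their definitions in the abstract; a minimal sketch (the ON/OFF durations below are illustrative, not the paper's estimates for E. coli):

```python
# Demand theory of gene regulation: the two key parameters are
#   C = cycle time (average time for one complete ON/OFF cycle)
#   D = demand (fraction of the cycle time that the gene is ON)

def cycle_parameters(t_on, t_off):
    """Return (C, D) from the average ON and OFF durations."""
    c = t_on + t_off
    d = t_on / c
    return c, d

# Illustrative values only: a gene ON for 1 day of a 10-day cycle.
C, D = cycle_parameters(t_on=1.0, t_off=9.0)
print(C, D)  # 10.0 0.1
```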
Alay, Eren; Zheng, James Q.; Chandra, Namas
2018-01-01
We exposed a headform instrumented with 10 pressure sensors mounted flush with the surface to a shock wave with three nominal intensities: 70, 140 and 210 kPa. The headform was mounted on a Hybrid III neck, in a rigid configuration to eliminate motion and associated pressure variations. We evaluated the effect of the test location by placing the headform inside, at the end and outside of the shock tube. The shock wave intensity gradually decreases the further it travels in the shock tube and the end effect degrades shock wave characteristics, which makes comparison of the results obtained at the three locations a difficult task. To resolve these issues, we developed a simple strategy of data reduction: the respective pressure parameters recorded by headform sensors were divided by their equivalents associated with the incident shock wave. As a result, we obtained a comprehensive set of non-dimensional parameters. These non-dimensional parameters (or amplification factors) allow for direct comparison of pressure waveform characteristic parameters generated by a range of incident shock waves differing in intensity and for the headform located in different locations. Using this approach, we found a correlation function which allows prediction of the peak pressure on the headform that depends only on the peak pressure of the incident shock wave (for a specific sensor location on the headform) and is independent of the headform location. We also found a similar relationship for the rise time. However, for the duration and impulse, comparable correlation functions do not exist. These findings using a headform with simplified geometry are baseline values and address a need for the development of standardized parameters for the evaluation of personal protective equipment (PPE) under shock wave loading. PMID:29894521
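The data-reduction strategy described above is a simple non-dimensionalization; a sketch with hypothetical sensor values (the real study covers peak pressure, rise time, duration and impulse across 10 sensors):

```python
def amplification_factors(headform, incident):
    """Divide each pressure-waveform parameter measured on the headform by
    the corresponding parameter of the incident shock wave, yielding
    non-dimensional amplification factors."""
    return {k: headform[k] / incident[k] for k in headform}

# Hypothetical numbers for one sensor (pressures in kPa, times in ms):
head = {"peak_pressure": 280.0, "rise_time": 0.05}
inc = {"peak_pressure": 140.0, "rise_time": 0.10}
factors = amplification_factors(head, inc)
print(factors)  # peak pressure doubled on the headform; rise time halved
```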
Experimental evaluation of heat transfer on a 1030:1 area ratio rocket nozzle
NASA Technical Reports Server (NTRS)
Kacynski, Kenneth J.; Pavli, Albert J.; Smith, Tamara A.
1987-01-01
A 1030:1 carbon steel, heat-sink nozzle was tested. The test conditions included a nominal chamber pressure of 2413 kN/sq m and a mixture ratio range of 2.78 to 5.49. The propellants were gaseous oxygen and gaseous hydrogen. Outer wall temperature measurements were used to calculate the inner wall temperature and the heat flux and heat rate to the nozzle at specified axial locations. The experimental heat fluxes were compared to those predicted by the Two-Dimensional Kinetics (TDK) computer model analysis program. When laminar boundary layer flow was assumed in the analysis, the predicted values were within 15 percent of the experimental values for the area ratios of 20 to 975. However, when turbulent boundary layer conditions were assumed, the predicted values were approximately 120 percent higher than the experimental values. A study was performed to determine if the conditions within the nozzle could sustain a laminar boundary layer. Using the flow properties predicted by TDK, the momentum-thickness Reynolds number was calculated, and the point of transition to turbulent flow was predicted. The predicted transition point was within 0.5 inches of the nozzle throat. Calculations of the acceleration parameter were then made to determine if the flow conditions could produce relaminarization of the boundary layer. It was determined that if the boundary layer flow was inclined to transition to turbulent, the acceleration conditions within the nozzle would tend to suppress turbulence and keep the flow laminar-like.
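The relaminarization check described above rests on the free-stream acceleration parameter; a minimal sketch (the flow numbers are hypothetical, and the threshold of roughly 3×10⁻⁶ is the value often quoted for accelerating boundary layers, not a figure taken from this report):

```python
def acceleration_parameter(nu, u, du_dx):
    """Free-stream acceleration parameter K = (nu / U^2) * dU/dx."""
    return nu / u**2 * du_dx

# Hypothetical nozzle-flow numbers (SI units):
K = acceleration_parameter(nu=1.5e-5, u=500.0, du_dx=2.0e5)
# Values above roughly 3e-6 are commonly associated with suppression of
# turbulence (relaminarization) in strongly accelerating boundary layers.
print(K, K > 3e-6)
```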
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by operating conditions, and sensors may also be subjected to interference by other devices. Many different types of sensor errors such as outliers, missing values, drifts and corruption with noise may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimated values computed by the functional redundancy system.
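The detect-and-replace idea can be sketched without the ORKF/LW-PLS machinery: flag a sample when its residual against a model prediction exceeds a threshold (or when it is missing), and substitute the model estimate. This is a simplified stand-in for the hybrid system, with made-up glucose values:

```python
def screen_signal(measured, predicted, threshold):
    """Replace a measurement with the model estimate when the residual
    exceeds the threshold or the value is missing; return the cleaned
    signal and per-sample fault flags."""
    cleaned, flags = [], []
    for m, p in zip(measured, predicted):
        bad = m is None or abs(m - p) > threshold
        cleaned.append(p if bad else m)
        flags.append(bad)
    return cleaned, flags

cgm = [100, 102, 400, None, 108]    # mg/dL, with a spike and a dropout
model = [101, 103, 105, 106, 107]   # hypothetical model predictions
cleaned, flags = screen_signal(cgm, model, threshold=30)
print(cleaned)  # [100, 102, 105, 106, 108]
print(flags)    # [False, False, True, True, False]
```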
Genetic Variants from Lipid-Related Pathways and Risk for Incident Myocardial Infarction
Song, Ci; Pedersen, Nancy L.; Reynolds, Chandra A.; Sabater-Lleal, Maria; Kanoni, Stavroula; Willenborg, Christina; Syvänen, Ann-Christine; Watkins, Hugh; Hamsten, Anders; Prince, Jonathan A.; Ingelsson, Erik
2013-01-01
Background Circulating lipid levels, as well as several familial lipid metabolism disorders, are strongly associated with initiation and progression of atherosclerosis and incidence of myocardial infarction (MI). Objectives We hypothesized that genetic variants associated with circulating lipid levels would also be associated with MI incidence, and have tested this in three independent samples. Setting and Subjects Using age- and sex-adjusted additive genetic models, we analyzed 554 single nucleotide polymorphisms (SNPs) in 41 candidate gene regions proposed to be involved in lipid-related pathways potentially predisposing to incidence of MI in 2,602 participants of the Swedish Twin Register (STR; 57% women). All associations with nominal P<0.01 were further investigated in the Uppsala Longitudinal Study of Adult Men (ULSAM; N = 1,142). Results In the present study, we report associations of lipid-related SNPs with incident MI in two community-based longitudinal studies with in silico replication in a meta-analysis of genome-wide association studies. Overall, there were 9 SNPs in STR with nominal P-value <0.01 that were successfully genotyped in ULSAM. rs4149313 located in ABCA1 was associated with MI incidence in both longitudinal study samples with nominal significance (hazard ratio, 1.36 and 1.40; P-value, 0.004 and 0.015 in STR and ULSAM, respectively). In silico replication supported the association of rs4149313 with coronary artery disease in an independent meta-analysis including 173,975 individuals of European descent from the CARDIoGRAMplusC4D consortium (odds ratio, 1.03; P-value, 0.048). Conclusions rs4149313 is one of the few amino acid changing variants in ABCA1 known to associate with reduced cholesterol efflux. Our results are suggestive of a weak association between this variant and the development of atherosclerosis and MI. PMID:23555974
2013-10-21
depend on the quality of allocating resources. This work uses a reliability model of system and environmental covariates incorporating information at...state space. Further, the use of condition variables allows for the direct modeling of maintenance impact with the assumption that a nominal value ... value ), the model in the application of aviation maintenance can provide a useful estimation of reliability at multiple levels. Adjusted survival
Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun
2016-12-01
There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but that without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
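A basic funnel-plot construction for a Binomial outcome can be sketched with normal-approximation control limits around a fixed benchmark proportion; this is the simple case the paper starts from, in which benchmark uncertainty is ignored (the proportions below are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def funnel_limits(p0, n, level=0.95):
    """Normal-approximation control limits around benchmark proportion p0
    for a provider treating n cases. Benchmark uncertainty is ignored,
    which is exactly the simplification the paper examines."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * sqrt(p0 * (1 - p0) / n)
    return max(0.0, p0 - half), min(1.0, p0 + half)

# Hypothetical benchmark mortality of 8% and a provider with 200 cases:
lo, hi = funnel_limits(p0=0.08, n=200)
print(round(lo, 4), round(hi, 4))
```

A provider whose observed proportion falls outside (lo, hi) would be flagged for investigation; the paper's point is that when p0 is itself estimated, these limits no longer hold their nominal 95% coverage.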
Clinical implementation of photon beam flatness measurements to verify beam quality.
Goodall, Simon; Harding, Nicholas; Simpson, Jake; Alexander, Louise; Morgan, Steve
2015-11-08
This work describes the replacement of Tissue Phantom Ratio (TPR) measurements with beam profile flatness measurements to determine photon beam quality during routine quality assurance (QA) measurements. To achieve this, a relationship was derived between the existing TPR15/5 energy metric and beam flatness, to provide baseline values and clinically relevant tolerances. The beam quality was varied around two nominal beam energy values for four matched Elekta linear accelerators (linacs) by varying the bending magnet currents and reoptimizing the beam. For each adjusted beam quality the TPR15/5 was measured using an ionization chamber and Solid Water phantom. Two metrics of beam flatness were evaluated using two identical commercial ionization chamber arrays. A linear relationship was found between TPR15/5 and both metrics of flatness, for both nominal energies and on all linacs. Baseline diagonal flatness (FDN) values were measured to be 103.0% (ranging from 102.5% to 103.8%) for 6 MV and 102.7% (ranging from 102.6% to 102.8%) for 10 MV across all four linacs. Clinically acceptable tolerances of ± 2% for 6 MV, and ± 3% for 10 MV, were derived to equate to the current TPR15/5 clinical tolerance of ± 0.5%. Small variations in the baseline diagonal flatness values were observed between ionization chamber arrays; however, the rate of change of TPR15/5 with diagonal flatness was found to remain within experimental uncertainty. Measurements of beam flatness were shown to display an increased sensitivity to variations in the beam quality when compared to TPR measurements. This effect is amplified for higher nominal energy photons. The derivation of clinical baselines and associated tolerances has allowed this method to be incorporated into routine QA, streamlining the process whilst also increasing versatility. In addition, the effect of beam adjustment can be observed in real time, allowing increased practicality during corrective and preventive maintenance interventions.
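Mapping the clinical TPR15/5 tolerance onto a flatness tolerance only needs the slope of the fitted linear relationship; a sketch with a made-up slope (the paper reports the mapping ±0.5% TPR → ±2% flatness at 6 MV, so the illustrative slope below is chosen to reproduce that):

```python
def flatness_tolerance(tpr_tolerance, slope):
    """Convert a TPR15/5 tolerance into a beam-flatness tolerance using
    the slope dTPR/dFlatness of the fitted linear relationship."""
    return tpr_tolerance / slope

# Hypothetical slope: TPR15/5 changes by 0.25% per 1% change in diagonal
# flatness. The clinical +/-0.5% TPR tolerance then maps to +/-2% flatness:
print(flatness_tolerance(0.5, 0.25))  # 2.0
```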
Analysis of Wind Tunnel Longitudinal Static and Oscillatory Data of the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.; Curry, Timothy J.; Brandon, Jay M.
1997-01-01
Static and oscillatory wind tunnel data are presented for a 10-percent-scale model of an F-16XL aircraft. Static data include the effect of angle of attack, sideslip angle, and control surface deflections on aerodynamic coefficients. Dynamic data from small-amplitude oscillatory tests are presented at nominal values of angle of attack between 20 and 60 degrees. Model oscillations were performed at five frequencies from 0.6 to 2.9 Hz and one amplitude of 5 degrees. A simple harmonic analysis of the oscillatory data provided Fourier coefficients associated with the in-phase and out-of-phase components of the aerodynamic coefficients. A strong dependence of the oscillatory data on frequency led to the development of models with unsteady terms in the form of indicial functions. Two models expressing the variation of the in-phase and out-of-phase components with angle of attack and frequency were proposed and their parameters estimated from measured data.
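The simple harmonic analysis mentioned above extracts in-phase (cosine) and out-of-phase (sine) Fourier coefficients from a sampled aerodynamic coefficient; a sketch on a synthetic signal with known components (not the F-16XL data):

```python
from math import cos, pi, sin

def fourier_components(samples, dt, freq):
    """In-phase (cosine) and out-of-phase (sine) first-harmonic Fourier
    coefficients of a signal sampled over an integer number of oscillation
    cycles, by rectangle-rule quadrature."""
    w = 2 * pi * freq
    T = len(samples) * dt
    a = 2 / T * sum(c * cos(w * i * dt) for i, c in enumerate(samples)) * dt
    b = 2 / T * sum(c * sin(w * i * dt) for i, c in enumerate(samples)) * dt
    return a, b

# Synthetic 1 Hz oscillation with known components 0.3 and -0.1,
# sampled over exactly one cycle:
dt, f = 0.001, 1.0
sig = [0.3 * cos(2 * pi * f * i * dt) - 0.1 * sin(2 * pi * f * i * dt)
       for i in range(1000)]
a, b = fourier_components(sig, dt, f)
print(round(a, 6), round(b, 6))  # recovers 0.3 and -0.1
```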
Simulation of UV atomic radiation for application in exhaust plume spectrometry
NASA Astrophysics Data System (ADS)
Wallace, T. L.; Powers, W. T.; Cooper, A. E.
1993-06-01
Quantitative analysis of exhaust plume spectral data has long been a goal of developers of advanced engine health monitoring systems which incorporate optical measurements of rocket exhaust constituents. Discussed herein is the status of present efforts to model and predict atomic radiation spectra and infer free-atom densities from emission/absorption measurements as part of the Optical Plume Anomaly Detection (OPAD) program at Marshall Space Flight Center (MSFC). A brief examination of the mathematical formalism is provided in the context of predicting radiation from the Mach disk region of the SSME exhaust flow at nominal conditions during ground level testing at MSFC. Computational results are provided for Chromium and Copper at selected transitions which indicate a strong dependence upon broadening parameter values determining the absorption-emission line shape. Representative plots of recent spectral data from the Stennis Space Center (SSC) Diagnostic Test Facility (DTF) rocket engine are presented and compared to numerical results from the present self-absorbing model; a comprehensive quantitative analysis will be reported at a later date.
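The strong dependence on the broadening parameter can be illustrated with the simplest single-parameter line shape, the Lorentzian; this is a sketch only (a full plume radiation model would typically use Voigt profiles combining Doppler and collisional broadening, and the numbers below are arbitrary):

```python
from math import pi

def lorentzian(nu, nu0, gamma):
    """Area-normalized Lorentzian line profile centred at nu0 with
    half-width-at-half-maximum gamma, the broadening parameter."""
    return (gamma / pi) / ((nu - nu0) ** 2 + gamma ** 2)

# Peak height scales as 1/(pi*gamma): doubling the broadening parameter
# halves the peak while preserving the integrated line strength.
peak_narrow = lorentzian(0.0, 0.0, 0.5)
peak_broad = lorentzian(0.0, 0.0, 1.0)
print(peak_narrow / peak_broad)  # 2.0
```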
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
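The sample-size machinery referenced above can be sketched in its simplest normal-approximation form for a two-group comparison with unequal variances and group sizes; note this is a simplified stand-in, since Schouten's actual formula also carries a degrees-of-freedom correction, and the inputs below are arbitrary:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, s1, s2, ratio=1.0, alpha=0.05, power=0.8):
    """Normal-approximation sample sizes (n1, n2) to detect a mean
    difference delta between two groups with SDs s1, s2 and allocation
    ratio n2/n1 = ratio."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    n1 = (s1 ** 2 + s2 ** 2 / ratio) * (za + zb) ** 2 / delta ** 2
    return ceil(n1), ceil(ratio * n1)

# Hypothetical effect of 0.5 with SDs 1.0 and 1.2, equal group sizes:
print(n_per_group(delta=0.5, s1=1.0, s2=1.2))
```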
Nonreactive mixing study of a scramjet swept-strut fuel injector
NASA Technical Reports Server (NTRS)
Mcclinton, C. R.; Torrence, M. G.; Gooderum, P. B.; Young, I. G.
1975-01-01
The results are presented of a cold-mixing investigation performed to supply combustor design information and to determine optimum normal fuel-injector configurations for a general scramjet swept-strut fuel injector. The experimental investigation was made with two swept struts in a closed duct at a Mach number of 4.4 and a nominal ratio of jet mass flow to air mass flow of 0.0295, with helium used to simulate hydrogen fuel. Four injector patterns were evaluated; they represented the range of hole spacing and the ratio of jet dynamic pressure to free-stream dynamic pressure. Helium concentration, pitot pressure, and static pressure in the downstream mixing region were measured to generate the contour plots needed to define the mixing-region flow field and the mixing parameters. Experimental results show that the fuel penetration from the struts was less than the predicted values based on flat-plate data; but the mixing rate was faster and produced a mixing length less than one-half that predicted.
A robust nonlinear filter for image restoration.
Koivunen, V
1995-01-01
A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to an inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
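For the location (constant-signal) case, the least trimmed squares criterion has a simple exact solution that illustrates why no scale parameter or threshold is needed: the optimal h-subset of a 1-D sample is contiguous in sorted order, so a scan over sorted windows suffices. The window values below are hypothetical:

```python
def lts_location(window, h):
    """Least-trimmed-squares location estimate: the mean of the h
    observations whose squared residuals about that mean are smallest.
    For 1-D location the optimal h-subset is contiguous in sorted order,
    so an exhaustive scan over sorted windows finds the exact optimum."""
    x = sorted(window)
    best = None
    for i in range(len(x) - h + 1):
        sub = x[i:i + h]
        m = sum(sub) / h
        sse = sum((v - m) ** 2 for v in sub)
        if best is None or sse < best[0]:
            best = (sse, m)
    return best[1]

# A 3x3 pixel window with two impulsive outliers (255 and 0);
# trimming to h=6 rejects both and estimates ~10.17:
print(lts_location([10, 11, 9, 10, 255, 12, 0, 11, 10], h=6))
```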
Data acquisition and processing in the ATLAS tile calorimeter phase-II upgrade demonstrator
NASA Astrophysics Data System (ADS)
Valero, A.; Tile Calorimeter System, ATLAS
2017-10-01
The LHC has planned a series of upgrades culminating in the High Luminosity LHC which will have an average luminosity 5-7 times larger than the nominal Run 2 value. The ATLAS Tile Calorimeter will undergo an upgrade to accommodate the HL-LHC parameters. The TileCal readout electronics will be redesigned, introducing a new readout strategy. A Demonstrator program has been developed to evaluate the new proposed readout architecture and prototypes of all the components. In the Demonstrator, the detector data received in the Tile PreProcessors (PPr) are stored in pipeline buffers and, upon the reception of an external trigger signal, the data events are processed, packed and read out in parallel through the legacy ROD system, the new Front-End Link eXchange system and an Ethernet connection for monitoring purposes. This contribution describes in detail the data processing and the hardware, firmware and software components of the TileCal Demonstrator readout system.
Sun, Li; Li, Donghai; Gao, Zhiqiang; Yang, Zhao; Zhao, Shen
2016-09-01
Control of the non-minimum phase (NMP) system is challenging, especially in the presence of modelling uncertainties and external disturbances. To this end, this paper presents a combined feedforward and model-assisted Active Disturbance Rejection Control (MADRC) strategy. Based on the nominal model, the feedforward controller is used to produce a tracking performance that has minimum settling time subject to a prescribed undershoot constraint. On the other hand, the unknown disturbances and uncertain dynamics beyond the nominal model are compensated by MADRC. Since the conventional Extended State Observer (ESO) is not suitable for the NMP system, a model-assisted ESO (MESO) is proposed based on the nominal observable canonical form. The convergence of MESO is proved in time domain. The stability, steady-state characteristics and robustness of the closed-loop system are analyzed in frequency domain. The proposed strategy has only one tuning parameter, i.e., the bandwidth of MESO, which can be readily determined with a prescribed robustness level. Some comparative examples are given to show the efficacy of the proposed method. This paper depicts a promising prospect of the model-assisted ADRC in dealing with complex systems. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
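The observer at the heart of ADRC-style schemes can be sketched for the simplest case: a linear extended state observer for a first-order plant y' = f + b·u, where the lumped disturbance f is estimated as an extended state. This is a generic, bandwidth-parameterized ESO, not the paper's MESO built on the observable canonical form of the NMP model:

```python
def eso_step(z1, z2, y, u, b, wo, dt):
    """One Euler step of a linear extended state observer for the plant
    y' = f + b*u. z1 estimates y; z2 estimates the total disturbance f.
    Gains 2*wo and wo**2 place both observer poles at -wo."""
    e = y - z1
    z1 += dt * (z2 + b * u + 2 * wo * e)
    z2 += dt * (wo ** 2 * e)
    return z1, z2

# Hypothetical plant with unknown constant disturbance f = 1.5 and u = 0:
y, f_true, b, dt = 0.0, 1.5, 1.0, 0.001
z1 = z2 = 0.0
for _ in range(20000):
    y += dt * f_true                              # true plant dynamics
    z1, z2 = eso_step(z1, z2, y, 0.0, b, wo=20.0, dt=dt)
print(round(z2, 3))  # the disturbance estimate converges to 1.5
```

The single tuning knob wo is the observer bandwidth, mirroring the paper's point that only the bandwidth of MESO needs tuning.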
Plasmonic behaviour of sputtered Au nanoisland arrays
NASA Astrophysics Data System (ADS)
Tvarožek, V.; Szabó, O.; Novotný, I.; Kováčová, S.; Škriniarová, J.; Šutta, P.
2017-02-01
The specificity of the formation of Au sputtered nanoisland arrays (NIA) on a glass substrate or on a ZnO thin film doped by Ga is demonstrated. Statistical analysis of morphology images (SEM, AFM) exhibited the Log-normal distribution of the size (area) of nanoislands; their mode AM varied from 8 to 328 nm² depending on the sputtering power density, which determined the nominal thicknesses in the range of 2-8 nm. Preferential polycrystalline texture (111) of Au NIA increased with the power density and after annealing. Transverse localised surface plasmonic resonance (LSPR; evaluated by transmission UV-vis spectroscopy) showed the red shift of the extinction peaks (Δλ ≤ 100 nm) with an increase of the nominal thickness, and the blue shift (Δλ ≤ -65 nm) after annealing of Au NIA. The plasmonic behaviour of Au NIA was described by modification of a size-scaling universal model using the nominal thin film thickness as a technological scaling parameter. Sputtering of a Ti intermediate adhesive ultrathin film between the glass substrate and gold improves the adhesion of Au nanoislands as well as supporting the formation of more defined Au NIA structures of smaller dimensions.
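The mode of a log-normal size distribution follows from the fitted log-mean and log-standard-deviation as exp(μ - σ²); a sketch on synthetic, right-skewed area data (hypothetical numbers, not the measured nanoisland areas):

```python
from math import exp, log
from statistics import mean, pstdev

def lognormal_mode(areas):
    """Mode of a log-normal fit to a sample of areas: exp(mu - sigma^2),
    with mu and sigma estimated from the logarithms of the data."""
    logs = [log(a) for a in areas]
    mu, sigma = mean(logs), pstdev(logs)
    return exp(mu - sigma ** 2)

# Synthetic skewed areas in nm^2 (illustrative only):
areas = [5, 8, 9, 10, 12, 15, 20, 30, 60, 150]
m = lognormal_mode(areas)
print(round(m, 2))  # the mode sits well below the arithmetic mean
```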
The potential value of employing a RLV-based ``pop-up'' trajectory approach for space access
NASA Astrophysics Data System (ADS)
Nielsen, Edward; O'Leary, Robert
1997-01-01
This paper presents the potential benefits of employing useful upper stages with planned reusable launch vehicle systems to increase payload performance to various earth orbits. It highlights these benefits through performance analysis on a generic vehicle/upper-stage combination (basing all estimates on realistic technology availability). A nominal 34,019 kg [75,000 lbm] dry mass RLV capable of orbiting 454 kg into a polar orbit by itself (SSTO) would be capable of orbiting 9500-10,000 kg into a polar orbit using a nominal upper stage released from a suborbital trajectory. The paper also emphasizes the technical and operational issues associated with actually executing a ``pop-up'' trajectory launch and deployment.
Establishing the credibility of archaeoastronomical sites
NASA Astrophysics Data System (ADS)
Ruggles, Clive
2016-10-01
In 2011, an attempt to nominate a prehistoric ``observatory'' site onto the World Heritage List proved unsuccessful because UNESCO rejected the interpretation as statistically and archaeologically unproven. The case highlights an issue at the heart of archaeoastronomical methodology and interpretation: the mere existence of astronomical alignments in ancient sites does not prove that they were important to those who constructed and used the sites, let alone giving us insights into their likely significance and meaning. The fact that more archaeoastronomical sites are now appearing on national tentative lists prior to their WHL nomination means that this is no longer just an academic issue; establishing the credibility of the archaeoastronomical interpretations is crucial to any assessment of their value in heritage terms.
Haywood, Kirstie; Lyddiatt, Anne; Brace-McDonnell, Samantha J; Staniszewska, Sophie; Salek, Sam
2017-06-01
Active patient engagement is increasingly viewed as essential to ensuring that patient-driven perspectives are considered throughout the research process. However, guidance for patient engagement (PE) in HRQoL research does not exist, the evidence base for practice is limited, and we know relatively little about the underpinning values that can impact on PE practice. This is the first study to explore the values that should underpin PE in contemporary HRQoL research to help inform future good practice guidance. A modified 'World Café' was hosted as a collaborative activity between patient partners, clinicians and researchers: self-nominated conference delegates participated in group discussions to explore values associated with the conduct and consequences of PE. Values were captured via post-it notes and by nominated note-takers. Data were thematically analysed: emergent themes were coded and agreement checked. Associations between emergent themes, values and the Public Involvement Impact Assessment Framework were explored. Eighty participants, including 12 patient partners, participated in the 90-min event. Three core values were defined: (1) building relationships; (2) improving research quality and impact; and (3) developing best practice. Participants valued the importance of building genuine, collaborative and deliberative relationships, underpinned by honesty, respect, co-learning and equity, and the impact of effective PE on research quality and relevance. An explicit statement of values seeks to align all stakeholders on the purpose, practice and credibility of PE activities. An innovative, flexible and transparent research environment was valued as essential to developing a trustworthy evidence base with which to underpin future guidance for good PE practice.
40 CFR 205.153 - Engine displacement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Engine displacement. 205.153 Section... TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycles § 205.153 Engine displacement. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
40 CFR 205.153 - Engine displacement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Engine displacement. 205.153 Section... TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycles § 205.153 Engine displacement. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
40 CFR 205.153 - Engine displacement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Engine displacement. 205.153 Section... TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycles § 205.153 Engine displacement. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
40 CFR 205.153 - Engine displacement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Engine displacement. 205.153 Section... TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycles § 205.153 Engine displacement. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
40 CFR 205.153 - Engine displacement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Engine displacement. 205.153 Section... TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS Motorcycles § 205.153 Engine displacement. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
Richard Nixon's Irish Wake: A Case of Generic Transference.
ERIC Educational Resources Information Center
Jablonski, Carol J.
1979-01-01
Discusses the appropriateness of Richard Nixon's staging a ceremony to nominate Gerald Ford as vice-president following Spiro Agnew's resignation, in terms of generic transference (superimposing an established rhetorical form onto an unprecedented rhetorical situation). The ceremony reaffirmed American values and temporarily suspended growing…
7 CFR 800.187 - Conflicts of interest
Code of Federal Regulations, 2010 CFR
2010-01-01
... courtesies of nominal value in a business or work relationship if the exchange is wholly free of any... relationship, rather than a business or work relationship. (c) Conflicts. In addition to the conflicts of... business by buying, selling, transporting, cleaning, elevating, storing, binning, mixing, blending, drying...
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Mondal, Debabrata; Motalab, Mohammad
2016-07-01
In the present study, the stress-strain behavior of the human anterior cruciate ligament (ACL) is studied under uniaxial loads applied at various strain rates. Tensile testing of human ACL samples requires state-of-the-art test facilities, and the difficulty of obtaining human ligament for testing results in very limited archival data. Nominal stress vs. deformation gradient plots for different strain rates, as found in the literature, are used to model the material behavior as either hyperelastic or viscoelastic. The well-known five-parameter Mooney-Rivlin constitutive model for hyperelastic materials and the Prony series model for viscoelastic materials are used, and the objective of the analyses is to determine the model constants and their variation with strain rate for the human ACL material using a non-linear curve-fitting tool. For the hyperelastic Mooney-Rivlin model, the variation of each model constant with strain rate is plotted, and these plots are then fitted using the software package MATLAB, yielding a power-law relationship between each model constant and strain rate. The resulting material model for the human ACL can be implemented in any commercial finite element software package for stress analysis.
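The final fitting step, a power law between a model constant and strain rate, reduces to linear least squares in log-log coordinates; a sketch with made-up constants that follow an exact power law (the study used MATLAB's curve-fitting tools on experimentally derived constants):

```python
from math import exp, log

def power_law_fit(x, y):
    """Fit y = a * x**b by ordinary least squares on log(y) vs log(x)."""
    lx, ly = [log(v) for v in x], [log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = exp(my - b * mx)
    return a, b

# Hypothetical Mooney-Rivlin constant values at three strain rates,
# generated from an exact power law c = 2.0 * rate**0.3:
rates = [0.1, 1.0, 10.0]
c1 = [2.0 * r ** 0.3 for r in rates]
a, b = power_law_fit(rates, c1)
print(round(a, 6), round(b, 6))  # recovers 2.0 and 0.3
```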
Energy-conserving programming of VVI pacemakers: a telemetry-supported, long-term, follow-up study.
Klein, H H; Knake, W
1990-06-01
Thirty patients with VVI pacemakers (Quantum 253-09, 253-19, Intermedics Inc., Freeport, TX) were observed for a mean of 65 months. Within 12 months after implantation, optimized output programming was performed in 29 patients. This included a decrease in pulse amplitude (22 patients), pulse width (4 patients), and/or pacing rate (11 patients). After 65 months postimplantation, telemetered battery voltage and battery impedance were compared with the predicted values expected when the pulse generator constantly stimulates at nominal program conditions (heart rate 72.3 beats/min, pulse amplitude 5.4 V, pulse width 0.61 ms). Instead of an expected cell voltage of 2.6 V and a cell impedance of 10 kΩ, mean telemetered values amounted to 2.78 V and 1.4 kΩ, respectively. These data correspond to a battery age of 12-15 months at nominal program conditions. This long-term follow-up study suggests that adequate programming will extend battery longevity and thus pulse generator survival in many patients.
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
ERIC Educational Resources Information Center
Kalender, Ilker
2012-01-01
catcher is a software program designed to compute the [omega] index, a common statistical index for the identification of collusions (cheating) among examinees taking an educational or psychological test. It requires (a) responses and (b) ability estimations of individuals, and (c) item parameters to make computations and outputs the results of…
Digital adaptive controllers for VTOL vehicles. Volume 2: Software documentation
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.; Pratt, S. G.
1979-01-01
The VTOL approach and landing test (VALT) adaptive software is documented. Two self-adaptive algorithms, one based on an implicit model reference design and the other on an explicit parameter estimation technique, were evaluated. The organization of the software, user options, and a nominal set of input data are presented, along with a flow chart and program listing of each algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolski, Jeffrey S.; Barlow, David B.; Macek, Robert J.
2011-01-01
Particle ray tracing through simulated 3D magnetic fields was executed to investigate the effective quadrupole strength of the edge focusing of the rectangular bending magnets in the Los Alamos Proton Storage Ring (PSR). The particle rays receive a kick in the edge field of the rectangular dipole. A focal length may be calculated from the particle tracking and related to the fringe field integral (FINT) model parameter. This tech note introduces the baseline lattice model of the PSR and motivates the need for an improvement in the baseline model's vertical tune prediction, which differs from measurement by 0.05. An improved model of the PSR is created by modifying the fringe field integral parameter to the values suggested by the ray tracing investigation. This improved model is then verified against measurement at the nominal PSR operating set point and at set points far from the nominal operating conditions. Lastly, Linear Optics from Closed Orbits (LOCO) is employed in an orbit response matrix method for model improvement to verify the quadrupole strengths of the improved model.
Dynamic diagnostics of the error fields in tokamaks
NASA Astrophysics Data System (ADS)
Pustovitov, V. D.
2007-07-01
The error field diagnostics based on magnetic measurements outside the plasma is discussed. The analysed methods rely on measuring the plasma dynamic response to the finite-amplitude external magnetic perturbations, which are the error fields and the pre-programmed probing pulses. Such pulses can be created by the coils designed for static error field correction and for stabilization of the resistive wall modes, the technique developed and applied in several tokamaks, including DIII-D and JET. Here analysis is based on the theory predictions for the resonant field amplification (RFA). To achieve the desired level of the error field correction in tokamaks, the diagnostics must be sensitive to signals of several Gauss. Therefore, part of the measurements should be performed near the plasma stability boundary, where the RFA effect is stronger. While the proximity to the marginal stability is important, the absolute values of plasma parameters are not. This means that the necessary measurements can be done in the diagnostic discharges with parameters below the nominal operating regimes, with the stability boundary intentionally lowered. The estimates for ITER are presented. The discussed diagnostics can be tested in dedicated experiments in existing tokamaks. The diagnostics can be considered as an extension of the 'active MHD spectroscopy' used recently in the DIII-D tokamak and the EXTRAP T2R reversed field pinch.
NASA Astrophysics Data System (ADS)
Živanović, Dragan; Simić, Milan; Kokolanski, Zivko; Denić, Dragan; Dimcev, Vladimir
2018-04-01
A software-supported procedure for generating long-duration complex test sequences, suitable for testing instruments that detect standard voltage quality (VQ) disturbances, is presented in this paper. This solution for test signal generation includes significant improvements over the computer-based signal generator presented and described in a previously published paper [1]. The generator is based on virtual instrumentation software for defining the basic signal parameters, an NI 6343 data acquisition card, and a power amplifier that raises the output voltage to the nominal RMS value of 230 V. Definition of the basic signal parameters in the LabVIEW application software is supported by script files, which allows simple repetition of specific test signals and the combination of several different test sequences into a complex composite test waveform. The basic advantage of this generator over similar signal-generation solutions is its ability to generate long-duration test sequences according to predefined complex test scenarios, including various combinations of VQ disturbances defined in accordance with the European standard EN 50160. Experimental verification of the presented signal generator's capability is performed by testing the commercial power quality analyzer Fluke 435 Series II. Some characteristic complex test signals with various disturbances, together with logged data obtained from the tested power quality analyzer, are shown in this paper.
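As a rough illustration of composing such a test sequence in software (not the authors' LabVIEW implementation), the sketch below builds a sampled nominal 230 V RMS sine and superimposes a rectangular voltage dip, one of the EN 50160 disturbance classes; all timing and depth values are invented:

```python
import math

def test_sequence(samples_per_cycle=64, cycles=10,
                  sag_start=3, sag_end=6, sag_depth=0.7):
    """Sampled 230 V RMS sine with a rectangular voltage dip applied
    between cycles sag_start (inclusive) and sag_end (exclusive).
    Depth and timing here are illustrative placeholders."""
    amp = 230.0 * math.sqrt(2.0)  # peak value of the nominal waveform
    out = []
    for c in range(cycles):
        scale = sag_depth if sag_start <= c < sag_end else 1.0
        for k in range(samples_per_cycle):
            out.append(scale * amp * math.sin(2.0 * math.pi * k / samples_per_cycle))
    return out
```

A longer scripted scenario would concatenate sequences like this one (dips, swells, harmonics) in the order defined by the script file.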
NASA Technical Reports Server (NTRS)
Brendley, K.; Chato, J. C.
1982-01-01
The parameters of the efflux from a helium dewar in space were numerically calculated. The flow was modeled as a one-dimensional compressible ideal gas with variable properties. The primary boundary conditions are flow with friction and flow with heat transfer and friction. Two PASCAL programs were developed to calculate the efflux parameters: EFFLUXD and EFFLUXM. EFFLUXD calculates the minimum mass flow for the given shield temperatures and shield heat inputs. It then calculates the pipe lengths, diameter, and fluid parameters which satisfy all boundary conditions. Since the diameter returned by EFFLUXD is only rarely of nominal size, EFFLUXM calculates the mass flow and shield heat exchange for given pipe lengths, diameter, and shield temperatures.
NASA Technical Reports Server (NTRS)
Athans, M.
1974-01-01
A design concept of the dynamic control of aircraft in the near terminal area is discussed. An arbitrary set of nominal air routes, with possible multiple merging points, all leading to a single runway, is considered. The system allows for the automated determination of acceleration/deceleration of aircraft along the nominal air routes, as well as for the automated determination of path-stretching delay maneuvers. In addition to normal operating conditions, the system accommodates: (1) variable commanded separations over the outer marker to allow for takeoffs and between successive landings and (2) emergency conditions under which aircraft in distress have priority. The system design is based on a combination of three distinct optimal control problems involving a standard linear-quadratic problem, a parameter optimization problem, and a minimum-time rendezvous problem.
Using explanatory crop models to develop simple tools for Advanced Life Support system studies
NASA Technical Reports Server (NTRS)
Cavazzoni, J.
2004-01-01
System-level analyses for Advanced Life Support require mathematical models for various processes, such as for biomass production and waste management, which would ideally be integrated into overall system models. Explanatory models (also referred to as mechanistic or process models) would provide the basis for a more robust system model, as these would be based on an understanding of specific processes. However, implementing such models at the system level may not always be practicable because of their complexity. For the area of biomass production, explanatory models were used to generate parameters and multivariable polynomial equations for basic models that are suitable for estimating the direction and magnitude of daily changes in canopy gas-exchange, harvest index, and production scheduling for both nominal and off-nominal growing conditions. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Krivtsov, S. N.; Yakimov, I. V.; Ozornin, S. P.
2018-03-01
A mathematical model of a solenoid common rail fuel injector was developed. It differs from existing models in simulating control valve wear. A Bosch common rail injector, 0445110376 Series (Cummins ISF 2.8 diesel engine), was used as the research object. Injector parameters (fuel delivery and back leakage) were determined by calculation and by experiment. The GT-Suite model's average R² is 0.93, which means that it predicts the injection-rate shape very accurately (for nominal and marginal technical conditions of an injector). Numerical analysis and experimental studies showed that control valve wear increases back leakage and fuel delivery (especially at 160 MPa). Regression models relating fuel delivery and back leakage to fuel pressure and energizing time were developed (for nominal and marginal technical conditions).
NASA Technical Reports Server (NTRS)
Beck, S. M.
1975-01-01
A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons which are ejected from the entrance window and enter the cup. A total systematic error of -0.83 percent is calculated to be the decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
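The two conversions implied by the abstract above (Faraday-cup current to proton rate for a singly charged beam, and deposited energy to ion-chamber charge via the measured W-value) can be sketched as follows; any specific beam-current or energy inputs are hypothetical:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, coulombs
W_HELIUM_EV = 30.76         # eV per ion pair in helium (value from the abstract)

def protons_per_second(beam_current_amps):
    """Convert Faraday-cup current to proton rate (singly charged beam)."""
    return beam_current_amps / E_CHARGE

def ion_chamber_charge(energy_deposited_ev):
    """Charge collected in the helium ion chamber for a given energy
    deposited in the gas: one ion pair per W eV, one elementary charge
    collected per ion pair."""
    ion_pairs = energy_deposited_ev / W_HELIUM_EV
    return ion_pairs * E_CHARGE
```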
Direct recovery of mean gravity anomalies from satellite to satellite tracking
NASA Technical Reports Server (NTRS)
Hajela, D. P.
1974-01-01
The direct recovery of mean gravity anomalies from summed range-rate observations was investigated, the signal path being ground station to a geosynchronous relay satellite to a close satellite significantly perturbed by the short-wavelength features of the earth's gravitational field. To ensure realistic observations, these were simulated with the nominal orbital elements for the relay satellite corresponding to ATS-6, and for two different close satellites (one at about 250 km height, the other at about 900 km height) corresponding to the nominal values for GEOS-C. The earth's gravitational field was represented by a reference set of potential coefficients up to degree and order 12, considered as known values, and by residual gravity anomalies obtained by subtracting the anomalies implied by the potential coefficients from their terrestrial estimates. It was found that gravity anomalies could be recovered from a strong signal without using any a priori terrestrial information, i.e., considering their initial values as zero and assigning them a zero weight matrix. When recovering them from a weak signal, it was necessary to use the a priori estimate of the standard deviation of the anomalies to form their a priori diagonal weight matrix.
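The recovery step described above reduces to least squares with an optional a priori diagonal weight matrix. A minimal two-parameter sketch is below; the design matrix and observations are made-up numbers, not the simulated range-rate data:

```python
def recover_anomalies(A, b, sigmas=None):
    """Two-parameter least squares via the normal equations
    (A^T A + W) x = A^T b, where W = diag(1 / sigma_i**2) is built from
    the a priori standard deviations of the anomalies (W = 0 when
    sigmas is None, i.e. no a priori terrestrial information)."""
    m, n = len(A), 2
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    if sigmas is not None:
        for i in range(n):
            AtA[i][i] += 1.0 / sigmas[i] ** 2
    # Cramer's rule for the 2x2 normal-equation system
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    return [(Atb[0] * AtA[1][1] - Atb[1] * AtA[0][1]) / det,
            (Atb[1] * AtA[0][0] - Atb[0] * AtA[1][0]) / det]
```

With weights supplied, the estimates are pulled toward the zero a priori values, which is exactly the stabilizing role they play for the weak-signal case.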
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally expensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in better understanding the sensitivities, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges for fixing insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity of the response to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
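The core of classical RSA, on which the abstract builds, splits Monte Carlo samples into behavioral and non-behavioral sets and scores each parameter by the separation of the two empirical CDFs (a two-sample KS statistic). A minimal sketch, not reproducing the paper's conditional effects or KPCA/SOM ranking:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum separation of
    the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    grid = sorted(set(a) | set(b))
    def cdf(s, x):
        return sum(1 for v in s if v <= x) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in grid)

def rsa_sensitivity(samples, outputs, threshold):
    """Score each parameter by how differently it is distributed in the
    behavioral (output <= threshold) vs non-behavioral sample sets."""
    scores = []
    for j in range(len(samples[0])):
        behav = [s[j] for s, y in zip(samples, outputs) if y <= threshold]
        non = [s[j] for s, y in zip(samples, outputs) if y > threshold]
        scores.append(ks_statistic(behav, non))
    return scores
```

A sensitive parameter separates the two sets strongly (score near 1); an insensitive one scores near 0 and is a candidate for fixing at a nominal value.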
Tsiliyannis, Christos Aristeides
2013-09-01
Hazardous waste incinerators (HWIs) differ substantially from thermal power facilities: instead of maximizing energy production with the minimum amount of fuel, they aim at maximizing throughput. Variations in the quantity or composition of received waste loads may significantly diminish HWI throughput (the decisive profit factor) from its nominal design value. A novel formulation of the combustion balance is presented, based on linear operators, which isolates the waste-feed vector from the invariant combustion stoichiometry kernel. Explicit expressions for the throughput are obtained, in terms of incinerator temperature, flue-gas heat recuperation ratio and design parameters, for an arbitrary number of wastes, based on fundamental principles (mass and enthalpy balances). The impact of waste variations, of the recuperation ratio and of furnace temperature is explicitly determined. It is shown that in the presence of waste uncertainty, the throughput may be a decreasing or increasing function of incinerator temperature and recuperation ratio, depending on the sign of a dimensionless parameter related only to the uncertain wastes. This dimensionless parameter is proposed as a sharp a priori waste 'fingerprint', determining the necessary increase or decrease of the manipulated variables (recuperation ratio, excess air, auxiliary fuel feed rate, auxiliary air flow) in order to balance the HWI and maximize throughput under uncertainty in received wastes. A 10-step procedure is proposed for direct application subject to process capacity constraints. The results may be useful for efficient HWI operation and for preparing hazardous waste blends. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective [1]. There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200 °F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and capacities probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
Performance Optimization of the NASA Large Civil Tiltrotor
2008-07-01
[Excerpt; abbreviations:] …Continuous Power; MRP, Maximum Rated Power (take-off power); OEI, One Engine Inoperative; OGE, Out of Ground Effect; SFC, Specific Fuel Consumption; SLS, Sea… …for the LCTR2 based on a service entry date of 2018. Table 1 summarizes the nominal mission, and Table 2 lists key design values (the initial values… …Aeroflightdynamics Directorate (AFDD), RDECOM (Ref. 4). RC designs are based upon a physics-based synthesis process calibrated to a database of…
Perry, Jonathan; Linsley, Sue
2006-05-01
Nominal group technique is a semi-quantitative/qualitative evaluative methodology. It has been used in health care education for generating ideas to develop curricula and find solutions to problems in programme delivery. This paper aims to describe the use of nominal group technique and present the data from nominal group evaluations of a developing module which used novel approaches to the teaching and assessment of interpersonal skills. Evaluations took place over 3 years. Thirty-six students took part in annual groups. Analysis of the data produced the following themes based on items generated in the groups: role play, marking, course content, teaching style and user involvement. Findings indicate that students valued the role play, the feedback from service users, and the emphasis on engagement and collaboration in the module. The areas which participants found difficult and wished to change included anxiety during experiential practice, the "snapshot" nature of assessment and the use of specific interventions. Indications are also given regarding the impact of changes made by teaching staff over the 3-year evaluation period. The findings support themes within the existing literature on the teaching of interpersonal skills and may to some extent point the way toward best practice in this area. The paper discusses these findings and their implications for nurse education.
NASA Technical Reports Server (NTRS)
Bare, E. Ann; Capone, Francis J.
1989-01-01
An investigation was conducted in the Static Test Facility of the Langley 16-Foot Transonic Tunnel to determine the effects of five geometric design parameters on the internal performance of convergent single expansion ramp nozzles. The effects of ramp chordal angle, initial ramp angle, flap angle, flap length, and ramp length were determined. All nozzles tested had a nominally constant throat area and aspect ratio. Static pressure distributions along the centerlines of the ramp and flap were also obtained for each configuration. Nozzle pressure ratio was varied up to 10.0 for all configurations.
Self-calibration of robot-sensor system
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1990-01-01
The process of finding the coordinate transformation between a robot and an external sensor system is addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the problem can be more readily solved with relatively little computational effort.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, i.e., aerodynamic coefficients can be easily incorporated into the estimation algorithm, representing uncertain parameters, but for initial checkout purposes are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates provided no longer significantly improves.
A Comprehensive Robust Adaptive Controller for Gust Load Alleviation
Quagliotti, Fulvia
2014-01-01
The objective of this paper is the implementation and validation of an adaptive controller for aircraft gust load alleviation. The contribution of this paper is the design of a robust controller that guarantees the reduction of the gust loads, even when the nominal conditions change. Some preliminary results are presented, considering the symmetric aileron deflection as control device. The proposed approach is validated on subsonic transport aircraft for different mass and flight conditions. Moreover, if the controller parameters are tuned for a specific gust model, even if the gust frequency changes, no parameter retuning is required. PMID:24688411
Thermal oil recovery method using self-contained wind-electric sets
NASA Astrophysics Data System (ADS)
Belsky, A. A.; Korolyov, I. A.
2018-05-01
The paper reviews challenges associated with the efficiency of thermal methods of stimulating productive oil strata. The concept of using electrothermal complexes with wind-electric generator (WEG) power supply for this purpose is proposed and justified; their operating principles, main advantages and disadvantages, as well as a schematic technical solution for implementing the intensification of oil extraction, are considered. A mathematical model for finding the operating characteristics of the WEG is presented and its main energy parameters are determined. The adequacy of the mathematical model is confirmed by laboratory simulation stand tests at nominal parameters.
Kielar, Kayla N; Mok, Ed; Hsu, Annie; Wang, Lei; Luxton, Gary
2012-10-01
The dosimetric leaf gap (DLG) in the Varian Eclipse treatment planning system is determined during commissioning and is used to model the effect of the rounded leaf ends of the multileaf collimator (MLC). This parameter attempts to model the physical difference between the radiation and light fields and to account for inherent leakage between leaf tips. With the increased use of single-fraction high-dose treatments requiring larger monitor units comes increased concern about the accuracy of leakage calculations, as leakage accounts for much of the patient dose. This study serves to verify the dosimetric accuracy of the algorithm used to model the rounded-leaf effect for the TrueBeam STx, and describes a methodology for determining best-practice parameter values, given the novel capabilities of the linear accelerator such as flattening filter free (FFF) treatments and a high definition MLC (HDMLC). During commissioning, the nominal MLC position was verified and the DLG parameter was determined using MLC-defined field sizes and moving gap tests, as is common in clinical testing. Treatment plans were created, and the DLG was optimized to achieve less than 1% difference between measured and calculated dose. The DLG value found was tested on treatment plans for all energies (6 MV, 10 MV, 15 MV, 6 MV FFF, 10 MV FFF) and modalities (3D conventional, IMRT, conformal arc, VMAT) available on the TrueBeam STx. The DLG parameter found during the initial MLC testing did not match the leaf gap modeling parameter that provided the most accurate dose delivery in clinical treatment plans. Using the physical leaf gap size as the DLG for the HDMLC can lead to 5% differences between measured and calculated doses. Separate optimization of the DLG parameter using end-to-end tests must be performed to ensure dosimetric accuracy in the modeling of the rounded leaf ends for the Eclipse treatment planning system.
The difference in leaf gap modeling versus physical leaf gap dimensions is more pronounced in the more recent versions of Eclipse for both the HDMLC and the Millennium MLC. Once properly commissioned and tested using a methodology based on treatment plan verification, Eclipse is able to accurately model radiation dose delivered for SBRT treatments using the TrueBeam STx.
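The "separate optimization of the DLG parameter" described above amounts to a one-dimensional search over candidate values. A toy sketch follows, where `calc_dose` is a hypothetical stand-in for re-running the treatment-plan dose calculation at a given DLG (in a real workflow each candidate requires a full Eclipse recalculation and measurement comparison):

```python
def optimize_dlg(measured_dose, calc_dose, candidates, tol=0.01):
    """Pick the DLG candidate whose recalculated dose best matches the
    measured dose, and report whether the best match meets the <1%
    agreement criterion used in the study."""
    best = min(candidates, key=lambda dlg: abs(calc_dose(dlg) - measured_dose))
    rel_err = abs(calc_dose(best) - measured_dose) / measured_dose
    return best, rel_err, rel_err <= tol
```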
The Western Arabian intracontinental volcanic fields as a potential UNESCO World Heritage site
NASA Astrophysics Data System (ADS)
Németh, Károly; Moufti, Mohammed R.
2017-04-01
UNESCO promotes conservation of the geological and geomorphological heritage through protection of these sites and development of educational programs under the umbrella of geoparks, the most globally significant of which are labelled UNESCO Global Geoparks. UNESCO also maintains a call to list natural sites of outstanding universal value that demonstrate geological features or their relevance to our understanding of the evolution of Earth. Volcanoes have recently seen a surge in nominations as UNESCO World Heritage sites. Volcanic fields, by contrast, have fallen into a grey area of nominations: as the most common manifestation of volcanism on Earth, they are difficult to view as having outstanding universal value. A nearly 2500-km-long, 300-km-wide region of dispersed volcanoes in the Western Arabian Peninsula, mostly in the Kingdom of Saudi Arabia, forms a near-continuous location that carries outstanding universal value as one of the most representative manifestations of dispersed intracontinental volcanism on Earth, and is here proposed for nomination as a UNESCO World Heritage site. The volcanic fields formed in the last 20 Ma along the Red Sea as a group of simple basaltic to more mature, long-lived basalt to trachyte-to-rhyolite volcanic fields, each carrying high geoheritage value. While these volcanic fields are dominated by scoria and spatter cones and transitional lava fields, there are phreatomagmatic volcanoes among them, such as maars and tuff rings. Phreatomagmatism is more evident in association with small volcanic edifices fed by primitive magmas, while phreatomagmatic influences during larger-volume eruptions are also known in association with the silicic eruptive centres in the harrats of Rahat, Kishb and Khaybar.
Three of the volcanic fields are clearly bimodal and host small-volume, relatively short-lived lava domes and associated block-and-ash fans, providing a unique volcanic landscape not commonly considered to be associated with dispersed intracontinental volcanic fields. In addition, the nominated volcanic region hosts the largest and youngest historic eruption in Western Saudi Arabia (the Al Madinah Eruption), which took place in 1256 AD, lasted 52 days, and produced at least 0.29 km³ of pahoehoe-to-aa transitional lava fields emitted through a 2.3-km-long fissure and associated spatter-to-scoria cone complexes. The Western Arabian intracontinental volcanic fields are the best-exposed and most diverse intracontinental volcanic fields on Earth and occupy the largest surface area. In addition, this chain of volcanic fields also hosts significant archaeological and human occupation sites that help us understand early human evolution, as well as several historic locations with high cultural heritage value. These generally intact and well-exposed volcanic zones, hosting globally unique geoheritage sites, can form the basis of complex geoeducational programs through the establishment of various volcanic geoparks in the region, which can be linked together as a UNESCO World Heritage Site on the basis of their globally universal volcanic geoheritage values.
A General Closed-Form Solution for the Lunar Reconnaissance Orbiter (LRO) Antenna Pointing System
NASA Technical Reports Server (NTRS)
Shah, Neerav; Chen, J. Roger; Hashmall, Joseph A.
2010-01-01
The National Aeronautics and Space Administration s (NASA) Lunar Reconnaissance Orbiter (LRO) launched on June 18, 2009 from the Cape Canaveral Air Force Station aboard an Atlas V launch vehicle into a direct insertion trajectory to the Moon LRO, designed, built, and operated by the NASA Goddard Space Flight Center in Greenbelt, MD, is gathering crucial data on the lunar environment that will help astronauts prepare for long-duration lunar expeditions. During the mission s nominal life of one year its six instruments and one technology demonstrator will find safe landing site, locate potential resources, characterize the radiation environment and test new technology. To date, LRO has been operating well within the bounds of its requirements and has been collecting excellent science data images taken from the LRO Camera Narrow Angle Camera (LROC NAC) of the Apollo landing sites have appeared on cable news networks. A significant amount of information on LRO s science instruments is provided at the LRO mission webpage. LRO s Attitude Control System (ACS), in addition to controlling the orientation of the spacecraft is also responsible for pointing the High Gain Antenna (HGA). A dual-axis (or double-gimbaled) antenna, deployed on a meter-long boom, is required to point at a selected Earth ground station. Due to signal loss over the distance from the Moon to Earth, pointing precision for the antenna system is very tight. Since the HGA has to be deployed in spaceflight, its exact geometry relative to the spacecraft body is uncertain. In addition, thermal distortions and mechanical errors/tolerances must be characterized and removed to realize the greatest gain from the antenna system. These reasons necessitate the need for an in-flight calibration. 
Once in orbit around the Moon, a series of attitude maneuvers was conducted to provide the data needed to determine optimal parameters to load onboard, which would account for the environmental and mechanical errors at any antenna orientation. The nominal geometry for the HGA involves an outer gimbal axis that is exactly perpendicular to the inner gimbal axis, and a target direction that is exactly perpendicular to the outer gimbal axis. For this nominal geometry, closed-form solutions for the desired gimbal angles are straightforward to derive for a desired target direction specified in the spacecraft body frame. If the gimbal axes and the antenna boresight are slightly misaligned, the nominal closed-form solution is not sufficiently accurate for computing the gimbal angles needed to point at a target. In this situation, either a general closed-form solution has to be developed for a mechanism with general geometries, or a correction scheme has to be applied to the nominal closed-form solutions. The latter was adopted for the Solar Dynamics Observatory (SDO), as can be seen in Reference 1, and the former has been used for LRO. The advantage of the general closed-form solution is the use of a small number of parameters for the correction of nominal solutions, especially in the regions near singularities. Singularities here refer to cases where the nominal closed-form solution yields two or more solutions. Algorithm complexity, however, is the disadvantage of the general closed-form solution.
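For the nominal (perfectly perpendicular) geometry described above, the closed-form solution reduces to an azimuth-elevation computation. The axis conventions below (outer gimbal rotating about the body +Z axis, boresight along +X at zero gimbal angles) are illustrative assumptions, not LRO's actual mounting:

```python
import math

def nominal_gimbal_angles(target):
    """Closed-form gimbal angles for the ideal nominal geometry:
    outer axis = body +Z, inner axis perpendicular to it, boresight
    perpendicular to the outer axis. Returns (outer, inner) in radians."""
    x, y, z = target
    n = math.sqrt(x*x + y*y + z*z)
    x, y, z = x/n, y/n, z/n
    outer = math.atan2(y, x)   # rotation about the outer (Z) axis
    inner = math.asin(z)       # elevation about the rotated inner axis
    return outer, inner

def boresight(outer, inner):
    """Forward kinematics: unit boresight vector for given gimbal angles."""
    return (math.cos(outer) * math.cos(inner),
            math.sin(outer) * math.cos(inner),
            math.sin(inner))
```

Note that when the target lies along the outer gimbal axis (z near ±1), the outer angle becomes arbitrary; this is the kind of singularity, with multiple valid solutions, that the abstract refers to.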
Keene, Keith L.; Mychaleckyj, Josyf C.; Smith, Shelly G.; Leak, Tennille S.; Perlegas, Peter S.; Langefeld, Carl D.; Herrington, David M.; Freedman, Barry I.; Rich, Stephen S.; Bowden, Donald W.; Sale, Michèle M.
2009-01-01
We previously investigated the estrogen receptor α gene (ESR1) as a positional candidate for type 2 diabetes (T2DM), and found evidence for association between the intron 1-intron 2 region of this gene and type 2 diabetes and/or nephropathy in an African American (AA) population. Our objective was to comprehensively evaluate variants across the entire ESR1 gene for association in AA with T2DM and End Stage Renal Disease (T2DM-ESRD). One hundred fifty SNPs in ESR1, spanning 476 kb, were genotyped in 577 AA individuals with T2DM-ESRD and 596 AA controls. Genotypic association tests for dominant, additive, and recessive models, and haplotypic association, were calculated using a χ2 statistic and corresponding P value. Thirty-one SNPs showed nominal evidence for association (P < 0.05) with T2DM-ESRD in one or more genotypic models. After correcting for multiple tests, promoter SNP rs11964281 (nominal P=0.000291, adjusted P=0.0289), and intron 4 SNPs rs1569788 (nominal P=0.000754, adjusted P=0.0278) and rs9340969 (nominal P=0.00109, adjusted P=0.0467) remained significant at experimentwise error rate (EER) P<0.05 for the dominant class of tests. Twenty-three of the thirty-one associated SNPs cluster within the intron 4-intron 6 region. Gender stratification revealed nominal evidence for association with 35 SNPs in females (352 cases; 306 controls) and seven SNPs in males (225 cases; 290 controls). We have identified a novel region of the ESR1 gene that may contain important functional polymorphisms in relation to susceptibility to T2DM and/or diabetic nephropathy. PMID:18305958
Keene, Keith L; Mychaleckyj, Josyf C; Smith, Shelly G; Leak, Tennille S; Perlegas, Peter S; Langefeld, Carl D; Herrington, David M; Freedman, Barry I; Rich, Stephen S; Bowden, Donald W; Sale, Michèle M
2008-05-01
We previously investigated the estrogen receptor alpha gene (ESR1) as a positional candidate for type 2 diabetes (T2DM), and found evidence for association between the intron 1-intron 2 region of this gene and T2DM and/or nephropathy in an African American (AA) population. Our objective was to comprehensively evaluate variants across the entire ESR1 gene for association in AA with T2DM and end stage renal disease (T2DM-ESRD). One hundred fifty SNPs in ESR1, spanning 476 kb, were genotyped in 577 AA individuals with T2DM-ESRD and 596 AA controls. Genotypic association tests for dominant, additive, and recessive models, and haplotypic association, were calculated using a χ2 statistic and corresponding P value. Thirty-one SNPs showed nominal evidence for association (P < 0.05) with T2DM-ESRD in one or more genotypic models. After correcting for multiple tests, promoter SNP rs11964281 (nominal P = 0.000291, adjusted P = 0.0289), and intron 4 SNPs rs1569788 (nominal P = 0.000754, adjusted P = 0.0278) and rs9340969 (nominal P = 0.00109, adjusted P = 0.0467) remained significant at experimentwise error rate (EER) P < 0.05 for the dominant class of tests. Twenty-three of the thirty-one associated SNPs cluster within the intron 4-intron 6 region. Gender stratification revealed nominal evidence for association with 35 SNPs in females (352 cases; 306 controls) and seven SNPs in males (225 cases; 290 controls). We have identified a novel region of the ESR1 gene that may contain important functional polymorphisms in relation to susceptibility to T2DM and/or diabetic nephropathy.
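The dominant-model genotypic test described above reduces, for a biallelic SNP, to a 2×2 chi-square test of risk-allele carriers versus non-carriers in cases and controls. A minimal sketch (the counts are invented for illustration; the study's actual analysis also covered additive and recessive codings and haplotypes):

```python
import math

def chi2_dominant(case_carrier, case_noncarrier, ctrl_carrier, ctrl_noncarrier):
    """2x2 chi-square test of a dominant genotypic model: risk-allele
    carriers vs. non-carriers in cases and controls.
    Returns (chi2 statistic, P value) for 1 degree of freedom."""
    a, b, c, d = case_carrier, case_noncarrier, ctrl_carrier, ctrl_noncarrier
    n = a + b + c + d
    chi2 = n * (a*d - b*c)**2 / ((a+b) * (c+d) * (a+c) * (b+d))
    # For 1 df, the chi2 survival function is erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# Hypothetical counts: 30/70 carriers/non-carriers in cases, 20/80 in controls
stat, p = chi2_dominant(30, 70, 20, 80)
```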
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, B R; McEwen, M R
2015-06-15
Purpose: To investigate uncertainties in small field output factors and detector-specific correction factors arising from variations in field size for nominally identical fields, using measurements and Monte Carlo simulations. Methods: Repeated measurements of small field output factors are made with the Exradin W1 (plastic scintillation detector) and the PTW microDiamond (synthetic diamond detector) in beams from the Elekta Precise linear accelerator. We investigate corrections for a 0.6×0.6 cm² nominal field size shaped with secondary photon jaws at 100 cm source-to-surface distance (SSD). Measurements of small field profiles are made in a water phantom at 10 cm depth using both detectors and are subsequently used for accurate detector positioning. Supplementary Monte Carlo simulations with EGSnrc are used to calculate the absorbed dose to the detector and the absorbed dose to water under the same conditions when varying field size. The jaws in the BEAMnrc model of the accelerator are varied by a reasonable amount to investigate the same situation without the influence of measurement uncertainties (such as detector positioning or variation in beam output). Results: For both detectors, small field output factor measurements differ by up to 11% when repeated measurements are made in nominally identical 0.6×0.6 cm² fields. Variations in the FWHM of measured profiles are consistent with field size variations reported by the accelerator. Monte Carlo simulations of the dose to the detector vary by up to 16% under worst-case variations in field size. These variations are also present in calculations of absorbed dose to water. However, calculated detector-specific correction factors are within 1% when varying field size because of cancellation of effects.
Conclusion: Clinical physicists should be aware of potentially significant uncertainties in measured output factors required for dosimetry of small fields, due to field size variations between nominally identical fields.
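The reported cancellation, with doses varying by up to 16% with field size while the correction factor stays within 1%, follows from the correction being a ratio of two quantities that shift together. A numeric illustration with invented dose values (not the study's data):

```python
def correction_factor(dose_water, dose_detector):
    """Detector-specific small-field correction factor: ratio of absorbed
    dose to water to absorbed dose scored in the detector (illustrative)."""
    return dose_water / dose_detector

# Hypothetical Monte Carlo doses (arbitrary units) for two 'nominally
# identical' 0.6x0.6 cm2 fields whose true sizes differ slightly: both
# dose-to-water and dose-to-detector drop by roughly the same 16%, so
# their ratio barely moves.
dw_a, ddet_a = 1.000, 0.950   # field realization A
dw_b, ddet_b = 0.840, 0.800   # field realization B (both doses ~16% lower)

k_a = correction_factor(dw_a, ddet_a)
k_b = correction_factor(dw_b, ddet_b)
```

Here the individual doses differ by 16% between the two realizations, yet k_a and k_b agree to well within 1%.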
Emirates Mars Mission Planetary Protection Plan
NASA Astrophysics Data System (ADS)
Awadhi, Mohsen Al
2016-07-01
The United Arab Emirates is planning to launch a spacecraft to Mars in 2020 as part of the Emirates Mars Mission (EMM). The EMM spacecraft, Amal, will arrive in early 2021 and enter orbit about Mars. Through a sequence of subsequent maneuvers, the spacecraft will enter a large science orbit and remain there throughout the primary mission. This paper describes the planetary protection plan for the EMM mission. The EMM science orbit, where Amal will conduct the majority of its operations, is very large compared to other Mars orbiters. The nominal orbit has a periapse altitude of 20,000 km, an apoapse altitude of 43,000 km, and an inclination of 25 degrees. From this vantage point, Amal will conduct a series of atmospheric investigations. Since Amal's orbit is very large, the planetary protection plan is to demonstrate a very low probability that the spacecraft will ever encounter Mars' surface or lower atmosphere during the mission. The EMM team has prepared methods to demonstrate that (1) the launch vehicle targets support a 0.01% probability of impacting Mars, or less, within 50 years; (2) the spacecraft has a 1% probability or less of impacting Mars during 20 years; and (3) the spacecraft has a 5% probability or less of impacting Mars during 50 years. The EMM mission design resembles the mission design of many previous missions, differing only in the specific parameters and final destination. The following sequence describes the mission:
1. The mission will launch in July 2020. The launch includes a brief parking orbit and a direct injection to the interplanetary cruise. The launch targets are specified by the hyperbolic departure's energy C3, and the hyperbolic departure's direction in space, captured by the right ascension and declination of the launch asymptote, RLA and DLA, respectively. The targets of the launch vehicle are biased away from Mars such that there is a 0.01% probability or less that the launch vehicle arrives onto a trajectory that impacts Mars.
2. The spacecraft is deployed from the launch vehicle and powers on.
3. Within the first month, the spacecraft executes a trajectory correction maneuver to remove the launch bias. The target of this maneuver may still have a small bias to further reduce the probability of inadvertently impacting Mars.
4. Four additional trajectory correction maneuvers are scheduled and planned in the interplanetary cruise in order to target the precise arrival conditions at Mars. The targeted arrival conditions are specified by an altitude above the surface of Mars and an inclination relative to Mars' equator. The closest approach to Mars during the Mars Orbit Insertion (MOI) is over 600 km, and the periapsis altitude of the first orbit about Mars is nominally 500 km. The inclination of the first orbit about Mars is nominally around 18 degrees.
5. The Mars Orbit Insertion is performed as a pitch-over burn, approaching no closer than approximately 600 km and targeting a capture orbit period of 35-40 hours.
6. The spacecraft Capture Orbit has a nominal periapse altitude of 500 km, a nominal apoapse altitude of approximately 45,000 km, and a nominal period of approximately 35 hours. The mission expects that this orbit will be somewhat different after executing the real MOI due to maneuver execution errors. The full range of expected Capture Orbit sizes is acceptable from a planetary protection perspective.
7. The spacecraft remains in the Capture Orbit for two months.
8. The spacecraft then executes three maneuvers in the Transition to Science phase, raising the orbital periapse, raising the orbit inclination, adjusting the apoapse, and placing the argument of periapse near a value of 177 deg. The three maneuvers are nominally one week apart. The first maneuver is large and will raise the periapse significantly, thereafter significantly reducing the probability of Amal impacting Mars in the future.
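The quoted capture-orbit period can be checked directly from Kepler's third law. The Mars constants below are standard published values, not taken from the paper:

```python
import math

MU_MARS = 42828.37   # km^3/s^2, Mars gravitational parameter (standard value)
R_MARS = 3396.2      # km, Mars equatorial radius (standard value)

def orbit_period_hours(periapse_alt_km, apoapse_alt_km):
    """Period of an orbit about Mars from periapse/apoapse altitudes,
    via Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    # Semi-major axis from the two apsis radii (altitude + planet radius)
    a = (2.0 * R_MARS + periapse_alt_km + apoapse_alt_km) / 2.0
    return 2.0 * math.pi * math.sqrt(a**3 / MU_MARS) / 3600.0

# Capture orbit quoted above: 500 km x ~45,000 km -> roughly 35.7 hours,
# consistent with the "approximately 35 hours" / "35-40 hours" targets.
t_capture = orbit_period_hours(500.0, 45000.0)
```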
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Posadas, Agham; Hadamek, Tobias; Demkov, Alexander A.
2015-07-01
We investigate the x-ray photoelectron spectroscopy (XPS) of nominally d1 and n-doped d0 transition-metal oxides including NbO2, SrVO3, and LaTiO3 (nominally d1), as well as n-doped SrTiO3 (nominally d0). In the case of single-phase d1 oxides, we find that the XPS spectra (specifically photoelectrons from Nb 3d, V 2p, and Ti 2p core levels) all display at least two, and sometimes three, distinct components, which can be consistently identified as d0, d1, and d2 oxidation states (in decreasing order of binding energy). Electron doping increases the d2 component but decreases the d0 component, whereas hole doping reverses this trend; a single d1 peak is never observed, and the d0 peak is always present even in phase-pure samples. In the case of n-doped SrTiO3, the d1 component appears as a weak shoulder on the main d0 peak. We argue that these multiple peaks should be understood as being due to the final-state effect and are intrinsic to the materials. Their presence does not necessarily imply the existence of spatially localized ions of different oxidation states, nor of separate phases. A simple model is provided to illustrate this interpretation, and several experiments are discussed accordingly. The key parameter determining the relative importance of the initial-state and final-state effects is also pointed out.
Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models
NASA Astrophysics Data System (ADS)
Rothenberger, Michael J.
This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input-output measurements, and is the approach used in this dissertation. Research in the literature studies optimal current input shaping for high-order electrochemical battery models and focuses on offline laboratory cycling. While this body of research highlights improvements in identifiability through optimal input shaping, each optimal input is a function of nominal parameters, which creates a tautology. The parameter values must be known a priori to determine the optimal input for maximizing estimation speed and accuracy. The system identification literature presents multiple studies containing methods that avoid the challenges of this tautology, but these methods are absent from the battery parameter estimation domain. The gaps in the above literature are addressed in this dissertation through the following five novel and unique contributions. First, this dissertation optimizes the parameter identifiability of a thermal battery model, which Sergio Mendoza experimentally validates through a close collaboration with this dissertation's author. Second, this dissertation extends input-shaping optimization to a linear and nonlinear equivalent-circuit battery model and illustrates the substantial improvements in Fisher identifiability for a periodic optimal signal when compared against automotive benchmark cycles. Third, this dissertation presents an experimental validation study of the simulation work in the previous contribution. The estimation study shows that the automotive benchmark cycles either converge slower than the optimized cycle, or not at all for certain parameters. Fourth, this dissertation examines how automotive battery packs with additional power electronic components that dynamically route current to individual cells/modules can be used for parameter identifiability optimization. 
While the user and vehicle supervisory controller dictate the current demand for these packs, the optimized internal allocation of current still improves identifiability. Finally, this dissertation presents a robust Bayesian sequential input shaping optimization study to maximize the conditional Fisher information of the battery model parameters without prior knowledge of the nominal parameter set. This iterative algorithm only requires knowledge of the prior parameter distributions to converge to the optimal input trajectory.
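The core idea running through the dissertation, that the information an input-output experiment carries about model parameters can be quantified via the Fisher information and compared across input shapes, can be sketched with a first-order equivalent-circuit battery model. Everything below (parameter values, noise level, signal shapes) is an illustrative assumption, not the dissertation's actual model or cycles:

```python
import math

def simulate_voltage(current, r0, r1, c1, dt=1.0):
    """Terminal-voltage drop of a first-order equivalent-circuit battery
    model (series resistance r0 plus one RC pair), OCV term omitted.
    Forward-Euler integration of the RC branch voltage."""
    v1, out = 0.0, []
    for i in current:
        v1 += dt * (-v1 / (r1 * c1) + i / c1)
        out.append(r0 * i + v1)
    return out

def fisher_info(current, r0=0.01, r1=0.02, c1=2000.0, sigma=1e-3):
    """2x2 Fisher information matrix for (r0, r1), built from
    finite-difference output sensitivities under additive Gaussian
    measurement noise of standard deviation sigma."""
    eps = 1e-6
    base = simulate_voltage(current, r0, r1, c1)
    s0 = [(a - b) / eps for a, b in zip(simulate_voltage(current, r0 + eps, r1, c1), base)]
    s1 = [(a - b) / eps for a, b in zip(simulate_voltage(current, r0, r1 + eps, c1), base)]
    f00 = sum(s * s for s in s0) / sigma**2
    f11 = sum(s * s for s in s1) / sigma**2
    f01 = sum(a * b for a, b in zip(s0, s1)) / sigma**2
    return f00, f01, f11

def det_fim(current):
    f00, f01, f11 = fisher_info(current)
    return f00 * f11 - f01 * f01

n = 3600
constant = [10.0] * n                                              # steady discharge
square = [10.0 if (t // 300) % 2 == 0 else -10.0 for t in range(n)]  # shaped input
```

Under a constant current the sensitivities to r0 and r1 become nearly collinear once the RC branch settles, so the determinant of the Fisher matrix is small; the periodic input keeps exciting the RC dynamics and yields a markedly larger determinant, i.e. better joint identifiability.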
40 CFR 92.112 - Analytical gases.
Code of Federal Regulations, 2010 CFR
2010-07-01
... hydrocarbons plus impurities or by dynamic blending. Nitrogen shall be the predominant diluent with the balance... grade nitrogen as the diluent. (b) Gases for the hydrocarbon analyzer shall be single blends of propane... with a maximum NO2 concentration of 5 percent of the nominal value using zero grade nitrogen as the...
40 CFR 92.112 - Analytical gases.
Code of Federal Regulations, 2013 CFR
2013-07-01
... hydrocarbons plus impurities or by dynamic blending. Nitrogen shall be the predominant diluent with the balance... grade nitrogen as the diluent. (b) Gases for the hydrocarbon analyzer shall be single blends of propane... with a maximum NO2 concentration of 5 percent of the nominal value using zero grade nitrogen as the...
40 CFR 92.112 - Analytical gases.
Code of Federal Regulations, 2012 CFR
2012-07-01
... hydrocarbons plus impurities or by dynamic blending. Nitrogen shall be the predominant diluent with the balance... grade nitrogen as the diluent. (b) Gases for the hydrocarbon analyzer shall be single blends of propane... with a maximum NO2 concentration of 5 percent of the nominal value using zero grade nitrogen as the...
40 CFR 92.112 - Analytical gases.
Code of Federal Regulations, 2011 CFR
2011-07-01
... hydrocarbons plus impurities or by dynamic blending. Nitrogen shall be the predominant diluent with the balance... grade nitrogen as the diluent. (b) Gases for the hydrocarbon analyzer shall be single blends of propane... with a maximum NO2 concentration of 5 percent of the nominal value using zero grade nitrogen as the...
40 CFR 92.112 - Analytical gases.
Code of Federal Regulations, 2014 CFR
2014-07-01
... dynamic blending. Nitrogen shall be the predominant diluent with the balance being oxygen. Oxygen... the CO and CO2 analyzers shall be single blends of CO and CO2, respectively, using zero grade nitrogen... maximum NO2 concentration of 5 percent of the nominal value using zero grade nitrogen as the diluent. (e...
77 FR 38803 - Request for Nominations to the Great Lakes Advisory Board (GLAB)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-29
... affiliations and other considerations); Demonstrated experience with Great Lakes issues; Leadership experience... nominees will include: The background and experiences that would help members contribute to the diversity... the nominee's experience and knowledge will bring value to the work of the GLAB. To help the Agency in...
16 CFR § 1631.4 - Test procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... source. A methenamine tablet, flat, with a nominal heat of combustion value of 7180 calories/gram, a mass... combustion following each test. The front or sides of the hood should be transparent to permit observation of... available for inspection at the National Archives and Records Administration (NARA). For information on the...
16 CFR 1631.4 - Test procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... source. A methenamine tablet, flat, with a nominal heat of combustion value of 7180 calories/gram, a mass... combustion following each test. The front or sides of the hood should be transparent to permit observation of... available for inspection at the National Archives and Records Administration (NARA). For information on the...
16 CFR 1630.4 - Test procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., flat, with a nominal heat of combustion value of 7180 calories/gram, a mass of 150 mg ±5 mg and a... draft turned off during each test and capable of rapidly removing the products of combustion following... available for inspection at the National Archives and Records Administration (NARA). For information on the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... engine displacement, engine class, and engine families. 90.116 Section 90.116 Protection of Environment... Certification procedure—determining engine displacement, engine class, and engine families. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
40 CFR 86.419-78 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Engine displacement, motorcycle... Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.419-78 Engine displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values and rounded to...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-78 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Engine displacement, motorcycle... Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.419-78 Engine displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values and rounded to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... engine displacement, engine class, and engine families. 90.116 Section 90.116 Protection of Environment... Certification procedure—determining engine displacement, engine class, and engine families. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
Code of Federal Regulations, 2012 CFR
2012-07-01
... engine displacement, engine class, and engine families. 90.116 Section 90.116 Protection of Environment... Certification procedure—determining engine displacement, engine class, and engine families. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-78 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Engine displacement, motorcycle... Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.419-78 Engine displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values and rounded to...
40 CFR 86.419-78 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Engine displacement, motorcycle... Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.419-78 Engine displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values and rounded to...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
40 CFR 86.419-78 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Engine displacement, motorcycle... Emission Regulations for 1978 and Later New Motorcycles, General Provisions § 86.419-78 Engine displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values and rounded to...
40 CFR 86.419-2006 - Engine displacement, motorcycle classes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Engine displacement, motorcycle... displacement, motorcycle classes. (a)(1) Engine displacement shall be calculated using nominal engine values... reference in § 86.1). (2) For rotary engines, displacement means the maximum volume of a combustion chamber...
Code of Federal Regulations, 2011 CFR
2011-07-01
... engine displacement, engine class, and engine families. 90.116 Section 90.116 Protection of Environment... Certification procedure—determining engine displacement, engine class, and engine families. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
Code of Federal Regulations, 2010 CFR
2010-07-01
... engine displacement, engine class, and engine families. 90.116 Section 90.116 Protection of Environment... Certification procedure—determining engine displacement, engine class, and engine families. (a) Engine displacement must be calculated using nominal engine values and rounded to the nearest whole cubic centimeter...
37 CFR 251.30 - Basic obligations of arbitrators.
Code of Federal Regulations, 2010 CFR
2010-07-01
... conflict with the conscientious performance of their service. (3) Arbitrators shall not engage in financial... arbitrators may accept gifts of nominal value or gifts from friends and family as specified in § 251.34(b). (5... employment or activities, including seeking or negotiating for employment, that conflicts with the...
22 CFR 1203.735-202 - Gifts, entertainment, and favors.
Code of Federal Regulations, 2010 CFR
2010-04-01
...; (2) Acceptance of loans from banks or other financial institutions on customary terms to finance... apply to: Acceptance of food and refreshments of nominal value on infrequent occasions in the ordinary... paragraph (a) of this section do not apply in the following situations: (1) Acceptance of food, refreshments...
NASA Astrophysics Data System (ADS)
Nwankwo, Victor U. J.; Chakrabarti, Sandip K.
2018-04-01
We study the effects of space weather on the ionosphere and on low Earth orbit (LEO) satellites' orbital trajectories in equatorial, low- and mid-latitude (EQL, LLT and MLT) regions during (and around) the notable storms of October/November 2003. We briefly review space weather effects on the thermosphere and ionosphere to demonstrate that such effects are latitude-dependent and well established. Following the review, we simulate the trend in the variation of a satellite's orbital radius (r), mean height (h) and orbit decay rate (ODR) during 15 October-14 November 2003 in EQL, LLT and MLT. Nominal atmospheric drag on a LEO satellite is usually enhanced by space weather or solar-induced variations in the thermospheric temperature and density profile. To separate nominal orbit decay from solar-induced accelerated orbit decay, we compute r, h and ODR in three regimes: (i) excluding solar indices (or effects), where r = r0, h = h0 and ODR = ODR0; (ii) with the mean value of the solar indices for the interval, where r = rm, h = hm and ODR = ODRm; and (iii) with the actual daily values of the solar indices for the interval (r, h and ODR). For a typical LEO satellite at h = 450 km, we show that the total decay in r during the period is about 4.20 km, 3.90 km and 3.20 km in EQL, LLT and MLT respectively; the respective nominal decay (r0) is 0.40 km, 0.34 km and 0.22 km, while the solar-induced orbital decay (rm) is about 3.80 km, 3.55 km and 2.95 km. h varied in like manner. The respective nominal ODR0 is about 13.5 m/day, 11.2 m/day and 7.2 m/day, while the solar-induced ODRm is about 124.3 m/day, 116.9 m/day and 97.3 m/day. We also show that severe geomagnetic storms can increase the ODR by up to 117% (from the daily mean value). However, the extent of space weather effects on a LEO satellite's trajectory depends significantly on the ballistic coefficient and orbit of the satellite, the phase of the solar cycle, and the intensity and duration of the driving (or influencing) solar event.
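The order of magnitude of the quoted nominal decay rate at 450 km can be reproduced with the classical circular-orbit drag formula. The density and ballistic coefficient below are assumed illustrative values representative of quiet conditions, not the authors' model inputs:

```python
import math

MU_EARTH = 3.986e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6     # m, mean Earth radius

def decay_rate_m_per_day(h_km, rho, ballistic=0.01):
    """Decay rate of a circular LEO orbit from atmospheric drag.
    Per-revolution decay da = 2*pi*(Cd*A/m)*rho*a^2, multiplied by
    revolutions per day. rho in kg/m^3; ballistic = Cd*A/m in m^2/kg
    (both assumed illustrative values)."""
    a = R_EARTH + h_km * 1e3
    da_per_rev = 2.0 * math.pi * ballistic * rho * a * a   # metres/revolution
    period = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)    # seconds
    return da_per_rev * (86400.0 / period)

# With a quiet-time density of ~3e-13 kg/m^3 at 450 km and Cd*A/m = 0.01,
# this gives roughly 13.5 m/day, comparable to the nominal ODR0 above.
rate = decay_rate_m_per_day(450.0, 3.0e-13)
```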
Error in the Sampling Area of an Optical Disdrometer: Consequences in Computing Rain Variables
Fraile, R.; Castro, A.; Fernández-Raga, M.; Palencia, C.; Calvo, A. I.
2013-01-01
The aim of this study is to improve the estimation of the characteristic uncertainties of optical disdrometers, calculating the effective sampling area as a function of drop size and studying how it influences the computation of other parameters, taking into account that the real sampling area is always smaller than the nominal area. For large raindrops (a little over 6 mm), the effective sampling area may be half the area indicated by the manufacturer. The error in the sampling area propagates to all the variables that depend on this surface, such as the rain intensity and the reflectivity factor. Both variables tend to underestimate the real value if the sampling area is not corrected. For example, rainfall intensity errors may be up to 50% for large drops, those slightly larger than 6 mm. The same occurs with reflectivity values, which may be up to twice the reflectivity calculated using the uncorrected constant sampling area. The Z-R relationships appear to have little dependence on the sampling area, because both variables depend on it in the same way. These results were obtained by studying one particular rain event that occurred on April 16, 2006. PMID:23844393
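How the sampling-area error propagates into rain variables can be shown with a toy diameter-dependent area model. The linear form and all numbers below are hypothetical, chosen only to reproduce the abstract's endpoint that drops just over 6 mm see roughly half the nominal area:

```python
import math

A_NOM = 5400.0  # mm^2, hypothetical nominal sampling area from a datasheet

def effective_area(d_mm, a_nom=A_NOM):
    """Hypothetical effective-area model: the usable sampling area shrinks
    linearly with drop diameter so that a 6 mm drop sees half the nominal
    area. Illustrative only; the paper derives the real dependence."""
    return a_nom * (1.0 - d_mm / 12.0)

def rain_contribution(d_mm, count, area_mm2):
    """Per-diameter contribution to accumulated rain: drop volume times
    drop count divided by the sampling area (arbitrary units)."""
    return count * (math.pi / 6.0) * d_mm**3 / area_mm2
```

For a 6 mm drop the corrected contribution (using the effective area) is twice the value computed with the uncorrected nominal area, i.e. the uncorrected estimate is 50% low, matching the abstract's figures for rain intensity and reflectivity.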
SU-E-T-627: Precision Modelling of the Leaf-Bank Rotation in Elekta’s Agility MLC: Is It Necessary?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vujicic, M; Belec, J; Heath, E
Purpose: To demonstrate the method used to determine the leaf-bank rotation angle (LBROT) as a parameter for modeling the Elekta Agility multi-leaf collimator (MLC) for Monte Carlo simulations, and to evaluate the clinical impact of LBROT. Methods: A detailed model of an Elekta Infinity linac including an Agility MLC was built using the EGSnrc/BEAMnrc Monte Carlo code. The Agility 160-leaf MLC is modelled using the MLCE component module, which allows for leaf bank rotation via the parameter LBROT. A precise value of LBROT is obtained by comparing measured and simulated profiles of a specific field, which has leaves arranged in a repeated pattern such that one leaf is opened and the adjacent one is closed. Profile measurements from an Agility linac are taken with Gafchromic film, and an ion chamber is used to set the absolute dose. The measurements are compared to Monte Carlo (MC) simulations and the LBROT is adjusted until a match is found. The clinical impact of LBROT is evaluated by observing how an MC dose calculation changes with LBROT. A clinical Stereotactic Body Radiation Treatment (SBRT) plan is calculated using BEAMnrc/DOSXYZnrc simulations with different input values for LBROT. Results: Using the method outlined above, the LBROT is determined to be 9±1 mrad. Differences as high as 4% are observed in a clinical SBRT plan between the extreme case (LBROT not modeled) and the nominal case. Conclusion: In small-field radiation therapy treatment planning, it is important to properly account for LBROT as an input parameter for MC dose calculations with the Agility MLC. More work is ongoing to elucidate the observed differences by determining the contributions from transmission dose, change in field size, and source occlusion, which are all dependent on LBROT. This work was supported by OCAIRO (Ontario Consortium of Adaptive Interventions in Radiation Oncology), funded by the Ontario Research Fund.
Hubble Space Telescope secondary mirror vertex radius/conic constant test
NASA Technical Reports Server (NTRS)
Parks, Robert
1991-01-01
The Hubble Space Telescope backup secondary mirror was tested to determine the vertex radius and conic constant. Three completely independent tests (to the same procedure) were performed. Similar measurements in the three tests were highly consistent. The values obtained for the vertex radius and conic constant were the nominal design values within the error bars associated with the tests. Visual examination of the interferometric data did not show any measurable zonal figure error in the secondary mirror.
Dallago, M; Fontanari, V; Torresani, E; Leoni, M; Pederzolli, C; Potrich, C; Benedetti, M
2018-02-01
Traditional implants made of bulk titanium are much stiffer than human bone and this mismatch can induce stress shielding. Although more complex to produce and with less predictable properties compared to bulk implants, implants with a highly porous structure can be produced to match the bone stiffness and at the same time favor bone ingrowth and regeneration. This paper presents the results of the mechanical and dimensional characterization of different regular cubic open-cell cellular structures produced by Selective Laser Melting (SLM) of Ti6Al4V alloy, all with the same nominal elastic modulus of 3 GPa that matches that of human trabecular bone. The main objective of this research was to determine which structure has the best fatigue resistance through fully reversed fatigue tests on cellular specimens. The quality of the manufacturing process and the discrepancy between the actual measured cell parameters and the nominal CAD values were assessed through an extensive metrological analysis. The results of the metrological assessment allowed us to discuss the effect of manufacturing defects (porosity, surface roughness and geometrical inaccuracies) on the mechanical properties. Half of the specimens were subjected to a stress relief thermal treatment and the other half to Hot Isostatic Pressing (HIP), and we compared the effects of the two treatments on porosity and on the mechanical properties. Fatigue strength seems to be highly dependent on the surface irregularities and notches introduced during the manufacturing process. In fully reversed fatigue tests, the superior performance expected of stretching-dominated structures over bending-dominated structures was not observed. In fact, structures with thicker struts proved more resistant even when bending actions were present. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, X.; Torstensson, P. T.; Nielsen, J. C. O.
2017-12-01
Vertical dynamic vehicle-track interaction in the through route of a railway crossing is simulated in the time domain based on a Green's function approach for the track in combination with an implementation of Kalker's variational method to solve the non-Hertzian, and potentially multiple, wheel-rail contact. The track is described by a linear, three-dimensional and non-periodic finite element model of a railway turnout accounting for the variations in rail cross-sections and sleeper lengths, and including baseplates and resilient elements. To reduce calculation time due to the complexity of the track model, involving a large number of elements and degrees-of-freedom, a complex-valued modal superposition with a truncated mode set is applied before the impulse response functions are calculated at various positions along the crossing panel. The variation in three-dimensional contact geometry of the crossing and wheel is described by linear surface elements. In each time step of the contact detection algorithm, the lateral position of the wheelset centre is prescribed but the contact positions on wheel and rail are not, allowing for an accurate prediction of the wheel transition between wing rail and crossing rail. The method is demonstrated by calculating the wheel-rail impact load and contact stress distribution for a nominal S1002 wheel profile passing over a nominal crossing geometry. A parameter study is performed to determine the influence of vehicle speed, rail pad stiffness, lateral wheelset position and wheel profile on the impact load generated at the crossing. It is shown that the magnitude of the impact load is more influenced by the wheel-rail contact geometry than by the selection of rail pad stiffness.
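The Green's function approach above superposes precomputed impulse responses of the track; a minimal sketch of that superposition step, with a toy decaying impulse response standing in for the modal-superposition result:

```python
def track_response(impulse_response, force, dt):
    """Track displacement history via discrete convolution of a precomputed
    impulse response with the wheel-rail contact force history."""
    return [dt * sum(force[k] * impulse_response[i - k] for k in range(i + 1))
            for i in range(len(force))]

# Toy impulse response (decaying) and a unit impact force at t = 0.
resp = track_response([1.0, 0.5, 0.25], [1.0, 0.0, 0.0], dt=1.0)
```

Because the track model is linear, its response to any contact force history is exactly this superposition; the nonlinearity is confined to the contact algorithm that produces the force.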
Role of exposure mode in the bioavailability of triphenyl phosphate to aquatic organisms
Huckins, James N.; Fairchild, James F.; Boyle, Terence P.
1991-01-01
A laboratory study was conducted to investigate the role of the route of triphenyl phosphate (TPP) entry in its aquatic bioavailability and acute biological effects. Three TPP treatments were used for exposures of fish and invertebrates. These consisted of TPP dosed directly into water with and without clean sediment, and TPP spiked onto sediment prior to aqueous exposures. Results of static acute toxicity tests (no sediment) were 0.78 mg/L (96-h LC50) for bluegill, 0.36 mg/L (48-h EC50) for midge, and 0.25 mg/L (96-h EC50) for scud. At 24 h, the sediment (1.1% organic carbon)/water partition coefficient (Kp) for TPP was 112. Use of this partition coefficient model to predict the sediment-mediated reduction of TPP concentration in water during toxicity tests resulted in a value that was only 10% less than the nominal value. However, the nominal concentration of TPP required to cause acute toxicity responses in test organisms was significantly higher than the value predicted by the model for both clay and soil-derived sediment. Direct spiking of TPP onto soil minimized TPP bioavailability. Data from parallel experiments designed to track TPP residues in water through time suggest that sorption kinetics control residue bioavailability in the initial 24 h of exposure and may account for the observed differences in LC50 and EC50 values among the sediment treatments.
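The sediment-mediated reduction can be sketched with a simple equilibrium two-compartment partition model; the Kp of 112 is taken from the abstract, while the sediment loading below is a hypothetical illustration chosen to reproduce the roughly 10% reduction:

```python
def water_fraction(kp_l_per_kg, sediment_g, water_l):
    """Equilibrium fraction of a chemical remaining dissolved in water for a
    simple two-compartment sediment/water partition model:
    f = 1 / (1 + Kp * m_sediment / V_water)."""
    m_kg = sediment_g / 1000.0
    return 1.0 / (1.0 + kp_l_per_kg * m_kg / water_l)

# Kp = 112 L/kg (measured at 24 h, 1.1% OC sediment); 1 g/L loading is assumed.
frac = water_fraction(112.0, sediment_g=1.0, water_l=1.0)
reduction = 1.0 - frac  # about 0.10, i.e. ~10% below the nominal concentration
```

The same one-line formula, run with the measured Kp, is all the "partition coefficient model" of the abstract requires; the mismatch with observed toxicity then points to kinetics rather than equilibrium.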
NASA Technical Reports Server (NTRS)
Williams, Jacob; Stewart, Shaun M.; Lee, David E.; Davis, Elizabeth C.; Condon, Gerald L.; Senent, Juan
2010-01-01
The National Aeronautics and Space Administration's (NASA) Constellation Program paves the way for a series of lunar missions leading to a sustained human presence on the Moon. The proposed mission design includes an Earth Departure Stage (EDS), a Crew Exploration Vehicle (Orion) and a lunar lander (Altair) which support the transfer to and from the lunar surface. This report addresses the design, development and implementation of a new mission scan tool called the Mission Assessment Post Processor (MAPP) and its use to provide insight into the integrated (i.e., EDS, Orion, and Altair based) mission cost as a function of various mission parameters and constraints. The Constellation architecture calls for semiannual launches to the Moon and will support a number of missions, beginning with 7-day sortie missions, culminating in a lunar outpost at a specified location. The operational lifetime of the Constellation Program can cover a period of decades over which the Earth-Moon geometry (particularly, the lunar inclination) will go through a complete cycle (i.e., the lunar nodal cycle lasting 18.6 years). This geometry variation, along with other parameters such as flight time, landing site location, and mission related constraints, affects the outbound (Earth to Moon) and inbound (Moon to Earth) translational performance cost. The mission designer must determine the ability of the vehicles to perform lunar missions as a function of this complex set of interdependent parameters. Trade-offs among these parameters provide essential insights for properly assessing the ability of a mission architecture to meet desired goals and objectives. These trades also aid in determining the overall usable propellant required for supporting nominal and off-nominal missions over the entire operational lifetime of the program; thus they support vehicle sizing.
Analysis of the passive stabilization of the long duration exposure facility
NASA Technical Reports Server (NTRS)
Siegel, S. H.; Vishwanath, N. S.
1977-01-01
The nominal Long Duration Exposure Facility (LDEF) configurations and the anticipated orbit parameters are presented. A linear steady state analysis was performed using these parameters. The effects of orbit eccentricity, solar pressure, aerodynamic pressure, magnetic dipole, and the magnetically anchored rate damper were evaluated to determine the configuration sensitivity to variations in these parameters. The worst case conditions for steady state errors were identified, and the performance capability calculated. Garber instability bounds were evaluated for the range of configuration and damping coefficients under consideration. The transient damping capabilities of the damper were examined, and the time constant as a function of damping coefficient and spacecraft moment of inertia determined. The capture capabilities of the damper were calculated, and the results combined with steady state, transient, and Garber instability analyses to select damper design parameters.
Control Room Training for the Hyper-X Project Utilizing Aircraft Simulation
NASA Technical Reports Server (NTRS)
Lux-Baumann, Jesica; Dees, Ray; Fratello, David
2006-01-01
The NASA Dryden Flight Research Center flew two Hyper-X research vehicles and achieved hypersonic speeds over the Pacific Ocean in March and November 2004. To train the flight and mission control room crew, the NASA Dryden simulation capability was utilized to generate telemetry and radar data, which was used in nominal and emergency mission scenarios. During these control room training sessions personnel were able to evaluate and refine data displays, flight cards, mission parameter allowable limits, and emergency procedure checklists. Practice in the mission control room ensured that all primary and backup Hyper-X staff were familiar with the nominal mission and knew how to respond to anomalous conditions quickly and successfully. This report describes the technology in the simulation environment and the Mission Control Center, the need for and benefit of control room training, and the rationale and results of specific scenarios unique to the Hyper-X research missions.
A STUDY OF DUST AND GAS AT MARS FROM COMET C/2013 A1 (SIDING SPRING)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, Michael S. P.; Farnham, Tony L.; Bodewits, Dennis
Although the nucleus of comet C/2013 A1 (Siding Spring) will safely pass Mars in 2014 October, the dust in the coma and tail will more closely approach the planet. Using a dynamical model of comet dust, we estimate the impact fluence. Based on our nominal model no impacts are expected at Mars. Relaxing our nominal model's parameters, the fluence is no greater than ∼10⁻⁷ grains m⁻² for grain radii larger than 10 μm. Mars-orbiting spacecraft are unlikely to be impacted by large dust grains, but Mars may receive as many as ∼10⁷ grains, or ∼100 kg of total dust. We also estimate the flux of impacting gas molecules commonly observed in comet comae.
Crew Exploration Vehicle Launch Abort Controller Performance Analysis
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.; Raney, David L.
2007-01-01
This paper covers the simulation and evaluation of a controller design for the Crew Module (CM) Launch Abort System (LAS), to measure its ability to meet the abort performance requirements. The controller used in this study is a hybrid design, including features developed by the Government and the Contractor. Testing is done using two separate 6-degree-of-freedom (DOF) computer simulation implementations of the LAS/CM throughout the ascent trajectory: 1) executing a series of abort simulations along a nominal trajectory for the nominal LAS/CM system; and 2) using a series of Monte Carlo runs with perturbed initial flight conditions and perturbed system parameters. The performance of the controller is evaluated against a set of criteria, which is based upon the current functional requirements of the LAS. Preliminary analysis indicates that the performance of the present controller meets (with the exception of a few cases) the evaluation criteria mentioned above.
Automated screening of propulsion system test data by neural networks, phase 1
NASA Technical Reports Server (NTRS)
Hoyt, W. Andes; Whitehead, Bruce A.
1992-01-01
The evaluation of propulsion system test and flight performance data involves reviewing an extremely large volume of sensor data generated by each test. An automated system that screens large volumes of data and identifies propulsion system parameters which appear unusual or anomalous will increase the productivity of data analysis. Data analysts may then focus on a smaller subset of anomalous data for further evaluation of propulsion system tests. Such an automated data screening system would give NASA the benefit of a reduction in the manpower and time required to complete a propulsion system data evaluation. A phase 1 effort to develop a prototype data screening system is reported. Neural networks will detect anomalies based on nominal propulsion system data only. It appears that a reasonable goal for an operational system would be to screen out 95 pct. of the nominal data, leaving less than 5 pct. needing further analysis by human experts.
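The screening rule suggested above (flagging data that deviate from predictions trained on nominal runs) can be sketched as a residual threshold; the residual values and the 3-sigma cutoff below are illustrative assumptions, not the prototype system's actual criterion:

```python
from statistics import mean, stdev

def screen(nominal_residuals, new_residuals, k=3.0):
    """Flag samples whose prediction residual exceeds k standard deviations
    of the residual distribution observed on nominal training data only."""
    mu, sigma = mean(nominal_residuals), stdev(nominal_residuals)
    return [abs(r - mu) > k * sigma for r in new_residuals]

# Hypothetical residuals: (measured - predicted) engine parameter values.
nominal = [0.10, -0.20, 0.05, 0.00, -0.10, 0.15, -0.05, 0.08]
flags = screen(nominal, [0.05, -0.12, 1.50])
```

Because the threshold is fit to nominal data alone, any anomaly class that pushes residuals outside the nominal band is caught, which is the property the abstract emphasizes.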
Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N
2016-06-14
Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.
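The perturbation-based indices described above can be sketched as follows; the toy two-force model and parameter names are hypothetical stand-ins for a full musculoskeletal gait simulation, and the index shown averages over all output forces in the spirit of the OSI:

```python
def sensitivity_index(model, params, name, delta=0.02):
    """OSI-style index: mean relative change (in percent) of the predicted
    forces when a single muscle-tendon parameter is perturbed by +delta."""
    base = model(params)
    perturbed = dict(params)
    perturbed[name] *= 1.0 + delta
    out = model(perturbed)
    rel = [abs(o - b) / abs(b) for b, o in zip(base, out) if b != 0.0]
    return 100.0 * sum(rel) / len(rel)

# Hypothetical two-force toy model in place of a simulated gait cycle.
toy = lambda p: [2.0 * p["tendon_slack"], p["tendon_slack"] + p["fmax"]]
osi = sensitivity_index(toy, {"tendon_slack": 1.0, "fmax": 1.0}, "tendon_slack")
```

Restricting the averaged outputs to the perturbed part itself, or to all remaining parts, gives the LSI/OSI split used in the study.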
NASA Technical Reports Server (NTRS)
Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le,Hung
2012-01-01
The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.
Sources of Uncertainty in the Prediction of LAI / fPAR from MODIS
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Ganapol, Barry D.; Brass, James A. (Technical Monitor)
2002-01-01
To explicate the sources of uncertainty in the prediction of biophysical variables over space, consider the general equation: where z is a variable with values on some nominal, ordinal, interval or ratio scale; y is a vector of input variables; u is the spatial support of y and z; x and u are the spatial locations of y and z, respectively; f is a model and B is the vector of the parameters of this model. Any y or z has a value and a spatial extent, which is called its support. Viewed in this way, categories of uncertainty arise from variable (e.g. measurement), parameter, positional, support and model (e.g. structural) sources. The prediction of Leaf Area Index (LAI) and the fraction of absorbed photosynthetically active radiation (fPAR) are examples of z variables predicted using model(s) as a function of y variables and spatially constant parameters. The MOD15 algorithm is an example of f, called f₁, with parameters including those defined by one of six biome types and solar and view angles. The Leaf Canopy Model (LCM2), a nested model that combines leaf radiative transfer with a full canopy reflectance model through the phase function, is a simpler though similar radiative transfer approach to f₁. In a previous study, MOD15 and LCM2 gave similar results for the broadleaf forest biome. Differences between these two models can be used to consider the structural uncertainty in prediction results. In an effort to quantify each of the five sources of uncertainty and rank their relative importance for the LAI/fPAR prediction problem, we used recent data for an EOS Core Validation Site in the broadleaf biome with coincident surface reflectance, vegetation index, fPAR and LAI products from the Moderate Resolution Imaging Spectroradiometer (MODIS). Uncertainty due to support on the input reflectance variable was characterized using Landsat ETM+ data.
Input uncertainties were propagated through the LCM2 model and compared with published uncertainties from the MOD15 algorithm.
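One common way to propagate the variable (measurement) source of uncertainty through a model is Monte Carlo sampling; a sketch, with a hypothetical linear stand-in for the LCM2 reflectance-to-LAI relation:

```python
import random
from statistics import mean, stdev

def propagate(f, y_nominal, y_sigma, n=10000, seed=42):
    """Monte Carlo propagation of input-variable uncertainty through model f:
    sample the input from a Gaussian and summarize the output distribution."""
    rng = random.Random(seed)
    z = [f(rng.gauss(y_nominal, y_sigma)) for _ in range(n)]
    return mean(z), stdev(z)

# Hypothetical stand-in for the reflectance-to-LAI model (not the LCM2 itself).
lai_mean, lai_sd = propagate(lambda r: 6.0 * (1.0 - r), y_nominal=0.3, y_sigma=0.02)
```

The same loop applies unchanged to parameter uncertainty by perturbing B instead of y, which is how the relative ranking across sources can be built up.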
NASA Astrophysics Data System (ADS)
Ananthakrishna, G.; K, Srikanth
2018-03-01
It is well known that plastic deformation is a highly nonlinear dissipative irreversible phenomenon of considerable complexity. As a consequence, little progress has been made in modeling some well-known size-dependent properties of plastic deformation, for instance, calculating hardness as a function of indentation depth independently. Here, we devise a method of calculating hardness by first determining the residual indentation depth and then taking the hardness as the ratio of the load to the residual imprint area. Recognizing the fact that dislocations are the basic defects controlling the plastic component of the indentation depth, we set up a system of coupled nonlinear time evolution equations for the mobile, forest, and geometrically necessary dislocation densities. Within our approach, we consider the geometrically necessary dislocations to be immobile since they contribute to additional hardness. The model includes dislocation multiplication, storage, and recovery mechanisms. The growth of the geometrically necessary dislocation density is controlled by the number of loops that can be activated under the contact area and the mean strain gradient. The equations are then coupled to the load rate equation. Our approach can accommodate experimental parameters such as the indentation rates and the geometrical parameters defining the Berkovich indenter, including the nominal tip radius. The residual indentation depth is obtained by integrating the Orowan expression for the plastic strain rate, which is then used to calculate the hardness. Consistent with experimental observations, the increasing hardness with decreasing indentation depth in our model arises from limited dislocation sources at small indentation depths, and the model therefore avoids the divergence in the limit of small depths reported in the Nix-Gao model.
We demonstrate that, for a range of parameter values that physically represent different materials, the model predicts the three characteristic features of hardness: the increase in hardness with decreasing indentation depth, the linear relation between the square of the hardness and the inverse of the indentation depth for depths down to about 150 nm, and the deviation from this linearity at smaller depths. In addition, we also show that it is straightforward to obtain optimized parameter values that give a good fit to the hardness data for polycrystalline cold-worked copper and single crystals of silver.
Tensile Flow Behavior of Tungsten Heavy Alloys Produced by CIPing and Gelcasting Routes
NASA Astrophysics Data System (ADS)
Panchal, Ashutosh; Ravi Kiran, U.; Nandy, T. K.; Singh, A. K.
2018-04-01
Present work describes the flow behavior of tungsten heavy alloys with nominal compositions 90W-7Ni-3Fe, 93W-4.9Ni-2.1Fe, and 95W-3.5Ni-1.5Fe (wt pct) produced by CIPing and gelcasting routes. The overall microstructural features of gelcasting are finer than those of CIPing alloys. Both the grain size of W and corresponding contiguity values increase with increase in W content in the present alloys. The volume fraction of matrix phase decreases with increase in W content in both the alloys. The lattice parameter values of the matrix phase also increase with increase in W content. The yield strength (σ YS) continuously increases with increase in W content in both the alloys. The σ YS values of CIPing alloys are marginally higher than those of gelcasting at constant W. The ultimate tensile strength (σ UTS) and elongation values are maximum at intermediate W content. Present alloys exhibit two slopes in true stress-true plastic strain curves in low and high strain regimes and follow a characteristic Ludwigson relation. The two slopes are associated with two deformation mechanisms that are occurring during tensile deformation. The overall nature of differential curves of all the alloys is different and these curves contain three distinctive stages of work hardening (I, II, and III). This suggests varying deformation mechanisms during tensile testing due to different volume fractions of constituent phases. The slip is the predominant deformation mechanism of the present alloys during tensile testing.
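The Ludwigson relation referred to above augments the Hollomon power law with an exponential term that dominates at low strains; a sketch with illustrative constants (the paper's fitted values are not reproduced here):

```python
import math

def ludwigson_stress(eps, k1, n1, k2, n2):
    """Ludwigson flow curve: Hollomon term K1*eps**n1 plus a transient
    correction exp(K2 + n2*eps) that matters mainly at low strains."""
    return k1 * eps ** n1 + math.exp(k2 + n2 * eps)

# Illustrative constants only (not fitted to the alloys in the paper).
K1, N1, K2, N2 = 1000.0, 0.30, 5.0, -50.0
low = ludwigson_stress(0.002, K1, N1, K2, N2)   # correction term is large here
high = ludwigson_stress(0.100, K1, N1, K2, N2)  # correction has nearly decayed
```

With a negative n2 the exponential correction decays with strain, which is why such curves show two apparent slopes in true stress-true plastic strain coordinates, one per regime.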
128 slice computed tomography dose profile measurement using thermoluminescent dosimeter
NASA Astrophysics Data System (ADS)
Salehhon, N.; Hashim, S.; Karim, M. K. A.; Ang, W. C.; Musa, Y.; Bahruddin, N. A.
2017-05-01
The increasing use of computed tomography (CT) in clinical practice marks the need to understand the dose descriptor and dose profile. The purposes of the current study were to determine the CT dose index free-in-air (CTDIair) in a 128 slice CT scanner and to evaluate the single scan dose profile (SSDP). Thermoluminescent dosimeters (TLD-100) were used to measure the dose profile of the scanner. There were three sets of CT protocols in which the tube potential (kV) setting was manipulated for each protocol while the rest of the parameters were kept constant. These protocols were based on routine CT abdominal examinations for a male adult abdomen. It was found that increasing the kV setting increased the CTDIair values as well. When the kV setting was changed from 80 kV to 120 kV and from 120 kV to 140 kV, the CTDIair values increased by 147.9% and 53.9%, respectively. The highest kV setting (140 kV) led to the highest CTDIair value (13.585 mGy). A p-value of less than 0.05 indicated that the differences were statistically significant. The SSDP showed that when the kV settings were varied, the peak sharpness and height of the Gaussian function profiles were affected. The full width at half maximum (FWHM) of the dose profiles for all protocols coincided with the nominal beam width set for the measurements. The findings of the study reveal much about the characterization and performance of the 128 slice CT scanner.
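The FWHM comparison above can be sketched as a simple interpolation on the sampled dose profile; the Gaussian test profile and the 1 mm sample spacing are assumptions for illustration, not the study's TLD layout:

```python
import math

def fwhm(z, dose):
    """Full width at half maximum of a sampled dose profile,
    using linear interpolation between sample points."""
    half = max(dose) / 2.0
    crossings = []
    for i in range(len(z) - 1):
        if (dose[i] - half) * (dose[i + 1] - half) < 0:
            t = (half - dose[i]) / (dose[i + 1] - dose[i])
            crossings.append(z[i] + t * (z[i + 1] - z[i]))
    return crossings[-1] - crossings[0]

# Synthetic Gaussian-like profile; positions in mm (hypothetical TLD spacing).
zs = list(range(-10, 11))
profile = [math.exp(-x * x / (2 * 3.0 ** 2)) for x in zs]
width = fwhm(zs, profile)   # analytic FWHM for sigma = 3 is about 7.06
```

Comparing this width against the nominal collimated beam width is the consistency check the abstract reports.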
NASA Astrophysics Data System (ADS)
Vergaz, Ricardo; Cachorro, Victoria E.; de Frutos, Ángel M.; Vilaplana, José M.; de La Morena, Benito A.
2005-11-01
Atmospheric aerosol characteristics represented by the spectral aerosol optical depth (AOD) and the Ångström turbidity parameter α were determined in the coastal area of the Gulf of Cádiz (southwest of Spain). The columnar aerosol properties presented here correspond to the 1996-1999 period, and were obtained by solar direct irradiance measurements carried out by a Licor1800 spectroradiometer. The performance of this type of medium-spectral-resolution radiometric system is analysed over the measurement period. The detailed spectral information of these irradiance measurements enabled the use of selected non-absorption gas spectral windows to determine the columnar spectral AOD, which was modelled by the Ångström formula to obtain the α coefficient. Temporal evolutions of instantaneous values together with a general statistical analysis represented by seasonal values, frequency distributions and some representative correlations for the AOD and the derived Ångström coefficient gave us a first insight into aerosol characteristics in this coastal area. Special attention was paid to the analysis of these aerosol properties at the nominal wavelengths of 440 nm, 670 nm, 870 nm and 1020 nm for near-future comparisons with the Cimel sun-photometer data. Taking the most representative aerosol wavelength of 500 nm, the variability of the AOD ranges from 0.005 to 0.53, with a mean of 0.12 (s.d. = 0.07), and that of the α parameter is given by a mean value of 0.93 (s.d. = 0.58), falling inside the range of marine aerosols. A quantitative discrimination of aerosol types was conducted on the basis of the spectral aerosol properties and air mass back-trajectory analysis, which resulted in a mixed type because of the specificity of this area, given by very frequent desert dust episodes and continental and polluted local influences. This study represents the first extended characterization of the columnar properties of aerosols in Spain, which has since been continued with Cimel-AERONET data.
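The Ångström formula AOD(λ) = β λ^(−α) is linear in log-log space, so α and β can be recovered by least squares; a sketch using a synthetic spectrum evaluated at the Cimel nominal wavelengths:

```python
import math

def angstrom_fit(wavelengths_um, aod):
    """Least-squares fit of the Angstrom formula AOD = beta * lam**(-alpha)
    by linear regression in log-log space; wavelengths in micrometres."""
    xs = [math.log(w) for w in wavelengths_um]
    ys = [math.log(t) for t in aod]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = -slope
    beta = math.exp(my - slope * mx)
    return alpha, beta

# Synthetic marine-like spectrum with alpha = 0.93 (the study's mean value).
wl = [0.440, 0.670, 0.870, 1.020]
aod = [0.20 * w ** -0.93 for w in wl]
alpha, beta = angstrom_fit(wl, aod)
```

With real spectra the residuals of this fit, not just α itself, help separate marine, continental and desert-dust cases.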
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poukey, J.W.; Coleman, P.D.; Sanford, T.W.L.
1985-10-01
MABE is a multistage linear electron accelerator which accelerates up to nine beams in parallel. Nominal parameters per beam are 25 kA, final energy 7 MeV, and guide field 20 kG. We report recent progress via theory and simulation in understanding the beam dynamics in such a system. In particular, we emphasize our results on the radial oscillations and emittance growth for a beam passing through a series of accelerating gaps.
Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment
NASA Technical Reports Server (NTRS)
Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.
2012-01-01
Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and their resulting influence on topographic elevation measurements. From a nominal operating altitude of 500 to 750 m above the ice surface, the ATM elevation measurements were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, and vertical precision 3 cm.
Gebert, A; Peters, J; Bishop, N E; Westphal, F; Morlock, M M
2009-01-01
Primary stability is essential to the success of uncemented prostheses. It is strongly influenced by implantation technique, implant design and bone quality. The goal of this study was to investigate the effect of press-fit parameters on the primary stability of uncemented femoral head resurfacing prostheses. An in vitro study with human specimens and prototype implants (nominal radial interference 170 and 420 microm) was used to investigate the effect of interference on primary stability. A finite element model was used to assess the influence of interference, friction between implant and bone, and bone quality. Primary stability was represented by the torque capacity of the implant. The model predicted increasing stability with actual interference, bone quality and friction coefficient; plastic deformation of the bone began at interferences of less than 100 microm. Experimentally, however, stability was not related to interference. This may be due to abrasion or the collapse of trabecular bone structures at higher interferences, which could not be captured by the model. High nominal interferences as tested experimentally appear unlikely to result in improved stability clinically. An implantation force of about 2,500 N was estimated to be sufficient to achieve a torque capacity of about 30 N m with a small interference (70 microm).
40 CFR 1033.140 - Rated power.
Code of Federal Regulations, 2010 CFR
2010-07-01
... configuration's rated power is the maximum brake power point on the nominal power curve for the locomotive configuration, as defined in this section. See § 1033.901 for the definition of brake power. Round the power value to the nearest whole horsepower. Generally, this will be the brake power of the engine in notch 8...
40 CFR 86.1314-94 - Analytical gases.
Code of Federal Regulations, 2010 CFR
2010-07-01
... CO2. respectively, using nitrogen as the diluent. (b) Gases for the hydrocarbon analyzer shall be: (1... named as NOX with a maximum NO2 concentration of five percent of the nominal value using nitrogen as the... the balance being helium. The mixture shall contain less than 1 ppm equivalent carbon response. 98 to...
16 CFR § 1630.4 - Test procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., flat, with a nominal heat of combustion value of 7180 calories/gram, a mass of 150 mg ±5 mg and a... draft turned off during each test and capable of rapidly removing the products of combustion following... available for inspection at the National Archives and Records Administration (NARA). For information on the...
40 CFR 1033.140 - Rated power.
Code of Federal Regulations, 2011 CFR
2011-07-01
... value to the nearest whole horsepower. Generally, this will be the brake power of the engine in notch 8... each possible operator demand setpoint or “notch”. See 40 CFR 1065.1001 for the definition of operator... discrete operator demand setpoints, or notches, the nominal power curve would be a series of eight power...
49 CFR 805.735-5 - Receipt of gifts, entertainment, and favors by Members or employees.
Code of Federal Regulations, 2010 CFR
2010-10-01
... motivating factors; (2) Acceptance of food and refreshments of nominal value on infrequent occasions in the... of loans from banks or other financial institutions on customary terms to finance proper and usual... transportation, and accept food, lodging, and entertainment incident thereto. (c) Members and employees shall not...
What Students Value as Inspirational and Transformative Teaching
ERIC Educational Resources Information Center
Bradley, Sally; Kirby, Emma; Madriaga, Manuel
2015-01-01
Evidence presented here stems from an analysis of student comments derived from a student-nominated inspirational teaching awards scheme at a large university in the United Kingdom (UK). There is a plethora of literature on teaching excellence and the scholarship of teaching, frequently based upon portfolios or personal claims of excellence, and…
Rogers, Kim R; Navratilova, Jana; Stefaniak, Aleksandr; Bowers, Lauren; Knepp, Alycia K; Al-Abed, Souhail R; Potter, Phillip; Gitipour, Alireza; Radwan, Islam; Nelson, Clay; Bradham, Karen D
2018-04-01
Given the potential for human exposure to silver nanoparticles from spray disinfectants and dietary supplements, we characterized the silver-containing nanoparticles in 22 commercial products that advertised the use of silver or colloidal silver as the active ingredient. Characterization parameters included: total silver, fractionated silver (particulate and dissolved), primary particle size distribution, hydrodynamic diameter, particle number, and plasmon resonance absorbance. A high degree of variability between claimed and measured values for total silver was observed. Only 7 of the products showed total silver concentrations within 20% of their nominally reported values. In addition, significant variations in the relative percentages of particulate vs. soluble silver were measured in many of the products reporting to be colloidal. Primary silver particle size distributions by transmission electron microscopy (TEM) showed two populations of particles: smaller particles (<5 nm) and larger particles between 20 and 40 nm. Hydrodynamic diameter measurements using nanoparticle tracking analysis (NTA) correlated well with TEM analysis for the larger particles. Z-average (Z-Avg) values measured using dynamic light scattering (DLS), however, were typically larger than both NTA and TEM particle diameters. Plasmon resonance absorbance signatures (peak absorbance at around 400 nm, indicative of metallic silver nanoparticles) were noted in only 4 of the 9 yellow-brown colored suspensions. Although total silver concentrations varied among products, ranging from 0.54 mg/L to 960 mg/L, silver-containing nanoparticles were identified in all of the product suspensions by TEM. Published by Elsevier B.V.
Magnetic and electrical properties in Co-doped KNbO3 bulk samples
NASA Astrophysics Data System (ADS)
Astudillo, Jairo A.; Dionizio, Stivens A.; Izquierdo, Jorge L.; Morán, Oswaldo; Heiras, Jesús; Bolaños, Gilberto
2018-05-01
Multiferroic materials exhibit in the same phase at least two of the ferroic properties, ferroelectricity, ferromagnetism, and ferroelasticity, which may be coupled to each other. In this work, we investigated bulk materials with a nominal composition KNb0.95Co0.05O3 (KN:Co) fabricated by the standard solid-state reaction technique. X-ray diffraction analysis of the polycrystalline sample shows the expected perovskite structure of the KNbO3 phase with only small variations due to the Co doping. No secondary or segregated phases are observed. The values of the extracted lattice parameters are very close to those reported in the literature for KNbO3 with orthorhombic symmetry (a = 5.696 Å, b = 3.975 Å, and c = 5.721 Å) and space group Bmm2. Measurements of the electric polarization as a function of the electric field at different temperatures indicate the presence of ferroelectricity in our samples. The magnetic response of the pellets, detected by high-sensitivity measurements of magnetization as a function of field, reveals weak ferromagnetic behavior in the doped sample at room temperature. Also, ferroelectric hysteresis loops were measured in a magnetic field of 1 T applied perpendicular to the plane of the sample. Values of the remnant polarization as high as 7.19 and 7.69 μC/cm2 are obtained for zero applied field and for 1 T, respectively; the value obtained for the strength of the magnetoelectric coupling is 6.9%.
Surface roughness mediated adhesion forces between borosilicate glass and gram-positive bacteria.
Preedy, Emily; Perni, Stefano; Nipiĉ, Damijan; Bohinc, Klemen; Prokopovich, Polina
2014-08-12
It is well known that a number of surface characteristics affect the extent of adhesion between two adjacent materials. One such parameter is surface roughness, as surface asperities at the nanoscale govern the overall adhesive forces. For example, the extent of bacterial adhesion is determined by the surface topography; moreover, once bacteria colonize a surface, proliferation of that species takes place and a biofilm may form, increasing the resistance of bacterial cells to removal. In this study, borosilicate glass of varying surface roughness was employed and coated with bovine serum albumin (BSA) in order to replicate the protein layer that covers orthopedic devices on implantation. As roughness is a scale-dependent quantity, relevant scan areas were analyzed with an atomic force microscope (AFM) to determine Ra; furthermore, appropriate bacterial species were attached to the tip to measure the adhesion forces between cells and substrates. The bacterial species chosen (Staphylococci and Streptococci) are common pathogens associated with a number of implant-related infections that are detrimental to biomedical devices and patients. Correlation between adhesion forces and surface roughness (Ra) was generally better when the surface roughness was measured over scanned areas of a size (2 × 2 μm) comparable to the bacterial cells. Furthermore, the BSA coating altered the surface roughness with no correlation to the initial values of this parameter; therefore, better correlations were found between adhesion forces and BSA-coated surfaces when the actual surface roughness was used instead of the initial (nominal) values. It was also found that BSA imparted a more hydrophilic and electron-donor character to the surfaces, in agreement with the increasing adhesion forces of hydrophilic bacteria (as determined through the microbial adhesion to solvents test) on BSA-coated substrates.
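Ra, the roughness parameter used throughout the study above, is the arithmetic mean of the absolute deviations of the surface heights from their mean line. A minimal sketch on a synthetic 1-D profile (not AFM data from the study):

```python
import numpy as np

def roughness_ra(heights):
    """Arithmetic-mean roughness Ra: mean absolute deviation of the
    surface heights from their mean line (simplified 1-D form)."""
    z = np.asarray(heights, dtype=float)
    return float(np.mean(np.abs(z - z.mean())))

# Synthetic sinusoidal profile of amplitude 10 (arbitrary units);
# over a full period a sinusoid has Ra = 2*amplitude/pi.
profile = 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 100_000))
ra = roughness_ra(profile)
```

The sampled value converges on 2 × 10/π ≈ 6.37, which makes the scale dependence noted in the abstract concrete: Ra depends on the window over which the deviations are averaged.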
NASA Astrophysics Data System (ADS)
Mautjana, R. T.; Molefe, P. T.; Mayindu, N. F.; Armah, M. N.; Ramasawmy, V.; Albasini, G. L.; Matali, S.; Richmond, H.; Rusimbi, V.; Kiwanuka, J.; Mutale, D. M.; Mutsimba, F.
2018-01-01
This report summarizes the results of the AFRIMETS.M.M-S6 mass standards comparison conducted between eleven participating laboratories/countries. Two sets of five weights with nominal values 100 mg, 100 g, 500 g, 1 kg and 5 kg were used as the traveling standards. These nominal values were decided from the needs of participating laboratories, submitted to the pilot laboratory through a questionnaire and agreed upon by all participants. The traveling standards were hand-carried between laboratories starting from February 2014 and were received from the last participants in October 2014. The programme was coordinated by the National Metrology Institute of South Africa (NMISA), which provided the travelling standards and reference values for the comparison. The corrections to the BIPM as-maintained mass unit [5] have an insignificant influence on the results of this comparison. The final report has been peer-reviewed and approved for publication by the CCM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
Gu, Z.; Sam, S. S.; Sun, Y.; Tang, L.; Pounds, S.; Caliendo, A. M.
2016-01-01
A potential benefit of digital PCR is a reduction in result variability across assays and platforms. Three sets of PCR reagents were tested on two digital PCR systems (Bio-Rad and RainDance) for quantitation of cytomegalovirus (CMV). Both commercial quantitative viral standards and patient plasma samples (n = 16) were tested. Quantitative accuracy (compared to nominal values) and variability were determined based on viral standard testing results. Quantitative correlation and variability were assessed with pairwise comparisons across all reagent-platform combinations for clinical plasma sample results. The three reagent sets, when used to assay quantitative standards on the Bio-Rad system, all showed a high degree of accuracy, low variability, and close agreement with one another. When used on the RainDance system, one of the three reagent sets showed a much better correlation to nominal values than did the other two. Quantitative results for patient samples showed good correlation in most pairwise comparisons, with poorer correlations for samples with low viral loads. Digital PCR is a robust method for measuring CMV viral load. Some degree of result variation may be seen, depending on the platform and reagents used; this variation appears to be greater in samples with low viral load values. PMID:27535685
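Digital PCR quantitation on platforms such as these rests on the Poisson correction λ = −ln(1 − p), where p is the fraction of positive partitions and λ the mean copies per partition. A hedged sketch; the partition counts and partition volume below are illustrative placeholders, not values from the study:

```python
import math

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    """Copies per microlitre from digital PCR partition counts via the
    standard Poisson correction lambda = -ln(1 - p)."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)            # mean copies per partition
    return lam / partition_volume_ul

# Illustrative counts and a placeholder 0.85 nL partition volume:
conc = dpcr_concentration(n_positive=5000, n_total=20000,
                          partition_volume_ul=0.00085)
```

Because the correction is nonlinear in p, counting noise inflates relative error at the extremes, one plausible reason low-viral-load samples correlate more poorly across platforms.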
Pouwels, J Loes; Lansu, Tessa A M; Cillessen, Antonius H N
2016-01-01
This study had three goals. First, we examined the prevalence of the participant roles of bullying in middle adolescence and possible gender differences therein. Second, we examined the behavioral and status characteristics associated with the participant roles in middle adolescence. Third, we compared two sets of criteria for assigning students to the participant roles of bullying. Participants were 1,638 adolescents (50.9% boys, mean age = 16.38 years, SD = 0.80) who completed the shortened participant role questionnaire and peer nominations for peer status and behavioral characteristics. Adolescents were assigned to the participant roles according to the relative criteria of Salmivalli, Lagerspetz, Björkqvist, Österman, and Kaukiainen (1996). Next, the students in each role were divided into two subgroups based on an additional absolute criterion: the Relative Only Criterion subgroup (nominated by less than 10% of their classmates) and the Absolute & Relative Criterion subgroup (nominated by at least 10% of their classmates). Adolescents who bullied or reinforced or assisted bullies were highly popular and disliked and scored high on peer-valued characteristics. Adolescents who were victimized held the weakest social position in the peer group. Adolescents who defended victims were liked and prosocial, but average in popularity and peer-valued characteristics. Outsiders held a socially weak position in the peer group, but were less disliked, less aggressive, and more prosocial than victims. The behavior and status profiles of adolescents in the participant roles were more extreme for the Absolute & Relative Criterion subgroup than for the Relative Only Criterion subgroup. © 2015 Wiley Periodicals, Inc.
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define the Saltstone Special Analysis base cases.
OPAD status report - Investigation of SSME component erosion
NASA Astrophysics Data System (ADS)
Powers, W. T.; Cooper, A. E.; Wallace, T. L.
1992-04-01
Significant erosion of preburner faceplates was observed during recent SSME test firings at the NASA Technology Test Bed (TTB). The OPAD instrumentation acquired exhaust-plume spectral data during each test indicating the occurrence of metallic species consistent with faceplate component composition. A qualitative analysis of the spectral data was conducted to evaluate the state of the engine versus time for each test relative to the nominal conditions of TTB firings 17 and 18. In general, the analyses indicate abnormal erosion levels at or near startup. Subsequent to the initial erosion event, signal levels tend to decrease toward nominal baseline values. These findings, in conjunction with post-test engine inspections, suggest that in the cases under study, the erosion may not have been catastrophic to the immediate operation of the engine.
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1993-01-01
A formulation that makes possible the integration of collision prediction and avoidance stages for mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the distance between the robot and a moving obstacle is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. A feedback control is developed and local asymptotic stability is proved if the velocity of the moving obstacle is bounded. Furthermore, a solution to the problem of inverse dynamics for the mobile robot is given. Simulation results verify the value of the proposed strategy.
A Self-Adapting System for the Automated Detection of Inter-Ictal Epileptiform Discharges
Lodder, Shaun S.; van Putten, Michel J. A. M.
2014-01-01
Purpose: Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that makes computer-assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Methods: Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form “IED nominations”, each with a corresponding certainty value based on the reliability of their contributing templates. The second phase takes the ten nominations with the highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update the certainty values of the remaining nominations, and another iteration is performed in which the ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Key Findings: Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations, respectively. Reviewing fifteen iterations for the 20–30 min recordings took approximately 5 min. Significance: The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents. PMID:24454813
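The nominate-confirm-rescore loop described in the Methods can be sketched roughly as follows. The certainty-update rule (mean empirical accuracy of the contributing templates) and all names are illustrative assumptions, not the authors' exact algorithm:

```python
def review_loop(nominations, confirm, batch=10, iterations=3):
    """Sketch of iterative reviewer-in-the-loop confirmation.

    nominations: dict id -> (certainty, [template_ids]); mutated in place.
    confirm: callable id -> bool standing in for the human reviewer.
    """
    template_hits, template_total = {}, {}
    confirmed = []
    for _ in range(iterations):
        # Present the `batch` highest-certainty remaining nominations.
        ranked = sorted(nominations, key=lambda i: -nominations[i][0])[:batch]
        for nid in ranked:
            _certainty, templates = nominations.pop(nid)
            verdict = confirm(nid)
            if verdict:
                confirmed.append(nid)
            # Track per-template accuracy from reviewer feedback.
            for t in templates:
                template_total[t] = template_total.get(t, 0) + 1
                template_hits[t] = template_hits.get(t, 0) + int(verdict)
        # Re-score remaining nominations from updated template accuracies.
        for nid, (c, templates) in nominations.items():
            accs = [template_hits[t] / template_total[t]
                    for t in templates if t in template_total]
            if accs:
                nominations[nid] = (sum(accs) / len(accs), templates)
    return confirmed

# Tiny illustrative run: nominations 1 and 2 share template 'a',
# and the (simulated) reviewer rejects only nomination 3.
noms = {1: (0.9, ['a']), 2: (0.8, ['a']), 3: (0.1, ['b'])}
confirmed = review_loop(noms, confirm=lambda nid: nid != 3,
                        batch=2, iterations=2)
```

The point of the structure is that one confirmation simultaneously raises the certainty of every untouched nomination sharing the same templates, which is what lets most true IEDs surface within the first few iterations.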
Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System
NASA Technical Reports Server (NTRS)
Williams, Peggy S.
2004-01-01
The NASA F-15 Intelligent Flight Control System project team has developed a series of flight control concepts designed to demonstrate the benefits of a neural network-based adaptive controller. The objective of the team is to develop and flight-test control systems that use neural network technology to optimize the performance of the aircraft under nominal conditions as well as stabilize the aircraft under failure conditions. Failure conditions include locked or failed control surfaces as well as unforeseen damage that might occur to the aircraft in flight. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to the baseline aerodynamic derivatives in flight. This set of open-loop flight tests was performed in preparation for a future phase of flights in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. An examination of flight data shows that addition of the flight-identified aerodynamic derivative increments into the simulation improved the pitch handling qualities of the aircraft.
Bond length variation in Zn substituted NiO studied from extended X-ray absorption fine structure
NASA Astrophysics Data System (ADS)
Singh, S. D.; Poswal, A. K.; Kamal, C.; Rajput, Parasmani; Chakrabarti, Aparna; Jha, S. N.; Ganguli, Tapas
2017-06-01
Bond length behavior for Zn-substituted NiO is determined through extended x-ray absorption fine structure (EXAFS) measurements performed at ambient conditions. We report a bond length value of 2.11±0.01 Å for Zn-O of rock salt (RS) symmetry when Zn is doped in RS NiO. The bond length for Zn-substituted NiO RS ternary solid solutions shows relaxed behavior for the Zn-O bond, while it shows unrelaxed behavior for the Ni-O bond. These observations are further supported by first-principles calculations. It is also inferred that the Zn sublattice remains nearly unchanged with increasing lattice parameter. On the other hand, the Ni sublattice dilates for Zn compositions up to 20% to accommodate the increase in the lattice parameter. However, for Zn compositions of more than 20%, it does not dilate further. This is attributed to the large disorder that is incorporated in the system at and beyond 20% Zn incorporation in the cubic RS lattice of the ternary solid solutions. For these large percentages of Zn incorporation, the Ni and Zn atoms rearrange themselves microscopically about the same nominal bond length rather than systematically increasing it to minimize the energy of the system. This results in an increase in the Debye-Waller factor with increasing Zn concentration rather than a systematic increase in the bond lengths.
Flight motor set 360L008 (STS-32R). Volume 1: System overview
NASA Technical Reports Server (NTRS)
Garecht, D. M.
1990-01-01
Flight motor set 360L008 was launched as part of NASA space shuttle mission STS-32R. As with all previous redesigned solid rocket motor launches, overall motor performance was excellent. All ballistic contract end item specification parameters were verified with the exception of ignition interval and rise rates, which could not be verified due to the elimination of developmental flight instrumentation; the available low-sample-rate data, however, showed nominal propulsion performance. All ballistic and mass property parameters closely matched the predicted values and were well within the required contract end item specification levels that could be assessed. All field joint heaters and igniter joint heaters performed without anomalies. Redesigned field joint heaters and the redesigned left-hand igniter heater were used on this flight; the changes to the heaters were primarily to improve durability and reduce handling damage. Evaluation of the ground environment instrumentation measurements again verified thermal model analysis data and showed agreement with predicted environmental effects. No launch commit criteria violations occurred. Postflight inspection again verified superior performance of the insulation, phenolics, metal parts, and seals. Postflight evaluation indicated that both nozzles performed as expected during flight. All combustion gas was contained by insulation in the field and case-to-nozzle joints. Recommendations were made concerning improved thermal modeling and measurements. The rationale for these recommendations and complete result details are presented.
NASA Astrophysics Data System (ADS)
Protassov, R.; van Dyk, D.; Connors, A.; Kashyap, V.; Siemiginowska, A.
2000-12-01
We examine the x-ray spectrum of the afterglow of GRB 970508, analyzed for Fe line emission by Piro et al. (1999, ApJL, 514, L73). This is a difficult and extremely important measurement: the detection of x-ray afterglows from γ-ray bursts is at best a tricky business, relying on near-real-time satellite response to unpredictable events and a great deal of luck in catching a burst bright enough for a useful spectral analysis. Detecting a clear atomic (or cyclotron) line in the generally smooth and featureless afterglow (or burst) emission not only gives one of the few very specific keys to the physics local to the emission region, but also provides clues to, or confirmation of, its distance (via redshift). Unfortunately, neither the likelihood ratio test nor the related F-statistic commonly used to detect spectral lines adheres to its nominal chi-square or F distribution. Thus we begin by calibrating the F-statistic used in Piro et al. (1999) via a simulation study. The simulation study relies on a completely specified source model, i.e., we perform Monte Carlo simulations with all model parameters fixed (so-called “parametric bootstrapping”). Second, we employ the method of posterior predictive p-values to calibrate an LRT statistic while accounting for the uncertainty in the parameters of the source model. Our analysis reveals evidence for the Fe K line.
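The parametric bootstrap used to calibrate the F-statistic follows a generic recipe: simulate many datasets from the fully specified null (no-line) model, recompute the statistic on each, and read the p-value off the simulated tail. The toy null model and statistic below are placeholders, not the GRB spectral model:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pvalue(observed_stat, simulate_null, statistic, n_sim=500):
    """Calibrate a test statistic by parametric bootstrap: the p-value is
    the fraction of null-model simulations whose statistic meets or
    exceeds the observed value (all null-model parameters held fixed)."""
    sims = np.array([statistic(simulate_null()) for _ in range(n_sim)])
    return float(np.mean(sims >= observed_stat))

# Toy illustration: null model = 50 standard-normal "residuals";
# statistic = their maximum (a stand-in for a line-detection statistic).
simulate = lambda: rng.standard_normal(50)
stat = lambda data: float(data.max())
p = bootstrap_pvalue(observed_stat=4.5, simulate_null=simulate, statistic=stat)
```

The benefit over quoting the nominal F-distribution tail is exactly the point the abstract makes: the simulated null distribution is correct by construction for the model at hand, whereas the textbook distribution need not apply.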
Duft, Martina; Schulte-Oehlmann, Ulrike; Tillmann, Michaela; Markert, Bernd; Oehlmann, Jörg
2003-01-01
The effects of two suspected endocrine-disrupting chemicals, the xeno-androgens triphenyltin (TPT) and tributyltin (TBT), were investigated in a new whole-sediment biotest with the freshwater mudsnail Potamopyrgus antipodarum (Gastropoda, Prosobranchia). Artificial sediments were spiked with seven concentrations, ranging from 10 to 500 microg nominal TPT-Sn/kg dry weight and TBT-Sn/kg dry weight, respectively. We analyzed the responses of the test species after two, four, and eight weeks exposure. For both compounds, P. antipodarum exhibited a sharp decline in the number of embryos sheltered in its brood pouch in a time- and concentration-dependent manner in comparison to the control sediment. The number of new, still unshelled embryos turned out to be the most sensitive parameter. The lowest-observed-effect concentration (LOEC) was equivalent to the lowest administered concentration (10 microg/kg of each test compound) for most parameters and thus no no-observed-effect concentration (NOEC) could be established. The calculation of effect concentrations (EC10) resulted in even lower values for both substances (EC10 after eight weeks for unshelled embryos: 0.03 microg TPT-Sn/kg, EC10 after four weeks for unshelled embryos: 0.98 microg TBT-Sn/kg). Our results indicate that P. antipodarum is highly sensitive to both endocrine disruptors TPT and TBT at environmentally relevant concentrations.
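For context on how ECx values such as the EC10 figures quoted above are obtained: once a concentration-response model has been fitted, ECx follows in closed form. The log-logistic curve below is a common choice in ecotoxicology but is an assumption here, not necessarily the model fitted in the study, and the parameter values are placeholders:

```python
def log_logistic_ecx(ec50, hill, x_percent):
    """Effect concentration ECx for the log-logistic curve
    E(c) = (c/EC50)**h / (1 + (c/EC50)**h); solves E(c) = x/100,
    giving ECx = EC50 * (x/(100-x))**(1/h)."""
    return ec50 * (x_percent / (100.0 - x_percent)) ** (1.0 / hill)

# Placeholder parameters (not fitted values from the study):
ec10 = log_logistic_ecx(ec50=1.0, hill=2.0, x_percent=10.0)
ec50_check = log_logistic_ecx(ec50=1.0, hill=2.0, x_percent=50.0)  # = EC50
```

This closed form makes plain why an EC10 can fall below the lowest tested concentration, as happened here when no NOEC could be established: it is an interpolation on the fitted curve, not a tested dose.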
Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs. PMID:26565557
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, T.; Fondeur, F.; Fink, S.
Solvent Hold Tank (SHT) samples are sent to Savannah River National Laboratory (SRNL) to examine solvent composition changes over time. On December 5, 2011, Operations personnel delivered six samples from the SHT (MCU-11-1452 through -1457) for analysis. These samples are intended to verify that the solvent is within the specified composition range. The results from the analyses are presented in this document. Samples were received in p-nut vials containing ~10 mL each. Once taken into the Shielded Cells, the samples were combined. Samples were removed for analysis by density, semi-volatile organic analysis (SVOA), high performance liquid chromatography (HPLC), and Fourier-Transform Infra-Red spectroscopy (FTIR). Details for the work are contained in a controlled laboratory notebook. Each of the six p-nut vials contained a single phase, with no apparent solids contamination or cloudiness. Table 1 contains the results of the analyses for the combined samples. A duplicate density measurement of the organic phase gave a result of 0.844 g/mL (1.2% relative standard deviation - RSD). Using the density as a starting point, we know that the Isopar® L should be slightly higher than nominal and the other components should be slightly lower than nominal. The results as a whole are internally consistent. All measurements indicate Isopar® L higher than nominal, and Modifier lower than nominal. The extractant result is higher than expected - given the other results, the extractant concentration should be under nominal values. Using the measured density as well as the Isopar® L and Modifier concentrations from the FTIR results, we calculate an extractant concentration of 6888 mg/L. This value is outside the analytical uncertainty of the reported HPLC value. Given the other results, this most likely indicates that the HPLC extractant result was biased high.
When compared to the MCU density target of 0.845 g/mL, there is no need to add an Isopar® L trim. However, it is advisable to add sufficient trioctylamine (TOA) to return the solvent composition to within specifications, as that component has declined to about 64% of its concentration since the last analysis. The TOA measurement was performed twice, so the result is not an analytical aberration. TOA has not been added to the system since the previous quarterly sample in October 2011. As with the previous solvent sample results, these analyses indicate that the solvent does not require Isopar® L trimming at this time. However, addition of TOA is warranted. These findings indicate that the new protocols for solvent monitoring and control are yielding favorable results. Nevertheless, the deviation in the TOA concentration since the last analysis indicates continued periodic (i.e., quarterly) monitoring is recommended.
Modal parameters of space structures in 1 G and 0 G
NASA Technical Reports Server (NTRS)
Bicos, Andrew S.; Crawley, Edward F.; Barlow, Mark S.; Van Schoor, Marthinus C.; Masters, Brett
1993-01-01
Analytic and experimental results are presented from a study of the changes in the modal parameters of space structural test articles from one- to zero-gravity. Deployable, erectable, and rotary modules were assembled to form three one- and two-dimensional structures, in which variations in bracing wire and rotary joint preload could be introduced. The structures were modeled as if hanging from a suspension system in one gravity, and unconstrained, as if free floating in zero gravity. The analysis is compared with ground experimental measurements, which were made on a spring-wire suspension system with a nominal plunge frequency of one Hertz, and with measurements made on the Shuttle middeck. The degree of change in linear modal parameters as well as the change in the nonlinear nature of the response is examined. Trends in modal parameters are presented as a function of force amplitude, joint preload, reassembly, shipset, suspension, and ambient gravity level.
ELECTRIC AND MAGNETIC FIELDS <100 KHZ IN ELECTRIC AND GASOLINE-POWERED VEHICLES.
Tell, Richard A; Kavet, Robert
2016-12-01
Measurements were conducted to investigate electric and magnetic fields (EMFs) from 120 Hz to 10 kHz and 1.2 to 100 kHz in 9 electric or hybrid vehicles and 4 gasoline vehicles, all while being driven. The range of fields in the electric vehicles enclosed the range observed in the gasoline vehicles. Mean magnetic fields ranged from nominally 0.6 to 3.5 µT for electric/hybrids depending on the measurement band, compared with nominally 0.4 to 0.6 µT for gasoline vehicles. Mean values of electric fields ranged from nominally 2 to 3 V m-1 for electric/hybrid vehicles depending on the band, compared with 0.9 to 3 V m-1 for gasoline vehicles. In all cases, the fields were well within published exposure limits for the general population. The measurements were performed with Narda model EHP-50C/EHP-50D EMF analysers, which revealed the presence of spurious signals in the EHP-50C unit; these were resolved with the EHP-50D model.
Ductile-regime turning of germanium and silicon
NASA Technical Reports Server (NTRS)
Blake, Peter N.; Scattergood, Ronald O.
1989-01-01
Single-point diamond turning of silicon and germanium was investigated in order to clarify the role of cutting depth in coaxing a ductile chip formation in normally brittle substances. Experiments based on the rapid withdrawal of the tool from the workpiece have shown that microfracture damage is a function of the effective depth of cut (as opposed to the nominal cutting depth). In essence, damage created by the leading edge of the tool is removed several revolutions later by lower sections of the tool edge, where the effective cutting depth is less. It appears that a truly ductile cutting response can be achieved only when the effective cutting depth, or critical chip thickness, is less than about 20 nm. Factors such as tool rake angle are significant in that they will affect the actual value of the critical chip thickness for transition from brittle to ductile response. It is concluded that the critical chip thickness is an excellent parameter for measuring the effects of machining conditions on the ductility of the cut and for designing tool-workpiece geometry in both turning and grinding.
High Accuracy Transistor Compact Model Calibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in those margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Advanced Control Synthesis for Reverse Osmosis Water Desalination Processes.
Phuc, Bui Duc Hong; You, Sam-Sang; Choi, Hyeung-Six; Jeong, Seok-Kwon
2017-11-01
In this study, robust control synthesis has been applied to a reverse osmosis desalination plant whose product water flow and salinity are chosen as the two controlled variables. The reverse osmosis process was selected for study since it typically uses less energy than thermal distillation. The aim of the robust design is to overcome the limitation of classical controllers in dealing with large parametric uncertainties, external disturbances, sensor noises, and unmodeled process dynamics. The analyzed desalination process is modeled as a multi-input multi-output (MIMO) system with varying parameters. The control system is decoupled using a feed-forward decoupling method to reduce the interactions between control channels. Both nominal and perturbed reverse osmosis systems have been analyzed using structured singular values for their stability and performance. Simulation results show that the system responses meet all the control requirements against various uncertainties. Finally, the reduced-order controller provides excellent robust performance, achieving decoupling, disturbance attenuation, and noise rejection. It can help to reduce membrane cleanings, increase robustness against uncertainties, and lower the energy consumption for process monitoring.
Multiple Hollow Cathode Wear Testing for the Space Station Plasma Contactor
NASA Technical Reports Server (NTRS)
Soulas, George C.
1994-01-01
A wear test of four hollow cathodes was conducted to resolve issues associated with the Space Station plasma contactor. The objectives of this test were to evaluate unit-to-unit dispersions, verify the transportability of contamination control protocols developed by the project, and to evaluate cathode contamination control and activation procedures to enable simplification of the gas feed system and heater power processor. These objectives were achieved by wear testing four cathodes concurrently to 2000 hours. Test results showed maximum unit-to-unit deviations for discharge voltages and cathode tip temperatures to be +/-3 percent and +/-2 percent, respectively, of the nominal values. Cathodes utilizing contamination control procedures known to increase cathode lifetime showed no trends in their monitored parameters that would indicate a possible failure, demonstrating that contamination control procedures had been successfully transferred. Comparisons of cathodes utilizing and not utilizing a purifier or simplified activation procedure showed similar behavior during wear testing and pre- and post-test performance characterizations. This behavior indicates that use of simplified cathode systems and procedures is consistent with long cathode lifetimes.
Application of X-ray micro-computed tomography on high-speed cavitating diesel fuel flows
NASA Astrophysics Data System (ADS)
Mitroglou, N.; Lorenzi, M.; Santini, M.; Gavaises, M.
2016-11-01
The flow inside a purpose built enlarged single-orifice nozzle replica is quantified using time-averaged X-ray micro-computed tomography (micro-CT) and high-speed shadowgraphy. Results have been obtained at Reynolds and cavitation numbers similar to those of real-size injectors. Good agreement for the cavitation extent inside the orifice is found between the micro-CT and the corresponding temporal mean 2D cavitation image, as captured by the high-speed camera. However, the internal 3D structure of the developing cavitation cloud reveals a hollow vapour cloud ring formed at the hole entrance and extending only at the lower part of the hole due to the asymmetric flow entry. Moreover, the cavitation volume fraction exhibits a significant gradient along the orifice volume. The cavitation number and the needle valve lift seem to be the most influential operating parameters, while the Reynolds number seems to have only small effect for the range of values tested. Overall, the study demonstrates that use of micro-CT can be a reliable tool for cavitation in nozzle orifices operating under nominal steady-state conditions.
Optimal Load-Side Control for Frequency Regulation in Smart Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Changhong; Mallada, Enrique; Low, Steven
Frequency control rebalances supply and demand while maintaining the network state within operational margins. It is implemented using fast ramping reserves that are expensive and wasteful, and which are expected to become increasingly necessary with the current acceleration of renewable penetration. The most promising solution to this problem is the use of demand response, i.e., load participation in frequency control. Yet it is still unclear how to efficiently integrate load participation without introducing instabilities and violating operational constraints. In this paper, we present a comprehensive load-side frequency control mechanism that can maintain the grid within operational constraints. In particular, our controllers can rebalance supply and demand after disturbances, restore the frequency to its nominal value, and preserve interarea power flows. Furthermore, our controllers are distributed (unlike the currently implemented frequency control), can allocate load updates optimally, and can maintain line flows within thermal limits. We prove that such a distributed load-side control is globally asymptotically stable and robust to unknown load parameters. We illustrate its effectiveness through simulations.
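As a purely illustrative aside (this is our own toy single-area model, not the paper's distributed controller), the core idea of load-side frequency restoration can be sketched as an integral load controller that absorbs a generation loss and returns the frequency deviation to zero. All constants are made up for the sketch.

```python
# Toy single-area swing dynamics with load-side integral control.
# M: inertia, D: damping, ki: integral gain (all per-unit, illustrative).
M, D, ki = 1.0, 1.0, 1.0
dt = 0.01
step = 0.5          # sudden loss of generation (p.u.)
w = 0.0             # frequency deviation from nominal
u = 0.0             # commanded load reduction (demand response)
for _ in range(20000):
    w += dt * (-D * w - step + u) / M   # forward-Euler swing equation
    u += dt * ki * (-w)                 # integral action drives w back to zero
# After the transient, the load reduction u covers the lost generation
# (u -> step) and the frequency deviation w returns to (near) zero.
```

In the paper this is done in a distributed, optimization-based fashion with line-flow constraints; the sketch only shows why load participation can restore nominal frequency without ramping reserves.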
Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred
2011-11-01
Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using a Markov random field based super resolution mapping (SRM) approach in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and it locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas and the method is applied in QuickBird images to produce a 0.6 m tree crown map in a city of The Netherlands. The SRM output shows an identification rate of 66% and commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification results obtained with maximum likelihood, support vector machines and SRM at nominal resolution (2.4 m) approaches.
RSRM and ETM03 Internal Flow Simulations and Comparisons
NASA Technical Reports Server (NTRS)
Ahmad, R. A.; Morstadt, R. A.; Eaton, A. M.
2003-01-01
ETM03 (Engineering Test Motor-03) is an extended length RSRM (Reusable Solid Rocket Motor) designed to increase motor performance and create more severe internal environments compared with the standard four-segment RSRM motor configuration. This is achieved primarily through three unique design features. First is the incorporation of an additional RSRM center segment, second is a slight increase in throat diameter, and third is the use of an Extended Aft Exit Cone (EAEC). As a result of these design features, parameters such as web time, action time, head end pressure, web time average pressure, maximum thrust, mass flow rate, centerline Mach number, pressure and thrust integrals have all increased compared with nominal RSRM values. In some cases these increases are substantial. The primary objective of the ETM03 test program is to provide a platform for RSRM component margin testing. Test results will not only provide direct data concerning component performance under more adverse conditions, but serve as a second design data point for developing, validating and enhancing component analytical modeling techniques. To help component designers assess how the changes in motor environment will affect performance, internal flow simulations for both the nominal RSRM and ETM03 motor designs were completed to obtain comparisons of aero-thermal boundary conditions and system loads. Full geometries for both motors were characterized with two-dimensional axi-symmetric models at burn times of 1, 20, 54, 67 and 80-seconds. A sixth set considered burn times of 110 and 117-seconds for RSRM and ETM03, respectively. The simulations were performed using the computational fluid dynamics (CFD) commercial code FLUENT (trademark). Of particular interest were any differences between the two motor environments that could lead to a significant increase in system loads, or in internal insulation and/or nozzle component charring and erosion in ETM03 compared with RSRM. 
Based on these comparative analyses conducted in this study, the objective of ETM03 will be achieved by providing a more adverse operating environment for motor components than the nominal RSRM environment. For example: Higher chamber pressure drop in ETM03 than in RSRM; higher centerline Mach numbers approaching the nozzle in ETM03 than in RSRM; higher heat transfer rates for the internal insulation and nozzle components in ETM03 than in RSRM; and higher levels of droplet impingement and slag accumulation in ETM03 than in the RSRM.
NASA Astrophysics Data System (ADS)
McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin
2017-12-01
We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FAC characteristics and dependence on controlling parameters do not map between scales in a straightforward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
X-Ray Simulator Theory Support
1993-11-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poukey, J.W.; Coleman, P.D.; Sanford, T.W.L.
1985-01-01
MABE is a multistage linear electron accelerator which accelerates up to nine beams in parallel. Nominal parameters per beam are 25 kA, final energy 7 MeV, and guide field 20 kG. We report recent progress via theory and simulation in understanding the beam dynamics in such a system. In particular, we emphasize our results on the radial oscillations and emittance growth for a beam passing through a series of accelerating gaps. 12 refs., 8 figs.
A cost/benefit analysis of commercial fusion-fission hybrid reactor development
NASA Astrophysics Data System (ADS)
Kostoff, Ronald N.
1983-04-01
A simple algorithm was developed that allows rapid computation of the ratio, R, of the present worth of benefits to the present worth of hybrid R&D program costs as a function of potential hybrid unit electricity cost savings, discount rate, electricity demand growth rate, total hybrid R&D program cost, and time to complete a demonstration reactor. In the sensitivity study, these variables were assigned nominal values (unit electricity cost savings of 4 mills/kW-hr, discount rate of 4%/year, growth rate of 2.25%/year, total R&D program cost of $20 billion, and time to complete a demonstration reactor of 30 years), and the variable of interest was varied about its nominal value. Results show that R increases with decreasing discount rate and increasing unit electricity savings, and ranges from 4 to 94 as the discount rate ranges from 5 to 3%/year and unit electricity savings range from 2 to 6 mills/kW-hr. R increases with increasing growth rate and ranges from 3 to 187 as the growth rate ranges from 1 to 3.5%/year and unit electricity cost savings range from 2 to 6 mills/kW-hr. R attains a maximum value when plotted against time to complete a demonstration reactor. The location of this maximum occurs at shorter completion times as the discount rate increases, and this optimal completion time ranges from 20 years for a discount rate of 4%/year to 45 years for a discount rate of 3%/year.
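The present-worth ratio described above can be sketched in a few lines. This is our own simplified cash-flow model, not the paper's algorithm: the R&D cost is assumed to be spread evenly over the development period, benefits begin after completion and grow with demand, and the market size (3e12 kW-hr/yr) and 50-year benefit horizon are illustrative assumptions; only the nominal parameter values come from the abstract.

```python
def present_worth(flows, rate):
    """Discount a list of annual cash flows back to year 0."""
    return sum(f / (1.0 + rate) ** t for t, f in enumerate(flows, start=1))

rate = 0.04          # discount rate, per year (nominal value)
t_dev = 30           # years to complete a demonstration reactor (nominal)
rd_cost = 20e9       # total R&D program cost, $ (nominal), spread over t_dev
costs = present_worth([rd_cost / t_dev] * t_dev, rate)

# Benefits: unit savings of 4 mills/kW-hr on an assumed 3e12 kW-hr/yr market,
# starting after the demo is complete and growing at 2.25%/yr for 50 years.
savings0 = 4e-3 * 3e12
growth, horizon = 0.0225, 50
benefit_flows = [0.0] * t_dev + [savings0 * (1 + growth) ** k for k in range(horizon)]
R = present_worth(benefit_flows, rate) / costs
```

With these assumptions R lands well above 1, consistent in spirit (though not in detail) with the ranges the study reports.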
Cost of Living and Taxation Adjustments in Salary Comparisons. AIR 1993 Annual Forum Paper.
ERIC Educational Resources Information Center
Zeglen, Marie E.; Tesfagiorgis, Gebre
This study examined faculty salaries at 50 higher education institutions using methods to adjust salaries for geographic differences, cost of living, and tax burdens so that comparisons were based on real rather than nominal value of salaries. The study sample consisted of one public doctorate granting institution from each state and used salary…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-04
... protect the unique and important resources and values of the land for the benefit and enjoyment of present..., paleontological, natural, scientific, recreational, wilderness, wildlife, riparian, historical, educational, and... Uncompahgre Field Offices, or may be downloaded from the following Web site: http://www.blm.gov/co/st/en/nca...
40 CFR 86.114-94 - Analytical gases.
Code of Federal Regulations, 2010 CFR (identical text appears in the 2011-2014 editions)
2010-07-01
... and CO2 analyzers shall be single blends of CO and CO2 respectively using nitrogen as the diluent. (2... of the nominal value, using nitrogen as the diluent. (5) Fuel for FIDs and HFIDs and the methane analyzer shall be a blend of 40 ±2 percent hydrogen with the balance being helium. The mixture shall...
49 CFR 178.68 - Specification 4E welded aluminum cylinders.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Where: S = wall stress in psi; P = minimum test pressure prescribed for water jacket test; D = outside... and service pressure. A DOT 4E cylinder is a welded aluminum cylinder with a water capacity (nominal... stress at twice service pressure may not exceed the lesser value of either of the following: (i) 20,000...
ERIC Educational Resources Information Center
Haynes, Abby; Butow, Phyllis; Brennan, Sue; Williamson, Anna; Redman, Sally; Carter, Stacy; Gallego, Gisselle; Rudge, Sian
2018-01-01
This paper explores the enormous variation in views, championing behaviours and impacts of liaison people: staff nominated to facilitate, tailor and promote SPIRIT (a research utilisation intervention trial in six Australian health policy agencies). Liaison people made cost/benefit analyses: they weighed the value of participation against its…
30 CFR 90.204 - Approved sampling devices; maintenance and calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... pack multiplied by 1.25. The voltage for other than nickel cadmium cell batteries shall not be lower than the product of the number of cells in the battery pack multiplied by the manufacturer's nominal voltage per cell value; (2) Examination of all components of the cyclone to assure that they are clean and...
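The battery-voltage acceptance rule quoted in this entry (and repeated in the 30 CFR 70.204 entry below) can be captured in a few lines; the function name and chemistry labels are ours, not the regulation's.

```python
def min_pack_voltage(n_cells, chemistry, nominal_v_per_cell=None):
    """Minimum acceptable sampler battery-pack voltage per the rule above:
    nickel-cadmium packs must read at least 1.25 V per cell; other
    chemistries use the manufacturer's nominal per-cell voltage."""
    if chemistry == "nicd":
        return n_cells * 1.25
    if nominal_v_per_cell is None:
        raise ValueError("nominal per-cell voltage required for non-NiCd packs")
    return n_cells * nominal_v_per_cell

# A 4-cell NiCd pack must read at least 5.0 V.
threshold = min_pack_voltage(4, "nicd")  # 5.0
```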
30 CFR 70.204 - Approved sampling devices; maintenance and calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... battery pack multiplied by the manufacturer's nominal voltage per cell value; (2) Examination of all... calibrated at the flowrate of 2.0 liters of air per minute, or at a different flowrate as prescribed by the... than the product of the number of cells in the battery pack multiplied by 1.25. The voltage for other...
Exploring a possible origin of a 14 deg y-normal spin tilt at RHIC polarimeter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meot, F.; Huang, H.
2015-06-15
A possible origin of a 14 deg y-normal tilt of the stable spin direction n0 at the polarimeter is in snake angle defects. This possible cause is investigated by scanning the snake axis angle µ, and the spin rotation angle at the snake, φ, in the vicinity of their nominal values.
NASA Technical Reports Server (NTRS)
Stokes, R. L.
1979-01-01
Tests performed to determine accuracy and efficiency of bus separators used in microprocessors are presented. Functional, AC parametric, and DC parametric tests were performed in a Tektronix S-3260 automated test system. All the devices passed the functional tests and yielded nominal values in the parametric test.
Srivastava, Abneesh; Michael Verkouteren, R
2018-07-01
Isotope ratio measurements have been conducted on a series of isotopically distinct pure CO2 gas samples using the technique of dual-inlet isotope ratio mass spectrometry (DI-IRMS). The influence of instrumental parameters and data normalization schemes on the metrological traceability and uncertainty of the sample isotope composition has been characterized. Traceability to the Vienna PeeDee Belemnite (VPDB)-CO2 scale was realized using the pure CO2 isotope reference materials (iRMs) 8562, 8563, and 8564. The uncertainty analyses include contributions associated with the values of the iRMs and the repeatability and reproducibility of our measurements. Our DI-IRMS measurement system is demonstrated to have high long-term stability, approaching a precision of 0.001 parts per thousand for the 45/44 and 46/44 ion signal ratios. The single- and two-point normalization biases for the iRMs were found to be within their published standard uncertainty values. The values of 13C/12C and 18O/16O isotope ratios are expressed relative to VPDB-CO2 using the δ13C and δ18O notation, respectively, in parts per thousand (‰ or per mil). For the samples, value assignments between (-25 to +2) ‰ and (-33 to -1) ‰ with nominal combined standard uncertainties of (0.05, 0.3) ‰ for δ13C and δ18O, respectively, were obtained. These samples are used as laboratory references to provide anchor points for value assignment of isotope ratios (with VPDB traceability) to pure CO2 samples. Additionally, they serve as potential parent isotopic source material required for the development of gravimetry-based iRMs of CO2 in CO2-free dry air in high-pressure gas cylinder packages at desired abundance levels and isotopic composition values. Graphical abstract: CO2 gas isotope ratio metrology.
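For readers unfamiliar with the δ notation used above, it expresses a sample's isotope ratio relative to a standard ratio in parts per thousand. A minimal sketch of the standard definition follows; the sample numbers are illustrative.

```python
def delta_permil(r_sample, r_standard):
    """delta = (Rsample / Rstandard - 1) * 1000, in parts per thousand (per mil)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose ratio is 2.5% below the standard's reads -25 per mil,
# i.e. at the lower end of the delta-13C range assigned above.
d = delta_permil(0.975, 1.0)  # -25.0 (to floating-point precision)
```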
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Shen, C; Tian, Z
Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which could be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread, and spatial spread are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between the determined values and the ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%.
Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
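The superposition-and-fit idea in the Methods section above can be sketched in miniature: a "real" beam's depth dose is a Gaussian-weighted sum of precomputed ideal-beam doses, and the mean energy is recovered by matching a measurement. The depth-dose kernels here are synthetic stand-ins, not MC results, and the brute-force search replaces the paper's conjugate-gradient algorithm.

```python
import math

energies = [float(e) for e in range(80, 90)]      # ideal-beam energies (MeV)

def kernel(e, depth):
    """Synthetic stand-in for an ideal (monoenergetic) beam's depth dose."""
    return math.exp(-((depth - e / 10.0) ** 2))   # peak position tied to energy

def real_beam_dose(mean_e, sigma_e, depth):
    """Gaussian-weighted superposition of ideal-beam doses."""
    w = [math.exp(-((e - mean_e) ** 2) / (2 * sigma_e ** 2)) for e in energies]
    return sum(wi * kernel(e, depth) for wi, e in zip(w, energies)) / sum(w)

depths = [d * 0.5 for d in range(20)]
measured = [real_beam_dose(85.0, 1.0, d) for d in depths]  # pretend measurement

# "Commission" by brute-force search over the mean energy (spread fixed here).
best = min(energies, key=lambda m: sum((real_beam_dose(m, 1.0, d) - md) ** 2
                                       for d, md in zip(depths, measured)))
```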
Extreme-value dependence: An application to exchange rate markets
NASA Astrophysics Data System (ADS)
Fernandez, Viviana
2007-04-01
Extreme value theory (EVT) focuses on modeling the tail behavior of a loss distribution using only extreme values rather than the whole data set. For a sample of 10 countries with dirty/free float regimes, we investigate whether paired currencies exhibit a pattern of asymptotic dependence, that is, whether an extremely large appreciation or depreciation in the nominal exchange rate of one country might transmit to another. In general, after controlling for volatility clustering and inertia in returns, we do not find evidence of extreme-value dependence between paired exchange rates. However, for asymptotically independent paired returns, we find that tail dependency of exchange rates is stronger under large appreciations than under large depreciations.
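The asymptotic-dependence question above is often posed through the tail-dependence coefficient χ(u). A minimal empirical sketch follows; the estimator, the synthetic return series, and the quantile level are illustrative assumptions, not the paper's actual procedure (which also filters out volatility clustering and inertia first):

```python
import random

random.seed(7)

# chi(u) = P(Y > q_u(Y) | X > q_u(X)) at a high quantile u; values near zero
# as u -> 1 suggest asymptotic independence of the paired returns.
def chi_hat(x, y, u=0.95):
    qx = sorted(x)[int(u * len(x))]
    qy = sorted(y)[int(u * len(y))]
    joint = sum(1 for a, b in zip(x, y) if a > qx and b > qy)
    marginal = sum(1 for a in x if a > qx)
    return joint / marginal if marginal else 0.0

# Synthetic "returns": one independent pair, one strongly dependent pair.
n = 2000
x = [random.gauss(0, 1) for _ in range(n)]
y_indep = [random.gauss(0, 1) for _ in range(n)]
y_dep = [a + random.gauss(0, 0.3) for a in x]

print(chi_hat(x, y_indep), chi_hat(x, y_dep))
```

For the independent pair the estimate hovers near 1 − u = 0.05, while the dependent pair scores much higher; distinguishing genuine asymptotic dependence from dependence that vanishes in the limit requires the more careful EVT machinery the paper applies.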
NASA Astrophysics Data System (ADS)
Lee, J.
2013-12-01
Ground-Based Augmentation Systems (GBAS) support aircraft precision approach and landing by providing differential GPS corrections to aviation users. For GBAS applications, most ionospheric errors are removed by applying the differential corrections. However, ionospheric correction errors may still exist due to ionospheric spatial decorrelation between the GBAS ground facility and users. Thus, the standard deviation of ionospheric spatial decorrelation (σvig) is estimated and included in the computation of error bounds on the user position solution. The σvig of 4 mm/km, derived for the Conterminous United States (CONUS), bounds one-sigma ionospheric spatial gradients under nominal conditions (including active, but not stormy, conditions) with an adequate safety margin [1]. The conservatism of the current σvig, which is fixed to a constant value for all non-stormy conditions, could be mitigated by subdividing ionospheric conditions into several classes and using a different σvig for each class. This new concept, real-time σvig adaptation, will be possible if the level of ionospheric activity can be well classified based on space weather intensity. This paper studies the correlation between the statistics of nominal ionospheric spatial gradients and space weather indices. The analysis was carried out using two sets of data collected from the Continuously Operating Reference Station (CORS) network: 9 consecutive (nominal and ionospherically active) days in 2004 and 19 consecutive (relatively 'quiet') days in 2010. Precise ionospheric delay estimates are obtained using the simplified truth processing method, and vertical ionospheric gradients are computed using the well-known 'station pair method' [2]. The remaining biases, which include carrier-phase leveling errors and Inter-Frequency Bias (IFB) calibration errors, are reduced by applying linear slip detection thresholds. The σvig was inflated to overbound the distribution of vertical ionospheric gradients with the required confidence level.
Using the daily maximum values of σvig, day-to-day variations of spatial gradients are compared to those of two space weather indices: the Disturbance Storm Time (Dst) index and the Interplanetary Magnetic Field Bz (IMF Bz). The day-to-day variations of both space weather indices showed good agreement with those of the daily maximum σvig. The results demonstrate that ionospheric gradient statistics are highly correlated with space weather indices on nominal and off-nominal days. Further investigation of this relationship would facilitate predicting upcoming ionospheric behavior from space weather information and adjusting σvig in real time. Consequently, it would improve GBAS availability by adding external information to operations. [1] Lee, J., S. Pullen, S. Datta-Barua, and P. Enge (2007), Assessment of ionosphere spatial decorrelation for GPS-based aircraft landing systems, J. Aircraft, 44(5), 1662-1669, doi:10.2514/1.28199. [2] Jung, S., and J. Lee (2012), Long-term ionospheric anomaly monitoring for ground based augmentation systems, Radio Sci., 47, RS4006, doi:10.1029/2012RS005016.
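The station pair method and the overbounding inflation described above can be caricatured in a few lines. Everything below is synthetic and illustrative: the delay values, the 25 km baseline, and the k = 5.33 multiplier are stand-in assumptions, not the certified GBAS procedure:

```python
import random

random.seed(1)

# Station pair method (illustrative): vertical gradient is the delay
# difference between two nearby receivers divided by their baseline length.
def station_pair_gradient(delay_a_mm, delay_b_mm, baseline_km):
    return (delay_a_mm - delay_b_mm) / baseline_km

# Synthetic sample of nominal-day gradients in mm/km.
gradients = [station_pair_gradient(random.gauss(3000, 40),
                                   random.gauss(3000, 40), 25.0)
             for _ in range(5000)]

# Overbounding sketch: inflate sigma_vig until a zero-mean Gaussian with that
# sigma bounds the largest observed gradient at an assumed integrity
# multiplier k (criterion simplified for illustration).
def overbound_sigma(samples, k=5.33):
    """Smallest sigma such that max|sample| <= k * sigma."""
    return max(abs(s) for s in samples) / k

sigma_vig = overbound_sigma(gradients)
print(round(sigma_vig, 3))
```

In the real procedure the inflation targets a required confidence level on the whole gradient distribution, after the bias-reduction steps the abstract describes, rather than simply covering the sample maximum.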
Xu, Yidong; Qian, Chunxiang
2013-01-01
Based on meso-damage mechanics and finite element analysis, the aim of this paper is to demonstrate the feasibility of the Gurson–Tvergaard–Needleman (GTN) constitutive model in describing the tensile behavior of corroded reinforcing bars. The orthogonal test results showed that different fracture patterns and the related damage evolution processes can be simulated by choosing different material parameters of the GTN constitutive model. Compared with the failure parameters, the two constitutive parameters are significant factors affecting the tensile strength. Both the nominal yield strength and the ultimate tensile strength decrease markedly with increasing constitutive parameters. Combining the latest data with a trial-and-error method, suitable material parameters of the GTN constitutive model were adopted to simulate the tensile behavior of corroded reinforcing bars in concrete under carbonation environment attack. The numerical predictions not only agree very well with experimental measurements but also simplify the finite element modeling process. PMID:23342140
Liang, Shih-Hsiung; Walther, Bruno Andreas; Shieh, Bao-Sen
2017-01-01
Biological invasions have become a major threat to biodiversity, and identifying determinants underlying success at different stages of the invasion process is essential for both prevention management and testing ecological theories. To investigate variables associated with different stages of the invasion process in a local region such as Taiwan, potential problems using traditional parametric analyses include too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with too many missing values. We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 species escaped to the field successfully (introduction success); of these 95 introduced species, 36 species reproduced in the field of Taiwan successfully (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction stage and establishment stage. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models, namely, area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. 
Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that, for both the establishment model and introduction model, the number of invaded countries was the most important or second most important determinant, respectively. Therefore, we suggest that future success for introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged but either minimum or maximum values of species traits. Therefore, we suggest that in addition to averaged values, reproductive potential represented by minimum and maximum values of species traits should be considered in invasion studies.
Liang, Shih-Hsiung; Walther, Bruno Andreas
2017-01-01
Background Biological invasions have become a major threat to biodiversity, and identifying determinants underlying success at different stages of the invasion process is essential for both prevention management and testing ecological theories. To investigate variables associated with different stages of the invasion process in a local region such as Taiwan, potential problems using traditional parametric analyses include too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with too many missing values. Methods We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 species escaped to the field successfully (introduction success); of these 95 introduced species, 36 species reproduced in the field of Taiwan successfully (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction stage and establishment stage. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models, namely, area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. Results The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. 
The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. Discussion Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that, for both the establishment model and introduction model, the number of invaded countries was the most important or second most important determinant, respectively. Therefore, we suggest that future success for introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged but either minimum or maximum values of species traits. Therefore, we suggest that in addition to averaged values, reproductive potential represented by minimum and maximum values of species traits should be considered in invasion studies. PMID:28316893
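Of the five performance measures used in these model comparisons, AUROC has a particularly compact estimator via the Mann-Whitney rank identity; a minimal sketch follows (the labels and scores are toy values, not from the study):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen positive outranks a randomly
    chosen negative, counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: scores that rank positives mostly above negatives
# (e.g. 1 = established species, 0 = not established).
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auroc(labels, scores))  # 11 of 12 positive-negative pairs ordered correctly
```

AUROC's rank-based nature is one reason it suits comparisons across variable treatments: it is invariant to any monotone rescaling of the model's scores.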
Photochemical control of the distribution of Venusian water
NASA Astrophysics Data System (ADS)
Parkinson, Christopher D.; Gao, Peter; Esposito, Larry; Yung, Yuk; Bougher, Stephen; Hirtzig, Mathieu
2015-08-01
We use the JPL/Caltech 1-D photochemical model to solve the continuity-diffusion equation for atmospheric constituent abundances and total number density as a function of radial distance from the planet Venus. Photochemistry of the Venus atmosphere from 58 to 112 km is modeled using an updated and expanded chemical scheme (Zhang et al., 2010, 2012), guided by the results of recent observations; we mainly follow these references in our choice of boundary conditions for 40 species. We model water between 10 and 35 ppm at our 58 km lower boundary using an SO2 mixing ratio of 25 ppm as our nominal reference value. We then vary the SO2 mixing ratio at the lower boundary between 5 and 75 ppm while holding the water mixing ratio there at 18 ppm, and find that it can control the water distribution at higher altitudes. SO2 and H2O can regulate each other via the formation of H2SO4. In regions of high SO2 mixing ratios there exists a "runaway effect" in which SO2 is oxidized to SO3, which quickly soaks up H2O, causing a major depletion of water between 70 and 100 km. Eddy diffusion sensitivity studies characterizing variability due to mixing show less of an effect than varying the lower-boundary mixing ratio value. However, calculations using our nominal eddy diffusion profile multiplied and divided by a factor of four can give an order-of-magnitude maximum difference in the SO2 mixing ratio and a factor-of-a-few difference in the H2O mixing ratio when compared with the respective nominal mixing ratios for these two species. In addition to explaining some of the observed variability in SO2 and H2O on Venus, our work also sheds light on the observations of dark and bright contrasts at the Venus cloud tops observed in the ultraviolet.
Our calculations produce results in agreement with the SOIR Venus Express results of 1 ppm at 70-90 km (Bertaux et al., 2007) by using 25 ppm SO2 and 18 ppm water as our nominal reference values. Timescales for a chemical bifurcation causing a collapse of water concentrations above the cloud tops (>64 km) are relatively short, on the order of less than a few months, decreasing with altitude to less than a few days.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-29
... person or organization may nominate qualified persons to be considered for appointment to this advisory committee. Individuals may self-nominate. Nominations should be submitted in electronic format (preferred) following the instructions for ``Nominating Experts to the Chemical Assessment Advisory Committee'' provided...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-18
... nominate qualified persons to be considered for appointment to this advisory committee. Individuals may self-nominate. Nominations should be submitted in electronic format (preferred) following the instructions for ``Nominating Experts to the Chemical Assessment Advisory Committee'' provided on the SAB Web...
Implications of Systematic Nominator Missingness for Peer Nomination Data
ERIC Educational Resources Information Center
Babcock, Ben; Marks, Peter E. L.; van den Berg, Yvonne H. M.; Cillessen, Antonius H. N.
2018-01-01
Missing data are a persistent problem in psychological research. Peer nomination data present a unique missing data problem, because a nominator's nonparticipation results in missing data for other individuals in the study. This study examined the range of effects of systematic nonparticipation on the correlations between peer nomination data when…
Nominalizations in Spanish. Studies in Linguistics and Language Learning, Volume V.
ERIC Educational Resources Information Center
Falk, Julia Sableski
Using methods developed in transformational generative grammar, three types of nominal constructions in Spanish are treated in this paper: Fact nominalizations ("[El] Escribir es agradable"), Manner nominalizations ("El tocar [de la mujer] es agradable"), and Abstract noun nominalizations ("La construccion rapida de esta escuela es dudosa"). While…
Full-Scale Passive Earth Entry Vehicle Landing Tests: Methods and Measurements
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Kellas, Sotiris
2018-01-01
During the summer of 2016, a series of drop tests was conducted on two passive earth entry vehicle (EEV) test articles at the Utah Test and Training Range (UTTR). The tests were conducted to evaluate the structural integrity of a realistic EEV under anticipated landing loads. The test vehicles were lifted to an altitude of approximately 400 m by helicopter and released via a release hook into a predesignated 61 m landing zone. Onboard accelerometers measured vehicle free flight and impact loads. High-speed cameras on the ground tracked the free-falling vehicles, and the data were used to calculate critical impact parameters during the final seconds of flight. Additional sets of high-definition and ultra-high-definition cameras supplemented the high-speed data by capturing the release and free flight of the test articles. Three tests were successfully completed and showed that the passive vehicle design was able to withstand the impact loads from nominal and off-nominal impacts at landing velocities of approximately 29 m/s. Two of the three tests resulted in off-nominal impacts due to a combination of high winds at altitude and the method used to suspend the vehicle from the helicopter. Both the video and the acceleration data captured are examined and discussed. Finally, recommendations for improved release and instrumentation methods are presented.
Biomechanical response to ankle-foot orthosis stiffness during running.
Russell Esposito, Elizabeth; Choi, Harmony S; Owens, Johnny G; Blanck, Ryan V; Wilken, Jason M
2015-12-01
The Intrepid Dynamic Exoskeletal Orthosis (IDEO) is an ankle-foot orthosis developed to address the high rates of delayed amputation in the military. Its use has enabled many wounded Service Members to run again. During running, stiffness is thought to influence an orthosis' energy storage and return mechanical properties. This study examined the effect of orthosis stiffness on running biomechanics in patients with lower limb impairments who had undergone unilateral limb salvage. Ten patients with lower limb impairments underwent gait analysis at a self-selected running velocity. Three ankle-foot orthosis stiffnesses were tested: (1) nominal (clinically prescribed), (2) stiff (20% stiffer than nominal), and (3) compliant (20% less stiff than nominal). Ankle joint stiffness was greatest with the stiffest strut and lowest with the compliant strut; however, ankle mechanical work remained unchanged. Speed, stride length, cycle time, joint angles, moments, powers, and ground reaction forces were not significantly different among stiffness conditions. Ankle joint kinematics and ankle, knee and hip kinetics were different between limbs. Ankle power, in particular, was lower in the injured limb. Ankle-foot orthosis stiffness affected ankle joint stiffness but did not influence other biomechanical parameters of running in individuals with unilateral limb salvage. Foot strike asymmetries may have influenced the kinetics of running. Therefore, a range of stiffnesses may be clinically appropriate when prescribing ankle-foot orthoses for active individuals with limb salvage. Published by Elsevier Ltd.
Blanco, Juan Felipe; Tamayo, Silvana; Scatena, Frederick N
2014-04-01
Gastropods of the Neritinidae family exhibit an amphidromous life cycle and an impressive variability in shell coloration in Puerto Rican streams and rivers. Various nominal species have been described, but Neritina virginea [Linne 1758], N. punctulata [Lamarck 1816] and N. reclivata [Say 1822] are the only ones broadly reported. However, recent studies have shown that these three species are sympatric at the river scale and that species determination can be difficult due to the presence of intermediate color morphs. In total, 8,751 individuals were collected from ten rivers across Puerto Rico, and from various segments and habitats in the Mameyes River (the most pristine island-wide), during three years (2000-2003), and each was assigned to one of seven phenotypes corresponding to nominal species and morphs (non-nominal species). The "axial lines and dots" morph corresponding to N. reclivata was the most frequent island-wide, while the patelliform N. punctulata was scant but was the only species found in headwater reaches. The "yellowish large tongues" phenotype, typical of N. virginea s.s., was the most frequent at the river mouth. The frequency of secondary phenotypes varied broadly among rivers, along the rivers, and among habitats, seemingly influenced by salinity and predation gradients. The occurrence of individuals with coloration shifts after predation injuries suggests phenotypic plasticity in the three nominal species, and argues for the use of molecular markers to unravel the possible occurrence of a species complex and to understand the genetic basis of polymorphism. The longitudinal distribution of individual sizes, population density and egg capsules suggested the adaptive value of upstream migration, possibly to avoid marine predators.
NASA Astrophysics Data System (ADS)
Nazari, Mohammad; Hancock, B. Logan; Anderson, Jonathan; Hobart, Karl D.; Feygelson, Tatyana I.; Tadjer, Marko J.; Pate, Bradford B.; Anderson, Travis J.; Piner, Edwin L.; Holtz, Mark W.
2017-10-01
Studies of diamond material for thermal management are reported for a nominally 1-μm thick layer grown on silicon. Thickness of the diamond is measured using spectroscopic ellipsometry. Spectra are consistently modeled using a diamond layer taking into account surface roughness and requiring an interlayer of nominally silicon carbide. The presence of the interlayer is confirmed by transmission electron microscopy. Thermal conductivity is determined based on a heater which is microfabricated followed by back etching to produce a supported diamond membrane. Micro-Raman mapping of the diamond phonon is used to estimate temperature rise under known drive conditions of the resistive heater. Consistent values are obtained for thermal conductivity based on straightforward analytical calculation using phonon shift to estimate temperature and finite element simulations which take both temperature rise and thermal stress into account.
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2016-01-01
The Swift BAT LHP #0 primary heater controller failed on March 31, 2010, and was disabled. On October 31, 2015, the secondary heater controller of this LHP also failed. On November 1, 2015, the LHP #0 CC temperature increased to 18.6 °C, even though the secondary heater controller set point was 8.8 °C. This caused the average DM XA1 temperature to increase to 25.9 °C, 5 °C warmer than nominal. As a result, the detectors became noisy. To solve this problem, the LHP #1 secondary heater controller set point was decreased in 0.5 °C decrements to 2.2 °C. The set-point decrease restored the average DM XA1 temperature to a nominal value of 19.7 °C on November 21.
Detecting grouting quality of tendon ducts using the impact-echo method
NASA Astrophysics Data System (ADS)
Qu, Guangzhen; Sun, Min; Zhou, Guangli
2018-06-01
The performance, durability and safety of prestressed concrete bridges are directly affected by the compaction of grout in the prestressed tendon ducts. However, the ducts are hidden in the beam, and their grouting density is difficult to detect. In this paper, test models representing three different grouting-quality conditions were fabricated, and the impact-echo method was then used to detect the grouting quality of the tendon ducts. The study is summarized as follows: as the reflection time from the slab bottom and the nominal slab thickness increased, the degree of density increased; when testing from the half-hole of the web, the reflection time and nominal slab thickness were largest, and the reflection times of compacted and uncompacted tendon ducts differed. Finally, the method was verified on an engineering project, providing a reference for practice.
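For context, impact-echo thickness estimation conventionally rests on the frequency-thickness relation T = βCp/(2f); the abstract reports reflection times and nominal thicknesses rather than this exact formula, so the sketch below is a standard-form illustration with assumed values (wave speed, peak frequency, and the shape factor β are all stand-ins):

```python
# Standard impact-echo relation: the apparent (nominal) thickness follows
# from the P-wave speed Cp and the dominant reflection frequency f_peak,
# with a plate shape factor beta (commonly taken near 0.96 for slabs).
def apparent_thickness(cp_m_s, f_peak_hz, beta=0.96):
    """Apparent thickness T = beta * Cp / (2 * f_peak), in meters."""
    return beta * cp_m_s / (2.0 * f_peak_hz)

# Assumed example: Cp = 4000 m/s concrete, 8 kHz thickness-mode peak.
print(round(apparent_thickness(4000.0, 8000.0), 3))
```

A void in a duct lengthens the wave path (or lowers the reflection frequency), so the apparent thickness shifts away from the true slab thickness, which is the physical basis for the reflection-time comparisons in the study.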
NASA Astrophysics Data System (ADS)
Sazonov, V. V.
An analysis is made of a generalized conservative mechanical system whose equations of motion contain a large parameter characterizing local forces acting along certain generalized coordinates. It is shown that the equations have periodic solutions which are close to periodic solutions to the corresponding degenerate equations. As an example, the periodic motions of a satellite with respect to its center of mass due to gravitational and restoring aerodynamic moments are examined for the case where the aerodynamic moment is much larger than the gravitational moment. Such motions can be treated as nominal unperturbed motions of a satellite under conditions of single-axis aerodynamic attitude control.
Results of chopper-controlled discharge life cycling studies on lead acid batteries
NASA Technical Reports Server (NTRS)
Ewashinka, J. G.; Sidik, S. M.
1982-01-01
A group of 108 state-of-the-art, nominally 6-volt lead-acid batteries was tested in a program of one charge/discharge cycle per day for over two years or until ultimate battery failure. The primary objective was to determine battery cycle life as a function of depth of discharge (25 to 75 percent), chopper frequency (100 to 1000 Hz), duty cycle (25 to 87.5 percent), and average discharge current (20 to 260 A). The secondary objective was to determine which types of battery failure modes, if any, were due to the above parameters. The four parameters above were incorporated in a statistically designed test program.
Quantitative measure of the variation in fault rheology due to fluid-rock interactions
Blanpied, M.L.; Marone, C.J.; Lockner, D.A.; Byerlee, J.D.; King, D.P.
1998-01-01
We analyze friction data from two published suites of laboratory tests on granite in order to explore and quantify the effects of temperature (T) and pore water pressure (Pp) on the sliding behavior of faults. Rate-stepping sliding tests were performed on laboratory faults in granite containing "gouge" (granite powder), both dry at 23° to 845°C [Lockner et al., 1986], and wet (Pp = 100 MPa) at 23° to 600°C [Blanpied et al., 1991, 1995]. Imposed slip velocities (V) ranged from 0.01 to 5.5 μm/s, and effective normal stresses were near 400 MPa. For dried granite at all temperatures, and wet granite below ~300°C, the coefficient of friction (μ) shows low sensitivity to V, T, and Pp. For wet granite above ~350°C, μ drops rapidly with increasing T and shows a strong, positive rate dependence and protracted strength transients following steps in V, presumably reflecting the activity of a water-aided deformation process. By inverting strength data from velocity-stepping tests we determined values for parameters in three formulations of a rate- and state-dependent constitutive law. One or two state variables were used to represent slip history effects. Each velocity step yielded an independent set of values for the nominal friction level, five constitutive parameters (transient parameters a, b1, and b2 and characteristic displacements Dc1 and Dc2), and the velocity dependence of steady-state friction, ∂μss/∂ln V = a − b1 − b2. Below 250°C, data from dry and most wet tests are adequately modeled by using the "slip law" [Ruina, 1983] and one state variable (a = 0.003 to 0.018, b = 0.001 to +0.018, Dc ≈ 1 to 20 μm). Dried tests above 250°C can also be fitted with one state variable. In contrast, wet tests above 350°C require a higher direct rate dependence (a = 0.03 to 0.12), plus a second state variable with large, negative amplitude (b2 = −0.03 to −0.14) and large characteristic displacement (Dc2 = 300 to >4000 μm).
Thus the parameters a, b1, and b2 for wet granite show a pronounced change in their temperature dependence in the range 270° to 350°C, which may reflect a change in the underlying deformation mechanism. We quantify the trends in parameter values from 25° to 600°C by piecewise linear regressions, which provide a straightforward means to incorporate the full constitutive response of granite into numerical models of fault slip. The modeling results suggest that the susceptibility to unstable (stick-slip) sliding is maximized between 90° and 360°C, in agreement with laboratory observations and consistent with the depth range of earthquakes on mature faults in the continental crust.
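For reference, the rate- and state-dependent constitutive law referred to above is the standard Dieterich-Ruina framework; in the two-state-variable Ruina "slip law" form, with V* a reference velocity and μ* the nominal friction level:

```latex
% Two-state-variable Ruina ("slip") law (standard form):
\mu = \mu^{*} + a \ln\frac{V}{V^{*}} + \psi_1 + \psi_2,
\qquad
\frac{d\psi_i}{dt} = -\frac{V}{D_{ci}}\left[\psi_i + b_i \ln\frac{V}{V^{*}}\right],
\quad i = 1, 2.
% At steady state, psi_i = -b_i ln(V/V*), recovering the velocity
% dependence quoted in the abstract:
\mu_{ss} = \mu^{*} + (a - b_1 - b_2)\ln\frac{V}{V^{*}}
\quad\Longrightarrow\quad
\frac{\partial \mu_{ss}}{\partial \ln V} = a - b_1 - b_2.
```

A negative steady-state rate dependence (a − b1 − b2 < 0) is the condition for potentially unstable, velocity-weakening slip, which is why the fitted parameter trends translate directly into the stick-slip susceptibility discussed above.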
CHNO Energetic Polymer Specific Heat Prediction From The Proposed Nominal/Generic (N/G) CP Concept
2007-02-01
HMX can exist in different solid polymorphic forms. At a certain temperature, TT, one form may change to another form if the heat energy of...more than 100 °K for TNT, HNS and HMX and over 200 °K for TETRYL, PETN, and RDX ). So based on the above remarks and similar remarks in References...are very close to (or equal to) the RDX CP values and TNT CP values near absolute zero. In Reference 7, two examples (TNT and HMX ) were selected for
A Simple Ocean Bottom Hydrophone with 200 Megabyte Data Capacity
1993-06-01
...gain if needed. The filtered signal is then fed in parallel to the input of two fixed-gain amplifiers which provide gains of 9 dB and 35 dB...The values stored are the two gains and the two attenuation values - nominally +35, +9, -7 and -7 dB - see Figure 5. Table 5: Checking the time of
Mass Uncertainty and Application For Space Systems
NASA Technical Reports Server (NTRS)
Beech, Geoffrey
2013-01-01
Expected development maturity under contract (spec) should correlate with the Project/Program-approved MGA depletion schedule in the Mass Properties Control Plan. If the specification is an NTE (not-to-exceed) value, MGA is inclusive of Actual MGA (A5 & A6). If the specification is not an NTE (e.g., nominal), then MGA values are reduced by A5 values and A5 represents the remaining uncertainty. Basic Mass = engineering estimate based on design and construction principles with NO embedded margin. MGA Mass = Basic Mass * assessed % from the approved MGA schedule. Predicted Mass = Basic + MGA. Aggregate MGA % = (Aggregate Predicted - Aggregate Basic) / Aggregate Basic.
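The mass-bookkeeping relations above can be made concrete; the item masses and MGA fractions below are hypothetical examples:

```python
# Mass bookkeeping following the definitions in the text:
#   MGA Mass  = Basic Mass * assessed fraction from the MGA schedule
#   Predicted = Basic + MGA = Basic * (1 + fraction)
def predicted_mass(basic_kg, mga_fraction):
    return basic_kg * (1.0 + mga_fraction)

# Aggregate MGA % = (Aggregate Predicted - Aggregate Basic) / Aggregate Basic
def aggregate_mga_pct(items):
    basic = sum(b for b, _ in items)
    predicted = sum(predicted_mass(b, f) for b, f in items)
    return (predicted - basic) / basic

# Hypothetical subsystems: (basic mass in kg, assessed MGA fraction).
items = [(100.0, 0.10), (50.0, 0.30), (250.0, 0.05)]
print(round(aggregate_mga_pct(items), 4))
```

Note that the aggregate MGA percentage is a basic-mass-weighted mean of the per-item fractions, so a heavy, mature subsystem (low MGA) pulls the aggregate down even if immature items carry large allowances.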
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
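The article's central finding can be reproduced in miniature. The sketch below simulates sum scores on a short, difficult test (10 items, 10% success rate), removes |Z| > 2 outliers per group, and applies a Welch t test with a normal-approximation critical value; all of these choices are illustrative stand-ins for the authors' fuller simulations:

```python
import math
import random

random.seed(42)

def sum_score(items=10, p=0.1):
    """Skewed sum score: number of correct answers on a short, hard test."""
    return sum(random.random() < p for _ in range(items))

def drop_outliers(xs, z=2.0):
    """Remove values beyond a within-group |Z| threshold (common practice)."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    if sd == 0:
        return xs
    return [x for x in xs if abs((x - m) / sd) <= z]

def rejects(a, b, crit=1.96):
    """Welch t statistic, normal approximation to the critical value."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    return se > 0 and abs(ma - mb) / se > crit

def type1_rate(trials=2000, n=25, remove=False):
    hits = 0
    for _ in range(trials):
        a = [sum_score() for _ in range(n)]
        b = [sum_score() for _ in range(n)]
        if remove:
            a, b = drop_outliers(a), drop_outliers(b)
        hits += rejects(a, b)
    return hits / trials

rate_intact = type1_rate(remove=False)
rate_removed = type1_rate(remove=True)
print(rate_intact, rate_removed)
```

Because the null hypothesis is true by construction (both groups are drawn from the same distribution), any rejection is a Type I error; removing the upper tail of a skewed score distribution within each group inflates the rejection rate well above the nominal level, mirroring the article's result.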
NASA Astrophysics Data System (ADS)
2017-11-01
To deal with these problems investigators usually rely on a calibration method that makes use of a substance with an accurately known set of interatomic distances. The procedure consists of carrying out a diffraction experiment on the chosen calibrating substance, determining the value of the distances with use of the nominal (meter) value of the voltage, and then correcting the nominal voltage by an amount that produces the distances in the calibration substance. Examples of gases that have been used for calibration are carbon dioxide, carbon tetrachloride, carbon disulfide, and benzene; solids such as zinc oxide smoke (powder) deposited on a screen or slit have also been used. The question implied by the use of any standard molecule is, how accurate are the interatomic distance values assigned to the standard? For example, a solid calibrant is subject to heating by the electron beam, possibly producing unknown changes in the lattice constants, and polyatomic gaseous molecules require corrections for vibrational averaging ("shrinkage") effects that are uncertain at best. It has lately been necessary for us to investigate this matter in connection with on-going studies of several molecules in which size is the most important issue. These studies indicated that our usual method for retrieval of data captured on film needed improvement. The following is an account of these two issues - the accuracy of the distances assigned to the chosen standard molecule, and the improvements in our methods of retrieving the scattered intensity data.
Pearson, Richard
2011-03-01
To assess the possibility of estimating the refractive index of rigid contact lenses on the basis of measurements of their back vertex power (BVP) in air and when immersed in liquid. First, a spreadsheet model was used to quantify the magnitude of errors arising from simulated inaccuracies in the variables required to calculate refractive index. Then, refractive index was calculated from in-air and in-liquid measurements of the BVP of 21 lenses that had been made in three negative BVPs from materials with seven different nominal refractive index values. The power measurements were made by two operators on two occasions. Intraobserver reliability showed a mean difference of 0.0033±0.0061 (t = 0.544, P = 0.59), interobserver reliability showed a mean difference of 0.0043±0.0061 (t = 0.707, P = 0.48), and the mean difference between the nominal and calculated refractive index values was -0.0010±0.0111 (t = -0.093, P = 0.93). The spreadsheet prediction that low-powered lenses might be subject to greater errors in the calculated values of refractive index was substantiated by the experimental results. This method shows good intra- and interobserver reliability and can be used easily in a clinical setting to provide an estimate of the refractive index of rigid contact lenses having a BVP of 3 D or more.
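The abstract does not reproduce the working equation, but under a thin-lens approximation the calculation can be sketched as follows (an assumption-laden illustration, not the authors' spreadsheet model; `n_liq` is the known refractive index of the immersion liquid):

```python
# Hypothetical thin-lens sketch (not the paper's method): estimating the
# refractive index n of a lens from its back vertex power in air and in a
# liquid of known index n_liq.  For a thin lens, F = (n - n_medium) * S, where
# S = 1/r1 - 1/r2 depends only on the surface curvatures, so S cancels:
#   F_air / F_liq = (n - 1) / (n - n_liq)
#   =>  n = (n_liq * F_air - F_liq) / (F_air - F_liq)
def lens_index(f_air, f_liq, n_liq):
    """Estimate lens refractive index from in-air and in-liquid powers (diopters)."""
    return (n_liq * f_air - f_liq) / (f_air - f_liq)
```

Because the denominator F_air − F_liq is small for low-powered lenses, small measurement errors are amplified there, consistent with the abstract's finding that lenses under about 3 D give less reliable estimates.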
NASA Technical Reports Server (NTRS)
Glick, B. J.
1985-01-01
Techniques for classifying objects into groups or classes go under many different names including, most commonly, cluster analysis. Mathematically, the general problem is to find a best mapping of objects into an index set consisting of class identifiers. When an a priori grouping of objects exists, the process of deriving the classification rules from samples of classified objects is known as discrimination. When such rules are applied to objects of unknown class, the process is denoted classification. The specific problem addressed involves the group classification of a set of objects that are each associated with a series of measurements (ratio, interval, ordinal, or nominal levels of measurement). Each measurement produces one variable in a multidimensional variable space. Cluster analysis techniques are reviewed and methods for including geographic location, distance measures, and spatial pattern (distribution) as parameters in clustering are examined. For the case of patterning, measures of spatial autocorrelation are discussed in terms of the kind of data (nominal, ordinal, or interval scaled) to which they may be applied.
Solid rocket booster performance evaluation model. Volume 1: Engineering description
NASA Technical Reports Server (NTRS)
1974-01-01
The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.
Electrical and Optical Studies of Deep Levels in Nominally Undoped Thallium Bromide
NASA Astrophysics Data System (ADS)
Smith, Holland M.; Haegel, Nancy M.; Phillips, David J.; Cirignano, Leonard; Ciampi, Guido; Kim, Hadong; Chrzan, Daryl C.; Haller, Eugene E.
2014-02-01
Photo-induced conductivity transient spectroscopy (PICTS) and cathodoluminescence (CL) measurements were performed on nominally undoped detector grade samples of TlBr. In PICTS measurements, nine traps were detected in the temperature range 80-250 K using four-gate analysis. Five of the traps are tentatively identified as electron traps, and four as hole traps. CL measurements yielded two broad peaks common to all samples and most likely associated with defects. Correlations between the optically and electrically detected deep levels are considered. Above 250 K, the photoconductivity transients measured in the PICTS experiments exhibited anomalous transient behavior, indicated by non-monotonic slope variations as a function of time. The origin of the transients is under further investigation, but their presence precludes the accurate determination of trap parameters in TlBr above 250 K with traditional PICTS analysis. Their discovery was made possible by the use of a PICTS system that records whole photoconductivity transients, as opposed to reduced and processed signals.
FORGE Milford Triaxial Test Data and Summary from EGI labs
Joe Moore
2016-03-01
Six samples were evaluated in unconfined and triaxial compression; their data are included in separate Excel spreadsheets and summarized in the Word document. Three samples were plugged along the axis of the core (presumed to be nominally vertical) and three samples were plugged perpendicular to the axis of the core. A designation of "V" indicates vertical, i.e., the long axis of the plugged sample is aligned with the axis of the core. Similarly, "H" indicates a sample that is nominally horizontal and cut orthogonal to the axis of the core. Stress-strain curves were made before and after the testing and are included in the Word document. The confining pressure for this test was 2800 psi. A series of tests is being carried out to define a failure envelope, to provide representative hydraulic fracture design parameters, and to support future geomechanical assessments. The samples are from well 52-21, which reaches a maximum depth of 3581 ft +/- 2 ft into a gneiss complex.
NASA Technical Reports Server (NTRS)
Pei, Jing; Wall, John
2013-01-01
This paper describes the techniques involved in determining the aerodynamic stability derivatives for the frequency domain analysis of the Space Launch System (SLS) vehicle. Generally for launch vehicles, determination of the derivatives is fairly straightforward, since the aerodynamic data are usually linear through a moderate range of angle of attack. However, if the wind tunnel data lack proper corrections, then nonlinearities and asymmetric behavior may appear in the aerodynamic database coefficients. In this case, computing the derivatives becomes a non-trivial task. Errors in computing the nominal derivatives could lead to improper interpretation regarding the natural stability of the system and tuning of the controller parameters, which would impact both stability and performance. The aerodynamic derivatives are also provided at off-nominal operating conditions used for dispersed frequency domain Monte Carlo analysis. Finally, results are shown to illustrate that the effects of aerodynamic cross-axis coupling can be neglected for the SLS configuration studied.
NASA Technical Reports Server (NTRS)
Nelms, W. P., Jr.; Bailey, R. O.
1974-01-01
A computerized aircraft synthesis program has been used to assess the effects of various vehicle and mission parameters on the performance of an oblique, all-wing, remotely piloted vehicle (RPV) for the highly maneuverable, air-to-air combat role. The study mission consists of an outbound cruise, an acceleration phase, a series of subsonic and supersonic turns, and a return cruise. The results are presented in terms of both the required vehicle weight to accomplish this mission and the combat effectiveness as measured by turning and acceleration capability. This report describes the synthesis program, the mission, the vehicle, and results from sensitivity studies. An optimization process has been used to establish the nominal RPV configuration of the oblique, all-wing concept for the specified mission. In comparison to a previously studied conventional wing-body canard design for the same mission, this oblique, all-wing nominal vehicle is lighter in weight and has higher performance.
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Delgado, Reynolds; Poulin, Greg; Starc, Vito; Arenare, Brian; Rahman, M. A.
2006-01-01
Resting conventional ECG is notoriously insensitive for detecting coronary artery disease (CAD) and only nominally useful in screening for cardiomyopathy (CM). Similarly, conventional exercise stress test ECG is both time-consuming and labor-intensive, and its accuracy in identifying CAD is suboptimal for use in population screening. We retrospectively investigated the accuracy of several advanced resting electrocardiographic (ECG) parameters, both alone and in combination, for detecting CAD and CM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, each of which corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model that correlates the received and the determined utility parameter values with the corresponding weather parameter values.
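A minimal sketch of the workflow the abstract describes, with all implementation details assumed (the binning strategy, sparsity threshold, linear interpolation, and quadratic fit are illustrative choices, not the patented method):

```python
# Illustrative sketch (assumed details; the patent abstract gives no algorithm):
# bin utility readings by a weather parameter (e.g. temperature), flag sparsely
# observed ranges, determine values for them by interpolating between the
# well-observed bins, and fit a simple model on the combined data.
import numpy as np

def build_demand_model(temps, loads, bins, min_count=5):
    idx = np.digitize(temps, bins)           # assign each reading to a bin
    centers = (bins[:-1] + bins[1:]) / 2
    means = np.full(len(centers), np.nan)
    for b in range(1, len(bins)):
        sel = idx == b
        if sel.sum() >= min_count:           # enough data: use the observed mean
            means[b - 1] = loads[sel].mean()
    sparse = np.isnan(means)                 # ranges lacking sufficient data
    # Determine values for the sparse ranges from the well-observed ones.
    means[sparse] = np.interp(centers[sparse], centers[~sparse], means[~sparse])
    # Correlate observed + determined values with the weather parameter.
    coeffs = np.polyfit(centers, means, 2)
    return coeffs, means
```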
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
... duties. Selection of ISPAB members will not be limited to individuals who are nominated. Nominations that.... Selection of MEP Advisory Board members will not be limited to individuals who are nominated. Nominations... in its official role as the private sector policy advisor of the Institute is concerned. Each such...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-09
... duties. Selection of ISPAB members will not be limited to individuals who are nominated. Nominations that... to individuals who are nominated. Nominations that are received and meet the requirements will be... policy advisor of NIST is concerned. Each such report shall identify areas of program emphasis for NIST...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-08
... to individuals who are nominated. Nominations that are received and meet the requirements will be.... Selection of MEP Advisory Board members will not be limited to individuals who are nominated. Nominations... official role as the private sector policy adviser of NIST is concerned. Each such report shall identify...
SU-E-J-152: Evaluation of TrueBeam OBI V. 1.5 CBCT Performance in An Adaptive RT Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, S; Studenski, M; Giaddui, T
2014-06-01
Purpose: To evaluate the image quality and imaging dose of the Varian TrueBeam OBI v.1.5 CBCT system in a clinical adaptive radiation therapy environment, simulated by changing phantom thickness. Methods: Various OBI CBCT protocols (Head, Pelvis, Thorax, Spotlight) were used to acquire images of a Catphan 504 phantom (nominal phantom thickness and 10 cm additional phantom thickness). The images were analyzed for low-contrast detectability (CNR), uniformity (UI), and HU sensitivity. These results were compared with the same image sets for the planning CT (pCT) (GE LightSpeed 16-slice). Imaging dose measurements were performed with Gafchromic XR-QA2 film for various OBI protocols (Pelvis, Thorax, Spotlight) in a pelvic-sized phantom (nominal thickness and 4 cm additional thickness). Dose measurements were acquired in the interior and at the surface of the phantom. Results: The nominal CNR [additional-thickness CNR] for OBI was: Pelvis 1.45 [0.81], Thorax 0.86 [0.48], Spotlight 0.67 [0.39], Head 0.28 [0.10]. The nominal CNR [additional-thickness CNR] for pCT was: Pelvis 0.87 [0.41], Head 0.60 [0.22]. The nominal UI [additional-thickness UI] for OBI was: Pelvis 11.5 [24.1], Thorax 17.0 [20.6], Spotlight 23.2 [23.2], Head 15.6 [59.9]. The nominal UI [additional-thickness UI] for pCT was: Pelvis 9.2 [8.6], Head 2.1 [2.9]. The HU difference (averaged over all material inserts) between nominal and additional-thickness scans for OBI was 8.26 HU (Pelvis), 33.39 HU (Thorax), 178.98 HU (Head), 108.20 HU (Spotlight); for pCT, 16.00 HU (Pelvis), 19.85 HU (Head). Uncertainties in electron density were calculated based on HU values with varying phantom thickness. Average electron-density deviations (ρ(water) = 1) for GE-Pelvis, GE-Head, OBI-Pelvis, OBI-Thorax, OBI-Spotlight, and OBI-Head were 0.0182, 0.0180, 0.0058, 0.0478, 0.2750, and 0.3115, respectively. The average phantom interior dose (OBI, nominal) was 2.35 cGy (Pelvis), 0.60 cGy (Thorax), 1.87 cGy (Spotlight); with increased thickness, 1.77 cGy (Pelvis), 0.43 cGy (Thorax), 1.53 cGy (Spotlight). The average surface dose (OBI, nominal) was 2.29 cGy (Pelvis), 0.56 cGy (Thorax), 1.79 cGy (Spotlight); with increased thickness, 1.94 cGy (Pelvis), 0.48 cGy (Thorax), 1.47 cGy (Spotlight). Conclusion: The OBI Pelvis protocol offered CNR and HU constancy comparable to pCT for each geometry; other protocols, particularly Spotlight and Head, exhibited lower HU constancy and CNR. The uniformity of pCT was superior to OBI for all protocols. CNR and UI were degraded for both systems/scan types with increased thickness. The OBI interior dose decreased by approximately 30% with additional thickness. This work was funded, in part, under a grant with the Pennsylvania Department of Health. The Department of Health specifically disclaims responsibility for any analyses, interpretations, or conclusions.
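For reference, CNR and UI figures of this kind are typically computed from ROI statistics; a minimal sketch under commonly used (but here assumed) definitions:

```python
# Illustrative definitions (assumed; the abstract does not give its formulas):
# contrast-to-noise ratio and a uniformity index from ROI statistics of a
# CT/CBCT image, as commonly computed for Catphan-type phantom modules.
import numpy as np

def cnr(roi, background):
    """CNR = |mean(ROI) - mean(background)| / std(background)."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)

def uniformity_index(center_roi, edge_roi):
    """UI (%) = 100 * |mean(edge ROI) - mean(center ROI)| / mean(center ROI)."""
    return 100.0 * abs(np.mean(edge_roi) - np.mean(center_roi)) / np.mean(center_roi)
```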
Bread in a Bag. Teacher's Packet. Revised Edition.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Education, Oklahoma City.
This unit is designed to familiarize students in grades 3-6 with wheat production; teach them the nutritional value of wheat products and their role in a well-balanced diet; and give them an easy, hands-on experience in bread making with a nominal amount of cleanup for teachers. The kit suggests that in the first week, teachers discuss wheat…
A novel compact low impedance Marx generator with quasi-rectangular pulse output
NASA Astrophysics Data System (ADS)
Liu, Hongwei; Jiang, Ping; Yuan, Jianqiang; Wang, Lingyun; Ma, Xun; Xie, Weiping
2018-04-01
In this paper, a novel low-impedance compact Marx generator with near-square pulse output, based on Fourier theory, is developed. Compared with the traditional Marx generator, capacitors of different capacitances have been used. It can generate a high-voltage quasi-rectangular pulse with a width of 100 ns into a low-impedance load, and it also has high energy density and power density. The generator consists of 16 modules. Each module comprises an integrative single-ended plastic-case capacitor with a nominal value of 54 nF, four ceramic capacitors with a nominal value of 1.5 nF, a gas switch, a charging inductor, a grounding inductor, and insulators which provide mechanical support for all elements. In each module, discharge currents with different periods from the different capacitors add in the main circuit to form a quasi-rectangular pulse. The design process of the generator is analyzed, and test results are provided. The generator achieved a pulse output with a rise time of 32 ns, pulse width of 120 ns, flat-top width (95%-95%) of 50 ns, voltage of 550 kV, and power of 20 GW.
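The Fourier idea behind the design can be illustrated independently of the circuit: summing odd harmonics with 1/k amplitudes converges to a rectangular pulse, which is the role played by the differing capacitor discharge periods. A hypothetical sketch (not the authors' circuit model):

```python
# Illustrative Fourier-synthesis sketch (not the generator's circuit model):
# a quasi-rectangular waveform is approximated by summing odd harmonics of a
# fundamental with 1/(2k-1) amplitudes, which the differing discharge periods
# of the module's capacitors emulate in the main circuit.
import numpy as np

def quasi_rectangular(t, period, n_harmonics):
    """Partial Fourier sum of a unit square wave at time t."""
    w = 2 * np.pi / period
    return (4 / np.pi) * sum(np.sin((2 * k - 1) * w * t) / (2 * k - 1)
                             for k in range(1, n_harmonics + 1))
```

With more harmonics the flat top becomes flatter; the hardware analogue is that adding capacitors with suitably chosen periods flattens the output pulse.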
Liszka, Małgorzata; Stolarczyk, Liliana; Kłodowska, Magdalena; Kozera, Anna; Krzempek, Dawid; Mojżeszek, Natalia; Pędracka, Anna; Waligórski, Michael Patrick Russell; Olko, Paweł
2018-01-01
To evaluate the effect on charge collection in the ionization chamber (IC) in proton pencil beam scanning (PBS), where the local dose rate may exceed the dose rates encountered in conventional MV therapy by up to three orders of magnitude. We measured values of the ion recombination (k_s) and polarity (k_pol) correction factors in water, for a plane-parallel Markus TM23343 IC, using the cyclotron-based Proteus-235 therapy system with an active proton PBS of energies 30-230 MeV. Values of k_s were determined from extrapolation of the saturation curve and the Two-Voltage Method (TVM), for planar fields. We compared our experimental results with those obtained from theoretical calculations. The PBS dose rates were estimated by combining direct IC measurements with results of simulations performed using the FLUKA MC code. Values of k_s were also determined by the TVM for uniformly irradiated volumes over different ranges and modulation depths of the proton PBS, with or without range shifter. By measuring charge collection efficiency versus applied IC voltage, we confirmed that, with respect to ion recombination, our proton PBS represents a continuous beam. For a given chamber parameter, e.g., nominal voltage, the value of k_s depends on the energy and the dose rate of the proton PBS, reaching c. 0.5% for the TVM, at the dose rate of 13.4 Gy/s. For uniformly irradiated regular volumes, the k_s value was significantly smaller, within 0.2% or 0.3% for irradiations with or without range shifter, respectively. Within measurement uncertainty, the average value of k_pol, for the Markus TM23343 IC, was close to unity over the whole investigated range of clinical proton beam energies. While no polarity effect was observed for the Markus TM23343 IC in our pencil scanning proton beam system, the effect of volume recombination cannot be ignored. © 2017 American Association of Physicists in Medicine.
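For context, the two-voltage method for a continuous beam and the polarity correction are commonly written as below (standard dosimetry formulas in the style of IAEA TRS-398; the paper's exact procedure may differ):

```python
# Standard continuous-beam dosimetry formulas (TRS-398-style sketch, not
# necessarily the exact procedure of this paper).

def ks_two_voltage(v1, v2, m1, m2):
    """Ion recombination correction k_s for a continuous beam via the
    two-voltage method: readings m1, m2 at voltages v1 > v2."""
    r = (v1 / v2) ** 2
    return (r - 1.0) / (r - m1 / m2)

def k_pol(m_plus, m_minus, m):
    """Polarity correction: k_pol = (|M+| + |M-|) / (2|M|)."""
    return (abs(m_plus) + abs(m_minus)) / (2.0 * abs(m))
```

With identical readings at both voltages (no recombination) k_s is exactly 1, and it grows above 1 as the reading at the lower voltage falls, matching the ~0.5% effect the authors report at 13.4 Gy/s.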
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
... will not be limited to individuals who are nominated. Nominations that are received and meet the... individuals who are nominated. Nominations that are received and meet the requirements will be kept on file to... in its official role as the private sector policy advisor of the Institute is concerned. Each such...
Initial report on the photometric study of Vestoids from Modra
NASA Astrophysics Data System (ADS)
Galád, A.; Gajdoš, Š.; Világi, J.
2014-07-01
Our new survey with a 0.6-m f/5.5 telescope, starting in August 2012, is intended to enlarge the sample of V-type asteroids studied photometrically. It is focused on objects with unknown rotation periods. Due to some limitations of the facility, exposure times are usually only 60 s and only a clear filter is used. About 12 vestoids with previously unknown rotation periods can be studied in detail during one season (from August to May) in Modra (though in some cases the period is still not determined). The list of studied targets during the first two seasons is available at http://www.fmph.uniba.sk/index.php?id=3161. Lightcurves are roughly linked using the Carlsberg Meridian Catalogue 14 (CMC14) stars in the field of view to about 0.05 mag accuracy. The slope parameter G is assumed to be as high as 0.3-0.4. When the observations cover a wide range of phase angles and the rotation period can be determined (however, not in the case of tumblers), the G value is roughly determined. In some cases, even higher values provide a better match to the lightcurve data. In one case, the best nominal value is formally lower, but the uncertainty is large. To date, we have detected two binary candidates having attenuation(s) in lightcurves. Lightcurves of a few targets indicate tumbling. Study of rotational properties of Vestoids is a long-term process. To speed it up, we would appreciate collaboration with other research groups and/or volunteers.
Friction measurements in piston-cylinder apparatus using quartz-coesite reversible transition
NASA Technical Reports Server (NTRS)
Akella, J.
1979-01-01
The value of friction determined by monitoring piston displacement as a function of nominal pressure on compression and decompression cycles at 1273 K is compared with the friction value obtained by reversing the quartz-coesite transition at 1273 and 1073 K in a talc-glass-alsimag cell (Akella and Kennedy, 1971) and a low-friction salt cell (Mirwald et al., 1975). Quenching runs at 1273 K gave double values of friction of 0.25 GPa for the talc-glass-alsimag cell and 0.03 GPa for the salt cell. The piston-displacement technique gave somewhat higher values. Use of piston-displacement hysteresis loops in evaluating the actual pressure on a sample may lead to overestimates for decompression runs and underestimates for compression runs.
Conceptual and linguistic representations of kinds and classes
Prasada, Sandeep; Hennefield, Laura; Otap, Daniel
2013-01-01
We investigate the hypothesis that our conceptual systems provide two formally distinct ways of representing categories by investigating the manner in which lexical nominals (e.g., tree, picnic table) and phrasal nominals (e.g., black bird, birds that like rice) are interpreted. Four experiments found that lexical nominals may be mapped onto kind representations whereas phrasal nominals map onto class representations but not kind representations. Experiment 1 found that phrasal nominals, unlike lexical nominals, are mapped onto categories whose members need not be of a single kind. Experiments 2 and 3 found that categories named by lexical nominals enter into both class inclusion and kind hierarchies and thus support both class inclusion (is a) and kind specification (kind of) relations, whereas phrasal nominals map onto class representations which support only class inclusion relations. Experiment 4 showed that the two types of nominals represent hierarchical relations in different ways. Phrasal nominals (e.g., white bear) are mapped onto classes that have criteria of membership in addition to those specified by the class picked out by the head noun of the phrase (e.g., bear). In contrast, lexical nominals (e.g., polar bear) specify one way to meet the criteria specified by the more general kind concept (e.g., bear). Implications for the language-conceptual system interface, representation of hierarchical relations, lexicalization, and theories of conceptual combination are discussed. PMID:22671567
7 CFR 917.18 - Nomination of commodity committee members of the Control Committee.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Administrative Bodies § 917.18 Nomination of commodity committee members of the Control Committee. Nominations... committee shall be entitled to nominate shall be based upon the proportion that the previous three fiscal...
7 CFR 917.18 - Nomination of commodity committee members of the Control Committee.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Administrative Bodies § 917.18 Nomination of commodity committee members of the Control Committee. Nominations... committee shall be entitled to nominate shall be based upon the proportion that the previous three fiscal...
7 CFR 917.18 - Nomination of commodity committee members of the Control Committee.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Administrative Bodies § 917.18 Nomination of commodity committee members of the Control Committee. Nominations... committee shall be entitled to nominate shall be based upon the proportion that the previous three fiscal...
7 CFR 917.18 - Nomination of commodity committee members of the Control Committee.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Administrative Bodies § 917.18 Nomination of commodity committee members of the Control Committee. Nominations... committee shall be entitled to nominate shall be based upon the proportion that the previous three fiscal...
7 CFR 917.18 - Nomination of commodity committee members of the Control Committee.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Administrative Bodies § 917.18 Nomination of commodity committee members of the Control Committee. Nominations... committee shall be entitled to nominate shall be based upon the proportion that the previous three fiscal...
GBAS GAST D availability analysis for business aircraft
NASA Astrophysics Data System (ADS)
Dvorska, J.; Lipp, A.; Podivin, L.; Duchet, D.
This paper analyzes Initial GBAS GAST D availability at a set of current ILS CAT III airports. The Eurocontrol Pegasus Availability Tool, designed for SESAR projects, is used for the assessment. Overall availability of the GBAS GAST D system is considered, focusing on business aircraft specifics where applicable. Nominal as well as adverse scenarios are presented in order to determine whether GAST D can reach the required availability for business aircraft at CAT III airport locations and under which conditions. The availability target was set at 99.9% availability when considering satellite outages in a given constellation and 99.997% when no outages are included. Sensitivity simulations were run for different scenarios, and the impacts of geometry screening thresholds, scale heights, aircraft mask, ground mask, sigma pseudorange ground, sigma ionospheric gradient, simulated year, and different approach point (decision height) were analyzed. Some were run for a limited set of ILS CAT III airports and most of them for an almost complete set of nominal airports. Business aircraft specific assumptions, as well as aircraft type independent parameters (constellations, satellite outages, etc.), are examined in the paper. The conclusion summarizes the overall outcome of the simulations, showing that Initial GBAS CAT II/III can provide sufficient availability for all or almost all ILS CAT III capable airports considered in this study, and under which conditions. Recommendations for parameters that can be influenced (e.g. antenna location) if necessary are provided. It can also be expected that the availability will increase with the increasing number of GNSS satellites. The work shows how different parameters impact availability of initial GBAS GAST D service for business aircraft, and that sufficient availability of GAST D service can be expected at most airports.
Contact lens material characteristics associated with hydrogel lens dehydration.
Ramamoorthy, Padmapriya; Sinnott, Loraine T; Nichols, Jason J
2010-03-01
To determine the association between material dehydration and hydrogel contact lens material characteristics, including water content and ionicity. Water content and refractive index data were derived from automated refractometry measurements of worn hydrogel contact lenses of 318 participants in the Contact Lens and Dry Eye Study (CLADES). Dehydration was determined in two ways: as the difference between nominal and measured (1) water content and (2) refractive index. Multiple regression models were used to examine the relation between dehydration and material characteristics, controlling for tear osmolality. The overall measured and nominal water content values were 52.58 +/- 7.49% and 56.88 +/- 7.81%, respectively, while the measured and nominal refractive indices were 1.429 +/- 0.015 and 1.410 +/- 0.017. High water content and ionic hydrogel lens materials were associated with greater dehydration (p < 0.0001 for both) than low water content and non-ionic materials. When dehydration was assessed as the difference in refractive index, only high water content was associated with dehydration (p < 0.0001). High water content and ionic characteristics of hydrogel lens materials are associated with hydrogel lens dehydration, with the former being more strongly associated. Such dehydration changes could in turn lead to important clinical ramifications such as reduced oxygen transmissibility, greater lens adherence, and reduced tear exchange.
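A minimal sketch of the two quantities involved (variable names and the least-squares formulation are assumptions; the study used multiple regression models controlling for tear osmolality):

```python
# Illustrative sketch (assumed details, not the study's analysis code).
import numpy as np

def dehydration(nominal_wc, measured_wc):
    """Dehydration as nominal minus measured water content,
    e.g. 56.88 - 52.58 = 4.30 percentage points from the abstract."""
    return nominal_wc - measured_wc

def fit_dehydration(X, y):
    """Ordinary least squares of dehydration y on predictor columns X
    (e.g. water content, ionicity, tear osmolality), with an intercept."""
    design = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta
```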
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fondeur, F.; Taylor-Pashow, K.
Savannah River National Laboratory (SRNL) received two sets of Solvent Hold Tank (SHT) samples (MCU-15-389 and MCU-15-390, pulled on February 23, 2015, and MCU-15-439, MCU-15-440, and MCU-15-441, pulled on February 28, 2015) for analysis. The samples in each set were combined and analyzed for composition. Analysis of the composite samples MCU-15-389-390 and MCU-15-439-440-441 indicated a low concentration (~92 to 93% of nominal) of the suppressor (TiDG) and slightly below nominal concentrations of the extractant (MaxCalix), but nominal levels of the modifier (CS-7SB) and of the Isopar™ L. This analysis confirms the addition of TiDG, MaxCalix, and modifier to the solvent on February 22, 2015. Although the values are below the target component levels, the current levels of TiDG and MaxCalix are sufficient for continuing operation without adding a trim at this time. No impurities above the 1000 ppm level were found in this solvent. However, the p-nut vials that delivered the samples contained small (1 mm) droplets of oxidized modifier and amides. The laboratory will continue to monitor the quality of the solvent, in particular for any new impurity or degradation of the solvent components.
NASA Astrophysics Data System (ADS)
Açıkgöz, Muhammed; Rudowicz, Czesław; Gnutek, Paweł
2017-11-01
Theoretical investigations are carried out to determine the temperature dependence of the local structural parameters of Cr3+ and Mn2+ ions doped into RAl3(BO3)4 (RAB, R = Y, Eu, Tm) crystals. The zero-field splitting (ZFS) parameters (ZFSPs) obtained from the spin Hamiltonian (SH) analysis of EMR (EPR) spectra serve for fine-tuning the theoretically predicted ZFSPs obtained using the semi-empirical superposition model (SPM). The SPM analysis enables determination of the local structure changes around the Cr3+ and Mn2+ centers in RAB crystals and explains the observed temperature dependence of the ZFSPs. The local monoclinic C2 site symmetry of all Al sites in YAB necessitates consideration of one non-zero monoclinic ZFSP (in the Stevens notation, b_2^1) for Cr3+ ions. However, the experimental second-rank ZFSPs (D = b_2^0, E = (1/3) b_2^2) were expressed in a nominal principal axis system. To provide additional insight into low-symmetry aspects, the distortions (ligand distances ΔR_i and angular distortions Δθ_i) have been varied while preserving monoclinic site symmetry, in such a way as to obtain calculated values of (D, E) close to the experimental ones while keeping b_2^1 close to zero. This procedure yields good matching of the calculated ZFSPs with the experimental ones and enables determination of the corresponding local distortions. The present results may be useful in future studies aimed at technological applications of the Huntite-type borates with the formula RM3(BO3)4. The model parameters determined here may be utilized for ZFSP calculations for Cr3+ and Mn2+ ions at octahedral sites in single-molecule magnets and single-chain magnets.
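The SPM relation underlying this kind of analysis is conventionally written as follows (the general superposition-model expression, not this paper's specific parameterization):

```latex
% Superposition-model expression for the zero-field-splitting parameters:
% \bar{b}_k is the intrinsic parameter at reference distance R_0, t_k the
% power-law exponent, and K_k^q the coordination factor evaluated at the
% polar angles (\theta_i, \phi_i) of the i-th ligand at distance R_i.
b_k^q \;=\; \sum_{i=1}^{n} \bar{b}_k(R_0)\left(\frac{R_0}{R_i}\right)^{t_k} K_k^q(\theta_i,\phi_i)
```

Varying the ΔR_i and Δθ_i distortions in this sum is what allows the calculated (D, E) to be tuned toward the experimental values while b_2^1 is held near zero.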
44 CFR 150.3 - Nomination process.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Section 150.3 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF... Nomination process. (a) The Nominating Officials nominating Firefighters and Civil Defense Officers shall...) Civil defense officer (or member of a recognized civil defense or emergency preparedness organization...
77 FR 19056 - Information Reporting Program Advisory Committee (IRPAC); Nominations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-29
... DEPARTMENT OF TREASURY Internal Revenue Service Information Reporting Program Advisory Committee (IRPAC); Nominations AGENCY: Internal Revenue Service, Department of Treasury. ACTION: Request for Nominations. SUMMARY: The Internal Revenue Service (IRS) requests nominations of individuals for selection to...
78 FR 19582 - Information Reporting Program Advisory Committee (IRPAC); Nominations
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-01
... DEPARTMENT OF TREASURY Internal Revenue Service Information Reporting Program Advisory Committee (IRPAC); Nominations AGENCY: Internal Revenue Service, Department of Treasury. ACTION: Request for Nominations. SUMMARY: The Internal Revenue Service (IRS) requests nominations of individuals for selection to...
ITGB5 and AGFG1 variants are associated with severity of airway responsiveness.
Himes, Blanca E; Qiu, Weiliang; Klanderman, Barbara; Ziniti, John; Senter-Sylvia, Jody; Szefler, Stanley J; Lemanske, Robert F; Zeiger, Robert S; Strunk, Robert C; Martinez, Fernando D; Boushey, Homer; Chinchilli, Vernon M; Israel, Elliot; Mauger, David; Koppelman, Gerard H; Nieuwenhuis, Maartje A E; Postma, Dirkje S; Vonk, Judith M; Rafaels, Nicholas; Hansel, Nadia N; Barnes, Kathleen; Raby, Benjamin; Tantisira, Kelan G; Weiss, Scott T
2013-08-28
Airway hyperresponsiveness (AHR), a primary characteristic of asthma, involves increased airway smooth muscle contractility in response to certain exposures. We sought to determine whether common genetic variants were associated with AHR severity. A genome-wide association study (GWAS) of AHR, quantified as the natural log of the dosage of methacholine causing a 20% drop in FEV1, was performed with 994 non-Hispanic white asthmatic subjects from three drug clinical trials: CAMP, CARE, and ACRN. Genotyping was performed on Affymetrix 6.0 arrays, and imputed data based on HapMap Phase 2 were used to measure the association of SNPs with AHR using a linear regression model. Replication of primary findings was attempted in 650 white subjects from DAG and 3,354 white subjects from LHS. Evidence that the top SNPs were eQTL of their respective genes was sought using expression data available for 419 white CAMP subjects. The top primary GWAS associations were for rs848788 (P-value 7.2E-07) and rs6731443 (P-value 2.5E-06), located within the ITGB5 and AGFG1 genes, respectively. The AGFG1 result replicated at a nominally significant level in one independent population (LHS P-value 0.012), and the SNP had a nominally significant unadjusted P-value (0.0067) for being an eQTL of AGFG1. Based on current knowledge of ITGB5 and AGFG1, our results suggest that variants within these genes may be involved in modulating AHR. Future functional studies are required to confirm that our associations represent true biologically significant findings.
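The per-SNP test run genome-wide in such a study is an ordinary least-squares regression of the phenotype on genotype dosage. A minimal sketch, using simulated data only (the effect size, sample size, and seed are illustrative, not from the study):

```python
import math
import numpy as np

# Single-SNP association test: OLS of the phenotype (here ln PC20, the
# natural log of the methacholine dose causing a 20% FEV1 drop) on allele
# dosage (0/1/2). Simulated data for illustration; true beta = -0.4.
rng = np.random.default_rng(0)
n = 500
dosage = rng.integers(0, 3, size=n).astype(float)
ln_pc20 = 1.0 - 0.4 * dosage + rng.normal(0.0, 1.0, size=n)

X = np.column_stack([np.ones(n), dosage])            # intercept + SNP term
beta, _, _, _ = np.linalg.lstsq(X, ln_pc20, rcond=None)
resid = ln_pc20 - X @ beta
sigma2 = resid @ resid / (n - 2)                     # residual variance
se = math.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t = beta[1] / se
p = math.erfc(abs(t) / math.sqrt(2.0))               # two-sided normal approx.
print(f"beta = {beta[1]:.3f}, p = {p:.2e}")
```

In a real GWAS this test is repeated per SNP (typically with covariates such as ancestry principal components) and the genome-wide threshold, not a nominal 0.05, governs significance.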
Nolte, Frederick S.; Boysza, Jodi; Thurmond, Cathy; Clark, W. Scott; Lennox, Jeffrey L.
1998-01-01
The performance characteristics of an enhanced-sensitivity branched-DNA assay (bDNA) (Quantiplex HIV-1 version 2.0; Chiron Corp., Emeryville, Calif.) and a reverse transcription (RT)-PCR assay (AMPLICOR HIV-1 Monitor; Roche Diagnostic Systems, Inc., Branchburg, N.J.) were compared in a molecular diagnostic laboratory. Samples used in this evaluation included linearity and reproducibility panels made by dilution of a human immunodeficiency virus type 1 (HIV-1) stock culture of known virus particle count in HIV-1-negative plasma, a subtype panel consisting of HIV-1 subtypes A through F at a standardized level, and 64 baseline plasma specimens from HIV-1-infected individuals. Plots of log10 HIV RNA copies per milliliter versus log10 nominal virus particles per milliliter demonstrated that both assays were linear over the stated dynamic ranges (bDNA, r = 0.98; RT-PCR, r = 0.99), but comparison of the slopes of the regression lines (bDNA, m = 0.96; RT-PCR, m = 0.83) suggested that RT-PCR had greater proportional systematic error. The between-run coefficients of variation for bDNA and RT-PCR were 24.3 and 34.3%, respectively, for a sample containing 1,650 nominal virus particles/ml and 44.0 and 42.7%, respectively, for a sample containing 165 nominal virus particles/ml. Subtypes B, C, and D were quantitated with similar efficiencies by bDNA and RT-PCR; however, RT-PCR was less efficient in quantitating subtypes A, E, and F. One non-B subtype was recognized in our clinical specimens based on the ratio of values obtained with the two methods. HIV-1 RNA was quantitated in 53 (83%) baseline plasma specimens by bDNA and in 55 (86%) specimens by RT-PCR. RT-PCR values were consistently greater than bDNA values, with population means of 142,419 and 67,580 copies/ml, respectively (P < 0.01). 
The results were highly correlated (r = 0.91), but the agreement was poor (mean difference in log10 copies per milliliter ± 2 standard deviations, 0.45 ± 0.61) for the 50 clinical specimens that gave discrete values with both methods. PMID:9508301
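The distinction drawn above between correlation and agreement is the standard Bland-Altman analysis: two assays can correlate strongly yet disagree systematically. A small sketch on simulated paired measurements (the offset and noise level are illustrative, chosen to mimic the reported 0.45 log10 bias, not the study's data):

```python
import numpy as np

# Correlation vs. agreement for paired log10 viral-load measurements.
# Simulated: RT-PCR reads systematically ~0.45 log10 higher than bDNA.
rng = np.random.default_rng(1)
bdna = rng.uniform(3.0, 6.0, size=50)                  # log10 copies/ml
rtpcr = bdna + 0.45 + rng.normal(0.0, 0.3, size=50)    # offset + noise

r = np.corrcoef(bdna, rtpcr)[0, 1]                     # high correlation...
diff = rtpcr - bdna
bias = diff.mean()                                     # ...but biased
loa = 2.0 * diff.std(ddof=1)                           # limits of agreement
print(f"r = {r:.2f}, bias = {bias:.2f} +/- {loa:.2f} log10 copies/ml")
```

The practical consequence, as the abstract implies, is that longitudinal viral-load monitoring should stick to one assay rather than mixing values across methods.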
NASA Technical Reports Server (NTRS)
Bathker, D. A.; Slobin, S. D.
1989-01-01
The measured Deep Space Network (DSN) 70-meter antenna performance at S- and X-bands is compared with the design expectations. A discussion of natural radio-source calibration standards is given. New estimates of DSN 64-meter antenna performance are given, based on improved values of calibration source flux and size correction. A comparison of the 64- and 70-meter performances shows that average S-band peak gain improvement is 1.94 dB, compared with a design expectation of 1.77 dB. At X-band, the average peak gain improvement is 2.12 dB, compared with the (coincidentally similar) design expectation of 1.77 dB. The average measured 70-meter S-band peak gain exceeds the nominal design-expected gain by 0.02 dB; the average measured 70-meter X-band peak gain is 0.14 dB below the nominal design-expected gain.
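A quick decibel check puts the reported numbers in context: the aperture-area increase from 64 m to 70 m accounts for only part of the measured gain improvement, so the remainder must come from improved aperture efficiency (surface accuracy, feed/optics changes). This arithmetic is an illustration, not from the report:

```python
import math

# Antenna gain scales with aperture area, i.e. (D/D0)^2 in linear terms,
# so the diameter increase alone contributes 20*log10(70/64) dB.
area_db = 20.0 * math.log10(70.0 / 64.0)
measured_s_band = 1.94                      # dB, S-band improvement above
efficiency_gain = measured_s_band - area_db # remainder attributed to efficiency
print(f"area alone: {area_db:.2f} dB, implied efficiency gain: "
      f"{efficiency_gain:.2f} dB")
```

Aperture area alone yields about 0.78 dB, so more than half of the measured 1.94 dB S-band improvement reflects efficiency gains rather than sheer size.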
Hirani, Zakir M; Decarolis, James F; Lehman, Geno; Adham, Samer S; Jacangelo, Joseph G
2012-01-01
Nine different membrane bioreactor (MBR) systems with different process configurations (submerged and external), membrane geometries (hollow-fiber, flat-sheet, and tubular), membrane materials (polyethersulfone (PES), polyvinylidene fluoride (PVDF), and polytetrafluoroethylene (PTFE)), and membrane nominal pore sizes (0.03-0.2 μm) were evaluated to assess the impacts of influent microbial concentration, membrane pore size, membrane material, and membrane geometry on the removal of microbial indicators by MBR technology. The log removal values (LRVs) for microbial indicators increased as the influent concentrations increased. Among the wide range of MBR systems evaluated, total coliform bacteria, fecal coliform bacteria, and indigenous MS-2 coliphage were detected in 32, 9 and 15% of the samples, respectively; the 50th percentile LRVs were measured at 6.6, 5.9 and 4.5 logs, respectively. The nominal pore sizes of the membranes, membrane materials, and geometries did not show a strong correlation with the LRVs.
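The log removal value used throughout the abstract is simply the base-10 log of the influent-to-effluent concentration ratio. A one-line sketch with hypothetical counts (the values below are illustrative, not from the study):

```python
import math

# Log removal value (LRV) for a microbial indicator across a treatment step:
# LRV = log10(influent concentration / effluent concentration).
def lrv(c_in, c_out):
    return math.log10(c_in / c_out)

# e.g. 1e7 CFU/100 mL in the feed reduced to 25 CFU/100 mL in the permeate:
print(round(lrv(1e7, 25.0), 1))
```

Note that when the effluent is a non-detect, the computable LRV is only a lower bound set by the assay detection limit, which is one reason reported LRVs rise with influent concentration as observed above.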