Holmes, Robert R.; Dunn, Chad J.
1996-01-01
A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and a reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than with the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, which are geographically distributed throughout Illinois, and 15 county highway bridges.
Oguchi, Masahiro; Fuse, Masaaki
2015-02-03
Product lifespan estimates are important information for understanding progress toward sustainable consumption and estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of the lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests that consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
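A minimal sketch of the simplified idea, assuming (as the paper's sensitivity analysis suggests) a Weibull lifespan model with a fixed shape parameter; the shape value, the survival data, and the steady-sales simplification below are illustrative, not the authors' figures:

    # Fit only the Weibull scale parameter to an observed survival profile of
    # in-use cars, with the shape parameter held at a constant value.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    SHAPE = 2.5  # hypothetical constant shape parameter

    def weibull_survival(age, scale):
        return np.exp(-(age / scale) ** SHAPE)

    # hypothetical data: fraction of a registration cohort still in use at each age
    ages = np.array([2, 4, 6, 8, 10, 12, 14, 16], dtype=float)
    frac_in_use = np.array([0.98, 0.94, 0.86, 0.72, 0.55, 0.38, 0.22, 0.11])

    (scale_hat,), _ = curve_fit(weibull_survival, ages, frac_in_use, p0=[10.0])
    mean_lifespan = scale_hat * gamma(1.0 + 1.0 / SHAPE)  # mean of a Weibull
    print(f"scale = {scale_hat:.1f} yr, average lifespan = {mean_lifespan:.1f} yr")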
Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method
NASA Astrophysics Data System (ADS)
Sun, C. J.; Zhou, J. H.; Wu, W.
2017-10-01
During its lifetime, a ship may encounter collision or grounding and sustain permanent damage after these types of accidents. Crashworthiness assessment has mainly been based on two kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that, in terms of accuracy, the simplified plastic analysis is in good agreement with the finite-element simulation, which reveals that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.
Simplified Life-Cycle Cost Estimation
NASA Technical Reports Server (NTRS)
Remer, D. S.; Lorden, G.; Eisenberger, I.
1983-01-01
A simple method for life-cycle cost (LCC) estimation avoids pitfalls inherent in formulations requiring separate estimates of inflation and interest rates. The method's validity rests on the observation that interest and inflation rates closely track each other.
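A toy illustration (numbers invented, not from the report) of why the shortcut is valid: when the discount rate and the escalation rate match, they cancel, and LCC can be summed directly in constant dollars.

    def lcc_constant_dollars(acquisition, annual_operating, years):
        # constant-dollar shortcut: no inflation or interest estimates needed
        return acquisition + annual_operating * years

    def lcc_discounted(acquisition, annual_operating, years, interest, inflation):
        # conventional form: escalate costs, then discount them back
        total = acquisition
        for t in range(1, years + 1):
            total += annual_operating * (1 + inflation) ** t / (1 + interest) ** t
        return total

    print(lcc_constant_dollars(1e6, 5e4, 15))        # 1750000.0
    print(lcc_discounted(1e6, 5e4, 15, 0.06, 0.06))  # identical when rates match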
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
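A sketch of the 1D case under linear interpolation (a minimal reading of the construction, not the authors' optimized formulae): the value of a linear ramp between two samples is uniformly distributed between the endpoint values, so each segment adds a box of mass 1/(N-1) to the PDF.

    import numpy as np

    def np_windows_1d(samples, grid):
        # Each segment between adjacent samples contributes a uniform density
        # between its endpoint values; flat segments contribute a point mass.
        pdf = np.zeros_like(grid)
        dx = grid[1] - grid[0]
        n_seg = len(samples) - 1
        for a, b in zip(samples[:-1], samples[1:]):
            lo, hi = min(a, b), max(a, b)
            if hi == lo:
                pdf[np.argmin(np.abs(grid - lo))] += 1.0 / (n_seg * dx)
            else:
                mask = (grid >= lo) & (grid < hi)
                pdf[mask] += 1.0 / (n_seg * (hi - lo))
        return pdf

    x = np.sin(np.linspace(0, 2 * np.pi, 32))   # a sampled 1D signal
    grid = np.linspace(-1.2, 1.2, 241)
    p = np_windows_1d(x, grid)
    print(p.sum() * (grid[1] - grid[0]))        # integrates to ~1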
A simplified parsimonious higher order multivariate Markov chain model
NASA Astrophysics Data System (ADS)
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
NASA Astrophysics Data System (ADS)
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.
A simplified method for assessing particle deposition rate in aircraft cabins
NASA Astrophysics Data System (ADS)
You, Ruoyu; Zhao, Bin
2013-03-01
Particle deposition in aircraft cabins is important for the exposure of passengers to particulate matter, as well as to airborne infectious diseases. In this study, a simplified method is proposed for initial and quick assessment of the particle deposition rate in aircraft cabins. The method includes: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity based on the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully-occupied, half-occupied, 1/4-occupied and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that the occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, the simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison results show that the particle deposition rates of aircraft cabins and indoor environments are quite similar.
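A sketch of the bookkeeping such a surface-by-surface assessment implies; the flat-plate friction-velocity correlation and every number below are stand-ins, not the paper's empirical deposition-velocity equation:

    import math

    AIR_NU = 1.5e-5  # kinematic viscosity of air, m^2/s

    def friction_velocity(u_free, char_length):
        # turbulent flat-plate skin-friction correlation as a stand-in
        re = u_free * char_length / AIR_NU
        cf = 0.0592 * re ** -0.2
        return u_free * math.sqrt(cf / 2.0)

    def deposition_rate_per_h(surfaces, cabin_volume_m3):
        # surfaces: list of (area_m2, deposition_velocity_m_s) pairs
        return sum(a * vd for a, vd in surfaces) / cabin_volume_m3 * 3600.0

    # hypothetical cabin: floor, walls, ceiling, seat surfaces
    surfaces = [(20.0, 3e-5), (35.0, 8e-6), (20.0, 2e-6), (30.0, 1e-5)]
    print(f"u* ≈ {friction_velocity(0.3, 2.0):.3f} m/s")
    print(f"deposition rate ≈ {deposition_rate_per_h(surfaces, 60.0):.3f} 1/h")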
L'her, Erwan; Martin-Babau, Jérôme; Lellouche, François
2016-12-01
Knowledge of patients' height is essential for daily practice in the intensive care unit. However, actual height measurements are unavailable in daily routine in the ICU, and measured height in the supine position and/or visual estimates may lack consistency. Clinicians need simple and rapid methods to estimate patients' height, especially in short and/or obese patients. The objectives of the study were to evaluate several anthropometric formulas for height estimation on healthy volunteers and to test whether several of these estimates would help tidal volume setting in ICU patients. This was a prospective, observational study in a medical intensive care unit of a university hospital. During the first phase of the study, eight limb measurements were performed on 60 healthy volunteers and 18 height estimation formulas were tested. During the second phase, four height estimates were performed on 60 consecutive ICU patients under mechanical ventilation. In the 60 healthy volunteers, actual height was well correlated with the gold standard, measured height in the erect position. Correlation was low between actual and calculated height using the hand's length and width, the index finger, or the foot equations. The Chumlea method and its simplified version, performed in the supine position, provided adequate estimates. In the 60 ICU patients, calculated height using the simplified Chumlea method was well correlated with measured height (r = 0.78; δ < 1%). Ulna and tibia estimates also provided valuable estimates. All these height estimates allowed calculation of ideal body weight (IBW) or predicted body weight (PBW) values that were significantly different from the patients' actual weight on admission. In most cases, tidal volume set according to these estimates was lower than what would have been set using the actual weight. When actual height is unavailable in ICU patients undergoing mechanical ventilation, alternative anthropometric methods to obtain the patient's height based on lower leg and forearm measurements could be useful to facilitate the application of protective mechanical ventilation in a Caucasian ICU population. The simplified Chumlea method is easy to perform in a bed-ridden patient and provides accurate height estimates, with a low bias.
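For illustration, the classic Chumlea knee-height equations (standard published coefficients for Caucasian adults, not necessarily the simplified variant evaluated in this study) chained into the ARDSNet predicted body weight used for protective tidal volume settings:

    def chumlea_height_cm(knee_height_cm, age_yr, male):
        if male:
            return 64.19 - 0.04 * age_yr + 2.02 * knee_height_cm
        return 84.88 - 0.24 * age_yr + 1.83 * knee_height_cm

    def predicted_body_weight_kg(height_cm, male):
        # ARDSNet PBW formula
        return (50.0 if male else 45.5) + 0.91 * (height_cm - 152.4)

    height = chumlea_height_cm(knee_height_cm=52.0, age_yr=70, male=True)
    pbw = predicted_body_weight_kg(height, male=True)
    print(f"height ≈ {height:.0f} cm, PBW ≈ {pbw:.0f} kg, "
          f"Vt at 6 mL/kg ≈ {6 * pbw:.0f} mL")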
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
Qiu, Ling; Guo, Xiuzhi; Zhu, Yan; Shou, Weilin; Gong, Mengchun; Zhang, Lin; Han, Huijuan; Quan, Guoqiang; Xu, Tao; Li, Hang; Li, Xuewang
2013-01-01
To investigate the impact of serum creatinine measurement on the applicability of glomerular filtration rate (GFR) evaluation equations, the 99mTc-DTPA plasma clearance rate was used as the GFR reference (rGFR) in patients with chronic kidney disease (CKD). Serum creatinine was measured using enzymatic or picric acid creatinine reagents. The GFR of the patients was estimated using the Cockcroft-Gault equation corrected for body surface area, the simplified Modification of Diet in Renal Disease (MDRD) equation, the simplified MDRD equation corrected to isotope dilution mass spectrometry, the CKD Epidemiology Collaboration (CKD-EPI) equation, and two Chinese simplified MDRD equations. Significant differences in the eGFR results estimated through enzymatic and picric acid methods were observed for the same evaluation equation. The intraclass correlation coefficient (ICC) of eGFR when creatinine was measured by the picric acid method was significantly lower than that of the enzymatic method. The assessment accuracy of every equation using the enzymatic method to measure creatinine was significantly higher than that using the picric acid method when rGFR was > or = 60 mL/min/1.73m2. A significant difference was demonstrated in the same GFR evaluation equation using the picric acid and enzymatic methods. The enzymatic creatinine method was better than the picric acid method.
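For reference, a sketch of the simplified (4-variable) MDRD equation mentioned above, in its standard published form (coefficient 186 for conventional creatinine calibration, 175 for IDMS-traceable assays; the Chinese-modified variants use different coefficients):

    def egfr_mdrd(scr_mg_dl, age_yr, female, black=False, idms=False):
        k = 175.0 if idms else 186.0
        egfr = k * scr_mg_dl ** -1.154 * age_yr ** -0.203
        if female:
            egfr *= 0.742
        if black:
            egfr *= 1.212
        return egfr  # mL/min/1.73 m^2

    print(f"eGFR ≈ {egfr_mdrd(1.1, 55, female=True):.0f} mL/min/1.73m2")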
Lajolo, Carlo; Giuliani, Michele; Cordaro, Massimo; Marigo, Luca; Marcelli, Antonio; Fiorillo, Fabio; Pascali, Vincenzo L; Oliva, Antonio
2013-10-01
Chronological age (CA) plays a fundamental role in forensic dentistry (i.e. personal identification and evaluation of imputability). Even though several studies have outlined the association between biological and chronological age, there is still great variability in the estimates. The aim of this study was to determine the possible correlation between biological age and CA through the use of two new radiographic indexes (Oro-Cervical Radiographic Simplified Score - OCRSS and Oro-Cervical Radiographic Simplified Score Without Wisdom Teeth - OCRSSWWT) that are based on the oro-cervical area. Sixty Italian Caucasian individuals were divided into 3 groups according to their CA: Group 1: CAG 1 = 8-14 yr; Group 2: CAG 2 = 14-18 yr; Group 3: CAG 3 = 18-25 yr. Panorexes and standardised cephalograms were evaluated according to Demirjian's method for dental age calculation (DM), the Cervical Vertebral Maturation method for skeletal age calculation (CVMS) and Third Molar Development for age estimation (TMD). The stages of each method were simplified in order to generate OCRSS, which summarised the simplified scores of the three methods, and OCRSSWWT, which summarised the simplified DM and CVMS scores. There was a significant correlation between OCRSS and CAGs (slope = 0.954, p < 0.001, R-squared = 0.79) and between OCRSSWWT and CAGs (slope = 0.863, p < 0.001, R-squared = 0.776). Even though the indexes, especially OCRSS, appear to be highly reliable, growth variability among individuals can deeply influence the anatomical changes from childhood to adulthood. A multi-disciplinary approach that considers many different biomarkers could help make radiological age determination more reliable when it is used to predict CA. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescan. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE were determined from the cost values previously calculated by a simplified MUSIC scan, which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
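A generic weighted minimum-norm inverse as a sketch (the leadfield, the weight construction, and the MUSIC-prescan costs below are placeholders, not the authors' implementation): sources x minimize ||y - Lx||^2 + lambda*||Wx||^2, with the diagonal of W combining depth normalization and prescan-derived weights.

    import numpy as np

    def wmne(L, y, w_diag, lam=1e-2):
        W_inv = np.diag(1.0 / w_diag ** 2)
        G = L @ W_inv @ L.T + lam * np.eye(L.shape[0])
        return W_inv @ L.T @ np.linalg.solve(G, y)

    rng = np.random.default_rng(0)
    L = rng.standard_normal((32, 500))        # 32 sensors, 500 candidate sources
    depth = np.linalg.norm(L, axis=0)         # conventional depth normalization
    music_cost = rng.uniform(0.5, 1.0, 500)   # placeholder MUSIC prescan weights
    y = 2.0 * L[:, 10]                        # data from a single active source
    x_hat = wmne(L, y, depth * music_cost)
    print(int(np.argmax(np.abs(x_hat))))      # typically recovers index 10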
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weights of the state quantities to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on a modified IEEE 14-bus system.
NASA Astrophysics Data System (ADS)
Ribeiro Fontoura, Jessica; Allasia, Daniel; Herbstrith Froemming, Gabriel; Freitas Ferreira, Pedro; Tassi, Rutineia
2016-04-01
Evapotranspiration is a key process of the hydrological cycle and the sole term that links the land surface water balance and the land surface energy balance. Due to the higher information requirements of the Penman-Monteith method and the existing data uncertainty, simplified empirical methods for calculating potential and actual evapotranspiration are widely used in hydrological models. This is especially important in Brazil, where the monitoring of meteorological data is precarious. In this study, different methods for estimating evapotranspiration were compared for Rio Grande do Sul, the southernmost state of Brazil, aiming to suggest alternatives to the recommended method (Penman-Monteith-FAO 56) for estimating daily reference evapotranspiration (ETo) when meteorological data are missing or not available. The input dataset included daily and hourly observed data from conventional and automatic weather stations, respectively, maintained by the National Weather Institute of Brazil (INMET) for the period 1 January 2007 to 31 January 2010. The dataset included maximum temperature (Tmax, °C), minimum temperature (Tmin, °C), mean relative humidity (%), wind speed at 2 m height (u2, m s-1), daily solar radiation (Rs, MJ m-2) and atmospheric pressure (kPa), grouped at a daily time step. The Food and Agriculture Organization of the United Nations (FAO) Penman-Monteith method (PM) was tested in its full form against PM with several variables assumed missing, as they are often not available in Brazil, in order to calculate daily reference ETo. Missing variables were estimated as suggested in the FAO-56 publication or from climatological means. Furthermore, PM was also compared against the following simplified empirical methods: Hargreaves-Samani, Priestley-Taylor, McCloud, McGuinness-Bordne, Romanenko, Radiation-Temperature, and Tanner-Pelton. The statistical analysis indicates that even if just Tmin and Tmax are available, it is better to use PM with the missing variables estimated from synthetic data than the simplified empirical methods evaluated, except for Tanner-Pelton and Priestley-Taylor.
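As one concrete example of the simplified methods compared, the Hargreaves-Samani equation in its standard published form (extraterrestrial radiation Ra is passed in directly here; in practice it is computed from latitude and day of year following FAO-56):

    def eto_hargreaves(tmax_c, tmin_c, ra_mj_m2_day):
        tmean = 0.5 * (tmax_c + tmin_c)
        ra_mm = ra_mj_m2_day * 0.408  # convert MJ m-2 day-1 to mm day-1
        return 0.0023 * ra_mm * (tmean + 17.8) * (tmax_c - tmin_c) ** 0.5

    print(f"ETo ≈ {eto_hargreaves(30.0, 18.0, 38.0):.1f} mm/day")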
NASA Astrophysics Data System (ADS)
Staszczuk, Anna
2017-03-01
The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, basement hollow, and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The calculations with the simplified methods were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008 (Thermal performance of buildings. Heat transfer via the ground. Calculation methods). Comparative estimates of the transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained using the more exact and the simplified methods.
A simplified method for elastic-plastic-creep structural analysis
NASA Technical Reports Server (NTRS)
Kaufman, A.
1984-01-01
A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.
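A heavily simplified illustration of the general idea of recovering a local elastic-plastic state from an elastic solution; this sketch uses a Neuber-type correction with a Ramberg-Osgood curve and invented material constants, and is not the ANSYPM algorithm itself (which iterates incrementally through the cycle and adds creep effects):

    E, K, n = 200e3, 1200.0, 0.12  # MPa; hypothetical Ramberg-Osgood constants

    def ramberg_osgood_strain(stress):
        return stress / E + (stress / K) ** (1.0 / n)

    def neuber_correct(sigma_elastic, tol=1e-8):
        # Neuber's rule: local sigma*eps equals the elastic sigma^2/E
        target = sigma_elastic ** 2 / E
        lo, hi = 0.0, sigma_elastic
        while hi - lo > tol:  # bisection; sigma*eps(sigma) is increasing
            mid = 0.5 * (lo + hi)
            if mid * ramberg_osgood_strain(mid) > target:
                hi = mid
            else:
                lo = mid
        s = 0.5 * (lo + hi)
        return s, ramberg_osgood_strain(s)

    s, e = neuber_correct(800.0)  # elastic analysis reports 800 MPa
    print(f"local stress ≈ {s:.0f} MPa, local strain ≈ {100 * e:.2f} %")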
NASA Technical Reports Server (NTRS)
Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.
1999-01-01
OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. In humans, it can be estimated during exercise echocardiography, and correlates with maximum oxygen uptake better than ejection fraction or rate-pressure product.
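The arithmetic of the SPP index itself is simple; a sketch with assumed unit conventions (cuff pressures in mmHg, Doppler peak aortic flow in L/min, output in watts):

    def mean_arterial_pressure(sbp_mmhg, dbp_mmhg):
        # standard one-third pulse-pressure estimate from cuff pressures
        return dbp_mmhg + (sbp_mmhg - dbp_mmhg) / 3.0

    def simplified_peak_power_w(sbp_mmhg, dbp_mmhg, peak_flow_l_min):
        map_pa = mean_arterial_pressure(sbp_mmhg, dbp_mmhg) * 133.322
        q_m3_s = peak_flow_l_min / 60000.0
        return map_pa * q_m3_s

    rest = simplified_peak_power_w(120, 80, 25)
    exercise = simplified_peak_power_w(180, 85, 50)
    print(f"Delta SPP ≈ {exercise - rest:.1f} W")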
Simplified estimation of age-specific reference intervals for skewed data.
Wright, E M; Royston, P
1997-12-30
Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
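A sketch of the regression flavour of this approach on synthetic data: the mean and SD of a log-transformed measurement are modelled as quadratics in age, and centiles are formed as mean ± z·SD (the published method's explicit skewness modelling is omitted here):

    import numpy as np

    rng = np.random.default_rng(1)
    age = rng.uniform(20, 70, 400)
    y = np.log(5 + 0.08 * age + rng.normal(0, 0.3 + 0.004 * age))  # transformed data

    X = np.vander(age, 3)  # columns: age^2, age, 1
    beta_mean, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_mean
    # for normal residuals E|r| = sd*sqrt(2/pi), so scale |r| before regressing
    beta_sd, *_ = np.linalg.lstsq(X, np.abs(resid) * np.sqrt(np.pi / 2), rcond=None)

    grid = np.vander(np.array([30.0, 50.0]), 3)
    for z, name in [(-1.645, "5th"), (1.645, "95th")]:
        centile = np.exp(grid @ beta_mean + z * (grid @ beta_sd))
        print(name, np.round(centile, 2))  # back-transformed centiles at ages 30, 50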
NASA Astrophysics Data System (ADS)
Xiong, Yan; Reichenbach, Stephen E.
1999-01-01
Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov random field. MMIE provides improved performance over MLE in this application.
Maupin, Molly A.; Senay, Gabriel B.; Kenny, Joan F.; Savoca, Mark E.
2012-01-01
Recent advances in remote-sensing technology and Simplified Surface Energy Balance (SSEB) methods can provide accurate and repeatable estimates of evapotranspiration (ET) when used with satellite observations of irrigated lands. Estimates of ET are generally considered equivalent to consumptive use (CU) because they represent the part of applied irrigation water that is evaporated, transpired, or otherwise not available for immediate reuse. The U.S. Geological Survey compared ET estimates from SSEB methods to CU data collected for 1995 using indirect methods as part of the National Water Use Information Program (NWUIP). Ten-year (2000-2009) average ET estimates from SSEB methods were derived using Moderate Resolution Imaging Spectroradiometer (MODIS) 1-kilometer satellite land surface temperature and gridded weather datasets from the Global Data Assimilation System (GDAS). County-level CU estimates for 1995 were assembled and referenced to 1-kilometer grid cells to synchronize with the SSEB ET estimates. Both datasets were seasonally and spatially weighted to represent the irrigation season (June-September) and those lands that were identified in the county as irrigated. A strong relation (R2 greater than 0.7) was determined between NWUIP CU and SSEB ET data. Regionally, the relation is stronger in arid western states than in humid eastern states, and positive and negative biases are both present at state-level comparisons. SSEB ET estimates can play a major role in monitoring and updating county-based CU estimates by providing a quick and cost-effective method to detect major year-to-year changes at county levels, as well as providing a means to disaggregate county-based ET estimates to sub-county levels. More research is needed to identify the causes for differences in state-based relations.
NASA Astrophysics Data System (ADS)
Yeom, Jong-Min; Han, Kyung-Soo; Kim, Jae-Jin
2012-05-01
Solar surface insolation (SSI) represents how much solar radiance reaches the Earth's surface in a specified area and is an important parameter in various fields such as surface energy research, meteorology, and climate change. This study calculates insolation using Multi-functional Transport Satellite (MTSAT-1R) data with a simplified cloud factor over Northeast Asia. For SSI retrieval from the geostationary satellite data, the physical model of Kawamura is modified to improve insolation estimation by considering various atmospheric constituents, such as Rayleigh scattering, water vapor, ozone, aerosols, and clouds. For more accurate atmospheric parameterization, satellite-based atmospheric constituents are used instead of constant values when estimating insolation. Cloud effects are a key problem in insolation estimation because of their complicated optical characteristics and high temporal and spatial variation. The accuracy of insolation data from satellites depends on how well cloud attenuation can be inferred as a function of the geostationary channels and viewing angle. This study uses a simplified cloud factor that depends on the reflectance and the solar zenith angle. Empirical criteria to select reference data for fitting to the ground station data are applied to develop the simplified cloud factor method. Insolation estimated using the cloud factor is compared with results of the unmodified physical model and with observations by ground-based pyranometers located on the Korean peninsula. The modified model results show far better agreement with ground truth data compared to estimates using the conventional method under overcast conditions.
NASA Technical Reports Server (NTRS)
Barth, Timothy; Saini, Subhash (Technical Monitor)
1999-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and the discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.
Erosion estimation of guide vane end clearance in hydraulic turbines with sediment water flow
NASA Astrophysics Data System (ADS)
Han, Wei; Kang, Jingbo; Wang, Jie; Peng, Guoyi; Li, Lianyuan; Su, Min
2018-04-01
The end surface of the guide vane or head cover is one of the parts of high-head hydraulic turbines most severely affected by sediment erosion. In order to investigate the relationship between the erosion depth of a wall surface and the characteristic parameter of erosion, an estimation method comprising a simplified flow model and a modified erosion calculation function is proposed in this paper. The flow between the end surfaces of the guide vane and head cover is simplified as a clearance flow around a circular cylinder with a backward-facing step. The erosion characteristic parameter c_s·w_s^3 is calculated with the mixture model for multiphase flow and the renormalization group (RNG) k-ε turbulence model under the actual working conditions, based on which the erosion depths of the guide vane and head cover end surfaces are estimated with a modified erosion coefficient K. The estimation results agree well with the actual situation. It is shown that the estimation method is reasonable for erosion prediction of the guide vane and can provide a significant reference for determining the optimal maintenance cycle of hydraulic turbines in the future.
Monitoring inter-channel nonlinearity based on differential pilot
NASA Astrophysics Data System (ADS)
Wang, Wanli; Yang, Aiying; Guo, Peng; Lu, Yueming; Qiao, Yaojun
2018-06-01
We modify and simplify the inter-channel nonlinearity (NL) estimation method by using a differential pilot. Compared to previous works, the inter-channel NL estimation method we propose has much lower complexity and does not require modification of the transmitter. The performance of inter-channel NL monitoring with different launch powers is tested. For both QPSK and 16QAM systems with 9 channels, the estimation error of the inter-channel NL is lower than 1 dB when the total launch power is greater than 12 dBm after 1000 km of optical transmission. Finally, we compare our inter-channel NL estimation method with other methods.
Study on a pattern classification method of soil quality based on simplified learning sample dataset
Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.
2011-01-01
Based on the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and an unordered multiclass logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and a c-means algorithm was used to automatically extract the simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan County in Guangdong Province; an unordered logistic classifier model was then built, and the calculation and analysis steps of soil quality grade intelligent classification were given. The results indicated that the soil quality grade can be effectively learned and predicted from the extracted simplified dataset through this method, which changes the traditional method of soil quality grade evaluation. © 2011 IEEE.
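A sketch of the two-step idea on synthetic data: thin a large evaluation database to a simplified learning sample via clustering, then fit an unordered multiclass (multinomial) logistic classifier. All features, grades, and sizes below are stand-ins:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    X = rng.standard_normal((2000, 6))  # stand-in soil indicators
    grade = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1)  # grades 0-2

    # step 1: c-means-like thinning -- keep the point nearest each cluster centre
    km = KMeans(n_clusters=150, n_init=3, random_state=0).fit(X)
    d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None], axis=2)
    idx = np.unique(np.argmin(d, axis=0))

    # step 2: multinomial logistic regression on the simplified sample
    clf = LogisticRegression(max_iter=1000).fit(X[idx], grade[idx])
    print(f"accuracy on the full database ≈ {clf.score(X, grade):.2f}")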
NASA Technical Reports Server (NTRS)
Kuchynka, P.; Laskar, J.; Fienga, A.
2011-01-01
Mars ranging observations are available over the past 10 years with an accuracy of a few meters. Such precise measurements of the Earth-Mars distance provide valuable constraints on the masses of the asteroids perturbing both planets. Today more than 30 asteroid masses have thus been estimated from planetary ranging data (see [1] and [2]). Obtaining unbiased mass estimates is nevertheless difficult. Various systematic errors can be introduced by imperfect reduction of spacecraft tracking observations to planetary ranging data. The large number of asteroids and the limited a priori knowledge of their masses are also an obstacle for parameter selection. Fitting the mass of a negligible perturber in a model, or on the contrary omitting a significant perturber, will induce important biases in the determined asteroid masses. In this communication, we investigate a simplified version of the mass determination problem. Instead of planetary ranging observations from spacecraft or radar data, we consider synthetic ranging observations generated with the INPOP [2] ephemeris for a test model containing 25000 asteroids. We then suggest a method for optimal parameter selection and estimation in this simplified framework.
International Conference on the Methods of Aerophysical Research 98 "ICMAR 98". Proceedings, Part 1
1998-01-01
…pumping air through the device and air-drying due to vapour condensation on cooled surfaces. In this report, approximate estimates are presented… …picture is used for the flow field between disks and for water vapor condensation on cooled moving surfaces. Shown in Fig. 1 is a simplified flow… …(frequency of disk rotation), thus breaking away from the channel walls. Regarding the condensation process, a number of the usual simplifying assumptions is made.
NASA Astrophysics Data System (ADS)
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
The electrochemistry-based battery model can provide physics-meaningful knowledge about the lithium-ion battery system, but with extensive computational burdens. To motivate the development of reduced-order battery models, three major contributions are made throughout this paper: (1) a transfer-function type of simplified electrochemical model is proposed to address the current-voltage relationship, with the Padé approximation method and modified boundary conditions for the electrolyte diffusion equations. The model performance has been verified under pulse charge/discharge and dynamic stress test (DST) profiles, with a standard deviation of less than 0.021 V and a runtime 50 times faster. (2) The parametric relationship between the equivalent circuit model and the simplified electrochemical model has been established, which will enhance the comprehension of the two models with more in-depth physical significance and provide new methods for electrochemical model parameter estimation. (3) Four simplified electrochemical model parameters: the equivalent resistance Req, the effective diffusion coefficient in the electrolyte phase Deeff, the electrolyte phase volume fraction ε, and the open circuit voltage (OCV), have been identified by the recursive least squares (RLS) algorithm with the modified DST profiles at 45, 25 and 0 °C. The simulation results indicate that the proposed model coupled with the RLS algorithm can achieve high accuracy for electrochemical parameter identification in dynamic scenarios.
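A generic sketch of the RLS estimator named in contribution (3) (the textbook update with a forgetting factor; the regressor construction for the battery model itself is omitted):

    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.995):
        # one update of y = phi^T theta with forgetting factor lam
        phi = phi.reshape(-1, 1)
        k = P @ phi / (lam + phi.T @ P @ phi)       # gain vector
        theta = theta + (k * (y - phi.T @ theta)).ravel()
        P = (P - k @ phi.T @ P) / lam               # covariance update
        return theta, P

    rng = np.random.default_rng(3)
    true_theta = np.array([0.8, -0.2, 1.5])
    theta, P = np.zeros(3), 1e3 * np.eye(3)
    for _ in range(500):
        phi = rng.standard_normal(3)
        theta, P = rls_step(theta, P, phi, phi @ true_theta + rng.normal(0, 0.01))
    print(np.round(theta, 3))  # converges toward [0.8, -0.2, 1.5]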
NASA Astrophysics Data System (ADS)
Mel, Riccardo; Viero, Daniele Pietro; Carniello, Luca; Defina, Andrea; D'Alpaos, Luigi
2014-09-01
Providing reliable and accurate storm surge forecasts is important for a wide range of problems related to coastal environments. In order to adequately support decision-making processes, it has also become increasingly important to be able to estimate the uncertainty associated with the storm surge forecast. The procedure commonly adopted to do this uses the results of a hydrodynamic model forced by a set of different meteorological forecasts; however, this approach requires a considerable, if not prohibitive, computational cost for real-time application. In the present paper we present two simplified methods for estimating the uncertainty affecting storm surge predictions with moderate computational effort. In the first approach we use a computationally fast, statistical tidal model instead of a hydrodynamic numerical model to estimate the storm surge uncertainty. The second approach is based on the observation that the uncertainty in the sea level forecast mainly stems from the uncertainty affecting the meteorological fields; this has led to the idea of estimating forecast uncertainty via a linear combination of suitable meteorological variances, directly extracted from the meteorological fields. The proposed methods were applied to estimate the uncertainty in the storm surge forecast in the Venice Lagoon. The results clearly show that the uncertainty estimated through a linear combination of suitable meteorological variances nicely matches the one obtained using the deterministic approach and overcomes some intrinsic limitations in the use of a statistical tidal model.
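A toy sketch of the second approach on synthetic data; how the combination weights are actually obtained in the paper may differ — here they are simply fit by least squares to observed squared forecast errors:

    import numpy as np

    rng = np.random.default_rng(5)
    met_var = rng.uniform(0.0, 1.0, (200, 3))  # e.g. wind, pressure, rain variances
    true_a = np.array([0.6, 0.3, 0.1])         # hypothetical combination weights
    surge_err_sq = met_var @ true_a + rng.normal(0, 0.02, 200)

    a_hat, *_ = np.linalg.lstsq(met_var, surge_err_sq, rcond=None)
    sigma = np.sqrt(np.clip(met_var @ a_hat, 0.0, None))  # per-event uncertainty
    print(np.round(a_hat, 2), np.round(sigma[:3], 2))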
Estimating surface temperature in forced convection nucleate boiling - A simplified method
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Papell, S. S.
1977-01-01
A simplified expression to estimate surface temperatures in forced convection boiling was developed using a liquid nitrogen data base. Using the principle of corresponding states and the Kutateladze relation for maximum pool boiling heat flux, the expression was normalized for use with other fluids. The expression was also applied to neon and water. For the neon data base, the agreement was acceptable with the exclusion of one data set suspected to be in the transition boiling regime. For the water data base at reduced pressures greater than 0.05 the agreement is generally good. At lower reduced pressures, the water data scatter and the calculated temperature becomes a function of flow rate.
Fuel Burn Estimation Using Real Track Data
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.
2011-01-01
A procedure for estimating fuel burned based on actual flight track data, and drag and fuel-flow models is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require an explicit model of thrust and calibrated airspeed/Mach profile which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
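A sketch of the last step described above, with thrust already estimated from the drag model; the thrust-specific fuel consumption takes the BADA-style form, but the coefficients are placeholders rather than values from any BADA aircraft file:

    def fuel_flow_kg_min(thrust_kN, tas_kt, cf1=0.7, cf2=1000.0):
        eta = cf1 * (1.0 + tas_kt / cf2)  # BADA-style TSFC, kg/(min*kN)
        return eta * thrust_kN

    def fuel_burned_kg(samples, dt_s=60.0):
        # samples: (thrust_kN, true_airspeed_kt) at fixed spacing dt_s seconds
        return sum(fuel_flow_kg_min(t, v) * dt_s / 60.0 for t, v in samples)

    track = [(45.0, 280.0), (44.0, 300.0), (30.0, 420.0), (28.0, 430.0)]
    print(f"fuel burned ≈ {fuel_burned_kg(track):.0f} kg")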
2014-01-01
Background: The measurement of mechanosensitivity is a key method for the study of pain in animal models. This is often accomplished with the use of von Frey filaments in an up-down testing paradigm. The up-down method described by Chaplan et al. (J Neurosci Methods 53:55–63, 1994) for mechanosensitivity testing in rodents remains one of the most widely used methods for measuring pain in animals. However, this method results in animals receiving a varying number of stimuli, which may lead to animals in different groups receiving different testing experiences that influence their later responses. To standardize the measurement of mechanosensitivity we developed a simplified up-down method (SUDO) for estimating paw withdrawal threshold (PWT) with von Frey filaments that uses a constant number of five stimuli per test. We further refined the PWT calculation to allow the estimation of PWT directly from the behavioral response to the fifth stimulus, omitting the need for look-up tables. Results: The PWT estimates derived using SUDO strongly correlated (r > 0.96) with the PWT estimates determined with the conventional up-down method of Chaplan et al., and this correlation remained very strong across different levels of tester experience, different experimental conditions, and in tests from both mice and rats. The two testing methods also produced similar PWT estimates in prospective behavioral tests of mice at baseline and after induction of hyperalgesia by intraplantar capsaicin or complete Freund's adjuvant. Conclusion: SUDO thus offers an accurate, fast and user-friendly replacement for the widely used up-down method of Chaplan et al. PMID:24739328
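A sketch of the five-stimulus logic in filament-index units; the half-step adjustment convention and any index-to-force conversion below are assumptions for illustration, not the paper's exact rule:

    def sudo_pwt(responds, start_idx, n_filaments=10):
        # responds(i) -> True if the paw is withdrawn at filament index i
        idx = start_idx
        for trial in range(5):
            withdrew = responds(idx)
            if trial == 4:
                # estimate PWT from the fifth stimulus with a half-step shift
                return idx - 0.5 if withdrew else idx + 0.5
            idx += -1 if withdrew else 1   # down after response, up after none
            idx = max(0, min(n_filaments - 1, idx))

    # hypothetical animal that withdraws for filament index >= 4
    print(sudo_pwt(lambda i: i >= 4, start_idx=5))  # 3.5, in filament units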
Eisenhauer, Philipp; Heckman, James J.; Mosso, Stefano
2015-01-01
We compare the performance of maximum likelihood (ML) and simulated method of moments (SMM) estimation for dynamic discrete choice models. We construct and estimate a simplified dynamic structural model of education that captures some basic features of educational choices in the United States in the 1980s and early 1990s. We use estimates from our model to simulate a synthetic dataset and assess the ability of ML and SMM to recover the model parameters on this sample. We investigate the performance of alternative tuning parameters for SMM. PMID:26494926
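For context, a generic SMM loop of the kind compared in the paper, on a toy lognormal model: parameters are chosen so that simulated moments match observed moments, with fixed simulation draws serving as common random numbers and an identity weighting matrix:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(11)
    data = rng.lognormal(mean=1.0, sigma=0.5, size=2000)
    obs_moments = np.array([data.mean(), data.std()])
    z = rng.standard_normal(20000)  # fixed draws (common random numbers)

    def smm_objective(params):
        mu, sigma = params
        sim = np.exp(mu + abs(sigma) * z)       # simulate from the candidate model
        g = np.array([sim.mean(), sim.std()]) - obs_moments
        return g @ g                            # identity-weighted distance

    res = minimize(smm_objective, x0=[0.5, 0.3], method="Nelder-Mead")
    print(np.round(res.x, 2))                   # close to (1.0, 0.5)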
M. C. Neel; K. McKelvey; N. Ryman; M. W. Lloyd; R. Short Bull; F. W. Allendorf; M. K. Schwartz; R. S. Waples
2013-01-01
Use of genetic methods to estimate effective population size (Ne) is rapidly increasing, but all approaches make simplifying assumptions unlikely to be met in real populations. In particular, all assume a single, unstructured population, and none has been evaluated for use with continuously distributed species. We simulated continuous populations with local mating...
Simplified data reduction methods for the ECT test for mode 3 interlaminar fracture toughness
NASA Technical Reports Server (NTRS)
Li, Jian; Obrien, T. Kevin
1995-01-01
Simplified expressions for the parameter controlling the load point compliance and the strain energy release rate were obtained for the Edge Crack Torsion (ECT) specimen for mode 3 interlaminar fracture toughness. Data reduction methods for mode 3 toughness based on the present analysis are proposed. The effect of the transverse shear modulus, G(sub 23), on mode 3 interlaminar fracture toughness characterization was evaluated, and the parameters influenced by the transverse shear modulus were identified. Analytical results indicate that a higher value of G(sub 23) results in a lower load point compliance and a lower mode 3 toughness estimate. The effect of G(sub 23) on the mode 3 toughness using the ECT specimen is negligible when an appropriate initial delamination length is chosen. A conservative estimate of mode 3 toughness can be obtained by assuming G(sub 23) = G(sub 12) for any initial delamination length.
Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J
2017-04-01
Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs), whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce the least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
Hirve, Siddhivinayak; Vounatsou, Penelope; Juvekar, Sanjay; Blomstedt, Yulia; Wall, Stig; Chatterji, Somnath; Ng, Nawi
2014-03-01
We compared prevalence estimates of self-rated health (SRH) derived indirectly using four different small area estimation methods for the Vadu (small) area from the national Study on Global AGEing (SAGE) survey with estimates derived directly from the Vadu SAGE survey. The indirect synthetic estimate for Vadu was 24% whereas the model based estimates were 45.6% and 45.7% with smaller prediction errors and comparable to the direct survey estimate of 50%. The model based techniques were better suited to estimate the prevalence of SRH than the indirect synthetic method. We conclude that a simplified mixed effects regression model can produce valid small area estimates of SRH. © 2013 Published by Elsevier Ltd.
Simplified model of mean double step (MDS) in human body movement
NASA Astrophysics Data System (ADS)
Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando
In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators that transform the gait cycle into a periodic movement process. Moreover, methods for simplifying the MDS model and compressing it are demonstrated. The simplification is achieved by reducing the number of bars in the spectrum and/or by reducing the number of samples describing the MDS, both to reduce the computational burden and to reduce the resources needed for data storage. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of featuring a functional characterization of the relative or absolute movement of any part of the body.
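A sketch of one plausible construction and compression of such a mean cycle (the resampling, averaging, and harmonic-truncation choices are illustrative, not the paper's exact estimators):

    import numpy as np

    def mean_double_step(signal, cycle_bounds, n_samples=100):
        # resample each gait cycle to a common length, then average
        cycles = []
        for a, b in zip(cycle_bounds[:-1], cycle_bounds[1:]):
            t_old = np.linspace(0.0, 1.0, b - a)
            t_new = np.linspace(0.0, 1.0, n_samples)
            cycles.append(np.interp(t_new, t_old, signal[a:b]))
        return np.mean(cycles, axis=0)

    def compress(mds, n_harmonics=8):
        spec = np.fft.rfft(mds)
        spec[n_harmonics + 1:] = 0.0            # drop higher spectral bars
        return np.fft.irfft(spec, n=len(mds))

    rng = np.random.default_rng(2)
    t = np.arange(1000)
    marker = np.sin(2 * np.pi * t / 120) + 0.1 * rng.standard_normal(1000)
    mds = mean_double_step(marker, cycle_bounds=list(range(0, 1001, 120)))
    print(np.max(np.abs(mds - compress(mds))))  # small reconstruction error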
An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC
NASA Astrophysics Data System (ADS)
Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng
2017-04-01
This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC), called the improved adaptive weighting function method. The proposed approach is simplified in that the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimension unchanged. Accurate and fast convergence of the AC/DC system can be realized by the adaptive weight function method. This method also provides technical support for the simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity and convergence of the new method.
NASA Astrophysics Data System (ADS)
Forbes, B. T.
2015-12-01
Due to the predominantly arid climate in Arizona, access to adequate water supply is vital to the economic development and livelihood of the State. Water supply has become increasingly important during periods of prolonged drought, which has strained reservoir water levels in the Desert Southwest over past years. Arizona's water use is dominated by agriculture, consuming about seventy-five percent of the total annual water demand. Tracking current agricultural water use is important for managers and policy makers so that current water demand can be assessed and current information can be used to forecast future demands. However, many croplands in Arizona are irrigated outside of areas where water use reporting is mandatory. To estimate irrigation withdrawals on these lands, we use a combination of field verification, evapotranspiration (ET) estimation, and irrigation system qualification. ET is typically estimated in Arizona using the Modified Blaney-Criddle method which uses meteorological data to estimate annual crop water requirements. The Modified Blaney-Criddle method assumes crops are irrigated to their full potential over the entire growing season, which may or may not be realistic. We now use the Operational Simplified Surface Energy Balance (SSEBop) ET data in a remote-sensing and energy-balance framework to estimate cropland ET. SSEBop data are of sufficient resolution (30m by 30m) for estimation of field-scale cropland water use. We evaluate our SSEBop-based estimates using ground-truth information and irrigation system qualification obtained in the field. Our approach gives the end user an estimate of crop consumptive use as well as inefficiencies in irrigation system performance—both of which are needed by water managers for tracking irrigated water use in Arizona.
Frenning, Göran
2015-01-01
When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
• A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
• A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
• The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost.
PMID:26150975
Sousa, Marcelo R; Jones, Jon P; Frind, Emil O; Rudolph, David L
2013-01-01
In contaminant travel from ground surface to groundwater receptors, the time taken in travelling through the unsaturated zone is known as the unsaturated zone time lag. Depending on the situation, this time lag may or may not be significant within the context of the overall problem. A method is presented for assessing the importance of the unsaturated zone in the travel time from source to receptor in terms of estimates of both the absolute and the relative advective times. A choice of different techniques for both unsaturated and saturated travel time estimation is provided. This method may be useful for practitioners to decide whether to incorporate unsaturated processes in conceptual and numerical models and can also be used to roughly estimate the total travel time between points near ground surface and a groundwater receptor. This method was applied to a field site located in a glacial aquifer system in Ontario, Canada. Advective travel times were estimated using techniques with different levels of sophistication. The application of the proposed method indicates that the time lag in the unsaturated zone is significant at this field site and should be taken into account. For this case, sophisticated and simplified techniques lead to similar assessments when the same knowledge of the hydraulic conductivity field is assumed. When there is significant uncertainty regarding the hydraulic conductivity, simplified calculations did not lead to a conclusive decision. Copyright © 2012 Elsevier B.V. All rights reserved.
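A first-cut example of the kind of simplified estimate such screening can rest on: the piston-flow advective travel time through the unsaturated zone, depth times moisture content divided by recharge flux (values illustrative, not from the field site):

    def unsat_travel_time_yr(depth_m, water_content, recharge_m_per_yr):
        # piston-flow approximation: t = L * theta / q
        return depth_m * water_content / recharge_m_per_yr

    print(f"{unsat_travel_time_yr(10.0, 0.25, 0.3):.1f} years")  # ~8.3 yr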
Oak Ridge Spallation Neutron Source (ORSNS) target station design integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamy, T.; Booth, R.; Cleaves, J.
1996-06-01
The conceptual design for a 1- to 3-MW short pulse spallation source with a liquid mercury target has been started recently. The design tools and methods being developed to define requirements, integrate the work, and provide early cost guidance will be presented with a summary of the current target station design status. The initial design point was selected with performance and cost estimate projections by a systems code. This code was developed recently using cost estimates from the Brookhaven Pulsed Spallation Neutron Source study and experience from the Advanced Neutron Source Project's conceptual design. It will be updated and improved as the design develops. Performance was characterized by a simplified figure of merit based on a ratio of neutron production to costs. A work breakdown structure was developed, with simplified systems diagrams used to define interfaces and system responsibilities. A risk assessment method was used to identify potential problems, to identify required research and development (R&D), and to aid contingency development. Preliminary 3-D models of the target station are being used to develop remote maintenance concepts and to estimate costs.
Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing
2017-01-01
Identification of the internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on a least squares genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the least squares genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model identified by a recursive least squares method, only small voltage-tracking fluctuations were observed. The maximum battery voltage tracking error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least squares genetic algorithm in estimating the internal parameters of lithium-ion batteries. PMID:28212405
Using color histogram normalization for recovering chromatic illumination-changed images.
Pei, S C; Tseng, C L; Wu, C C
2001-11-01
We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
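A hedged sketch of the moment-matching step: aligning the mean and covariance of a test image's RGB distribution to a reference realizes a translation-scaling-rotation map in the spirit of the simplified affine model; the symmetric-square-root construction and synthetic data are assumptions for illustration, not the authors' exact algorithm.

```python
# Sketch: align the RGB distribution of a test image to a reference via a
# translation-scaling-rotation affine map built from color covariance matrices.
import numpy as np

def moment_align(ref_pixels, test_pixels):
    """ref_pixels, test_pixels: (N, 3) float arrays of R,G,B values."""
    mu_r, mu_t = ref_pixels.mean(0), test_pixels.mean(0)
    C_r = np.cov(ref_pixels, rowvar=False)
    C_t = np.cov(test_pixels, rowvar=False)
    def sqrtm(C):                       # symmetric square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
    A = sqrtm(C_r) @ np.linalg.inv(sqrtm(C_t))   # maps test covariance onto reference
    return lambda x: (x - mu_t) @ A.T + mu_r     # recovered colors

# usage on random stand-in data with a simulated illumination change
rng = np.random.default_rng(1)
ref = rng.normal([120, 100, 90], [30, 25, 20], (5000, 3))
test = ref * [0.7, 0.9, 1.2] + [20, -10, 5]
rec = moment_align(ref, test)(test)
print(np.allclose(rec.mean(0), ref.mean(0)),
      np.allclose(np.cov(rec, rowvar=False), np.cov(ref, rowvar=False)))
```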
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-20
... simplify some assumptions and to make estimation methods consistent; and characterization as Agency burden...-1007 to (1) EPA online using http://www.regulations.gov (our preferred method), by e-mail to oppt.ncic...-HQ-OPPT-2010-1007, which is available for online viewing at http://www.regulations.gov , or in person...
Simplified methods for evaluating road prism stability
William J. Elliot; Mark Ballerini; David Hall
2003-01-01
Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...
A simplified close range photogrammetry method for soil erosion assessment
USDA-ARS?s Scientific Manuscript database
With the increased affordability of consumer grade cameras and the development of powerful image processing software, digital photogrammetry offers a competitive advantage as a tool for soil erosion estimation compared to other technologies. One bottleneck of digital photogrammetry is its dependency...
Screening-level estimates of mass discharge uncertainty from point measurement methods
The uncertainty of mass discharge measurements associated with point-scale measurement techniques was investigated by deriving analytical solutions for the mass discharge coefficient of variation for two simplified, conceptual models. In the first case, a depth-averaged domain w...
Varrone, Andrea; Gulyás, Balázs; Takano, Akihiro; Stabin, Michael G; Jonsson, Cathrine; Halldin, Christer
2012-02-01
[(18)F]FE-PE2I is a promising dopamine transporter (DAT) radioligand. In nonhuman primates, we examined the accuracy of simplified quantification methods and the estimates of radiation dose of [(18)F]FE-PE2I. In the quantification study, binding potential (BP(ND)) values previously reported in three rhesus monkeys using kinetic and graphical analyses of [(18)F]FE-PE2I were used for comparison. BP(ND) using the cerebellum as reference region was obtained with four reference tissue methods applied to the [(18)F]FE-PE2I data that were compared with the kinetic and graphical analyses. In the whole-body study, estimates of absorbed radiation were obtained in two cynomolgus monkeys. All reference tissue methods provided BP(ND) values within 5% of the values obtained with the kinetic and graphical analyses. The shortest imaging time for stable BP(ND) estimation was 54 min. The average effective dose of [(18)F]FE-PE2I was 0.021 mSv/MBq, similar to 2-deoxy-2-[(18)F]fluoro-d-glucose. The results in nonhuman primates suggest that [(18)F]FE-PE2I is suitable for accurate and stable DAT quantification, and its radiation dose estimates would allow for a maximal administered radioactivity of 476 MBq in human subjects. Copyright © 2012 Elsevier Inc. All rights reserved.
Parametric system identification of catamaran for improving controller design
NASA Astrophysics Data System (ADS)
Timpitak, Surasak; Prempraneerach, Pradya; Pengwang, Eakkachai
2018-01-01
This paper presents an estimation of a simplified dynamic model for only the surge and yaw motions of a catamaran, using system identification (SI) techniques to determine the associated unknown parameters. These methods will enhance the performance of designing processes for the motion control system of an Unmanned Surface Vehicle (USV). The simulation results demonstrate an effective way to solve for damping forces and to determine added masses by applying least-squares and AutoRegressive Exogenous (ARX) methods. Both methods are then evaluated according to estimated parametric errors from the vehicle's dynamic model. The ARX method, which yields better estimated accuracy, can then be applied to identify unknown parameters as well as to help improve a controller design of a real unmanned catamaran.
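As an illustration of the ARX step, the following minimal sketch fits a first-order ARX model by ordinary least squares; the single-channel model and all values are invented stand-ins for the catamaran's surge/yaw dynamics.

```python
# Sketch: least-squares fit of a first-order ARX model
#   y[k] = -a1*y[k-1] + b1*u[k-1] + e[k]
import numpy as np

rng = np.random.default_rng(2)
n = 500
u = rng.normal(size=n)                    # excitation input (e.g., thrust command)
y = np.zeros(n)
for k in range(1, n):                     # "true" system with a1 = -0.9, b1 = 0.5
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()

Phi = np.column_stack([-y[:-1], u[:-1]])  # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a1, b1 = theta
print(f"a1 = {a1:.3f} (true -0.9), b1 = {b1:.3f} (true 0.5)")
```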
Kadji, Caroline; De Groof, Maxime; Camus, Margaux F; De Angelis, Riccardo; Fellas, Stéphanie; Klass, Magdalena; Cecotti, Vera; Dütemeyer, Vivien; Barakat, Elie; Cannie, Mieke M; Jani, Jacques C
2017-01-01
The aim of this study was to apply a semi-automated calculation method of fetal body volume and, thus, of magnetic resonance-estimated fetal weight (MR-EFW) prior to planned delivery and to evaluate whether the technique of measurement could be simplified while remaining accurate. MR-EFW was calculated using a semi-automated method at 38.6 weeks of gestation in 36 patients and compared to the picture archiving and communication system (PACS). Per patient, 8 sequences were acquired with a slice thickness of 4-8 mm and an intersection gap of 0, 4, 8, 12, 16, or 20 mm. The median absolute relative errors for MR-EFW and the time of planimetric measurements were calculated for all 8 sequences and for each method (assisted vs. PACS), and the difference between the methods was calculated. The median delivery weight was 3,280 g. The overall median relative error for all 288 MR-EFW calculations was 2.4% using the semi-automated method and 2.2% for the PACS method. Measurements did not differ between the 8 sequences using the assisted method (p = 0.313) or the PACS (p = 0.118), while the time of planimetric measurement decreased significantly with a larger gap (p < 0.001) and in the assisted method compared to the PACS method (p < 0.01). Our simplified MR-EFW measurement showed a dramatic decrease in time of planimetric measurement without a decrease in the accuracy of weight estimates. © 2017 S. Karger AG, Basel.
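A hedged sketch of the slice-summation (Cavalieri) volume estimate implied by the thickness-plus-gap acquisition; the segmented areas and the linear volume-to-weight line are hypothetical placeholders, not the study's regression.

```python
# Sketch: fetal body volume from planimetric slice areas by slice summation
# (Cavalieri principle); each slice contributes area * (thickness + gap).
def body_volume_ml(slice_areas_cm2, thickness_mm, gap_mm):
    spacing_cm = (thickness_mm + gap_mm) / 10.0
    return sum(a * spacing_cm for a in slice_areas_cm2)   # cm^3 == ml

areas = [55.0, 180.0, 240.0, 265.0, 250.0, 190.0, 90.0]   # segmented areas, cm^2
vol = body_volume_ml(areas, thickness_mm=6.0, gap_mm=12.0)
efw = 1.03 * vol + 120.0     # placeholder linear volume-weight relation, not the study's
print(f"volume = {vol:.0f} ml, estimated weight = {efw:.0f} g")
```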
A simplified method for monomeric carbohydrate analysis of corn stover biomass
USDA-ARS?s Scientific Manuscript database
Constituent determination of biomass for theoretical ethanol yield (TEY) estimation requires the removal of non-structural carbohydrates prior to analysis to prevent interference with the analytical procedure. According to the accepted U.S. Dept. of Energy-National Renewable Energy Laboratory (NREL)...
Simplification of an MCNP model designed for dose rate estimation
NASA Astrophysics Data System (ADS)
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
NASA Astrophysics Data System (ADS)
Agata, Ryoichiro; Ichimura, Tsuyoshi; Hori, Takane; Hirahara, Kazuro; Hashimoto, Chihiro; Hori, Muneo
2018-04-01
The simultaneous estimation of the asthenosphere's viscosity and coseismic slip/afterslip is expected to largely improve the consistency of the estimation results with crustal deformation data collected at widely spread observation points, compared to estimations of slips only. Such an estimate can be formulated as a non-linear inverse problem of the material property of viscosity and an input force that is equivalent to fault slips, based on large-scale finite-element (FE) modeling of crustal deformation, in which the number of degrees of freedom is on the order of 10^9. We formulated and developed a computationally efficient adjoint-based estimation method for this inverse problem, together with a fast and scalable FE solver for the associated forward and adjoint problems. In a numerical experiment that imitates the 2011 Tohoku-Oki earthquake, the advantage of the proposed method is confirmed by comparing the estimated results with those obtained using simplified estimation methods. The computational cost required for the optimization shows that the proposed method enabled the targeted estimation to be completed with a moderate amount of computational resources.
NASA Astrophysics Data System (ADS)
Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio
The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2-GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. The MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
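To illustrate the kind of simplified approach named above, here is a minimal point-source line-of-sight estimate in which the uncollided dose rate behind slab shielding falls off as exp(-mu*t)/(4*pi*d^2); the source strength, attenuation coefficient, and flux-to-dose factor are illustrative assumptions (the paper's models, and any real design, would also include buildup and validated data).

```python
# Sketch of a point-source line-of-sight shielding estimate.
import math

def los_dose_rate(S, mu_t_pairs, d_m, flux_to_dose):
    """S: source emission rate [1/s]; mu_t_pairs: (mu [1/cm], t [cm]) per layer;
    d_m: source-to-detector distance [m]; flux_to_dose: dose per unit fluence."""
    attenuation = math.exp(-sum(mu * t for mu, t in mu_t_pairs))
    fluence_rate = S * attenuation / (4.0 * math.pi * (100.0 * d_m) ** 2)
    return fluence_rate * flux_to_dose

# illustrative numbers: 1 m of ordinary concrete, detector 3 m from the source
rate = los_dose_rate(S=1e12, mu_t_pairs=[(0.094, 100.0)],
                     d_m=3.0, flux_to_dose=1.2e-9)
print(f"{rate:.3e} dose units per second")
```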
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-10-01
Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using the derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on a single electronic bound energy of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A147
Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi
2015-01-01
We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by a 6-axis motion tracking device that combines a 3-axis accelerometer and a 3-axis gyroscope placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system. However, in this research, joint torque is estimated by using only three link angles of the body, thigh, and shank. The utility of our method was verified by experiments in which we measured knee- and waist-bending motion simultaneously. As a result, we were able to estimate the lumbar joint torque from the measured motion.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Assess the availability of electronic and information technology that meets all or part of the applicable... soliciting offers for acquisitions with an estimated value in excess of the simplified acquisition threshold; (iii) Before soliciting offers for acquisitions with an estimated value less than the simplified...
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Assess the availability of electronic and information technology that meets all or part of the applicable... soliciting offers for acquisitions with an estimated value in excess of the simplified acquisition threshold; (iii) Before soliciting offers for acquisitions with an estimated value less than the simplified...
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Assess the availability of electronic and information technology that meets all or part of the applicable... soliciting offers for acquisitions with an estimated value in excess of the simplified acquisition threshold; (iii) Before soliciting offers for acquisitions with an estimated value less than the simplified...
NASA Technical Reports Server (NTRS)
York, P.; Labell, R. W.
1980-01-01
An aircraft wing weight estimating method based on a component buildup technique is described. A simplified analytically derived beam model, modified by a regression analysis, is used to estimate the wing box weight, utilizing a data base of 50 actual airplane wing weights. Factors representing materials and methods of construction were derived and incorporated into the basic wing box equations. Weight penalties to the wing box for fuel, engines, landing gear, stores and fold or pivot are also included. Methods for estimating the weight of additional items (secondary structure, control surfaces) have the option of using details available at the design stage (i.e., wing box area, flap area) or default values based on actual aircraft from the data base.
Bi-fluorescence imaging for estimating accurately the nuclear condition of Rhizoctonia spp.
USDA-ARS?s Scientific Manuscript database
Aims: To simplify the determination of the nuclear condition of the pathogenic Rhizoctonia, which currently needs to be performed either using two fluorescent dyes, thus is more costly and time-consuming, or using only one fluorescent dye, and thus less accurate. Methods and Results: A red primary ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
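A minimal control-variates sketch, with a toy integrand standing in for the porous-media transport model: subtracting c(X - E[X]) with the optimal coefficient c* = Cov(Y, X)/Var(X) reduces the variance of the Monte Carlo estimate of E[Y].

```python
# Minimal sketch: control variates for a Monte Carlo mean estimate.
import numpy as np

rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)
Y = np.exp(u)                       # target: E[Y] = e - 1
X = u                               # control variate with known mean E[X] = 0.5
c = np.cov(Y, X)[0, 1] / np.var(X)  # optimal coefficient c* = Cov(Y,X)/Var(X)
est_plain = Y.mean()
est_cv = (Y - c * (X - 0.5)).mean() # variance-reduced estimator
print(est_plain, est_cv, np.e - 1)
```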
A challenge in PBPK model development is estimating the parameters for absorption, distribution, metabolism, and excretion of the parent compound and metabolites of interest. One approach to reduce the number of parameters has been to simplify pharmacokinetic models by lumping p...
Simplified MPN method for enumeration of soil naphthalene degraders using gaseous substrate.
Wallenius, Kaisa; Lappi, Kaisa; Mikkonen, Anu; Wickström, Annika; Vaalama, Anu; Lehtinen, Taru; Suominen, Leena
2012-02-01
We describe a simplified microplate most-probable-number (MPN) procedure to quantify the bacterial naphthalene degrader population in soil samples. In this method, the sole substrate naphthalene is dosed passively via gaseous phase to liquid medium and the detection of growth is based on the automated measurement of turbidity using an absorbance reader. The performance of the new method was evaluated by comparison with a recently introduced method in which the substrate is dissolved in inert silicone oil and added individually to each well, and the results are scored visually using a respiration indicator dye. Oil-contaminated industrial soil showed slightly but significantly higher MPN estimate with our method than with the reference method. This suggests that gaseous naphthalene was dissolved in an adequate concentration to support the growth of naphthalene degraders without being too toxic. The dosing of substrate via gaseous phase notably reduced the work load and risk of contamination. The result scoring by absorbance measurement was objective and more reliable than measurement with indicator dye, and it also enabled further analysis of cultures. Several bacterial genera were identified by cloning and sequencing of 16S rRNA genes from the MPN wells incubated in the presence of gaseous naphthalene. In addition, the applicability of the simplified MPN method was demonstrated by a significant positive correlation between the level of oil contamination and the number of naphthalene degraders detected in soil.
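For readers unfamiliar with MPN arithmetic, this is a hedged sketch of the maximum-likelihood MPN estimate from a scored dilution series (the scoring itself, e.g., by an absorbance threshold, is assumed already done); the example counts are invented.

```python
# Sketch: maximum-likelihood MPN estimate from a dilution series. For each
# dilution i, n[i] wells containing volume v[i] (ml of original sample) are
# scored and g[i] are positive. Solves d(logL)/d(lam) = 0 by bisection.
import math

def mpn(v, n, g, lo=1e-6, hi=1e6):
    def score(lam):                 # derivative of the log-likelihood, decreasing in lam
        return sum(gi * vi * math.exp(-lam * vi) / (1.0 - math.exp(-lam * vi))
                   - (ni - gi) * vi
                   for vi, ni, gi in zip(v, n, g))
    for _ in range(200):            # bisection in log space
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)       # organisms per ml

# three 10-fold dilutions, 5 wells each, invented positives
print(f"MPN = {mpn(v=[0.1, 0.01, 0.001], n=[5, 5, 5], g=[5, 3, 1]):.1f} per ml")
```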
Branching-ratio approximation for the self-exciting Hawkes process
NASA Astrophysics Data System (ADS)
Hardiman, Stephen J.; Bouchaud, Jean-Philippe
2014-12-01
We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
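A minimal sketch of the count-statistics estimator described above, using the standard result that for a stationary Hawkes process with branching ratio n the Fano factor of counts in long windows approaches 1/(1-n)^2; the finite-window bias corrections discussed in the paper are omitted here.

```python
# Sketch: model-independent branching-ratio estimate for a Hawkes process.
# For window length W much longer than the kernel's memory,
#   Var[N]/E[N] ~ 1/(1-n)^2,  so  n ~ 1 - sqrt(E[N]/Var[N]).
import numpy as np

def branching_ratio(event_times, window):
    t = np.asarray(event_times)
    edges = np.arange(t.min(), t.max(), window)
    counts, _ = np.histogram(t, bins=edges)
    m, v = counts.mean(), counts.var()
    return 1.0 - np.sqrt(m / v)

# sanity check on a Poisson process, where n should be near 0
rng = np.random.default_rng(4)
poisson_times = np.cumsum(rng.exponential(1.0, size=200_000))
print(f"estimated n = {branching_ratio(poisson_times, window=500.0):.3f}")
```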
VERIFICATION OF SIMPLIFIED PROCEDURES FOR SITE- SPECIFIC SO2 AND NOX CONTROL COST ESTIMATES
The report documents results of an evaluation to verify the accuracy of simplified procedures for estimating sulfur dioxide (S02) and nitrogen oxides (NOx) retrofit control costs and performance for 200 502-emitting coal-fired power plants in the 31-state eastern region. nitially...
Lysimetric evaluation of simplified surface energy balance approach in the Texas high plains
Gowda, P.H.; Senay, G.B.; Howell, T.A.; Marek, T.H.
2009-01-01
Numerous energy balance (EB) algorithms have been developed to make use of remote sensing data to estimate evapotranspiration (ET) regionally. However, most EB models are complex to use and efforts are being made to simplify procedures mainly through the scaling of reference ET. The Simplified Surface Energy Balance (SSEB) is one such method. This approach has never been evaluated using measured ET data. In this study, the SSEB approach was applied to 14 Landsat TM images covering a major portion of the Southern High Plains that were acquired during 2006 and 2007 cropping seasons. Performance of the SSEB was evaluated by comparing estimated ET with measured daily ET from four large monolithic lysimeters at the USDA-ARS Conservation and Production Research Laboratory, Bushland, Texas. Statistical evaluation of results indicated that the SSEB accounted for 84% of the variability in the measured ET values with a slope and intercept of 0.75 and 1.1 mm d-1, respectively. Considering the minimal amount of ancillary data required and excellent performance in predicting daily ET, the SSEB approach is a promising tool for mapping ET in the semiarid Texas High Plains and in other parts of the world with similar hydro-climatic conditions.
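A hedged sketch of the core SSEB computation: each pixel's ET fraction is set by its surface temperature relative to selected hot (bare/stressed) and cold (well-watered) reference pixels, and then scales a reference ET; the temperatures and reference ET below are illustrative.

```python
# Sketch of the SSEB ET-fraction calculation.
import numpy as np

def sseb_eta(ts, t_hot, t_cold, et_ref):
    """ts: surface temperature grid [K]; et_ref: reference ET [mm/day]."""
    etf = (t_hot - ts) / (t_hot - t_cold)   # ET fraction per pixel
    return np.clip(etf, 0.0, 1.05) * et_ref

ts = np.array([[301.0, 308.5], [314.2, 317.9]])
print(sseb_eta(ts, t_hot=318.0, t_cold=300.0, et_ref=8.0))  # mm/day
```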
2016-12-01
Figure-list residue from a report on estimating metabolic rate from core temperature using the SCENARIO thermoregulatory model; the listed figures show core temperature (TC) and metabolic rate (Ṁ) at the Edgewood and Hayward training sites on Days 1 and 2.
Carmona-Jiménez, Yolanda; García-Moreno, M Valme; Igartuburu, Jose M; Garcia Barroso, Carmelo
2014-12-15
The DPPH assay is one of the most commonly employed methods for measuring antioxidant activity. Even though this method is considered very simple and efficient, it does present various limitations which make it complicated to perform. The range of linearity between the DPPH inhibition percentage and sample concentration has been studied with a view to simplifying the method for characterising samples of wine origin. It has been concluded that all the samples are linear in a range of inhibition below 40%, which allows the analysis to be simplified. A new parameter more appropriate for the simplification, the EC20, has been proposed to express the assay results. Additionally, the reaction time was analysed with the object of avoiding the need for kinetic studies in the method. The simplifications considered offer a more functional method, without significant errors, which could be used for routine analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
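A minimal sketch of the simplified workflow: restrict the inhibition-versus-concentration data to the reported linear range (below 40% inhibition), fit a line, and read off the EC20; the data points are invented for illustration.

```python
# Sketch: EC20 from the linear low-inhibition range of a DPPH assay.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])        # sample concentration
inhib = np.array([6.0, 11.5, 22.8, 33.1, 44.0, 61.0])  # % DPPH inhibition

mask = inhib < 40.0                                    # keep the linear range only
slope, intercept = np.polyfit(conc[mask], inhib[mask], 1)
ec20 = (20.0 - intercept) / slope                      # concentration at 20% inhibition
print(f"EC20 = {ec20:.2f} concentration units")
```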
Simplified Equations to Estimate Flushline Diameter for Subsurface Drip Irrigation Systems
USDA-ARS?s Scientific Manuscript database
A formulation of the Hazen-Williams equation is typically used to determine the diameter of the common flushline that is often used at the distal end of subsurface drip irrigation systems to aid in joint flushing of a group of driplines. Although this method is accurate, its usage is not intuitive a...
12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...
12 CFR Appendix C to Part 325 - Risk-Based Capital for State Nonmember Banks: Market Risk
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10Standardized Measurement Method for Specific Risk Section 11Simplified Supervisory Formula Approach Section... apply: Affiliate with respect to a company means any company that controls, is controlled by, or is under common control with, the company. Backtesting means the comparison of a bank's internal estimates...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1987-12-10
A new semi-empirical method, based on the use of the P-factor (P = N_p N_n / (N_p + N_n)), is shown to simplify significantly the systematics of atomic masses. Its use is illustrated for actinide nuclei, where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rather than by extrapolation, as is common in other mass models.
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, Xin-ran; Wang, Xin
2017-04-01
When the genetic algorithm is used to solve the problem of too-short-arc (TSA) orbit determination, the original method for outlier deletion is no longer applicable because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is realized by introducing different loss functions into the fitness function, which solves the outlier problem of TSA orbit determination. Compared with the classical method, the genetic algorithm is greatly simplified by introducing these loss functions. Through a comparison of the calculations with multiple loss functions, it is found that the least median square (LMS) estimation and least trimmed square (LTS) estimation can greatly improve the robustness of the TSA orbit determination, and have a high breakdown point.
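For concreteness, hedged sketches of the two robust losses named above, as they might be used inside a GA fitness function; the residual vector is invented.

```python
# Sketch: robust fitness values for a residual vector.
# LMS = median of squared residuals; LTS = sum of the h smallest squared residuals.
import numpy as np

def lms(residuals):
    return np.median(residuals ** 2)

def lts(residuals, h=None):
    r2 = np.sort(residuals ** 2)
    h = h if h is not None else int(0.75 * r2.size)   # trim the worst 25%
    return r2[:h].sum()

r = np.array([0.1, -0.2, 0.15, -0.05, 9.0])           # one gross outlier
print(f"SSE={np.sum(r**2):.2f}  LMS={lms(r):.3f}  LTS={lts(r):.3f}")
```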
NASA Astrophysics Data System (ADS)
Senay, G. B.; Budde, M. E.; Allen, R. G.; Verdin, J. P.
2008-12-01
Evapotranspiration (ET) is an important component of the hydrologic budget because it expresses the exchange of mass and energy between the soil-water-vegetation system and the atmosphere. Since direct measurement of ET is difficult, various modeling methods are used to estimate actual ET (ETa). Generally, the choice of method for ET estimation depends on the objective of the study and is further limited by the availability of data and the desired accuracy of the ET estimate. Operational monitoring of crop performance requires processing large data sets and a quick response time. A Simplified Surface Energy Balance (SSEB) model was developed by the U.S. Geological Survey's Famine Early Warning Systems Network to estimate irrigation water use in remote places of the world. In this study, we evaluated the performance of the SSEB model against the METRIC (Mapping Evapotranspiration at high Resolution and with Internalized Calibration) model, which has been evaluated by several researchers using lysimeter data. The METRIC model has been proven to provide reliable ET estimates in different regions of the world. Reference ET fractions of both models (ETrF of METRIC vs. ETf of SSEB) were generated and compared using individual Landsat thermal images collected from 2000 through 2005 in Idaho, New Mexico, and California. In addition, the models were compared using monthly and seasonal total ETa estimates. The SSEB model reproduced both the spatial and temporal variability exhibited by METRIC on land surfaces, explaining up to 80 percent of the spatial variability. However, the ETa estimates over water bodies were systematically higher in the SSEB output, which could be improved by using a correction coefficient to take into account the absorption of solar energy by deeper water layers that contributes little to the ET process. This study demonstrated the usefulness of the SSEB method for large-scale agro-hydrologic applications for operational monitoring and assessing of crop performance and regional water balance dynamics.
77 FR 54482 - Allocation of Costs Under the Simplified Methods
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... certain costs to the property and that allocate costs under the simplified production method or the simplified resale method. The proposed regulations provide rules for the treatment of negative additional...
NASA Astrophysics Data System (ADS)
Balcerzak, Marek; Dąbrowski, Artur; Pikunov, Danylo
2018-01-01
This paper presents a practical application of a new, simplified method of Lyapunov exponent estimation. The method has been applied to optimization of a real, nonlinear inverted pendulum system. The authors present how the algorithm for Largest Lyapunov Exponent (LLE) estimation can be applied to evaluate control system performance, and a new LLE-based control performance index is proposed. Equations of the inverted pendulum system of the fourth order have been found. The nonlinear friction of the regulation object has been identified by means of the nonlinear least squares method; three different friction models have been tested: linear, cubic, and Coulomb. The Differential Evolution (DE) algorithm has been used to search for the best set of parameters of the general linear regulator. This work proves that the proposed method is efficient and results in faster perturbation rejection, especially when disturbances are significant.
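A minimal sketch of the classic two-trajectory LLE estimate with renormalization; a chaotic logistic map stands in here for the paper's inverted-pendulum dynamics, so this illustrates the algorithm rather than the authors' application.

```python
# Sketch: two-trajectory Largest Lyapunov Exponent estimate with renormalization.
import math

def lle(step, x0, d0=1e-8, n_steps=100_000):
    x, y = x0, x0 + d0
    total = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = abs(y - x)
        if d == 0.0:                 # degenerate step: re-seed the perturbation
            y = x + d0
            continue
        total += math.log(d / d0)
        y = x + d0 * (y - x) / d     # rescale the separation back to d0
    return total / n_steps           # exponent per iteration

logistic = lambda x: 4.0 * x * (1.0 - x)
print(f"LLE = {lle(logistic, 0.3):.3f}  (theory for this map: ln 2 = {math.log(2):.3f})")
```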
A model for the sustainable selection of building envelope assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huedo, Patricia, E-mail: huedo@uji.es; Mulet, Elena, E-mail: emulet@uji.es; López-Mesa, Belinda, E-mail: belinda@unizar.es
2016-02-15
The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.
Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.
Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew
2017-08-10
When the center of gravity is used to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters, like window size, often require careful optimization to balance the noise error, dynamic range, and linearity of the response coefficient under different photon flux, and the method must be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixel from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear coefficient response, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for limited-size extended sources. A hardware implementation of the proposed estimator is discussed.
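As a point of reference for what the stream processor computes, here is a plain windowed center-of-gravity estimate with background suppression; the threshold and synthetic spot are illustrative.

```python
# Sketch: background-subtracted center of gravity over a pixel window.
import numpy as np

def cog(spot, threshold=0.0):
    img = np.clip(spot - threshold, 0.0, None)       # suppress background
    total = img.sum()
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xs * img).sum() / total, (ys * img).sum() / total

# usage on a synthetic Gaussian spot centered at (x, y) = (6.5, 4.2)
y, x = np.mgrid[0:12, 0:12]
spot = np.exp(-((x - 6.5) ** 2 + (y - 4.2) ** 2) / 2.0) + 0.01
print(cog(spot, threshold=0.02))
```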
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A; Pasciak, A
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. The purpose of this study was to assess the accuracy of different indirect dose estimates and to determine if PSD can be calculated within ±50% for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures. Indirect dose metrics from procedures were collected, including reference air kerma (RAK). Four different estimates of PSD were calculated and compared along with RAK to the measured PSD. The indirect estimates included a standard method, use of detailed information from the RDSR, and two simplified calculation methods. Indirect dosimetry was compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the indirect estimates were examined. Results: PSD calculated with the standard calculation method were within ±50% for all 41 procedures. This was also true for a simplified method using a single source-to-patient distance (SPD) for all calculations. RAK was within ±50% for all but one procedure. Cases for which RAK or calculated PSD exhibited large differences from the measured PSD were analyzed, and two causative factors were identified: 'extreme' SPD and large contributions to RAK from rotational angiography or runs acquired at large gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±50% for embolization procedures, and usually to within ±35%. RAK can be used without modification to set notification limits and substantial radiation dose levels. These results can be extended to similar procedures, including vascular and interventional oncology. Film dosimetry is likely an unnecessary effort for these types of procedures.
Tuo, Rui; Jeff Wu, C. F.
2016-07-19
Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach is presented to estimate them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
48 CFR 529.401-70 - Purchases at or under the simplified acquisition threshold.
Code of Federal Regulations, 2012 CFR
2012-10-01
... simplified acquisition threshold. 529.401-70 Section 529.401-70 Federal Acquisition Regulations System... Purchases at or under the simplified acquisition threshold. Insert 552.229-70, Federal, State, and Local Taxes, in purchases and contracts estimated to exceed the micropurchase threshold, but not the...
NASA Astrophysics Data System (ADS)
Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen
2018-02-01
A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.
8760-Based Method for Representing Variable Generation Capacity Value in Capacity Expansion Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A
Capacity expansion models (CEMs) are widely used to evaluate the least-cost portfolio of electricity generators, transmission, and storage needed to reliably serve load over many years or decades. CEMs can be computationally complex and are often forced to estimate key parameters using simplified methods to achieve acceptable solve times or for other reasons. In this paper, we discuss one of these parameters -- capacity value (CV). We first provide a high-level motivation for and overview of CV. We next describe existing modeling simplifications and an alternate approach for estimating CV that utilizes hourly '8760' data of load and variable generation (VG) resources. We then apply this 8760 method to an established CEM, the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016). While this alternative approach for CV is not itself novel, it contributes to the broader CEM community by (1) demonstrating how a simplified 8760 hourly method, which can be easily implemented in other power sector models when data is available, more accurately captures CV trends than a statistical method within the ReEDS CEM, and (2) providing a flexible modeling framework from which other 8760-based system elements (e.g., demand response, storage, and transmission) can be added to further capture important dynamic interactions, such as curtailment.
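A hedged sketch of the 8760 idea: approximate a variable generator's capacity value as its average per-unit output over the year's highest-load hours (a fuller treatment would net VG from load and iterate); the profiles and the top-hour count are invented.

```python
# Sketch: 8760-based capacity-value (CV) estimate for a variable generator.
import numpy as np

def capacity_value(load, vg_gen, vg_cap, top_hours=100):
    idx = np.argsort(load)[-top_hours:]     # indices of the peak-load hours
    return vg_gen[idx].mean() / vg_cap      # mean per-unit output in those hours

rng = np.random.default_rng(5)
hours = np.arange(8760)
load = 800.0 + 200.0 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 40, 8760)
solar = 100.0 * np.clip(np.sin(2 * np.pi * hours / 24), 0, None)  # 100 MW plant
print(f"CV = {capacity_value(load, solar, vg_cap=100.0):.2f}")
```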
Relative dynamics and motion control of nanosatellite formation flying
NASA Astrophysics Data System (ADS)
Pimnoo, Ammarin; Hiraki, Koju
2016-04-01
Orbit selection is a necessary factor in nanosatellite formation mission design; meanwhile, keeping the formation requires fuel. Therefore, the best orbit design for nanosatellite formation flying should be one that requires the minimum fuel consumption. The purpose of this paper is to analyse orbit selection with respect to minimum fuel consumption, to provide a convenient way to estimate the fuel consumption for keeping nanosatellite formation flying, and to present a simplified method of formation control. The formation structure is disturbed by the J2 gravitational perturbation and other perturbing accelerations such as atmospheric drag. First, Gauss' variational equations (GVE) are used to estimate the essential ΔV due to the J2 perturbation and atmospheric drag. The essential ΔV indicates which orbit is best with respect to minimum fuel consumption. Then, the linear equations of Schweighart-Sedwick, which account for the J2 gravitational perturbation, are presented and used to estimate the fuel consumption to maintain the formation structure. Finally, the relative dynamics of the motion are presented as well as a simplified motion control of the formation structure using GVE.
NASA Technical Reports Server (NTRS)
Ungar, Eugene K.; Richards, W. Lance
2015-01-01
The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method for calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact tool, final verification of the dewar pressure vessel design requires a complete, detailed real fluid compressible flow model of the vent stack. The wall heat flux resulting from a loss of vacuum insulation increases the dewar pressure, which actuates the pressure relief mechanism and results in high-speed flow through the dewar vent stack. At high pressures, the flow can be choked at the vent stack inlet, at the exit, or at an intermediate transition or restriction. During previous SOFIA analyses, it was observed that there was generally a readily identifiable section of the vent stack that would limit the flow – e.g., a small diameter entrance or an orifice. It was also found that when the supercritical helium was approximated as an ideal gas at the dewar condition, the calculated mass flow rate based on choking at the limiting entrance or transition was less than the mass flow rate calculated using the detailed real fluid model. Using this lower mass flow rate would yield a conservative prediction of the dewar's wall heat flux capability. The simplified method of the current work was developed by building on this observation.
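A minimal sketch of the conservative choked-flow approximation described above, treating the helium as an ideal gas at stagnation conditions; the geometry, conditions, and discharge coefficient are illustrative, and a real dewar analysis would still require the detailed model.

```python
# Sketch: ideal-gas choked (critical) mass flow through the limiting vent
# restriction,  mdot = Cd*A*p0*sqrt(gamma/(R*T0)) * (2/(gamma+1))^((gamma+1)/(2(gamma-1)))
import math

def choked_mdot(p0, T0, area, gamma=5.0 / 3.0, R=2077.0, cd=1.0):
    """p0 [Pa], T0 [K]: stagnation conditions; area [m^2]; R [J/(kg K)] for helium."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0 * math.sqrt(gamma / (R * T0)) * term

d = 0.01                                   # assumed 10 mm limiting bore
mdot = choked_mdot(p0=2.0e5, T0=10.0, area=math.pi * d * d / 4.0)
print(f"choked flow ~ {mdot:.3f} kg/s")
```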
NASA Astrophysics Data System (ADS)
Lee, Ming-Wei; Chen, Yi-Chun
2014-02-01
In pinhole SPECT applied to small-animal studies, it is essential to have an accurate imaging system matrix, called the H matrix, for high-spatial-resolution image reconstructions. Generally, an H matrix can be obtained by various methods, such as measurements, simulations, or some combination of both. In this study, a distance-weighted Gaussian interpolation method combined with geometric parameter estimations (DW-GIMGPE) is proposed. It utilizes a simplified grid-scan experiment on selected voxels and parameterizes the measured point response functions (PRFs) into 2D Gaussians. The PRFs of missing voxels are interpolated by the relations between the Gaussian coefficients and the geometric parameters of the imaging system with distance-weighting factors. The weighting factors are related to the projected centroids of voxels on the detector plane. A full H matrix is constructed by combining the measured and interpolated PRFs of all voxels. The PRFs estimated by DW-GIMGPE showed similar profiles as the measured PRFs. OSEM-reconstructed images of a hot-rod phantom and normal rat myocardium demonstrated the effectiveness of the proposed method. The detectability of a SKE/BKE task on a synthetic spherical test object verified that the constructed H matrix provided comparable detectability to that of the H matrix acquired by a full 3D grid-scan experiment. The reduction in the acquisition time of a full 1.0-mm grid H matrix was about 15.2 and 62.2 times with the simplified grid pattern on 2.0-mm and 4.0-mm grids, respectively. A finer-grid H matrix down to 0.5-mm spacing interpolated by the proposed method would shorten the acquisition time by a further factor of 8.
A Rapid Empirical Method for Estimating the Gross Takeoff Weight of a High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mack, Robert J.
1999-01-01
During the cruise segment of the flight mission, aircraft flying at supersonic speeds generate sonic booms that are usually maximum at the beginning of cruise. The pressure signature with the shocks causing these perceived booms can be predicted if the aircraft's geometry, Mach number, altitude, angle of attack, and cruise weight are known. Most methods for estimating aircraft weight, especially beginning-cruise weight, are empirical and based on least-squares-fit equations that best represent a body of component weight data. The empirical method discussed in this report used simplified weight equations based on a study of performance and weight data from conceptual and real transport aircraft. Like other weight-estimation methods, weights were determined at several points in the mission. While these additional weights were found to be useful, it is the determination of beginning-cruise weight that is most important for the prediction of the aircraft's sonic-boom characteristics.
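To illustrate weight bookkeeping at several mission points, here is a generic fuel-fraction estimate of beginning-cruise weight; the fractions are textbook-style placeholders, not the report's regression-based component equations.

```python
# Sketch: beginning-cruise weight from takeoff gross weight via assumed
# mission-segment fuel fractions (a generic bookkeeping device only).
def begin_cruise_weight(togw, segment_fractions=(0.990, 0.995, 0.960)):
    """segment_fractions: W_end/W_start for warm-up/taxi, takeoff, climb."""
    w = togw
    for f in segment_fractions:
        w *= f
    return w

togw = 340_000.0                      # kg, hypothetical HSCT takeoff gross weight
print(f"beginning-cruise weight ~ {begin_cruise_weight(togw):,.0f} kg")
```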
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
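A hedged sketch of fitting one S-system-type equation with a simple power-law efflux to noisy metabolite data; the constant influx, synthetic data, and optimizer settings are assumptions for illustration, not the authors' full formulation.

```python
# Sketch: fit one S-system-type equation with a power-law efflux,
#   dX/dt = influx - beta * X**h,
# to noisy metabolite time-series data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0.0, 10.0, 25)
rng = np.random.default_rng(6)
true = (2.0, 0.8, 1.5)                        # influx, beta, h

def simulate(params):
    a, b, h = params
    sol = solve_ivp(lambda t, x: a - b * x[0] ** h, (0.0, 10.0), [0.1],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

x_obs = simulate(true) + rng.normal(0.0, 0.02, t_obs.size)   # synthetic data

fit = least_squares(lambda p: simulate(p) - x_obs, x0=[1.0, 1.0, 1.0],
                    bounds=([0, 0, 0.1], [10, 10, 5]))
print("estimated (influx, beta, h):", np.round(fit.x, 3))
```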
Hageman, Philip L.; Seal, Robert R.; Diehl, Sharon F.; Piatak, Nadine M.; Lowers, Heather
2015-01-01
A comparison study of selected static leaching and acid–base accounting (ABA) methods using a mineralogically diverse set of 12 modern-style, metal mine waste samples was undertaken to understand the relative performance of the various tests. To complement this study, in-depth mineralogical studies were conducted in order to elucidate the relationships between sample mineralogy, weathering features, and leachate and ABA characteristics. In part one of the study, splits of the samples were leached using six commonly used leaching tests including paste pH, the U.S. Geological Survey (USGS) Field Leach Test (FLT) (both 5-min and 18-h agitation), the U.S. Environmental Protection Agency (USEPA) Method 1312 SPLP (both leachate pH 4.2 and leachate pH 5.0), and the USEPA Method 1311 TCLP (leachate pH 4.9). Leachate geochemical trends were compared in order to assess differences, if any, produced by the various leaching procedures. Results showed that the FLT (5-min agitation) was just as effective as the 18-h leaching tests in revealing the leachate geochemical characteristics of the samples. Leaching results also showed that the TCLP leaching test produces inconsistent results when compared to results produced from the other leaching tests. In part two of the study, the ABA was determined on splits of the samples using both well-established traditional static testing methods and a relatively quick, simplified net acid–base accounting (NABA) procedure. Results showed that the traditional methods, while time consuming, provide the most in-depth data on both the acid generating, and acid neutralizing tendencies of the samples. However, the simplified NABA method provided a relatively fast, effective estimation of the net acid–base account of the samples. Overall, this study showed that while most of the well-established methods are useful and effective, the use of a simplified leaching test and the NABA acid–base accounting method provide investigators fast, quantitative tools that can be used to provide rapid, reliable information about the leachability of metals and other constituents of concern, and the acid-generating potential of metal mining waste.
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
NASA Technical Reports Server (NTRS)
Emrich, Bill
2006-01-01
A simple method of estimating vehicle parameters appropriate for interplanetary travel can provide a useful tool for evaluating the suitability of particular propulsion systems to various space missions. Although detailed mission analyses for interplanetary travel can be quite complex, it is possible to derive fairly simple correlations which will provide reasonable trip time estimates to the planets. In the present work, it is assumed that a constant-thrust propulsion system propels a spacecraft on a round trip mission having equidistant outbound and inbound legs, in which the spacecraft accelerates during the first portion of each leg of the journey and decelerates during the last portion. Comparisons are made with numerical calculations from low-thrust trajectory codes to estimate the range of applicability of the simplified correlations.
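A minimal worked example of the assumed trajectory: at constant acceleration a, a leg of length d flown accelerate-then-decelerate with zero endpoint speeds takes t = 2*sqrt(d/a), so the round trip takes twice that (non-relativistic, constant vehicle mass assumed; the distance and acceleration below are illustrative).

```python
# Sketch: round-trip time for equidistant accelerate/decelerate legs.
import math

AU = 1.496e11                              # meters per astronomical unit

def round_trip_days(distance_au, accel_g):
    a = accel_g * 9.81
    one_leg = 2.0 * math.sqrt(distance_au * AU / a)   # accelerate half, brake half
    return 2.0 * one_leg / 86400.0

print(f"0.78 AU leg at 0.001 g: {round_trip_days(0.78, 0.001):.0f} days round trip")
```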
Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki
2016-01-01
Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
State and force observers based on multibody models and the indirect Kalman filter
NASA Astrophysics Data System (ADS)
Sanjurjo, Emilio; Dopico, Daniel; Luaces, Alberto; Naya, Miguel Ángel
2018-06-01
The aim of this work is to present two new methods to provide state observers by combining multibody simulations with indirect extended Kalman filters. One of the methods presented also provides input force estimation. The observers have been applied to two mechanisms with four different sensor configurations and compared to other multibody-based observers found in the literature, namely the unscented Kalman filter (UKF) and the indirect extended Kalman filter with simplified Jacobians (errorEKF), to evaluate their behavior. The new methods have a somewhat higher computational cost than the errorEKF, but still much less than the UKF. Regarding accuracy, both are better than the errorEKF. The method with input force estimation also outperforms the UKF, while the method without force estimation achieves results almost identical to those of the UKF. All the methods have been implemented as a reusable MATLAB® toolkit, released as open source at https://github.com/MBDS/mbde-matlab.
Savoca, Mark E.; Senay, Gabriel B.; Maupin, Molly A.; Kenny, Joan F.; Perry, Charles A.
2013-01-01
Remote-sensing technology and surface-energy-balance methods can provide accurate and repeatable estimates of actual evapotranspiration (ETa) when used in combination with local weather datasets over irrigated lands. Estimates of ETa may be used to provide a consistent, accurate, and efficient approach for estimating regional water withdrawals for irrigation and associated consumptive use (CU), especially in arid cropland areas that require supplemental water due to insufficient natural supplies from rainfall, soil moisture, or groundwater. ETa in these areas is considered equivalent to CU, and represents the part of applied irrigation water that is evaporated and/or transpired and is not available for immediate reuse. A recent U.S. Geological Survey study demonstrated the application of the remote-sensing-based Simplified Surface Energy Balance (SSEB) model to estimate 10-year average ETa at 1-kilometer resolution on national and regional scales, and compared those ETa values to the U.S. Geological Survey’s National Water-Use Information Program’s 1995 county estimates of CU. The operational version of the SSEB method (SSEBop) is now used to construct monthly, county-level ETa maps of the conterminous United States for the years 2000, 2005, and 2010. The performance of SSEBop was evaluated using eddy covariance flux tower datasets compiled for 2005, and the results showed a strong linear relationship across different land cover types and diverse ecosystems in the conterminous United States (correlation coefficient [r] ranging from 0.75 to 0.95): r was 0.75 for woody savannas, 0.75 for grassland, 0.82 for forest, 0.84 for cropland, 0.89 for shrubland, and 0.95 for urban areas. A comparison of the remote-sensing SSEBop method for estimating ETa and the Hamon temperature method for estimating potential ET (ETp) also was conducted, using regressions of all available county averages of ETa for 2005 and 2010, and yielded correlations of r = 0.60 and r = 0.71, respectively. Correlations generally are stronger in the Southeast, where ETa is close to ETp. SSEBop ETa provides more spatial detail and accuracy in the Southwest, where irrigation is practiced in a smaller proportion of the region.
Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors
NASA Astrophysics Data System (ADS)
Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.
2007-12-01
Deep ice cores extracted from Antarctica or Greenland recorded a wide range of past climatic events. To contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is a crucial point. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and, consequently, to respect all available sets of age markers. We describe in this paper a new inverse method that takes the model uncertainty into account in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses on two physical flow entities, the ice thinning function and the accumulation rate, and then identifies correction functions on both entities. We highlight two major benefits brought by this new method: first, the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positivity constraint on the searched correction functions, we assume lognormal probability distributions for the background errors as well as for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. Confidence intervals based on the posterior covariance matrix are estimated for the correction functions and, for the first time, for the overall output chronologies.
Wianowska, Dorota; Dawidowicz, Andrzej L
2016-05-01
This paper proposes a new variant of matrix solid phase dispersion (MSPD) with a solventless blending step and demonstrates its analytical capabilities in the chromatographic analysis of plant volatiles. The obtained results prove that the use of a solvent is redundant, as the sorption ability of the octadecyl brush is sufficient for quantitative retention of volatiles from 9 plants differing in their essential oil composition. The extraction efficiency of the proposed simplified MSPD method is equivalent to that of the commonly applied MSPD variant with an organic dispersing liquid and to pressurized liquid extraction, a much more complex, technically advanced and highly efficient technique of plant extraction. The equivalency of these methods is confirmed by variance analysis. The proposed solventless MSPD method is precise, accurate, and reproducible. The recovery of essential oil components estimated by the MSPD method exceeds 98%, which is satisfactory for analytical purposes. Copyright © 2016 Elsevier B.V. All rights reserved.
A Machine Learning Framework for Plan Payment Risk Adjustment.
Rose, Sherri
2016-12-01
To introduce cross-validation and a nonparametric machine learning framework for plan payment risk adjustment and then assess whether they have the potential to improve risk adjustment. 2011-2012 Truven MarketScan database. We compare the performance of multiple statistical approaches within a broad machine learning framework for estimation of risk adjustment formulas. Total annual expenditure was predicted using age, sex, geography, inpatient diagnoses, and hierarchical condition category variables. The methods included regression, penalized regression, decision trees, neural networks, and an ensemble super learner, all in concert with screening algorithms that reduce the set of variables considered. The performance of these methods was compared based on cross-validated R². Our results indicate that a simplified risk adjustment formula selected via this nonparametric framework maintains much of the efficiency of a traditional larger formula. The ensemble approach also outperformed classical regression and all other algorithms studied. The implementation of cross-validated machine learning techniques provides novel insight into risk adjustment estimation, possibly allowing for a simplified formula, thereby reducing incentives for increased coding intensity as well as the ability of insurers to "game" the system with aggressive diagnostic upcoding. © Health Research and Educational Trust.
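As a sketch of the cross-validated comparison framework, the snippet below scores several regression learners by cross-validated R² on synthetic stand-in data; the generated design replaces the study's MarketScan features, and the model choices are illustrative rather than the paper's exact algorithm set.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import cross_val_score

# Stand-in for age/sex/geography/diagnosis predictors of expenditure.
X, y = make_regression(n_samples=2000, n_features=40, noise=25.0,
                       random_state=0)

models = {
    "ols": LinearRegression(),
    "penalized (lasso)": LassoCV(cv=5),
    "boosted trees": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validated R^2, the comparison metric used above
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:18s} cross-validated R^2 = {r2:.3f}")
```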
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-11
.../trade/trade_transformation/simplified_entry/ . Current Actions: CBP proposes to extend the expiration... collected. Type of Review: Extension (without change). Affected Public: Businesses. CBP Form 3461 Estimated...
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
Computational tools for multi-linked flexible structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.
1990-01-01
A software module which designs and tests controllers and filters in Kalman Estimator form, based on a polynomial state-space model is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.
Noninvasive estimation of assist pressure for direct mechanical ventricular actuation
NASA Astrophysics Data System (ADS)
An, Dawei; Yang, Ming; Gu, Xiaotong; Meng, Fan; Yang, Tianyue; Lin, Shujing
2018-02-01
Direct mechanical ventricular actuation is effective in reestablishing ventricular function without blood contact. Due to the energy loss within the driveline of the direct cardiac compression device, it is necessary to acquire an accurate value of the assist pressure acting on the heart surface. To avoid myocardial trauma induced by invasive sensors, a noninvasive estimation method is developed, and an experimental device is designed to measure the sample data for fitting the estimation models. Judging the goodness of fit numerically and graphically, the polynomial model behaves best among the four alternative models. Meanwhile, to verify the effect of the noninvasive estimation, a simplified lumped-parameter model is used to calculate the pre-support and post-support left ventricular pressures. Furthermore, when the driving pressure is adjusted beyond the range of the sample data, the estimated assist pressure retains a similar waveform and the post-support left ventricular pressure approaches the value of an adult healthy heart, indicating the good generalization ability of the noninvasive estimation method.
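A minimal sketch of the polynomial fitting step follows; the driving/assist pressure samples and the third-order fit are hypothetical illustrations, not the paper's identified model.

```python
import numpy as np

# Hypothetical samples: driving pressure applied to the driveline (kPa)
# and assist pressure measured on a mock heart surface (kPa).
driving = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
assist = np.array([6.1, 13.0, 20.5, 28.7, 37.4, 46.8])

# Fit a third-order polynomial mapping driving -> assist pressure.
coeffs = np.polyfit(driving, assist, deg=3)
estimate = np.poly1d(coeffs)

# Estimated assist pressure for a new driving pressure setting.
print(estimate(45.0))
```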
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of data and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
A hierarchical estimator development for estimation of tire-road friction coefficient
Zhang, Xudong; Göhlich, Dietmar
2017-01-01
The effect of vehicle active safety systems is subject to the friction force arising from the contact of tires and the road surface. Therefore, an adequate knowledge of the tire-road friction coefficient is of great importance to achieve a good performance of these control systems. This paper presents a tire-road friction coefficient estimation method for an advanced vehicle configuration, four-motorized-wheel electric vehicles, in which the longitudinal tire force is easily obtained. A hierarchical structure is adopted for the proposed estimation design. An upper estimator is developed based on unscented Kalman filter to estimate vehicle state information, while a hybrid estimation method is applied as the lower estimator to identify the tire-road friction coefficient using general regression neural network (GRNN) and Bayes' theorem. GRNN aims at detecting road friction coefficient under small excitations, which are the most common situations in daily driving. GRNN is able to accurately create a mapping from input parameters to the friction coefficient, avoiding storing an entire complex tire model. As for large excitations, the estimation algorithm is based on Bayes' theorem and a simplified “magic formula” tire model. The integrated estimation method is established by the combination of the above-mentioned estimators. Finally, the simulations based on a high-fidelity CarSim vehicle model are carried out on different road surfaces and driving maneuvers to verify the effectiveness of the proposed estimation method. PMID:28178332
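For reference, a generic form of the "magic formula" used in the lower estimator looks like the following; the B, C, D, E shape coefficients are standard illustrative values, not those identified in the paper.

```python
import numpy as np

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'magic formula': normalized friction force vs slip ratio.

    B (stiffness), C (shape), D (peak), E (curvature) are generic
    illustrative coefficients; real values are identified from data.
    """
    return D * np.sin(C * np.arctan(B * slip - E * (B * slip - np.arctan(B * slip))))

slip = np.linspace(0.0, 0.3, 7)
print(magic_formula(slip))  # friction rises, peaks, then falls off with slip
```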
WTAQ - A computer program for aquifer-test analysis of confined and unconfined aquifers
Barlow, P.M.; Moench, A.F.
2004-01-01
Computer program WTAQ was developed to implement a Laplace-transform analytical solution for axial-symmetric flow to a partially penetrating, finite-diameter well in a homogeneous and anisotropic unconfined (water-table) aquifer. The solution accounts for wellbore storage and skin effects at the pumped well, delayed response at an observation well, and delayed or instantaneous drainage from the unsaturated zone. For the particular case of zero drainage from the unsaturated zone, the solution simplifies to that of axial-symmetric flow in a confined aquifer. WTAQ calculates theoretical time-drawdown curves for the pumped well and observation wells and piezometers. The theoretical curves are used with measured time-drawdown data to estimate hydraulic parameters of confined or unconfined aquifers by graphical type-curve methods or by automatic parameter-estimation methods. Parameters that can be estimated are horizontal and vertical hydraulic conductivity, specific storage, and specific yield. A sample application illustrates use of WTAQ for estimating hydraulic parameters of a hypothetical, unconfined aquifer by type-curve methods. Copyright ASCE 2004.
Stress estimation in reservoirs using an integrated inverse method
NASA Astrophysics Data System (ADS)
Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre
2018-05-01
Estimating the stress in reservoirs and their surroundings prior to production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such an initial stress state. The 3D stress state is constructed with the displacement-based finite element method, assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. Because the geological history is disregarded and the rheological assumptions are simplified, only a stress field that is statically admissible and matches the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.
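As a sketch of the CMA-ES fitting step, the snippet below fits a stress-versus-depth boundary condition (collapsed here to a single linear segment) to hypothetical wellbore stress data using the open-source `cma` package; the misfit function, data, and parameterization are all stand-ins, not the paper's setup.

```python
import numpy as np
import cma  # pip install cma

# Hypothetical wellbore observations: depth (m) and stress (MPa)
depths = np.array([500.0, 1000.0, 1500.0, 2000.0])
observed = np.array([11.0, 23.5, 35.2, 47.9])

def misfit(params):
    """Squared misfit of a linear stress-vs-depth boundary condition."""
    gradient, offset = params
    predicted = gradient * depths + offset
    return float(np.sum((predicted - observed) ** 2))

# CMA-ES: initial guess [gradient, offset] and initial step size
best, es = cma.fmin2(misfit, [0.02, 0.0], 1.0)
print("fitted gradient/offset:", best)
```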
Effects of Simplifying Choice Tasks on Estimates of Taste Heterogeneity in Stated-Choice Surveys
Johnson, F. Reed; Ozdemir, Semra; Phillips, Kathryn A
2011-01-01
Researchers usually employ orthogonal arrays or D-optimal designs with little or no attribute overlap in stated-choice surveys. The challenge is to balance statistical efficiency and respondent burden to minimize the overall error in the survey responses. This study examined whether simplifying the choice task, by using a design with more overlap, provides advantages over standard minimum-overlap methods. We administered two designs for eliciting HIV test preferences to split samples. Surveys were undertaken at four HIV testing locations in San Francisco, California. Personal characteristics had different effects on willingness to pay for the two treatments, and gains in statistical efficiency in the minimal-overlap version more than compensated for possible imprecision from increased measurement error. PMID:19880234
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the angle calculated for the same wheel from the Ackerman principle. For a given linear rack displacement, the angular displacements of the steering arms are determined while simultaneously ensuring the best transmission angle characteristics (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
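A minimal sketch of the steering-error computation from the Ackerman condition, cot(outer) − cot(inner) = track/wheelbase; the vehicle dimensions and angles below are illustrative, not from the paper.

```python
import math

def ackermann_outer_angle(inner_deg, wheelbase_m, track_m):
    """Ideal outer-wheel angle from the Ackerman condition:
    cot(outer) - cot(inner) = track / wheelbase."""
    inner = math.radians(inner_deg)
    cot_outer = 1.0 / math.tan(inner) + track_m / wheelbase_m
    return math.degrees(math.atan(1.0 / cot_outer))

def steering_error(actual_outer_deg, inner_deg, wheelbase_m=2.6, track_m=1.5):
    """Measured outer-wheel angle minus the Ackerman ideal
    (illustrative vehicle dimensions)."""
    return actual_outer_deg - ackermann_outer_angle(inner_deg, wheelbase_m, track_m)

# For a 20 deg inner-wheel angle, the ideal outer angle is ~16.8 deg.
print(steering_error(actual_outer_deg=17.2, inner_deg=20.0))
```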
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
Observer-based monitoring of heat exchangers.
Astorga-Zaragoza, Carlos-Manuel; Alvarado-Martínez, Víctor-Manuel; Zavala-Río, Arturo; Méndez-Ocaña, Rafael-Maxim; Guerrero-Ramírez, Gerardo-Vicente
2008-01-01
The goal of this work is to provide a method for monitoring performance degradation in counter-flow double-pipe heat exchangers. The overall heat transfer coefficient is estimated by an adaptive observer and monitored in order to infer when the heat exchanger needs preventive or corrective maintenance. A simplified mathematical model is used to synthesize the adaptive observer and a more complex model is used for simulation. The reliability of the proposed method was demonstrated via numerical simulations and laboratory experiments with a bench-scale pilot plant.
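The paper's adaptive observer estimates the heat transfer coefficient online from a simplified model. As a rough steady-state counterpart, the overall coefficient can be back-calculated from terminal temperatures through the log-mean temperature difference; the sketch below uses hypothetical numbers and is not the observer itself.

```python
import math

def overall_htc(q_watts, area_m2, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Back out the overall heat transfer coefficient U of a counter-flow
    exchanger from steady-state terminal temperatures: Q = U * A * LMTD."""
    dt1 = t_hot_in - t_cold_out   # temperature difference at one end
    dt2 = t_hot_out - t_cold_in   # ... and at the other end
    lmtd = (dt1 - dt2) / math.log(dt1 / dt2)
    return q_watts / (area_m2 * lmtd)

# Illustrative numbers; a downward trend in U over time would flag fouling
# and the need for preventive maintenance.
print(overall_htc(5.0e4, 12.0, 90.0, 60.0, 20.0, 45.0))  # W/(m^2 K)
```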
NASA Astrophysics Data System (ADS)
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
To guarantee safety, high efficiency and a long lifetime for a lithium-ion battery, an advanced battery management system requires a physics-meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computational burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) that modifies the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. To obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function is computationally compact yet preserves physical meaning through parameters such as the solid/electrolyte diffusion coefficients (Ds&De) and the particle radius. Simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte-phase concentration (Ce) predictions, with modeling errors of 0.8% and 0.24%, respectively, compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, the simplified model yields a significantly reduced computational burden, which benefits its real-time application.
Estimating dietary costs of low-income women in California: a comparison of 2 approaches.
Aaron, Grant J; Keim, Nancy L; Drewnowski, Adam; Townsend, Marilyn S
2013-04-01
Currently, no simplified approach to estimating food costs exists for a large, nationally representative sample. The objective was to compare 2 approaches for estimating individual daily diet costs in a population of low-income women in California. Cost estimates based on time-intensive method 1 (three 24-h recalls and associated food prices on receipts) were compared with estimates made by using less intensive method 2 [a food-frequency questionnaire (FFQ) and store prices]. Low-income participants (n = 121) of USDA nutrition programs were recruited. Mean daily diet costs, both unadjusted and adjusted for energy, were compared by using Pearson correlation coefficients and the Bland-Altman 95% limits of agreement between methods. Energy and nutrient intakes derived by the 2 methods were comparable; where differences occurred, the FFQ (method 2) provided higher nutrient values than did the 24-h recall (method 1). The crude daily diet cost was $6.32 by the 24-h recall method and $5.93 by the FFQ method (P = 0.221). The energy-adjusted diet cost was $6.65 by the 24-h recall method and $5.98 by the FFQ method (P < 0.001). Although the agreement between methods was weaker than expected, both approaches may be useful. Additional research is needed to further refine a large national survey approach (method 2) to estimate daily dietary costs with the use of this minimal time-intensive method for the participant and moderate time-intensive method for the researcher.
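A minimal sketch of the agreement statistics used above (Pearson r and Bland-Altman 95% limits of agreement), with hypothetical per-participant daily costs standing in for the study data:

```python
import numpy as np
from scipy import stats

def bland_altman_limits(method1, method2):
    """Mean difference (bias) and 95% limits of agreement."""
    diff = np.asarray(method1) - np.asarray(method2)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical daily diet costs (dollars): 24-h recall vs FFQ method.
recall = np.array([6.1, 7.4, 5.2, 6.9, 8.0, 5.8])
ffq = np.array([5.6, 7.0, 5.5, 6.1, 7.2, 5.9])

r, _ = stats.pearsonr(recall, ffq)
bias, lower, upper = bland_altman_limits(recall, ffq)
print(f"r = {r:.2f}, bias = {bias:.2f}, 95% LoA = [{lower:.2f}, {upper:.2f}]")
```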
Jibson, Randall W.; Jibson, Matthew W.
2003-01-01
Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OSX, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
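As a sketch of the rigid-block calculation (not the report's Java implementation), the snippet below integrates block velocity whenever the ground acceleration exceeds the critical (yield) acceleration; the input record and critical acceleration are synthetic.

```python
import numpy as np

def newmark_displacement(accel_g, dt, ac_g):
    """Rigid-block Newmark analysis.

    accel_g : ground acceleration time history, in g
    dt      : time step, seconds
    ac_g    : critical (yield) acceleration, in g
    Returns cumulative downslope displacement in meters.
    """
    g = 9.81
    v = 0.0   # block velocity relative to the slope, m/s
    d = 0.0   # cumulative displacement, m
    for a in accel_g:
        net = (a - ac_g) * g          # net acceleration on the block
        if v > 0.0 or net > 0.0:      # sliding continues or initiates
            v = max(v + net * dt, 0.0)
            d += v * dt
    return d

# Synthetic record: a 0.4 g, 1 Hz pulse train sampled at 100 Hz.
t = np.arange(0.0, 2.0, 0.01)
record = 0.4 * np.sin(2.0 * np.pi * t)
print(f"Newmark displacement ~ {newmark_displacement(record, 0.01, 0.15):.3f} m")
```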
Estimating Transmissivity from the Water Level Fluctuations of a Sinusoidally Forced Well
Mehnert, E.; Valocchi, A.J.; Heidari, M.; Kapoor, S.G.; Kumar, P.
1999-01-01
The water levels in wells are known to fluctuate in response to earth tides and changes in atmospheric pressure. These water level fluctuations can be analyzed to estimate transmissivity (T). A new method to estimate transmissivity, which assumes that the atmospheric pressure varies in a sinusoidal fashion, is presented. Data analysis for this simplified method involves using a set of type curves and estimating the ratio of the amplitudes of the well response over the atmospheric pressure. Type curves for this new method were generated based on a model for ground water flow between the well and aquifer developed by Cooper et al. (1965). Data analysis with this method confirmed these published results: (1) the amplitude ratio is a function of transmissivity, the well radius, and the frequency of the sinusoidal oscillation; and (2) the amplitude ratio is a weak function of storativity. Compared to other methods, the developed method involves simpler, more intuitive data analysis and allows shorter data sets to be analyzed. The effect of noise on estimating the amplitude ratio was evaluated and found to be more significant at lower T. For aquifers with low T, noise was shown to mask the water level fluctuations induced by atmospheric pressure changes. In addition, reducing the length of the data series did not affect the estimate of T, but the variance of the estimate was higher for the shorter series of noisy data.
A practical method of predicting the loudness of complex electrical stimuli
NASA Astrophysics Data System (ADS)
McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.
2003-04-01
The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
Ulrich, G.A.; Krumholz, L.R.; Suflita, J.M.
1997-01-01
A simplified passive extraction procedure for quantifying reduced inorganic sulfur compounds from sediments and water is presented. This method may also be used for the estimation of sulfate reduction rates. Efficient extraction of FeS, FeS₂, and S²⁻ was obtained with this procedure; however, the efficiency for S⁰ depended on the form that was tested. Passive extraction can be used with samples containing up to 20 mg of reduced sulfur. We demonstrated the utility of this technique in a determination of both sulfate reduction rates and reduced inorganic sulfur pools in marine and freshwater sediments. A side-by-side comparison of the passive extraction method with the established single-step distillation technique yielded comparable results with a fraction of the effort.
Albin, Thomas J
2017-07-01
Occasionally practitioners must work with single dimensions defined as combinations (sums or differences) of percentile values, but lack information (e.g. variances) to estimate the accommodation achieved. This paper describes methods to predict accommodation proportions for such combinations of percentile values, e.g. two 90th percentile values. Kreifeldt and Nah z-score multipliers were used to estimate the proportions accommodated by combinations of percentile values of 2-15 variables; two simplified versions required less information about variance and/or correlation. The estimates were compared to actual observed proportions; for combinations of 2-15 percentile values the average absolute differences ranged between 0.5 and 1.5 percentage points. The multipliers were also used to estimate adjusted percentile values, that, when combined, estimate a desired proportion of the combined measurements. For combinations of two and three adjusted variables, the average absolute difference between predicted and observed proportions ranged between 0.5 and 3.0 percentage points. Copyright © 2017 Elsevier Ltd. All rights reserved.
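As a sketch of the underlying idea (not the Kreifeldt and Nah multipliers themselves), under a bivariate-normal assumption the proportion accommodated when two dimensions are each capped at their individual percentiles can be computed directly; the correlation value is illustrative.

```python
from scipy.stats import multivariate_normal, norm

def accommodated(p_each=0.90, rho=0.3):
    """Proportion accommodated when two dimensions are each limited at
    their p-th percentile, assuming standardized bivariate-normal
    measurements with correlation rho (rho is an illustrative value)."""
    z = norm.ppf(p_each)
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([z, z])

# Two independent 90th-percentile limits accommodate only ~81%, not 90%;
# positive correlation between the dimensions recovers part of the shortfall.
print(accommodated(0.90, 0.0), accommodated(0.90, 0.3))
```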
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
Improvements in estimating proportions of objects from multispectral data
NASA Technical Reports Server (NTRS)
Horwitz, H. M.; Hyde, P. D.; Richardson, W.
1974-01-01
Methods for estimating proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised, as well as improved alien-object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien-object threshold yielded few definitive results. Mixtures procedures were used on a limited amount of ERTS data to estimate wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.
Long, Keith R.; Singer, Donald A.
2001-01-01
Determining the economic viability of mineral deposits of various sizes and grades is a critical task in all phases of mineral supply, from land-use management to mine development. This study evaluates two simple tools for estimating the economic viability of porphyry copper deposits mined by open-pit, heap-leach methods when only limited information on these deposits is available. These two methods are useful for evaluating deposits that either (1) are undiscovered deposits predicted by a mineral resource assessment, or (2) have been discovered but for which little data has been collected or released. The first tool uses ordinary least-squared regression analysis of cost and operating data from selected deposits to estimate a predictive relationship between mining rate, itself estimated from deposit size, and capital and operating costs. The second method uses cost models developed by the U.S. Bureau of Mines (Camm, 1991) updated using appropriate cost indices. We find that the cost model method works best for estimating capital costs and the empirical model works best for estimating operating costs for mines to be developed in the United States.
Higher-order Multivariable Polynomial Regression to Estimate Human Affective States
NASA Astrophysics Data System (ADS)
Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin
2016-03-01
From direct observations of facial, vocal, gestural, physiological, and central nervous signals, computational models such as multivariate linear regression, support vector regression, and artificial neural networks have been proposed over the past decade for estimating human affective states. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly require complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have origins in the brain’s motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states.
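A minimal sketch of higher-order multivariable polynomial regression, implemented as a polynomial feature expansion followed by ordinary least squares; the features and targets below are synthetic stand-ins for the skin-conductance patterns and valence ratings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-ins for physiological features and a valence target
# with genuine nonlinear and interaction structure.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.3 * X[:, 0] * X[:, 2]

# Expand inputs into degree-3 polynomial terms, then fit a linear model:
# nonlinear in the inputs, but still linear (and simple) in the coefficients.
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, y)
print("in-sample R^2 =", model.score(X, y))
```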
A Continuous Method for Gene Flow
Palczewski, Michal; Beerli, Peter
2013-01-01
Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937
Estimation of Land Surface Temperature from GCOM-W1 AMSR2 Data over the Chinese Landmass
NASA Astrophysics Data System (ADS)
Zhou, Ji; Dai, Fengnan; Zhang, Xiaodong
2016-04-01
As one of the most important parameters at the interface between the earth's surface and the atmosphere, land surface temperature (LST) plays a crucial role in many fields, such as climate change monitoring and hydrological modeling. Satellite remote sensing provides the unique possibility of observing the LST of large regions at diverse spatial and temporal scales. Compared with thermal infrared (TIR) remote sensing, passive microwave (PW) remote sensing is better at overcoming the influence of clouds; thus, it can be used to improve the temporal resolution of current satellite TIR LST. However, most current methods for estimating LST from PW remote sensing are empirical and generalize poorly. In this study, a semi-empirical method is proposed to estimate LST from observations of the Advanced Microwave Scanning Radiometer 2 (AMSR2) on board the Global Change Observation Mission 1st-WATER "SHIZUKU" satellite (GCOM-W1). The method is based on the PW radiative transfer equation, which is simplified based on (1) the linear relationship between the emissivities of the horizontal and vertical polarization channels at the same frequency and (2) the significant relationship between atmospheric parameters and the atmospheric water vapor content. An iteration approach is used to best fit the pixel-based coefficients in the simplified radiative transfer equation of the horizontal and vertical polarization channels at each frequency. An integration approach is then proposed to generate the ensemble estimate from the estimates of multiple frequencies for different land cover types. The method is trained with AMSR2 brightness temperatures and MODIS LST in 2013 over the entire Chinese landmass and tested with the data for 2014. Validation based on in situ LSTs measured in northwestern China demonstrates that the proposed method is more accurate than the polarization radiation method, with a root-mean-squared error of 3 K. Although the proposed method is applied to AMSR2 data, it can readily be extended to other satellite PW sensors, such as the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) on board the Aqua satellite and the Special Sensor Microwave/Imager (SSM/I) on board the Defense Meteorological Satellite Program (DMSP) satellites. It would be beneficial in providing LST to applications at continental and global scales.
A novel implementation of homodyne time interval analysis method for primary vibration calibration
NASA Astrophysics Data System (ADS)
Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo
2011-12-01
In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation, and on this basis a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertaining ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. Owing to its simplified algorithm and low hardware requirements, this method is recommended for national metrology institutes of developing countries and for industrial primary vibration calibration laboratories.
A simplified technique for delivering total body irradiation (TBI) with improved dose homogeneity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao Rui; Bernard, Damian; Turian, Julius
2012-04-15
Purpose: Total body irradiation (TBI) with megavoltage photon beams has been accepted as an important component of management for a number of hematologic malignancies, generally as part of bone marrow conditioning regimens. The purpose of this paper is to present and discuss the authors' TBI technique, which both simplifies the treatment process and improves the treatment quality. Methods: An AP/PA TBI treatment technique to produce uniform dose distributions using sequential collimator reductions during each fraction was implemented, and a sample calculation worksheet is presented. Using this methodology, the dosimetric characteristics of both 6 and 18 MV photon beams, including lung dose under cerrobend blocks, were investigated. A method of estimating midplane lung doses based on measured entrance and exit doses was proposed, and the estimated results were compared with measurements. Results: Whole body midplane dose uniformity of ±10% was achieved with no more than two collimator-based beam modulations. The proposed model predicted midplane lung doses 5% to 10% higher than the measured doses for 6 and 18 MV beams. The estimated total midplane doses were within ±5% of the prescribed midplane dose on average, except for the lungs, where the doses were 6% to 10% lower than the prescribed dose on average. Conclusions: The proposed TBI technique can achieve dose uniformity within ±10%. This technique is easy to implement and does not require complicated dosimetry and/or compensators.
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
NASA Technical Reports Server (NTRS)
Gracey, William
1948-01-01
A simplified compound-pendulum method for the experimental determination of the moments of inertia of airplanes about the x and y axes is described. The method is developed as a modification of the standard pendulum method reported previously in NACA Report 467. A brief review of the older method is included to form a basis for discussion of the simplified method. (author)
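A minimal sketch of the underlying compound-pendulum relation, T = 2π√(I_pivot/(mgL)), combined with the parallel-axis theorem to refer the result back to the airplane's center of gravity; the swing-test numbers are illustrative, not from the report.

```python
import math

def moment_of_inertia(mass_kg, pivot_to_cg_m, period_s):
    """Moment of inertia about an axis through the CG from a
    compound-pendulum swing test (small-amplitude assumption).

    T = 2*pi*sqrt(I_pivot / (m*g*L))  =>  I_pivot = m*g*L*T^2 / (4*pi^2),
    then the parallel-axis theorem removes the suspension offset.
    """
    g = 9.81
    i_pivot = mass_kg * g * pivot_to_cg_m * period_s ** 2 / (4.0 * math.pi ** 2)
    return i_pivot - mass_kg * pivot_to_cg_m ** 2

# Illustrative swing-test values for a light airplane, in kg, m, s.
print(moment_of_inertia(mass_kg=1200.0, pivot_to_cg_m=2.5, period_s=3.4))
```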
Kernel PLS Estimation of Single-trial Event-related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.
2004-01-01
Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which prior knowledge exists about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in the visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative for the estimation of single-trial ERPs and the improvement of ERP averages.
Verkest, K R; Fleeman, L M; Rand, J S; Morton, J M
2010-10-01
There is need for simple, inexpensive measures of glucose tolerance, insulin sensitivity, and insulin secretion in dogs. The aim of this study was to estimate the closeness of correlation between fasting and dynamic measures of insulin sensitivity and insulin secretion, the precision of fasting measures, and the agreement between results of standard and simplified glucose tolerance tests in dogs. A retrospective descriptive study using 6 naturally occurring obese and 6 lean dogs was conducted. Data from frequently sampled intravenous glucose tolerance tests (FSIGTTs) in 6 obese and 6 lean client-owned dogs were used to calculate HOMA, QUICKI, fasting glucose and insulin concentrations. Fasting measures of insulin sensitivity and secretion were compared with MINMOD analysis of FSIGTTs using Pearson correlation coefficients, and they were evaluated for precision by the discriminant ratio. Simplified sampling protocols were compared with standard FSIGTTs using Lin's concordance correlation coefficients, limits of agreement, and Pearson correlation coefficients. All fasting measures except fasting plasma glucose concentration were moderately correlated with MINMOD-estimated insulin sensitivity (|r| = 0.62-0.80; P < 0.03), and those that combined fasting insulin and glucose were moderately closely correlated with MINMOD-estimated insulin secretion (r = 0.60-0.79; P < 0.04). HOMA calculated using the nonlinear formulae had the closest estimated correlation (r = 0.77 and 0.74) and the best discrimination for insulin sensitivity and insulin secretion (discriminant ratio 4.4 and 3.4, respectively). Simplified sampling protocols with half as many samples collected over 3 h had close agreement with the full sampling protocol. Fasting measures and simplified intravenous glucose tolerance tests reflect insulin sensitivity and insulin secretion derived from frequently sampled glucose tolerance tests with MINMOD analysis in dogs. Copyright 2010 Elsevier Inc. All rights reserved.
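For reference, the classic fasting indices combine glucose and insulin as follows; the constants shown are the human-calibrated forms (the study also used canine-specific nonlinear HOMA formulae), and the example values are hypothetical.

```python
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Classic HOMA insulin-resistance index from fasting values:
    (fasting glucose [mmol/L] x fasting insulin [uU/mL]) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    """QUICKI insulin-sensitivity index:
    1 / (log10(fasting insulin [uU/mL]) + log10(fasting glucose [mg/dL]))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Hypothetical fasting values for one dog.
print(homa_ir(5.0, 12.0), quicki(90.0, 12.0))
```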
Determination of the performance of the Kaplan hydraulic turbines through simplified procedure
NASA Astrophysics Data System (ADS)
Pădureanu, I.; Jurcu, M.; Campian, C. V.; Haţiegan, C.
2018-01-01
A simplified procedure has been developed, compared to the complex one recommended by IEC 60041 (i.e. index tests), for measurement of the performance of hydraulic turbines. The simplified procedure determines the minimum and maximum powers, the efficiency at maximum power, and the evolution of power with head and flow, and it determines the correct relationship between runner/impeller blade angle and guide vane opening for the most efficient operation of double-regulated machines. The simplified procedure can be used for a rapid and partial estimation of the performance of hydraulic turbines for repair and maintenance work.
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.
2015-12-01
The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in regimes ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the entire summarized interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance from near-elastic response to global dynamic instability with reasonable accuracy and little computational effort.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who discovered the Gröbner basis. In the method, objective functions that combine symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since the equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method in improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed in terms of the possible power of 'Bruno force' to open a new horizon in parameter estimation.
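As a toy illustration of the elimination machinery behind this kind of symbolic reduction, the sketch below uses a Gröbner basis (via sympy) to eliminate an unobserved variable y from a small polynomial system, leaving a relation in the observed variable x and a parameter k. The system is invented for illustration; it is not one of the authors' biological models.

```python
# Eliminate y from {f1 = 0, f2 = 0} with a lexicographic Groebner basis.
# With y ordered before x, the basis contains a polynomial free of y.
from sympy import symbols, groebner

x, y, k = symbols('x y k')

f1 = y - k * x     # e.g. a steady-state relation (illustrative)
f2 = x * y - 1     # e.g. a conservation-type constraint (illustrative)

G = groebner([f1, f2], y, x, order='lex')
print(G.exprs)     # includes a polynomial in x and k only (k*x**2 - 1 up to scaling)
```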
Murayama, I; Miyano, A; Sasaki, Y; Hirata, T; Ichijo, T; Satoh, H; Sato, S; Furuhama, K
2013-11-01
This study was performed to clarify whether a formula (Holstein equation) based on a single blood sample and the isotonic, nonionic, iodinated contrast medium iodixanol, developed for Holstein dairy cows, can be applied to the estimation of glomerular filtration rate (GFR) in beef cattle. To verify the suitability of iodixanol in beef cattle relative to the standard tracer inulin, both agents were coadministered as a bolus intravenous injection to the same animals at doses of 10 mg of I/kg of BW and 30 mg/kg, respectively. Blood was collected 30, 60, 90, and 120 min after the injection, and the GFR was determined by conventional multisample strategies. The GFR values from iodixanol were consistent with those from inulin, and no effects of BW, age, or parity on GFR estimates were noted. However, the GFR in cattle weighing less than 300 kg and aged <1 yr fluctuated widely, presumably because of rapid ruminal growth and dynamic changes in renal function at young ages. Using clinically healthy cattle and those with renal failure, the GFR values estimated from the Holstein equation were in good agreement with those obtained by the multisample method using iodixanol (r=0.89, P=0.01). The results indicate that the simplified Holstein equation using iodixanol can be used for estimating the GFR of beef cattle with the same dose regimen as for Holstein dairy cows, and provides a practical and ethical alternative.
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently, a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, owing to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. © The Author(s) 2012.
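The variance-inflation effect at issue is easy to reproduce: for a set of n equicorrelated genes with pairwise correlation rho, the variance of the mean score is sigma^2/n * (1 + (n-1)*rho), not the sigma^2/n a t-test assumes. A minimal simulation, with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, reps = 50, 0.2, 20000

# Equicorrelated covariance for one gene set with unit gene variance.
cov = np.full((n, n), rho) + (1 - rho) * np.eye(n)
scores = rng.multivariate_normal(np.zeros(n), cov, size=reps)

set_means = scores.mean(axis=1)
print("empirical var of mean :", set_means.var())
print("independence formula  :", 1 / n)                    # 0.02, what a t-test assumes
print("correlated formula    :", (1 + (n - 1) * rho) / n)  # 0.216, >10x larger
```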
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as the temperature input for the SSEBop algorithms; the other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET, with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors of less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting the systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimate while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to the input variables land surface temperature (LST) and reference ET (ETo), and to the parameters differential temperature (dT) and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
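A minimal sketch of the model's core scaling step, assuming the published SSEBop formulation in which the ET fraction is ETf = (Th - Ts)/dT with hot reference Th = Tc + dT; variable names and the clipping range are assumptions for illustration, not the operational implementation:

```python
import numpy as np

def ssebop_eta(ts, tc, dt, eto, kmax=1.25):
    """Actual ET (same units as eto) from LST array `ts` (kelvin)."""
    th = tc + dt                    # hot/dry reference temperature
    etf = (th - ts) / dt            # ET fraction: ~1 near Tc, ~0 near Th
    etf = np.clip(etf, 0.0, 1.05)   # bound unphysical values (assumed range)
    return etf * kmax * eto

# Example: three pixels of one 8-day composite.
ts = np.array([295.0, 305.0, 312.0])                # MODIS LST, K
print(ssebop_eta(ts, tc=294.0, dt=20.0, eto=40.0))  # mm per 8 days
```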
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is the uncertainty of parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. This paper presents an alternative approach to SER estimation based on a diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the need for arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
A Laplacian based image filtering using switching noise detector.
Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar
2015-01-01
This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy-functional minimization scheme, we show that the Laplacian, well known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel value by its Laplacian, weighted by the local noise estimator; the only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise, and with the state-of-the-art method BM3D for some images. The algorithm appears to be simple, fast and comparable with many classic denoising algorithms for Gaussian noise.
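A loose sketch of this kind of iterative, Laplacian-weighted update follows; the local weighting function is a stand-in, and the paper's exact switching noise detector is not reproduced here:

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Standard 3x3 discrete Laplacian kernel.
LAP = np.array([[0, 1, 0],
                [1, -4, 1],
                [0, 1, 0]], dtype=float)

def denoise(img, iters=10, lam=0.2):
    out = img.astype(float)
    for _ in range(iters):
        lap = convolve(out, LAP, mode='reflect')
        # Crude local noise estimator: local energy of the Laplacian,
        # mapped to a 0..1 weight (noisy pixels get weight near 1).
        local_var = uniform_filter(lap**2, size=3)
        w = local_var / (local_var + local_var.mean() + 1e-12)
        # Diffusion-style update: adding the weighted Laplacian relaxes
        # isolated extrema (noise) while consistent edges persist.
        out += lam * w * lap
    return out
```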
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer, and the results were compared to those obtained with a traditional CPU-based SMC with respect to computation time and discrepancy. In both the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar within statistical errors. The GPU-based SMC was 12.30-16.00 times faster than the CPU-based SMC, and its computation time per beam arrangement for the clinical cases ranged from 9 to 67 s. These results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.
An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1994-01-01
Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between the actual and isentropic processes) and specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods that applies classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses, based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). A preliminary verification of REMEL against full Navier-Stokes (FNS) and CFD boundary-layer computations for several high-speed inlet and forebody designs shows that the current method compares quite well with these more complex results. Solutions also compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons suggest the method may offer an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
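The Fanno-flow check mentioned above is simple to set up: for steady, adiabatic, constant-area duct flow with wall friction, the Mach number obeys dM^2/dx = gamma*M^4*(1 + (gamma-1)/2*M^2)/(1 - M^2) * 4f/D, which any quasi-one-dimensional code with friction terms must reproduce. A sketch with illustrative duct parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, f, D = 1.4, 0.005, 0.05   # specific-heat ratio, friction factor, diameter (m)

def dM2_dx(x, y):
    M2 = y[0]
    return [gamma * M2**2 * (1 + 0.5 * (gamma - 1) * M2)
            / (1 - M2) * 4 * f / D]

# Start subsonic at M = 0.3; friction drives M toward unity downstream.
sol = solve_ivp(dM2_dx, [0.0, 2.0], [0.3**2], max_step=0.01)
print("Mach at duct exit:", np.sqrt(sol.y[0, -1]))
```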
NASA Astrophysics Data System (ADS)
Ragon, Théa; Sladen, Anthony; Simons, Mark
2018-05-01
The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties in the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem through the formulation of a misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, including uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
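A minimal sketch of the key ingredient, augmenting the data covariance Cd with a prediction covariance Cp encoding geometry uncertainty before solving the weighted (Gaussian) least-squares inverse problem; the operator, data, and covariances below are toy stand-ins, not an earthquake dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(40, 3))             # linear forward operator (toy)
m_true = np.array([1.0, -0.5, 2.0])
d = G @ m_true + rng.normal(scale=0.1, size=40)

Cd = 0.1**2 * np.eye(40)                 # observational errors
# Epistemic term: covariance of predictions under small geometry
# perturbations, built here from an assumed sensitivity kernel K ~ dG/d(dip).
K = rng.normal(size=(40, 1))
Cp = (0.3**2) * (K @ K.T)

W = np.linalg.inv(Cd + Cp)               # inverse of the combined misfit covariance
m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
print(m_hat)                             # close to m_true, with honest weighting
```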
23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...
23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...
23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...
23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...
23 CFR 646.218 - Simplified procedure for accelerating grade crossing improvements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... preliminary engineering costs may include those incurred in selecting crossings to be improved, determining the type of improvement for each crossing, estimating the cost and preparing the required agreement... ENGINEERING AND TRAFFIC OPERATIONS RAILROADS Railroad-Highway Projects § 646.218 Simplified procedure for...
Determination of sensible heat flux over sparse canopy using thermal infrared data
Kustas, William P.; Choudhury, B.J.; Moran, M.S.; Reginato, R. J.; Jackson, R. D.; Gay, L.W.; Weaver, H.L.
1989-01-01
Surface temperatures, Ts, were estimated for a natural vegetative surface in Owens Valley, California, with infrared thermometric observations collected from an aircraft. The region is quite arid and is composed primarily of bushes (~30%) and bare soil (~70%). Application of the bulk transfer equation for the estimation of sensible heat, H, gave unsatisfactory values when compared to Bowen ratio and eddy correlation methods over a particular site. This was attributed to the inability with existing data to properly evaluate the resistance to heat transfer, rah. To obtain appropriate rah-values the added resistance to heat transfer, kB-1, was allowed to vary although there is both theoretical and experimental evidence that kB-1 for vegetative surfaces can be treated as constant. The present data indicate that for partial canopy cover under arid conditions kB-1 may be a function of Ts measured radiometrically. The equation determining kB-1 was simplified and tested over another arid site with good results; however, this had a limited data set (i.e., 6 data points). The dimensionless kB-1 equation is simplified for use over full canopy cover and is shown to give satisfactory estimates of H over a fully-grown wheat crop. © 1989.
Automatic estimation of elasticity parameters in breast tissue
NASA Astrophysics Data System (ADS)
Skerl, Katrin; Cochran, Sandy; Evans, Andrew
2014-03-01
Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is manually positioned over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWI of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB, then the elasticity values were extracted. The ROI was automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved in a spreadsheet that also contains the patient's study ID and is easily available to physicians and clinical staff for further evaluation, which increases efficiency. This algorithm simplifies the workflow, especially for the performance and evaluation of clinical trials. The SWE processing method gives physicians easy access to the elasticity parameters of examinations from their own and other institutions. This reduces clinical time and effort, simplifies the evaluation of data in clinical trials, and improves reproducibility.
Failure mode and effects analysis: a comparison of two common risk prioritisation methods.
McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L
2016-05-01
Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying those failures deemed critical by the traditional method (RPN≥300) and then calculating the per cent congruence with those failures designated critical by the simplified method (high risk). In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as low risk, 30 as medium risk and 22 as high risk. The traditional method yielded 24 failures with an RPN ≥300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CIs (≥60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not offer the same degree of discrimination in the ranking of failures as the traditional method. Published by the BMJ Publishing Group Limited.
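A sketch of the two scoring schemes being compared; the traditional FMEA multiplies severity, occurrence and detection into an RPN, while a simplified scheme bins failures into high/medium/low. The bin thresholds below are illustrative assumptions, except RPN >= 300, which the study uses to flag critical failures:

```python
def rpn(severity, occurrence, detection):
    """Traditional risk priority number; each factor scored 1-10."""
    return severity * occurrence * detection

def simplified(severity, occurrence):
    """Simplified high/medium/low binning (thresholds are illustrative)."""
    score = severity * occurrence        # no detection dimension
    if score >= 40:
        return "high"
    if score >= 15:
        return "medium"
    return "low"

failure = dict(severity=8, occurrence=7, detection=6)
print(rpn(failure["severity"], failure["occurrence"], failure["detection"]))  # 336, critical
print(simplified(failure["severity"], failure["occurrence"]))                 # 'high'
```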
Applying stochastic small-scale damage functions to German winter storms
NASA Astrophysics Data System (ADS)
Prahl, B. F.; Rybski, D.; Kropp, J. P.; Burghoff, O.; Held, H.
2012-03-01
Analyzing insurance-loss data, we derive stochastic storm-damage functions for residential buildings. At the district level we fit power-law relations between daily loss and maximum wind speed, typically spanning more than 4 orders of magnitude. The estimated exponents for 439 German districts roughly range from 8 to 12. In addition, we find correlations between the parameters and socio-demographic data, which we employ in a simplified parametrization of the damage function with just 3 independent parameters for each district. A Monte Carlo method is used to generate loss estimates and confidence bounds for daily and annual storm damages in Germany. Our approach reproduces the annual progression of winter storm losses and enables estimation of daily losses over a wide range of magnitudes.
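A sketch of the stochastic damage-function idea, with district-level daily loss following a power law in maximum wind speed, loss ~ c * v^beta, district-specific exponents in the study's rough 8-12 range, and multiplicative noise; all parameter values and the noise model below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n_districts = 439
beta = rng.uniform(8, 12, n_districts)     # power-law exponents per district
log_c = rng.normal(-20, 1, n_districts)    # district scale parameters (invented)

def daily_loss(v_max):
    """Monte Carlo daily storm loss (arbitrary currency) for one wind field."""
    mean_log = log_c + beta * np.log(v_max)
    return np.exp(mean_log + rng.normal(0, 0.5, n_districts)).sum()

# Sample many synthetic storm days to get confidence bounds on daily loss.
losses = np.array([daily_loss(rng.uniform(20, 40, n_districts))
                   for _ in range(1000)])
print(np.percentile(losses, [5, 50, 95]))
```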
Gamage, K A A; Joyce, M J
2011-10-01
A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results to the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Simplified Discontinuous Galerkin Methods for Systems of Conservation Laws with Convex Extension
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
1999-01-01
Simplified forms of the space-time discontinuous Galerkin (DG) and discontinuous Galerkin least-squares (DGLS) finite element method are developed and analyzed. The new formulations exploit simplifying properties of entropy endowed conservation law systems while retaining the favorable energy properties associated with symmetric variable formulations.
Single frequency GPS measurements in real-time artificial satellite orbit determination
NASA Astrophysics Data System (ADS)
Chiaradia, A. P. M.; Kuga, H. K.; Prado, A. F. B. A.
2003-07-01
A simplified and compact algorithm with low computational cost, providing an accuracy of around tens of meters, is developed in this work for real-time, on-board artificial satellite orbit determination. The state estimation method is the extended Kalman filter. The Cowell method is used to propagate the state vector, through a simple fourth-order Runge-Kutta numerical integrator with fixed step size. The modeled forces are due to the geopotential up to order and degree 50 of the JGM-2 model. To time-update the state error covariance matrix, a simplified force model is considered: in computing the state transition matrix, the effect of J2 (Earth flattening) is treated analytically, which dramatically reduces processing time. In the measurement model, single frequency GPS pseudoranges are used, considering the effects of the ionospheric delay, the clock offsets of the GPS and user satellites, and relativistic effects. To validate this model, real data from the Topex/Poseidon satellite are used, and the results are compared with the Topex/Poseidon Precision Orbit Ephemeris (POE) generated by NASA/JPL for several test cases. It is concluded that this compact algorithm achieves accuracies of tens of meters with such a simplified force model, an analytical approach for computing the transition matrix, and a cheap GPS receiver providing single frequency pseudorange measurements.
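A sketch of the propagation core described above: Cowell-type integration of the two-body acceleration plus the dominant J2 term with a fixed-step fourth-order Runge-Kutta. The full filter adds the EKF time and measurement updates and the 50x50 geopotential; only standard Earth constants are used here, and the setup is illustrative:

```python
import numpy as np

MU, J2, RE = 3.986004418e14, 1.08263e-3, 6378137.0   # SI units

def accel(state):
    """Two-body + J2 acceleration for state = [x, y, z, vx, vy, vz]."""
    x, y, z = state[:3]
    r2 = x*x + y*y + z*z
    r = np.sqrt(r2)
    k = 1.5 * J2 * (RE * RE / r2)
    zz = 5.0 * z * z / r2
    ax = -MU * x / r**3 * (1.0 - k * (zz - 1.0))
    ay = -MU * y / r**3 * (1.0 - k * (zz - 1.0))
    az = -MU * z / r**3 * (1.0 - k * (zz - 3.0))
    return np.array([ax, ay, az])

def deriv(s):
    return np.concatenate([s[3:], accel(s)])

def rk4_step(s, h):
    k1 = deriv(s); k2 = deriv(s + 0.5*h*k1)
    k3 = deriv(s + 0.5*h*k2); k4 = deriv(s + h*k3)
    return s + h/6.0 * (k1 + 2*k2 + 2*k3 + k4)

# Roughly circular low orbit at ~7000 km radius, propagated 10 minutes.
s = np.array([7000e3, 0.0, 0.0, 0.0, 7.546e3, 0.0])
for _ in range(600):
    s = rk4_step(s, 1.0)
print(np.linalg.norm(s[:3]))   # radius stays near 7000 km
```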
NASA Technical Reports Server (NTRS)
Klein, V.
1980-01-01
A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.
Power Source Status Estimation and Drive Control Method for Autonomous Decentralized Hybrid Train
NASA Astrophysics Data System (ADS)
Furuya, Takemasa; Ogawa, Kenichi; Yamamoto, Takamitsu; Hasegawa, Hitoshi
A hybrid control system has two main functions: power sharing and equipment protection. In this paper, we discuss the design, construction and testing of a drive control method for an autonomous decentralized hybrid train with 100-kW-class fuel cells (FC) and 36-kWh lithium-ion batteries (Li-Batt). The main objectives of this study are to identify the operation status of the power sources on the basis of the input voltage of the traction inverter, and to estimate the maximum traction power on the basis of the power-source status. The proposed control method is useful in preventing overload operation of the onboard power sources in an autonomous decentralized hybrid system that has a flexible main circuit configuration and only a few control signal lines. Further, with this method, the initial cost of a hybrid system can be reduced and the retrofit design of the hybrid system can be simplified. The effectiveness of the proposed method is experimentally confirmed by using a real-scale hybrid train system.
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
Statistical theory and methodology for remote sensing data analysis
NASA Technical Reports Server (NTRS)
Odell, P. L.
1974-01-01
A model is developed for the evaluation of acreages (proportions) of different crop types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop type such as wheat, it is suggested to treat the problem as a two-crop problem, wheat vs. nonwheat, since this simplifies the estimation considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Numerical sample-size results are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana, and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme is suggested for acquiring sample data, and the problems of crop acreage estimation and error analysis are discussed.
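The two-crop simplification makes the sample-size question the standard binomial one: to estimate a wheat proportion p within a margin d at a given confidence, n = z^2 * p(1-p) / d^2, worst case at p = 0.5. A small sketch with illustrative numbers, not the report's county values:

```python
from math import ceil
from statistics import NormalDist

def sample_size(p=0.5, margin=0.05, confidence=0.95):
    """Samples needed to estimate a proportion p within +/- margin."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())                      # 385 for +/-5% at 95% confidence
print(sample_size(p=0.3, margin=0.02))    # tighter margin, known rough p
```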
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gases and air pollutants. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. A solid understanding and quantification of the uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information for deriving nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
NASA Astrophysics Data System (ADS)
Di Prima, Simone; Bagarello, Vincenzo; Iovino, Massimo
2017-04-01
Simple infiltration experiments carried out in the field allow an easy and inexpensive way of characterizing soil hydraulic behavior, maintaining the functional connection of the sampled soil volume with the surrounding soil. The beerkan method consists of a three-dimensional (3D) infiltration experiment at zero pressure head (Haverkamp et al., 1996). It uses a simple annular ring inserted to a depth of about 0.01 m to avoid lateral loss of the ponded water. Soil disturbance is minimized by the limited ring insertion depth. The infiltration times of small volumes of water repeatedly poured on the confined soil are measured to determine the cumulative infiltration. Different algorithms based on this methodology (the so-called BEST family of algorithms) were developed for the determination of soil hydraulic characteristic parameters (Bagarello et al., 2014a; Lassabatere et al., 2006; Yilmaz et al., 2010). Recently, Bagarello et al. (2014b) developed a Simplified method based on a Beerkan Infiltration run (SBI method) to determine the saturated soil hydraulic conductivity, Ks, from only the transient phase of a beerkan infiltration run and an estimate of the α* parameter, which expresses the relative importance of gravity and capillary forces during an infiltration process (Reynolds and Elrick, 1990). However, several problems still arise with the existing BEST algorithms and the SBI method, including (i) the need for supplementary field and laboratory measurements (Bagarello et al., 2013); (ii) the difficulty of detecting a linear relationship between I / √t and √t in the early stage of the infiltration process (Bagarello et al., 2014b); and (iii) the estimation of negative Ks values for hydrophobic soils (Di Prima et al., 2016). In this investigation, a new Simplified method based on the analysis of the Steady-state Beerkan Infiltration run (SSBI method) was proposed and tested. In particular, analytical data were generated to simulate beerkan infiltration experiments for six contrasting soils (sand, S; loamy sand, LS; sandy loam, SAL; loam, L; silt loam, SIL; and silty clay loam, SCL) from the UNSODA database and different initial water contents. Comparisons with other existing procedures were also carried out. The SSBI method allowed accurate estimation of the saturated soil hydraulic conductivity for both field and analytically generated data. For the analytically generated data, the most accurate predictions were obtained with method 2 of Wu et al. (1999) for the S and LS soils (prediction errors not exceeding 3.8%) and with the SSBI method for the other four soils (error < 3.7%). Therefore, the SSBI method performed better than the other tested methods in most cases. The analysis of the field data supported the usability of the SSBI method in different environments and conditions to obtain an acceptable prediction of Ks, i.e. similar to the one that can be obtained with the BEST-steady algorithm (Bagarello et al., 2014a). Finally, this investigation yielded encouraging signs regarding the applicability of the SSBI method for a trustworthy estimation of Ks from the near steady-state phase of a beerkan infiltration run.
REFERENCES
Bagarello, V., Castellini, M., Di Prima, S., Giordano, G., Iovino, M., 2013. Testing a Simplified Approach to Determine Field Saturated Soil Hydraulic Conductivity. Procedia Environmental Sciences 19, 599-608. doi:10.1016/j.proenv.2013.06.068
Bagarello, V., Di Prima, S., Iovino, M., 2014a. Comparing Alternative Algorithms to Analyze the Beerkan Infiltration Experiment. Soil Science Society of America Journal 78, 724. doi:10.2136/sssaj2013.06.0231
Bagarello, V., Di Prima, S., Iovino, M., Provenzano, G., 2014b. Estimating field-saturated soil hydraulic conductivity by a simplified Beerkan infiltration experiment. Hydrological Processes 28, 1095-1103. doi:10.1002/hyp.9649
Di Prima, S., Lassabatere, L., Bagarello, V., Iovino, M., Angulo-Jaramillo, R., 2016. Testing a new automated single ring infiltrometer for Beerkan infiltration experiments. Geoderma 262, 20-34. doi:10.1016/j.geoderma.2015.08.006
Haverkamp, R., Arrúe, J., Vandervaere, J., Braud, I., Boulet, G., Laurent, J., Taha, A., Ross, P., Angulo-Jaramillo, R., 1996. Hydrological and thermal behaviour of the vadose zone in the area of Barrax and Tomelloso (Spain): Experimental study, analysis and modeling. Project UE n. EV5C-CT 92, 00-90.
Lassabatere, L., Angulo-Jaramillo, R., Soria Ugalde, J.M., Cuenca, R., Braud, I., Haverkamp, R., 2006. Beerkan Estimation of Soil Transfer Parameters through Infiltration Experiments—BEST. Soil Science Society of America Journal 70, 521. doi:10.2136/sssaj2005.0026
Reynolds, W.D., Elrick, D.E., 1990. Ponded Infiltration From a Single Ring: I. Analysis of Steady Flow. Soil Science Society of America Journal 54, 1233. doi:10.2136/sssaj1990.03615995005400050006x
Wu, L., Pan, L., Mitchell, J., Sanden, B., 1999. Measuring Saturated Hydraulic Conductivity using a Generalized Solution for Single-Ring Infiltrometers. Soil Science Society of America Journal 63, 788. doi:10.2136/sssaj1999.634788x
Yilmaz, D., Lassabatere, L., Angulo-Jaramillo, R., Deneele, D., Legret, M., 2010. Hydrodynamic Characterization of Basic Oxygen Furnace Slag through an Adapted BEST Method. Vadose Zone Journal 9, 107. doi:10.2136/vzj2009.0039
Quantitative analysis of pyroglutamic acid in peptides.
Suzuki, Y; Motoi, H; Sato, K
1999-08-01
A simplified and rapid procedure for the determination of pyroglutamic acid in peptides was developed. The method involves the enzymatic cleavage of the N-terminal pyroglutamate residue using a thermostable pyroglutamate aminopeptidase and isocratic HPLC separation of the resulting enzymatic hydrolysate using a column-switching technique. Pyroglutamate aminopeptidase from a thermophilic archaebacterium, Pyrococcus furiosus, cleaves the N-terminal pyroglutamic acid residue independently of the molecular weight of the substrate: it removes more than 85% of the pyroglutamate from peptides whose molecular weights range from 362.4 to 4599.4 Da. Thus, a new method is presented that quantitatively estimates the N-terminal pyroglutamic acid residue in peptides.
A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.
2014-01-01
A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates, and two different error analyses of the model weight prediction, are also discussed in the appendices of the paper.
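A sketch of the two-stage "wind-off" fit described above: first estimate the model weight W from gravity components resolved through pitch attitude, then feed W into a second least-squares fit for the center-of-gravity coordinates from the measured pitching moments. The axis conventions, moment model, and numbers are simplified assumptions for illustration:

```python
import numpy as np

theta = np.radians([-10, -5, 0, 5, 10, 15])   # wind-off pitch angles
rng = np.random.default_rng(3)

W_true, xcg, zcg = 250.0, 0.40, 0.05          # N, m, m (invented truth)
NF = W_true * np.cos(theta) + rng.normal(0, 0.3, theta.size)   # normal force
AF = W_true * np.sin(theta) + rng.normal(0, 0.3, theta.size)   # axial force
PM = W_true * (xcg * np.cos(theta) + zcg * np.sin(theta)) \
     + rng.normal(0, 0.1, theta.size)                          # pitching moment

# Stage 1: single-parameter fit of W from the stacked force equations.
A1 = np.concatenate([np.cos(theta), np.sin(theta)])[:, None]
b1 = np.concatenate([NF, AF])
W_hat = np.linalg.lstsq(A1, b1, rcond=None)[0][0]

# Stage 2: CG fit uses W_hat as a fixed input, mirroring the paper's scheme.
A2 = np.column_stack([W_hat * np.cos(theta), W_hat * np.sin(theta)])
xz_hat = np.linalg.lstsq(A2, PM, rcond=None)[0]
print(W_hat, xz_hat)
```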
Simplified method for calculating shear deflections of beams.
I. Orosz
1970-01-01
When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...
A Methodology for Developing Army Acquisition Strategies for an Uncertain Future
2007-01-01
For example, they employ the Automated Cost Estimating Integrated Tool (ACEIT) to simplify life cycle cost estimates; other tools are...
Potential application of the consistency approach for vaccine potency testing.
Arciniega, J; Sirota, L A
2012-01-01
The Consistency Approach offers the possibility of reducing the number of animals used for a potency test. However, it is critical to assess the effect that such reduction may have on assay performance. Consistency of production, sometimes referred to as consistency of manufacture or manufacturing, is an old concept implicit in regulation, which aims to ensure the uninterrupted release of safe and effective products. Consistency of manufacture can be described in terms of process capability, or the ability of a process to produce output within specification limits. For example, the standard method for potency testing of inactivated rabies vaccines is a multiple-dilution vaccination challenge test in mice that gives a quantitative, although highly variable estimate. On the other hand, a single-dilution test that does not give a quantitative estimate, but rather shows if the vaccine meets the specification has been proposed. This simplified test can lead to a considerable reduction in the number of animals used. However, traditional indices of process capability assume that the output population (potency values) is normally distributed, which clearly is not the case for the simplified approach. Appropriate computation of capability indices for the latter case will require special statistical considerations.
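A sketch of a standard process-capability index for potency data follows. Cpk compares the process mean and spread with the specification limits; as the abstract notes, this formula assumes normally distributed output, which a pass/fail single-dilution test does not provide. The data and limits are invented:

```python
import numpy as np

def cpk(values, lsl, usl):
    """Process capability index from sample mean and standard deviation."""
    mu, sigma = np.mean(values), np.std(values, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

potencies = np.random.default_rng(7).normal(4.0, 0.4, size=30)  # e.g. IU/dose
print(cpk(potencies, lsl=2.5, usl=5.5))   # >1.33 is a common adequacy target
```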
The financial viability of an SOFC cogeneration system in single-family dwellings
NASA Astrophysics Data System (ADS)
Alanne, Kari; Saari, Arto; Ugursal, V. Ismet; Good, Joel
In the near future, fuel cell-based residential micro-CHP systems will compete with traditional methods of energy supply. A micro-CHP system may be considered viable if its incremental capital cost relative to its competitors equals the savings accumulated over a given period of time. A simplified model is developed in this study to estimate the operation of a residential solid oxide fuel cell (SOFC) system. A comparative assessment of the SOFC system vis-à-vis heating systems based on gas, oil and electricity is conducted using the simplified model for a single-family house located in Ottawa and in Vancouver. The energy consumption of the house is estimated using the HOT2000 building simulation program. A financial analysis is carried out to evaluate the sensitivity of the maximum allowable capital cost with respect to system sizing, acceptable payback period, energy price and the electricity buyback strategy of the energy utility. Based on the financial analysis, small (1-2 kWe) SOFC systems seem to be feasible in the considered case. The present study also shows that an SOFC system is an especially attractive alternative to oil-fired and electric heating systems.
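The viability criterion stated above reduces to a present-value calculation: the maximum allowable incremental capital cost equals the operating savings accumulated over the acceptable payback period. A minimal sketch, with an assumed discount rate and illustrative savings (the study additionally varies energy price and buyback strategy):

```python
def max_allowable_capital(annual_savings, years, rate=0.05):
    """Discounted sum of annual savings over the payback period."""
    return sum(annual_savings / (1 + rate)**t for t in range(1, years + 1))

# e.g. a 1-2 kWe SOFC saving $400/yr vs. an oil furnace, 10-year payback
print(round(max_allowable_capital(400.0, 10), 2))   # ~3088.69
```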
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 13.305-4 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.305-4... purchase requisition, contracting officer verification statement, or other agency approved method of...
NASA Astrophysics Data System (ADS)
Miquel, Benjamin
The dynamic or seismic behavior of hydraulic structures is, as for conventional structures, essential to ensuring the protection of human lives. These analyses also aim at limiting the structural damage caused by an earthquake so as to prevent rupture or collapse of the structure. The particularity of hydraulic structures is that internal displacements are caused not only by the earthquake itself, but also by the hydrodynamic loads resulting from fluid-structure interaction. This thesis reviews the existing complex and simplified methods for performing such dynamic analyses of hydraulic structures. For the existing complex methods, attention is placed on the difficulties arising from their use. In particular, this work examines the use of transmitting boundary conditions to simulate the semi-infinite extent of reservoirs. A procedure has been developed to estimate the error that these boundary conditions can introduce in finite element dynamic analyses. Depending on their formulation and location, we showed that they can considerably affect the response of such fluid-structure systems. For practical engineering applications, simplified procedures are still needed to evaluate the dynamic behavior of structures in contact with water. A review of the existing simplified procedures showed that these methods are based on numerous simplifications that can affect the prediction of the dynamic behavior of such systems. One of the main objectives of this thesis has therefore been to develop new simplified methods that are more accurate than the existing ones. First, a new spectral analysis method has been proposed. Expressions for the fundamental frequency of fluid-structure systems, the key parameter of spectral analysis, have been developed. We show that this new technique can easily be implemented in a spreadsheet or program, and that its calculation time is near instantaneous. When compared to more complex analytical or numerical methods, this new procedure yields excellent predictions of the dynamic behavior of fluid-structure systems. Spectral analyses ignore the transient and oscillatory nature of vibrations. When such dynamic analyses show that some areas of the studied structure undergo excessive stresses, time history analyses allow a better estimate of the extent of these zones as well as of the times at which these excessive stresses occur. Furthermore, the existing spectral analysis methods for fluid-structure systems account only for the static effect of higher modes. Though this can generally be sufficient for dams, for flexible structures the dynamic effect of these modes should be accounted for. New methods have been developed for fluid-structure systems to address these observations and to account for the flexibility of foundations. A first method was developed to study structures in contact with one or two finite or infinite water domains. This new technique includes the flexibility of structures and foundations as well as the dynamic effect of higher vibration modes and variations in the levels of the water domains. The method was then extended to the study of beam structures in contact with fluids. These new developments have also allowed existing analytical formulations of the dynamic properties of a dry beam to be extended to a new formulation that includes the effect of fluid-structure interaction. The method yields a very good estimate of the dynamic behavior of beam-fluid systems and beam-like structures in contact with fluid.
Finally, a Modified Accelerogram Method (MAM) has been developed to transform the design earthquake into a new accelerogram that directly accounts for the effect of fluid-structure interaction. This new accelerogram can therefore be applied directly to the dry structure (i.e. without water) to calculate the dynamic response of the fluid-structure system. This original technique can include the numerous parameters that influence the dynamic response of such systems and allows the fluid-structure interaction to be treated analytically while retaining the advantages of finite element modeling.
Improving Estimation of Ground Casualty Risk From Reentering Space Objects
NASA Technical Reports Server (NTRS)
Ostrom, Chris L.
2017-01-01
A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update of the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or on a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimate based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses, first, a simple analytical approach to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and, second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
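A sketch of the simple analytical step described above: for a circular orbit with inclination i, the sub-satellite latitude has the known density f(lat) = cos(lat) / (pi * sqrt(sin(i)^2 - sin(lat)^2)) on [-i, i]. Integrating it over each band and weighting by that band's population gives a relative casualty exposure; the population vector below is a placeholder:

```python
import numpy as np
from scipy.integrate import quad

def band_fraction(lat_lo, lat_hi, inc):
    """Fraction of orbital time spent between two latitudes (radians)."""
    f = lambda lat: np.cos(lat) / (np.pi * np.sqrt(np.sin(inc)**2 - np.sin(lat)**2))
    return quad(f, lat_lo, lat_hi)[0]

inc = np.radians(51.6)                       # e.g. an ISS-like inclination
edges = np.radians(np.arange(-50, 51, 10))   # 10-degree bands inside +/- i
fracs = np.array([band_fraction(a, b, inc)
                  for a, b in zip(edges[:-1], edges[1:])])
pop = np.ones(len(fracs))                    # placeholder population weights

# Partial sum (< 1: bands stop short of the +/-51.6 deg turning points),
# and the band contributing most relative exposure.
print(fracs.sum(), (fracs * pop).argmax())
```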
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jin; Yu, Yaming; Van Dyk, David A.
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
Jerszurki, Daniela; Souza, Jorge L. M.; Silva, Lucas C. R.
2017-01-01
The development of new reference evapotranspiration (ETo) methods holds significant promise for improving our quantitative understanding of climatic impacts on water loss from the land to the atmosphere. To address the challenge of estimating ETo in tropical and subtropical regions where direct measurements are scarce, we tested a new method based on geographical patterns of extraterrestrial radiation (Ra) and atmospheric water potential (Ψair). Our approach consisted of generating daily estimates of ETo across several climate zones in Brazil–as a model system–which we compared with standard EToPM (Penman-Monteith) estimates. In contrast with EToPM, the simplified method (EToMJS) relies solely on Ψair calculated from widely available air temperature (°C) and relative humidity (%) data, which, combined with Ra data, resulted in reliable estimates of equivalent evaporation (Ee) and ETo. We used regression analyses of Ψair vs EToPM and Ee vs EToPM to calibrate the EToMJS(Ψair) and EToMJS estimates from 2004 to 2014 and between seasons and climatic zones. Finally, we evaluated the performance of the new method based on the coefficient of determination (R2) and correlation (R), index of agreement "d", mean absolute error (MAE) and mean ratio (MR). This evaluation confirmed the suitability of the EToMJS method for application in tropical and subtropical regions, where the climatic information needed for the standard EToPM calculation is absent. PMID:28658324
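The Ψair input can be computed from temperature and humidity alone via the standard thermodynamic relation Ψair = (R*T / Vw) * ln(RH/100); whether the authors apply further calibration constants is not shown here, so treat this as a hedged sketch of the input step:

```python
import numpy as np

R = 8.314      # gas constant, J mol-1 K-1
VW = 1.8e-5    # molar volume of liquid water, m3 mol-1

def psi_air(temp_c, rh_percent):
    """Atmospheric water potential in MPa (negative for unsaturated air)."""
    t_k = temp_c + 273.15
    return R * t_k / VW * np.log(rh_percent / 100.0) / 1e6

print(psi_air(25.0, 60.0))   # about -70 MPa at 25 C and 60% RH
```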
How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Ensor, Joe E.; Pasciak, Alexander S.
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared along with reference air kerma to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure. Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available.
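A sketch of the standard indirect PSD calculation the study evaluates: the displayed reference air kerma is corrected from the interventional reference point to the actual skin plane by the inverse-square law, then scaled by backscatter and table/pad attenuation. The factor values below are typical assumptions for illustration, not the study's measured ones:

```python
def psd_estimate(ka_r, d_ref=0.60, d_skin=0.45, bsf=1.35, table_factor=0.85):
    """Indirect peak skin dose (Gy) from reference air kerma ka_r (Gy).

    d_ref: source-to-reference-point distance (m); d_skin: actual
    source-to-skin distance (m); bsf: backscatter factor; table_factor:
    transmission through table and pad.
    """
    inverse_square = (d_ref / d_skin) ** 2
    return ka_r * inverse_square * bsf * table_factor

print(psd_estimate(2.0))   # ~4.1 Gy for a displayed 2.0 Gy reference air kerma
```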
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis.
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-09-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis.
[Investigation of the safety of microbial biotechnological products and their hygienic regulation].
Omel'ianets', T H; Kovalenko, N K; Holovach, T M
2008-01-01
Peculiarities of the influence of microbial preparations, based on microorganisms of different taxonomic groups, on warm-blooded organisms are considered; these must be taken into account when developing a strategy for the toxico-hygienic study of such preparations and when substantiating hygienic standards for industrial facilities and the environment. The possibility of simplifying the methodological scheme for the toxicological assessment and hygienic regulation of microbial preparations based on soil nitrogen-fixing microorganisms is discussed.
NASA Astrophysics Data System (ADS)
Spencer, S.; Ogle, S. M.; Wirth, T. C.; Sivakami, G.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) provides methods and guidance for estimating anthropogenic greenhouse gas emissions for reporting to the United Nations Framework Convention on Climate Change. The methods are comprehensive and require extensive data compilation, management, aggregation, documentation and calculations of source and sink categories to achieve robust emissions estimates. IPCC Guidelines describe three estimation tiers that require increasing levels of country-specific data and method complexity. Use of higher tiers should improve overall accuracy and reduce uncertainty in estimates. The AFOLU sector represents a complex set of methods for estimating greenhouse gas emissions and carbon sinks. Major AFOLU emissions and sinks include carbon dioxide (CO2) from carbon stock change in biomass, dead organic matter and soils, urea or lime application to soils, and oxidation of carbon in drained organic soils; nitrous oxide (N2O) and methane (CH4) emissions from livestock management and biomass burning; N2O from organic amendments and fertilizer application to soils, and CH4 emissions from rice cultivation. To assist inventory compilers with calculating AFOLU-sector estimates, the Agriculture and Land Use Greenhouse Gas Inventory Tool (ALU) was designed to implement Tier 1 and 2 methods using IPCC Good Practice Guidance. It guides the compiler through activity data entry, emission factor assignment, and emissions calculations while carefully maintaining data integrity. ALU also provides IPCC defaults and can estimate uncertainty. ALU was designed to simplify the AFOLU inventory compilation process at regional or national scales; disaggregating the process into a series of steps reduces the potential for errors in the compilation process. An example application has been developed using ALU to estimate methane emissions from rice production in the United States.
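At its core, a Tier 1 estimate of the kind ALU automates multiplies activity data by emission factors and sums over categories. A minimal sketch of that pattern; the category names and factor values below are placeholders, not IPCC defaults.

    # Tier 1 pattern: emissions = sum(activity_data * emission_factor)
    activity_data = {              # e.g., hectares, head of livestock
        "rice_area_ha": 12000,
        "dairy_cattle_head": 5000,
    }
    emission_factors = {           # placeholder factors, kg CH4 per unit
        "rice_area_ha": 120.0,
        "dairy_cattle_head": 110.0,
    }

    ch4_kg = sum(activity_data[k] * emission_factors[k] for k in activity_data)
    print(f"Estimated CH4 emissions: {ch4_kg / 1000:.1f} t")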
Stochastic estimation of transmissivity fields conditioned to flow connectivity data
NASA Astrophysics Data System (ADS)
Freixas, Genis; Fernàndez-Garcia, Daniel; Sanchez-vila, Xavier
2017-04-01
Most methods for hydraulic parameter interpretation rely on a number of simplifications regarding the homogeneity of the underlying porous media. This way, the actual heterogeneity of any natural parameter, such as transmissivity, is transferred to the estimates in a way that depends heavily on the interpretation method used. An example is a pumping test, in most cases interpreted by means of the Cooper-Jacob method, which implicitly assumes a homogeneous isotropic confined aquifer. It was shown that the estimates obtained from this method when applied to a real site are not local values, but still have a physical meaning: the estimated transmissivity is equal to the effective transmissivity characteristic of the regional scale, while the log-ratio of the estimated storage coefficient to the actual value (assumed constant) is an indicator of flow connectivity, representative of the scale given by the distance between the pumping and the observation wells. In this work we propose a methodology that combines these connectivity indicators with actual measurements of log-transmissivity at selected points to obtain a map of the best local transmissivity estimates using cokriging. Since the interpolation involves two variables measured at different support scales, a critical point is the estimation of the covariance and cross-covariance matrices, involving some quadratures that are obtained using a simplified approach. The method was applied to a synthetic field displaying statistical anisotropy, showing that the use of connectivity indicators mixed with the local values provides a better representation of the local value map, in particular regarding the enhanced representation of the continuity of structures corresponding to either high or low values.
A user-oriented and computerized model for estimating vehicle ride quality
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Barker, L. M.
1984-01-01
A simplified empirical model and computer program for estimating passenger ride comfort within air and surface transportation systems are described. The model is based on subjective ratings from more than 3000 persons who were exposed to controlled combinations of noise and vibration in the passenger ride quality apparatus. This model has the capability of transforming individual elements of a vehicle's noise and vibration environment into subjective discomfort units and then combining the subjective units to produce a single discomfort index typifying passenger acceptance of the environment. The computational procedures required to obtain discomfort estimates are discussed, and a user-oriented ride comfort computer program is described. Examples illustrating application of the simplified model to helicopter and automobile ride environments are presented.
NASA Astrophysics Data System (ADS)
Anayah, F. M.; Kaluarachchi, J. J.
2014-06-01
Reliable estimation of evapotranspiration (ET) is important for the purpose of water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical in estimating regional ET using meteorological data only. However, prior studies have found limitations in these methods especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET in contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than the recent studies using data-intensive, classical methods. Average root mean square error (RMSE), mean absolute bias (BIAS) and R2 (coefficient of determination) across 34 global sites were 20.57 mm month^-1, 10.55 mm month^-1 and 0.64, respectively. The proposed model showed a step forward toward predicting ET in large river basins with limited data and requiring no calibration.
Hardware Implementation of a MIMO Decoder Using Matrix Factorization Based Channel Estimation
NASA Astrophysics Data System (ADS)
Islam, Mohammad Tariqul; Numan, Mostafa Wasiuddin; Misran, Norbahiah; Ali, Mohd Alauddin Mohd; Singh, Mandeep
2011-05-01
This paper presents an efficient hardware realization of a multiple-input multiple-output (MIMO) wireless communication decoder that utilizes the available resources by adopting the technique of parallelism. The hardware is designed and implemented on a Xilinx Virtex™-4 XC4VLX60 field-programmable gate array (FPGA) device in a modular approach which simplifies and eases hardware updates, and facilitates testing of the various modules independently. The decoder involves a proficient channel estimation module that employs matrix factorization on least squares (LS) estimation to reduce a full rank matrix into a simpler form in order to eliminate matrix inversion. This results in performance improvement and complexity reduction of the MIMO system. Performance evaluation of the proposed method is validated through MATLAB simulations which indicate 2 dB improvement in terms of SNR compared to LS estimation. Moreover, complexity comparison is performed in terms of mathematical operations, which shows that the proposed approach appreciably outperforms LS estimation at a lower complexity and represents a good solution for channel estimation technique.
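The idea of factorizing the least-squares problem so that no explicit matrix inversion is formed can be sketched as follows. This is a generic QR-based LS channel estimate, not the paper's exact FPGA datapath.

    import numpy as np

    def ls_channel_estimate(X, Y):
        """LS estimate of H in Y = H X + N, solved via QR factorization.

        X -- (n_tx, n_sym) known pilot matrix
        Y -- (n_rx, n_sym) received matrix
        The normal equations X X^H G = X Y^H (with G = H^H) are solved
        through the QR factorization X^H = Q R, avoiding (X X^H)^-1.
        """
        Q, R = np.linalg.qr(X.conj().T)                       # X^H = Q R
        G = np.linalg.solve(R, Q.conj().T @ Y.conj().T)       # G = H^H
        return G.conj().T

    # 2x2 MIMO link with 8 pilot symbols and light noise
    rng = np.random.default_rng(0)
    H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    X = rng.normal(size=(2, 8)) + 1j * rng.normal(size=(2, 8))
    Y = H @ X + 0.01 * rng.normal(size=(2, 8))
    print(np.allclose(ls_channel_estimate(X, Y), H, atol=0.1))  # True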
Global parameter estimation for thermodynamic models of transcriptional regulation.
Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N
2013-07-15
Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models, and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.
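For readers wanting to reproduce the flavor of this comparison: local simplex search is available in SciPy, and CMA-ES in the third-party cma package. The sketch below runs both on a toy multimodal objective standing in for a model-fit score, not the authors' transcription models.

    import numpy as np
    from scipy.optimize import minimize
    import cma   # third-party CMA-ES package (pip install cma)

    def objective(x):
        """Toy multimodal surface (Rastrigin-like) with minimum at 0."""
        x = np.asarray(x)
        return np.sum(x**2) + 10 * np.sum(1 - np.cos(2 * np.pi * x))

    x0 = np.full(5, 3.0)

    local = minimize(objective, x0, method="Nelder-Mead")    # local simplex
    xbest, es = cma.fmin2(objective, x0, sigma0=1.0,
                          options={"verbose": -9})           # global CMA-ES

    print("Nelder-Mead:", objective(local.x))
    print("CMA-ES:     ", objective(xbest))   # typically lower on this surface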
MRI contrast agent concentration and tumor interstitial fluid pressure.
Liu, L J; Schlesinger, M
2016-10-07
The present work describes the relationship between tumor interstitial fluid pressure (TIFP) and the concentration of contrast agent for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We predict the spatial distribution of TIFP based on that of contrast agent concentration. We also discuss the cases for estimating tumor interstitial volume fraction (void fraction or porosity of porous medium), ve, and contrast volume transfer constant, K(trans), by measuring the ratio of contrast agent concentration in tissue to that in plasma. A linear fluid velocity distribution may reflect a quadratic function of TIFP distribution and lead to a practical method for TIFP estimation. To calculate TIFP, the parameters or variables should preferably be measured along the direction of the linear fluid velocity (this is in the same direction as the gray value distribution of the image, which is also linear). This method may simplify the calculation for estimating TIFP. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Trajectory Planning by Preserving Flexibility: Metrics and Analysis
NASA Technical Reports Server (NTRS)
Idris, Husni R.; El-Wakil, Tarek; Wing, David J.
2008-01-01
In order to support traffic management functions, such as mitigating traffic complexity, ground and airborne systems may benefit from preserving or optimizing trajectory flexibility. To help support this hypothesis, trajectory flexibility metrics have been defined in previous work to represent the trajectory's robustness and adaptability to the risk of violating safety and traffic management constraints. In this paper these metrics are instantiated in the case of planning a trajectory with the heading degree of freedom. A metric estimation method is presented based on simplifying assumptions, namely discrete time and heading maneuvers. A case is analyzed to demonstrate the estimation method and its use in trajectory planning in a situation involving meeting a time constraint and avoiding loss of separation with nearby traffic. The case involves comparing path-stretch trajectories, in terms of adaptability and robustness along each, deduced from a map of estimated flexibility metrics over the solution space. The case demonstrated anecdotally that preserving flexibility may help mitigate certain factors that contribute to traffic complexity, namely by reducing proximity and confrontation.
NASA Astrophysics Data System (ADS)
Chen, Huaizhen; Zhang, Guangzhi
2017-05-01
Fracture detection and fluid identification are important tasks in fractured reservoir characterization. Our goal is to demonstrate a direct approach to utilize azimuthal seismic data to estimate fluid bulk modulus, porosity, and dry fracture weaknesses, which decreases the uncertainty of fluid identification. Combining Gassmann's (Vier. der Natur. Gesellschaft Zürich 96:1-23, 1951) equations and the linear-slip model, we first establish new simplified expressions of stiffness parameters for a gas-bearing saturated fractured rock with low porosity and small fracture density, and then we derive a novel PP-wave reflection coefficient in terms of dry background rock properties (P-wave and S-wave moduli, and density), fracture (dry fracture weaknesses), porosity, and fluid (fluid bulk modulus). A Bayesian Markov chain Monte Carlo nonlinear inversion method is proposed to estimate fluid bulk modulus, porosity, and fracture weaknesses directly from azimuthal seismic data. The inversion method yields reasonable estimates in the case of synthetic data containing moderate noise and stable results on real data.
Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei
2017-10-01
To overcome range anxiety, one of the important strategies is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. Besides, a discrete wavelet transform technique is employed to capture the statistical information of the past dynamics of input currents, which is utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method can provide accurate SOC and parameter estimation, and the predicted RDT can help resolve range anxiety issues.
Estimating Monetized Benefits of Groundwater Recharge from Stormwater Retention Practices
The purpose of the study is to inform valuation of groundwater recharge from stormwater retention in areas projected for new development and redevelopment. This study examined a simplified methodology for estimating recharge volume.
Quantifying Elements of a Lunar Economy Based on Resource Needs
NASA Astrophysics Data System (ADS)
Greenblatt, J. B.
2017-10-01
We model a simplified lunar economy from human life support, Earth materials consumption, and energy and propulsion requirement estimates, constrained by lunar elemental abundances; estimate likely imports/exports and "gross interplanetary product."
Waples, Robin S; Scribner, Kim; Moore, Jennifer; Draheim, Hope; Etter, Dwayne; Boersen, Mark
2018-04-14
The idealized concept of a population is integral to ecology, evolutionary biology, and natural resource management. To make analyses tractable, most models adopt simplifying assumptions, which almost inevitably are violated by real species in nature. Here we focus on both demographic and genetic estimates of effective population size per generation (Ne), the effective number of breeders per year (Nb), and Wright's neighborhood size (NS) for black bears (Ursus americanus) that are continuously distributed in the northern lower peninsula of Michigan, USA. We illustrate practical application of recently-developed methods to account for violations of two common, simplifying assumptions about populations: 1) reproduction occurs in discrete generations, and 2) mating occurs randomly among all individuals. We use a 9-year harvest dataset of >3300 individuals, together with genetic determination of 221 parent-offspring pairs, to estimate male and female vital rates, including age-specific survival, age-specific fecundity, and age-specific variance in fecundity (for which empirical data are rare). We find strong evidence for overdispersed variance in reproductive success of same-age individuals in both sexes, and we show that constraints on litter size have a strong influence on results. We also estimate that another life-history trait that is often ignored (skip breeding by females) has a relatively modest influence, reducing Nb by 9% and increasing Ne by 3%. We conclude that isolation by distance depresses genetic estimates of Nb, which implicitly assume a randomly-mating population. Estimated demographic NS (100, based on parent-offspring dispersal) was similar to genetic NS (85, based on regression of genetic distance and geographic distance), indicating that the >36,000 km2 study area includes about 4-5 black-bear neighborhoods. Results from this expansive data set provide important insight into effects of violating assumptions when estimating evolutionary parameters for long-lived, free-ranging species. In conjunction with recently-developed analytical methodology, the ready availability of non-lethal DNA sampling methods and the ability to rapidly and cheaply survey many thousands of molecular markers should facilitate eco-evolutionary studies like this for many more species in nature.
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, X. R.; Wang, X.
2016-03-01
When using the genetic algorithm to solve the problem of too-short-arc (TSA) orbit determination, the methods for outlier editing used in the classical method are no longer applicable, owing to the difference in computing processes between the genetic algorithm and the classical method. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. Comparison of the results of different loss functions shows that the least median of squares and least trimmed squares methods can greatly improve the robustness of TSA determination, and have a high breakdown point.
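The two robust criteria mentioned, least median of squares (LMS) and least trimmed squares (LTS), are straightforward to drop into a GA fitness function. A minimal sketch of the loss functions themselves, with the orbit-determination residuals abstracted as a residual vector:

    import numpy as np

    def lms_loss(residuals):
        """Least median of squares: robust to up to ~50% outliers."""
        return np.median(residuals**2)

    def lts_loss(residuals, trim_fraction=0.75):
        """Least trimmed squares: sum of the h smallest squared residuals."""
        r2 = np.sort(residuals**2)
        h = int(trim_fraction * len(r2))
        return np.sum(r2[:h])

    # Fitness of a candidate orbit = -loss of its observation residuals
    residuals = np.array([0.1, -0.2, 0.15, 8.0, -0.05])   # one gross outlier
    print(lms_loss(residuals), lts_loss(residuals))       # outlier ignored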
Estimating the circuit delay of FPGA with a transfer learning method
NASA Astrophysics Data System (ADS)
Cui, Xiuhai; Liu, Datong; Peng, Yu; Peng, Xiyuan
2017-10-01
With the increase in FPGA (Field Programmable Gate Array) functionality, the FPGA has become an on-chip system platform. Due to this increased complexity, estimating the delay of an FPGA is very challenging. To address this problem, we propose a transfer learning estimation of delay (TLED) method to simplify delay estimation across FPGAs of different speed grades. FPGAs of the same family but of different speed grades come from the same process and layout, so their delays are correlated. Therefore, one speed grade is chosen to provide the basic training samples in this paper, and training samples for other speed grades are derived from the basic samples through transfer learning. At the same time, we also select a few target FPGA samples as training samples. A general predictive model is trained on these samples, so a single estimation model can estimate circuit delay across different speed grades. The TLED framework includes three phases: 1) building a basic circuit delay library, including multipliers, adders, shifters, and so on, whose circuits are used to train the predictive model; 2) selecting, through comparative experiments among different algorithms, the random forest algorithm to train the predictive model; and 3) predicting the target circuit delay with the predictive model. The Artix-7, Kintex-7, and Virtex-7 are selected for the experiments; each includes the -1, -2, -2l, and -3 speed grades. The experiments show that the delay estimation accuracy score is more than 92% with the TLED method, indicating that TLED is a feasible, efficient and effective delay assessment method, especially in the high-level synthesis stage of FPGA tools.
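A hedged sketch of the TLED idea: mix many samples from a base speed grade, rescaled toward the target grade, with a few measured target samples, and train one random forest regressor. The feature names, the scale-factor transfer step, and the data below are hypothetical stand-ins, not the paper's exact procedure.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Hypothetical circuit features: logic depth, fanout, routed wirelength
    w = np.array([1.0, 0.5, 2.0])
    X_base = rng.uniform(size=(500, 3))
    y_base = 2.0 * X_base @ w + 0.1          # base-grade delays (ns)

    # A few measured samples from the target speed grade anchor the model;
    # here the target grade is assumed to scale base delays by 1.3.
    X_target = rng.uniform(size=(20, 3))
    y_target = 1.3 * (2.0 * X_target @ w + 0.1)

    X_train = np.vstack([X_base, X_target])
    y_train = np.concatenate([1.3 * y_base, y_target])   # transferred samples

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(model.predict(X_target[:3]))   # delay estimates for the target grade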
Aerial video mosaicking using binary feature tracking
NASA Astrophysics Data System (ADS)
Minnehan, Breton; Savakis, Andreas
2015-05-01
Unmanned Aerial Vehicles are becoming an increasingly attractive platform for many applications, as their cost decreases and their capabilities increase. Creating detailed maps from aerial data requires fast and accurate video mosaicking methods. Traditional mosaicking techniques rely on inter-frame homography estimations that are cascaded through the video sequence. Computationally expensive keypoint matching algorithms are often used to determine the correspondence of keypoints between frames. This paper presents a video mosaicking method that uses an object tracking approach for matching keypoints between frames to improve both efficiency and robustness. The proposed tracking method matches local binary descriptors between frames and leverages the spatial locality of the keypoints to simplify the matching process. Our method is robust to cascaded errors by determining the homography between each frame and the ground plane rather than the prior frame. The frame-to-ground homography is calculated based on the relationship of each point's image coordinates and its estimated location on the ground plane. Robustness to moving objects is integrated into the homography estimation step through detecting anomalies in the motion of keypoints and eliminating the influence of outliers. The resulting mosaics are of high accuracy and can be computed in real time.
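A minimal OpenCV sketch of the binary-feature matching and RANSAC homography step described above; the frame-to-ground chaining, spatial-locality gating, and moving-object motion checks are omitted.

    import cv2
    import numpy as np

    def frame_homography(img_a, img_b):
        """Estimate the homography mapping img_a onto img_b (ORB + RANSAC)."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)

        # Hamming distance suits binary descriptors; cross-check prunes matches
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)

        src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

        # RANSAC rejects mismatches and keypoints on moving objects as outliers
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H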
Reitz, Meredith; Senay, Gabriel; Sanford, Ward E.
2017-01-01
Evapotranspiration (ET) is a key component of the hydrologic cycle, accounting for ~70% of precipitation in the conterminous U.S. (CONUS), but it has been a challenge to predict accurately across different spatio-temporal scales. The increasing availability of remotely sensed data has led to significant advances in the frequency and spatial resolution of ET estimates, derived from energy balance principles with variables such as temperature used to estimate surface latent heat flux. Although remote sensing methods excel at depicting spatial and temporal variability, estimation of ET independently of other water budget components can lead to inconsistency with other budget terms. Methods that rely on ground-based data better constrain long-term ET, but are unable to provide the same temporal resolution. Here we combine long-term ET estimates from a water-balance approach with the SSEBop (operational Simplified Surface Energy Balance) remote sensing-based ET product for 2000–2015. We test the new combined method, the original SSEBop product, and another remote sensing ET product (MOD16) against monthly measurements from 119 flux towers. The new product showed advantages especially in non-irrigated areas where the new method showed a coefficient of determination R2 of 0.44, compared to 0.41 for SSEBop or 0.35 for MOD16. The resulting monthly data set will be a useful, unique contribution to ET estimation, due to its combination of remote sensing-based variability and ground-based long-term water balance constraints.
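One simple way to impose a long-term water-balance constraint on a monthly remote-sensing ET series is to rescale it so its multi-year total matches the water-balance estimate. The sketch below shows that rescaling only; it is an assumption about the combination step, not necessarily the authors' exact algorithm.

    import numpy as np

    def constrain_et(et_monthly_rs, et_wb_annual, years):
        """Rescale monthly remote-sensing ET to match a water-balance total.

        et_monthly_rs -- monthly ET from remote sensing (mm), 12*years values
        et_wb_annual  -- long-term mean annual ET from water balance (mm/yr)
        """
        scale = (et_wb_annual * years) / np.sum(et_monthly_rs)
        return scale * np.asarray(et_monthly_rs)

    rs = np.tile([10, 15, 30, 55, 90, 120, 130, 115, 75, 40, 20, 10], 16)
    print(constrain_et(rs, 640.0, 16).sum() / 16)   # 640.0 mm/yr after scaling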
NASA Astrophysics Data System (ADS)
Zhou, Shiyuan; Sun, Haoyu; Xu, Chunguang; Cao, Xiandong; Cui, Liming; Xiao, Dingguo
2015-03-01
The echo signal energy is directly affected by the eccentricity or angle of the incident sound beam when detecting inner longitudinal cracks in thick-walled pipes. A method for analyzing the relationship between echo signal energy and incident eccentricity is brought forward, which can be used to estimate echo signal energy when testing inside-wall longitudinal cracks of pipes, using shear waves mode-converted from compression waves with the water-immersion method, by performing a two-dimensional integration of the "energy coefficient" in both the circumferential and axial directions. The calculation model is constructed for the cylindrical sound beam case, in which the refraction and reflection energy coefficients are allowed to differ from ray to ray across the beam. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, the results of both the two-dimensional integration and the one-dimensional (circumferential) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual practice of simplifying the sound beam as a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not appropriate.
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-12-01
Aims: A simplified model is derived for estimating rate coefficients for inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular, the rate coefficients with high and moderate values. Such processes are important for non-local thermodynamic equilibrium modeling of cool stellar atmospheres. Methods: The derived method is based on the asymptotic approach for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: It is found that the rate coefficients are expressed via statistical probabilities and reduced rate coefficients. It is shown that the reduced rate coefficients for neutralization and ion-pair formation processes depend on single electronic bound energies of an atomic particle, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to barium-hydrogen ionic collisions. For the first time, rate coefficients are evaluated for inelastic processes in Ba+ + H and Ba2+ + H- collisions for all transitions between the states from the ground and up to and including the ionic state. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A33
NASA Astrophysics Data System (ADS)
Chen, Long; Bian, Mingyuan; Luo, Yugong; Qin, Zhaobo; Li, Keqiang
2016-01-01
In this paper, a resonance frequency-based tire-road friction coefficient (TRFC) estimation method is proposed by considering the dynamics performance of the in-wheel motor drive system under small slip ratio conditions. A frequency response function (FRF) is deduced for the drive system that is composed of a dynamic tire model and a simplified motor model. A linear relationship between the squared system resonance frequency and the TRFC is described with the FRF. Furthermore, the resonance frequency is identified by the Auto-Regressive eXogenous model using the information of the motor torque and the wheel speed, and the TRFC is estimated thereafter by a recursive least squares filter with the identified resonance frequency. Finally, the effectiveness of the proposed approach is demonstrated through simulations and experimental tests on different road surfaces.
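The final step, recursively estimating TRFC from identified squared resonance frequencies via the linear FRF relation, can be sketched with a scalar recursive least-squares filter. The slope value and noise level below are hypothetical stand-ins for the coefficients derived from the FRF.

    import numpy as np

    def rls_scalar(phi, y, theta, P, lam=0.98):
        """One recursive least-squares update for y = phi * theta + noise."""
        k = P * phi / (lam + phi * P * phi)
        theta = theta + k * (y - phi * theta)
        P = (P - k * phi * P) / lam
        return theta, P

    a = 900.0                 # hypothetical FRF slope: f_res^2 = a * mu
    mu_true = 0.6             # true tire-road friction coefficient
    theta, P = 0.0, 1000.0    # initial TRFC estimate and covariance
    rng = np.random.default_rng(2)
    for _ in range(200):
        f2 = a * mu_true + rng.normal(scale=5.0)   # identified f_res^2 samples
        theta, P = rls_scalar(a, f2, theta, P)

    print(f"estimated TRFC: {theta:.2f}")   # converges near 0.60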
Scalable problems and memory bounded speedup
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Ni, Lionel M.
1992-01-01
In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimation. Another set considers a simplified case and provides a clear picture on the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
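Writing f for the sequential fraction of the work and p for the number of processors, the three simplified formulations are compact enough to state directly. The memory-bounded form below follows the scaled-work formulation with a scaling function g(p), which reduces to the other two laws as noted in the abstract.

    def amdahl(f, p):
        """Simplified fixed-size speedup (Amdahl's law)."""
        return 1.0 / (f + (1.0 - f) / p)

    def gustafson(f, p):
        """Simplified fixed-time speedup (Gustafson's scaled speedup)."""
        return f + p * (1.0 - f)

    def memory_bounded(f, p, g):
        """Simplified memory-bounded speedup; g scales the parallel work.

        g = 1 recovers Amdahl's law; g = p recovers Gustafson's law.
        """
        return (f + (1.0 - f) * g) / (f + (1.0 - f) * g / p)

    p = 64
    print(amdahl(0.05, p), gustafson(0.05, p))
    print(memory_bounded(0.05, p, g=1), memory_bounded(0.05, p, g=p))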
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Pozzi, P.; Bezzubik, V. V.; Belashenkov, N. R.
2017-06-01
Superresolution image reconstruction method based on the structured illumination microscopy (SIM) principle with reduced and simplified pattern set is presented. The method described needs only 2 sinusoidal patterns shifted by half a period for each spatial direction of reconstruction, instead of the minimum of 3 for the previously known methods. The method is based on estimating redundant frequency components in the acquired set of modulated images. Digital processing is based on linear operations. When applied to several spatial orientations, the image set can be further reduced to a single pattern for each spatial orientation, complemented by a single non-modulated image for all the orientations. By utilizing this method for the case of two spatial orientations, the total input image set is reduced up to 3 images, providing up to 2-fold improvement in data acquisition time compared to the conventional 3-pattern SIM method. Using the simplified pattern design, the field of view can be doubled with the same number of spatial light modulator raster elements, resulting in a total 4-fold increase in the space-time product. The method requires precise knowledge of the optical transfer function (OTF). The key limitation is the thickness of object layer that scatters or emits light, which requires to be sufficiently small relatively to the lens depth of field. Numerical simulations and experimental results are presented. Experimental results are obtained on the SIM setup with the spatial light modulator based on the 1920x1080 digital micromirror device.
NASA Technical Reports Server (NTRS)
Baer-Riedhart, J. L.
1982-01-01
A simplified gross thrust calculation method was evaluated for its ability to predict the gross thrust of a modified J85-21 engine. The method used tailpipe pressure data and ambient pressure data to predict the gross thrust. The method's algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope when compared with thrust measured in the altitude facility. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.
Development of a water-use data system in Minnesota
Horn, M.A.
1986-01-01
The Minnesota Water-Use Data System stores data on the quantity of individual annual water withdrawals and discharges in relation to the water resources affected, provides descriptors for aggregation of data and trend analysis, and enables access to additional data contained in other data bases. MWUDS is stored on a computer at the Land Management Information Center, an agency associated with the State Planning Agency. Interactive menu-driven programs simplify data entry, update, and retrieval and are easy to use. Estimates of unreported water use supplement reported water use to completely describe the stress on the hydrologic system. Links or common elements developed in the MWUDS enable access to data available in other State water-related data bases, forming a water-resource information system. Water-use information can be improved by developing methods for increasing accuracy of reported water use and refining methods for estimating unreported water use.
Morishita, Y
2001-05-01
Issues concerning the effective use of so-called simplified analytical systems are discussed from the perspective of a laboratory technician. 1. Data from simplified analytical systems should agree with those of designated reference methods, so that discrepancies do not arise between data from different laboratories. 2. The accuracy of results measured with simplified analytical systems is hard to scrutinize thoroughly and correctly with quality-control surveillance procedures based on stored pooled serum or partly processed blood. 3. It is necessary to present guidelines on the content of evaluations that guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized by laboratory technicians and vendors' technicians. 5. It should further be noted that the cost of simplified analytical systems is much higher than that of routine methods with liquid reagents. 6. It is hoped that various substances in human serum, such as cytokines, hormones, tumor markers, and vitamins, can also be measured by simplified analytical systems.
Guidelines and Metrics for Assessing Space System Cost Estimates
2008-01-01
Notification: Methods for Procuring Supplies and Services Under Simplified Acquisition Procedures
Project #OA-FY15-0193, June 18, 2015. The EPA OIG plans to begin the preliminary research phase of auditing the methods used in procuring supplies and services under simplified acquisition procedures.
A one-step method for modelling longitudinal data with differential equations.
Hu, Yueqin; Treinen, Raymond
2018-04-06
Differential equation models are frequently used to describe non-linear trajectories of longitudinal data. This study proposes a new approach to estimate the parameters in differential equation models. Instead of estimating derivatives from the observed data first and then fitting a differential equation to the derivatives, our new approach directly fits the analytic solution of a differential equation to the observed data, and therefore simplifies the procedure and avoids bias from derivative estimation. A simulation study indicates that the analytic solutions of differential equations (ASDE) approach obtains unbiased estimates of parameters and their standard errors. Compared with other approaches that estimate derivatives first, ASDE has smaller standard errors, larger statistical power and accurate Type I error. Although ASDE obtains biased estimation when the system has a sudden phase change, the bias is not serious and a solution is also provided to solve the phase problem. The ASDE method is illustrated and applied to a two-week study on consumers' shopping behaviour after a sale promotion, and to a set of public data tracking participants' grammatical facial expressions in sign language. R code for ASDE and recommendations for sample size and starting values are provided. Limitations and several possible expansions of ASDE are also discussed. © 2018 The British Psychological Society.
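The core of ASDE, fitting the analytic solution of a differential equation directly to observations rather than fitting estimated derivatives, can be sketched with SciPy's curve_fit on a logistic growth model. The data here are simulated, not the shopping or sign-language data.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic_solution(t, k, r, x0):
        """Analytic solution of dx/dt = r x (1 - x/k) with x(0) = x0."""
        return k / (1.0 + (k / x0 - 1.0) * np.exp(-r * t))

    rng = np.random.default_rng(3)
    t = np.linspace(0, 14, 50)                    # e.g., a two-week window
    x = logistic_solution(t, 100.0, 0.8, 5.0) + rng.normal(scale=2.0, size=t.size)

    # Fit the solution itself to the data; no derivative estimation needed
    params, cov = curve_fit(logistic_solution, t, x, p0=[80.0, 0.5, 3.0])
    print(params)                   # recovers (k, r, x0) near (100, 0.8, 5)
    print(np.sqrt(np.diag(cov)))    # standard errors of the estimates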
Code of Federal Regulations, 2011 CFR
2011-10-01
48 CFR 13.304 [Reserved]. Federal Acquisition Regulation; Contracting Methods and Contract Types; Simplified Acquisition Procedures; Simplified Acquisition Methods.
Keystroke-Level Analysis to Estimate Time to Process Pages in Online Learning Environments
ERIC Educational Resources Information Center
Bälter, Olle; Zimmaro, Dawn
2018-01-01
It is challenging for students to plan their work sessions in online environments, as it is very difficult to make estimates on how much material there is to cover. In order to simplify this estimation, we have extended the Keystroke-level analysis model with individual reading speed of text, figures, and questions. This was used to estimate how…
ERIC Educational Resources Information Center
Phillips, Daniel W.; Montello, Daniel R.
2015-01-01
Previous research has examined heuristics--simplified decision-making rules-of-thumb--for geospatial reasoning. This study examined at two locations the influence of beliefs about local coastline orientation on estimated directions to local and distant places; estimates were made immediately or after fifteen seconds. This study goes beyond…
On Least Squares Fitting Nonlinear Submodels.
ERIC Educational Resources Information Center
Bechtel, Gordon G.
Three simplifying conditions are given for obtaining least squares (LS) estimates for a nonlinear submodel of a linear model. If these are satisfied, and if the subset of nonlinear parameters may be LS fit to the corresponding LS estimates of the linear model, then one attains the desired LS estimates for the entire submodel. Two illustrative…
NASA Astrophysics Data System (ADS)
Inoue, K.; Kataoka, H.; Nagai, Y.; Hasegawa, M.; Kobayashi, Y.
2013-10-01
Positron annihilation spectroscopy is employed to estimate the size of subnanometer-scale open spaces in insulating materials. In most cases, the size is estimated from the lifetime of long-lived ortho-positronium (o-Ps) by pickoff annihilation using a simplified model. However, reactions of Ps with surrounding electrons other than the pickoff reaction, such as spin conversion or chemical reaction, could give a substantially underestimated size using the simplified model. In the present paper, we report that the size of the open spaces can be evaluated correctly by the angular correlation of positron annihilation radiation (ACAR) with a magnetic field using the spin-polarization effect on Ps formation, even if such reactions of Ps occur in the material. This method is applied to the subnanometer-scale structural open spaces of silica-based glass doped with Fe. We demonstrate the influence of the Ps reaction on size estimation of the open spaces from the o-Ps lifetime. Furthermore, the type of reaction, whether spin conversion or chemical, is distinguished from the magnetic field dependence of the Ps self-annihilation component intensity in the ACAR spectra. The Ps reaction in silica-based glass doped with Fe is a chemical reaction with Fe ions (most likely oxidation) rather than spin conversion. The chemical quenching rate with Fe ions is determined from the dependence of the o-Ps lifetime on the Fe content.
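The simplified lifetime-to-size model referred to is, in most applications, the Tao-Eldrup relation for spherical open spaces; assuming that is the model meant here, a sketch of the forward calculation:

    import math

    DELTA_R = 0.166   # nm, empirical electron-layer thickness in Tao-Eldrup

    def ops_lifetime_ns(radius_nm):
        """o-Ps pickoff lifetime (ns) for a spherical open space (Tao-Eldrup)."""
        r0 = radius_nm + DELTA_R
        rate = 2.0 * (1.0 - radius_nm / r0
                      + math.sin(2.0 * math.pi * radius_nm / r0) / (2.0 * math.pi))
        return 1.0 / rate

    # If spin conversion or a chemical reaction also shortens the o-Ps
    # lifetime, inverting this relation underestimates the radius, which is
    # the pitfall the ACAR-based method above avoids.
    print(f"{ops_lifetime_ns(0.3):.2f} ns")   # lifetime for a 0.3 nm radius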
Non-contact AFM measurement of the Hamaker constants of solids: Calibrating cantilever geometries.
Fronczak, Sean G; Browne, Christopher A; Krenek, Elizabeth C; Beaudoin, Stephen P; Corti, David S
2018-05-01
Surface effects arising from roughness and deformation can negatively affect the results of AFM contact experiments. Using the non-contact portion of an AFM deflection curve is therefore desirable for estimating the Hamaker constant, A, of a solid material. A previously validated non-contact quasi-dynamic method for estimating A is revisited, in which the cantilever tip is now always represented by an "effective sphere". In addition to simplifying this previous method, accurate estimates of A can still be obtained even though precise knowledge of the nanoscale geometric features of the cantilever tip are no longer required. The tip's "effective" radius of curvature, R_eff, is determined from a "calibration" step, in which the tip's deflection at first contact with the surface is measured for a substrate with a known Hamaker constant. After R_eff is known for a given tip, estimates of A for other surfaces of interest are then determined. An experimental study was conducted to validate the new method and the obtained results are in good agreement with predictions from the Lifshitz approximation, when available. Since R_eff accounts for all geometric uncertainties of the tip through a single fitted parameter, no visual fitting of the tip shape was required. Copyright © 2018 Elsevier Inc. All rights reserved.
26 CFR 1.199-4 - Costs allocable to domestic production gross receipts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... using the simplified deduction method. Paragraph (f) of this section provides a small business... taxpayer for internal management or other business purposes; whether the method is used for other Federal... than a taxpayer that uses the small business simplified overall method of paragraph (f) of this section...
2012-01-01
To build a life cycle assessment (LCA) database of Japanese products embracing their global supply chains in a manner requiring lower time and labor burdens, this study estimates the intensity of embodied global environmental burden for commodities produced in Japan. The intensity of embodied global environmental burden is a measure of the environmental burden generated globally by unit production of the commodity and can be used as life cycle inventory data in LCA. The calculation employs an input–output LCA method with a global link input–output model that defines a global system boundary grounded in a simplified multiregional input–output framework. As a result, the intensities of embodied global environmental burden for 406 Japanese commodities are determined in terms of energy consumption, greenhouse-gas emissions (carbon dioxide, methane, nitrous oxide, perfluorocarbons, hydrofluorocarbons, sulfur hexafluoride, and their summation), and air-pollutant emissions (nitrogen oxide and sulfur oxide). The uncertainties in the intensities of embodied global environmental burden attributable to the simplified structure of the global link input–output model are quantified using Monte Carlo simulation. In addition, by analyzing the structure of the embodied global greenhouse-gas intensities we characterize Japanese commodities in the context of LCA embracing global supply chains. PMID:22881452
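The input-output LCA calculation at the heart of this approach is the Leontief inverse: embodied intensities are the direct-burden vector multiplied by (I - A)^-1. A toy three-sector sketch follows; the coefficients are illustrative, not those of the global link input-output model.

    import numpy as np

    # Technical coefficient matrix A: inputs (rows) per unit output (columns)
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.05, 0.10],
                  [0.05, 0.10, 0.02]])

    # Direct emissions per unit output of each sector (kg CO2e per unit)
    direct = np.array([0.8, 2.5, 0.3])

    # Embodied intensity: burdens generated along the whole supply chain
    intensity = direct @ np.linalg.inv(np.eye(3) - A)
    print(intensity)   # each entry exceeds its sector's direct burden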
A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research
van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B; Neyer, Franz J; van Aken, Marcel AG
2014-01-01
Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret properly the results. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of the specification of prior knowledge are discussed. To illustrate Bayesian methods explained in this study, in a second example a series of studies that examine the theoretical framework of dynamic interactionism are considered. In the Discussion the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided. PMID:24116396
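The ingredients of a Bayesian analysis (prior, likelihood, posterior) can be made concrete with the simplest fully worked case, the conjugate beta-binomial update. A minimal sketch with illustrative numbers, not the example used in the study:

    from scipy import stats

    # Prior knowledge about a success probability, encoded as Beta(a, b)
    a_prior, b_prior = 2, 2          # weakly informative, centered on 0.5

    successes, failures = 17, 3      # observed data

    # Conjugacy: Beta prior + binomial likelihood -> Beta posterior
    posterior = stats.beta(a_prior + successes, b_prior + failures)

    print(posterior.mean())          # posterior mean, about 0.79
    print(posterior.interval(0.95))  # 95% credible interval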
Kuniya, Toshikazu; Sano, Hideki
2016-05-10
In mathematical epidemiology, age-structured epidemic models have usually been formulated as boundary-value problems of partial differential equations. On the other hand, in engineering, the backstepping method has recently been developed and widely studied by many authors. Using the backstepping method, we obtained a boundary feedback control that plays the role of a threshold criterion for predicting increase or decrease of the newly infected population. Under the assumption that the period of infectiousness is the same for all infected individuals (that is, the recovery rate is given by the Dirac delta function multiplied by a sufficiently large positive constant), the prediction method is simplified to the comparison of the numbers of reported cases at the current and previous time steps. Our prediction method was applied to the reported cases per sentinel of influenza in Japan from 2006 to 2015 and its accuracy was 0.81 (404 correct predictions out of 500). This was higher than that of ARIMA models with different orders of the autoregressive part, differencing and moving-average process. In addition, a proposed method for estimating the number of reported cases, which is consistent with our prediction method, was better than the best-fitted ARIMA model ARIMA(1,1,0) in the sense of mean square error. Our prediction method based on the backstepping method can be simplified to the comparison of the numbers of reported cases at the current and previous time steps. In spite of its simplicity, it can provide a good prediction for the spread of influenza in Japan.
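Under the simplifying assumption stated above, the prediction rule reduces to comparing consecutive reported-case counts. A sketch of evaluating that rule's accuracy; the weekly counts below are made up.

    def predict_increase(cases):
        """Predict rise/fall for next step: rise iff this count > the last."""
        return [cases[t] > cases[t - 1] for t in range(1, len(cases))]

    cases = [3, 5, 9, 14, 12, 8, 6, 7, 11, 18]    # made-up weekly counts
    preds = predict_increase(cases[:-1])           # predictions for each next week
    actual = [cases[t + 1] > cases[t] for t in range(1, len(cases) - 1)]

    accuracy = sum(p == a for p, a in zip(preds, actual)) / len(actual)
    print(f"accuracy = {accuracy:.2f}")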
Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara
2012-08-01
Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
48 CFR 13.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 CFR 13.302 Purchase orders. Federal Acquisition Regulation; Contracting Methods and Contract Types; Simplified Acquisition Procedures; Simplified Acquisition Methods.
A novel method to assess primary stability of press-fit acetabular cups.
Crosnier, Emilie A; Keogh, Patrick S; Miles, Anthony W
2014-11-01
Initial stability is an essential prerequisite to achieve osseointegration of press-fit acetabular cups in total hip replacements. Most in vitro methods that assess cup stability do not reproduce physiological loading conditions and use simplified acetabular models with a spherical cavity. The aim of this study was to investigate the effect of bone density and acetabular geometry on cup stability using a novel method for measuring acetabular cup micromotion. A press-fit cup was inserted into Sawbones(®) foam blocks having different densities to simulate normal and osteoporotic bone variations and different acetabular geometries. The stability of the cup was assessed in two ways: (a) measurement of micromotion of the cup in 6 degrees of freedom under physiological loading and (b) uniaxial push-out tests. The results indicate that changes in bone substrate density and acetabular geometry affect the stability of press-fit acetabular cups. They also suggest that cups implanted into weaker, for example, osteoporotic, bone are subjected to higher levels of micromotion and are therefore more prone to loosening. The decrease in stability of the cup in the physiological model suggests that using simplified spherical cavities to model the acetabulum over-estimates the initial stability of press-fit cups. This novel testing method should provide the basis for a more representative protocol for future pre-clinical evaluation of new acetabular cup designs. © IMechE 2014.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm with an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
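A common way to realize L1-norm estimation of this kind is iteratively reweighted least squares (IRLS). The sketch below is generic, not the c5++ implementation, and uses a toy clock-offset-and-rate fit with one ambiguity-like jump in the data.

    import numpy as np

    def irls_l1(A, y, iters=50, eps=1e-8):
        """Approximate argmin ||A x - y||_1 via iteratively reweighted LS."""
        x = np.linalg.lstsq(A, y, rcond=None)[0]      # L2 starting point
        for _ in range(iters):
            w = 1.0 / np.maximum(np.abs(y - A @ x), eps)   # 1/|residual| weights
            sw = np.sqrt(w)
            x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
        return x

    t = np.arange(10.0)
    y = 0.5 + 0.1 * t        # offset 0.5, rate 0.1
    y[6] += 3.0              # outlier / unresolved ambiguity jump
    A = np.column_stack([np.ones_like(t), t])
    print(irls_l1(A, y))     # close to [0.5, 0.1] despite the jump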
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
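As a hint of what the global sensitivity analysis step involves, the sketch below estimates first-order Sobol indices with a Saltelli-style Monte Carlo estimator; inputs with near-zero indices are candidates for freezing, which is how the stochastic dimension gets reduced. This is a generic textbook estimator under uniform inputs on [0, 1], not the paper's surrogate-based machinery.

    import numpy as np

    def first_order_sobol(model, dim, n=20000, seed=0):
        # Saltelli-type estimator: S_i = E[f(B) * (f(A_B^i) - f(A))] / Var(f).
        rng = np.random.default_rng(seed)
        A = rng.random((n, dim))
        B = rng.random((n, dim))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        s = np.empty(dim)
        for i in range(dim):
            ABi = A.copy()
            ABi[:, i] = B[:, i]              # resample only the i-th input
            s[i] = np.mean(fB * (model(ABi) - fA)) / var
        return s

    # Example: the second and later inputs barely matter and could be frozen.
    print(first_order_sobol(lambda X: X[:, 0] + 0.2 * X[:, 1] ** 2, dim=4))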
Bilateral step length estimation using a single inertial measurement unit attached to the pelvis
2012-01-01
Background: The estimation of spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally, during level walking, using a single inertial measurement unit (IMU) attached to the pelvis is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. Methods: The IMU was placed at pelvis level, fixed to the subject's belt on the right side. The method was validated using measurements from a stereo-photogrammetric (SP) system as a gold standard on nine subjects walking ten laps along a closed-loop track of about 25 m, varying their speed. For each loop, only the IMU data recorded in a 4-m-long portion of the track included in the calibrated volume of the SP system were used for the analysis. The method takes advantage of the cyclic nature of gait and requires an accurate determination of the foot-contact instants. A combination of a Kalman filter and an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. Results: The step length was estimated for all subjects with less than 3% error. Traversed distance was assessed with less than 2% error. Conclusions: The proposed method provided estimates of step length and traversed distance more accurate than any other method applied to measurements from a single IMU that can be found in the literature. In healthy subjects, it is reasonable to expect that errors in traversed distance estimation during daily monitoring activity would be of the same order of magnitude as those presented. PMID:22316235
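The core of the direct-and-reverse integration idea can be sketched in a few lines. Here acc_ap is assumed to be the gravity-compensated antero-posterior acceleration between two detected foot contacts, sampled at fs; the real KOSE method additionally applies Kalman filtering, optimal weighting, and the pelvic-rotation correction described above.

    import numpy as np

    def step_displacement(acc_ap, fs):
        # Direct integration assumes zero velocity at the first foot contact;
        # reverse integration assumes it at the second. Cross-fading the two
        # suppresses the drift either one alone would accumulate.
        dt = 1.0 / fs
        v_fwd = np.cumsum(acc_ap) * dt
        v_rev = -np.cumsum(acc_ap[::-1])[::-1] * dt
        w = np.linspace(0.0, 1.0, acc_ap.size)
        v = (1.0 - w) * v_fwd + w * v_rev
        return np.sum(v) * dt    # forward displacement over the step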
On-orbit calibration for star sensors without priori information.
Zhang, Hao; Niu, Yanxiong; Lu, Jiazhen; Zhang, Chengfen; Yang, Yanqiang
2017-07-24
The star sensor is a prerequisite navigation device for a spacecraft, and on-orbit calibration is an essential guarantee of its operational performance. However, traditional calibration methods rely on ground information and are invalid without priori information. Uncertain on-orbit parameters will eventually degrade the performance of the guidance, navigation, and control system. In this paper, a novel calibration method without priori information for on-orbit star sensors is proposed. First, a simplified back-propagation neural network is designed for focal length and main-point estimation along with system property evaluation, called coarse calibration. Then the unscented Kalman filter is adopted for the precise calibration of all parameters, including focal length, main point, and distortion. The proposed method benefits from self-initialization: no attitude or preinstalled sensor parameter is required. Precise star sensor parameter estimation can be achieved without priori information, which is a significant improvement for on-orbit devices. Simulation and experimental results demonstrate that the calibration is easy to operate, with high accuracy and robustness. The proposed method can satisfy the stringent requirements of most star sensors.
NASA Technical Reports Server (NTRS)
Shen, Ji-Yao; Taylor, Lawrence W., Jr.
1994-01-01
It is beneficial to use a distributed parameter model for large space structures because the approach minimizes the number of model parameters. Holzer's transfer matrix method provides a useful means to simplify and standardize the procedure for solving the system of partial differential equations. Any large space structure can be broken down into sub-structures with simple elastic and dynamical properties. For each single element, such as a beam, tether, or rigid body, we can derive the corresponding transfer matrix. Combining these element matrices enables the solution of the global system equations. The characteristic equation can then be formed by satisfying the appropriate boundary conditions. Natural frequencies and mode shapes are determined by searching for the roots of the characteristic equation within the frequency range of interest. This paper applies this methodology, together with the maximum likelihood estimation method, to refine the modal characteristics of the NASA Mini-Mast Truss by successively matching the theoretical response to the test data of the truss. The method is being applied to more complex configurations.
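A minimal numerical sketch of the transfer matrix workflow, using the simplest Holzer-type element (a uniform rod in axial vibration) rather than the beam, tether, and rigid-body elements of the paper; the material values are illustrative.

    import numpy as np

    def rod_transfer_matrix(omega, L, E, rho, A):
        # Field matrix relating the state (displacement u, axial force N)
        # at the two ends of a uniform rod segment.
        k = omega * np.sqrt(rho / E)
        return np.array([[np.cos(k * L), np.sin(k * L) / (E * A * k)],
                         [-E * A * k * np.sin(k * L), np.cos(k * L)]])

    # Chain three identical segments, then enforce fixed-free boundary
    # conditions (u = 0 at the root, N = 0 at the tip): the characteristic
    # function is the (2,2) entry of the global matrix, and its sign changes
    # bracket the natural frequencies.
    E, rho, A, L = 70e9, 2700.0, 1e-4, 1.0   # illustrative aluminium rod
    omegas = np.linspace(10.0, 2e4, 20000) * 2 * np.pi
    char = np.array([np.linalg.multi_dot([rod_transfer_matrix(w, L / 3, E, rho, A)] * 3)[1, 1]
                     for w in omegas])
    roots_hz = omegas[np.where(np.diff(np.sign(char)))[0]] / (2 * np.pi)
    print(roots_hz[:3])  # near (2n-1)*sqrt(E/rho)/(4L) for the uniform rod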
NASA Astrophysics Data System (ADS)
Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung
2018-05-01
A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensors and embedded platforms. A recent result on attitude reference system design is adapted to develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from INS and GPS with different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced for reducing the filter states and simplifying the propagation processes. Furthermore, assuming a small-angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies are completed. A low-cost MEMS IMU and a GPS receiver are employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.
Nose-to-tail analysis of an airbreathing hypersonic vehicle using an in-house simplified tool
NASA Astrophysics Data System (ADS)
Piscitelli, Filomena; Cutrone, Luigi; Pezzella, Giuseppe; Roncioni, Pietro; Marini, Marco
2017-07-01
SPREAD (Scramjet PREliminary Aerothermodynamic Design) is a simplified, in-house method developed by CIRA (Italian Aerospace Research Centre) that provides a preliminary estimation of engine/aeroshape performance for airbreathing configurations. It is especially useful for scramjet engines, for which the strong coupling between the aerothermodynamic (external) and propulsive (internal) flow fields requires real-time screening of several engine/aeroshape configurations and the identification of the most promising ones with respect to user-defined constraints and requirements. The outcome of this tool defines the baseline configuration for further design analyses with more accurate tools, e.g., CFD simulations and wind tunnel testing. The SPREAD tool has been used to perform the nose-to-tail analysis of the LAPCAT-II Mach 8 MR2.4 vehicle configuration. The numerical results demonstrate SPREAD's capability to quickly predict reliable values of aero-propulsive balance (i.e., net thrust) and aerodynamic efficiency in a pre-design phase.
48 CFR 1313.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 Federal Acquisition Regulations System, Department of Commerce, Contracting Methods and Contract Types, Simplified Acquisition Procedures, Simplified Acquisition Methods; Section 1313.302 - Purchase orders.
48 CFR 813.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 Federal Acquisition Regulations System, Department of Veterans Affairs, Contracting Methods and Contract Types, Simplified Acquisition Procedures, Simplified Acquisition Methods; Section 813.302 - Purchase orders.
48 CFR 1413.305 - Imprest fund.
Code of Federal Regulations, 2011 CFR
2011-10-01
48 Federal Acquisition Regulations System, Department of the Interior, Contracting Methods and Contract Types, Simplified Acquisition Procedures, Simplified Acquisition Methods; Section 1413.305 - Imprest fund.
48 CFR 1413.305 - Imprest fund.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 Federal Acquisition Regulations System, Department of the Interior, Contracting Methods and Contract Types, Simplified Acquisition Procedures, Simplified Acquisition Methods; Section 1413.305 - Imprest fund.
Simple design of slanted grating with simplified modal method.
Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun
2014-02-15
A simplified modal method (SMM) is presented that offers a clear physical picture of subwavelength slanted gratings. The diffraction characteristics of a slanted grating under the Littrow configuration are revealed by the SMM to be those of an equivalent rectangular grating, in good agreement with rigorous coupled-wave analysis. Based on this equivalence, we obtain an effective analytic solution that simplifies the design and optimization of slanted gratings. It offers a new approach to slanted grating design; for example, a 1×2 beam splitter can be designed easily. This method should be helpful for designing various new slanted grating devices.
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters; consequently, stationarity is often assumed as a matter of convenience rather than as an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
Direction of Arrival Estimation with a Novel Single-Port Smart Antenna
NASA Astrophysics Data System (ADS)
Sun, Chen; Karmakar, Nemai C.
2004-12-01
A novel direction of arrival (DOA) estimation technique that uses the conventional multiple-signal classification (MUSIC) algorithm with periodic signals is applied to a single-port smart antenna. Results show that the proposed method gives high-resolution (1 degree) DOA estimates in an uncorrelated signal environment. The novelty lies in applying the MUSIC algorithm to a simplified antenna configuration. Only one analogue-to-digital converter (ADC) is used in this antenna, which features low power consumption, low cost, and ease of fabrication. Modifications to the conventional MUSIC algorithm do not add much complexity. The proposed technique is also free from the negative influence of mutual coupling among antenna elements. It therefore offers an economical way to implement smart antennas extensively in existing wireless mobile communication systems, especially at power-consumption-limited mobile terminals such as laptops in wireless networks.
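The MUSIC step itself is compact; the sketch below computes the classical pseudospectrum for a half-wavelength-spaced uniform linear array from a sample covariance matrix. How that covariance is assembled from a single-port antenna is the paper's contribution and is not reproduced here.

    import numpy as np

    def music_spectrum(R, n_sources, angles_deg, d=0.5):
        # Split eigenvectors of the covariance into signal and noise subspaces
        # (eigh sorts eigenvalues ascending), then scan steering vectors.
        m = R.shape[0]
        _, vecs = np.linalg.eigh(R)
        En = vecs[:, :m - n_sources]                          # noise subspace
        th = np.deg2rad(np.asarray(angles_deg, float))
        n = np.arange(m)[:, None]
        A = np.exp(2j * np.pi * d * n * np.sin(th)[None, :])  # steering matrix
        return 1.0 / np.linalg.norm(En.conj().T @ A, axis=0) ** 2

Peaks of the returned pseudospectrum over the scanned angles give the DOA estimates.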
Simplified power control method for cellular mobile communication
NASA Astrophysics Data System (ADS)
Leung, Y. W.
1994-04-01
The centralized power control (CPC) method measures the gains of the communication links between every mobile and every base station in the cochannel cells and determines the optimal transmitter powers that maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method that achieves nearly the same performance as the CPC method but involves much lower measurement overhead.
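To make the optimization target concrete, the sketch below iterates a heuristic power update that equalizes carrier-to-interference ratios across cochannel links, which is what maximizing the minimum C/I amounts to in the feasible case. This is an illustrative fixed-point scheme, not Leung's algorithm or the CPC procedure.

    import numpy as np

    def balance_power(G, noise, iters=200):
        # G[i, j]: link gain from transmitter j to receiver i; the diagonal
        # carries the desired links. Powers are repeatedly steered toward
        # the link with the worst carrier-to-interference ratio (CIR).
        p = np.ones(G.shape[0])
        for _ in range(iters):
            signal = np.diag(G) * p
            interf = G @ p - signal + noise
            cir = signal / interf
            p *= cir.min() / cir        # damp strong links, boost weak ones
            p /= p.max()                # keep powers within a unit bound
        return p, cir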
Simplified formulae for the estimation of offshore wind turbines clutter on marine radars.
Grande, Olatz; Cañizo, Josune; Angulo, Itziar; Jenn, David; Danoon, Laith R; Guerra, David; de la Vega, David
2014-01-01
The potential impact that offshore wind farms may cause on nearby marine radars should be considered before the wind farm is installed. Strong radar echoes from the turbines may degrade radars' detection capability in the area around the wind farm. Although conventional computational methods provide accurate results of scattering by wind turbines, they are not directly implementable in software tools that can be used to conduct the impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. This method can be easily implemented in the system modeling software tools for the impact analysis of a wind farm in a real scenario.
SLAMMER: Seismic LAndslide Movement Modeled using Earthquake Records
Jibson, Randall W.; Rathje, Ellen M.; Jibson, Matthew W.; Lee, Yong W.
2013-01-01
This program is designed to facilitate conducting sliding-block analysis (also called permanent-deformation analysis) of slopes in order to estimate slope behavior during earthquakes. The program allows selection from among more than 2,100 strong-motion records from 28 earthquakes and allows users to add their own records to the collection. Any number of earthquake records can be selected using a search interface that selects records based on desired properties. Sliding-block analyses, using any combination of rigid-block (Newmark), decoupled, and fully coupled methods, are then conducted on the selected group of records, and results are compiled in both graphical and tabular form. Simplified methods for conducting each type of analysis are also included.
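The rigid-block (Newmark) method at the heart of such analyses fits in a few lines: the block accumulates relative displacement whenever the ground acceleration exceeds the yield acceleration, and sliding stops when the relative velocity returns to zero. This one-way sliding sketch is illustrative; SLAMMER's implementations (including the decoupled and fully coupled analyses) are considerably richer.

    def newmark_displacement(acc, dt, ky, g=9.81):
        # acc: ground acceleration time series in m/s^2; ky: yield
        # acceleration in g. Returns permanent displacement in meters.
        ay = ky * g
        v = d = 0.0
        for a in acc:
            if v > 0.0 or a > ay:
                v = max(v + (a - ay) * dt, 0.0)  # slide; stop when v hits 0
                d += v * dt
        return d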
Modern methods and systems for precise control of the quality of agricultural and food production
NASA Astrophysics Data System (ADS)
Bednarjevsky, Sergey S.; Veryasov, Yuri V.; Akinina, Evgeniya V.; Smirnov, Gennady I.
1999-01-01
The results on the modeling of non-linear dynamics of strong continuous and impulse radiation in the laser nephelometry of polydisperse biological systems, important from the viewpoint of applications in biotechnologies, are presented. The processes of nonlinear self-action of the laser radiation through multiple scattering in disperse biological agro-media are considered. Simplified algorithms for calculating the parameters of the biological media under investigation are indicated, and estimates of the errors of the laser-nephelometric measurements are given. Universal, highly informative optical analyzers and standard etalon specimens of agro-objects form the technological foundation of the considered methods and systems.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. Determining the uncertainty of the efficiency gain arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. The corresponding relative uncertainty was found to be as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed-dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method is explained, and a mitigation strategy is proposed.
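A generic version of the bootstrap step is sketched below: resample per-run computing times and form a percentile confidence interval for the gain ratio. The study's gain is defined at equal statistical uncertainty and uses the shortest interval rather than the percentile one, so treat this purely as an illustration of the mechanics.

    import numpy as np

    def bootstrap_gain_ci(t_conv, t_corr, n_boot=10000, alpha=0.05, seed=1):
        # t_conv, t_corr: arrays of computing times (conventional vs.
        # correlated sampling) normalized to equal statistical uncertainty.
        rng = np.random.default_rng(seed)
        gains = np.empty(n_boot)
        for b in range(n_boot):
            gains[b] = (rng.choice(t_conv, t_conv.size).mean()
                        / rng.choice(t_corr, t_corr.size).mean())
        return np.quantile(gains, [alpha / 2, 1 - alpha / 2])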
Zhang, Yajun; Chai, Tianyou; Wang, Hong; Wang, Dianhui; Chen, Xinkai
2018-06-01
Complex industrial processes are multivariable and generally exhibit strong coupling among their control loops together with heavily nonlinear behavior. This makes it very difficult to obtain an accurate model, and as a result conventional and data-driven control methods are difficult to apply. Using a twin-tank level control system as an example, a novel multivariable decoupling control algorithm with adaptive neural-fuzzy inference system (ANFIS)-based unmodeled dynamics (UD) compensation is proposed in this paper for a class of complex industrial processes. First, a nonlinear multivariable decoupling controller with UD compensation is introduced. Unlike existing methods, a decomposition estimation algorithm using ANFIS is employed to estimate the UD, and the desired estimation and decoupling control effects are achieved. Second, the proposed method does not require the complicated switching mechanism that has been commonly used in the literature, which significantly simplifies the decoupling algorithm and its realization. Third, based on some new lemmas and theorems, conditions for the stability and convergence of the closed-loop system are analyzed to show the uniform boundedness of all variables. This is followed by a summary of experimental tests on a heavily coupled nonlinear twin-tank system that demonstrate the effectiveness and practicability of the proposed method.
Simplified procedure for computing the absorption of sound by the atmosphere
DOT National Transportation Integrated Search
2007-10-31
This paper describes a study that resulted in the development of a simplified method for calculating attenuation by atmospheric absorption for wide-band sounds analyzed by one-third octave-band filters. The new method [referred to herein as the...
Direct Iterative Nonlinear Inversion by Multi-frequency T-matrix Completion
NASA Astrophysics Data System (ADS)
Jakobsen, M.; Wu, R. S.
2016-12-01
Researchers in the mathematical physics community have recently proposed a conceptually new method for solving nonlinear inverse scattering problems (like FWI) which is inspired by the theory of nonlocality of physical interactions. This method, which may be referred to as the T-matrix completion method, is very interesting since it is not based on linearization at any stage; there are no gradient vectors or (inverse) Hessian matrices to calculate. However, the convergence radius of this promising method is seriously restricted by its use of single-frequency scattering data only. In this study, we have developed a modified version of the T-matrix completion method which we believe is more suitable for applications to nonlinear inverse scattering problems in (exploration) seismology, because it makes use of multi-frequency data. Essentially, we have simplified the single-frequency T-matrix completion method of Levinson and Markel and combined it with the standard sequential frequency inversion (multi-scale regularization) method. For each frequency, we first estimate the experimental T-matrix by using the Moore-Penrose pseudo-inverse. This experimental T-matrix is then used to initiate an iterative procedure for successive estimation of the scattering potential and the T-matrix, using the Lippmann-Schwinger equation for the nonlinear relation between these two quantities. The main physical requirements in the basic iterative cycle are that the T-matrix should be data-compatible and the scattering potential operator should be dominantly local, although a non-local scattering potential operator is allowed in the intermediate iterations. In our simplified T-matrix completion strategy, we ensure that the T-matrix updates are always data-compatible simply by adding a suitable correction term in the real-space coordinate representation. Singular-value decomposition representations are not required in our formulation, since we have developed an efficient domain decomposition method. The results of several numerical experiments on the SEG/EAGE salt model illustrate the importance of using multi-frequency data when performing frequency-domain full waveform inversion in strongly scattering media via the new concept of T-matrix completion.
NASA Astrophysics Data System (ADS)
Nøtthellen, Jacob; Konst, Bente; Abildgaard, Andreas
2014-08-01
Purpose: To present a new and simplified method for pixel-wise determination of the signal-to-noise ratio improvement factor K_SNR of an antiscatter grid when used with a digital imaging system. The method was based on approximations of published formulas. The simplified estimate of K²_SNR may be used as a decision tool for whether or not to use an antiscatter grid. Methods: The primary transmission of the grid, Tp, was determined with and without a phantom present using a pattern of beam stops. The Bucky factor, B, was measured with and without a phantom present. K²_SNR maps were then created based on Tp and B. A formula was developed to calculate K²_SNR from the measured B without using the measured Tp. The formula was applied to two exposures of anthropomorphic phantoms, adult legs and baby chest, and to two homogeneous poly(methyl methacrylate) (PMMA) phantoms, 5 cm and 10 cm thick. The results from the anthropomorphic phantoms were compared to those based on the beam-stop method. The results for the PMMA phantoms were compared to a study that used a contrast-detail phantom. Results: 2D maps of K²_SNR over the entire adult legs and baby chest phantoms were created. The maps indicate that it is advantageous to use the antiscatter grid for imaging of the adult legs. For baby chest imaging the antiscatter grid is not recommended if only the lung regions are of interest. The K²_SNR maps based on the new method correspond to those from the beam-stop method, and the K²_SNR values from the homogeneous phantoms obtained with the two different approaches also agreed well with each other. Conclusion: A method to measure 2D K²_SNR associated with grid use in digital radiography systems was developed and validated. The proposed method requires four exposures and the use of a simple formula. It is fast and provides adequate estimates of K²_SNR.
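For orientation, the textbook ideal-detector relation connecting these quantities is K²_SNR = Tp² · B; the sketch below evaluates it pixel-wise. This background relation is an assumption stated here for illustration; the paper's contribution is a refined formula that avoids measuring Tp directly.

    import numpy as np

    def k2_snr(tp, bucky):
        # tp: primary transmission map of the grid; bucky: Bucky factor map.
        # Values above 1 indicate the grid improves SNR at that pixel.
        return np.asarray(tp) ** 2 * np.asarray(bucky)

    # Example: Tp = 0.7 and B = 3 give K2_SNR = 1.47, i.e., the grid helps.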
NASA Astrophysics Data System (ADS)
Delgado, A.; Gertig, C.; Blesa, E.; Loza, A.; Hidalgo, C.; Ron, R.
2016-05-01
Typical plant configurations for Central Receiver Systems (CRS) comprise a large field of heliostats that concentrate solar irradiation onto the receiver, which is elevated hundreds of meters above the ground. Wind speed changes with altitude above ground, impacting the receiver's thermal efficiency through variations in convective heat losses. In addition, the physical properties of air vary at high altitudes to a significant degree, which should be considered in the thermal-loss calculation. DNV GL has extensive experience in wind energy assessment, with reliable methodologies to reduce the uncertainty in the determination of the wind regime. As part of this study, DNV GL estimates the wind speed at high altitude for different sites using two methods: a detailed estimation applying the best practices used in the wind energy sector, based on measurements from various wind sensors, and a simplified estimation applying the power law (1, 2) using only one wind measurement and a representative value for the surface roughness. As a result of the study, a comparison of the wind speed estimates from both methods is presented, and the impact on the receiver performance for the evaluated case is estimated.
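The simplified estimation referred to here is the standard power-law vertical extrapolation, sketched below; the shear exponent 0.14 is a common open-terrain assumption, not necessarily the representative value DNV GL derives from the surface roughness.

    def wind_at_height(u_ref, z_ref, z, alpha=0.14):
        # Power law: u(z) = u_ref * (z / z_ref) ** alpha.
        return u_ref * (z / z_ref) ** alpha

    # Example: 6 m/s measured at 10 m extrapolates to about 8.3 m/s at a
    # 100 m receiver height with alpha = 0.14.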
Cao, Mengqiu; Suo, Shiteng; Han, Xu; Jin, Ke; Sun, Yawen; Wang, Yao; Ding, Weina; Qu, Jianxun; Zhang, Xiaohua; Zhou, Yan
2017-01-01
Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility in differentiating low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. From May 2016 to May 2017, 50 patients with confirmed glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC(0,1000)) map, perfusion-related parametric maps for the IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named simplified perfusion fraction (SPF), were generated. Correlation between perfusion-related parameters was analyzed using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Compared with f, SPF was more correlated with DCE MR imaging-derived Ktrans (ρ = 0.607; P < 0.001) and vp (ρ = 0.397; P = 0.004). Among all parameters, SPF achieved the highest accuracy for differentiating low- from high-grade gliomas, with an area under the ROC curve of 0.942, significantly higher than that of ADC(0,1000) (P = 0.004). Using SPF as a discriminative index, the diagnostic sensitivity and specificity were 87.1% and 94.7%, respectively, at the optimal cut-off value of 19.26%. Conclusion: The simplified method to measure tissue perfusion based on DWI using three b-values may be helpful in differentiating low- from high-grade gliomas. SPF may serve as a valuable alternative for measuring tumor perfusion in gliomas in a noninvasive, convenient and efficient way.
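One common way to build a perfusion-fraction surrogate from three b-values is sketched below: fit the tissue diffusion coefficient from the two higher b-values, back-extrapolate to b = 0, and take the relative excess of the measured b = 0 signal. This construction is an assumption for illustration and is not necessarily the paper's exact definition of SPF.

    import numpy as np

    def simplified_perfusion_fraction(s0, s_low, s_high, b_low=200.0, b_high=1000.0):
        # Mono-exponential diffusion estimated from the two higher b-values.
        d = np.log(s_low / s_high) / (b_high - b_low)
        s_extrap = s_low * np.exp(b_low * d)   # extrapolated b = 0 intercept
        return 1.0 - s_extrap / s0             # perfusion-related signal share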
Echavarría-Heras, Héctor; Leal-Ramírez, Cecilia; Villa-Diharce, Enrique; Castillo, Oscar
2014-01-01
Eelgrass is a cosmopolitan seagrass species that provides important ecological services in coastal and near-shore environments. Despite its relevance, loss of eelgrass habitats is noted worldwide. Restoration by replanting plays an important role, and accurate measurements of the standing crop and productivity of transplants are important for evaluating the restoration of the ecological functions of natural populations. Traditional assessments are destructive, and although they do not harm natural populations, in transplants the destruction of shoots might cause undesirable alterations. Non-destructive assessments of the aforementioned variables are obtained through allometric proxies expressed in terms of measurements of the lengths or areas of leaves. Digital imagery can produce measurements of leaf attributes without the removal of shoots, but sediment attachments, damage inflicted by drag forces, or humidity contents induce noise effects that reduce precision. Available techniques for dealing with noise caused by humidity contents on leaves use the concepts of adjacency, vicinity, connectivity and tolerance of similarity between pixels. Selecting an interval of tolerance of similarity for efficient measurements requires extended computational routines with tied statistical inferences, making the concomitant tasks complicated and time-consuming. The present approach proposes a simplified and cost-effective alternative, and also a general tool aimed at dealing with any sort of noise modifying eelgrass leaf images. Moreover, the selection criterion relies on a single statistic: the maximum value of the Concordance Correlation Coefficient for reproducing observed leaf areas through proxies obtained from digital images. Available data reveal that the present method delivers simplified, consistent estimations of the areas of eelgrass leaves taken from noisy digital images. Moreover, the proposed procedure is robust because both the optimal interval of tolerance of similarity and the reproducibility of observed leaf areas through digital image surrogates were independent of sample size. The present method provides simplified, unbiased and non-destructive measurements of eelgrass leaf area. These measurements, in conjunction with allometric methods, can predict the dynamics of eelgrass biomass and leaf growth through indirect techniques, reducing the destructive effect of sampling, which is fundamental to the evaluation of eelgrass restoration projects and thereby contributes to the conservation of this important seagrass species.
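The selection statistic is standard and easy to compute; the sketch below implements Lin's Concordance Correlation Coefficient between observed leaf areas and their image-derived proxies, the quantity maximized when choosing the tolerance-of-similarity interval.

    import numpy as np

    def concordance_ccc(x, y):
        # Lin's CCC: penalizes both poor correlation and systematic bias
        # between observed values x and proxies y.
        mx, my = x.mean(), y.mean()
        sxy = ((x - mx) * (y - my)).mean()
        return 2.0 * sxy / (x.var() + y.var() + (mx - my) ** 2)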
A Simplified Diagnostic Method for Elastomer Bond Durability
NASA Technical Reports Server (NTRS)
White, Paul
2009-01-01
A simplified method has been developed for determining bond durability under exposure to water or high-humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment required is an oven, specimen jars, and a conventional laboratory balance.
A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.
ERIC Educational Resources Information Center
Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.
This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…
Application of the variational-asymptotical method to composite plates
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.
1992-01-01
A method is developed for the 3D analysis of laminated plate deformation as an extension of the variational-asymptotical method of Atilgan and Hodges (1991). Both methods are based on treating plate deformation by splitting the 3D analysis into a linear through-the-thickness analysis and a 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotic approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.
Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom
2015-01-01
Objective: Estimating the calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are not published, accessible, or sufficient, and no individual participant or patient data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: a) the ABCD2 rule for prediction of 7-day stroke; and b) the CRB-65 rule for prediction of 30-day mortality. Predicted outcomes in a sample validation study were computed from CPR distribution patterns ("derivation model"). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of "predicted:observed" risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I²) on forest plots (fixed- and random-effects models), with and without adjustment of intercepts. The same approach was also applied to the CRB-65 rule. Results: Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0-3; intermediate, 4-5; high, 6-7 points), indicated that the predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61-0.82); however, calibration in some studies was low. In such cases of miscalibration, the under-prediction (RRs = 0.73-0.91, 95% CIs 0.41-1.48) could be corrected by intercept adjustment to account for incidence differences. An improvement in both heterogeneity and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90-1.06), with narrower 95% CIs (0.57-1.41), were achieved. Conclusion: Our results have an immediate clinical implication in situations where predicted outcomes in CPR validation studies are lacking or deficient, by describing how such predictions can be obtained by anyone using the derivation study alone, without any need for highly specialized knowledge or sophisticated statistics. PMID:25931829
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; owing to the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (i.e., station layout) of the ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method relies on detection simulations of all possible stations with cataloged data, makes a comprehensive comparative analysis of the various simulation results with a combinatorial method, and then selects an optimal result as the station layout scheme. This method is time-consuming for a single simulation and computationally complex for the combinatorial analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be handled by the traditional method. No better way to solve this problem has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of ground-based radar is simplified and a space-coverage projection model of radar facilities at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space-object orbital motion. After these two simplifications, the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified models. In addition, the detection areas of the ground-based radar network can be easily computed with the simplified model, and the embattling of the network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
Senay, Gabriel B.; Budde, Michael; Verdin, James P.; Melesse, Assefa M.
2007-01-01
Accurate crop performance monitoring and production estimation are critical for timely assessment of the food balance of several countries in the world. Since 2001, the Famine Early Warning Systems Network (FEWS NET) has been monitoring crop performance and relative production using satellite-derived data and simulation models in Africa, Central America, and Afghanistan where ground-based monitoring is limited because of a scarcity of weather stations. The commonly used crop monitoring models are based on a crop water-balance algorithm with inputs from satellite-derived rainfall estimates. These models are useful to monitor rainfed agriculture, but they are ineffective for irrigated areas. This study focused on Afghanistan, where over 80 percent of agricultural production comes from irrigated lands. We developed and implemented a Simplified Surface Energy Balance (SSEB) model to monitor and assess the performance of irrigated agriculture in Afghanistan using a combination of 1-km thermal data and 250-m Normalized Difference Vegetation Index (NDVI) data, both from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. We estimated seasonal actual evapotranspiration (ETa) over a period of six years (2000-2005) for two major irrigated river basins in Afghanistan, the Kabul and the Helmand, by analyzing up to 19 cloud-free thermal and NDVI images from each year. These seasonal ETa estimates were used as relative indicators of year-to-year production magnitude differences. The temporal water-use pattern of the two irrigated basins was indicative of the cropping patterns specific to each region. Our results were comparable to field reports and to estimates based on watershed-wide crop water-balance model results. For example, both methods found that the 2003 seasonal ETa was the highest of all six years. The method also captured water management scenarios where a unique year-to-year variability was identified in addition to water-use differences between upstream and downstream basins. A major advantage of the energy-balance approach is that it can be used to quantify spatial extent of irrigated fields and their water-use dynamics without reference to source of water as opposed to a water-balance model which requires knowledge of both the magnitude and temporal distribution of rainfall and irrigation applied to fields.
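The core SSEB computation is a linear scaling of reference ET by where each pixel's surface temperature falls between hot (dry, no ET) and cold (well-watered, maximum ET) anchor pixels; a schematic version is sketched below, with the anchor selection left to the analyst.

    import numpy as np

    def sseb_eta(ts, t_hot, t_cold, eto):
        # ts: land surface temperature map; t_hot/t_cold: anchor temperatures;
        # eto: reference ET. The ET fraction is clipped to the physical range.
        etf = np.clip((t_hot - ts) / (t_hot - t_cold), 0.0, 1.0)
        return etf * eto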
Azamathulla, H. Md.; Jarrett, Robert D.
2013-01-01
Manning's roughness coefficient (n) has been widely used in the estimation of flood discharges or depths of flow in natural channels. The selection of appropriate Manning's n values is therefore of paramount importance for hydraulic engineers and hydrologists and requires considerable experience, although extensive guidelines are available. Generally, the largest source of error in post-flood estimates (termed indirect measurements) is the estimate of Manning's n values, particularly when there has been minimal field verification of flow resistance. This emphasizes the need to improve methods for estimating n values. The objective of this study was to develop a soft computing model for estimating Manning's n values using 75 discharge measurements on 21 high-gradient streams in Colorado, USA. The data are from high-gradient (S > 0.002 m/m), cobble- and boulder-bed streams for within-bank flows. This study presents Gene-Expression Programming (GEP), an extension of Genetic Programming (GP), as an improved approach to estimate Manning's roughness coefficient for high-gradient streams. GEP is a search technique that automatically simplifies genetic programs during an evolutionary process to obtain the most robust computer program (e.g., it simplifies mathematical expressions, decision trees, polynomial constructs, and logical expressions). Field measurements collected by Jarrett (J Hydraulic Eng ASCE 110: 1519-1539, 1984) were used to train the GEP network and evolve programs, and the developed network and evolved programs were validated using observations that were not involved in training. The GEP and ANN-RBF (artificial neural network-radial basis function) models were found to be substantially more effective (e.g., R² for testing/validation of GEP and RBF-ANN is 0.745 and 0.65, respectively) than Jarrett's (1984) equation (R² for testing/validation of 0.58) in predicting Manning's n.
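The target quantity itself comes from inverting Manning's equation with field measurements, as sketched below in SI units; this is the standard definition, shown for context rather than taken from the paper.

    def manning_n(v, r_hyd, slope):
        # Manning's equation, V = (1/n) * R^(2/3) * S^(1/2), solved for n.
        # v: mean velocity (m/s); r_hyd: hydraulic radius (m);
        # slope: energy slope (m/m).
        return r_hyd ** (2.0 / 3.0) * slope ** 0.5 / v

    # Example: v = 2 m/s, R = 0.8 m, S = 0.01 gives n of about 0.043.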
The U.S. Environmental Protection Agency National Stormwater Calculator (NSWC) simplifies the task of estimating runoff through a straightforward simulation process based on the EPA Stormwater Management Model. The NSWC accesses localized climate and soil hydrology data, and opti...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, P.J.
1996-07-01
A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily from historical earthquake events along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated in this thesis through extensive parametric nonlinear time-history analyses. It was found that existing design guidelines need adjustment to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method leads to conservative results and that a wider range of seismically isolated bridges can be covered. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center; it is therefore more appropriate than existing simplified methods and is applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 bridge design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not yet been fully exploited to achieve enhanced performance under different levels of seismic hazard. A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis which permits optimum seismic performance to be achieved with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach for both the upgrade of existing seismically deficient bridges and the design of new isolated bridges.
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
Unsteady load on an oscillating Kaplan turbine runner
NASA Astrophysics Data System (ADS)
Puolakka, O.; Keto-Tokoi, J.; Matusiak, J.
2013-02-01
A Kaplan turbine runner oscillating in turbine waterways is subjected to a varying hydrodynamic load. Numerical simulation of the related unsteady flow is time-consuming and research is very limited. In this study, a simplified method based on unsteady airfoil theory is presented for evaluation of the unsteady load for vibration analyses of the turbine shaft line. The runner is assumed to oscillate as a rigid body in spin and axial heave, and the reaction force is resolved into added masses and dampings. The method is applied on three Kaplan runners at nominal operating conditions. Estimates for added masses and dampings are considered to be of a magnitude significant for shaft line vibration. Moderate variation in the added masses and minor variation in the added dampings is found in the frequency range of interest. Reference results for added masses are derived by solving the boundary value problem for small motions of inviscid fluid using the finite element method. Good correspondence is found in the added mass estimates of the two methods. The unsteady airfoil method is considered accurate enough for design purposes. Experimental results are needed for validation of unsteady load analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhiber, R; Usmanov, AV; Matthaeus, WH
Simple estimates of the number of Coulomb collisions experienced by the interplanetary plasma to the point of observation, i.e., the “collisional age”, can be usefully employed in the study of non-thermal features of the solar wind. Usually these estimates are based on local plasma properties at the point of observation. Here we improve the method of estimation of the collisional age by employing solutions obtained from global three-dimensional magnetohydrodynamics simulations. This enables evaluation of the complete analytical expression for the collisional age without using approximations. The improved estimation of the collisional timescale is compared with turbulence and expansion timescales to assess the relative importance of collisions. The collisional age computed using the approximate formula employed in previous work is compared with the improved simulation-based calculations to examine the validity of the simplified formula. We also develop an analytical expression for the evaluation of the collisional age, and we find good agreement between the numerical and analytical results. Finally, we briefly discuss the implications for an improved estimation of collisionality along spacecraft trajectories, including Solar Probe Plus.
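A local version of the estimate being improved upon can be sketched as follows. The Spitzer-type proton self-collision time with a fixed Coulomb logarithm is an order-of-magnitude assumption for illustration; the paper's point is precisely to replace such local shortcuts with the full expression evaluated along simulated solar wind solutions.

    def collisional_age(n_cc, T_K, r_au, v_kms, coulomb_log=21.0):
        # tau_c: Spitzer-like proton self-collision time (s), with density in
        # cm^-3 and temperature in K; transit: expansion time r / V (s).
        tau_c = 11.4 * T_K ** 1.5 / (n_cc * coulomb_log)
        transit = r_au * 1.496e8 / v_kms
        return transit / tau_c

    # Example: slow wind at 1 AU (n = 10 cm^-3, T = 4e4 K, V = 350 km/s)
    # gives a collisional age of order unity.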
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Using ultrasound CBE imaging without echo shift compensation for temperature estimation.
Tsui, Po-Hsiang; Chien, Yu-Ting; Liu, Hao-Li; Shu, Yu-Chen; Chen, Wen-Shiang
2012-09-01
Clinical trials have demonstrated that hyperthermia improves cancer treatments. Previous studies developed ultrasound temperature imaging methods, based on the changes in backscattered energy (CBE), to monitor temperature variations during hyperthermia. Echo shift, induced by increasing temperature, contaminates the CBE image, and its tracking and compensation should normally ensure that estimations of CBE at each pixel are correct. To obtain a simplified algorithm that would allow real-time computation of CBE images, this study evaluated the usefulness of CBE imaging without echo shift compensation in detecting temperature distributions. Experiments on phantoms, using different scatterer concentrations, and porcine livers were conducted to acquire raw backscattered data at temperatures ranging from 37°C to 45°C. Tissue samples of pork tenderloin were ablated in vitro by microwave irradiation to evaluate the feasibility of using the CBE image without compensation to monitor tissue ablation. CBE image construction was based on a ratio map obtained from the envelope image divided by the reference envelope image at 37°C. The experimental results demonstrated that the CBE image obtained without echo shift compensation is able to estimate temperature variations induced during uniform heating or tissue ablation. The magnitude of the CBE as a function of temperature obtained without compensation is stronger than that with compensation, implying that the CBE image without compensation has better sensitivity for detecting temperature. These findings suggest that echo shift tracking and compensation may be unnecessary in practice, thus simplifying the algorithm required to implement real-time CBE imaging.
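A minimal sketch of the ratio-map construction described: the CBE image is the backscattered-envelope image divided by the 37 °C reference envelope, expressed in dB as an energy ratio; the arrays here are placeholders for beamformed envelope data.

```python
import numpy as np

def cbe_image(env, env_ref, eps=1e-12):
    """CBE map in dB: energy ratio of an envelope image to the 37 degC reference."""
    return 10.0 * np.log10(((env + eps) / (env_ref + eps))**2)

env_ref = np.abs(np.random.randn(128, 128)) + 0.1   # reference envelope at 37 degC
env_hot = env_ref * 1.15                            # illustrative heated frame
cbe = cbe_image(env_hot, env_ref)                   # ~1.2 dB everywhere here
```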
Allometric scaling theory applied to FIA biomass estimation
David C. Chojnacky
2002-01-01
Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...
77 FR 73965 - Allocation of Costs Under the Simplified Methods; Hearing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-12
... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [REG-126770-06] RIN 1545-BG07 Allocation of Costs Under the Simplified Methods; Hearing AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of public hearing on notice of proposed rulemaking. SUMMARY: This document provides notice of...
New method for designing serial resonant power converters
NASA Astrophysics Data System (ADS)
Hinov, Nikolay
2017-12-01
This work presents a comprehensive method for the design of series resonant power converters. The method is based on a new simplified approach to the analysis of this kind of power electronic device: resonant operation is assumed when deriving the relation between input and output voltage, regardless of the actual operating mode (switching frequency below or above the resonant frequency). This approach is named the 'quasiresonant method of analysis', because it treats all operating modes as 'sort of' resonant modes. The error introduced by this hypothesis is estimated and compared against the classic analysis. The quasiresonant method of analysis offers two main advantages: speed and ease in designing the power circuits considered. Hence it is very useful in practice and in teaching power electronics. Its applicability is proven with mathematical modelling and computer simulation.
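As a point of reference for the quasiresonant assumption, the standard first-harmonic approximation of a series resonant converter's voltage gain, which is exactly 1 at resonance regardless of load; Q and the normalized frequency values are illustrative.

```python
import numpy as np

def src_gain(F, Q):
    """First-harmonic-approximation |Vout/Vin| of a series resonant converter.
    F = f_switching / f_resonant; Q = sqrt(L/C) / R_load (normalized)."""
    return 1.0 / np.sqrt(1.0 + Q**2 * (F - 1.0 / F)**2)

for F in (0.8, 1.0, 1.2):
    print(F, src_gain(F, Q=2.0))   # gain is exactly 1 at resonance (F = 1)
```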
77 FR 15969 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-19
... confidentiality of the contract rates, as required by 49 U.S.C. 11904. Background In Simplified Standards for Rail Rate Cases (Simplified Standards), EP 646 (Sub-No. 1) (STB served Sept. 5, 2007), aff'd sub nom. CSX...\\ Under the Three-Benchmark method as revised in Simplified Standards, each party creates and proffers to...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 (2014-10-01) List of laws inapplicable to contracts and subcontracts at or below the simplified acquisition threshold. 13.005 Section 13.005 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES...
NASA Astrophysics Data System (ADS)
He, Jing; Shi, Jin; Deng, Rui; Chen, Lin
2017-08-01
Recently, visible light communication (VLC) based on light-emitting diodes (LEDs) has been considered a candidate technology for fifth-generation (5G) communications: VLC is free of electromagnetic interference and can simplify the integration of VLC into heterogeneous wireless networks. Because the data rate of VLC systems is limited by low pumping efficiency, small output power and narrow modulation bandwidth, visible laser light communication (VLLC) systems using laser diodes (LDs) have attracted increasing attention. In addition, orthogonal frequency division multiplexing/offset quadrature amplitude modulation (OFDM/OQAM) is currently attracting attention in optical communications. Because it requires no cyclic prefix (CP) and uses pulse shapes that are well localized in the time-frequency domain, it can achieve high spectral efficiency. Moreover, OFDM/OQAM has lower out-of-band power leakage, which increases the system's robustness against inter-carrier interference (ICI) and frequency offset. In this paper, a discrete Fourier transform (DFT)-based channel estimation scheme combined with the interference approximation method (IAM) is proposed and experimentally demonstrated for a VLLC OFDM/OQAM system. The performance of the VLLC OFDM/OQAM system with and without DFT-based channel estimation is investigated. Moreover, the proposed DFT-based channel estimation scheme and the intra-symbol frequency-domain averaging (ISFA)-based method are also compared for the VLLC OFDM/OQAM system. The experimental results show that the EVM performance with the DFT-based channel estimation scheme is improved by about 3 dB compared with the conventional IAM method. In addition, the DFT-based channel estimation scheme resists channel noise more effectively than the ISFA-based method.
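A minimal numpy sketch of the generic DFT-based channel-estimation idea (denoise a least-squares pilot estimate by zeroing delay-domain taps beyond the expected channel length); subcarrier count and channel length are illustrative, and the OQAM-specific IAM preamble processing is omitted.

```python
import numpy as np

N, L = 256, 16                    # subcarriers, assumed channel length (illustrative)
rng = np.random.default_rng(0)
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 4)
H_true = np.fft.fft(h, N)

pilots = np.exp(1j * np.pi / 4) * np.ones(N)        # known pilot symbols
rx = H_true * pilots + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

H_ls = rx / pilots                # raw least-squares (e.g. IAM-based) estimate
h_t = np.fft.ifft(H_ls)
h_t[L:] = 0.0                     # keep only the first L delay taps (noise removal)
H_dft = np.fft.fft(h_t)           # denoised DFT-based estimate

print(np.mean(np.abs(H_ls - H_true)**2), np.mean(np.abs(H_dft - H_true)**2))
```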
sEMG-based joint force control for an upper-limb power-assist exoskeleton robot.
Li, Zhijun; Wang, Baocheng; Sun, Fuchun; Yang, Chenguang; Xie, Qing; Zhang, Weidong
2014-05-01
This paper investigates two surface electromyogram (sEMG)-based control strategies developed for a power-assist exoskeleton arm. Different from most existing position control approaches, this paper develops force control methods to make the exoskeleton robot behave like a human in order to provide better assistance. The exoskeleton robot is directly attached to a user's body and activated by the sEMG signals of the user's muscles, which reflect the user's motion intention. In the first proposed control method, the forces of an agonist and antagonist muscle pair are estimated, and their difference is used to produce the torque of the corresponding joint. In the second method, linear discriminant analysis-based classifiers are introduced as indicators of the motion type of the joints. The classifier outputs, together with the estimated force of the corresponding active muscle, then determine the torque control signals. Different from conventional approaches, one classifier is assigned to each joint, which decreases the training time and largely simplifies the recognition process. Finally, extensive experiments are conducted to illustrate the effectiveness of the proposed approaches.
Earth's magnetic field effect on MUF calculation and consequences for hmF2 trend estimates
NASA Astrophysics Data System (ADS)
Elias, Ana G.; Zossi, Bruno S.; Yiğit, Erdal; Saavedra, Zenon; de Haro Barbas, Blas F.
2017-10-01
Knowledge of the state of the upper atmosphere, and in particular of the ionosphere, is essential in several applications, such as systems used in radio frequency communications, satellite positioning and navigation. In general, these systems depend on the state and evolution of the ionosphere. In all applications involving the ionosphere, an essential task is to determine the path and modifications of ray propagation through the ionospheric plasma. The ionospheric refractive index and the maximum usable frequency (MUF) that can be received over a given distance are key parameters that are crucial for such technological applications. However, the current representation of these parameters is in general simplified, neglecting the effects of Earth's magnetic field. The value of M(3000)F2, related to the MUF that can be received over 3000 km, is routinely scaled from ionograms using a technique which also neglects geomagnetic field effects, assuming a standard simplified propagation model. M(3000)F2 is therefore expected to be affected by a systematic trend linked to the secular variation of Earth's magnetic field. On the other hand, among the upper atmospheric effects expected from increasing greenhouse gas concentrations is the lowering of the F2-layer peak density height, hmF2. This ionospheric parameter is usually estimated using the M(3000)F2 factor, so it would also carry this "systematic trend". In this study, the geomagnetic field effect on MUF estimations is analyzed, as well as its impact on hmF2 long-term trend estimations. We find that M(3000)F2 increases when the geomagnetic field is included in its calculation, and hence hmF2, estimated using existing methods involving no magnetic field for M(3000)F2 scaling, would present a weak but steady trend linked to these variations, which would increase or compensate the few kilometers of decrease (~2 km per decade) expected from the greenhouse gas effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jian; Bearden, Mark D.; Fernandez, Carlos A.
Magnesium (Mg) has many useful applications, especially in various Mg alloys, which can decrease weight while increasing strength. To increase affordability and minimize environmental consequences, a novel catalyzed organo-metathetical (COMET) process was proposed to extract Mg from seawater, aiming to achieve a significant reduction in total energy and production cost compared with the molten salt electrolysis method currently adopted by US Mg LLC. A process flowsheet for a reference COMET process was set up using Aspen Plus, which included five key steps: anhydrous MgCl2 production, transmetallation, dibutyl Mg decomposition, n-BuLi regeneration, and LiCl electrolysis. The energy and production costs and CO2 emissions were estimated based on the Aspen modeling using the Aspen economic analyzer. Our results showed that it is possible to produce Mg from seawater at a production cost of $2.0/kg-Mg while consuming about 35.3 kWh/kg-Mg and releasing 7.0 kg CO2/kg-Mg. A simplified US Mg manufacturing process was also generated using Aspen, and the cost and emission results were estimated for comparison purposes. Under our simulation conditions, the reference COMET process maintains a comparable CO2 emission rate and can save about 40% in production cost and about 15% in energy compared to the simplified US Mg process.
Image segmentation using hidden Markov Gauss mixture models.
Pyun, Kyungsuk; Lim, Johan; Won, Chee Sun; Gray, Robert M
2007-07-01
Image segmentation is an important tool in image processing and can serve as an efficient front end to sophisticated algorithms and thereby simplify subsequent processing. We develop a multiclass image segmentation method using hidden Markov Gauss mixture models (HMGMMs) and provide examples of segmentation of aerial images and textures. HMGMMs incorporate supervised learning, fitting the observation probability distribution given each class by a Gauss mixture estimated using vector quantization with a minimum discrimination information (MDI) distortion. We formulate the image segmentation problem using a maximum a posteriori criterion and find the hidden states that maximize the posterior density given the observation. We estimate both the hidden Markov parameters and hidden states using a stochastic expectation-maximization algorithm. Our results demonstrate that HMGMM provides better classification in terms of Bayes risk and spatial homogeneity of the classified objects than do several popular methods, including classification and regression trees, learning vector quantization, causal hidden Markov models (HMMs), and multiresolution HMMs. The computational load of HMGMM is similar to that of the causal HMM.
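A simplified sketch of the supervised Gauss-mixture classification step, ignoring the hidden-Markov spatial coupling and the MDI-distortion vector quantization: fit one mixture per class on labeled feature vectors, then assign each pixel feature to the class with the highest likelihood. Data and feature dimensions are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
train = {0: rng.normal(0.0, 1.0, (500, 3)),      # placeholder per-class features
         1: rng.normal(2.5, 1.2, (500, 3))}

models = {c: GaussianMixture(n_components=4, random_state=0).fit(X)
          for c, X in train.items()}

pixels = rng.normal(1.2, 1.5, (1000, 3))         # features of pixels to segment
loglik = np.column_stack([models[c].score_samples(pixels) for c in sorted(models)])
labels = np.argmax(loglik, axis=1)               # per-pixel ML (uniform-prior MAP)
```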
Real-time realizations of the Bayesian Infrasonic Source Localization Method
NASA Astrophysics Data System (ADS)
Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.
2015-12-01
The Bayesian Infrasonic Source Localization method (BISL), introduced by Mordak et al. (2010) and upgraded by Marcillo et al. (2014), is intended for the accurate estimation of the atmospheric event origin at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge about the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions multiplied by a prior probability density function of celerity over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, executed as PYTHON-FORTRAN code, demonstrates high performance on a set of model and real data.
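To illustrate the kind of closed-form reduction described (the paper's actual target function is more involved), the product of two Gaussian likelihoods integrates exactly to a standard Gaussian, so numerical quadrature can be replaced by a formula; all values here are invented.

```python
import numpy as np
from scipy.stats import norm

# Two Gaussian likelihoods in celerity (illustrative station terms)
mu1, s1 = 0.30, 0.02   # km/s
mu2, s2 = 0.33, 0.03

# Numerical integration of the product over celerity ...
c = np.linspace(0.1, 0.6, 20001)
num = np.trapz(norm.pdf(c, mu1, s1) * norm.pdf(c, mu2, s2), c)

# ... versus the exact closed form: a Gaussian in (mu1 - mu2)
# with variance s1**2 + s2**2
exact = norm.pdf(mu1 - mu2, 0.0, np.hypot(s1, s2))
print(num, exact)   # agree to numerical precision
```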
High-throughput sample adaptive offset hardware architecture for high-efficiency video coding
NASA Astrophysics Data System (ADS)
Zhou, Wei; Yan, Chang; Zhang, Jingzhi; Zhou, Xin
2018-03-01
A high-throughput hardware architecture for a sample adaptive offset (SAO) filter in the High Efficiency Video Coding (HEVC) standard is presented. First, an implementation-friendly and simplified bitrate estimation method for the rate-distortion cost calculation is proposed to reduce the computational complexity in the SAO mode decision. Then, a high-throughput VLSI architecture for SAO is presented based on the proposed bitrate estimation method. Furthermore, a multiparallel VLSI architecture for in-loop filters, which integrates both the deblocking filter and the SAO filter, is proposed. Six parallel strategies are applied in the proposed in-loop filter architecture to improve the system throughput and filtering speed. Experimental results show that the proposed in-loop filter architecture can achieve up to 48% higher throughput in comparison with prior work. The proposed architecture can reach a high operating clock frequency of 297 MHz with a TSMC 65-nm library and meets the real-time requirement of the in-loop filters for the 8K×4K video format at 132 fps.
A Population Pharmacokinetic Model for 51Cr EDTA to Estimate Renal Function.
Kuan, Isabelle H S; Duffull, Stephen B; Putt, Tracey L; Schollum, John B W; Walker, Robert J; Wright, Daniel F B
2017-06-01
51Cr EDTA clearance (CL) from plasma is used to estimate glomerular filtration rate (GFR). We propose that current methods for analysing the raw 51Cr EDTA measurements over-simplify the disposition of 51Cr EDTA and therefore could produce biased GFR estimates. The aim of this study was to develop a population pharmacokinetic model for 51Cr EDTA disposition and to compare model-predicted GFR to other methods of estimating renal function. Data from 40 individuals who received ~7.4 MBq of 51Cr EDTA, as an intravenous bolus, were available for analysis. Plasma radioactivity (counts/min) was measured from timed collection points at 2, 4, 6 and 24 h after the dose. A population analysis was conducted using NONMEM® version 7.2. Model-predicted GFR was compared with other methods for estimating renal function using the mean prediction error (MPE). A two-compartment pharmacokinetic model with first-order elimination best fit the data. Compared with the model predictions, creatinine CL from 24-h urine data was unbiased. The commonly used 'slope-intercept' method for estimating isotopic GFR was positively biased compared with the model (MPE 15.5 mL/min/1.73 m² [95% confidence interval {CI} 8.9-22.2]). The Cockcroft-Gault, Modification of Diet in Renal Disease (MDRD) and Chronic Kidney Disease Epidemiology Collaboration (CKD-Epi) equations led to negatively biased GFR estimates (MPE -19.0 [95% CI -25.4 to -12.7], -20.1 [95% CI -27.2 to -13.1] and -16.5 [95% CI -22.2 to -10.1] mL/min/1.73 m², respectively). The biased GFR estimates were most obvious in patients with relatively normal renal function. This may lead to inaccurate dosing in patients who are receiving drugs with a narrow therapeutic range where dosing is adjusted according to GFR estimates (e.g. carboplatin). The study is registered with the Australian New Zealand Clinical Trials Registry (ANZCTR), number ACTRN 12611000035921.
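For context, a sketch of the conventional one-compartment 'slope-intercept' calculation the model was compared against: fit a mono-exponential to the late plasma samples, take CL = dose·k/C0 (i.e. dose/AUC with AUC = C0/k), and normalize to 1.73 m²; all sample values and units are invented.

```python
import numpy as np

t = np.array([2.0, 4.0, 6.0])          # sampling times, h post-dose (invented)
c = np.array([120.0, 80.0, 53.0])      # plasma activity per mL (invented units)
dose = 1.0e6                           # injected activity, same units as c times mL

slope, lnC0 = np.polyfit(t, np.log(c), 1)
k = -slope                             # elimination rate constant, 1/h
C0 = np.exp(lnC0)                      # back-extrapolated intercept
cl_ml_per_h = dose * k / C0            # CL = dose/AUC, with AUC = C0/k
gfr = cl_ml_per_h / 60.0               # mL/min
bsa = 1.9                              # body surface area, m^2 (invented)
print(gfr * 1.73 / bsa)                # BSA-normalized GFR, mL/min/1.73 m^2
```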
Lautrette, Alexandre; Phan, Thuy-Nga; Ouchchane, Lemlih; Aithssain, Ali; Tixier, Vincent; Heng, Anne-Elisabeth; Souweine, Bertrand
2012-09-27
A high dose of anti-infective agents is recommended when treating infectious meningitis. High creatinine clearance (CrCl) may affect the pharmacokinetic/pharmacodynamic relationships of anti-infective drugs eliminated by the kidneys. We recorded the incidence of high CrCl in intensive care unit (ICU) patients admitted with meningitis and assessed the diagnostic accuracy of two common methods used to identify high CrCl. This was an observational study of consecutive patients admitted with community-acquired acute infectious meningitis (defined by >7 white blood cells/mm3 in cerebrospinal fluid) between January 2006 and December 2009 to one medical ICU. During the first 7 days following ICU admission, CrCl was measured from 24-hr urine samples (24-hr-UV/P creatinine) and estimated according to the Cockcroft-Gault formula and the simplified Modification of Diet in Renal Disease (MDRD) equation. High CrCl was defined as CrCl >140 ml/min/1.73 m2 by 24-hr-UV/P creatinine. Diagnostic accuracy was assessed with ROC curve analysis. Thirty-two patients were included. High CrCl was present in 8 patients (25%) on ICU admission and in 15 patients (47%) during the first 7 ICU days, for a median duration of 3 (1-4) days. For the Cockcroft-Gault formula, the best threshold to predict high CrCl was 101 ml/min/1.73 m2 (sensitivity: 0.96, specificity: 0.75, AUC = 0.90 ± 0.03) with a negative likelihood ratio of 0.06. For the simplified MDRD equation, the best threshold to predict high CrCl was 108 ml/min/1.73 m2 (sensitivity: 0.91, specificity: 0.80, AUC = 0.88 ± 0.03) with a negative likelihood ratio of 0.11. There was no difference between the estimation methods in the diagnostic accuracy of identifying high CrCl (p = 0.30). High CrCl is frequently observed in ICU patients admitted with community-acquired acute infectious meningitis. The estimation methods of CrCl could be used as a screening tool to identify high CrCl.
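The clearance measures compared above, in their commonly used forms (the MDRD equation is shown with the original 186 coefficient, i.e. the non-IDMS-traceable version); the patient values in the example call are invented.

```python
def crcl_24h_urine(u_cr, urine_vol_ml, p_cr, minutes=1440, bsa=1.73):
    """Measured CrCl (mL/min/1.73 m^2) from a 24-h urine collection.
    u_cr and p_cr must share units (e.g. mg/dL)."""
    return (u_cr * urine_vol_ml / minutes) / p_cr * (1.73 / bsa)

def cockcroft_gault(age, weight_kg, scr_mg_dl, female=False):
    """Estimated CrCl (mL/min)."""
    cl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return 0.85 * cl if female else cl

def mdrd_simplified(scr_mg_dl, age, female=False, black=False):
    """Estimated GFR (mL/min/1.73 m^2), 4-variable MDRD."""
    g = 186.0 * scr_mg_dl**-1.154 * age**-0.203
    if female:
        g *= 0.742
    if black:
        g *= 1.212
    return g

print(cockcroft_gault(40, 70, 0.9), mdrd_simplified(0.9, 40))  # invented patient
```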
Simplifying the complexity of resistance heterogeneity in metastasis
Lavi, Orit; Greene, James M.; Levy, Doron; Gottesman, Michael M.
2014-01-01
The main goal of treatment regimens for metastasis is to control growth rates, not eradicate all cancer cells. Mathematical models offer methodologies that incorporate high-throughput data with dynamic effects on net growth. The ideal approach would simplify, but not over-simplify, a complex problem into meaningful and manageable estimators that predict a patient’s response to specific treatments. Here, we explore three fundamental approaches with different assumptions concerning resistance mechanisms, in which the cells are categorized into either discrete compartments or described by a continuous range of resistance levels. We argue in favor of modeling resistance as a continuum and demonstrate how integrating cellular growth rates, density-dependent versus exponential growth, and intratumoral heterogeneity improves predictions concerning the resistance heterogeneity of metastases. PMID:24491979
Analysis of high-aspect-ratio jet-flap wings of arbitrary geometry
NASA Technical Reports Server (NTRS)
Lissaman, P. B. S.
1973-01-01
An analytical technique to compute the performance of an arbitrary jet-flapped wing is developed. The solution technique is based on the method of Maskell and Spence in which the well-known lifting-line approach is coupled with an auxiliary equation providing the extra function needed in jet-flap theory. The present method is generalized to handle straight, uncambered wings of arbitrary planform, twist, and blowing (including unsymmetrical cases). An analytical procedure is developed for continuous variations in the above geometric data with special functions to exactly treat discontinuities in any of the geometric and blowing data. A rational theory for the effect of finite wing thickness is introduced as well as simplified concepts of effective aspect ratio for rapid estimation of performance.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be considered based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed.
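A minimal sketch of the stress-accumulation post-processing options mentioned (single versus repeated passages); the accumulation rule and all sampled values are illustrative, not the study's damage model.

```python
import numpy as np

def stress_accumulation(tau, dt, power=1.0):
    """Accumulated stress along one pathline: sum(tau_i**power * dt_i).
    power=1 is plain linear accumulation; power-law damage models differ."""
    return np.sum(tau**power * dt)

rng = np.random.default_rng(5)
tau = rng.gamma(2.0, 5.0, 200)      # scalar stress samples along a pathline, Pa
dt = np.full(200, 1e-4)             # Lagrangian time steps, s

single = stress_accumulation(tau, dt)                            # one passage
repeated = sum(stress_accumulation(tau, dt) for _ in range(5))   # 5 passes
print(single, repeated)
```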
Effect of an overhead shield on gamma-ray skyshine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stedry, M.H.; Shultis, J.K.; Faw, R.E.
1996-06-01
A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.
Effects of shock on hypersonic boundary layer stability
NASA Astrophysics Data System (ADS)
Pinna, F.; Rambaud, P.
2013-06-01
The design of hypersonic vehicles requires the estimate of the laminar to turbulent transition location for an accurate sizing of the thermal protection system. Linear stability theory is a fast scientific way to study the problem. Recent improvements in computational capabilities allow computing the flow around a full vehicle instead of using only simplified boundary layer equations. In this paper, the effect of the shock is studied on a mean flow provided by steady Computational Fluid Dynamics (CFD) computations and simplified boundary layer calculations.
Sparse and Adaptive Diffusion Dictionary (SADD) for recovering intra-voxel white matter structure.
Aranda, Ramon; Ramirez-Manzanares, Alonso; Rivera, Mariano
2015-12-01
In the analysis of diffusion-weighted magnetic resonance images, multi-compartment models overcome the limitations of the well-known diffusion tensor model for fitting in vivo brain axonal orientations at voxels with fiber crossings, branching, kissing or bifurcations. Some successful multi-compartment methods are based on diffusion dictionaries. The diffusion dictionary-based methods assume that the observed magnetic resonance signal at each voxel is a linear combination of the fixed dictionary elements (dictionary atoms). The atoms are fixed along different orientations and diffusivity profiles. In this work, we present a sparse and adaptive diffusion dictionary method based on the Diffusion Basis Functions model to estimate in vivo brain axonal fiber populations. Our proposal overcomes the following limitations of diffusion dictionary-based methods: the limited angular resolution and the fixed shapes of the atom set. We propose to iteratively re-estimate the orientations and the diffusivity profile of the atoms independently at each voxel by using a simplified and easier-to-solve mathematical approach. As a result, we improve the fitting of the diffusion-weighted magnetic resonance signal. The advantages with respect to the former Diffusion Basis Functions method are demonstrated on the synthetic data set used in the 2012 HARDI Reconstruction Challenge and on in vivo human data. We demonstrate that the improvements obtained in the intra-voxel fiber structure estimations benefit brain research by allowing better tractography estimations. Hence, these improvements result in an accurate computation of brain connectivity patterns.
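A minimal sketch of the dictionary-fitting core that adaptive methods of this kind start from: model the DW-MR signal per voxel as a nonnegative combination of fixed atoms and solve with nonnegative least squares. The dictionary and signal here are random placeholders, and the paper's per-voxel re-estimation of atom orientation and diffusivity is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_grad, n_atoms = 64, 90                           # gradients, dictionary size
Phi = np.abs(rng.normal(size=(n_grad, n_atoms)))   # placeholder diffusion dictionary
w_true = np.zeros(n_atoms)
w_true[[10, 47]] = [0.6, 0.4]                      # two "fiber populations"
signal = Phi @ w_true + 0.01 * rng.normal(size=n_grad)

w, residual = nnls(Phi, signal)                    # nonnegative weights per voxel
support = np.flatnonzero(w > 0.05)                 # recovered atom indices
print(support, residual)
```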
Simplified limits on resonances at the LHC
NASA Astrophysics Data System (ADS)
Chivukula, R. Sekhar; Ittisamai, Pawin; Mohan, Kirtimaan; Simmons, Elizabeth H.
2016-11-01
In the earliest stages of evaluating new collider data, especially if a small excess may be present, it would be useful to have a method for comparing the data with entire classes of models, to get an immediate sense of which classes could conceivably be relevant. In this paper, we propose a method that applies when the new physics invoked to explain the excess corresponds to the production and decay of a single, relatively narrow, s-channel resonance. A simplified model of the resonance allows us to convert an estimated signal cross section into general bounds on the product of the branching ratios corresponding to the dominant production and decay modes. This quickly reveals whether a given class of models could possibly produce a signal of the required size at the LHC. Our work sets up a general framework, outlines how it operates for resonances with different numbers of production and decay modes, and analyzes cases of current experimental interest, including resonances decaying to dibosons, diphotons, dileptons, or dijets. If the LHC experiments were to report their searches for new resonances beyond the standard model in terms of the simplified limits variable ζ defined in this paper, it would be far easier to avoid blind alleys and home in on the most likely candidate models to explain any observed excesses.
Infrared Thermography Sensor for Temperature and Speed Measurement of Moving Material.
Usamentiaga, Rubén; García, Daniel Fernando
2017-05-18
Infrared thermography offers significant advantages in monitoring the temperature of objects over time, but crucial aspects need to be addressed. Movements between the infrared camera and the inspected material seriously affect the accuracy of the calculated temperature. These movements can be the consequence of solid objects that are moved, molten metal poured, material on a conveyor belt, or just vibrations. This work proposes a solution for monitoring the temperature of material in these scenarios. In this work both real movements and vibrations are treated equally, proposing a unified solution for both problems. The three key steps of the proposed procedure are image rectification, motion estimation and motion compensation. Image rectification calculates a front-parallel projection of the image that simplifies the estimation and compensation of the movement. Motion estimation describes the movement using a mathematical model, and estimates the coefficients using robust methods adapted to infrared images. Motion is finally compensated for in order to produce the correct temperature time history of the monitored material regardless of the movement. The result is a robust sensor for temperature of moving material that can also be used to measure the speed of the material. Different experiments are carried out to validate the proposed method in laboratory and real environments. Results show excellent performance.
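A minimal OpenCV sketch of the estimate-then-compensate idea for consecutive (8-bit) thermal frames, using a RANSAC-fitted partial affine model as a stand-in for the paper's robust motion model; the rectification (front-parallel projection) step is omitted.

```python
import cv2

def compensate(prev_gray, cur_gray):
    """prev_gray, cur_gray: 8-bit single-channel frames. Returns cur aligned to prev."""
    # Track sparse features, then fit a robust rotation+scale+translation model
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                   qualityLevel=0.01, minDistance=7)
    pts1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts0, None)
    good0, good1 = pts0[st.ravel() == 1], pts1[st.ravel() == 1]
    M, _ = cv2.estimateAffinePartial2D(good1, good0, method=cv2.RANSAC)
    # Warp the current frame back onto the reference geometry
    h, w = prev_gray.shape
    return cv2.warpAffine(cur_gray, M, (w, h))
```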
NASA Astrophysics Data System (ADS)
Penot, David; Paquet, Emmanuel; Lang, Michel
2014-05-01
SCHADEX is a probabilistic method for extreme flood estimation, developed and applied since 2006 at Electricité de France (EDF) for dam spillway design [Paquet et al., 2013]. SCHADEX is based on a semi-continuous rainfall-runoff simulation process. The method has been built around two models: a Multi-Exponential Weather Pattern (MEWP) distribution for rainfall probability estimation [Garavaglia et al., 2010] and the MORDOR hydrological model. To use SCHADEX in an ungauged context, the rainfall distribution and the hydrological model must be regionalized. The regionalization of the MEWP rainfall distribution can be managed with SPAZM, a daily rainfall interpolator [Gottardi et al., 2012] which provides reasonable estimates of point and areal rainfall up to high quantiles. The main issue remains to regionalize MORDOR, which is heavily parametrized. A much simpler model has been considered: the SCS model. It is a well-known model for event simulation [USDA SCS, 1985; Beven, 2003] and it relies on only one parameter. The idea, then, is to use the SCS model instead of MORDOR within a simplified stochastic simulation scheme to produce a distribution of flood volume from an exhaustive crossing between rainy events and catchment saturation hazards. The presentation details this process and its capacity to generate a runoff distribution based on the catchment areal rainfall distribution. The simulation method depends on a unique parameter Smax, the maximum initial loss of the catchment. An initial loss S (between zero and Smax) can then be drawn to account for the variability of the catchment state (between dry and saturated). The distribution of initial loss (or conversely, of catchment saturation, as modeled by MORDOR) seems closely linked to the catchment's regime, and is therefore easy to regionalize. The simulation takes into account a snow contribution for snow-driven catchments, and an antecedent runoff. The presentation shows the results of this stochastic procedure applied to 80 French catchments and its capacity to represent the asymptotic behaviour of the runoff distribution. References: K. J. Beven. Rainfall-Runoff Modelling: The Primer, British Library, 2003. F. Garavaglia, J. Gailhard, E. Paquet, M. Lang, R. Garçon, and P. Bernardara. Introducing a rainfall compound distribution model based on weather patterns sub-sampling. Hydrology and Earth System Sciences, 14(6):951-964, 2010. F. Gottardi, C. Obled, J. Gailhard, and E. Paquet. Statistical reanalysis of precipitation fields based on ground network data and weather patterns: application over French mountains. Journal of Hydrology, 432-433:154-167, 2012. ISSN 0022-1694. E. Paquet, F. Garavaglia, R. Garçon, and J. Gailhard. The SCHADEX method: a semi-continuous rainfall-runoff simulation for extreme flood estimation. Journal of Hydrology, 2013. USDA SCS, National Engineering Handbook, Supplement A, Section 4, Chapter 10. Washington DC, 1985.
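A minimal sketch of the stochastic crossing described, with the SCS model in its textbook curve-number form (Q = (P − Ia)²/(P − Ia + S), Ia = 0.2·S); the rainfall distribution, the Smax value, and the uniform draw for catchment state are illustrative stand-ins.

```python
import numpy as np

def scs_runoff(P, S, ia_ratio=0.2):
    """SCS runoff depth Q (mm) for event rainfall P (mm) and retention S (mm)."""
    Ia = ia_ratio * S
    return np.where(P > Ia, (P - Ia)**2 / (P - Ia + S), 0.0)

rng = np.random.default_rng(3)
Smax = 120.0                          # max initial loss of the catchment, mm
P = rng.gamma(2.0, 25.0, 100000)      # synthetic areal event rainfall, mm
S = rng.uniform(0.0, Smax, P.size)    # draw for the catchment saturation hazard
Q = scs_runoff(P, S)                  # runoff sample for frequency analysis
print(np.percentile(Q, [50, 99, 99.9]))
```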
Image segmentation algorithm based on improved PCNN
NASA Astrophysics Data System (ADS)
Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui
2017-11-01
A modified simplified Pulse Coupled Neural Network (PCNN) model is proposed in this article based on the simplified PCNN. Some work has been done to enrich this model, such as imposing restrictions on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive setting method for the linking coefficient and the threshold decay time constant is also proposed. Finally, we realize an image segmentation algorithm for five pictures based on this proposed simplified PCNN model and PSO. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
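A minimal iteration of a simplified PCNN of the kind described (SPCNN-style firing with an exponentially decaying dynamic threshold); all parameters are illustrative, and the paper's self-adaptive parameter setting and PSO step are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(S, beta=0.3, alpha_e=0.7, v_e=20.0, n_iter=8):
    """S: image normalized to [0, 1]. Returns the iteration at which each pixel fires."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    Y = np.zeros_like(S)
    E = np.ones_like(S)                      # dynamic threshold
    fire_iter = np.zeros(S.shape, dtype=int)
    for n in range(1, n_iter + 1):
        L = convolve(Y, W, mode='constant')  # linking input from neighbors
        U = S * (1.0 + beta * L)             # internal activity
        Y = (U > E).astype(float)            # pulse output
        E = np.exp(-alpha_e) * E + v_e * Y   # threshold decay + refractory boost
        fire_iter[(Y > 0) & (fire_iter == 0)] = n
    return fire_iter                         # pixels grouped by firing epoch
```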
García, Mónica; Villagarcía, Luis; Contreras, Sergio; Domingo, Francisco; Puigdefábregas, Juan
2007-01-01
Three operative models with minimum input data requirements for estimating the partition of available surface energy into sensible and latent heat flux using ASTER data have been evaluated in a semiarid area in SE Spain. The non-evaporative fraction (NEF) is proposed as an indicator of the surface water deficit. The best results were achieved with NEF estimated using the “Simplified relationship” for unstable conditions (NEFSeguin) and with the S-SEBI (Simplified Surface Energy Balance Index) model corrected for atmospheric conditions (NEFS-SEBIt), which both produced equivalent results. However, results with a third model, NEFCarlson, which estimates the exchange coefficient for sensible heat transfer from NDVI, were unrealistic for sites with scarce vegetation cover. These results are very promising for operative monitoring of the surface water deficit, as validation with field data shows reasonable errors, within those reported in the literature (RMSE were 0.18 and 0.11 for the NEF, and 29.12 W m-2 and 25.97 W m-2 for sensible heat flux, with the Seguin and S-SEBIt models, respectively).
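A minimal sketch of an S-SEBI-style fraction from a surface-temperature image: NEF is the position of each pixel's temperature between 'wet' and 'dry' envelope edges. Taking the edges as scene percentiles is an illustrative shortcut for the reflectance-dependent edges used in practice.

```python
import numpy as np

def nef_ssebi(Ts):
    """Non-evaporative fraction from a land-surface-temperature array (K)."""
    t_wet, t_dry = np.percentile(Ts, [2, 98])   # illustrative envelope edges
    return np.clip((Ts - t_wet) / (t_dry - t_wet), 0.0, 1.0)

# Sensible heat flux then follows as H = NEF * (Rn - G), the available energy
```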
Sensor-less pseudo-sinusoidal drive for a permanent-magnet brushless ac motor
NASA Astrophysics Data System (ADS)
Liu, Li-Hsiang; Chern, Tzuen-Lih; Pan, Ping-Lung; Huang, Tsung-Mou; Tsay, Der-Min; Kuang, Jao-Hwa
2012-04-01
Precise rotor-position information is required for a permanent-magnet brushless ac motor (BLACM) drive. In the conventional sinusoidal drive method, either an encoder or a resolver is usually employed. For position sensor-less vector control schemes, the rotor flux estimate and torque components are obtained by complicated coordinate transformations. These computationally intensive methods are susceptible to current distortions and parameter variations. To reduce this complexity, this work presents a sensor-less pseudo-sinusoidal drive scheme with speed control for a three-phase BLACM. Based on the sinusoidal drive scheme, a floating period of each phase current is inserted for back electromotive force detection. The zero-crossing point is determined directly by the proposed scheme, and the rotor magnetic position and rotor speed can be estimated simultaneously. Several experiments for various active angle periods are undertaken. Furthermore, current feedback control is included to minimize and compensate the torque fluctuation. The experimental results show that the proposed method has competitive performance compared with conventional drive methods for BLACMs. The proposed scheme is straightforward, bringing the benefits of sensor-less drive and negating the need for coordinate transformations in the operating process.
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.
An improved loopless mounting method for cryocrystallography
NASA Astrophysics Data System (ADS)
Qi, Jian-Xun; Jiang, Fan
2010-01-01
Based on a recent loopless mounting method, a simplified loopless and bufferless crystal mounting method is developed for macromolecular crystallography. This simplified crystal mounting system is composed of the following components: a home-made glass capillary, a brass seat for holding the glass capillary, a flow regulator, and a vacuum pump for evacuation. Compared with the currently prevalent loop mounting method, this simplified method has almost the same mounting procedure and thus is compatible with current automated crystal mounting systems. The advantages of this method include a higher signal-to-noise ratio, more accurate measurement, more rapid flash cooling, and less x-ray absorption and thus less radiation damage to the crystal. This method can be extended to the flash-freezing of a crystal with or without soaking it in a lower concentration of cryoprotectant, so it may be the best option for data collection in the absence of a suitable cryoprotectant. It is therefore suggested that this mounting method be further improved and extensively applied to cryocrystallographic experiments.
Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.
Xie, Xianming
2016-08-22
A novel phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust methods in the Bayesian framework for non-linear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even remove, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator is used to efficiently and accurately obtain the phase gradient information from the interferometric fringes, which is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps pixels along a path from high-quality to low-quality areas of the wrapped phase image, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions, with acceptable time consumption, than some of the most widely used algorithms.
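A minimal sketch of the heap-based quality-guided flood fill described, with a plain neighbor-difference unwrap standing in for the Kalman-filter state update: pixels are popped in decreasing quality order and unwrapped relative to an already-solved neighbor.

```python
import heapq
import numpy as np

def quality_guided_unwrap(psi, quality):
    """psi: wrapped phase (rad); quality: higher = more reliable."""
    H, W = psi.shape
    phi = psi.copy()
    done = np.zeros((H, W), dtype=bool)
    heap = []

    def push_neighbors(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < H and 0 <= b < W and not done[a, b]:
                heapq.heappush(heap, (-quality[a, b], a, b, i, j))

    seed = np.unravel_index(np.argmax(quality), psi.shape)
    done[seed] = True
    push_neighbors(*seed)
    while heap:
        _, a, b, i, j = heapq.heappop(heap)     # best-quality frontier pixel
        if done[a, b]:
            continue
        d = psi[a, b] - psi[i, j]               # remove 2*pi jumps vs. solved neighbor
        phi[a, b] = phi[i, j] + d - 2 * np.pi * np.round(d / (2 * np.pi))
        done[a, b] = True
        push_neighbors(a, b)
    return phi
```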
Research on simplified parametric finite element model of automobile frontal crash
NASA Astrophysics Data System (ADS)
Wu, Linan; Zhang, Xin; Yang, Changhai
2018-05-01
The modeling method and key technologies of a simplified parametric finite element model for automobile frontal crash are studied in this paper. By establishing the auto-body topological structure, extracting and parameterizing the stiffness properties of the substructures, and choosing appropriate material models for the substructures, a simplified parametric FE model of the M6 car is built. Comparison of the results indicates that the simplified parametric FE model accurately calculates the automobile crash responses and the deformation of the key substructures, while the simulation time is reduced from 6 hours to 2 minutes.
NASA Astrophysics Data System (ADS)
Crevoisier, David; Chanzy, André; Voltz, Marc
2009-06-01
Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using the Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins; 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results on a more extensive set of problems, including those that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of the hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solutes transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of the numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with the modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988;3:1-15] proves in every tested scenario to be more robust numerically, and the computing time/accuracy trade-off is particularly improved on coarse grids. The Ross method runs from 1.25 to 14 times faster than Hydrus-1D.
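For reference, the Brooks and Corey hydraulic functions the Ross method was originally formulated with, in their usual Burdine form; all parameter values are illustrative.

```python
import numpy as np

def brooks_corey(h, h_b=-0.2, lam=0.3, theta_r=0.05, theta_s=0.45, Ks=1e-5):
    """Retention and conductivity for matric head h (m, negative unsaturated)."""
    h = np.asarray(h, dtype=float)
    Se = np.where(h < h_b, (h_b / h)**lam, 1.0)    # effective saturation
    theta = theta_r + Se * (theta_s - theta_r)     # water content
    K = Ks * Se**(3.0 + 2.0 / lam)                 # Burdine-type conductivity
    return theta, K

theta, K = brooks_corey(np.array([-0.1, -1.0, -10.0]))
print(theta, K)
```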
Lee, Byung Moo
2017-12-29
Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of massive MIMO systems is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS-overhead-reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) using zero-forcing (ZF) and matched filtering (MF) precoding for RS-overhead-reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the channel estimation method. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and it can thus be used as an important tool for massive MIMO systems to support many distributed IoT devices.
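To make the inverse-function idea concrete, a hedged sketch with a generic textbook-style ZF spectral-efficiency approximation carrying a channel-error factor (not the paper's exact closed form): invert the SE expression numerically to get the smallest per-group antenna count meeting a target.

```python
import numpy as np
from scipy.optimize import brentq

def se_zf(M, K, snr, err=0.1):
    # Illustrative ZF approximation: per-user SINR ~ snr*(1-err)*(M-K)/K;
    # substitute the paper's closed form in practice.
    return K * np.log2(1.0 + snr * (1.0 - err) * (M - K) / K)

def antennas_needed(target_se, K, snr, err=0.1):
    """Smallest antenna count per group achieving target_se (bits/s/Hz)."""
    f = lambda M: se_zf(M, K, snr, err) - target_se
    return int(np.ceil(brentq(f, K + 1e-9, 4096)))

print(antennas_needed(40.0, K=8, snr=10.0))   # illustrative operating point
```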
DOE Office of Scientific and Technical Information (OSTI.GOV)
González-López, Antonio, E-mail: antonio.gonzalez7@carm.es; Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen
Purpose: This note studies the statistical relationships between color channels in radiochromic film readings with flatbed scanners. The same relationships are studied for noise. Finally, their implications for multichannel film dosimetry are discussed. Methods: Radiochromic films exposed to wedged fields of 6 MV energy were read in a flatbed scanner. The joint histograms of pairs of color channels were used to obtain the joint and conditional probability density functions between channels. Then, the conditional expectations and variances of one channel given another channel were obtained. Noise was extracted from film readings by means of a multiresolution analysis. Two different dose ranges were analyzed, the first one ranging from 112 to 473 cGy and the second one from 52 to 1290 cGy. Results: For the smallest dose range, the conditional expectations of one channel given another channel can be approximated by linear functions, while the conditional variances are fairly constant. The slopes of the linear relationships between channels can be used to simplify the expression that estimates the dose by means of the multichannel method. The slopes of the linear relationships between each channel and the red one can also be interpreted as weights in the final contribution to dose estimation. However, for the largest dose range, the conditional expectations of one channel given another channel are no longer linear functions. Finally, noises in different channels were found to correlate weakly. Conclusions: Signals present in different channels of radiochromic film readings show a strong statistical dependence. By contrast, noise correlates weakly between channels. For the smallest dose range analyzed, the linear behavior of the conditional expectation of one channel given another channel can be used to simplify calculations in multichannel film dosimetry.
Bharate, Sonali S; Vishwakarma, Ram A
2015-04-01
An early prediction of solubility in physiological media (PBS, SGF and SIF) is useful for qualitatively predicting the bioavailability and absorption of lead candidates. Despite the availability of multiple solubility estimation methods, none of the reported methods involves a simplified fixed protocol for diverse sets of compounds. Therefore, a simple and medium-throughput solubility estimation protocol is highly desirable during the lead optimization stage. The present work introduces a rapid method for assessing the thermodynamic equilibrium solubility of compounds in aqueous media using a 96-well microplate. The developed protocol is straightforward to set up and takes advantage of the sensitivity of UV spectroscopy. The compound, in stock solution in methanol, is introduced in microgram quantities into microplate wells, followed by drying at ambient temperature. Microplates were shaken upon addition of the test media, and the supernatant was analyzed by a UV method. A plot of absorbance versus concentration of a sample provides the saturation point, which is the thermodynamic equilibrium solubility of the sample. The established protocol was validated against a large panel of commercially available drugs and against the conventional miniaturized shake-flask method (r² > 0.84). Additionally, statistically significant QSPR models were established using the experimental solubility values of 52 compounds.
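A minimal sketch of reading the saturation point off the absorbance-versus-concentration plot: below solubility the plot is linear (Beer-Lambert), above it absorbance plateaus, and a two-segment fit locates the breakpoint; the data are synthetic.

```python
import numpy as np

conc = np.linspace(5, 200, 20)                 # ug/mL added to wells (synthetic)
absb = 0.004 * np.minimum(conc, 90.0) + np.random.normal(0, 0.002, conc.size)

def breakpoint_solubility(conc, absb):
    """Grid-search a piecewise linear/flat fit; return the best breakpoint."""
    best_sse, best_bp = np.inf, None
    for bp in conc[2:-2]:
        low = conc <= bp
        p = np.polyfit(conc[low], absb[low], 1)    # Beer-Lambert branch
        plateau = absb[~low].mean()                # saturated branch
        sse = (np.sum((np.polyval(p, conc[low]) - absb[low])**2)
               + np.sum((absb[~low] - plateau)**2))
        if sse < best_sse:
            best_sse, best_bp = sse, bp
    return best_bp

print(breakpoint_solubility(conc, absb))       # ~90 ug/mL here
```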
Automated ambiguity estimation for VLBI Intensive sessions using L1-norm
NASA Astrophysics Data System (ADS)
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a space-geodetic technique that is uniquely capable of direct observation of the angle of the Earth's rotation about the Celestial Intermediate Pole (CIP) axis, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) provided by the 1-h long VLBI Intensive sessions are essential in providing timely UT1 estimates for satellite navigation systems and orbit determination. In order to produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This involves the automatic processing of X- and S-band group delays. These data contain an unknown number of integer ambiguities in the observed group delays. They are introduced as a side-effect of the bandwidth synthesis technique, which is used to combine correlator results from the narrow channels that span the individual bands. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimisation). We implement L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions on the Kokee-Wettzell baseline. The results are compared to an analysis set-up where the ambiguity estimation is computed using the L2-norm. For both methods three different weighting strategies for the ambiguity estimation are assessed. The results show that the L1-norm is better at automatically resolving the ambiguities than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies. The increase in the number of sessions is approximately 5% for each weighting strategy. This is accompanied by smaller post-fit residuals in the final UT1-UTC estimation step.
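A minimal sketch of L1-norm parameter estimation via iteratively reweighted least squares, one common way to implement the L1 minimisation step (the actual c5++ implementation may differ); the design matrix and the ambiguity-like outliers are invented.

```python
import numpy as np

def l1_fit(A, y, n_iter=50, eps=1e-8):
    """Minimize ||A x - y||_1 by iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # L2 start
    for _ in range(n_iter):
        sw = np.sqrt(1.0 / np.maximum(np.abs(y - A @ x), eps))
        x = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
    return x

# L1 is robust to the jump-like outliers that bias an L2 fit
rng = np.random.default_rng(4)
A = np.column_stack([np.ones(40), np.linspace(0.0, 1.0, 40)])
y = A @ np.array([1.0, 2.0]) + 0.01 * rng.normal(size=40)
y[::7] += 100.0                                       # ambiguity-like jumps
print(l1_fit(A, y))                                   # close to [1, 2]
```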
A Micromechanics-Based Method for Multiscale Fatigue Prediction
NASA Astrophysics Data System (ADS)
Moore, John Allan
An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.
Simplified half-life methods for the analysis of kinetic data
NASA Technical Reports Server (NTRS)
Eberhart, J. G.; Levin, E.
1988-01-01
The analysis of reaction rate data has as its goal the determination of the order and rate constant which characterize the data. Chemical reactions with one reactant are considered, and simplified methods for accomplishing this goal are presented. The approaches presented involve the use of half-lives or other fractional lives. These methods are particularly useful for the more elementary discussions of kinetics found in general and physical chemistry courses.
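A short worked sketch of the fractional-life idea: for an nth-order reaction in a single reactant, t_1/2 is proportional to [A]0^(1-n), so the slope of log t_1/2 versus log [A]0 equals 1 - n; the data below are synthetic for a second-order case.

```python
import numpy as np

# Second-order example: t_half = 1/(k*[A]0), so the log-log slope is -1 = 1 - n
k = 0.05
A0 = np.array([0.2, 0.5, 1.0, 2.0])       # initial concentrations, mol/L
t_half = 1.0 / (k * A0)                   # synthetic half-life data

slope, _ = np.polyfit(np.log(A0), np.log(t_half), 1)
n = 1.0 - slope                           # recovered reaction order (~2)
print(n)
```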
NASA Astrophysics Data System (ADS)
Al-Rabadi, Anas N.
2009-10-01
This research introduces a new method of intelligent control for the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then a numerical algorithm used in robust control, the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B̃], [C̃], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the Buck converter model and thus allows a simpler controller that produces the desired system response and enhanced performance.
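A minimal sketch of the singular-perturbation reduction step mentioned above: with the transformed state partitioned into slow and fast parts, the fast states are set to quasi-steady state. All matrices are illustrative placeholders, not the Buck-converter model.

```python
import numpy as np

# Partitioned system: [x_s'; x_f'] = [[A11, A12],[A21, A22]] [x_s; x_f] + [[B1],[B2]] u
A11 = np.array([[0.0, 1.0], [-4.0, -1.0]])
A12 = np.array([[0.0], [2.0]])
A21 = np.array([[1.0, 0.0]])
A22 = np.array([[-50.0]])            # fast, well-damped mode (illustrative)
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[0.5]])

# Quasi-steady fast states: x_f = -inv(A22) @ (A21 x_s + B2 u)
A22inv = np.linalg.inv(A22)
A_red = A11 - A12 @ A22inv @ A21     # reduced slow dynamics
B_red = B1 - A12 @ A22inv @ B2
print(A_red, B_red, sep="\n")
```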
Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.
Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien
2018-05-01
RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease data volume significantly (factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist samplings were compared in vitro and in vivo. The estimate errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced amount of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
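The admissible sub-Nyquist rates follow from the bandpass sampling theorem: for a band [fL, fH], a rate fs is alias-free if 2fH/m ≤ fs ≤ 2fL/(m−1) for some integer m ≥ 1. A minimal check, with hypothetical probe parameters (5 MHz carrier, 2 MHz bandwidth), not the study's configurations:

```python
f0, B = 5.0e6, 2.0e6          # hypothetical carrier and RF bandwidth (Hz)
fs = 4.0e6                    # candidate sub-Nyquist rate (vs. 4*f0 = 20 MHz)
fl, fh = f0 - B / 2, f0 + B / 2

# Bandpass sampling theorem: fs is alias-free if, for some integer m >= 1,
#   2*fh/m <= fs <= 2*fl/(m - 1)   (upper bound is unbounded when m == 1)
valid = any(2 * fh / m <= fs and (m == 1 or fs <= 2 * fl / (m - 1))
            for m in range(1, int(fh // B) + 1))
alias = abs(f0 - fs * round(f0 / fs))   # where the carrier lands after sampling
print(f"fs valid: {valid}; 5 MHz carrier aliases to {alias / 1e6:.1f} MHz")
```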
Crack opening area estimates in pressurized through-wall cracked elbows under bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franco, C.; Gilles, P.; Pignol, M.
1997-04-01
One of the most important aspects in the leak-before-break approach is the estimation of the crack opening area corresponding to potential through-wall cracks at critical locations during plant operation. In order to provide a reasonable lower bound to the leak area under such loading conditions, numerous experimental and numerical programs have been developed in the USA, U.K. and FRG and widely discussed in the literature. This paper aims to extend these investigations to a class of pipe elbows characteristic of PWR main coolant piping. The paper is divided into three main parts. First, a new simplified estimation scheme for leakage area is described, based on the reference stress method. This approach, mainly developed in the U.K. and more recently in France, provides a convenient way to account for the non-linear behavior of the material. Second, the method is carried out for circumferential through-wall cracks located in PWR elbows subjected to internal pressure. Finite element crack area results are presented and comparisons are made with our predictions. Finally, in the third part, the discussion is extended to elbows under combined pressure and in-plane bending moment.
Simultaneous two-view epipolar geometry estimation and motion segmentation by 4D tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung; Medioni, Gérard
2004-09-01
We address the problem of simultaneous two-view epipolar geometry estimation and motion segmentation from nonstatic scenes. Given a set of noisy image pairs containing matches of n objects, we propose an unconventional, efficient, and robust method, 4D tensor voting, for estimating the unknown n epipolar geometries, and segmenting the static and motion matching pairs into n independent motions. By considering the 4D isotropic and orthogonal joint image space, only two tensor voting passes are needed, and a very high noise-to-signal ratio (up to five) can be tolerated. Epipolar geometries corresponding to multiple, rigid motions are extracted in succession. Only two uncalibrated frames are needed, and no simplifying assumption (such as affine camera model or homographic model between images) other than the pin-hole camera model is made. Our novel approach consists of propagating a local geometric smoothness constraint in the 4D joint image space, followed by global consistency enforcement for extracting the fundamental matrices corresponding to independent motions. We have performed extensive experiments to compare our method with some representative algorithms to show that better performance on nonstatic scenes is achieved. Results on challenging data sets are presented.
Space Shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
This fourth monthly progress report again contains corrections and additions to the previously submitted reports. The additions include a simplified SRB model that is directly incorporated into the estimation algorithm and provides the required partial derivatives. The resulting partial derivatives are analytical rather than numerical as would be the case using the SOBER routines. The filter and smoother routine developments have continued. These routines are being checked out.
Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2016-07-01
As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
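For orientation, one classical aggregation rule from the SNS literature, the simplified neutrosophic number weighted arithmetic averaging operator, combines the truth, indeterminacy and falsity degrees componentwise. The sketch below shows that conventional variant (the kind of operation whose shortcomings this paper addresses), not the paper's novel operators:

```python
import numpy as np

def snnwa(snns, w):
    """Conventional SNN weighted arithmetic averaging (one literature variant):
    T = 1 - prod((1-T_j)^w_j),  I = prod(I_j^w_j),  F = prod(F_j^w_j)."""
    t, i, f = np.array(snns, dtype=float).T
    w = np.asarray(w, dtype=float)
    return (1 - np.prod((1 - t) ** w), np.prod(i ** w), np.prod(f ** w))

# two criterion values <T, I, F> aggregated with equal weights
print(snnwa([(0.6, 0.2, 0.3), (0.8, 0.1, 0.2)], [0.5, 0.5]))
```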
Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.
ERIC Educational Resources Information Center
Kim, YoungHwan; Reigeluth, Charles M.
The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…
A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro
NASA Technical Reports Server (NTRS)
Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman
1996-01-01
Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...
Evaluating simplified methods for liquefaction assessment for loss estimation
NASA Astrophysics Data System (ADS)
Kongar, Indranil; Rossetto, Tiziana; Giovinazzi, Sonia
2017-06-01
Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at optimal thresholds. This paper also considers two models (HAZUS and EPOLLS) for estimation of the scale of liquefaction in terms of permanent ground deformation but finds that both models perform poorly, with correlations between observations and forecasts lower than 0.4 in all cases. Therefore these models potentially provide negligible additional value to loss estimation analysis outside of the regions for which they have been developed.
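The forecast scoring behind these percentages is a plain threshold classification. A minimal sketch with hypothetical site data, using the LPI threshold of 7 quoted above:

```python
import numpy as np

def forecast_scores(lpi, liquefied, threshold):
    """Binary forecast: predict liquefaction wherever LPI >= threshold."""
    pred = np.asarray(lpi) >= threshold
    obs = np.asarray(liquefied, dtype=bool)
    tpr = (pred & obs).sum() / obs.sum()        # liquefied sites caught
    tnr = (~pred & ~obs).sum() / (~obs).sum()   # non-liquefied sites cleared
    return tpr, tnr

# hypothetical LPI values and observed liquefaction occurrence per site
lpi = [2.1, 9.5, 7.3, 0.4, 12.0, 5.5]
obs = [0, 1, 1, 0, 1, 0]
print(forecast_scores(lpi, obs, threshold=7))
```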
Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.
Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie
2018-05-01
Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 reliable equations for estimation of LBM in daily clinical practice. The development and validation groups each included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, based on mid-arm muscle circumference (MAMC) or handgrip strength (HGS) and also incorporating sex, height, and weight, were developed and validated in CKD patients with dual-energy x-ray absorptiometry as the gold-standard reference method. The new equations were found to exhibit only small biases when compared with dual-energy x-ray absorptiometry, with median differences of 0.94 and 0.46 kg observed for the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and in the percentages of estimates within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when applied to groups of patients divided by the median measured LBM, the median ratio of extracellular to total body water, and the stages of CKD. Equations based on MAMC or HGS were found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime
2018-02-01
Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which enables reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model with scripts to perform parameter estimation will be released to the community.
NASA Astrophysics Data System (ADS)
Sadeghipour, Negar; Davis, Scott C.; Tichauer, Kenneth M.
2018-02-01
Dynamic fluorescence imaging approaches can be used to estimate the concentration of cell surface receptors in vivo. Kinetic models are used to generate the final estimation by taking the targeted imaging agent concentration as a function of time. However, tissue absorption and scattering properties cause the final readout signal to be on a different scale than the real fluorescent agent concentration. In paired-agent imaging approaches, simultaneous injection of a suitable control imaging agent with a targeted one can account for non-specific uptake and retention of the targeted agent. Additionally, the signal from the control agent can be a normalizing factor to correct for tissue optical property differences. In this study, the kinetic model used for paired-agent imaging analysis (i.e., simplified reference tissue model) is modified and tested in simulation and experimental data in a way that accounts for the scaling correction within the kinetic model fit to the data to ultimately extract an estimate of the targeted biomarker concentration.
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2018-01-01
Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to the case when the transformation parameters are generally large, for which no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can be provided directly. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
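The point that no approximate parameter values are needed also holds for the classical closed-form similarity solution. The sketch below is an ordinary (unweighted) Umeyama/Procrustes fit of the 7-parameter transformation, not the paper's WTLS algorithm, but it illustrates estimation without linearizing the rotations:

```python
import numpy as np

def similarity_transform(X, Y):
    """Closed-form 7-parameter fit  Y ~ s*R@x + t  (Umeyama/Procrustes).
    X, Y: (n, 3) matched coordinates in the start and target systems."""
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (Xc**2).sum()
    t = my - s * R @ mx
    return s, R, t

# usage with synthetic matched points and a large (0.8 rad) rotation
X = np.random.default_rng(1).normal(size=(10, 3))
c, s_ = np.cos(0.8), np.sin(0.8)
R0 = np.array([[c, -s_, 0], [s_, c, 0], [0, 0, 1]])
Y = 1.5 * X @ R0.T + np.array([10.0, -5.0, 2.0])
s, R, t = similarity_transform(X, Y)   # recovers 1.5, R0 and t exactly
```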
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, J.; Reboul, S.
2015-06-01
SRS High Level Waste Tank Farm personnel rely on conductivity probes for detection of incipient overflow conditions in waste tanks. Minimal information is available concerning the sensitivity that must be achieved such that liquid detection is assured. Overly sensitive electronics result in numerous nuisance alarms for these safety-related instruments. In order to determine the minimum sensitivity required of the probe, Tank Farm Engineering personnel need adequate conductivity data to improve the existing designs. Little or no measurement of liquid waste conductivity exists; however, the liquid phase of the waste consists of inorganic electrolytes for which the conductivity may be calculated. Savannah River Remediation (SRR) Tank Farm Facility Engineering requested SRNL to determine the conductivity of the supernate resident in SRS waste Tank 40 experimentally as well as computationally. In addition, SRNL was requested to develop a correlation, if possible, that would be generally applicable to liquid waste resident in SRS waste tanks. A waste sample from Tank 40 was analyzed for composition and electrical conductivity as shown in Table 4-6, Table 4-7, and Table 4-9. The conductivity of the undiluted Tank 40 sample was 0.087 S/cm. The accuracy of OLI Analyzer™ was determined using available literature data. Overall, 95% of computed estimates of electrical conductivity are within ±15% of literature values for component concentrations from 0 to 15 M and temperatures from 0 to 125 °C. Though the computational results are generally in good agreement with the measured data, a small portion of the literature data deviates by as much as ±76%. A simplified model was created that can be used readily to estimate the electrical conductivity of waste solutions in computer spreadsheets. The variability of this simplified approach deviates by up to 140% from measured values. Generally, this model can be applied to estimate the conductivity within a factor of two. The comparison of the simplified model to pure-component literature data suggests that the simplified model will tend to underestimate the electrical conductivity. Comparison of the computed Tank 40 conductivity with the measured conductivity shows good agreement within the range of deviation identified based on pure-component literature data.
Tillman, F.D.; Callegary, J.B.; Nagler, P.L.; Glenn, E.P.
2012-01-01
Groundwater is a vital water resource in the arid to semi-arid southwestern United States. Accurate accounting of inflows to and outflows from the groundwater system is necessary to effectively manage this shared resource, including the important outflow component of groundwater discharge by vegetation. A simple method for estimating basin-scale groundwater discharge by vegetation is presented that uses remote sensing data from satellites, geographic information systems (GIS) land cover and stream location information, and a regression equation developed within the Southern Arizona study area relating the Enhanced Vegetation Index from the MODIS sensors on the Terra satellite to measured evapotranspiration. Results computed for 16-day composited satellite passes over the study area during the 2000 through 2007 time period demonstrate a sinusoidal pattern of annual groundwater discharge by vegetation with median values ranging from around 0.3 mm per day in the cooler winter months to around 1.5 mm per day during summer. Maximum estimated annual volume of groundwater discharge by vegetation was between 1.4 and 1.9 billion m3 per year with an annual average of 1.6 billion m3. A simplified accounting of the contribution of precipitation to vegetation greenness was developed whereby monthly precipitation data were subtracted from computed vegetation discharge values, resulting in estimates of minimum groundwater discharge by vegetation. Basin-scale estimates of minimum and maximum groundwater discharge by vegetation produced by this simple method are useful bounding values for groundwater budgets and groundwater flow models, and the method may be applicable to other areas with similar vegetation types.
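The accounting described reduces to two steps: a saturating regression converts EVI composites to evapotranspiration, and monthly precipitation is subtracted to bound the groundwater-supplied component from below. In the sketch, the regression form and the coefficients a, b, c are placeholders, not the study's fitted values:

```python
import numpy as np

# Hypothetical regression ET = a*(1 - exp(-b*EVI)) - c, in mm/day;
# the coefficients are illustrative placeholders only.
a, b, c = 2.5, 3.0, 0.2

def min_groundwater_et(evi, precip_mm_day):
    et = a * (1.0 - np.exp(-b * np.asarray(evi))) - c
    # subtracting the precipitation contribution leaves a minimum
    # estimate of the groundwater-supplied component, floored at zero
    return np.maximum(et - precip_mm_day, 0.0)

print(min_groundwater_et([0.2, 0.5], precip_mm_day=0.3))
```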
Guo, Hongbin; Renaut, Rosemary A; Chen, Kewei; Reiman, Eric M
2010-01-01
Graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence. But they may, particularly for reversible kinetics, lead to bias in the estimated parameters. The source of the bias is commonly attributed to noise in the data. Assuming a two-tissue compartmental model, we investigate the bias that originates from modeling error. This bias is an intrinsic property of the simplified linear models used for limited scan durations, and it is exaggerated by random noise and numerical quadrature error. Conditions are derived under which Logan's graphical method either over- or under-estimates the distribution volume in the noise-free case. The bias caused by modeling error is quantified analytically. The presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate. Furthermore, visual examination of the linearity of the Logan plot is not sufficient for guaranteeing that equilibrium has been reached. A new model which retains the elegant properties of graphical analysis methods is presented, along with a numerical algorithm for its solution. We perform simulations with the fibrillar amyloid β radioligand [11C] benzothiazole-aniline using published data from the University of Pittsburgh and Rotterdam groups. The results show that the proposed method significantly reduces the bias due to modeling error. Moreover, the results for data acquired over a 70 minutes scan duration are at least as good as those obtained using existing methods for data acquired over a 90 minutes scan duration. PMID:20493196
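For reference, Logan's method itself is a short computation: after an equilibration time t*, plotting ∫Ct dτ/Ct(t) against ∫Cp dτ/Ct(t) gives a line whose slope estimates the distribution volume. A generic sketch (subject to exactly the bias analyzed above):

```python
import numpy as np

def logan_dv(t, ct, cp, t_star):
    """Logan plot: for t >= t_star, int(Ct)/Ct vs int(Cp)/Ct is ~linear
    with slope equal to the total distribution volume DV."""
    int_ct = np.array([np.trapz(ct[:i + 1], t[:i + 1]) for i in range(len(t))])
    int_cp = np.array([np.trapz(cp[:i + 1], t[:i + 1]) for i in range(len(t))])
    m = t >= t_star                       # keep only the linear segment
    x, y = int_cp[m] / ct[m], int_ct[m] / ct[m]
    dv, intercept = np.polyfit(x, y, 1)
    return dv
```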
Henthorn, Thomas K; Avram, Michael J; Dahan, Albert; Gustafsson, Lars L; Persson, Jan; Krejcie, Tom C; Olofsen, Erik
2018-05-16
The pharmacokinetics of infused drugs have been modeled without regard for recirculatory or mixing kinetics. We used a unique ketamine dataset with simultaneous arterial and venous blood sampling, during and after separate S(+) and R(-) ketamine infusions, to develop a simplified recirculatory model of arterial and venous plasma drug concentrations. S(+) or R(-) ketamine was infused over 30 min on two occasions to 10 healthy male volunteers. Frequent, simultaneous arterial and forearm venous blood samples were obtained for up to 11 h. A multicompartmental pharmacokinetic model with front-end arterial mixing and venous blood components was developed using nonlinear mixed effects analyses. A three-compartment base pharmacokinetic model with additional arterial mixing and arm venous compartments and with shared S(+)/R(-) distribution kinetics proved superior to standard compartmental modeling approaches. Total pharmacokinetic flow was estimated to be 7.59 ± 0.36 l/min (mean ± standard error of the estimate), and S(+) and R(-) elimination clearances were 1.23 ± 0.04 and 1.06 ± 0.03 l/min, respectively. The arm-tissue link rate constant was 0.18 ± 0.01 min⁻¹ and the fraction of arm blood flow estimated to exchange with arm tissue was 0.04 ± 0.01. Arterial drug concentrations measured during drug infusion have two kinetically distinct components: partially or lung-mixed drug and fully mixed-recirculated drug. Front-end kinetics suggest the partially mixed concentration is proportional to the ratio of infusion rate and total pharmacokinetic flow. This simplified modeling approach could lead to more generalizable models for target-controlled infusions and improved methods for analyzing pharmacokinetic-pharmacodynamic data.
NASA Astrophysics Data System (ADS)
Şahin, Rıdvan; Liu, Peide
2017-07-01
Simplified neutrosophic set (SNS) is an appropriate tool used to express the incompleteness, indeterminacy and uncertainty of the evaluation objects in the decision-making process. In this study, we define the concept of a possibility SNS, which includes two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value ranging from zero to one. Then, by extending the existing neutrosophic information aggregation models for SNSs, which cannot effectively fuse these two different types of information, we propose two novel neutrosophic aggregation operators considering possibility, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a useful method based on the proposed aggregation operators for solving a multi-criteria group decision-making problem with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated based on an entropy measure. Finally, a practical example is utilised to show the practicality and effectiveness of the proposed method.
The ECCO Family of State Estimates: An Overview
NASA Astrophysics Data System (ADS)
Wunsch, C.
2008-12-01
The idea of ECCO (Estimating the Circulation and Climate of the Ocean) originated in the middle 1980s, when it became apparent that a global oceanographic observing system for the general circulation would become a reality, as it did through the World Ocean Circulation Experiment. Observational design involved extremely diverse technologies and oceanic flow regimes. To be physically interpretable, these diverse data and physical processes would need to be combined into a useful, coherent whole. Such a synthesis can only be done with a skillful GCM having useful resolution. ECCO originated as an experiment to demonstrate the technical feasibility of such a synthesis and to determine if any of several possible methods was preferable. In contrast to a number of other superficially similar efforts, mainly derived from weather forecasting methods, the ECCO goal was to estimate the long-term circulation mean and its variability on climate (decadal and longer) time scales in a form exactly satisfying known equations of motion. ECCO was made feasible by the simultaneous construction of a new GCM (MIT) along with the development of an automatic differentiation (AD) software tool (now called TAF), which rendered practical the method of Lagrange multipliers (called the adjoint method in oceanography). Parallel developments of simplified sequential methods (smoothers) provided an alternative, also practical, methodology. One can now use the existing (publicly available) machinery to discuss the ocean circulation and its variability. The huge variety of issues connected with the global circulation has meant that an entire family of estimates has grown up, each having different emphases: some primarily global, some primarily regional (the tropics, the Southern Ocean), and some focused on particular physics (the role of eddies or sea ice). The methodology leads, usefully, to intense scrutiny of data and model errors and spatio-temporal coverage. As with any estimation problem, no uniquely 'correct' solution is now or ever going to be possible---only evolving best estimates. Further development of these and similar methodologies appears to be a necessary, inevitable, and growing component of oceanography and climate.
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai
2016-10-01
To evaluate the continuous and instantaneous load capability of a battery, this paper describes a joint estimator for the state-of-charge (SOC) and state-of-function (SOF) of lithium-ion batteries (LIBs) based on a Kalman filter (KF). The SOC is a widely used index for the remaining useful capacity of a battery. The SOF represents the peak power capability of the battery and can be determined from real-time SOC estimation and terminal voltage prediction, which can be derived from impedance parameters. However, the open-circuit voltage (OCV) of LiFePO4 is highly nonlinear in SOC, which complicates SOC estimation. To solve these problems, this paper proposes an onboard SOC estimation method. Firstly, a simplified linearized equivalent circuit model is developed to simulate the dynamic characteristics of a battery, where the OCV is regarded as a linearized function of SOC. Then, the system states are estimated based on the KF. Besides, the factors that influence peak power capability are analyzed according to statistical data. Finally, the performance of the proposed methodology is demonstrated by experiments conducted on LiFePO4 LIBs under different operating currents and temperatures. Experimental results indicate that the proposed approach is suitable for onboard battery SOC and SOF estimation.
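A minimal sketch of the estimator structure described: coulomb counting predicts SOC, and a scalar Kalman correction uses the terminal-voltage residual under a linearized OCV-SOC relation. The constants are hypothetical, not the paper's identified values:

```python
# Hypothetical constants: OCV(SOC) ~ a*SOC + b on the flat LiFePO4 plateau.
a, b = 0.7, 3.2              # V per unit SOC, V
R0   = 0.01                  # ohmic resistance, ohm
Q    = 2.0 * 3600            # cell capacity, coulombs
dt, q, r = 1.0, 1e-8, 1e-4   # time step (s), process and measurement noise

def kf_soc(soc, p, currents, voltages):
    """Scalar Kalman filter: coulomb-counting prediction, voltage correction."""
    for i_k, v_k in zip(currents, voltages):
        soc -= i_k * dt / Q              # predict (discharge current positive)
        p += q
        k = p * a / (a * p * a + r)      # gain, measurement Jacobian is a
        soc += k * (v_k - (a * soc + b - i_k * R0))
        p *= (1.0 - k * a)
    return soc, p
```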
A method to estimate groundwater depletion from confining layers
Konikow, Leonard F.; Neuzil, Christopher E.
2007-01-01
Although depletion of storage in low‐permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.
Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU
NASA Astrophysics Data System (ADS)
Ciarleglio, Constance A.
Kinematic determination for an unmodeled manipulator is usually done through a-priori knowledge of the manipulator's physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, is complex and has high computation requirements, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on-the-fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw-theory-based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial, revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented in which the theory of the method is verified and characterized with error. The method is then implemented on an existing manipulator as a verification of functionality.
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for key turbo-shaft engine sensors is mainly based on a dual-redundancy technique, which is insufficient in some situations because two channels alone provide no basis for judging which has failed, while additional hardware redundancy would increase structural complexity and weight. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads, built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
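The triplex voting logic can be sketched generically: with two hardware channels plus the model's analytical channel, the fault is attributed to whichever channel disagrees with the other two. An illustrative reconstruction, not the authors' code:

```python
def diagnose(primary, backup, model, tol):
    """Identify the faulty channel among two measurements and a model value.
    Returns the name of the outlier channel, or None if all agree within tol.
    The on-board model acts as the analytical third channel."""
    chans = {"primary": primary, "backup": backup, "model": model}
    for name in chans:
        others = [v for n, v in chans.items() if n != name]
        # a channel is declared faulty if it disagrees with both agreeing others
        if all(abs(chans[name] - v) > tol for v in others) \
           and abs(others[0] - others[1]) <= tol:
            return name
    return None

print(diagnose(100.0, 100.5, 112.0, tol=2.0))  # -> 'model'
```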
A simplified model for tritium permeation transient predictions when trapping is active
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1994-09-01
This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV
NASA Astrophysics Data System (ADS)
Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.
2011-04-01
When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
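A sketch of the coupling idea on top of SRTM: writing each region's efflux rate as k2 = R1·kr2, with a shared reference clearance kr2, lets one simultaneous fit across regions estimate kr2 once. The model form is standard SRTM (Lammertsma and Hume); the time grid, reference curve and fitting set-up below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def srtm(t, cr, r1, k2, bp):
    """SRTM: Ct = R1*Cr + (k2 - R1*k2/(1+BP)) * [Cr (*) exp(-k2*t/(1+BP))]."""
    k2a = k2 / (1.0 + bp)
    conv = np.convolve(cr, np.exp(-k2a * t))[: len(t)] * (t[1] - t[0])
    return r1 * cr + (k2 - r1 * k2a) * conv

t = np.linspace(0.0, 90.0, 200)            # minutes, uniform grid assumed
cr = t * np.exp(-t / 20.0)                 # toy reference-region curve

def coupled(_x, r1a, r1b, kr2, bpa, bpb):
    # kr2 is shared: each region's k2 is R1 * kr2 (the coupling constraint)
    return np.concatenate([srtm(t, cr, r1a, r1a * kr2, bpa),
                           srtm(t, cr, r1b, r1b * kr2, bpb)])

data = coupled(None, 1.2, 0.9, 0.15, 2.0, 1.0)
data += 0.01 * np.random.default_rng(0).normal(size=data.size)
popt, _ = curve_fit(coupled, np.tile(t, 2), data, p0=[1.0, 1.0, 0.1, 1.5, 1.5])
print(popt)   # recovers (1.2, 0.9, 0.15, 2.0, 1.0) approximately
```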
Allan, G. Michael; Lindblad, Adrienne J.; Comeau, Ann; Coppola, John; Hudson, Brianne; Mannarino, Marco; McMinis, Cindy; Padwal, Raj; Schelstraete, Christine; Zarnke, Kelly; Garrison, Scott; Cotton, Candra; Korownyk, Christina; McCormack, James; Nickel, Sharon; Kolber, Michael R.
2015-01-01
Abstract Objective To develop clinical practice guidelines for a simplified approach to primary prevention of cardiovascular disease (CVD), concentrating on CVD risk estimation and lipid management for primary care clinicians and their teams; we sought increased contribution from primary care professionals with little or no conflict of interest and focused on the highest level of evidence available. Methods Nine health professionals (4 family physicians, 2 internal medicine specialists, 1 nurse practitioner, 1 registered nurse, and 1 pharmacist) and 1 nonvoting member (pharmacist project manager) comprised the overarching Lipid Pathway Committee (LPC). Member selection was based on profession, practice setting, and location, and members disclosed any actual or potential conflicts of interest. The guideline process was iterative through online posting, detailed evidence review, and telephone and online meetings. The LPC identified 12 priority questions to be addressed. The Evidence Review Group answered these questions. After review of the answers, key recommendations were derived through consensus of the LPC. The guidelines were drafted, refined, and distributed to a group of clinicians (family physicians, other specialists, pharmacists, nurses, and nurse practitioners) and patients for feedback, then refined again and finalized by the LPC. Recommendations Recommendations are provided on screening and testing, risk assessments, interventions, follow-up, and the role of acetylsalicylic acid in primary prevention. Conclusion These simplified lipid guidelines provide practical recommendations for prevention and treatment of CVD for primary care practitioners. All recommendations are intended to assist with, not dictate, decision making in conjunction with patients. PMID:26472792
On simplified application of multidimensional Savitzky-Golay filters and differentiators
NASA Astrophysics Data System (ADS)
Shekhar, Chandra
2016-02-01
I propose a simplified approach for multidimensional Savitzky-Golay filtering, to enable its fast and easy implementation in scientific and engineering applications. The proposed method, which is derived from a generalized framework laid out by Thornley (D. J. Thornley, "Novel anisotropic multidimensional convolution filters for derivative estimation and reconstruction" in Proceedings of International Conference on Signal Processing and Communications, November 2007), first transforms any given multidimensional problem into a unique one, by transforming coordinates of the sampled data nodes to unity-spaced, uniform data nodes, and then performs filtering and calculates partial derivatives on the unity-spaced nodes. It is followed by transporting the calculated derivatives back onto the original data nodes by using the chain rule of differentiation. The burden of performing the most cumbersome task, which is to carry out the filtering and to obtain derivatives on the unity-spaced nodes, is almost eliminated by providing convolution coefficients for a number of convolution kernel sizes and polynomial orders, up to four spatial dimensions. With the availability of the convolution coefficients, the task of filtering at a data node reduces merely to multiplication of two known matrices. Simplified strategies to adequately address near-boundary data nodes and to calculate partial derivatives there are also proposed. Finally, the proposed methodologies are applied to a three-dimensional experimentally obtained data set, which shows that multidimensional Savitzky-Golay filters and differentiators perform well in both the internal and the near-boundary regions of the domain.
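In one dimension the same machinery is available off the shelf: SciPy's savgol_filter smooths and differentiates uniformly spaced samples, with the delta argument playing the role of the node-spacing transformation described above. The multidimensional kernels tabulated in the paper have no equally standard library form; this is only the 1D analogue:

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0.0, 2.0 * np.pi, 200)          # uniformly spaced samples
dx = x[1] - x[0]
y = np.sin(x) + 0.05 * np.random.default_rng(0).normal(size=x.size)

smooth = savgol_filter(y, window_length=11, polyorder=3)
dy_dx  = savgol_filter(y, window_length=11, polyorder=3, deriv=1, delta=dx)
# dy_dx approximates cos(x); `delta` rescales the unity-spaced-node result
```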
Uncertainty in predicting soil hydraulic properties at the hillslope scale with indirect methods
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2007-02-01
Several hydrological applications require the characterisation of the soil hydraulic properties at large spatial scales. Pedotransfer functions (PTFs) are being developed as simplified methods to estimate soil hydraulic properties as an alternative to direct measurements, which are unfeasible for most practical circumstances. The objective of this study is to quantify the uncertainty in PTF spatial predictions at the hillslope scale as related to the sampling density, due to: (i) the error in estimated soil physico-chemical properties and (ii) PTF model error. The analysis is carried out on a 2-km-long experimental hillslope in South Italy. The method adopted is based on a stochastic generation of patterns of soil variables using sequential Gaussian simulation, conditioned to the observed sample data. The following PTFs are applied: Vereecken's PTF [Vereecken, H., Diels, J., van Orshoven, J., Feyen, J., Bouma, J., 1992. Functional evaluation of pedotransfer functions for the estimation of soil hydraulic properties. Soil Sci. Soc. Am. J. 56, 1371-1378] and HYPRES PTF [Wösten, J.H.M., Lilly, A., Nemes, A., Le Bas, C., 1999. Development and use of a database of hydraulic properties of European soils. Geoderma 90, 169-185]. The two PTFs estimate the soil water retention characteristic reliably even for a relatively coarse sampling resolution, with prediction uncertainties comparable to the uncertainties in direct laboratory or field measurements. The uncertainty of soil water retention prediction due to the model error is as much as or more significant than the uncertainty associated with the estimated input, even for a relatively coarse sampling resolution. Prediction uncertainties are much more important when PTFs are applied to estimate the saturated hydraulic conductivity. In this case model error dominates the overall prediction uncertainties, making the effect of the input error negligible.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed. PMID:26679833
Barfi, Behruz; Asghari, Alireza; Rajabi, Maryam; Barfi, Azadeh; Saeidi, Iman
2013-10-11
In the present study, for the first time, a simplified miniaturized ultrasound-assisted matrix solid-phase dispersion (SM-USA-MSPD) method with a different application for liquid matrices was developed to extract different flavonoids (hesperidin, diosmin, eriocitrin, narirutin, naringin, hesperetin and naringenin) from citrus fruit juice and human fluid samples prior to their determination using high performance liquid chromatography (HPLC). Different effective parameters were studied, and under the optimum conditions (sample volume: 150 μL; solid phase: silica-based C18, 200 mg; eluting solvent: methanol, 500 μL; pH: 4; and sonication: 6 min at room temperature), limits of detection and limits of quantification ranged from 23.3 to 46.8 ng mL⁻¹ and 74.8 to 141.5 ng mL⁻¹, respectively. Once optimized, the analytical performance of the method was studied in terms of linearity (0.074-198.5 μg mL⁻¹, r² > 0.991), accuracy (recovery = 84.6-101.5%), and precision (repeatability: intra-day precision < 5.9%, inter-day precision < 7.2%). Finally, the SM-USA-MSPD method was successfully applied to estimate the levels of hesperetin and naringenin in plasma and urinary excretion after ingestion of orange, grapefruit and lime juices, and the obtained results confirmed that these compounds could be used as good biomarkers of citrus fruit juice intake. Copyright © 2013 Elsevier B.V. All rights reserved.
Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi
2017-11-21
As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Qc is equal to 80% of the maximum achievable absorbed heat power Qmax. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, Robert E.
2015-12-08
Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.
NASA Astrophysics Data System (ADS)
Voityuk, Alexander A.; Rösch, Notker
2002-09-01
The purpose of this communication is two-fold. We introduce the fragment charge difference (FCD) method to estimate the electron transfer matrix element HDA between a donor D and an acceptor A, and we apply this method to several aspects of hole-transfer electronic couplings in π-stacks of DNA, including systems with several donor-acceptor sites. Within the two-state model, our scheme can be simplified to recover a convenient estimate of the electron transfer matrix element, HDA = (1 − Δq²)^(1/2)(E₂ − E₁)/2, based on the vertical excitation energy E₂ − E₁ and the charge difference Δq between donor and acceptor. For systems with strong charge separation, Δq ≳ 0.95, one should resort to the full FCD method. As a favorable feature, we demonstrate the stability of the FCD approach for systems which require an approach beyond the two-state model. On the basis of ab initio calculations of various DNA-related systems, we compared three approaches for estimating the electronic coupling: the minimum splitting method, the generalized Mulliken-Hush (GMH) scheme, and the FCD approach. We studied the sensitivity of FCD and GMH couplings to the donor-acceptor energy gap and found both schemes to be quite robust; they are applicable also in cases where donor and acceptor states are off resonance. In the application to π-stacks of DNA, we demonstrated for the Watson-Crick pair dimer [(GC),(GC)] how structural changes considerably affect the coupling strength of electron hole transfer. For models of three Watson-Crick pairs, we showed that the two-state model significantly overestimates the hole transfer coupling whereas simultaneous treatment of several states leads to satisfactory results.
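Numerically, the two-state estimate is a one-liner once the vertical excitation energy and the donor-acceptor charge difference are available; the values below are hypothetical:

```python
import numpy as np

def h_da(e1, e2, dq):
    """Two-state FCD estimate: H_DA = sqrt(1 - dq**2) * (E2 - E1) / 2,
    with dq the donor-acceptor charge difference (in units of e)."""
    return np.sqrt(1.0 - dq**2) * (e2 - e1) / 2.0

# hypothetical adiabatic gap of 0.30 eV with dq = 0.90
print(h_da(0.0, 0.30, 0.90))   # ~0.065 eV
```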
NASA Astrophysics Data System (ADS)
Bour, O.; Klepikova, M.; Le Borgne, T.; De Dreuzy, J.
2013-12-01
Inverse modeling of the hydraulic and geometrical properties of fractured media is a very challenging objective due to the spatial heterogeneity of the medium and the scarcity of data. Here we present a flow tomography approach that permits characterization of the location, connectivity and hydraulic properties of the main flow paths in fractured media. The accurate characterization of the location, hydraulic properties and connectivity of major fracture zones is essential to model flow and solute transport in fractured media. Cross-borehole flowmeter tests, which consist of measuring changes in vertical borehole flows when pumping a neighboring borehole, were shown to be an efficient technique to provide information on the properties of the flow zones that connect borehole pairs [Paillet, 1998; Le Borgne et al., 2006]. The interpretation of such experiments may however be quite uncertain when multiple connections exist. In this study, we explore the potential of flow tomography (i.e., sequential cross-borehole flowmeter tests) for characterizing aquifer heterogeneity. We first propose a framework for inverting flow and drawdown data to infer fracture connectivity and transmissivities. Here we use a simplified discrete fracture network approach that highlights the main connectivity structures. This conceptual model attempts to reproduce fracture network connectivity without taking fracture geometry (length, orientation, dip) into account. We then explore the potential of the method for simplified synthetic fracture network models and quantify the sensitivity of drawdown and borehole flow velocities to the transmissivity of the connecting flowpaths. Flow tomography is expected to be most effective if cross-borehole pumping induces large changes in vertical borehole velocities. The uncertainty of the transmissivity estimates increases for small borehole flow velocities. The uncertainty about the transmissivity of fractures that connect the main flowpath but not the boreholes is generally higher. We demonstrate that successively changing pumping and observation boreholes improves the quality of available information and reduces the indetermination of the problem. The inverse method is validated for different synthetic flow scenarios. It is shown to provide a good estimation of connectivity patterns and transmissivities of main flowpaths. Although the chosen fracture network geometry has been simplified, flow tomography appears to be a promising approach for characterizing connectivity patterns and transmissivities of fractured media.
Photographic and drafting techniques simplify method of producing engineering drawings
NASA Technical Reports Server (NTRS)
Provisor, H.
1968-01-01
A combination of photographic and drafting techniques has been developed to simplify the preparation of three-dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high-contrast film.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air-data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing of August 2012, and produces reasonable estimates of the atmosphere. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is also applied to the design of the pressure measurement system for the Mars 2020 mission, and a linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
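As a rough illustration of the estimation step, the sketch below solves a toy flush-air-data problem: surface pressures are modeled with a modified Newtonian distribution, and freestream static pressure, dynamic pressure, and angle of attack are recovered by nonlinear weighted least squares. The pressure model, port geometry, and noise level are assumptions for illustration, not the flight configuration.

```python
import numpy as np
from scipy.optimize import least_squares

# Incidence angle of each pressure port relative to the vehicle axis
# (toy meridian geometry, not an actual FADS port layout).
port_angles = np.deg2rad([0.0, 20.0, 40.0, -20.0, -40.0])

def pressures(p_inf, q, alpha):
    # Modified Newtonian surface pressure: p_i = p_inf + q*cos^2(theta_i - alpha)
    return p_inf + q * np.cos(port_angles - alpha) ** 2

# Synthetic "measurements"
rng = np.random.default_rng(0)
truth = (250.0, 4000.0, np.deg2rad(5.0))   # static pressure, dynamic pressure, AoA
sigma = 5.0                                # per-port measurement noise (Pa)
y = pressures(*truth) + rng.normal(0.0, sigma, port_angles.size)

def residuals(x):
    return (pressures(*x) - y) / sigma     # weighting by 1/sigma

sol = least_squares(residuals, x0=[100.0, 1000.0, 0.0])
p_inf, q, alpha = sol.x
print(p_inf, q, np.rad2deg(alpha))         # should be close to the truth
```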
Xun-Ping, W; An, Z
2017-07-27
Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland regions endemic for schistosomiasis, so as to improve the precision, efficiency, and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m × 50 m plot in a marshland in the Poyang Lake region. First, the push-broom survey data were stratified into 5 layers by the plant abundance data; second, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; third, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA were compared. Results The method (SOPA) proposed in this study had the smallest absolute error, 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study achieves higher estimation accuracy than the other four methods.
Simplified method for numerical modeling of fiber lasers.
Shtyrina, O V; Yarutkina, I A; Fedoruk, M P
2014-12-29
A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
NASA Astrophysics Data System (ADS)
Kartashov, Dmitry; Shurshakov, Vyacheslav
2018-03-01
A ray-tracing method to calculate the radiation exposure levels of astronauts for different spacecraft shielding configurations has been developed. The method uses simplified shielding geometry models of the spacecraft compartments together with depth-dose curves, which can be obtained with different space radiation environment models and radiation transport codes. The spacecraft shielding configurations are described by a set of geometry objects. To calculate the shielding probability functions for each object, its surface is composed of a set of disjoint adjacent triangles that fully cover the surface; such a description can be applied to objects of any complex shape. The method is applied to the conditions of the MATROSHKA-R space experiment, carried out onboard the ISS from 2004 to 2016. Dose measurements were performed in the ISS compartments with anthropomorphic and spherical phantoms and with the protective curtain facility, which provides additional shielding on the crew cabin wall. The space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms and for additional shielding installed in the compartment are calculated. The calculated values agree with the experimental data to within about 15%; thus the calculation method has been successfully verified with the MATROSHKA-R experiment data. The ray-tracing dose calculation method can be recommended for estimating the dose distribution in the astronaut body in different space station compartments and for estimating the efficiency of additional shielding, especially when the exact compartment shielding geometry and the radiation environment for the planned mission are not known.
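The core of such a ray-tracing dose calculation is compact: cast rays over the solid angle around the dose point, accumulate the shielding thickness each ray crosses, and look up the result in a depth-dose curve. Below is a minimal sketch with a made-up depth-dose table and a toy shielding function; the actual method intersects rays with the triangulated compartment surfaces.

```python
import numpy as np

# Depth-dose curve: areal density (g/cm^2) -> dose rate (illustrative numbers)
depth = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
dose  = np.array([900., 500., 320., 200., 110.,  60.,  30.,  10.])

def dose_at_point(shielding_along_ray, n_rays=2000):
    """Average depth-dose over an isotropic set of rays.
    shielding_along_ray: callable direction -> areal density (g/cm^2)."""
    rng = np.random.default_rng(1)
    total = 0.0
    for _ in range(n_rays):
        u = rng.normal(size=3)
        u /= np.linalg.norm(u)        # uniform direction on the sphere
        t = shielding_along_ray(u)
        total += np.interp(t, depth, dose)
    return total / n_rays

# Toy shielding: a hull of 2 g/cm^2, thicker toward one wall
hull = lambda u: 2.0 + (8.0 if u[0] > 0.7 else 0.0)
print(dose_at_point(hull))
```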
Transmission Parameters of the 2001 Foot and Mouth Epidemic in Great Britain
Chis Ster, Irina; Ferguson, Neil M.
2007-01-01
Despite intensive ongoing research, key aspects of the spatial-temporal evolution of the 2001 foot and mouth disease (FMD) epidemic in Great Britain (GB) remain unexplained. Here we develop a Markov Chain Monte Carlo (MCMC) method for estimating epidemiological parameters of the 2001 outbreak for a range of simple transmission models. We make the simplifying assumption that infectious farms were completely observed in 2001, equivalent to assuming that farms that were proactively culled but not diagnosed with FMD were not infectious, even if some were infected. We estimate how transmission parameters varied through time, highlighting the impact of the control measures on the progression of the epidemic. We demonstrate statistically significant evidence for assortative contact patterns between animals of the same species. Predictive risk maps of the transmission potential in different geographic areas of GB are presented for the fitted models. PMID:17551582
Improved photo response non-uniformity (PRNU) based source camera identification.
Cooper, Alan J
2013-03-10
The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, PRNU estimation methodologies have centred on a wavelet-based de-noising approach. Resultant filtering artefacts, in combination with image and JPEG contamination, act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two-stage enhancement strategy in which only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
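The spatial-domain filtering strategy described can be prototyped in a few lines. The sketch below assumes an adaptive Wiener filter followed by a median filter, with the PRNU estimated from the accumulated residuals of several images via a standard maximum-likelihood-style ratio; window sizes and the synthetic data are illustrative, not the paper's configuration.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import median_filter

def noise_residual(img, win=5):
    """Spatial-domain residual: image minus its adaptive+median smoothing."""
    smoothed = wiener(img, mysize=win)          # adaptive (local-variance) filter
    smoothed = median_filter(smoothed, size=3)  # suppress remaining outliers
    return img - smoothed

def estimate_prnu(images):
    """PRNU estimate from several images of one camera:
    K = sum(W_i * I_i) / sum(I_i**2), with W_i the residual of image I_i."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img in images:
        w = noise_residual(img)
        num += w * img
        den += img ** 2
    return num / np.maximum(den, 1e-9)

# Synthetic demo: a fixed multiplicative pattern shared across exposures
rng = np.random.default_rng(0)
k_true = 0.02 * rng.standard_normal((64, 64))
imgs = [100.0 * (1 + k_true) + rng.normal(0, 2, (64, 64)) for _ in range(20)]
k_hat = estimate_prnu(imgs)
print(np.corrcoef(k_true.ravel(), k_hat.ravel())[0, 1])  # clearly > 0
```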
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbert, J.H.
This brief note describes the probabilistic structure of the Arps/Roberts (A/R) model of petroleum discovery. A model similar to the A/R model is derived from probabilistic propositions and is demonstrated to be similar to the E. Barouch/G.M. Kaufman (B/K) model and to the Drew, Schuenemeyer, and Root (D/S/R) model. This note attempts to elucidate and simplify some fundamental ideas contained in an unpublished paper by Barouch and Kaufman. This note and its predecessor paper do not attempt to address the wide variety of statistical approaches for estimating petroleum resource availability. Rather, an attempt is made to draw attention to characteristics of certain methods that are commonly used, both formally and informally, to estimate a petroleum resource base for a basin or a nation. Some of these characteristics are statistical, but many are not, except in the broadest sense of the term.
Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments
Sun, Tongyang; Duan, Lihong; Wang, Yulong
2017-01-01
Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. To improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analysis of the patient's motion ability. The method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased, unlike therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles of each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
NASA Technical Reports Server (NTRS)
Houlahan, Padraig; Scalo, John
1992-01-01
A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed by most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern used during acquisition of TLS data by most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which prevents errors in normal estimation from propagating into the segmentation. Both an indoor and an outdoor scene are used in experiments to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
NASA Technical Reports Server (NTRS)
Gillies, Robert R.; Carlson, Toby N.
1995-01-01
This study outlines a method for the estimation of regional patterns of surface moisture availability (M(sub 0)) and fractional vegetation (Fr) in the presence of spatially variable vegetation cover. The method requires relating variations in satellite-derived (NOAA Advanced Very High Resolution Radiometer (AVHRR)) surface radiant temperature to a vegetation index (computed from satellite visible and near-infrared data) while coupling this association to an inverse modeling scheme. More than merely furnishing surface soil moisture values, the method constitutes a new conceptual and practical approach for combining thermal infrared and vegetation index measurements and for incorporating the derived values of M(sub 0) into hydrologic and atmospheric prediction models. Application of the technique is demonstrated for a region in and around the city of Newcastle upon Tyne in the northeast of England. A regional estimate of M(sub 0) is derived and is probably good for fractional vegetation cover up to 80%, beyond which errors in the estimated soil water content become unacceptably large. Moreover, a normalization scheme is suggested from which a nomogram, the 'universal triangle,' is constructed and is seen to fit the observed data well. The universal triangle also simplifies the inclusion of remotely derived M(sub 0) in hydrologic and meteorological models and is perhaps a practicable step toward integrating data derived from satellite measurements into weather forecasting.
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in the process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to a history of sufficient length in the past. Apart from the inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. The back propagation used in training the previous weights is limited to a certain memory length (in the examples we consider, 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking next-day forecasts for each of the EUR-TRY and EUR-HUF exchange rates.
Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke
2015-08-01
To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillary and mandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border moulded with impression compound and a silicone impression material; the simplified method used a stock tray and alginate. Participants were randomly divided into two groups. The C-S group received the conventional method first, followed by the simplified method; the S-C group received them in the reverse order. Adjustment was performed four times, and a wash-out period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J). Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was rated significantly more acceptable than the simplified method. No significant differences were observed between the two methods in OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray followed by a final impression made using silicone in a border-moulded custom tray, were rated significantly higher for general patient satisfaction than those fabricated with the simplified method. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lines, L.; Burton, A.; Lu, H.X.
Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves methods such as normal moveout (NMO) analysis, seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantages and disadvantages. Conventional NMO methods are relatively inexpensive but require simplifying assumptions about the geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model that minimizes the depth differences between seismic images and formation depths at the well by using a least-squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce this ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) is applied to characterize diffuse light propagation in the medium, and the statistical-estimation-based MLEM algorithm combined with a filter function is used to solve the inverse problem. We systematically demonstrate the performance of our method through regular-geometry- and digital-mouse-based simulations and a liver-cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
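The MLEM update at the heart of such a scheme is short. Below is a generic sketch for a linear model y = Ax with nonnegativity, using a Gaussian filter as a stand-in for the paper's filter function; the SP_N forward model is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filtered_mlem(A, y, n_iter=200, sigma=1.0):
    """MLEM iterations x <- x * A^T(y / Ax) / A^T 1, with a smoothing
    (filter) step after each update; nonnegativity is preserved."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # A^T 1 (sensitivity)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)        # forward projection
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
        x = gaussian_filter1d(x, sigma)        # assumed filter form
    return x

# Toy 1-D demo: two point sources seen through a Gaussian blur
rng = np.random.default_rng(0)
n = 100
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 4.0) ** 2)
x_true = np.zeros(n); x_true[30], x_true[70] = 5.0, 3.0
y = rng.poisson(A @ x_true).astype(float)
x_hat = filtered_mlem(A, y)
print(x_hat[[30, 50, 70]])   # mass near 30 and 70, little in between
```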
NASA Astrophysics Data System (ADS)
Zhang, Hua-qing; Sun, Xi-ping; Wang, Yuan-zhan; Yin, Ji-long; Wang, Chao-yang
2015-10-01
There has been a growing trend in the development of offshore deep-water ports in China. For such deep sea projects, all-vertical-piled wharves are suitable structures and generally located in open waters, greatly affected by wave action. Currently, no systematic studies or simplified numerical methods are available for deriving the dynamic characteristics and dynamic responses of all-vertical-piled wharves under wave cyclic loads. In this article, we compare the dynamic characteristics of an all-vertical-piled wharf with those of a traditional inshore high-piled wharf through numerical analysis; our research reveals that the vibration period of an all-vertical-piled wharf under cyclic loading is longer than that of an inshore high-piled wharf and is much closer to the period of the loading wave. Therefore, dynamic calculation and analysis should be conducted when designing and calculating the characteristics of an all-vertical-piled wharf. We establish a dynamic finite element model to examine the dynamic response of an all-vertical-piled wharf under wave cyclic loads and compare the results with those under wave equivalent static load; the comparison indicates that dynamic amplification of the structure is evident when the wave dynamic load effect is taken into account. Furthermore, a simplified dynamic numerical method for calculating the dynamic response of an all-vertical-piled wharf is established based on the P-Y curve. Compared with finite element analysis, the simplified method is more convenient to use and applicable to large structural deformation while considering the soil non-linearity. We confirmed that the simplified method has acceptable accuracy and can be used in engineering applications.
Feasibility study on a strain based deflection monitoring system for wind turbine blades
NASA Astrophysics Data System (ADS)
Lee, Kyunghyun; Aihara, Aya; Puntsagdash, Ganbayar; Kawaguchi, Takayuki; Sakamoto, Hiraku; Okuma, Masaaki
2017-01-01
The bending stiffness of wind turbine blades has decreased with the trend toward wind turbine upsizing; consequently, the risk of blade breakage from striking the tower has increased. In order to prevent such incidents, this study proposes a deflection monitoring system that can be installed on the blades of wind turbines already in operation. The monitoring system is composed of an estimation algorithm to detect blade deflection and a wireless sensor network as the hardware. For the estimation of blade deflection, a strain-based estimation algorithm and an objective function for optimal sensor arrangement are proposed. The strain-based estimation algorithm uses a linear correlation between strains and deflections, which can be expressed in the form of a transformation matrix. The objective function includes terms for strain sensitivity and the condition number of the transformation matrix between strain and deflection. In order to calculate the objective function, a simplified experimental model of the blade is constructed by interpolating the blade's mode shapes from modal testing. The interpolation method is practical for the blades of operating wind turbines, since it does not require a finite element model of the blade. In addition, a wireless sensor network based on open-source hardware is developed. It is installed on a 300 W scale wind turbine, and blade vibration during operation is investigated.
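The linear strain-to-deflection relation and the two ingredients of the proposed objective can be sketched directly. Assuming deflections d relate to strains ε through d = Tε, T can be fitted from calibration pairs by least squares, and a candidate sensor layout can be scored by strain sensitivity and the condition number of T; the exact functional form of the objective below is an assumption for illustration.

```python
import numpy as np

def fit_transformation(strains, deflections):
    """Least-squares fit of T in d = T @ eps from calibration pairs.
    strains: (m, n_sensors); deflections: (m, n_points)."""
    X, *_ = np.linalg.lstsq(strains, deflections, rcond=None)
    return X.T                              # T, shape (n_points, n_sensors)

def layout_score(strains, deflections, w=1.0):
    """Candidate-layout objective (assumed form): lower is better.
    Penalizes an ill-conditioned T, rewards average strain sensitivity."""
    T = fit_transformation(strains, deflections)
    return np.linalg.cond(T) - w * np.abs(strains).mean()

# Toy calibration data built from two interpolated mode shapes
rng = np.random.default_rng(0)
strain_modes = rng.normal(size=(2, 6))      # 2 modes x 6 strain gauges
shape_modes = rng.normal(size=(2, 4))       # 2 modes x 4 deflection points
q = rng.normal(size=(50, 2))                # modal amplitudes, 50 samples
eps = q @ strain_modes + 1e-3 * rng.normal(size=(50, 6))
d = q @ shape_modes
T = fit_transformation(eps, d)
print(np.abs(d - eps @ T.T).max())          # small reconstruction error
print(layout_score(eps, d))                 # score for this sensor layout
```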
Estimating loop length from CryoEM images at medium resolutions.
McKnight, Andrew; Si, Dong; Al Nasr, Kamal; Chernikov, Andrey; Chrisochoides, Nikos; He, Jing
2013-01-01
De novo protein modeling approaches utilize 3-dimensional (3D) images derived from electron cryomicroscopy (CryoEM) experiments. The skeleton connecting two secondary structures such as α-helices represents the loop in the 3D image. The accuracy of the skeleton and of the detected secondary structures is critical in de novo modeling. It is important to measure the length along the skeleton accurately, since the length can be used as a constraint in modeling the protein. We have developed a novel computational geometric approach that derives a simplified curve in order to estimate the loop length along the skeleton. The method was tested using fifty simulated density images of helix-loop-helix segments of atomic structures and eighteen experimentally derived density maps from the Electron Microscopy Data Bank (EMDB). The test using simulated density maps shows that it is possible to estimate within 0.5 Å of the expected length in 48 of the 50 cases. The experiments involving the eighteen experimentally derived CryoEM images show that twelve cases have an error within 2 Å. The tests using both simulated and experimentally derived images show that our proposed method can estimate the loop length along the skeleton provided the secondary structure elements, such as α-helices, are detected accurately and a continuous skeleton links the α-helices.
Estimation of actual evapotranspiration in the Nagqu river basin of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Zou, Mijun; Zhong, Lei; Ma, Yaoming; Hu, Yuanyuan; Feng, Lu
2018-05-01
As a critical component of the energy and water cycle, terrestrial actual evapotranspiration (ET) is influenced by many factors. This study was devoted to providing accurate and continuous estimates of actual ET for the Tibetan Plateau (TP) and analyzing the effects of its impact factors. Summer observational data from the Coordinated Enhanced Observing Period (CEOP) Asia-Australia Monsoon Project (CAMP) on the Tibetan Plateau (CAMP/Tibet) for 2003 to 2004 were selected to determine actual ET and investigate its relationship with energy, hydrological, and dynamical parameters. Multiple-layer air temperature, relative humidity, net radiation flux, wind speed, precipitation, and soil moisture were used to estimate actual ET. The regression model simulation results were validated against independent data retrieved using the combinatory method. The results suggest that significant correlations exist between actual ET and hydro-meteorological parameters in the surface layer of the Nagqu river basin, among which the most important factors are energy-related elements (net radiation flux and air temperature). The results also suggest that whether ET is ultimately enhanced or suppressed by precipitation and the two-layer wind speed difference depends on which of their positive or negative feedback processes plays the more important role. The multivariate linear regression method provided reliable estimates of actual ET; accordingly, 6-parameter simplified schemes and 14-parameter regular schemes were established.
Stephan, Peter; Schmid, Christina; Freckmann, Guido; Pleus, Stefan; Haug, Cornelia; Müller, Peter
2015-10-09
The measurement accuracy of systems for self-monitoring of blood glucose (SMBG) is usually analyzed by a method comparison in which the analysis results are displayed using difference plots or similar graphs. However, such plots become difficult to comprehend as the number of data points displayed increases. This article introduces a new approach, the rectangle target plot (RTP), which aims to provide a simplified and comprehensible visualization of accuracy data. The RTP is based on ISO 15197 accuracy evaluations of SMBG systems. Two-sided tolerance intervals for normally distributed data are calculated for absolute and relative differences at glucose concentrations <100 mg/dL and ≥100 mg/dL. These tolerance intervals provide an estimator of where a 90% proportion of results is found with a confidence level of 95%. Plotting these tolerance intervals generates a rectangle whose center indicates the systematic measurement difference of the investigated system relative to the comparison method. The size of the rectangle depends on the measurement variability. The RTP provides a means of displaying measurement accuracy data in a simple and comprehensible manner. The visualization is simplified by reducing the displayed information from typically 200 data points to just 1 rectangle. Furthermore, this allows data for several systems or several lots from 1 system to be displayed clearly and concisely in a single graph. © 2015 Diabetes Technology Society.
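The tolerance intervals behind the rectangle can be computed with a standard two-sided normal tolerance factor. Below is a sketch using Howe's approximation for the k factor (90% proportion, 95% confidence), applied to one illustrative set of relative differences; the paper's exact computation may differ in detail.

```python
import numpy as np
from scipy.stats import norm, chi2

def tolerance_interval(x, proportion=0.90, confidence=0.95):
    """Two-sided normal tolerance interval, Howe's approximation:
    mean +/- k*sd covers `proportion` of the population with `confidence`."""
    n = len(x)
    z = norm.ppf((1 + proportion) / 2)
    k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2.ppf(1 - confidence, n - 1))
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

# Illustrative relative differences (%) for glucose >= 100 mg/dL
rng = np.random.default_rng(0)
rel_diff = rng.normal(loc=1.5, scale=4.0, size=100)
print(tolerance_interval(rel_diff))   # one edge pair of the rectangle
```

The center of the returned interval indicates the systematic difference; its width reflects the measurement variability, which is what sets the size of the plotted rectangle.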
Durand, Jean-Baptiste; Allard, Alix; Guitton, Baptiste; van de Weg, Eric; Bink, Marco C A M; Costes, Evelyne
2017-01-01
Irregular flowering over years is commonly observed in fruit trees. The early prediction of tree behavior is highly desirable in breeding programmes. This study aims at performing such predictions, combining simplified phenotyping and statistical methods. Sequences of vegetative vs. floral annual shoots (AS) were observed along axes in trees belonging to five apple related full-sib families. Sequences were analyzed using Markovian and linear mixed models including year and site effects. Indices of flowering irregularity, periodicity and synchronicity were estimated, at tree and axis scales. They were used to predict tree behavior and detect QTL with a Bayesian pedigree-based analysis, using an integrated genetic map containing 6,849 SNPs. The combination of a Biennial Bearing Index (BBI) with an autoregressive coefficient (γ_g) efficiently predicted and classified the genotype behaviors, despite a few misclassifications. Four QTLs common to BBIs and γ_g, and one for synchronicity, were highlighted and revealed the complex genetic architecture of the traits. Irregularity resulted from high AS synchronism, whereas regularity resulted from either asynchronous locally alternating or continually regular AS flowering. A relevant and time-saving method based on a posteriori sampling of axes and statistical indices is proposed, which is efficient for evaluating tree breeding values for flowering regularity and could be transferred to other species.
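One common definition of a biennial bearing index, together with a lag-1 autocorrelation standing in for the autoregressive coefficient γ_g, can be computed from an annual flowering series in a few lines. The exact indices used in the study may differ in detail, and the series below are invented.

```python
import numpy as np

def bbi(y):
    """Biennial bearing index: mean of |y[t+1]-y[t]| / (y[t+1]+y[t])."""
    y = np.asarray(y, float)
    return np.mean(np.abs(np.diff(y)) / (y[1:] + y[:-1]))

def gamma_g(y):
    """Lag-1 autocorrelation; strongly negative values flag alternation."""
    y = np.asarray(y, float) - np.mean(y)
    return np.sum(y[1:] * y[:-1]) / np.sum(y**2)

alternating = [120, 10, 115, 15, 125, 8]       # strongly biennial tree
regular     = [60, 65, 58, 62, 61, 59]         # regular tree
print(bbi(alternating), gamma_g(alternating))  # high BBI, negative gamma
print(bbi(regular), gamma_g(regular))          # low BBI, gamma near 0
```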
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
48 CFR 713.000 - Scope of part.
Code of Federal Regulations, 2011 CFR
2011-10-01
48 CFR 713.000, Federal Acquisition Regulations System; AGENCY FOR INTERNATIONAL DEVELOPMENT; CONTRACTING METHODS AND CONTRACT TYPES; SIMPLIFIED ACQUISITION PROCEDURES; Scope of part. The simplified...
48 CFR 713.000 - Scope of part.
Code of Federal Regulations, 2010 CFR
2010-10-01
48 CFR 713.000, Federal Acquisition Regulations System; AGENCY FOR INTERNATIONAL DEVELOPMENT; CONTRACTING METHODS AND CONTRACT TYPES; SIMPLIFIED ACQUISITION PROCEDURES; Scope of part. The simplified...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences, as well as improving the ELISA microarray process, requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method for reliably predicting protein concentrations and estimating their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%.
Cost-effectiveness of human papillomavirus vaccination in the United States.
Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Markowitz, Lauri E
2008-02-01
We describe a simplified model, based on the current economic and health effects of human papillomavirus (HPV), to estimate the cost-effectiveness of HPV vaccination of 12-year-old girls in the United States. Under base-case parameter values, the estimated cost per quality-adjusted life year gained by vaccination in the context of current cervical cancer screening practices in the United States ranged from $3,906 to $14,723 (2005 US dollars), depending on factors such as whether herd immunity effects were assumed; the types of HPV targeted by the vaccine; and whether the benefits of preventing anal, vaginal, vulvar, and oropharyngeal cancers were included. The results of our simplified model were consistent with published studies based on more complex models when key assumptions were similar. This consistency is reassuring because models of varying complexity will be essential tools for policy makers in the development of optimal HPV vaccination strategies.
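The headline quantity reduces to a simple incremental ratio. Below is a sketch with placeholder cohort totals, not the study's inputs.

```python
def cost_per_qaly(cost_vacc, cost_no_vacc, qaly_vacc, qaly_no_vacc):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_vacc - cost_no_vacc) / (qaly_vacc - qaly_no_vacc)

# Placeholder cohort totals (discounted); not from the study.
print(cost_per_qaly(360e6, 300e6, 101_500, 95_000))  # ~$9,231 per QALY
```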
Influence of model reduction on uncertainty of flood inundation predictions
NASA Astrophysics Data System (ADS)
Romanowicz, R. J.; Kiczko, A.; Osuch, M.
2012-04-01
Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedence, e.g. a 100 or 500 year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of a flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions and the estimates of model parameters which are usually identified using the inverse problem based on the available noisy observations. Therefore, there is a large uncertainty involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for the urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters, with all the bridges and dikes taken into account, with a reduced number and without any water infrastructure. The results indicate that roughness parameter values of a 1-D HEC-RAS model can be adjusted for the reduction in model structure. However, the price we pay is the model robustness. Apart from a relatively simple question regarding reducing model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that the uncertainties have a substantial influence on flood risk assessment. In the paper we present a simplified methodology allowing the influence of that uncertainty to be assessed. This work was supported by National Science Centre of Poland (grant 2011/01/B/ST10/06866).
Doyle, Jacqueline M; McCormick, Cory R; DeWoody, J Andrew
2011-01-01
Many animals, such as crustaceans, insects, and salamanders, package their sperm into spermatophores, and the number of spermatozoa contained in a spermatophore is relevant to studies of sexual selection and sperm competition. We used two molecular methods, real-time quantitative polymerase chain reaction (RT-qPCR) and spectrophotometry, to estimate sperm numbers from spermatophores. First, we designed gene-specific primers that produced a single amplicon in four species of ambystomatid salamanders. A standard curve generated from cloned amplicons revealed a strong positive relationship between template DNA quantity and cycle threshold, suggesting that RT-qPCR could be used to quantify sperm in a given sample. We then extracted DNA from multiple Ambystoma maculatum spermatophores, performed RT-qPCR on each sample, and estimated template copy numbers (i.e. sperm number) using the standard curve. Second, we used spectrophotometry to determine the number of sperm per spermatophore by measuring DNA concentration relative to the genome size. We documented a significant positive relationship between the estimates of sperm number based on RT-qPCR and those based on spectrophotometry. When these molecular estimates were compared to spermatophore cap size, which in principle could predict the number of sperm contained in the spermatophore, we also found a significant positive relationship between sperm number and spermatophore cap size. This linear model allows estimates of sperm number strictly from cap size, an approach which could greatly simplify the estimation of sperm number in future studies. These methods may help explain variation in fertilization success where sperm competition is mediated by sperm quantity. © 2010 Blackwell Publishing Ltd.
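The standard-curve step is ordinary linear regression of cycle threshold against log copy number, followed by inversion for unknown samples. Below is a sketch with invented calibration points; the genome-copies-per-sperm conversion is an assumption to be set per species.

```python
import numpy as np

# Standard curve from cloned amplicon dilutions (illustrative values)
log10_copies = np.array([3, 4, 5, 6, 7], dtype=float)
ct           = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

slope, intercept = np.polyfit(log10_copies, ct, 1)
efficiency = 10 ** (-1 / slope) - 1   # ~1.0 means 100% efficient PCR
print(f"slope={slope:.2f}, efficiency={efficiency:.2%}")

def sperm_count(ct_sample, copies_per_sperm=1.0):
    """Invert the standard curve: Ct -> template copies -> sperm number."""
    copies = 10 ** ((ct_sample - intercept) / slope)
    return copies / copies_per_sperm

print(sperm_count(21.5))   # estimated sperm in the spermatophore extract
```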
Szadkowska, Agnieszka; Pietrzak, Iwona; Mianowska, Beata; Markuszewski, Leszek; Bodalska-Lipińska, Joanna; Bodalski, Jerzy
2006-01-01
Our own studies confirm the hypothesis that insulin resistance of varying degree is often observed in children and adolescents with type 1 diabetes mellitus (T1DM). Knowledge of this parameter for individual patients may be of great value not only for better understanding of the disease course but also as a potential basis for specific treatment. Reliable estimation of insulin resistance with the hyperinsulinemic euglycemic clamp is a complex, laborious and costly procedure. These facts motivated us to attempt to elaborate an indirect, simplified method of insulin resistance assessment in T1DM children, based on patients' characteristics and on clinical parameters of the disease course. 142 children and adolescents with T1DM (79 boys, 63 girls) aged 7.7-20.3 years (mean age 13.7+/-3.3 years) were included in the study. Duration of diabetes was 0.5-12.5 years (mean 2.7+/-2.3 years). The stage of puberty was assessed by the Tanner scale. The euglycemic-hyperinsulinemic clamp according to DeFronzo was performed to estimate insulin resistance. The glucose disposal rate (M index), determined during the last 30 min of the test, served as the measure of insulin resistance. In the search for clinical and metabolic factors characterizing insulin resistance: a) plasma cholesterol, HDL-Ch, triglycerides and HbA1c were examined; b) height, weight, waist circumference and blood pressure were measured; c) body mass index and daily insulin dose were calculated. For statistical analysis, multiple regression (forward stepwise method) was used. In the study group the M index ranged from 2.1 to 17.4 mg/kg/min (mean 7.27+/-2.62 mg/kg/min). The boys presented better insulin sensitivity than the girls (7.79 vs. 6.62, p=0.008). Insulin resistance depended on the patients' age (r=-0.46, p<0.001) and stage of puberty (p<0.001). Correlations between the M index and insulin dose (r=-0.34, p<0.05) and HbA1c (r=-0.17, p=0.04) were found. There was a significant relationship between the M index and parameters of adiposity, lipids and blood pressure. All significant clinical parameters of insulin resistance were subjected to multiple linear regression analysis, and the model with the strongest correlation with the M index was used to derive the formula: M index = 17.065 + 1.547 × gender (boys=1, girls=0) − 0.183 × age − 0.117 × waist circumference − 2.019 × daily insulin dose − 0.016 × LDL-Ch + 0.041 × DBP. In T1DM children and adolescents it is thus possible to estimate, for everyday use, the extent of insulin resistance on the basis of clinical features.
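The reported regression translates directly into code. Unit conventions (age in years, waist circumference in cm, daily insulin dose presumably in U/kg, LDL-Ch in mg/dL, DBP in mmHg) are assumptions that should be checked against the original paper.

```python
def m_index(male, age, waist_cm, insulin_dose, ldl, dbp):
    """Estimated glucose disposal rate (mg/kg/min) from the fitted model.
    male: 1 for boys, 0 for girls. Units are assumed (age in years,
    waist in cm, dose in U/kg/day, LDL-Ch in mg/dL, DBP in mmHg) and
    should be verified against the original study."""
    return (17.065 + 1.547 * male - 0.183 * age - 0.117 * waist_cm
            - 2.019 * insulin_dose - 0.016 * ldl + 0.041 * dbp)

# Illustrative patient, not from the study:
print(m_index(male=1, age=14, waist_cm=70, insulin_dose=0.9, ldl=100, dbp=70))
# ~7.3 mg/kg/min, close to the cohort mean reported above
```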
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider the iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function we suggest a probabilistic distance between the posterior distribution and its quantized version, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithmic quantized posterior distribution on average, which can be further reduced computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified form of the quantized posterior distribution, and we argue that the algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use with power-constrained nodes. We finally demonstrate, through extensive experiments, a clear advantage in estimation performance compared with typical designs and with novel design techniques previously published.
Masood, Athar; Stark, Ken D; Salem, Norman
2005-10-01
Conventional sample preparation for fatty acid analysis is a complicated, multiple-step process, and gas chromatography (GC) analysis alone can require >1 h per sample to resolve fatty acid methyl esters (FAMEs). Fast GC analysis was adapted to human plasma FAME analysis using a modified polyethylene glycol column with smaller internal diameters, thinner stationary phase films, increased carrier gas linear velocity, and faster temperature ramping. Our results indicated that fast GC analyses were comparable to conventional GC in peak resolution. A conventional transesterification method based on Lepage and Roy was simplified to a one-step method with the elimination of the neutralization and centrifugation steps. A robotics-amenable method was also developed, with lower methylation temperatures and in an open-tube format using multiple reagent additions. The simplified methods produced results that were quantitatively similar and with similar coefficients of variation as compared with the original Lepage and Roy method. The present streamlined methodology is suitable for the direct fatty acid analysis of human plasma, is appropriate for research studies, and will facilitate large clinical trials and make possible population studies.
An exact solution of a simplified two-phase plume model. [for solid propellant rocket
NASA Technical Reports Server (NTRS)
Wang, S.-Y.; Roberts, B. B.
1974-01-01
An exact solution of a simplified two-phase, gas-particle, rocket exhaust plume model is presented. It may be used to make upper-bound estimates of the heat flux and pressure loads due to particle impingement on objects in the rocket exhaust plume. By including correction factors to be determined experimentally, the present technique will provide realistic data concerning the heat and aerodynamic loads on these objects for design purposes. Excellent agreement in trend between the best available computer solution and the present exact solution is shown.
Factoring 51 and 85 with 8 qubits
Geller, Michael R.; Zhou, Zhongyuan
2013-01-01
We construct simplified quantum circuits for Shor's order-finding algorithm for composites N given by products of the Fermat primes 3, 5, 17, 257, and 65537. Such composites, including the previously studied case of 15, as well as 51, 85, 771, 1285, 4369, … have the simplifying property that the order of a modulo N for every base a coprime to N is a power of 2, significantly reducing the usual phase estimation precision requirement. Prime factorization of 51 and 85 can be demonstrated with only 8 qubits and a modular exponentiation circuit consisting of no more than four CNOT gates. PMID:24162074
Factoring 51 and 85 with 8 qubits.
Geller, Michael R; Zhou, Zhongyuan
2013-10-28
We construct simplified quantum circuits for Shor's order-finding algorithm for composites N given by products of the Fermat primes 3, 5, 17, 257, and 65537. Such composites, including the previously studied case of 15, as well as 51, 85, 771, 1285, 4369, … have the simplifying property that the order of a modulo N for every base a coprime to N is a power of 2, significantly reducing the usual phase estimation precision requirement. Prime factorization of 51 and 85 can be demonstrated with only 8 qubits and a modular exponentiation circuit consisting of no more than four CNOT gates.
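The simplifying property is easy to verify numerically: for these composites, the multiplicative order of every base a coprime to N is a power of 2, which is what reduces the phase estimation precision requirement. A quick check in Python:

```python
from math import gcd

def mult_order(a, n):
    """Smallest r > 0 with a**r % n == 1 (requires gcd(a, n) == 1)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

for n in (15, 51, 85):
    orders = {mult_order(a, n) for a in range(2, n) if gcd(a, n) == 1}
    assert all(r & (r - 1) == 0 for r in orders)   # all powers of two
    print(n, sorted(orders))
```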
Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.
Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana
2014-06-01
Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.
Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework†
Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana
2014-01-01
Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd. PMID:25505370
Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods
NASA Technical Reports Server (NTRS)
Adams, G. F.
1980-01-01
The lack of a simple rate-coefficient expression that describes the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models of unimolecular processes allow the calculation of rate constants for thermal unimolecular reactions and recombinations at the low-pressure limit, at the high-pressure limit, and in the intermediate fall-off region. A comparison between two different applications of Troe's simplified model, and a comparison between the simplified model and classic RRKM theory, are described.
NASA Astrophysics Data System (ADS)
Behroozi-Toosi, A. B.; Booker, H. G.
1980-12-01
The simplified theory of ELF wave propagation in the earth-ionosphere transmission line developed by Booker (1980) is applied to a simplified worldwide model of the ionosphere. The theory, which compares the local vertical refractive-index gradient with the local wavelength in order to classify altitudes into regions of low and high gradient, is used with a model of electron and negative-ion profiles in the D and E regions below 150 km. Attention is given to the frequency dependence of ELF propagation at middle latitude under daytime conditions, the daytime latitude dependence of ELF propagation at the equinox, the effects of sunspot, seasonal and diurnal variations on propagation, nighttime propagation neglecting and including propagation above 100 km, and the effect of a sudden ionospheric disturbance on daytime ELF propagation. The numerical values obtained by the method for the propagation velocity and attenuation rate are shown to be in general agreement with the analytic Naval Ocean Systems Center computer program. It is concluded that the method gives more physical insight into the propagation processes than any other method, while requiring less effort and providing maximal accuracy.
NASA Astrophysics Data System (ADS)
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF-regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ−1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃7% and ≃10% at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ−1 of the IGM temperature-density relation with a precision of ±8.6% at z = 3 and ±6.1% at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
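The second step, ranking re-estimated parameter subsets by AIC, can be sketched generically. For a least-squares calibration with Gaussian errors, AIC = n·ln(RSS/n) + 2k with k the number of freed parameters; this form and the toy calibration below are assumptions for illustration, not the paper's exact likelihood.

```python
import numpy as np
from itertools import combinations

def aic_least_squares(rss, n_obs, n_params):
    """AIC for a Gaussian least-squares fit (up to an additive constant)."""
    return n_obs * np.log(rss / n_obs) + 2 * n_params

def select_subset(candidates, calibrate, n_obs, max_size=3):
    """Try subsets of the influential parameters; keep the lowest AIC.
    `calibrate(subset)` re-fits only those parameters and returns RSS."""
    best = (np.inf, ())
    for size in range(1, max_size + 1):
        for subset in combinations(candidates, size):
            rss = calibrate(subset)
            best = min(best, (aic_least_squares(rss, n_obs, len(subset)), subset))
    return best

# Toy stand-in for the model calibration (RSS shrinks with useful params)
gains = {"p1": 40.0, "p2": 25.0, "p3": 1.0, "p4": 0.5}
calibrate = lambda s: 100.0 - sum(gains[p] for p in s)
print(select_subset(["p1", "p2", "p3", "p4"], calibrate, n_obs=200))
```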
Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.
Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni
2016-01-01
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, estimating the position of the vehicle is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth, based on a stochastic triangulation technique. In the proposed method, the camera is mounted on a servo-controlled gimbal that counteracts changes in the attitude of the quadcopter. This simplifies the overall problem and focuses it on the position estimation of the aerial vehicle; the stabilized video also makes the tracking of visual features easier. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
Comparison of two trajectory based models for locating particle sources for two rural New York sites
NASA Astrophysics Data System (ADS)
Zhou, Liming; Hopke, Philip K.; Liu, Wei
Two back-trajectory-based statistical models, simplified quantitative transport bias analysis (QTBA) and residence-time weighted concentrations (RTWC), have been compared for their capability to identify likely locations of source emissions contributing to observed particle concentrations at Potsdam and Stockton, New York. QTBA attempts to take into account the distribution of concentrations around the directions of the back trajectories. In the full QTBA approach, deposition processes (wet and dry) are also considered; simplified QTBA omits deposition and is best used with multiple-site data. Similarly, the RTWC approach uses concentrations measured at different sites along with the back trajectories to distribute the concentration contributions across the spatial domain of the trajectories. In this study, these models are used in combination with the source contribution values obtained by a previous positive matrix factorization analysis of particle composition data from Potsdam and Stockton. The six sources common to the two sites (sulfate, soil, zinc smelter, nitrate, wood smoke and copper smelter) were analyzed. The results of the two methods are consistent, and both locate large and clearly defined sources well. The RTWC approach can resolve more minor sources but may also give unrealistic estimates of source locations.
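The RTWC weighting itself is simple enough to sketch. Assuming hourly back-trajectory endpoints already binned to grid cells, and per-sample source contributions (e.g., PMF factor scores) as inputs — both hypothetical here — each cell receives the residence-time-weighted mean concentration of the trajectories that crossed it:

    import numpy as np

    def rtwc_field(trajectories, contributions, grid_shape):
        # trajectories: list of (i, j) index arrays, one per sample, giving
        # the grid cells visited hourly by that sample's back trajectory.
        # contributions: the measured source contribution for each sample.
        weighted = np.zeros(grid_shape)
        hours = np.zeros(grid_shape)
        for cells, conc in zip(trajectories, contributions):
            for i, j in cells:
                weighted[i, j] += conc   # one endpoint = one residence hour
                hours[i, j] += 1.0
        return np.where(hours > 0, weighted / np.maximum(hours, 1.0), np.nan)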
NASA/FAA general aviation crash dynamics program
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Hayduk, R. J.; Carden, H. D.
1981-01-01
The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.
Examination of simplified travel demand model [Internal Volume Forecasting model]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.L. Jr.; McFarlane, W.J.
1978-01-01
A simplified travel demand model, the Internal Volume Forecasting (IVF) model, proposed by Low in 1972, is evaluated as an alternative to the conventional urban travel demand modeling process. Calibration of the IVF model for a county-level study area in Central Wisconsin results in what appears to be a reasonable model; however, analysis of the structure of the model reveals two primary mis-specifications. Correcting the mis-specifications leads to a simplified gravity-model version of the conventional urban travel demand models. Application of the original IVF model to "forecast" 1960 traffic volumes based on the model calibrated for 1970 produces accurate estimates. Shortcut and ad hoc models may appear to provide reasonable results in both the base and horizon years; however, as shown by the IVF model, such models will not always provide a reliable basis for transportation planning and investment decisions.
A fluid model simulation of a simplified plasma limiter based on spectral-element time-domain method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Cheng; Ding, Dazhi, E-mail: dzding@njust.edu.cn; Fan, Zhenhong
2015-03-15
A simplified plasma limiter prototype is proposed, and a fluid model coupled with Maxwell's equations is established to describe the operating mechanism of the plasma limiter. A three-dimensional (3-D) simplified sandwich-structure plasma limiter model is analyzed with the spectral-element time-domain (SETD) method. The field breakdown thresholds of air and argon at different frequencies are predicted and compared with experimental data, with good agreement between them for gas microwave breakdown discharge problems. Numerical results demonstrate that the two-layer plasma limiter (plasma-slab-plasma) has better protective characteristics than a one-layer plasma limiter (slab-plasma-slab) with the same length of gas chamber.
NASA Astrophysics Data System (ADS)
Gómez, C. D.; González, C. M.; Osses, M.; Aristizábal, B. H.
2018-04-01
Emission data are an essential tool for understanding environmental problems associated with the sources and dynamics of air pollutants in urban environments, especially those emitted from vehicular sources. There is a lack of knowledge about the estimation of air pollutant emissions, and particularly their spatial and temporal distribution, in South America, mainly in medium-sized cities with populations below one million inhabitants. This work performed the spatial and temporal disaggregation of the on-road vehicle emission inventory (EI) in the medium-sized Andean city of Manizales, Colombia, with a spatial resolution of 1 km × 1 km and a temporal resolution of 1 h. A reported top-down methodology, based on the analysis of traffic flow levels and road network distribution, was applied. The results allowed the identification of several emission hotspots in the downtown zone and in the residential-commercial area of Manizales. Downtown exhibited the highest percentage contribution of emissions normalized by its total area, with values equal to 6% and 5% of total CO and PM10 emissions per km2, respectively. These indexes were higher than those obtained in the residential-commercial area, with values of 2%/km2 for both pollutants. The temporal distribution showed a strong relationship with driving patterns at rush hours, as well as an important influence of passenger cars and motorcycles on CO emissions both downtown and in the residential-commercial area, and an impact of public transport on PM10 emissions in the residential-commercial zone. Considering that detailed information about traffic counts and road network distribution is not always available in medium-sized cities, this work compares other simplified top-down methods for spatially assessing the on-road vehicle EI. Results suggested that simplified methods could underestimate the spatial allocation of downtown emissions, a zone dominated by high vehicle traffic. The comparison between simplified methods based on total traffic counts and on road density distribution suggested that using total traffic counts in a simplified form could introduce higher uncertainties in the spatial disaggregation of emissions. The results add new information that can help improve the air pollution management system in the city and contribute to local public policy decisions. Additionally, this work provides emission fluxes at a resolution appropriate for ongoing research in atmospheric modeling in the city, with the aim of improving the understanding of the transport, transformation and impacts of pollutant emissions on urban air quality.
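The core of such a top-down disaggregation is a proportional allocation of the city-total inventory over grid cells using a surrogate weight (traffic counts or road density) and an hourly activity profile. A minimal sketch, with all weights and profiles as hypothetical inputs:

    import numpy as np

    def disaggregate(total_emission, cell_weights, hourly_profile):
        # Split an annual city-total emission across grid cells in
        # proportion to a surrogate (traffic counts or road density),
        # then across hours of the day with a normalized activity profile.
        w = np.asarray(cell_weights, float)
        p = np.asarray(hourly_profile, float)
        spatial = total_emission * w / w.sum()        # per cell
        return np.outer(spatial, p / p.sum())         # per cell and hour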
A simplified method for extracting androgens from avian egg yolks
Kozlowski, C.P.; Bauman, J.E.; Hahn, D.C.
2009-01-01
Female birds deposit significant amounts of steroid hormones into the yolks of their eggs. Studies have demonstrated that these hormones, particularly androgens, affect nestling growth and development. In order to measure androgen concentrations in avian egg yolks, most authors follow the extraction methods outlined by Schwabl (1993. Proc. Nat. Acad. Sci. USA 90:11446-11450). We describe a simplified method for extracting androgens from avian egg yolks. Our method, which has been validated through recovery and linearity experiments, consists of a single ethanol precipitation that produces substantially higher recoveries than those reported by Schwabl.
Simplified, inverse, ejector design tool
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1993-01-01
A simple lumped-parameter-based inverse design tool has been developed which provides flow path geometry and entrainment estimates subject to operational, acoustic, and design constraints. These constraints are manifested through specification of primary mass flow rate or ejector thrust, fully-mixed exit velocity, and static pressure matching. Fundamentally, integral forms of the conservation equations are combined with the specified design constraints to yield an easily invertible linear system in terms of the flow path cross-sectional areas. Entrainment is computed by back substitution. Initial comparisons with experimental data and analogous one-dimensional methods show good agreement. Thus, this simple inverse design code provides an analytically based preliminary design tool with direct application to High Speed Civil Transport (HSCT) design studies.
Geostatistics, remote sensing and precision farming.
Mulla, D J
1997-01-01
Precision farming is possible today because of advances in farming technology, procedures for mapping and interpolating spatial patterns, and geographic information systems for overlaying and interpreting several soil, landscape and crop attributes. The key component of precision farming is the map showing spatial patterns in field characteristics. Obtaining information for this map is often achieved by soil sampling. This approach, however, can be cost-prohibitive for grain crops. Soil sampling strategies can be simplified by use of auxiliary data provided by satellite or aerial photo imagery. This paper describes geostatistical methods for estimating spatial patterns in soil organic matter, soil test phosphorus and wheat grain yield from a combination of Thematic Mapper imaging and soil sampling.
Moessfit. A free Mössbauer fitting program
NASA Astrophysics Data System (ADS)
Kamusella, Sirko; Klauss, Hans-Henning
2016-12-01
A free data analysis program for Mössbauer spectroscopy was developed to address commonly faced problems such as simultaneous fitting of multiple data sets, Maximum Entropy Method analysis, and proper error estimation. The program is written in C++ using the Qt application framework and the GNU Scientific Library. Moessfit makes use of multithreading to exploit the multi-core CPU capacities of modern PCs. The whole fit is specified in a text input file, which simplifies the workflow for the user and provides an easy start in Mössbauer data analysis for beginners. The possibility to define arbitrary parameter dependencies and distributions as well as relaxation spectra makes Moessfit interesting for advanced users as well.
Kinetic characterisation of primer mismatches in allele-specific PCR: a quantitative assessment.
Waterfall, Christy M; Eisenthal, Robert; Cobb, Benjamin D
2002-12-20
A novel method of estimating the kinetic parameters of Taq DNA polymerase during rapid-cycle PCR is presented. A model was constructed using a simplified sigmoid function to represent substrate accumulation during PCR, in combination with the general equation describing high-substrate inhibition for Michaelis-Menten enzymes. The PCR progress curve was viewed as a series of independent reactions in which initial rates were accurately measured for each cycle. Kinetic parameters were obtained for allele-specific PCR (AS-PCR) amplification to examine the effect of mismatches on amplification. A high degree of correlation was obtained, providing evidence of substrate inhibition as a major cause of the plateau phase that occurs in the later cycles of PCR.
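The high-substrate-inhibition rate law referred to here is commonly written v = Vmax·S/(Km + S + S²/Ki) (the Haldane form); the sketch below evaluates it to show how the per-cycle rate collapses as substrate accumulates. Parameter values are illustrative, not from the paper.

    def inhibited_rate(s, vmax, km, ki):
        # Michaelis-Menten rate with high-substrate inhibition:
        # v = Vmax*S / (Km + S + S**2/Ki); the S**2/Ki term drives the
        # PCR plateau as substrate accumulates over the cycles.
        return vmax * s / (km + s + s ** 2 / ki)

    # Illustrative values: rate rises, peaks near sqrt(Km*Ki), then falls.
    for s in (0.1, 1.0, 10.0, 100.0):
        print(s, inhibited_rate(s, vmax=1.0, km=1.0, ki=10.0))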
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Research on carrying capacity of hydrostatic slideway on heavy-duty gantry CNC machine
NASA Astrophysics Data System (ADS)
Cui, Chao; Guo, Tieneng; Wang, Yijie; Dai, Qin
2017-05-01
Hydrostatic slideways are a key part of heavy-duty gantry CNC machines: they support the total weight of the gantry and allow it to move smoothly along the table. The oil film between the sliding rails therefore plays an important role in the carrying capacity and precision of the machine. In this paper, the frictionless oil film is simulated with three-dimensional CFD. The carrying capacity of the heavy hydrostatic slideway and the pressure and velocity characteristics of the flow field are analyzed. The simulation result is verified by comparison with experimental data obtained from a heavy-duty gantry machine. To meet engineering requirements, the oil film carrying capacity is also analyzed with a simplified theoretical method; the precision of the simplified method is evaluated and its effectiveness is verified with the experimental data. The simplified calculation method is provided for designing oil pads on the hydrostatic slideways of heavy-duty gantry CNC machines.
NASA Astrophysics Data System (ADS)
Bagherzadeh, Seyed Amin; Asadi, Davood
2017-05-01
In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in icing conditions, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve the disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified with benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in icing conditions in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.
NASA Astrophysics Data System (ADS)
Vâjâiac, Sorin Nicolae; Filip, Valeriu; Štefan, Sabina; Boscornea, Andreea
2014-03-01
The paper describes a method of assessing the size distribution of fog droplets in a cloud chamber, based on measuring the time variation of the transmission of a light beam during the gravitational settling of droplets. Using a model of light extinction by floating spherical particles, the size distribution of droplets is retrieved, along with characteristic structural parameters of the fog (total droplet concentration, liquid water content and effective radius). Moreover, the time variation of the effective radius can be readily extracted from the model. The errors of the method are also estimated and fall within acceptable limits. The method proves sensitive enough to resolve various modes in the droplet distribution and to point out changes in the distribution due to diverse types of aerosol present in the chamber or to the thermal condition of the fog. It is speculated that the method can be further simplified to reach an in-situ version for real-time field measurements.
NASA Astrophysics Data System (ADS)
Horiguchi, Fumio; Nakata, Kisaburo; Ito, Naganori; Okawa, Ken
2006-12-01
A risk assessment of Tributyltin (TBT) in Tokyo Bay was conducted using the Margin of Exposure (MOE) method at the species level using the Japanese short-neck clam, Ruditapes philippinarum. The assessment endpoint was defined to protect R. philippinarum in Tokyo Bay from TBT (growth effects). A No Observed Effect Concentration (NOEC) for this species with respect to growth reduction induced by TBT was estimated from experimental results published in the scientific literature. Sources of TBT in this study were assumed to be commercial vessels in harbors and navigation routes. Concentrations of TBT in Tokyo Bay were estimated using a three-dimensional hydrodynamic model, an ecosystem model and a chemical fate model. MOEs for this species were estimated for the years 1990, 2000, and 2007. Estimated MOEs for R. philippinarum for 1990, 2000, and 2007 were approximately 1-3, 10, and 100, respectively, indicating a declining temporal trend in the probability of adverse growth effects. A simplified software package called RAMTB was developed by incorporating the chemical fate model and the databases of seasonal flow fields and distributions of organic substances (phytoplankton and detritus) in Tokyo Bay, simulated by the hydrodynamic and ecological model, respectively.
NASA Astrophysics Data System (ADS)
Wang, Wei; Shen, Jianqi
2018-06-01
The use of a shaped beam for applications relying on light scattering depends largely on the ability to evaluate the beam shape coefficients (BSC) effectively. Numerical techniques for evaluating the BSCs of a shaped beam, such as the quadrature, localized approximation (LA) and integral localized approximation (ILA) methods, have been developed within the framework of generalized Lorenz-Mie theory (GLMT). The quadrature methods usually employ 2-/3-dimensional integrations. In this work, the expressions of the BSCs for an elliptical Gaussian beam (EGB) are simplified into a 1-dimensional integral so as to speed up the numerical computation. Numerical results of the BSCs are used to reconstruct the beam field, and the fidelity of the reconstructed field to the given beam field is estimated. It is demonstrated that the proposed method is much faster than the 2-dimensional integrations and acquires more accurate results than the LA method. Limitations of the quadrature method and of the LA method in the numerical calculation are analyzed in detail.
A Simplified Method for Implementing Run-Time Polymorphism in Fortran95
Decyk, Viktor K.; Norton, Charles D.
2004-01-01
This paper discusses a simplified technique for software emulation of inheritance and run-time polymorphism in Fortran95. This technique involves retaining the same type throughout an inheritance hierarchy, so that only functions which are modified in a derived class need to be implemented.
Using Landslide Failure Forecast Models in Near Real Time: the Mt. de La Saxe case-study
NASA Astrophysics Data System (ADS)
Manconi, Andrea; Giordan, Daniele
2014-05-01
Forecasting the occurrence of landslide phenomena in space and time is a major scientific challenge. The approaches used to forecast landslides mainly depend on the spatial scale analyzed (regional vs. local), the temporal range of the forecast (long- vs. short-term), as well as the triggering factor and the landslide typology considered. Focusing on short-term forecast methods for large, deep-seated slope instabilities, the potential time of failure (ToF) can be estimated by studying the evolution of the landslide deformation over time (i.e., strain rate), provided that, under constant stress conditions, landslide materials follow a creep mechanism before reaching rupture. In recent decades, different procedures have been proposed to estimate the ToF by applying simplified empirical and/or graphical methods to time series of deformation data. Fukuzono (1985) proposed a failure forecast method based on large-scale laboratory experiments aimed at observing the kinematic evolution of a landslide induced by rain. This approach, also known as the inverse-velocity method, considers the evolution over time of the inverse of the surface velocity (v) as an indicator of the ToF, assuming that failure approaches as 1/v tends to zero. Here we present an innovative method aimed at forecasting landslide failure from near-real-time monitoring data. Starting from the inverse-velocity theory, we analyze landslide surface displacements over different temporal windows, and then apply straightforward statistical methods to obtain confidence intervals on the time of failure. Our results can be relevant to support the management of early warning systems during landslide emergency conditions, also when predefined displacement and/or velocity thresholds are exceeded. In addition, our statistical approach for the definition of confidence intervals and forecast reliability can also be applied to different failure forecast methods. We applied the presented approach for the first time in near real time during the emergency caused by the reactivation of the La Saxe rockslide, a large mass wasting menacing the population of Courmayeur, northern Italy, and the important European route E25. We show how the application of simplified but robust forecast models can be a convenient way to manage and support early warning systems during critical situations. References: Fukuzono T. (1985), A New Method for Predicting the Failure Time of a Slope, Proc. IVth International Conference and Field Workshop on Landslides, Tokyo.
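In its simplest (linear) form, the inverse-velocity method fits a straight line to 1/v and reads the ToF off the zero crossing; repeating the fit over several trailing windows gives a spread of estimates from which a confidence interval can be built. A sketch under those assumptions (the paper's exact statistical treatment may differ):

    import numpy as np

    def forecast_tof(t, v):
        # Fukuzono inverse-velocity forecast, linear case: fit 1/v(t) with
        # a straight line and extrapolate to 1/v = 0.
        slope, intercept = np.polyfit(t, 1.0 / np.asarray(v, float), 1)
        return -intercept / slope     # time at which the fit crosses zero

    def tof_interval(t, v, windows=(10, 15, 20, 30)):
        # Repeat the fit on several trailing windows of the series and
        # summarize the spread of the resulting ToF estimates.
        est = np.array([forecast_tof(t[-w:], v[-w:]) for w in windows])
        return est.mean(), np.percentile(est, [2.5, 97.5])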
Simplified Modeling of Oxidation of Hydrocarbons
NASA Technical Reports Server (NTRS)
Bellan, Josette; Harstad, Kenneth
2008-01-01
A method of simplified computational modeling of the oxidation of hydrocarbons is undergoing development. This is one of several developments needed to enable accurate computational simulation of turbulent, chemically reacting flows. At present, accurate computational simulation of such flows is difficult or impossible in most cases because (1) the numbers of grid points needed for adequate spatial resolution of turbulent flows in realistically complex geometries are beyond the capabilities of typical supercomputers now in use and (2) the combustion of typical hydrocarbons proceeds through decomposition into hundreds of molecular species interacting through thousands of reactions. Hence, the combination of detailed reaction-rate models with the fundamental flow equations yields flow models that are computationally prohibitive, and a reduction of at least an order of magnitude in the dimension of the reaction kinetics is one of the prerequisites for feasible computational simulation of turbulent, chemically reacting flows. In the present method of simplified modeling, all molecular species involved in the oxidation of hydrocarbons are classified as either light or heavy; heavy molecules are those having 3 or more carbon atoms. The light molecules are not subject to meaningful decomposition, and the heavy molecules are considered to decompose into only 13 specified constituent radicals. One constructs a reduced-order model, suitable for use in estimating the release of heat and the evolution of temperature in combustion, from a base comprising the 13 constituent radicals plus a total of 26 other species that include the light molecules and related light free radicals. Then, rather than following all possible species through their reaction coordinates, one follows only the reduced set of reaction coordinates of the base. The behavior of the base was examined in test computational simulations of the combustion of heptane in a stirred reactor at various initial pressures ranging from 0.1 to 6 MPa. Most of the simulations were performed for stoichiometric mixtures; some were performed for fuel/oxygen mole ratios of 1/2 and 2.
Load Capacity Estimation of Foil Air Journal Bearings for Oil-Free Turbomachinery Applications
NASA Technical Reports Server (NTRS)
DellaCorte, Christopher; Valco, Mark J.
2000-01-01
This paper introduces a simple "Rule of Thumb" (ROT) method to estimate the load capacity of foil air journal bearings, which are self-acting compliant-surface hydrodynamic bearings being considered for Oil-Free turbo-machinery applications such as gas turbine engines. The ROT is based on first principles and data available in the literature and it relates bearing load capacity to the bearing size and speed through an empirically based load capacity coefficient, D. It is shown that load capacity is a linear function of bearing surface velocity and bearing projected area. Furthermore, it was found that the load capacity coefficient, D, is related to the design features of the bearing compliant members and operating conditions (speed and ambient temperature). Early bearing designs with basic or "first generation" compliant support elements have relatively low load capacity. More advanced bearings, in which the compliance of the support structure is tailored, have load capacities up to five times those of simpler designs. The ROT enables simplified load capacity estimation for foil air journal bearings and can guide development of new Oil-Free turbomachinery systems.
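The rule of thumb reduces to a one-line formula: load capacity grows linearly with the projected area (L·D) and with surface speed (proportional to D·N), scaled by the load capacity coefficient. The sketch below assumes the customary literature units (inches, krpm, lbf) and illustrative coefficient values; both are assumptions to check against the paper.

    def rot_load_capacity(d_coeff, length, diameter, speed_krpm):
        # W = D_coeff * (L * D) * (D * N): linear in projected area and in
        # surface velocity. d_coeff is the empirically based load capacity
        # coefficient; bearings with tailored compliant structures reach
        # coefficients several times those of first-generation designs.
        return d_coeff * (length * diameter) * (diameter * speed_krpm)

    # Illustrative comparison, simple vs. advanced compliant structure:
    print(rot_load_capacity(0.3, 1.5, 1.5, 30.0))   # first-generation
    print(rot_load_capacity(1.0, 1.5, 1.5, 30.0))   # tailored design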
Chu, Khim Hoong
2017-11-09
Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient, from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity, as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
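A sketch of how such a fit might look in practice: a closed-form Langmuir-type uptake curve is fitted to batch data to obtain a rate coefficient, which is then mapped to a surface diffusivity. The exponential uptake form and the Glueckauf linear-driving-force relation k = 15·Ds/R² used for the mapping are standard textbook approximations standing in for the paper's actual expressions; the data values are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir_uptake(t, qe, k_obs):
        # Closed-form uptake for Langmuir kinetics with approximately
        # constant bulk concentration: q(t) = qe * (1 - exp(-k_obs * t)).
        return qe * (1.0 - np.exp(-k_obs * t))

    def surface_diffusivity(k_obs, radius_cm):
        # Glueckauf LDF approximation, k = 15 * Ds / R**2, inverted.
        return k_obs * radius_cm ** 2 / 15.0

    t = np.array([0.5, 1, 2, 4, 8, 16, 32], float)           # h (invented)
    q = np.array([0.18, 0.33, 0.55, 0.78, 0.93, 0.99, 1.0])  # mg/g
    (qe, k_obs), _ = curve_fit(langmuir_uptake, t, q, p0=(1.0, 0.1))
    print(surface_diffusivity(k_obs / 3600.0, 0.05))         # cm**2/s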
The impact evaluation of soil liquefaction on low-rise building in the Meinong earthquake
NASA Astrophysics Data System (ADS)
Lu, Chih-Chieh; Hwang, Jin-Hung; Hsu, Shang-Yi
2017-08-01
This paper presents major preliminary observations on the liquefaction-induced damage in the Meinong earthquake (ML = 6.4). The severe damage to buildings centered on Huian and Sanmin Streets in Tainan City, where the sites were fish or farm ponds reclaimed with poor construction quality many decades ago. To better understand the effect of soil liquefaction at these sites, the information provided by 13 in situ Standard Penetration Test (SPT) boreholes and 5 Cone Penetration Test (CPT) soundings, together with PGAs derived from nearby seismographs, was used to conduct a soil liquefaction evaluation by the Seed method (Seed et al. in J Geotech Eng ASCE 111(12):1425-1445, 1985) for the Meinong earthquake. The liquefaction potential index (LPI) was then evaluated accordingly. From the results, it was found that the estimated damage severity was not consistent with the field conditions if the local site effect was not taken into account. To better reflect the site response at such sites, the sites' PGAs in the PGA contour map were multiplied by a factor of 1.5 to quantify the amplification due to the soft geological conditions. In addition, PGAs based on other simple approaches were evaluated for comparison, and the effects of fines content and the magnitude scaling factor were also discussed. Finally, several common simplified methods were used to calculate the LPI for the Meinong earthquake in order to evaluate the applicability of these simplified methods.
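The LPI itself is a depth-weighted integral of the factor of safety from a simplified (Seed-type) evaluation. In Iwasaki's widely used form, assumed here since the abstract does not restate it, LPI = ∫₀²⁰ F(z)·w(z) dz with F = 1 − FS where FS < 1 (else 0) and w(z) = 10 − 0.5z:

    import numpy as np

    def liquefaction_potential_index(z_m, fs):
        # Iwasaki-style LPI over the top 20 m of the profile:
        # severity F = max(1 - FS, 0), depth weight w = 10 - 0.5*z.
        z = np.asarray(z_m, float)
        severity = np.clip(1.0 - np.asarray(fs, float), 0.0, 1.0)
        weight = np.clip(10.0 - 0.5 * z, 0.0, None)
        keep = z <= 20.0
        return np.trapz((severity * weight)[keep], z[keep])

    # Example: a loose layer from 3-8 m with FS ~ 0.7 dominates the index.
    depths = np.linspace(0.0, 20.0, 41)
    fs = np.where((depths > 3) & (depths < 8), 0.7, 1.3)
    print(liquefaction_potential_index(depths, fs))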
Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).
Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C
2016-08-01
Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) and of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors, approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy depends on the induced field intensity. In clinical practice, a software package called NovoTal™ uses head measurements to estimate the optimal array placement that maximizes the electric field delivered to the tumor. Computational studies predict an increase in the tumor's electric field strength when transducer arrays are adapted to its location. Ideally, a personalized head model would be created for each patient to calculate the electric field distribution for the specific situation; the optimal transducer layout could then be inferred from field calculations rather than distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared between the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models, with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of the electric field distribution.
A high-resolution computational localization method for transcranial magnetic stimulation mapping.
Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro
2018-05-15
Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain deters determining the exact localization of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain for multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of the patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of the so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by the proposed method and by direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was validated by the well-known position of the "hand-knob" in the brain for the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere.
Psychometric Evaluation of the Simplified Chinese Version of Flourishing Scale
ERIC Educational Resources Information Center
Tang, Xiaoqing; Duan, Wenjie; Wang, Zhizhang; Liu, Tianyuan
2016-01-01
Objectives: The Flourishing Scale (FS) was developed to measure psychological well-being from the eudaimonic perspective, highlighting the flourishing of human functioning. This article evaluated the psychometric characteristics of the simplified Chinese version of FS among a Chinese community population. Method: A total of 433 participants from…
Estimate of Probability of Crack Detection from Service Difficulty Report Data.
DOT National Transportation Integrated Search
1995-09-01
The initiation and growth of cracks in a fuselage lap joint were simulated. Stochastic distribution of crack initiation and rivet interference were included. The simulation also contained a simplified crack growth. Nominal crack growth behavior of la...
DOT National Transportation Integrated Search
2014-11-15
The simplified procedure in design codes for determining earthquake response spectra involves estimating site coefficients to adjust available rock accelerations to site accelerations. Several investigators have noted concerns with the site coeff...
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Application of IEM model on soil moisture and surface roughness estimation
NASA Technical Reports Server (NTRS)
Shi, Jiancheng; Wang, J. R.; Oneill, P. E.; Hsu, A. Y.; Engman, E. T.
1995-01-01
Monitoring spatial and temporal changes of soil moisture is of importance to hydrology, meteorology, and agriculture. This paper reports the results of a study using L-band SAR imagery to estimate soil moisture and surface roughness for bare fields. Due to the limitations of the Small Perturbation Model, it is difficult to apply that model directly to the estimation of soil moisture and surface roughness. In this study, we present a simplified model, derived from the Integral Equation Model, for the estimation of soil moisture and surface roughness, and we test this model using JPL L-band AIRSAR data.
Simplified dichromated gelatin hologram recording process
NASA Technical Reports Server (NTRS)
Georgekutty, Tharayil G.; Liu, Hua-Kuang
1987-01-01
A simplified method for making dichromated gelatin (DCG) holographic optical elements (HOE) has been discovered. The method is much less tedious and it requires a period of processing time comparable with that for processing a silver halide hologram. HOE characteristics including diffraction efficiency (DE), linearity, and spectral sensitivity have been quantitatively investigated. The quality of the holographic grating is very high. Ninety percent or higher diffraction efficiency has been achieved in simple plane gratings made by this process.
Boundary element analysis of corrosion problems for pumps and pipes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyasaka, M.; Amaya, K.; Kishimoto, K.
1995-12-31
Three-dimensional (3D) and axi-symmetric boundary element methods (BEM) were developed to quantitatively estimate cathodic protection and macro-cell corrosion. For 3D analysis, a multiple-region method (MRM) was developed in addition to a single-region method (SRM). The validity and usefulness of the BEMs were demonstrated by comparing numerical results with experimental data from galvanic corrosion systems of a cylindrical model and a seawater pipe, and from a cathodic protection system of an actual seawater pump. It was shown that a highly accurate analysis could be performed for fluid machines handling seawater with complex 3D fields (e.g., a seawater pump) by taking account of the flow-rate and time dependencies of the polarization curve. Compared to the 3D BEM, the axi-symmetric BEM permitted large reductions in the numbers of elements and nodes, which greatly simplified the analysis of axi-symmetric fields such as pipes. Computational accuracy and CPU time were compared between analyses using two approximation methods for polarization curves: a logarithmic-approximation method and a linear-approximation method.
Probabilistic framework for product design optimization and risk management
NASA Astrophysics Data System (ADS)
Keski-Rahkonen, J. K.
2018-05-01
Probabilistic methods have gradually gained ground within engineering practice, but it is still the industry standard to use deterministic safety-margin approaches for dimensioning components and qualitative methods for managing product risks. These methods are suitable for baseline design work, but quantitative risk management and product reliability optimization require more advanced predictive approaches. Ample research has been published on how to predict failure probabilities for mechanical components and, furthermore, on how to optimize reliability through life cycle cost analysis. This paper reviews the literature for existing methods and tries to harness their best features and simplify the process to be applicable in practical engineering work. The recommended process applies the Monte Carlo method on top of load-resistance models to estimate failure probabilities. Furthermore, it builds on the existing literature by introducing a practical framework for using probabilistic models in quantitative risk management and product life cycle cost optimization. The main focus is on mechanical failure modes, due to the well-developed methods used to predict these types of failures; however, the same framework can be applied to any type of failure mode as long as predictive models can be developed.
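The load-resistance core of such a process fits in a few lines: draw resistance R and load L from their distributions and estimate Pf = P(R < L) by counting. The distribution choices and parameter values below are illustrative assumptions only:

    import numpy as np

    def failure_probability(n=1_000_000, seed=0):
        # Monte Carlo on a load-resistance model: Pf = P(R < L).
        rng = np.random.default_rng(seed)
        resistance = rng.lognormal(np.log(500.0), 0.08, n)   # e.g. MPa
        load = rng.normal(350.0, 40.0, n)                    # e.g. MPa
        return (resistance < load).mean()

    # The estimate feeds directly into expected life cycle cost,
    # E[cost] = C_build + Pf * C_failure, minimized over design options.
    print(failure_probability())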
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high-dimensional images. A few such methods do exist, but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four-dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at the positions of the initial centroids to estimate all nuclei diameters. This procedure continues for subsequent images in the sequence. This mechanism ensures proper enhancement by automated estimation of the major parameters, which brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended to nuclei volume segmentation: the same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image sequences (each having 271 sequential 3D images) corresponding to five different mouse embryos show promising performance of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (about 98% mean F-measure) irrespective of large variations in filter parameters and noise levels. PMID:25020042
NASA Astrophysics Data System (ADS)
Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor
2017-04-01
Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety, or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA; although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of the liquefaction hazard by taking into account the joint probability distribution of PGA and magnitude of the earthquake scenarios, both of which are key inputs in the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method within a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. We have therefore included not only Cetin's method but also the SPT-based procedure of Idriss and Boulanger (2012) and the CPT-based procedure of Boulanger and Idriss (2014) in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, Hungary; its epicenter was located about 5 km from the southern boundary of Budapest. The quake caused serious damage in the epicentral area and in the southern districts of the capital. The epicentral area of the earthquake is located along the Danube River. Sand boils were observed at some locations, indicating the occurrence of liquefaction. Because their exact locations were recorded at the time of the earthquake, in situ geotechnical measurements (CPT and SPT) could be performed at two sites (Dunaharaszti and Taksony). The different types of measurements enabled probabilistic liquefaction hazard computations at the two studied sites. We have compared the return periods of liquefaction computed using the different built-in simplified stress-based methods.
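At its core, the performance-based calculation sums the conditional probability of liquefaction over the joint (PGA, magnitude) hazard contributions from the deaggregation. A sketch of that combination step, with the deaggregated incremental rates and the conditional-probability model supplied as hypothetical inputs:

    import numpy as np

    def liquefaction_return_period(delta_lambda, p_liq):
        # delta_lambda[i, j]: incremental hazard rate (1/yr) of the
        # (PGA_i, M_j) bin from the PSHA deaggregation.
        # p_liq[i, j]: conditional probability of liquefaction given that
        # bin (e.g., from an SPT- or CPT-based probabilistic model).
        rate = np.sum(np.asarray(p_liq) * np.asarray(delta_lambda))
        return 1.0 / rate     # return period of liquefaction, in years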
Model-based estimation for dynamic cardiac studies using ECT.
Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O
1994-01-01
The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
Optical chirp z-transform processor with a simplified architecture.
Ngo, Nam Quoc
2014-12-29
Using a simplified chirp z-transform (CZT) algorithm based on the discrete-time convolution method, this paper presents the synthesis of a simplified architecture of a reconfigurable optical chirp z-transform (OCZT) processor based on the silica-based planar lightwave circuit (PLC) technology. In the simplified architecture of the reconfigurable OCZT, the required number of optical components is small and there are no waveguide crossings which make fabrication easy. The design of a novel type of optical discrete Fourier transform (ODFT) processor as a special case of the synthesized OCZT is then presented to demonstrate its effectiveness. The designed ODFT can be potentially used as an optical demultiplexer at the receiver of an optical fiber orthogonal frequency division multiplexing (OFDM) transmission system.
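The discrete-time-convolution view of the CZT is what makes a hardware (here, optical) implementation natural: with the substitution nk = (n² + k² − (k−n)²)/2, the transform becomes a chirp premultiplication, a convolution with a chirp, and a chirp postmultiplication. A numerical sketch of that decomposition (Bluestein's identity), verifiable against an FFT in the special case A = 1, W = exp(−2πi/N), M = N:

    import numpy as np

    def czt(x, M, W, A):
        # Chirp z-transform X_k = sum_n x_n * A**-n * W**(n*k), k = 0..M-1,
        # as chirp-premultiply / chirp-convolve / chirp-postmultiply.
        N = len(x)
        n, k = np.arange(N), np.arange(M)
        L = 1 << int(np.ceil(np.log2(N + M - 1)))   # FFT convolution length
        a = x * A ** (-n) * W ** (n ** 2 / 2.0)     # premultiplied input
        v = np.zeros(L, complex)                    # chirp kernel W**(-m^2/2)
        v[:M] = W ** (-(k ** 2) / 2.0)
        v[L - N + 1:] = W ** (-(np.arange(N - 1, 0, -1) ** 2) / 2.0)
        g = np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(v))
        return g[:M] * W ** (k ** 2 / 2.0)          # postmultiplied output

    x = np.random.default_rng(0).standard_normal(8)
    ref = np.fft.fft(x)                             # DFT as a CZT special case
    out = czt(x, 8, np.exp(-2j * np.pi / 8), 1.0)
    print(np.allclose(out, ref))                    # True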
Simplified Technique for Predicting Offshore Pipeline Expansion
NASA Astrophysics Data System (ADS)
Seo, J. H.; Kim, D. K.; Choi, H. S.; Yu, S. Y.; Park, K. S.
2018-06-01
In this study, we propose a method for estimating the amount of expansion that occurs in subsea pipelines, which could be applied in the design of robust structures that transport oil and gas from offshore wells. We begin with a literature review and a general discussion of existing estimation methods and terminologies with respect to subsea pipelines. Because of the high pressure and high temperature of the produced fluid, transporting oil and gas from offshore wells typically causes physical deformation of subsea structures, e.g., expansion and contraction. In severe cases, vertical and lateral buckling occurs, which has a significant negative impact on structural safety and is related to on-bottom stability, free spans, structural collapse, and many other factors. In addition, these factors may affect the production rate with respect to flow assurance, wax, and hydration, to name a few. In this study, we developed a simple and efficient method for generating a reliable pipe expansion design in the early stage, which can save both cost and computation time. We propose an applicable diagram, which we call the standard dimensionless ratio (SDR) versus virtual anchor length (LA) diagram, built on an efficient procedure for estimating subsea pipeline expansion under reliable applied scenarios. With this user guideline, offshore pipeline structural designers can reliably determine the amount of subsea pipeline expansion, and the results will also be useful for the installation, design, and maintenance of subsea pipelines.
Estimation of Local Bone Loads for the Volume of Interest.
Kim, Jung Jin; Kim, Youkyung; Jang, In Gwun
2016-07-01
Computational bone remodeling simulations have recently received significant attention with the aid of state-of-the-art high-resolution imaging modalities. They have been performed using localized finite element (FE) models rather than full FE models, due to the excessive computational costs of the latter. However, these localized bone remodeling simulations remain to be investigated in more depth. In particular, applying simplified loading conditions (e.g., uniform and unidirectional loads) to localized FE models has a severe limitation for reliable subject-specific assessment. In order to effectively determine the physiological local bone loads for the volume of interest (VOI), this paper proposes a novel method for estimating the local loads when the global musculoskeletal loads are given. The proposed method is verified for three VOI in a proximal femur in terms of force equilibrium, displacement field, and strain energy density (SED) distribution. The effect of the global load deviation on the local load estimation is also investigated by perturbing a hip joint contact force (HCF) in the femoral head. Deviation in force magnitude exhibits the greatest absolute changes in the SED distribution owing to its own greatest deviation, whereas angular deviation perpendicular to the HCF produces the greatest relative change. With further in vivo force measurements and high-resolution clinical imaging modalities, the proposed method will contribute to the development of reliable patient-specific localized FE models, which can provide enhanced computational efficiency for iterative computing processes such as bone remodeling simulations.
An approach to parameter estimation for breast tumor by finite element method
NASA Astrophysics Data System (ADS)
Xu, A.-qing; Yang, Hong-qin; Ye, Zhen; Su, Yi-ming; Xie, Shu-sen
2009-02-01
The temperature of the human body at the surface of the skin depends on metabolic activity, blood flow, and the temperature of the surroundings. Any abnormality in the tissue, such as the presence of a tumor, alters the normal temperature on the skin surface due to the increased metabolic activity of the tumor. Therefore, abnormal skin temperature profiles are an indication of diseases such as tumors or cancer. This study presents an approach to detecting female breast tumors, and estimating their related parameters, by combining the finite element method with infrared thermography of the surface temperature profile. A 2D simplified model of a breast embedding a tumor, based on the anatomical structure and physiological characteristics of the female breast, was first established, and the finite element method was then used to solve the heat diffusion equation for the surface temperature profiles of the breast. A genetic optimization algorithm was used to estimate tumor parameters such as depth, size and blood perfusion by minimizing a fitness function comparing the temperature profiles simulated by the finite element method with the experimental data obtained by infrared thermography. This preliminary study shows that it is possible to determine the depth and the heat generation rate of a breast tumor by using infrared thermography and the optimization analysis, which may play an important role in female breast healthcare and in disease evaluation or early detection. In order to develop the proposed methodology for clinical use, a more anatomically accurate 3D breast geometry should be considered in further investigations.
Sun, Hao; Guo, Jianbin; Wu, Shubiao; Liu, Fang; Dong, Renjie
2017-09-01
The volatile fatty acid (VFA) concentration has been considered one of the most sensitive process performance indicators in anaerobic digestion (AD). However, the accurate determination of VFA concentrations in AD processes normally requires advanced equipment and complex pretreatment procedures. A simplified method with fewer sample pretreatment procedures and improved accuracy is greatly needed, particularly for on-site application. This report outlines improvements to the Nordmann method, one of the most popular titration methods for VFA monitoring. The influence of the ion and solid interfering subsystems in titrated samples on the accuracy of the results was examined; the total solids content in titrated samples was the main factor affecting accuracy in VFA monitoring. Moreover, a strong linear correlation was established between the total solids content and the difference between VFA measurements obtained with the traditional Nordmann equation and with gas chromatography (GC). Accordingly, a simplified titration method was developed and validated using a semi-continuous experiment on chicken manure anaerobic digestion at various organic loading rates. The good fit of the results obtained by this method to the GC results strongly supports the potential application of this method to VFA monitoring.
Development of Generation System of Simplified Digital Maps
NASA Astrophysics Data System (ADS)
Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng
In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed from multiple layers of maps at different scales, and the map data most suitable for the specific situation are used. Currently, the production of map data at different scales is done by hand due to constraints on processing time and accuracy. We conducted research on technologies for automatically generating simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform data for streets, rivers, etc. that carry widths into line data, (2) a method to eliminate the component points of the data, and (3) a method to eliminate data that lie below a certain threshold. In addition, in order to evaluate the proposed method, a user survey was conducted in which maps generated using the proposed method were compared with commercially available maps. From the viewpoint of the amount of data reduction and processing time, and on the basis of the survey results, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.
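For step (2), point elimination on polylines, a common choice is the Douglas-Peucker algorithm, shown below as a plausible stand-in for the paper's own elimination rule (the abstract does not name the algorithm): a vertex is kept only if it deviates from the chord between the endpoints by more than a tolerance.

    import numpy as np

    def douglas_peucker(points, tol):
        # Recursively keep the vertex farthest from the start-end chord
        # if it exceeds tol; otherwise collapse the run to its endpoints.
        pts = np.asarray(points, float)
        if len(pts) < 3:
            return pts
        start, end = pts[0], pts[-1]
        chord = end - start
        d = pts - start
        norm = np.hypot(chord[0], chord[1])
        if norm == 0.0:
            dist = np.hypot(d[:, 0], d[:, 1])   # closed loop: plain distance
        else:
            dist = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / norm
        far = int(np.argmax(dist))
        if dist[far] <= tol:
            return np.vstack([start, end])
        left = douglas_peucker(pts[:far + 1], tol)
        right = douglas_peucker(pts[far:], tol)
        return np.vstack([left[:-1], right])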
Simplified Model to Predict Deflection and Natural Frequency of Steel Pole Structures
NASA Astrophysics Data System (ADS)
Balagopal, R.; Prasad Rao, N.; Rokade, R. P.
2018-04-01
Steel pole structures are a suitable alternative to transmission line towers, given the difficulty of finding land for a new right of way for the installation of new lattice towers. Steel poles have a tapered cross section and are generally used for communication, power transmission, and lighting purposes. Determining the deflection of a steel pole is important for deciding its functionality requirements, as excessive deflection may cause signal attenuation and short-circuit problems in communication/transmission poles. In this paper, a simplified method is proposed to determine both primary and secondary deflection based on the dummy unit load/moment method. The deflection predicted by the proposed method is validated against full-scale experimental investigations conducted on 8 m and 30 m high lighting masts and on 132 kV and 400 kV transmission poles, and is found to be in close agreement with the measurements. Determining the natural frequency is an important criterion for examining dynamic sensitivity. A simplified semi-empirical method using the static deflection from the proposed method is formulated to determine the natural frequency. The natural frequency predicted by the proposed method is validated against FE analysis results, and the predictions are further validated against experimental results available in the literature.
NASA Astrophysics Data System (ADS)
Negm, Amro; Minacapilli, Mario; Provenzano, Giuseppe
2017-04-01
Accurate estimation of grass reference evapotranspiration (ET0) is important for many fields, including hydrology and irrigation water management. Because direct measurement of ET0 is difficult, expensive, and time consuming, simplified approaches and web-based meteorological information are often preferred. The Prediction of Worldwide Energy Resource project of the American National Aeronautics and Space Administration (POWER-NASA) provides meteorological observations and surface energy fluxes on a 1° latitude by 1° longitude grid, with continuous daily coverage of the entire globe. However, the broad spatial resolution of these data is a limiting factor, for example when they are used for local estimation of reference ET0. In this work, a procedure for the spatial disaggregation of POWER-NASA daily average air temperature is proposed. In particular, a daily scaling factor is first defined as the ratio between the disaggregated average air temperature and the corresponding native value. This ratio is then modeled with a cosine function characterized by three elevation-dependent parameters, so as to account for seasonal and regional variability. The proposed model was calibrated with three years of ground measurements (2006-2008) and then validated over six years (2009-2014). The suitability of the procedure was finally assessed by applying two simplified empirical models to estimate ET0 (Turc, 1961; Hargreaves, 1975). When compared with ET0 values obtained with the FAO-56 PM equation, both simplified equations, applied to the downscaled meteorological observations, were characterized by RMSE ranging between 0.44 and 1.08 mm (average of 0.72-0.74 mm) and average MBE of -0.06 mm (Turc equation) and 0.13 mm (Hargreaves equation). These results indicate the strength of the proposed procedure for estimating ET0, even for regions lacking detailed meteorological information.
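A minimal sketch of the disaggregation step follows, assuming a cosine scaling factor with placeholder parameters; in the study all three parameters depend on elevation and were calibrated against ground data.

```python
# Daily scaling factor, modeled as a cosine of day-of-year, multiplies the
# native POWER-NASA grid temperature to produce a local estimate.
import numpy as np

def scaling_factor(doy, elevation, p0=1.0, p1=0.05, p2=200.0):
    # p0: mean ratio, p1: seasonal amplitude, p2: phase (day of year).
    # Here only p0 is tied to elevation; the calibrated model ties all three.
    p0_eff = p0 - 1.0e-5 * elevation
    return p0_eff + p1 * np.cos(2.0 * np.pi * (doy - p2) / 365.25)

def downscale_temperature(t_power, doy, elevation):
    return scaling_factor(doy, elevation) * t_power

print(downscale_temperature(24.0, doy=180, elevation=350.0))  # degrees C
```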
A two-step super-Gaussian independent component analysis approach for fMRI data.
Ge, Ruiyang; Yao, Li; Zhang, Hang; Long, Zhiying
2015-09-01
Independent component analysis (ICA) has been widely applied to functional magnetic resonance imaging (fMRI) data analysis. Although ICA assumes that the sources underlying the data are statistically independent, it usually ignores additional source properties, such as sparsity. In this study, we propose a two-step super-Gaussian ICA (2SGICA) method that incorporates a sparse prior on the sources into the ICA model. In the first step, 2SGICA uses the super-Gaussian ICA (SGICA) algorithm, based on a simplified Lewicki-Sejnowski model, to obtain the initial source estimate. Using a kernel estimator technique, the source density is acquired and fitted to a Laplacian function based on the initial source estimates. The fitted Laplacian prior is used for each source in the second SGICA step. Moreover, an automatic target generation process for initial value generation is used in 2SGICA to guarantee the stability of the algorithm, and an adaptive step-size selection criterion is implemented. We performed experiments on both simulated data and real fMRI data to investigate the feasibility and robustness of 2SGICA and compared its performance with Infomax ICA, FastICA, mean field ICA (MFICA) with a Laplacian prior, sparse online dictionary learning (ODL), and SGICA. Both the simulated and real fMRI experiments showed that 2SGICA was the most robust to noise and had the best spatial detection power and time course estimation among the six methods.
Sit Up Straight! It's Good Physics
ERIC Educational Resources Information Center
Colicchia, Giuseppe
2005-01-01
A simplified model has been developed that shows the forces and torques involved in maintaining static posture in the cervical spine. The model provides a biomechanical basis for estimating the loads on the cervical discs under various postures, and thus offers a biological context for teaching statics.
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds using a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method generates reflectance images directly from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which the transformation parameters between point clouds are computed, are acquired by mapping the 2D points onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as the hypothesis set and updates the inlier probability of each data point using a simplified Bayes rule, improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
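The conditional-sampling idea reads naturally as pseudocode. The sketch below shows the BaySAC loop in a generic robust-fitting setting (a toy line fit); the fixed likelihoods in the Bayes update and all numbers are assumptions for illustration, not the paper's values.

```python
# Generic BaySAC: hypothesis sets are the n most probable inliers, and the
# inlier probabilities are updated with a simplified Bayes rule each round.
import numpy as np

def baysac(points, fit, residual, n, iters=100, tol=0.05, p0=0.5):
    prob = np.full(len(points), p0)            # prior inlier probabilities
    best_model, best_count = None, 0
    for _ in range(iters):
        idx = np.argsort(prob)[-n:]            # hypothesis set: most probable points
        model = fit(points[idx])
        ok = residual(model, points) < tol     # consistency of every point
        if ok.sum() > best_count:
            best_model, best_count = model, int(ok.sum())
        likelihood = np.where(ok, 0.9, 0.1)    # assumed fixed likelihoods
        prob = likelihood * prob / (likelihood * prob + (1 - likelihood) * (1 - prob))
    return best_model

# Toy usage: robust fit of y = m*x + c with 30% gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 100)
y[:30] += rng.uniform(-20, 20, 30)
pts = np.column_stack([x, y])
fit = lambda p: np.polyfit(p[:, 0], p[:, 1], 1)
res = lambda m, p: np.abs(np.polyval(m, p[:, 0]) - p[:, 1])
print(baysac(pts, fit, res, n=2, tol=0.2))     # ~ [2.0, 1.0]
```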
Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki
2008-08-01
Integration of ultrasonic measurement and numerical simulation is a possible way to overcome the limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals, based on the optimal estimation of errors in the velocity vector determined from measured and computed Doppler velocities at feedback points, are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating the low time resolution of ultrasonic measurement were investigated in a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53%, respectively, in the feedback domain covering the aneurysm. The local maximum wall shear stress was estimated at the proper position and with only 1% deviation in value. A properly designed intermittent feedback, applied only at the times when measurement data were obtained, had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to the insufficient time resolution of ultrasonic measurement.
Analyzing costs of space debris mitigation methods
NASA Astrophysics Data System (ADS)
Wiedemann, C.; Krag, H.; Bendisch, J.; Sdunnus, H.
2004-01-01
The steadily increasing number of space objects poses a considerable hazard to all kinds of spacecraft. To reduce the risks to future space missions, different debris mitigation measures and spacecraft protection techniques have been investigated in recent years. However, economic efficiency has not yet been considered in this context. Current studies aim to evaluate mission costs due to space debris in a business-as-usual (no mitigation) scenario and to compare them with mission costs when debris mitigation is considered. The aim is an estimate of the time until the investment in debris mitigation leads to an effective reduction of mission costs. This paper presents the results of investigations on the key issues of cost estimation for spacecraft and the influence of debris mitigation and shielding on cost. Mitigation strategies such as the reduction of orbital lifetime and the de- or re-orbiting of non-operational satellites are methods to control the space debris environment; these methods result in an increase of costs. In a first step, the overall costs of different types of unmanned satellites are analyzed. A selected cost model is simplified and generalized for application to all operational satellites. In a next step, the influence of space debris on cost is treated when the implementation of mitigation strategies is considered.
Improved Bayesian Infrasonic Source Localization for regional infrasound
Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.
2015-10-20
The Bayesian Infrasonic Source Localization (BISL) methodology and the mathematical framework used therein are examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth-deviation models. Frameworks for constructing propagation-based celerity-range and azimuth-deviation statistics are presented to demonstrate how stochastic propagation modelling methods can improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States, produced by rocket motor detonations at the Utah Test and Training Range, are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds in all cases.
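A hedged sketch of the likelihood structure named above: a von Mises term for the back azimuth combined with a celerity term. The single-Gaussian celerity model, the azimuth convention, and all numbers are simplifying assumptions; the paper builds these terms from stochastic propagation modelling.

```python
# Log-likelihood of one infrasonic detection for a candidate source (x, y, t).
import numpy as np
from scipy.stats import norm, vonmises

def detection_log_likelihood(src, det):
    dx, dy = det["x"] - src["x"], det["y"] - src["y"]
    rng = np.hypot(dx, dy)             # range in km, flat-earth approximation
    az_pred = np.arctan2(dx, dy)       # predicted back azimuth (convention assumed)
    ll = vonmises.logpdf(det["azimuth"], det["kappa"], loc=az_pred)
    celerity = rng / (det["time"] - src["time"])
    ll += norm.logpdf(celerity, loc=0.29, scale=0.06)  # illustrative model, km/s
    return ll

src = {"x": 0.0, "y": 0.0, "time": 0.0}
det = {"x": 250.0, "y": 100.0, "time": 900.0, "azimuth": 1.2, "kappa": 20.0}
print(detection_log_likelihood(src, det))
```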
Simplified method for the transverse bending analysis of twin celled concrete box girder bridges
NASA Astrophysics Data System (ADS)
Chithra, J.; Nagarajan, Praveen; S, Sajith A.
2018-03-01
Box girder bridges are one of the best options for bridges with spans of more than 25 m. For the study of these bridges, three-dimensional finite element analysis is the best-suited method. However, performing three-dimensional analysis for routine design is difficult and time consuming, and the software used for three-dimensional analysis is very expensive. Hence, designers resort to simplified analyses for predicting longitudinal and transverse bending moments. Among the many analytical methods used to find the transverse bending moments, simplified frame analysis (SFA) is the simplest and is widely used in design offices. Results from simplified frame analysis can be used for the preliminary analysis of concrete box girder bridges. A review of the literature shows that most work using SFA is restricted to the analysis of single-cell box girder bridges; little work has been done on the analysis of multi-cell concrete box girder bridges. In the present study, a double-cell concrete box girder bridge is chosen. The bridge is modelled using three-dimensional finite element software, and the results are compared with the simplified frame analysis. The study mainly focuses on establishing correction factors for the transverse bending moment values obtained from SFA.
Jiwani, Aliya; Himmelstein, David; Woolhandler, Steffie; Kahn, James G
2014-11-13
The United States' multiple-payer health care system requires substantial effort and costs for administration, with billing and insurance-related (BIR) activities comprising a large but incompletely characterized proportion. A number of studies have quantified BIR costs for specific health care sectors, using micro-costing techniques. However, variation in the types of payers, providers, and BIR activities across studies complicates estimation of system-wide costs. Using a consistent and comprehensive definition of BIR (including both public and private payers, all providers, and all types of BIR activities), we synthesized and updated available micro-costing evidence in order to estimate total and added BIR costs for the U.S. health care system in 2012. We reviewed BIR micro-costing studies across healthcare sectors. For physician practices, hospitals, and insurers, we estimated the % BIR using existing research and publicly reported data, re-calculated to a standard and comprehensive definition of BIR where necessary. We found no data on % BIR in other health services or supplies settings, so extrapolated from known sectors. We calculated total BIR costs in each sector as the product of 2012 U.S. national health expenditures and the percentage of revenue used for BIR. We estimated "added" BIR costs by comparing total BIR costs in each sector to those observed in existing, simplified financing systems (Canada's single payer system for providers, and U.S. Medicare for insurers). Due to uncertainty in inputs, we performed sensitivity analyses. BIR costs in the U.S. health care system totaled approximately $471 ($330 - $597) billion in 2012. This includes $70 ($54 - $76) billion in physician practices, $74 ($58 - $94) billion in hospitals, an estimated $94 ($47 - $141) billion in settings providing other health services and supplies, $198 ($154 - $233) billion in private insurers, and $35 ($17 - $52) billion in public insurers. Compared to simplified financing, $375 ($254 - $507) billion, or 80%, represents the added BIR costs of the current multi-payer system. A simplified financing system in the U.S. could result in cost savings exceeding $350 billion annually, nearly 15% of health care spending.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that breast tissue is principally composed of adipose and glandular tissue. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10-50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
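The composition-ratio step can be sketched in a few lines. The attenuation coefficients below are placeholder values for a single energy; the actual method works on the reconstructed DBT volume with energy-dependent coefficients.

```python
# Per-voxel glandular fraction from reconstructed attenuation, assuming each
# voxel is a linear mix of adipose and glandular tissue.
import numpy as np

MU_ADIPOSE, MU_GLANDULAR = 0.046, 0.080  # 1/mm, placeholder values at one energy

def glandular_fraction(mu_recon):
    w = (mu_recon - MU_ADIPOSE) / (MU_GLANDULAR - MU_ADIPOSE)
    return np.clip(w, 0.0, 1.0)          # keep the physically meaningful range

mu = np.array([[0.050, 0.070], [0.046, 0.080]])
print(glandular_fraction(mu))            # fractions in [0, 1]
```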
Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source
NASA Astrophysics Data System (ADS)
Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.
2014-06-01
To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force-limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is a very important parameter, besides the random vibration specification, the total mass, and the turnover frequency of the load (test item). A number of computational methods to estimate C2 are described in the literature, i.e., the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g., a spacecraft supporting an instrument). The motivation of this work is to establish a method for computing a realistic value of C2 to perform a representative force-limited random vibration test when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and damping of the test item (load) when no description of the supporting structure (source) is available [13]. Marchand also discussed a formal description for obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the PSD spectra of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus-patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper, a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus-patch model of the source can be approximated. The computation of the value of C2 can then be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as mentioned in ECSS standards and handbooks, launch vehicle user's manuals, papers, books, etc., are applied, and a probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
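Marchand's formulation mentioned above reduces to a one-line ratio once the interface spectra are known. The sketch below assumes consistent SI units and illustrative numbers; in practice the spectra come from finite element models or tests.

```python
# C2 from the maximum interface force and acceleration PSDs and the total
# mass of the load (Marchand's formal description, as summarized above).
def c2_factor(psd_force_max, psd_accel_max, m_total):
    """C2 = max S_FF / (M0^2 * max S_AA); use consistent units,
    e.g. N^2/Hz for force and (m/s^2)^2/Hz for acceleration."""
    return psd_force_max / (m_total**2 * psd_accel_max)

# Illustrative numbers for a 50 kg load:
print(c2_factor(psd_force_max=1.0e5, psd_accel_max=25.0, m_total=50.0))  # -> 1.6
```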
Dyverfeldt, Petter; Hope, Michael D.; Tseng, Elaine E.; Saloner, David
2013-01-01
OBJECTIVES The authors sought to measure the turbulent kinetic energy (TKE) in the ascending aorta of patients with aortic stenosis and to assess its relationship to irreversible pressure loss. BACKGROUND Irreversible pressure loss caused by energy dissipation in post-stenotic flow is an important determinant of the hemodynamic significance of aortic stenosis. The simplified Bernoulli equation used to estimate pressure gradients often misclassifies the ventricular overload caused by aortic stenosis. The current gold standard for estimating irreversible pressure loss is catheterization, but this method is rarely used because of its invasiveness. Post-stenotic pressure loss is largely caused by dissipation of turbulent kinetic energy into heat. Recent developments in magnetic resonance flow imaging permit noninvasive estimation of TKE. METHODS The study was approved by the local ethics review board and all subjects gave written informed consent. Three-dimensional cine magnetic resonance flow imaging was used to measure TKE in 18 subjects (4 normal volunteers, 14 patients with aortic stenosis with and without dilation). For each subject, the peak total TKE in the ascending aorta was compared with a pressure loss index. The pressure loss index was based on a previously validated theory relating pressure loss to measures obtainable by echocardiography. RESULTS The total TKE did not appear to be related to global flow patterns visualized from the magnetic resonance-measured velocity fields. The TKE was significantly higher in patients with aortic stenosis than in normal volunteers (p < 0.001). The peak total TKE in the ascending aorta was strongly correlated with the pressure loss index (R2 = 0.91). CONCLUSIONS Peak total TKE in the ascending aorta correlated strongly with irreversible pressure loss estimated by a well-established method. Direct measurement of TKE by magnetic resonance flow imaging may, with further validation, be used to estimate irreversible pressure loss in aortic stenosis.
NASA Astrophysics Data System (ADS)
Snyder, R. L.; Mancosu, N.; Spano, D.
2014-12-01
This study derived a summer (June-August) reference evapotranspiration distribution map for Sardinia (Italy) based on weather station data and a geographic information system (GIS). A modified daily Penman-Monteith equation from the Food and Agriculture Organization of the United Nations (UN-FAO) and the American Society of Civil Engineers Environmental and Water Resources Institute (ASCE-EWRI) was used to calculate the Standardized Reference Evapotranspiration (ETos) for all weather stations having a "full" set of the data required for the calculations. For stations having only temperature data ("partial" stations), the Hargreaves-Samani equation was used to estimate the reference evapotranspiration for a grass surface (ETo). The ETos and ETo results differed depending on the local climate, so two methods of estimating ETos from ETo were tested. Substituting missing solar radiation, wind speed, and humidity data from a nearby station within a similar microclimate was found to give better results than using a calibration factor relating ETos and ETo. Therefore, the substitution method was used to estimate ETos at partial stations having only temperature data. The combination of 63 full and partial stations was sufficient to map ETos for Sardinia with GIS. Three interpolation methods were studied, and the ordinary kriging model fitted the observed data better than a radial basis function or the inverse distance weighting method. Using station data points to create a regional map simplified the zonation of ETos when large-scale computations were needed. Distinguishing ETos classes allows the simulation of crop water requirements for large areas and can potentially lead to improved irrigation management and water savings. It also provides a baseline to investigate possible impacts of climate change.
Aggregative Learning Method and Its Application for Communication Quality Evaluation
NASA Astrophysics Data System (ADS)
Akhmetov, Dauren F.; Kotaki, Minoru
2007-12-01
In this paper, a so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure is elaborated for time-series model reconstruction and analysis in linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from introduced statistical parameters. The validity and efficiency of the proposed approach are demonstrated through its application to the monitoring of wireless communication quality, namely for a Fixed Wireless Access (FWA) system. Low memory and computational resources are shown to be needed for the procedure, especially for the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.
Learned Compact Local Feature Descriptor for Tls-Based Geodetic Monitoring of Natural Outdoor Scenes
NASA Astrophysics Data System (ADS)
Gojcic, Z.; Zhou, C.; Wieser, A.
2018-05-01
The advantages of terrestrial laser scanning (TLS) for geodetic monitoring of man-made and natural objects are not yet fully exploited. Herein we address one of the open challenges by proposing feature-based methods for the identification of corresponding points in point clouds from two or more epochs. We propose a learned compact feature descriptor tailored to point clouds of natural outdoor scenes obtained using TLS. We evaluate our method both on a benchmark data set and on a specially acquired outdoor dataset resembling a simplified monitoring scenario, in which we successfully estimate the 3D displacement vectors of a rock that was displaced between the scans. We show that the proposed descriptor generalizes to unseen data and achieves state-of-the-art performance while being time-efficient at the matching step due to its low dimension.
NASA Astrophysics Data System (ADS)
Li, M.; Jiang, Y. S.
2014-11-01
The micro-Doppler effect is induced by the micro-motion dynamics of a radar target itself or of any structure on the target. In this paper, a simplified cone-shaped model of a ballistic missile warhead with micro-nutation is established, and the theoretical formula for micro-nutation is derived. The theoretical results are confirmed to be identical to simulation results obtained using the short-time Fourier transform. We then propose a new method for nutation-period extraction via maximum-energy signature fitting based on empirical mode decomposition and the short-time Fourier transform. The maximum wobble angle is also extracted by a distance-approximation approach valid for a small range of wobble angles, combined with maximum likelihood estimation. Simulation studies show that these two feature extraction methods are both valid even at low signal-to-noise ratio.
Simplifier: a web tool to eliminate redundant NGS contigs.
Ramos, Rommel Thiago Jucá; Carneiro, Adriana Ribeiro; Azevedo, Vasco; Schneider, Maria Paula; Barh, Debmalya; Silva, Artur
2012-01-01
Modern genomic sequencing technologies produce a large amount of data at reduced cost per base; however, these data consist of short reads. This reduction in read size, compared with previous methodologies, presents new challenges, including the need for efficient algorithms for assembling genomes from short reads and for resolving repetitions. Additionally, after ab initio assembly, curation of the hundreds or thousands of contigs generated by assemblers demands considerable time and computational resources. We developed Simplifier, a stand-alone software tool that selectively eliminates redundant sequences from the collection of contigs generated by ab initio genome assembly. Application of Simplifier to data generated by assembly of the genome of Corynebacterium pseudotuberculosis strain 258 reduced the number of contigs generated by ab initio methods from 8,004 to 5,272, a reduction of 34.14%; in addition, N50 increased from 1 kb to 1.5 kb. Processing the contigs of Escherichia coli DH10B with Simplifier reduced the mate-paired library by 17.47% and the fragment library by 23.91%. Simplifier removed redundant sequences from datasets produced by assemblers, thereby reducing the effort required for finalization of genome assembly in tests with data from prokaryotic organisms. Simplifier is available at http://www.genoma.ufpa.br/rramos/softwares/simplifier.xhtml; it requires Sun JDK 6 or higher.
Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.
Barrera, Roberto
2009-07-01
Pupal surveys of Aedes aegypti (L.) are useful indicators of the risk of dengue transmission, although the sample sizes needed for reliable estimation can be large. This study explores two methods for making pupal surveys more practical yet reliable, using data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person in each sampling followed a negative binomial distribution, thus showing aggregation. The first method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae per person was above a vector density threshold for dengue transmission. The second approach used the finding that the mean number of pupae per person is correlated with the proportion of pupa-infested households, and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
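The first method can be illustrated with a generic sequential probability ratio test built on a negative binomial with a common k. The thresholds, k, and error rates below are assumptions for the example, not the paper's calibrated values.

```python
# SPRT-style sequential decision: is pupae/person above a transmission
# threshold? Counts are accumulated house by house until a boundary is hit.
import numpy as np
from scipy.stats import nbinom

def to_nbinom(mean, k):
    # SciPy parameterization: n = k, p = k / (k + mean) gives this mean.
    return k, k / (k + mean)

def sprt_decision(counts, m0=0.5, m1=1.5, k=0.3, alpha=0.05, beta=0.05):
    """Test H0: pupae/person = m0 vs H1: pupae/person = m1 (common k)."""
    n0, p0 = to_nbinom(m0, k)
    n1, p1 = to_nbinom(m1, k)
    llr = np.sum(nbinom.logpmf(counts, n1, p1) - nbinom.logpmf(counts, n0, p0))
    lower, upper = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    if llr <= lower:
        return "below threshold"
    if llr >= upper:
        return "above threshold"
    return "keep sampling"

print(sprt_decision([0, 2, 1, 0, 4]))  # pupae counted per person, per house
```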
Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, A.R.; Moreno, S.; Deringer, J.
1984-08-01
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use with the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data for each city are summarized in a number of ways to provide the differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer-generated tables.
Gallium arsenide processing elements for motion estimation full-search algorithm
NASA Astrophysics Data System (ADS)
Lopez, Jose F.; Cortes, P.; Lopez, S.; Sarmiento, Roberto
2001-11-01
The block-matching motion estimation algorithm (BMA) is the most popular method for motion-compensated coding of image sequences. Among the several possible search methods for this algorithm, the full-search BMA (FBMA) has attracted great interest from the scientific community due to its regularity, optimal solution, and low control overhead, which simplify its VLSI realization. Its main drawback, on the other hand, is that it demands an enormous amount of computation. There are different ways of overcoming this factor; the one adopted in this article is the use of advanced technologies, such as gallium arsenide (GaAs), together with different techniques to reduce area overhead. By exploiting the properties of GaAs, improvements can be obtained in the implementation of feasible systems for real-time video compression architectures. Different primitives used in the implementation of processing elements (PEs) for an FBMA scheme are presented. As a result, PEs running at 270 MHz have been developed in order to study their functionality and performance. From these results, an implementation for MPEG applications is proposed, leading to an architecture running at 145 MHz with a power dissipation of 3.48 W and an area of 11.5 mm2.
Physics-based coastal current tomographic tracking using a Kalman filter.
Wang, Tongchen; Zhang, Ying; Yang, T C; Chen, Huifang; Xu, Wen
2018-05-01
Ocean acoustic tomography can be used, based on measurements of two-way travel-time differences between nodes deployed on the perimeter of the survey area, to invert for and map the ocean current inside the area. Data at different times can be related using a Kalman filter; given an ocean circulation model, one can in principle nowcast and even forecast the current distribution from an initial distribution and/or the travel-time difference data on the boundary. However, an ocean circulation model requires many inputs (many of them often not available) and is impractical for estimation of the current field. A simplified form of the discretized Navier-Stokes equation is used to show that the future velocity state is just a weighted spatial average of the current state. These weights could be obtained from an ocean circulation model, but here, in a data-driven approach, auto-regressive methods are used to obtain the time- and space-dependent weights from the data. It is shown, based on simulated data, that the current field tracked using a Kalman filter (with an arbitrary initial condition) is more accurate than that estimated by standard methods in which data at different times are treated independently. Real data are also examined.
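The tracking step is a standard Kalman recursion whose state-transition matrix F holds the learned spatial-averaging weights. The sketch below is generic and uses toy matrices; constructing F from auto-regressive fits and H from the tomographic geometry is the paper's contribution and is not reproduced here.

```python
# One Kalman step: predict the current field with spatial-averaging weights,
# then update with travel-time-difference observations.
import numpy as np

def kalman_step(x, P, F, Q, H, R, y):
    x_pred = F @ x                       # next state = weighted spatial average
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Toy dimensions: 4 grid cells, 2 travel-time-difference observations.
rng = np.random.default_rng(0)
F = 0.25 * np.ones((4, 4))               # AR-learned weights (toy)
H = rng.normal(size=(2, 4))              # tomographic observation operator (toy)
x, P = np.zeros(4), np.eye(4)
x, P = kalman_step(x, P, F, 0.01 * np.eye(4), H, 0.1 * np.eye(2),
                   y=np.array([0.3, -0.1]))
print(x)
```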
An investigation of soil-structure interaction effects observed at the MIT Green Building
Taciroglu, Ertugrul; Çelebi, Mehmet; Ghahari, S. Farid; Abazarsa, Fariba
2016-01-01
The soil-foundation impedance function of the MIT Green Building is identified from its response signals recorded during an earthquake. Estimating foundation impedance functions from seismic response signals is a challenging task because (1) the foundation input motions (FIMs) are not directly measurable, (2) the as-built properties of the superstructure are only approximately known, and (3) the soil-foundation impedance functions are inherently frequency-dependent. In the present study, these difficulties are circumvented by using, in succession, a blind modal identification (BMID) method, a simplified Timoshenko beam model (TBM), and a parametric updating of transfer functions (TFs). First, the flexible-base modal properties of the building are identified from response signals using the BMID method. Then, a flexible-base TBM is updated using the identified modal data. Finally, the frequency-dependent soil-foundation impedance function is estimated by minimizing the discrepancy between TFs of pairs of instrumented floors obtained (1) experimentally from earthquake data and (2) analytically from the updated TBM. Using the fully identified flexible-base TBM, the FIMs, as well as building responses at locations without instruments, can be predicted, as demonstrated in the present study.
Towing Tank Tests on a Ram Wing in a Rectangular Guideway
DOT National Transportation Integrated Search
1973-07-01
The object of the study was to set the theoretical and experimental basis for a preliminary design of a ram wing vehicle. A simplified one-dimensional mathematical model is developed in an attempt to estimate the stability derivatives of this type of...
NASA Astrophysics Data System (ADS)
Zuccarello, Felice; Raudino, Antonio; Buemi, Giuseppe
1980-03-01
The interaction between the anionic site of cholinesterase and the cationic end of acetylcholine is estimated by considering a simplified model. The effect of the aqueous environment on the stability of the aggregate is considered.
Simplified human thermoregulatory model for designing wearable thermoelectric devices
NASA Astrophysics Data System (ADS)
Wijethunge, Dimuthu; Kim, Donggyu; Kim, Woochul
2018-02-01
Research on wearable and implantable devices has become popular, driven by strong market demand. A precise understanding of the thermal properties of human skin, which are not constant but vary with ambient conditions, is required for the development of such devices. In this paper, we present a simplified human thermoregulatory model for accurately estimating the thermal properties of the skin without rigorous calculation. The proposed model considers a variable blood flow rate through the skin, evaporation functions, and variable convection heat transfer from the skin surface. In addition, wearable thermoelectric generation (TEG) and refrigeration devices were simulated. We found that deviations of 10-60% can result when estimating TEG performance without considering a human thermoregulatory model, owing to the fact that the thermal resistance of human skin adapts to the ambient conditions. The simplicity of the modeling procedure presented in this work could be beneficial for optimizing and predicting the performance of any application that is directly coupled with the thermal properties of the skin.
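Why the skin model matters can be seen from a simple thermal-resistance divider. The sketch below compares a TEG power estimate with an "adapted" versus a constant skin resistance; all resistances and the power prefactor are illustrative assumptions, not values from the paper.

```python
# Series thermal circuit: body core -> skin -> TEG -> convection to ambient.
# Only the temperature drop across the TEG produces power.
def teg_power(t_core, t_amb, r_skin, r_teg, r_conv, k=0.05):
    dt_teg = (t_core - t_amb) * r_teg / (r_skin + r_teg + r_conv)
    return k * dt_teg**2  # matched-load TEG power scales with dT^2 (prefactor assumed)

print(teg_power(37.0, 20.0, r_skin=0.02, r_teg=0.05, r_conv=0.08))  # adapted skin
print(teg_power(37.0, 20.0, r_skin=0.06, r_teg=0.05, r_conv=0.08))  # constant guess
```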
A simplified economic filter for open-pit gold-silver mining in the United States
Singer, Donald A.; Menzie, W. David; Long, Keith R.
1998-01-01
In resource assessments of undiscovered mineral deposits and in the early stages of exploration, including planning, there is a need for prefeasibility cost models. In exploration, models that filter economic from uneconomic deposits help to focus on targets that can really benefit the exploration enterprise. In resource assessment, these models can be used to eliminate deposits that would probably be uneconomic even if discovered. The U.S. Bureau of Mines (USBM) previously developed simplified cost models for such problems (Camm, 1991). These cost models estimate operating and capital expenditures for a mineral deposit given its tonnage, grade, and depth, and were also incorporated in USBM prefeasibility software (Smith, 1991). Because the cost data used to estimate operating and capital costs in these models are now over ten years old, we decided that it was necessary to test these equations with more current data. We limited this study to open-pit gold-silver mines located in the United States.
Modeling and Assessment of GPS/BDS Combined Precise Point Positioning.
Chen, Junping; Wang, Jungang; Zhang, Yize; Yang, Sainan; Chen, Qian; Gong, Xiuqiang
2016-07-22
The Precise Point Positioning (PPP) technique enables stand-alone receivers to obtain cm-level positioning accuracy. Observations from multiple GNSS systems can provide users with improved positioning accuracy, reliability, and availability. In this paper, we present and evaluate GPS/BDS combined PPP models, including the traditional model and a simplified model, in which the inter-system bias (ISB) is treated in different ways. To evaluate the performance of combined GPS/BDS PPP, kinematic and static PPP positions are compared with the IGS daily estimates, using one month of GPS/BDS data from 11 IGS Multi-GNSS Experiment (MGEX) stations. The results indicate apparent improvement of GPS/BDS combined PPP solutions in both static and kinematic cases, with much smaller standard deviations in the distribution of coordinate RMS statistics. Comparisons between the traditional and simplified combined PPP models show no difference in coordinate estimates, with the inter-system biases between GPS and BDS assimilated into the receiver clock, ambiguities, and pseudo-range residuals accordingly.
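A hedged illustration of the two ISB treatments, with notation assumed rather than taken from the paper (code observation equations for GPS, superscript G, and BDS, superscript C):

```latex
\begin{align*}
&\text{Traditional model (one receiver clock, explicit ISB):}\\
&\quad P^{G} = \rho^{G} + c\,\delta t_{r} + T + \varepsilon^{G}, \qquad
        P^{C} = \rho^{C} + c\,\delta t_{r} + c\,\mathrm{ISB} + T + \varepsilon^{C},\\
&\text{Simplified model (one receiver clock per constellation, ISB absorbed):}\\
&\quad P^{C} = \rho^{C} + c\,\delta t_{r}^{C} + T + \varepsilon^{C}.
\end{align*}
```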
Electric Power Distribution System Model Simplification Using Segment Substitution
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...
2017-09-20
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Discontinuous Galerkin Methods for NonLinear Differential Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy; Mansour, Nagi (Technical Monitor)
2001-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the discontinuous Galerkin (DG) finite element method have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit global entropy stability properties of the PDE (partial differential equation) system. Central to the development of the simplified DG methods is the Eigenvalue Scaling Theorem which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem with detailed consideration given to the Euler equations of gas dynamics and extended conservation law systems derivable as moments of the Boltzmann equation. Using results from kinetic Boltzmann moment closure theory, we then derive and prove energy stability for several approximate DG fluxes which have practical and theoretical merit.
Eddy, Sean R.
2008-01-01
Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments.
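Under these conjectures, E-values follow from closed-form tails with λ = log 2; only the location parameter τ must be fitted. The sketch below assumes an illustrative τ and database size.

```python
# E-values for probabilistic local alignment scores under the conjectured
# tails: Gumbel for Viterbi bit scores, exponential for Forward bit scores.
import numpy as np

LAMBDA = np.log(2.0)  # conjectured constant lambda for probabilistic models

def evalue_viterbi(score_bits, tau, n_comparisons):
    # Gumbel tail: P(S > x) = 1 - exp(-exp(-lambda * (x - tau)))
    p = 1.0 - np.exp(-np.exp(-LAMBDA * (score_bits - tau)))
    return n_comparisons * p

def evalue_forward(score_bits, tau, n_comparisons):
    # Exponential high-scoring tail with the same lambda.
    return n_comparisons * np.exp(-LAMBDA * (score_bits - tau))

print(evalue_viterbi(25.0, -3.0, 10_000))  # tau = -3 bits is a placeholder
print(evalue_forward(25.0, -3.0, 10_000))
```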
Probabilistic risk analysis of building contamination.
Bolster, D T; Tartakovsky, D M
2008-10-01
We present a general framework for probabilistic risk assessment (PRA) of building contamination. PRA provides a powerful tool for the rigorous quantification of risk in contamination of building spaces. A typical PRA starts by identifying relevant components of a system (e.g. ventilation system components, potential sources of contaminants, remediation methods) and proceeds by using available information and statistical inference to estimate the probabilities of their failure. These probabilities are then combined by means of fault-tree analyses to yield probabilistic estimates of the risk of system failure (e.g. building contamination). A sensitivity study of PRAs can identify features and potential problems that need to be addressed with the most urgency. Often PRAs are amenable to approximations, which can significantly simplify the approach. All these features of PRA are presented in this paper via a simple illustrative example, which can be built upon in further studies. The tool presented here can be used to design and maintain adequate ventilation systems to minimize exposure of occupants to contaminants.
Size distribution of Portuguese firms between 2006 and 2012
NASA Astrophysics Data System (ADS)
Pascoal, Rui; Augusto, Mário; Monteiro, A. M.
2016-09-01
This study describes the size distribution of Portuguese firms, as measured by annual sales and total assets, between 2006 and 2012, giving an economic interpretation of the evolution of the distribution over time. Three distributions are fitted to the data: the lognormal, the Pareto (with Zipf as a particular case), and the Simplified Canonical Law (SCL). We present the main arguments found in the literature to justify the use of these distributions and emphasize the interpretation of the SCL coefficients. Estimation methods include maximum likelihood, modified ordinary least squares in log-log scale, and nonlinear least squares using the Levenberg-Marquardt algorithm. Applying these approaches to the Portuguese firm data, we analyze whether the evolution of the estimated parameters of the lognormal and SCL fits accords with the known recession period after 2008. This is confirmed for sales but not for assets, leading to the conclusion that the former variable is a better proxy for firm size.
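A minimal sketch of the fitting step for two of the three laws, using maximum likelihood on synthetic data (SciPy has no built-in SCL, so it is omitted here; the data below are generated, not the study's firm records).

```python
# Maximum-likelihood fits of lognormal and Pareto laws to firm-size data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sales = rng.lognormal(mean=12.0, sigma=1.8, size=5000)  # synthetic firm sizes

sigma, loc, scale = stats.lognorm.fit(sales, floc=0)    # lognormal MLE
b, loc_p, scale_p = stats.pareto.fit(sales, floc=0)     # Pareto MLE (tail index b)
print("lognormal sigma:", round(sigma, 2), "| Pareto tail index:", round(b, 2))
```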
Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.
2016-01-01
For this study, a methodology was developed for assessing the impacts of wind energy generation on populations of birds and bats at regional to national scales. The approach combines existing methods in applied ecology for prioritizing species in terms of their potential risk from wind energy facilities and for estimating the impacts of fatalities caused by collisions with wind energy infrastructure on population status and trend. Methods include a qualitative prioritization approach, demographic models, and potential biological removal. The approach can be used to prioritize species in need of more thorough study as well as to identify species at minimal risk. However, the components of this methodology require simplifying assumptions, and the data required may be unavailable or of poor quality for some species. These issues should be carefully considered before using the methodology. The approach will increase in value as more data become available and will broaden the understanding of anthropogenic sources of mortality in bird and bat populations.
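Of the three components, potential biological removal has a standard closed form (Wade, 1998): PBR = N_min · (1/2)R_max · F_r. A one-function sketch with illustrative inputs follows.

```python
# Potential biological removal: an upper bound on anthropogenic take
# compatible with population persistence.
def potential_biological_removal(n_min, r_max, recovery_factor):
    # n_min: minimum population estimate; r_max: maximum population growth
    # rate; recovery_factor: F_r in (0, 1], lower for at-risk species.
    return n_min * 0.5 * r_max * recovery_factor

# e.g., a minimum population estimate of 100,000 birds, R_max = 0.15, F_r = 0.5
print(potential_biological_removal(100_000, 0.15, 0.5))  # -> 3750.0 per year
```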
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
Real-time, accurate measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. Existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of a systematic analysis of the sources of geomagnetic-field measurement error, builds a complete measurement model that introduces the previously unconsidered geomagnetic daily variation field. We propose an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating the parameters, yielding the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the true value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has broad applicability due to its easy data collection and its independence from high-precision measurement instruments.
Simplifying BRDF input data for optical signature modeling
NASA Astrophysics Data System (ADS)
Hallberg, Tomas; Pohl, Anna; Fagerström, Jan
2017-05-01
Scene simulation of optical signature properties using signature codes normally requires input of various parameterized measurement data for surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can be performed in most spectroscopy labs, whereas measuring the BRDF is more complicated and may not be available at all in many optical labs. We present a method for deriving the necessary BRDF data for modeling software directly from DHR measurements, using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing the results obtained using estimated and measured BRDF data as model input. These results show that using this method causes no significant loss in modeling accuracy.
NASA Astrophysics Data System (ADS)
Feldt, Jonas; Miranda, Sebastião; Pratas, Frederico; Roma, Nuno; Tomás, Pedro; Mata, Ricardo A.
2017-12-01
In this work, we present an optimized perturbative quantum mechanics/molecular mechanics (QM/MM) method for use in Metropolis Monte Carlo simulations. The model adopted is particularly tailored for the simulation of molecular systems in solution but can be readily extended to other applications, such as catalysis in enzymatic environments. The electrostatic coupling between the QM and MM systems is simplified by applying perturbation theory to estimate the energy changes caused by a movement in the MM system. This approximation, together with the effective use of GPU acceleration, leads to a negligible added computational cost for the sampling of the environment. Benchmark calculations are carried out to evaluate the impact of the approximations applied and the overall computational performance.
Rearranging Pionless Effective Field Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin Savage; Silas Beane
2001-11-19
We point out a redundancy in the operator structure of the pionless effective field theory which dramatically simplifies computations. This redundancy is best exploited by using dibaryon fields as fundamental degrees of freedom. In turn, this suggests a new power counting scheme which sums range corrections to all orders. We explore this method with a few simple observables: the deuteron charge form factor, n p -> d gamma, and Compton scattering from the deuteron. Higher-dimension operators involving electroweak gauge fields are not renormalized by the s-wave strong interactions, and therefore do not scale with inverse powers of the renormalization scale. Thus, naive dimensional analysis of these operators is sufficient to estimate their contribution to a given process.
Exhaust pressure pulsation observation from turbocharger instantaneous speed measurement
NASA Astrophysics Data System (ADS)
Macián, V.; Luján, J. M.; Bermúdez, V.; Guardiola, C.
2004-06-01
In internal combustion engines, instantaneous exhaust pressure measurements are difficult to perform in a production environment. The high temperature of the exhaust manifold and the pulsating character of the pressure make its application to exhaust gas recirculation control algorithms impossible. In this paper an alternative method for estimating the exhaust pressure pulsation is presented. A numerical model is built which enables the exhaust pressure pulses to be predicted from instantaneous turbocharger speed measurements. Although the model is data based, a theoretical description of the process is also provided. This combined approach makes it possible to export the model to different engine operating points. The compressor contribution to the turbocharger speed pulsation is also discussed extensively; this contribution is initially neglected, and the effects of this simplified approach are analysed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frey, K.A.; Hichwa, R.D.; Ehrenkaufer, R.L.
1985-10-01
A tracer kinetic method is developed for the in vivo estimation of high-affinity radioligand binding to central nervous system receptors. Ligand is considered to exist in three brain pools corresponding to free, nonspecifically bound, and specifically bound tracer. These environments, in addition to that of intravascular tracer, are interrelated by a compartmental model of in vivo ligand distribution. A mathematical description of the model is derived, which allows determination of regional blood-brain barrier permeability, nonspecific binding, the rate of receptor-ligand association, and the rate of dissociation of bound ligand, from the time courses of arterial blood and tissue tracer concentrations. The term "free receptor density" is introduced to describe the receptor population measured by this method. The technique is applied to the in vivo determination of regional muscarinic acetylcholine receptors in the rat, with the use of [3H]scopolamine. Kinetic estimates of free muscarinic receptor density are in general agreement with binding capacities obtained from previous in vivo and in vitro equilibrium binding studies. In the striatum, however, kinetic estimates of free receptor density are less than those in the neocortex--a reversal of the rank ordering of these regions derived from equilibrium determinations. A simplified model is presented that is applicable to tracers that do not readily dissociate from specific binding sites during the experimental period.
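The compartmental structure can be written as a small ODE system. The sketch below is a generic three-compartment ligand model (free, nonspecifically bound, specifically bound) under assumed rate constants and a toy arterial input function; the symbols and values are illustrative, not the paper's fitted parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, K1, k2, k5, k6, kon_Bfree, koff, Ca):
    # C_f free, C_ns nonspecifically bound, C_b specifically bound tracer;
    # Ca(t) is the arterial input, K1/k2 the blood-brain barrier exchange,
    # kon_Bfree the association rate scaled by free receptor density.
    C_f, C_ns, C_b = y
    dC_f = (K1 * Ca(t) - k2 * C_f - k5 * C_f + k6 * C_ns
            - kon_Bfree * C_f + koff * C_b)
    dC_ns = k5 * C_f - k6 * C_ns
    dC_b = kon_Bfree * C_f - koff * C_b
    return [dC_f, dC_ns, dC_b]

Ca = lambda t: np.exp(-t / 10.0)            # toy arterial input function
sol = solve_ivp(model, (0.0, 60.0), [0.0, 0.0, 0.0],
                args=(0.3, 0.15, 0.2, 0.1, 0.05, 0.01, Ca),
                dense_output=True)
tissue = sol.y.sum(axis=0)                  # total tissue tracer vs. time
```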
Jaiswal, Roshni; Shenoy, Nina; Thomas, Biju
2016-01-01
Background: Stress classically describes a destructive notion that can have a bearing on one's physical and mental health. It may also add to an increased propensity to periodontal disease. Aim: To investigate the association between psychological stress and serum cortisol levels in patients with chronic periodontitis. Materials and Methods: Forty subjects were recruited from the outpatient department at the Department of Periodontics, from a college in Mangalore, and divided into two groups: twenty healthy controls and twenty stressed subjects with chronic periodontitis. The clinical examination included the assessment of probing pocket depth, clinical attachment level and oral hygiene index-simplified. Serum cortisol levels were estimated biochemically using the enzyme-linked immunosorbent assay method, and psychological stress was estimated with a questionnaire. Results: Descriptive statistics such as mean and standard deviation were used to summarize the collected data. The independent-sample t-test was used for comparison, and correlation was evaluated using Pearson's correlation test. High serum cortisol levels and psychological stress were positively linked with chronic periodontitis, establishing a risk profile with a significant correlation (P < 0.05). Conclusion: Routine serum cortisol assessment may be a reasonable and valuable investigative indicator for detecting stress in periodontitis patients, as stress should be considered an important risk factor for periodontal disease. PMID:28298818
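For readers wanting to reproduce the statistical workflow, the sketch below runs an independent-sample t-test and Pearson's correlation on synthetic data; all numbers are invented placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical illustrative data: serum cortisol (arbitrary units) in healthy
# controls vs. stressed chronic-periodontitis subjects, n = 20 per group.
controls = rng.normal(10.0, 2.0, 20)
periodontitis = rng.normal(14.0, 2.5, 20)

t, p = stats.ttest_ind(periodontitis, controls)      # group comparison
stress_scores = rng.normal(20.0, 5.0, 20)            # questionnaire scores
r, p_corr = stats.pearsonr(stress_scores, periodontitis)
print(f"t = {t:.2f} (p = {p:.3g}); Pearson r = {r:.2f} (p = {p_corr:.3g})")
```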
Derivative-free generation and interpolation of convex Pareto optimal IMRT plans
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk
2006-12-01
In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
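A minimal way to generate an initial set of Pareto optimal points for a convex bi-objective problem is weighted-sum scalarization, sketched below on a toy problem standing in for the dose-heterogeneity/organ-sparing trade-off. This illustrates the convexity property the Sandwich algorithm exploits; it is not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex objectives standing in for dose heterogeneity and organ sparing.
f1 = lambda x: float(np.sum(x ** 2))
f2 = lambda x: float(np.sum((x - 1.0) ** 2))

def pareto_point(w):
    # Weighted-sum scalarization: for convex objectives, every weight w in
    # (0, 1) yields a point on the Pareto efficient frontier.
    res = minimize(lambda x: w * f1(x) + (1.0 - w) * f2(x), x0=np.zeros(3))
    return f1(res.x), f2(res.x)

# A small initial set of Pareto optimal plans; chords between consecutive
# points give a piecewise-linear bound on the convex frontier, which a
# Sandwich-style scheme refines where the bound gap is largest.
points = sorted(pareto_point(w) for w in (0.1, 0.5, 0.9))
print(points)
```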
Yan, Xuemei; Zhang, Qianying; Feng, Fang
2016-04-01
Da-Huang-Xiao-Shi decoction, consisting of Rheum officinale Baill, Mirabilitum, Phellodendron amurense Rupr. and Gardenia jasminoides Ellis, is a traditional Chinese medicine used for the treatment of jaundice. As described in "Jin Kui Yao Lue", a traditional multistep decoction of Da-Huang-Xiao-Shi decoction is required, whereas a simplified one-step decoction has been used in recent reports. To investigate the chemical difference between the decoctions obtained by the traditional and simplified preparations, a sensitive and reliable approach of high-performance liquid chromatography coupled with diode-array detection and electrospray ionization time-of-flight mass spectrometry was established. As a result, a total of 105 compounds were detected and identified. Analysis of the chromatogram profiles of the two decoctions showed that many compounds in the simplified preparation had changed markedly compared with those in the traditional preparation. These changes in constituents are bound to cause differences in the therapeutic effects of the two decoctions. The present study demonstrated that the preparation method significantly affects the holistic quality of traditional Chinese medicines and that the use of a suitable preparation method is crucial for these medicines to produce their intended clinical curative effect. These results elucidate the scientific basis of traditional preparation methods in Chinese medicines. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Giovenzana, Valentina; Civelli, Raffaele; Beghi, Roberto; Oberti, Roberto; Guidetti, Riccardo
2015-11-01
The aim of this work was to test a simplified optical prototype for rapid estimation of the ripening parameters of white grape for Franciacorta wine directly in the field. Spectral acquisition based on reflectance at four wavelengths (630, 690, 750 and 850 nm) was proposed. The integration of a simple processing algorithm in the microcontroller software would allow real-time visualization of spectral reflectance values. Non-destructive analyses were carried out on 95 grape bunches for a total of 475 berries. Samplings were performed weekly during the last ripening stages. Optical measurements were carried out both with the simplified system and with a portable commercial vis/NIR spectrophotometer as reference instrument for performance comparison. Chemometric analyses were performed in order to extract the maximum useful information from the optical data. Principal component analysis (PCA) was performed for a preliminary evaluation of the data. Correlations between the optical data matrix and ripening parameters (total soluble solids content, SSC; titratable acidity, TA) were computed using partial least squares (PLS) regression for the spectra and multiple linear regression (MLR) for data from the simplified device. Classification analyses were also performed with the aim of discriminating ripe from unripe samples. PCA, MLR and classification analyses show the effectiveness of the simplified system in separating samples among different sampling dates and in discriminating ripe from unripe samples. Finally, simple equations for SSC and TA prediction were calculated. Copyright © 2015 Elsevier B.V. All rights reserved.
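As a sketch of the MLR step for a four-channel device of this kind, the following fits soluble solids content from synthetic reflectance data at 630, 690, 750 and 850 nm; the coefficients and noise levels are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic reflectance at the four wavelengths for 475 berries, and a
# made-up linear relation to SSC (degrees Brix) plus measurement noise.
R = rng.uniform(0.05, 0.6, size=(475, 4))
ssc = 5.0 + R @ np.array([8.0, -6.0, 12.0, 4.0]) + rng.normal(0.0, 0.5, 475)

X = np.column_stack([np.ones(len(R)), R])        # add intercept column
beta, *_ = np.linalg.lstsq(X, ssc, rcond=None)   # MLR calibration
ssc_pred = X @ beta
r2 = 1 - np.sum((ssc - ssc_pred) ** 2) / np.sum((ssc - ssc.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```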
NASA Technical Reports Server (NTRS)
Seguin, B.; Petit, V.; Devillard, R.; Reich, P.; Thouy, G. (Principal Investigator)
1980-01-01
Evapotranspiration was calculated for both the dry and irrigated zones by four methods, which were compared with the energy balance method serving as a reference. Two methods did not involve the surface temperature: ET(Rn) = R(n), liable to be valid under wet conditions, and ET(eq) = [delta/(delta + gamma)]R(n), i.e., the first term of Penman's equation, adapted to moderately dry conditions. The methods using surface temperature were the combined energy balance aerodynamic approach and a simplified approach proposed by Jackson et al. Tests show the surface temperature methods give relatively satisfactory results both in the dry and wet zones, with a precision of 10% to 15% compared with the reference method. As was to be expected, ET(eq) gave satisfactory results only in the dry zone and ET(Rn) only in the irrigated zone. Thermography increased the precision of the ET estimate relative to the most suitable classical method by 5% to 8% and is equally suitable for both dry and wet conditions. The Jackson method does not require extensive ground measurements or evaluation of the surface roughness.
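The equilibrium term can be computed directly. The sketch below implements ET(eq) = [delta/(delta + gamma)]R(n), using the standard FAO-56 expression for the slope of the saturation vapour pressure curve; it is a generic illustration, not the study's code:

```python
import numpy as np

def delta_svp(T):
    # Slope of the saturation vapour pressure curve (kPa per degC), FAO-56 form.
    es = 0.6108 * np.exp(17.27 * T / (T + 237.3))
    return 4098.0 * es / (T + 237.3) ** 2

def et_equilibrium(Rn, T, gamma=0.066):
    # Equilibrium evapotranspiration, the first term of Penman's equation;
    # output is in the same energy units as the net radiation Rn.
    d = delta_svp(T)
    return d / (d + gamma) * Rn

# Example: net radiation of 10 MJ m-2 day-1 at an air temperature of 25 degC.
print(et_equilibrium(10.0, 25.0))
```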
2018-03-01
of a Simplified Renal Replacement Therapy Suitable for Prolonged Field Care in a Porcine (Sus scrofa) Model of Acute Kidney Injury. Objectives/Background: Acute kidney injury (AKI) is a serious
A Simplified Technique for Evaluating Human "CCR5" Genetic Polymorphism
ERIC Educational Resources Information Center
Falteisek, Lukáš; Cerný, Jan; Janštová, Vanda
2013-01-01
To involve students in thinking about the problem of AIDS (which is important in the view of nondecreasing infection rates), we established a practical lab using a simplified adaptation of Thomas's (2004) method to determine the polymorphism of HIV co-receptor CCR5 from students' own epithelial cells. CCR5 is a receptor involved in inflammatory…
Model-based estimation for dynamic cardiac studies using ECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.
1994-06-01
In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed.
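The idea of estimating parameters directly from projection data can be sketched with a toy Poisson likelihood: a known system matrix maps a low-dimensional parameter vector to expected projection counts, and the ML estimate minimizes the negative log-likelihood. Everything below is a simplified stand-in, not the authors' heart model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(50, 3)))      # toy system matrix (resolution model)

def neg_log_likelihood(theta, counts):
    # Poisson negative log-likelihood of the projection data (constant dropped);
    # abs() keeps the expected counts nonnegative during the search.
    lam = A @ np.abs(theta) + 1e-9
    return float(np.sum(lam - counts * np.log(lam)))

true_theta = np.array([4.0, 1.0, 2.5])    # toy "physiological" parameters
counts = rng.poisson(A @ true_theta)      # simulated noisy projections
fit = minimize(neg_log_likelihood, x0=np.ones(3), args=(counts,))
theta_ml = np.abs(fit.x)                  # ML estimate, no intermediate image
print(theta_ml)
```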
A simplified dynamic model of the T700 turboshaft engine
NASA Technical Reports Server (NTRS)
Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.
1992-01-01
A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state-space models obtained at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real-time diagnostic scheme for fault detection and diagnostics, as well as for open-loop engine dynamics studies and closed-loop control analysis utilizing a user-generated control law.
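A minimal sketch of linking linear models follows: state-space matrices identified at two operating points are interpolated by an operating parameter and integrated with a simple Euler step. The matrices and names are invented for illustration, not identified T700 data:

```python
import numpy as np

# State-space matrices identified at two hypothetical operating points.
A0, A1 = np.array([[-1.0, 0.2], [0.0, -2.0]]), np.array([[-1.5, 0.1], [0.0, -2.5]])
B0, B1 = np.array([[1.0], [0.5]]), np.array([[0.8], [0.7]])

def step(x, u, p, dt=0.01):
    # Linked (scheduled) model: interpolate (A, B) with the operating
    # parameter p in [0, 1], then take a forward-Euler step of x' = Ax + Bu.
    A = (1.0 - p) * A0 + p * A1
    B = (1.0 - p) * B0 + p * B1
    return x + dt * (A @ x + B @ u)

x = np.zeros(2)
for _ in range(100):            # constant input at a mid-range operating point
    x = step(x, np.array([1.0]), p=0.5)
print(x)
```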
Analysis of temperature distribution in liquid-cooled turbine blades
NASA Technical Reports Server (NTRS)
Livingood, John N B; Brown, W Byron
1952-01-01
The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.
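As a flavour of the simplified shapes involved, the sketch below evaluates the classical one-dimensional fin solution for a blade-like section held at the coolant temperature at its root, with an assumed adiabatic tip; all property values are placeholders, not from the report:

```python
import numpy as np

def fin_temperature(x, L, h, P, k, A, T_gas, T_base):
    # Classical 1-D fin solution with adiabatic tip: the section is heated by
    # the gas along its surface and held at T_base at the liquid-cooled root.
    m = np.sqrt(h * P / (k * A))
    return T_gas + (T_base - T_gas) * np.cosh(m * (L - x)) / np.cosh(m * L)

x = np.linspace(0.0, 0.05, 6)     # span positions from root to tip (m)
print(fin_temperature(x, L=0.05, h=500.0, P=0.2, k=20.0, A=1e-4,
                      T_gas=1100.0, T_base=400.0))
```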
An eight-legged tactile sensor to estimate coefficient of static friction.
Wei Chen; Rodpongpun, Sura; Luo, William; Isaacson, Nathan; Kark, Lauren; Khamis, Heba; Redmond, Stephen J
2015-08-01
It is well known that a tangential force larger than the maximum static friction force is required to initiate sliding motion between two objects, a threshold governed by a material constant called the coefficient of static friction. Knowing the coefficient of static friction is therefore of great importance for robot grippers that must maintain a stable and precise grip on an object during various manipulation tasks. Importantly, it is most useful if grippers can estimate the coefficient of static friction without having to explicitly explore the object first, such as by lifting the object and reducing the grip force until it slips. A novel eight-legged sensor, based on simplified theoretical principles of friction, is presented here to estimate the coefficient of static friction between a planar surface and the prototype sensor. Each of the sensor's eight legs is straight and rigid, and oriented at a specified angle with respect to the vertical, allowing the sensor to estimate which of five ranges (5 = 8/2 + 1) the coefficient of static friction occupies. The coefficient of friction is estimated by determining whether the legs have slipped or not when pressed against a surface. The coefficients of static friction between the sensor and five different materials were estimated and compared to measurements from traditional methods. A least-squares linear fit of the sensor-estimated coefficient showed good correlation with the reference coefficient, with a gradient close to one and an r^2 value greater than 0.9.
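The sensing principle reduces to a simple inequality: a rigid leg inclined at angle theta from the vertical slips when tan(theta) exceeds the coefficient of static friction. The sketch below turns a slip/stick pattern from four leg-pair angles into a bracketing range for mu; the angles are hypothetical, not the prototype's:

```python
import numpy as np

leg_angles_deg = np.array([10, 20, 30, 40])   # one assumed angle per leg pair

def mu_range(slipped):
    # slipped[i] is True if the legs at leg_angles_deg[i] slipped when pressed.
    # Slipping implies mu < tan(theta); sticking implies mu >= tan(theta).
    bounds = np.tan(np.radians(leg_angles_deg))
    lower, upper = 0.0, np.inf
    for b, s in zip(bounds, slipped):
        if s:
            upper = min(upper, b)
        else:
            lower = max(lower, b)
    return lower, upper

# Legs at 30 and 40 degrees slipped, legs at 10 and 20 degrees did not:
print(mu_range([False, False, True, True]))   # mu between tan(20) and tan(30)
```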
From LCAs to simplified models: a generic methodology applied to wind power electricity.
Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle
2013-02-05
This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them a robust but simple support tool for assessing the environmental performance of energy systems.
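In the spirit of the simplified models described, GHG intensity can be expressed as life-cycle emissions amortized over the electricity produced, which is governed by the two key parameters identified. The sketch below uses invented placeholder values, not the study's fitted curves:

```python
def ghg_per_kwh(load_factor, lifetime_years,
                embodied_kg_co2eq=1.5e6, rated_kw=2000.0):
    # Life-cycle GHG intensity (g CO2-eq per kWh): fixed embodied emissions
    # divided by the electricity produced over the turbine lifetime, which
    # depends only on load factor and lifetime once the machine is fixed.
    produced_kwh = rated_kw * load_factor * lifetime_years * 8760.0
    return embodied_kg_co2eq * 1000.0 / produced_kwh

# Example: a 24% load factor over a 20-year lifetime.
print(ghg_per_kwh(0.24, 20))
```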
A simplified analytic form for generation of axisymmetric plasma boundaries
Luce, Timothy C.
2017-02-23
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
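For illustration, a superellipse basis curve can be generated directly from its parametric form; the sketch below omits the paper's per-quadrant constraint solving and uses hypothetical shape parameters:

```python
import numpy as np

def superellipse(R0, Z0, a, b, n, m=200):
    # Points on the superellipse |(R-R0)/a|^n + |(Z-Z0)/b|^n = 1, via the
    # standard parametrization with signed fractional powers of cos/sin.
    t = np.linspace(0.0, 2.0 * np.pi, m)
    R = R0 + a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    Z = Z0 + b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    return R, Z

# Hypothetical shape parameters for an elongated, squared-off boundary.
R, Z = superellipse(R0=1.7, Z0=0.0, a=0.6, b=1.1, n=2.5)
```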
NASA Technical Reports Server (NTRS)
Jones, Robert T
1937-01-01
A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
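The SADH features are straightforward to compute: histograms of pixel sums and differences at a fixed displacement replace the full co-occurrence matrix, which is where the storage saving comes from. Below is a generic sketch (Unser-style mean and contrast features), not the authors' implementation:

```python
import numpy as np

def sadh_features(img, dr=1, dc=1, levels=256):
    # Sum and difference histograms for grey-level image img at displacement
    # (dr, dc): two 1-D histograms instead of a levels x levels GLCM.
    a = img[:img.shape[0] - dr, :img.shape[1] - dc].astype(int)
    b = img[dr:, dc:].astype(int)
    s = (a + b).ravel()                       # sums  in [0, 2*levels - 2]
    d = (a - b).ravel()                       # diffs in [-(levels-1), levels-1]
    hs = np.bincount(s, minlength=2 * levels - 1) / s.size
    hd = np.bincount(d + levels - 1, minlength=2 * levels - 1) / d.size
    i = np.arange(2 * levels - 1)
    mean = np.sum(i * hs) / 2.0               # texture mean from the sum histogram
    contrast = np.sum((i - (levels - 1)) ** 2 * hd)  # contrast from differences
    return mean, contrast

img = np.random.default_rng(4).integers(0, 256, (64, 64))
print(sadh_features(img))
```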
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...
48 CFR 8.405-6 - Limiting sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...