An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is $J_{\mathrm{WLS}}(\hat{q}^{\,H}_{\mathrm{WLS}}) = 10.3040 \times 10^{6}$. Under the alternative hypothesis, the weighted least squares cost functional is $J_{\mathrm{WLS}}(\hat{q}_{\mathrm{WLS}}) = 8.8394 \times 10^{6}$. Thus the model
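For context, RSS-based model comparison tests of this kind typically compare the constrained and unconstrained cost functional values through a normalized statistic. A generic form (stated here as an assumption, since the excerpt is truncated) is

$$ U_n \;=\; n\,\frac{J_{\mathrm{WLS}}(\hat{q}^{\,H}_{\mathrm{WLS}}) - J_{\mathrm{WLS}}(\hat{q}_{\mathrm{WLS}})}{J_{\mathrm{WLS}}(\hat{q}_{\mathrm{WLS}})}, $$

where $n$ is the number of observations; the null hypothesis is rejected when $U_n$ exceeds a chi-square threshold.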
Performance of statistical models to predict mental health and substance abuse cost.
Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K
2006-10-26
Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model on the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used, and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
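As an illustration of the comparison metrics named above, here is a minimal Python sketch (the function names and decile grouping are assumptions, not the authors' code):

```python
import numpy as np

def rmse(y, yhat):
    # root mean square error; y, yhat: NumPy arrays of observed and predicted cost
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):
    # mean absolute prediction error (absolute dollars, not a percentage)
    return np.mean(np.abs(y - yhat))

def predictive_ratios(y, yhat, n_groups=10):
    # ratio of predicted to observed total cost within deciles of predicted cost
    order = np.argsort(yhat)
    return [yhat[g].sum() / y[g].sum() for g in np.array_split(order, n_groups)]
```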
Active Control of the Forced and Transient Response of a Finite Beam. M.S. Thesis
NASA Technical Reports Server (NTRS)
Post, John Theodore
1989-01-01
When studying structural vibrations resulting from a concentrated source, many structures may be modelled as a finite beam excited by a point source. The theoretical limit on cancelling the resulting beam vibrations by utilizing another point source as an active controller is explored. Three different types of excitation are considered: harmonic, random, and transient. In each case, a cost function is defined and minimized for numerous parameter variations. For the case of harmonic excitation, the cost function is obtained by integrating the mean squared displacement over a region of the beam in which control is desired. A controller is then found to minimize this cost function in the control interval. The control interval and controller location are continuously varied for several frequencies of excitation. The results show that control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam, but control over a subregion may be obtained even between resonant frequencies at the cost of increasing the vibration outside of the control region. For random excitation, the cost function is realized by integrating the expected value of the displacement squared over the interval of the beam in which control is desired. This is shown to yield the identical cost function as obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. A cost function representative of the beam vibration is obtained by integrating the transient displacement squared over a region of the beam and over all time. The form of the controller is chosen a priori as either one or two delayed pulses. Delays constrain the controller to be causal. The best possible control is then examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses. The two pulse controller gives better performance than a single pulse controller, but the cost of finding the optimal delay times for the additional controllers increases as the square of the number of control pulses.
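A compact statement of the harmonic-excitation cost function described above (the notation is assumed, not taken from the thesis):

$$ J \;=\; \int_{x_1}^{x_2} \overline{w^{2}(x,t)}\, dx, $$

where $w(x,t)$ is the beam displacement, the overbar denotes the time-averaged (mean) square, and $[x_1, x_2]$ is the control interval; the controller is chosen to minimize $J$ over that interval.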
1980-1981 Comparative Costs and Staffing Report for Physical Plants of Colleges and Universities.
ERIC Educational Resources Information Center
Association of Physical Plant Administrators of Universities and Colleges, Washington, DC.
Comparative costs of plant maintenance and operations functions, including staffing costs, for higher education institutions are presented for 1980-1981. The objective of the survey data is to promote comparisons of unit costs per gross square foot of the functions classified as maintenance and operations of plant, the number of full-time…
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e., the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF), using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than the other functions under various light-level conditions. In addition, the effectiveness of fast search algorithms has been verified.
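A minimal Python sketch of one similarity-function/fast-search pairing discussed above: the SDF scored with a three-step search over integer pixel shifts (this pairing and all names are illustrative assumptions; the paper also refines to sub-pixel centroids):

```python
import numpy as np

def sdf(a, b):
    # square difference function (SDF): one of the four similarity measures compared
    d = a - b
    return np.sum(d * d)

def three_step_search(image, template, start=(0, 0), step=4):
    """Three-step search (TSS): score a 3x3 grid of candidate shifts,
    re-centre on the best candidate, halve the step, and repeat.
    Only a handful of positions are evaluated instead of a full scan."""
    H, W = image.shape
    th, tw = template.shape
    y, x = start
    while step >= 1:
        best = (np.inf, y, x)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + th > H or xx + tw > W:
                    continue  # candidate window falls outside the image
                c = sdf(image[yy:yy + th, xx:xx + tw], template)
                if c < best[0]:
                    best = (c, yy, xx)
        _, y, x = best
        step //= 2
    return y, x  # integer-pixel location minimizing the SDF
```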
Investigation of test methods, material properties, and processes for solar cell encapsulants
NASA Technical Reports Server (NTRS)
1984-01-01
Photovoltaic (PV) modules consist of a string of electrically interconnected silicon solar cells capable of producing practical quantities of electrical power when exposed to sunlight. To ensure high reliability and long term performance, the functional components of the solar cell module must be adequately protected from the environment by some encapsulation technique. The encapsulation system must provide mechanical support for the cells and corrosion protection for the electrical components. The goal of the program is to identify and develop encapsulation systems consistent with the PV module operating requirements of 30 year life and a target cost of $0.70 per peak watt ($70 per square meter) (1980 dollars). Assuming a module efficiency of ten percent, which is equivalent to a power output of 100 watts per square meter in midday sunlight, the capital cost of the modules may be calculated to be $70.00 per square meter. Out of this cost goal, only 20 percent is available for encapsulation due to the high cost of the cells, interconnects, and other related components. The encapsulation cost allocation may then be stated as $14.00 per square meter, including all coatings, pottant and mechanical supports for the cells.
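The cost allocation above follows from simple arithmetic, restated here for clarity:

$$ \frac{\$0.70}{\mathrm{W_{peak}}} \times 100\;\frac{\mathrm{W_{peak}}}{\mathrm{m^2}} = \$70\ \mathrm{per\ m^2}, \qquad 0.20 \times \$70\ \mathrm{per\ m^2} = \$14\ \mathrm{per\ m^2}\ \text{for encapsulation.} $$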
Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O
2009-04-01
This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.
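In generic guaranteed cost terms (a standard formulation, not necessarily the paper's exact condition), one seeks a polynomial Lyapunov function $V(x) > 0$ such that

$$ J = \int_{0}^{\infty} \left( x(t)^{\top} Q\, x(t) + u(t)^{\top} R\, u(t) \right) dt \;\le\; V(x(0)), $$

so that minimizing the bound $V(x(0))$ subject to SOS feasibility conditions minimizes a guaranteed upper bound on the performance function.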
Extension of suboptimal control theory for flow around a square cylinder
NASA Astrophysics Data System (ADS)
Fujita, Yosuke; Fukagata, Koji
2017-11-01
We extend the suboptimal control theory to control of flow around a square cylinder, which has no point symmetry on the impulse response from the wall, in contrast to the circular cylinders and spheres previously studied. The cost functions examined are the pressure drag (J1), the friction drag (J2), the squared difference between target pressure and wall pressure (J3), and the time-averaged dissipation (J4). The control input is assumed to be continuous blowing and suction on the cylinder wall, and the feedback sensors are assumed to cover the entire wall surface. The control law is derived so as to minimize the cost function under the constraint of the linearized Navier-Stokes equation, and the impulse response fields to be convolved with the instantaneous flow quantities are numerically obtained. The amplitude of the control input is fixed so that the maximum blowing/suction velocity is 40% of the freestream velocity. When J2 is used as the cost function, the friction drag is reduced as expected, but the mean drag is found to increase. In contrast, when J1, J3, and J4 are used, the mean drag decreases by 21%, 12%, and 22%, respectively; in addition, vortex shedding is suppressed, which leads to reduction of lift fluctuations.
Complementary effect of patient volume and quality of care on hospital cost efficiency.
Choi, Jeong Hoon; Park, Imsu; Jung, Ilyoung; Dey, Asoke
2017-06-01
This study explores the direct effect of an increase in patient volume in a hospital and the complementary effect of quality of care on the cost efficiency of U.S. hospitals in terms of patient volume. The simultaneous equation model with three-stage least squares is used to measure the direct effect of patient volume and the complementary effect of quality of care and volume. Cost efficiency is measured with a data envelopment analysis method. Patient volume has a U-shaped relationship with hospital cost efficiency and an inverted U-shaped relationship with quality of care. Quality of care functions as a moderator for the relationship between patient volume and efficiency. This paper addresses the economically important question of the relationship of volume with quality of care and hospital cost efficiency. The three-stage least square simultaneous equation model captures the simultaneous effects of patient volume on hospital quality of care and cost efficiency.
Energy cost of square dancing.
Jetté, M; Inglis, H
1975-01-01
This experiment was concerned with determining the energy cost of two popular Western square dancing routines: the "Mish-Mash," which is a relatively fast-moving dance with quick movements, and the "Singing" dance, which is a slower and more deliberate type of dance. The subjects were four middle-aged couples, veteran members of a local square dancing club. Sitting and standing pulmonary ventilations were determined through the use of the Tissot gasometer. Kofranyi-Michaelis respirometers were employed for the dance routine ventilations. These apparatuses were fitted with a Monoghan neoprene cushion plastic mask. Gas samples were collected in polyethylene metallized bags and analyzed for O2 and CO2 content. The results indicated that for the males the net average energy cost was 0.085 kcal/min per kg for the "Mish-Mash" dance and 0.077 kcal/min per kg for the "Singing" dance. For the females, the costs were 0.088 and 0.084 kcal/min per kg, respectively. Averaging the two dances yields a caloric expenditure of 5.7 kcal/min for a 70-kg male and 5.2 kcal/min for a 60-kg female. During the course of a typical square dance evening, a 70-kg man would expend some 425 kcal, while a 60-kg female would burn some 390 kcal. The energy cost of the dances studied was determined to be within the permissible work load of a functional class 1 patient with diseases of the heart, as determined by the American Heart Association.
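The per-kilogram and whole-body figures above are mutually consistent; for the 70-kg male, averaging the two dances gives

$$ \frac{0.085 + 0.077}{2}\ \mathrm{kcal\;min^{-1}\,kg^{-1}} \times 70\ \mathrm{kg} \approx 5.7\ \mathrm{kcal/min}, $$

and 425 kcal per evening at that rate implies roughly 75 minutes of actual dancing.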
Preliminary Multi-Variable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Hendrichs, Todd
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
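The time trend can be written as a halving law (an illustrative restatement of the finding, not the published cost estimating relationship):

$$ C(t) \;=\; C_{0} \times 0.5^{\,(t - t_{0})/17\ \mathrm{yr}}, $$

i.e., a telescope of fixed specification is predicted to cost half as much if its technology is frozen 17 years later.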
Cost Modeling for Space Optical Telescope Assemblies
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Parametric cost models are used to plan missions, compare concepts and justify technology investments. This paper reviews an on-going effort to develop cost models for space telescopes. This paper summarizes the methodology used to develop cost models and documents how changes to the database have changed previously published preliminary cost models. While the cost models are evolving, the previously published findings remain valid: it costs less per square meter of collecting aperture to build a large telescope than a small telescope; technology development as a function of time reduces cost; and lower areal density telescopes cost more than more massive telescopes.
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient and computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
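A minimal Python sketch of the idea: stochastic gradient descent on an instantaneous total-least-squares cost that normalizes the squared error by $1 + \lVert w \rVert^2$ (a common TLS formulation; this is an illustrative assumption, not the paper's exact recursion):

```python
import numpy as np

def tlms(x, d, order=4, mu=0.005):
    """TLMS-style adaptive filter: stochastic gradient descent on the
    instantaneous TLS cost J(w) = e^2 / (1 + w'w), with e = d[n] - w'u[n].
    x, d: NumPy arrays of noisy input and desired (output) samples."""
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]               # tapped-delay-line regressor
        e = d[n] - w @ u                       # a priori error
        g = 1.0 + w @ w                        # TLS normalization term
        grad = -2.0 * e * u / g - 2.0 * e * e * w / (g * g)
        w -= mu * grad                         # O(order) work per sample
    return w
```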
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the 'square error' calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the 'absolute error' measures (SAR and SARD) are superior to the 'square error' measures (SSR and SSRD) in calculating the objective function for model calibration, and SAR behaved best (with the least error and highest efficiency). This study suggests that SSR might be overused in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
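The four candidate objective functions are one-liners in code; a Python sketch (the names mirror the abbreviations in the text, and the relative variants assume nonzero observations):

```python
import numpy as np

def objective_functions(obs, sim):
    # obs, sim: NumPy arrays of observed and simulated values
    err = sim - obs
    rel = err / obs          # relative deviation (assumes obs != 0)
    return {
        "SSR":  np.sum(err ** 2),        # sum of squared errors
        "SAR":  np.sum(np.abs(err)),     # sum of absolute errors
        "SSRD": np.sum(rel ** 2),        # sum of squared relative deviations
        "SARD": np.sum(np.abs(rel)),     # sum of absolute relative deviations
    }
```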
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
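Schematically (with assumed notation): because the equation error $e(t;\theta)$ of such a model is linear in the parameter vector $\theta$, the integrated cost

$$ J(\theta) = \int_{0}^{T} e(t;\theta)^{2}\, dt $$

is quadratic in $\theta$, so once the required integrals are evaluated from the input/output data, the minimizer is given by an explicit (non-iterative) least squares solution.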
Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.
2013-01-01
Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histogram were used for qualitative assessment of the method. PMID:24000283
An Alternative Procedure for Estimating Unit Learning Curves,
1985-09-01
the model accurately describes the real-life situation, i.e., when the model is properly applied to the data, it can be a powerful tool for ... predicting unit production costs. There are, however, some unique estimation problems inherent in the model. The usual method of generating predicted unit ... production costs attempts to extend properties of least squares estimators to nonlinear functions of these estimators. The result is biased estimates of
Solution for a bipartite Euclidean traveling-salesman problem in one dimension
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Di Gioacchino, Andrea; Gherardi, Marco; Malatesta, Enrico M.
2018-05-01
The traveling-salesman problem is one of the most studied combinatorial optimization problems, because of the simplicity in its statement and the difficulty in its solution. We characterize the optimal cycle for every convex and increasing cost function when the points are thrown independently and with an identical probability distribution in a compact interval. We compute the average optimal cost for every number of points when the distance function is the square of the Euclidean distance. We also show that the average optimal cost is not a self-averaging quantity by explicitly computing the variance of its distribution in the thermodynamic limit. Moreover, we prove that the cost of the optimal cycle is not smaller than twice the cost of the optimal assignment of the same set of points. Interestingly, this bound is saturated in the thermodynamic limit.
Square tubing reduces cost of telescoping bridge crane hoist
NASA Technical Reports Server (NTRS)
Bernstein, G.; Graae, J.; Schraidt, J.
1967-01-01
Using standard square tubing in a telescoping arrangement reduces the cost of a bridge crane hoist. Because surface tolerances of square tubing need not be as accurate as the tubing used previously and because no spline is necessary, the square tubing is significantly less expensive than splined telescoping tubes.
Exact solution for the optimal neuronal layout problem.
Chklovskii, Dmitri B
2004-10-01
Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
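With a squared-length wiring cost, the layout problem described above takes the schematic form

$$ \min_{x}\; \tfrac{1}{2} \sum_{i,j} A_{ij}\,(x_i - x_j)^2 \quad \text{subject to placement constraints}, $$

where $A_{ij}$ encodes the given neuronal connectivity and $x_i$ is the position of neuron $i$ (notation assumed).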
Study of the convergence behavior of the complex kernel least mean square algorithm.
Paul, Thomas K; Ogunfunmi, Tokunbo
2013-09-01
The complex kernel least mean square (CKLMS) algorithm was recently derived and allows for online kernel adaptive learning for complex data. Kernel adaptive methods can be used in finding solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves considering the circularity of the complex input signals and their effect on nonlinear learning. Simulations are used for verifying the analysis results.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.
1988-01-01
A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
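In schematic form (symbols assumed, following the description above), the design problem is

$$ \min_{\text{control law gains}} \; J = \lim_{t \to \infty} \mathbb{E}\!\left[ x^{\top} Q x + u^{\top} R u \right] \quad \text{subject to} \quad \sigma_{i}^{2} \le \bar{\sigma}_{i}^{2}, $$

where the $\sigma_i$ are steady-state rms values of the selected design responses; the analytical gradient expressions make the constrained optimization tractable.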
Adaptive Path Control of Surface Ships in Restricted Waters.
1980-08-01
[Fragmentary front matter: list-of-figures entries for optimal gains of the Tokyo Maru at H/T = ∞ and Fn = 0.116 under a random-walk disturbance model, and nomenclature entries including I_zz (yaw mass moment of inertia), J (optimal control / weighted least-squares cost function; also the RMS cost of eq. (70)), and K (Kalman-Bucy state estimator gain).]
A classical density-functional theory for describing water interfaces.
Hughes, Jessica; Krebs, Eric J; Roundy, David
2013-01-14
We develop a classical density functional for water which combines the White Bear fundamental-measure theory (FMT) functional for the hard sphere fluid with attractive interactions based on the statistical associating fluid theory variable range (SAFT-VR). This functional reproduces the properties of water at both long and short length scales over a wide range of temperatures and is computationally efficient, comparable to the cost of FMT itself. We demonstrate our functional by applying it to systems composed of two hard rods, four hard rods arranged in a square, and hard spheres in water.
Application of Output Predictive Algorithmic Control to a Terrain Following Aircraft System.
1982-03-01
non-linear regime the results from an optimal control solution may be questionable. ... strongly influenced by two other factors as well: the sample time T and the least-squares cost function Q. Unlike the deadbeat control law of Ref. ... design of aircraft control systems, since these methods offer tremendous insight into the dynamic behavior of the system at relatively low cost. However
The costs of turnover in nursing homes.
Mukamel, Dana B; Spector, William D; Limcangco, Rhona; Wang, Ying; Feng, Zhanlian; Mor, Vincent
2009-10-01
Turnover rates in nursing homes have been persistently high for decades, ranging upwards of 100%. The objective was to estimate the net costs associated with turnover of direct care staff in nursing homes. The sample comprised 902 nursing homes in California in 2005; data included Medicaid cost reports, the Minimum Data Set, Medicare enrollment files, the Census, and the Area Resource File. We estimated total cost functions, which included, in addition to exogenous outputs and wages, the facility turnover rate. Instrumental variable limited information maximum likelihood techniques were used for estimation to deal with the endogeneity of turnover and costs. The cost functions exhibited the expected behavior, with initially increasing and then decreasing returns to scale. The ordinary least squares estimate did not show a significant association between costs and turnover. The instrumental variable estimate of turnover costs was negative and significant (P = 0.039). The marginal cost savings associated with a 10 percentage point increase in turnover for an average facility was $167,063, or 2.9% of annual total costs. The net savings associated with turnover offer an explanation for the persistence of this phenomenon over the last decades, despite the many policy initiatives to reduce it. Future policy efforts need to recognize the complex relationship between turnover and costs.
Gaydos, Leonard
1978-01-01
The cost of classifying 5,607 square kilometers (2,165 sq. mi.) in the Portland area was less than 8 cents per square kilometer ($0.0788, or $0.2041 per square mile). Besides saving costs, this and other signature extension techniques may be useful in completing land use and land cover mapping in other large areas where multispectral and multitemporal Landsat data are available in digital form but other source materials are generally lacking.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescopes cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
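The per-area finding has a simple reading in cost-estimating-relationship form (an illustrative restatement, not the published coefficients): if cost scales with aperture diameter $D$ as $C \propto D^{\beta}$, then cost per unit collecting area scales as

$$ \frac{C}{\text{area}} \;\propto\; D^{\beta - 2}, $$

so any exponent $\beta < 2$ implies that cost per square meter of aperture falls as the telescope gets larger.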
Two-Year College LRC Buildings.
ERIC Educational Resources Information Center
Bock, D. Joleen
1983-01-01
Reports results of 1981-83 survey of 24 new and 22 remodeled 2-year college Learning Resource Centers, noting gross area, square foot cost, furniture/equipment costs, seats, and types of facilities. Major trends (square foot costs 1965-83, public catalog formats) and the flat roof disaster at Kauai Community College, Hawaii, are discussed. (EJS)
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Arizona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Arizona. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Hawaii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Hawaii. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Connecticut
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Connecticut. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Arnold, James O. (Technical Monitor)
1994-01-01
A new spin orbital basis is employed in the development of efficient open-shell coupled-cluster and perturbation theories that are based on a restricted Hartree-Fock (RHF) reference function. The spin orbital basis differs from the standard one in the spin functions that are associated with the singly occupied spatial orbital. The occupied orbital (in the spin orbital basis) is assigned the $\delta^{+} = \frac{1}{\sqrt{2}}(\alpha + \beta)$ spin function, while the unoccupied orbital is assigned the $\delta^{-} = \frac{1}{\sqrt{2}}(\alpha - \beta)$ spin function. The doubly occupied and unoccupied orbitals (in the reference function) are assigned the standard $\alpha$ and $\beta$ spin functions. The coupled-cluster and perturbation theory wave functions based on this set of "symmetric spin orbitals" exhibit much more symmetry than those based on the standard spin orbital basis. This, together with interacting space arguments, leads to a dramatic reduction in the computational cost for both coupled-cluster and perturbation theory. Additionally, perturbation theory based on "symmetric spin orbitals" obeys Brillouin's theorem provided that spin and spatial excitations are both considered. Other properties of the coupled-cluster and perturbation theory wave functions and models will be discussed.
Cost effectiveness of conventional versus LANDSAT use data for hydrologic modeling
NASA Technical Reports Server (NTRS)
George, T. S.; Taylor, R. S.
1982-01-01
Six case studies were analyzed to investigate the cost effectiveness of using land use data obtained from LANDSAT as opposed to conventionally obtained data. A procedure was developed to determine the relative effectiveness of the two alternative means of acquiring data for hydrological modelling. The cost of conventionally acquired data ranged between $3,000 and $16,000 for the six test basins. Information based on LANDSAT imagery cost between $2,000 and $5,000. Results of the effectiveness analysis show that the differences between the two methods are insignificant. From the cost comparison and the fact that each method, conventional and LANDSAT, is shown to be equally effective in developing land use data for hydrologic studies, the cost effectiveness of the conventional or LANDSAT method is found to be a function of basin size for the six test watersheds analyzed. The LANDSAT approach is cost effective for areas larger than 10 square miles.
Detection of Fiber Layer-Up Lamination Order of CFRP Composite Using Thermal-Wave Radar Imaging
NASA Astrophysics Data System (ADS)
Wang, Fei; Liu, Junyan; Liu, Yang; Wang, Yang; Gong, Jinlong
2016-09-01
In this paper, thermal-wave radar imaging (TWRI) is used as a nondestructive inspection method to evaluate carbon-fiber-reinforced-polymer (CFRP) composites. An inverse methodology that combines TWRI with a numerical optimization technique is proposed to determine the fiber layer-up lamination sequences of anisotropic CFRP composite. A 7-layer CFRP laminate [0°/45°/90°/0°]_s is heated by a chirp-modulated Gaussian laser beam, and the finite element method (FEM) is then employed to calculate the temperature field of the CFRP laminate. The phase based on lock-in correlation between the reference chirp signal and the thermal-wave signal is used to obtain the TWRI phase image, and the least squares method is applied to construct the cost function as the square of the difference between the phases from TWRI inspection and numerical calculation. A hybrid algorithm that combines simulated annealing with the Nelder-Mead simplex search method is employed to minimize the cost function and find the globally optimal layer-up sequence of the CFRP composite. The result shows the feasibility of estimating the fiber layer-up lamination sequences of CFRP composite under the discrete and constrained conditions of the optimization.
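A Python sketch of the inverse step: the least-squares phase cost over candidate lay-ups, minimized here by exhaustive search for brevity (the paper uses a hybrid simulated annealing / Nelder-Mead search, and `simulate_phase` is a hypothetical stand-in for the FEM forward model):

```python
import numpy as np
from itertools import product

ANGLES = (0, 45, 90)  # candidate ply orientations (assumption)

def phase_cost(layup, measured_phase, simulate_phase):
    """Least-squares cost: squared difference between the measured TWRI
    phase image and the FEM-predicted phase for a candidate lay-up."""
    residual = simulate_phase(layup) - measured_phase
    return np.sum(residual ** 2)

def best_layup(measured_phase, simulate_phase, n_plies=4):
    """Exhaustive search over orientation sequences; a stand-in for the
    hybrid simulated-annealing / Nelder-Mead search used in the paper."""
    candidates = product(ANGLES, repeat=n_plies)
    return min(candidates,
               key=lambda L: phase_cost(L, measured_phase, simulate_phase))
```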
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Texas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Texas. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Minnesota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Minnesota. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Indiana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Indiana. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Florida
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Florida. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Maine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Maine. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Vermont
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Vermont. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Michigan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Michigan. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Alabama
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Alabama. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of New Hampshire
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of New Hampshire. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of New Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of New Mexico. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Colorado
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Colorado. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Washington
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Washington. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Montana. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the District of Columbia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the District of Columbia. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Massachusetts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Massachusetts. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Oregon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Oregon. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Wisconsin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Wisconsin. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Ohio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Ohio. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of South Carolina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of South Carolina. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of North Carolina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of North Carolina. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Cost Effectiveness of ASHRAE Standard 90.1-2013 for the State of Iowa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Athalye, Rahul A.; Xie, YuLong
2015-12-01
Moving to the ASHRAE Standard 90.1-2013 (ASHRAE 2013) edition from Standard 90.1-2010 (ASHRAE 2010) is cost-effective for the State of Iowa. The table below shows the state-wide economic impact of upgrading to Standard 90.1-2013 in terms of the annual energy cost savings in dollars per square foot, additional construction cost per square foot required by the upgrade, and life-cycle cost (LCC) per square foot. These results are weighted averages for all building types in all climate zones in the state, based on weightings shown in Table 4. The methodology used for this analysis is consistent with the methodology used in the national cost-effectiveness analysis. Additional results and details on the methodology are presented in the following sections. The report provides analysis of two LCC scenarios: Scenario 1, representing publicly-owned buildings, considers initial costs, energy costs, maintenance costs, and replacement costs—without borrowing or taxes. Scenario 2, representing privately-owned buildings, adds borrowing costs and tax impacts.
Antiretroviral drug costs and prescription patterns in British Columbia, Canada: 1996-2011.
Nosyk, Bohdan; Montaner, Julio S G; Yip, Benita; Lima, Viviane D; Hogg, Robert S
2014-04-01
Treatment options and therapeutic guidelines have evolved substantially since highly active antiretroviral treatment (HAART) became the standard of HIV care in 1996. We conducted the present population-based analysis to characterize the determinants of direct costs of HAART over time in British Columbia, Canada. We considered individuals ever receiving HAART in British Columbia from 1996 to 2011. Linear mixed-effects regression models were constructed to determine the effects of demographic indicators, clinical stage, and treatment characteristics on quarterly costs of HAART (in 2010$CDN) among individuals initiating in different temporal periods. The least-square mean values were estimated by CD4 category and over time for each temporal cohort. Longitudinal data on HAART recipients (N = 9601, 17.6% female, mean age at initiation = 40.5) were analyzed. Multiple regression analyses identified demographics, treatment adherence, and pharmacological class to be independently associated with quarterly HAART costs. Higher CD4 cell counts were associated with modestly lower costs among pre-HAART initiators [least-square means (95% confidence interval): CD4 > 500: 4674 (4632-4716); CD4 350-499: 4765 (4721-4809); CD4 200-349: 4826 (4780-4871); CD4 < 200: 4809 (4759-4859)]; however, these differences were not significant among post-2003 HAART initiators. Population-level mean costs increased through 2006 and then stabilized; post-2003 HAART initiators incurred quarterly costs up to 23% lower than pre-2000 HAART initiators in 2010. Our results highlight the magnitude of the temporal changes in HAART costs and the disparities between recent and pre-HAART initiators. This methodology can improve the precision of economic modeling efforts by using detailed cost functions for annual, population-level medication costs according to the distribution of clients by clinical stage and era of treatment initiation.
Recommended Financial Plan for the Construction of a Permanent Campus for San Joaquin Delta College.
ERIC Educational Resources Information Center
Bortolazzo, Julio L.
The financial plan for the San Joaquin Delta College (California) permanent campus is presented in a table showing the gross square footage, the unit cost (including such fixed equipment as workbenches, laboratory tables, etc.), and the estimated total cost for each department. The unit costs per square foot vary from $18.00 for warehousing to…
Enhancing Least-Squares Finite Element Methods Through a Quantity-of-Interest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaudhry, Jehanzeb Hameed; Cyr, Eric C.; Liu, Kuo
2014-12-18
Here, we introduce an approach that augments least-squares finite element formulations with user-specified quantities-of-interest. The method incorporates the quantity-of-interest into the least-squares functional and inherits the global approximation properties of the standard formulation as well as increased resolution of the quantity-of-interest. We establish theoretical properties such as optimality and enhanced convergence under a set of general assumptions. Central to the approach is that it offers an element-level estimate of the error in the quantity-of-interest. As a result, we introduce an adaptive approach that yields efficient, adaptively refined approximations. Several numerical experiments for a range of situations are presented to support the theory and highlight the effectiveness of our methodology. Notably, the results show that the new approach is effective at improving the accuracy per total computational cost.
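The abstract does not reproduce the augmented functional; a plausible form, consistent with standard least-squares finite element practice (the weight w and the reference value q-tilde are assumptions, not taken from the paper), is

    J_w(u) = \| \mathcal{L}u - f \|^2 + w \, \big( Q(u) - \tilde{q} \big)^2,

where \mathcal{L} is the differential operator, f the data, and Q(\cdot) the user-specified quantity-of-interest; minimizing J_w retains the least-squares structure while biasing the approximation toward resolving Q.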
Characterization of HEM silicon for solar cells. [Heat Exchanger Method
NASA Technical Reports Server (NTRS)
Dumas, K. A.; Khattak, C. P.; Schmid, F.
1981-01-01
The Heat Exchanger Method (HEM) is a promising low-cost ingot casting process for material used for solar cells. This is the only method that is capable of casting single crystal ingots with a square cross section using a directional solidification technique. This paper describes the chemical, mechanical and electrical properties of the HEM silicon material as a function of position within the ingot.
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
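The square-root specification that does well here is simple to reproduce; below is a minimal numpy sketch of the "OLS on square-root-transformed cost" model on synthetic data (the data-generating process and variable names are illustrative assumptions, and the naive back-transform ignores retransformation bias):

import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 3))                        # illustrative risk adjusters
mu = np.exp(1.5 + x @ np.array([0.4, -0.2, 0.3]))  # positive, skewed mean cost
y = rng.gamma(shape=2.0, scale=mu / 2.0)           # synthetic cost outcome

# Square-root Normal model: OLS on sqrt(cost), predictions squared back.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, np.sqrt(y), rcond=None)
pred = (X @ coef) ** 2      # naive back-transform; ignores retransformation bias

print(np.sqrt(np.mean((y - pred) ** 2)))  # RMSE
print(np.mean(np.abs(y - pred)))          # mean absolute prediction error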
Choosing Models for Health Care Cost Analyses: Issues of Nonlinearity and Endogeneity
Garrido, Melissa M; Deb, Partha; Burgess, James F; Penrod, Joan D
2012-01-01
Objective To compare methods of analyzing endogenous treatment effect models for nonlinear outcomes and illustrate the impact of model specification on estimates of treatment effects such as health care costs. Data Sources Secondary data on cost and utilization for inpatients hospitalized in five Veterans Affairs acute care facilities in 2005–2006. Study Design We compare results from analyses with full information maximum simulated likelihood (FIMSL); control function (CF) approaches employing different types and functional forms for the residuals, including the special case of two-stage residual inclusion; and two-stage least squares (2SLS). As an example, we examine the effect of an inpatient palliative care (PC) consultation on direct costs of care per day. Data Collection/Extraction Methods We analyzed data for 3,389 inpatients with one or more life-limiting diseases. Principal Findings The distribution of average treatment effects on the treated and local average treatment effects of a PC consultation depended on model specification. CF and FIMSL estimates were more similar to each other than to 2SLS estimates. CF estimates were sensitive to choice and functional form of residual. Conclusions When modeling cost or other nonlinear data with endogeneity, one should be aware of the impact of model specification and treatment effect choice on results. PMID:22524165
Some tradeoffs in ingot shaping and price of solar photovoltaic modules
NASA Technical Reports Server (NTRS)
Daud, T.
1982-01-01
Growth of round ingots is cost-effective for sheets but leaves unused space when round cells are packed into a module. This reduces the packing efficiency, which approaches 95% for square cells, to about 78% and reduces the conversion efficiency of the module by the same ratio. Shaping these ingots into squares with regrowth of cut silicon improves the packing factor, but increases growth cost. The cost impact on solar cell modules was determined by considering shaping ingots in stages from full round to complete square. The sequence of module production with relevant price allocation guidelines is outlined. The severe penalties in add-on price due to increasing slice thickness and kerf are presented. Trade-offs between advantages of recycling silicon and shaping costs are developed for different slicing scenarios. It is shown that shaping results in cost saving of up to 21% for a 15 cm dia. ingot.
Dung, Van Than; Tjahjowidodo, Tegoeh
2017-01-01
B-spline functions are widely used in many industrial applications such as computer graphic representation, computer-aided design, computer-aided manufacturing, and computer numerical control. Recently there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps, or turning points in the sampled data. The most challenging task in these cases is identifying the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve with B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is then obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
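The paper's contribution is the two-step knot strategy itself; the final step, an ordinary least-squares B-spline fit for a fixed knot vector, can be sketched with SciPy (the uniform interior knots below are an illustrative stand-in for the paper's bisection-plus-refinement step):

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0, 1, 400)
y = np.sin(8 * np.pi * x) + 0.05 * np.random.default_rng(1).normal(size=x.size)

# Interior knots: in the paper these come from bisection plus nonlinear
# refinement; a uniform grid stands in for that step here.
t = np.linspace(0, 1, 18)[1:-1]

spline = LSQUnivariateSpline(x, y, t, k=3)   # cubic least-squares B-spline fit
print(np.sqrt(np.mean((spline(x) - y) ** 2)))  # RMS residual of the fit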
An economic systems analysis of land mobile radio telephone services
NASA Technical Reports Server (NTRS)
Leroy, B. E.; Stevenson, S. M.
1980-01-01
This paper deals with the economic interaction of the terrestrial and satellite land-mobile radio service systems. The cellular, trunked and satellite land-mobile systems are described. Parametric equations are formulated to allow examination of necessary user thresholds and growth rates as functions of system costs. Conversely, first order allowable systems costs are found as a function of user thresholds and growth rates. Transitions between satellite and terrestrial service systems are examined. User growth rate density (users/year/km²) is shown to be a key parameter in the analysis of systems compatibility. The concept of system design matching the price demand curves is introduced and examples are given. The role of satellite systems is critically examined and the economic conditions necessary for the introduction of satellite service are identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N₄. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.
Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry
2011-10-01
An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results might suggest that movements are not the result of minimizing a single cost function but rather a composite one. In order to better clarify this last point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing the combination of two cost functions, namely a mix between the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.
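One plausible way to write the winning composite criterion, mixing the absolute work of torques with integrated squared joint acceleration (the weight \alpha and the normalization are assumptions, not values from the paper):

    J(q) = \alpha \int_0^T \sum_i \lvert \tau_i(t)\,\dot{q}_i(t) \rvert \, dt + (1-\alpha) \int_0^T \lVert \ddot{q}(t) \rVert^2 \, dt, \qquad 0 \le \alpha \le 1,

where \tau_i are joint torques and q the joint angles; the inverse optimal control step then amounts to estimating \alpha from the recorded trajectories.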
Digital robust active control law synthesis for large order systems using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1987-01-01
This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints, with respect to the digital control law design variables, are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root-mean-square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
The Economic Cost of Communicable Disease Surveillance in Local Public Health Agencies.
Atherly, Adam; Whittington, Melanie; VanRaemdonck, Lisa; Lampe, Sarah
2017-12-01
We identify economic costs associated with communicable disease (CD) monitoring/surveillance in Colorado local public health agencies and identify possible economies of scale. Data were collected via a survey of local public health employees engaged in CD work. Survey respondents logged time spent on CD surveillance for 2-week periods in the spring of 2014 and fall of 2014. Forty-three of the 54 local public health agencies in Colorado participated. We used a microcosting approach. We estimated a statistical cost function using cost as a function of the number of reported investigable diseases during the matched 2-week period. We also controlled for other independent variables, including case mix, characteristics of the agency, the community, and services provided. Data were collected from a microcosting survey using time logs. Costs increased at a decreasing rate as cases increased, with both cases (β = 431.5, p < .001) and cases squared (β = -3.62, p = .05) statistically significant. The results of the model suggest economies of scale. Cost per unit is estimated to be one-third lower for high-volume agencies as compared to low-volume agencies. Cost savings could potentially be achieved if smaller agencies shared services. © Health Research and Educational Trust.
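The economies-of-scale conclusion follows from the sign pattern (positive on cases, negative on cases squared); a minimal numpy sketch of such a statistical cost function on synthetic data (all numbers are illustrative, not the survey's):

import numpy as np

rng = np.random.default_rng(2)
cases = rng.integers(1, 61, size=200).astype(float)   # cases per 2-week period
# Synthetic costs rising at a decreasing rate, echoing the reported signs.
cost = 400.0 * cases - 3.0 * cases**2 + rng.normal(scale=2000.0, size=cases.size)

X = np.column_stack([np.ones_like(cases), cases, cases**2])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
print(beta)   # expect a positive coefficient on cases, negative on cases squared

# Implied average cost per case falls with volume (economies of scale):
for n_cases in (10, 30, 60):
    total = beta[0] + beta[1] * n_cases + beta[2] * n_cases**2
    print(n_cases, total / n_cases)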
Maciejewski, Matthew L; Liu, Chuan-Fen; Fihn, Stephan D
2009-01-01
To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R(2) statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. Administrative data-based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed.
The quality estimation of exterior wall’s and window filling’s construction design
NASA Astrophysics Data System (ADS)
Saltykov, Ivan; Bovsunovskaya, Maria
2017-10-01
The article introduces the term “artificial envelope” in residential construction. The authors offer a complex multifactorial approach to estimating the design quality of external enclosing structures, based on the impact of several parameters: functional, operational, cost, and environmental. The design quality index Qк is introduced as a complex characteristic of the parameters above; the mathematical relation of this index to these parameters is the target function for design quality estimation. As an example, the article shows the search for the optimal wall and window designs in small, medium, and large dwelling premises of economy-class buildings. Graphs of the single-parameter target functions are given for the three room sizes. From this example, window opening dimensions are chosen that make the wall and window constructions properly satisfy the stated complex requirements. The authors compare the window area recommended by building standards with the area found from the optimal value of the design quality index. The multifactorial approach to searching for an optimal design described in this article can be applied to various structural elements of residential buildings, taking into account the climatic, social, and economic features of the construction area.
Examining variation in treatment costs: a cost function for outpatient methadone treatment programs.
Dunlap, Laura J; Zarkin, Gary A; Cowell, Alexander J
2008-06-01
To estimate a hybrid cost function of the relationship between total annual cost for outpatient methadone treatment and output (annual patient days and selected services), input prices (wages and building space costs), and selected program and patient case-mix characteristics. Data are from a multistate study of 159 methadone treatment programs that participated in the Center for Substance Abuse Treatment's Evaluation of the Methadone/LAAM Treatment Program Accreditation Project between 1998 and 2000. Using least squares regression for weighted data, we estimate the relationship between total annual costs and selected output measures, wages, building space costs, and selected program and patient case-mix characteristics. Findings indicate that total annual cost is positively associated with program's annual patient days, with a 10 percent increase in patient days associated with an 8.2 percent increase in total cost. Total annual cost also increases with counselor wages (p<.01), but no significant association is found for nurse wages or monthly building costs. Surprisingly, program characteristics and patient case mix variables do not appear to explain variations in methadone treatment costs. Similar results are found for a model with services as outputs. This study provides important new insights into the determinants of methadone treatment costs. Our findings concur with economic theory in that total annual cost is positively related to counselor wages. However, among our factor inputs, counselor wages are the only significant driver of these costs. Furthermore, our findings suggest that methadone programs may realize economies of scale; however, other important factors, such as patient access, should be considered.
How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods
2007-08-01
[Report contents include: Attack Trees for Modeling and Analysis; Misuse and Abuse Cases; Formal Methods; Software Cost Reduction.] ... modern or efficient techniques. Requirements analysis typically is either not performed at all (identified requirements are directly specified without any analysis or modeling) or is restricted to functional requirements, ignoring quality requirements and other nonfunctional requirements.
Nonlinear least-squares data fitting in Excel spreadsheets.
Kemmer, Gerdi; Keller, Sandro
2010-02-01
We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
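For readers outside Excel, the same protocol maps directly onto SciPy's nonlinear least squares; a sketch using the Michaelis-Menten example (the data points and start values here are invented for illustration):

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

s = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)  # substrate concentration
v = np.array([0.9, 1.6, 3.0, 4.2, 5.0, 5.7, 5.9])       # measured rates

# curve_fit minimizes the sum of squared residuals, as Solver does in Excel.
popt, pcov = curve_fit(michaelis_menten, s, v, p0=[5.0, 5.0])
perr = np.sqrt(np.diag(pcov))                            # 1-sigma parameter errors
print(popt, perr)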
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
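With a simple linear cost model C(n) = F + cn (F fixed cost, c per-subject cost; both assumed here), the rule "minimize total cost divided by the square root of sample size" has the closed form n* = F/c, since setting d/dn[(F + cn)n^(-1/2)] = 0 gives cn = F. A quick numerical check:

import numpy as np

F, c = 200_000.0, 400.0             # assumed fixed and per-subject costs
n = np.arange(1, 5001, dtype=float)
objective = (F + c * n) / np.sqrt(n)

n_star = n[np.argmin(objective)]
print(n_star, F / c)                # grid minimum matches n* = F/c = 500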
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
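In the notation standard for this problem (the abstract fixes no symbols, so J and b below are assumptions), the random cost function is

    H(\mathbf{x}) = \frac{1}{2}\,\mathbf{x}^{\mathsf{T}} J \,\mathbf{x} + \mathbf{b}^{\mathsf{T}} \mathbf{x}, \qquad \lVert \mathbf{x} \rVert^2 = N,

with J a random symmetric (GOE-like) matrix playing the role of the spin-glass couplings and b the random magnetic field; the two scaling regimes correspond to how the magnitude of b scales with N.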
NASA Astrophysics Data System (ADS)
Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang
2016-10-01
In order to achieve higher measurement accuracy for routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by a sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can effectively enhance the measurement accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The experimental results show that the novel method significantly improves the measurement accuracy of resistance without increasing the system cost or circuit complexity, which makes it valuable for application in electronic instruments.
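The oversampling gain is the familiar square-root averaging effect; a minimal numpy demonstration under a white-noise assumption (the paper's 3.75x figure need not correspond exactly to this idealization):

import numpy as np

rng = np.random.default_rng(3)
true_value = 1.0
noise_rms = 0.01

single = true_value + rng.normal(scale=noise_rms, size=100_000)
oversampled = single.reshape(-1, 16).mean(axis=1)  # average 16x oversampled readings

print(np.std(single), np.std(oversampled))         # RMS noise ratio ~ sqrt(16) = 4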
A grid layout algorithm for automatic drawing of biochemical networks.
Li, Weijiang; Kurata, Hiroyuki
2005-05-01
Visualization is indispensable in the research of complex biochemical networks. Available graph layout algorithms are not adequate for satisfactorily drawing such networks. New methods are required to visualize automatically the topological architectures and facilitate the understanding of the functions of the networks. We propose a novel layout algorithm to draw complex biochemical networks. A network is modeled as a system of interacting nodes on squared grids. A discrete cost function between each node pair is designed based on the topological relation and the geometric positions of the two nodes. The layouts are produced by minimizing the total cost. We design a fast algorithm to minimize the discrete cost function, by which candidate layouts can be produced efficiently. A simulated annealing procedure is used to choose better candidates. Our algorithm demonstrates its ability to exhibit cluster structures clearly in relatively compact layout areas without any prior knowledge. We developed Windows software to implement the algorithm for CADLIVE. All materials can be freely downloaded from http://kurata21.bio.kyutech.ac.jp/grid/grid_layout.htm; http://www.cadlive.jp/
Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng
2012-12-01
This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to changes in canopy reflectance, considering the effects of each parameter on the wavelength regions of canopy reflectance, and selected three vegetation indices as the optimization comparison targets of the cost function. The Cab, Cw, and LAI were then estimated based on the particle swarm optimization algorithm and the PROSPECT + SAIL model. The results showed that retrieval with vegetation indices as the optimization comparison targets of the cost function performed better than retrieval using the full spectral reflectance. The correlation coefficients (R2) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 microg x cm(-2), 0.001 g x cm(-2), and 0.08, respectively. This suggests that adopting vegetation indices as the optimization comparison targets of the cost function can effectively improve the efficiency and precision of retrieving biochemical parameters based on the PROSPECT + SAIL model.
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe
2016-07-28
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
From Data to Images: A Shape-Based Approach for Fluorescence Tomography
NASA Astrophysics Data System (ADS)
Dorn, O.; Prieto, K. E.
2012-12-01
Fluorescence tomography is treated as a shape reconstruction problem for a coupled system of two linear transport equations in 2D. The shape evolution is designed in order to minimize the least squares data misfit cost functional either in the excitation frequency or in the emission frequency. Furthermore, a level set technique is employed for numerically modelling the evolving shapes. Numerical results are presented which demonstrate the performance of this novel technique in the situation of noisy simulated data in 2D.
NASA Astrophysics Data System (ADS)
Boldea, M.; Sala, F.
2010-09-01
We assume that the mathematical relation between agricultural production f(x, y) and the two types of fertilizers x and y is given by function (1). The coefficients that appear are determined using the least squares method by comparison with the experimental data. We took into consideration the following economic indicators: absolute benefit, relative benefit, profitability, and cost price. These are maximized or minimized, and the optimal solutions are obtained by setting the partial derivatives to zero.
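Function (1) is not reproduced in the abstract; a common quadratic response surface for two fertilizers, with the absolute benefit maximized by annulling the partial derivatives, would read (the coefficients a_i, crop price p, and fertilizer costs c_1, c_2 are placeholders):

    f(x,y) = a_0 + a_1 x + a_2 y + a_3 x^2 + a_4 y^2 + a_5 x y,
    B(x,y) = p\,f(x,y) - c_1 x - c_2 y, \qquad \frac{\partial B}{\partial x} = \frac{\partial B}{\partial y} = 0,

which reduces to a 2x2 linear system whenever f is quadratic.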
Using Sensor-based Demand Controlled Ventilation to Realize Energy Savings in Laboratories
2014-03-27
... is warranted. The results show that a DCV system is life-cycle cost-effective for many different HVAC system total pressure and square footage ... [List-of-tables fragments: Name and Description of System Sensors; BEL Laboratory HVAC Zones, Square Footage; Range of USAF Laboratory Square Footage and Occupancy.]
Rabani, Amir
2016-01-01
The market for process instruments generally requires low cost devices that are robust, small in size, portable, and usable in-plant. Ultrasonic torsional guided wave sensors have received much attention by researchers for measurement of viscosity and/or density of fluids in recent years. The supporting electronic systems for these sensors providing many different settings of sine-wave signals are bulky and expensive. In contrast, a system based on bursts of square waves instead of sine waves would have a considerable advantage in that respect and could be built using simple integrated circuits at a cost that is orders of magnitude lower than for a windowed sine wave device. This paper explores the possibility of using square wave bursts as the driving signal source for the ultrasonic torsional guided wave viscosity sensor. A simple design of a compact and fully automatic analogue square wave front-end for the sensor is also proposed. The successful operation of the system is demonstrated by using the sensor for measuring the viscosity in a representative fluid. This work provides the basis for design and manufacture of low cost compact standalone ultrasonic guided wave sensors and enlightens the possibility of using coded excitation techniques utilising square wave sequences in such applications. PMID:27754324
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data was obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
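The core idea, reading the time delay off the slope of the cross-power spectrum phase, can be sketched non-adaptively with SciPy (this omits the paper's iterative gradient update and the standard-error term in its cost function; signal parameters are illustrative):

import numpy as np
from scipy.signal import csd

fs = 10_000.0
rng = np.random.default_rng(4)
x = rng.normal(size=100_000)
delay_samples = 37
y = np.roll(x, delay_samples) + 0.3 * rng.normal(size=x.size)  # delayed, noisy copy

f, pxy = csd(x, y, fs=fs, nperseg=2048)      # Welch cross power spectral density
phase = np.unwrap(np.angle(pxy))

# For y(t) = x(t - tau), the phase is phi(f) = -2*pi*f*tau, so fit its slope.
band = f < fs / 4                             # fit where coherence is high
slope = np.polyfit(f[band], phase[band], 1)[0]
tau = -slope / (2 * np.pi)
print(tau * fs, delay_samples)                # recovered delay in samples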
Active control of the forced and transient response of a finite beam. M.S. Thesis
NASA Technical Reports Server (NTRS)
Post, John T.
1990-01-01
Structural vibrations from a point force are modelled on a finite beam. This research explores the theoretical limit on controlling beam vibrations utilizing another point source as an active controller. Three different types of excitation are considered: harmonic, random, and transient. For harmonic excitation, control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam. Control over a subregion may be obtained even between resonant frequencies at the cost of increasing the vibration outside of the control region. For random excitation, integrating the expected value of the displacement squared over the required interval is shown to yield the identical cost function as obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. The form of the controller is specified as either one or two delayed pulses, thus constraining the controller to be causal. The best possible control is examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses.
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2017-04-01
Ionospheric tomography is a very cost-effective method that is frequently used to model electron density distributions. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based ionospheric tomography. Because a wavelet neural network (WNN) with a back-propagation (BP) algorithm is used within the RMTNN method, the new method is named modified RMTNN (MRMTNN). To train the WNN with the BP algorithm, two cost functions are defined: total and vertical cost functions. Using minimization of these cost functions, temporal and spatial ionospheric variations are studied. The GPS measurements of the international GNSS service (IGS) in central Europe have been used for constructing a 3-D image of the electron density. Three days (2009.04.15, 2011.07.20 and 2013.06.01) with different solar activity indices are used for the processing. To validate and better assess the reliability of the proposed method, 4 ionosonde and 3 testing stations have been used. The results of MRMTNN have also been compared to those of the RMTNN method, the international reference ionosphere model 2012 (IRI-2012), and the spherical cap harmonic (SCH) method as a local ionospheric model. The comparison shows that the root mean square error (RMSE) and standard deviation of the proposed approach are superior to those of the traditional methods.
Low-cost solar array structure development
NASA Astrophysics Data System (ADS)
Wilson, A. H.
1981-06-01
Early studies of flat-plate arrays have projected costs on the order of $50/square meter for installed array support structures. This report describes an optimized low-cost frame-truss structure that is estimated to cost below $25/square meter, including all markups, shipping, and installation. The structure utilizes a planar frame made of members formed from light-gauge galvanized steel sheet and is supported in the field by treated-wood trusses that are partially buried in trenches. The buried trusses use the overburden soil to carry uplift wind loads and thus obviate reinforced-concrete foundations. Details of the concept, including design rationale, fabrication and assembly experience, structural testing, and fabrication drawings are included.
Low-cost solar array structure development
NASA Technical Reports Server (NTRS)
Wilson, A. H.
1981-01-01
Early studies of flat-plate arrays have projected costs on the order of $50/square meter for installed array support structures. This report describes an optimized low-cost frame-truss structure that is estimated to cost below $25/square meter, including all markups, shipping and installation. The structure utilizes a planar frame made of members formed from light-gauge galvanized steel sheet and is supported in the field by treated-wood trusses that are partially buried in trenches. The buried trusses use the overburden soil to carry uplift wind loads and thus obviate reinforced-concrete foundations. Details of the concept, including design rationale, fabrication and assembly experience, structural testing and fabrication drawings, are included.
Incremental cost of postacute care in nursing homes.
Spector, William D; Limcangco, Maria Rhona; Ladd, Heather; Mukamel, Dana
2011-02-01
To determine whether the case mix index (CMI) based on the 53-Resource Utilization Groups (RUGs) captures all the cross-sectional variation in nursing home (NH) costs or whether NHs that have a higher percent of Medicare skilled care days (%SKILLED) have additional costs. DATA AND SAMPLE: Nine hundred and eighty-eight NHs in California in 2005. Data are from Medicaid cost reports, the Minimum Data Set, and the Economic Census. We estimate hybrid cost functions, which include, in addition to outputs, case mix, ownership, wages, and %SKILLED. Two-stage least-squares (2SLS) analysis was used to deal with the potential endogeneity of %SKILLED and CMI. On average, 11 percent of NH days were due to skilled care. Based on the 2SLS model, %SKILLED is associated with costs even when controlling for CMI. The marginal cost of a one percentage point increase in %SKILLED is estimated at U.S.$70,474, or about 1.2 percent of annual costs for the average cost facility. Subanalyses show that the increase in costs is mainly due to additional expenses for nontherapy ancillaries and rehabilitation. The 53-RUGs case mix does not account completely for all the variation in actual costs of care for postacute patients in NHs. © Health Research and Educational Trust.
Incremental Cost of Postacute Care in Nursing Homes
Spector, William D; Limcangco, Maria Rhona; Ladd, Heather; Mukamel, Dana A
2011-01-01
Objectives To determine whether the case mix index (CMI) based on the 53-Resource Utilization Groups (RUGs) captures all the cross-sectional variation in nursing home (NH) costs or whether NHs that have a higher percent of Medicare skilled care days (%SKILLED) have additional costs. Data and Sample Nine hundred and eighty-eight NHs in California in 2005. Data are from Medicaid cost reports, the Minimum Data Set, and the Economic Census. Research Design We estimate hybrid cost functions, which include, in addition to outputs, case mix, ownership, wages, and %SKILLED. Two-stage least-squares (2SLS) analysis was used to deal with the potential endogeneity of %SKILLED and CMI. Results On average, 11 percent of NH days were due to skilled care. Based on the 2SLS model, %SKILLED is associated with costs even when controlling for CMI. The marginal cost of a one percentage point increase in %SKILLED is estimated at U.S.$70,474, or about 1.2 percent of annual costs for the average cost facility. Subanalyses show that the increase in costs is mainly due to additional expenses for nontherapy ancillaries and rehabilitation. Conclusion The 53-RUGs case mix does not account completely for all the variation in actual costs of care for postacute patients in NHs. PMID:21029085
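The two-stage estimator described above is straightforward to reproduce. The sketch below runs a textbook 2SLS on synthetic data with numpy; the instruments z and the single endogenous regressor are illustrative stand-ins, not the study's actual specification.

```python
# Hedged sketch of two-stage least squares on synthetic data; the instruments
# and variable names (z, skilled, cost) are illustrative, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n = 988
z = rng.normal(size=(n, 2))                       # instruments
u = rng.normal(size=n)                            # unobserved confounder
skilled = 0.5 * z[:, 0] + u + rng.normal(size=n)  # endogenous regressor
cost = 2.0 + 1.2 * skilled + u + rng.normal(size=n)

# Stage 1: regress the endogenous variable on the instruments.
Z = np.column_stack([np.ones(n), z])
skilled_hat = Z @ np.linalg.lstsq(Z, skilled, rcond=None)[0]

# Stage 2: OLS of cost on the stage-1 fitted values.
X2 = np.column_stack([np.ones(n), skilled_hat])
beta = np.linalg.lstsq(X2, cost, rcond=None)[0]
print("2SLS effect estimate:", beta[1])   # close to the true 1.2
```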
Cost-Sharing of Ecological Construction Based on Trapezoidal Intuitionistic Fuzzy Cooperative Games.
Liu, Jiacai; Zhao, Wenjian
2016-11-08
There exist some fuzziness and uncertainty in the process of ecological construction. The aim of this paper is to develop a direct and effective simplified method for obtaining the cost-sharing scheme when some interested parties form a cooperative coalition to improve the ecological environment of the Min River together. Firstly, we propose the solution concept of the least square prenucleolus of cooperative games with coalition values expressed by trapezoidal intuitionistic fuzzy numbers. Then, based on the square of the distance in numerical value between two trapezoidal intuitionistic fuzzy numbers, we establish a corresponding quadratic programming model to obtain the least square prenucleolus, which can effectively avoid the information distortion and uncertainty enlargement brought about by the subtraction of trapezoidal intuitionistic fuzzy numbers. Finally, we give a numerical example about the cost-sharing of ecological construction in Fujian Province in China to show the validity, applicability, and advantages of the proposed model and method.
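As a concrete illustration of the quadratic-programming route to the least square prenucleolus, the sketch below solves a small crisp game, with plain numbers standing in for the paper's trapezoidal intuitionistic fuzzy coalition values: minimize the sum of squared excesses subject to efficiency.

```python
# Hedged sketch: least square prenucleolus of a small *crisp* game (plain
# numbers stand in for trapezoidal intuitionistic fuzzy coalition values).
import itertools
import numpy as np
from scipy.optimize import minimize

players = (0, 1, 2)
v = {(0,): 2.0, (1,): 3.0, (2,): 4.0,
     (0, 1): 8.0, (0, 2): 9.0, (1, 2): 10.0,
     (0, 1, 2): 18.0}                  # illustrative coalition values

# All proper nonempty coalitions; the grand coalition is fixed by efficiency.
coalitions = [S for r in (1, 2) for S in itertools.combinations(players, r)]

def sum_sq_excess(x):                  # sum over S of (v(S) - x(S))^2
    return sum((v[S] - sum(x[i] for i in S)) ** 2 for S in coalitions)

efficiency = {"type": "eq", "fun": lambda x: x.sum() - v[(0, 1, 2)]}
res = minimize(sum_sq_excess, np.full(3, 6.0), constraints=[efficiency])
print("least square prenucleolus:", res.x)   # (5, 6, 7) for these values
```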
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
Time-domain least-squares migration using the Gaussian beam summation method
NASA Astrophysics Data System (ADS)
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-07-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
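The inversion loop the two records describe (L2 misfit, L1 regularization, approximate diagonal Hessian preconditioning) has the generic shape of a preconditioned iterative-shrinkage iteration. The sketch below runs that loop on a toy problem where a random matrix stands in for the linearized Gaussian-beam modeling operator; the Gershgorin-type diagonal majorizer is an assumption chosen so the toy iteration provably descends, not the authors' Hessian approximation.

```python
# Hedged sketch of an L1-regularized, diagonally preconditioned least-squares
# inversion loop. A random matrix stands in for the linearized Gaussian-beam
# operator; the Gershgorin diagonal majorizer is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(200, 80))            # toy Born modeling operator
m_true = np.zeros(80)
m_true[[10, 40, 60]] = [1.0, -0.5, 0.8]   # sparse reflectivity
d = L @ m_true                            # observed data

A = L.T @ L
D = np.abs(A).sum(axis=0)                 # diagonal majorizer of the Hessian
step, lam = 1.0 / D, 1e-2                 # per-coordinate steps, L1 weight

m = np.zeros(80)
for _ in range(500):
    g = L.T @ (L @ m - d)                 # gradient of the L2 data misfit
    m = m - step * g                      # preconditioned gradient step
    m = np.sign(m) * np.maximum(np.abs(m) - step * lam, 0.0)  # L1 shrinkage
print("model error:", np.linalg.norm(m - m_true))
```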
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... available for public inspection at U.S. Environmental Protection Agency, Region I, 5 Post Office Square..., 5 Post Office Square, Suite 100 (Mailcode: ORA18-1), Boston, Massachusetts 02109-3912 and should... Square, Suite 100 (OES04-3), Boston, MA 02109-2023, (617) 918-1438. Dated: January 6, 2010. James T...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-03
... available for public inspection at 5 Post Office Square, Boston, MA 02109-3912. DATES: Comments must be....S. Environmental Protection Agency, Region I, 5 Post Office Square, Suite 100, Mailcode ORA 18-1... Square, Suite 100, Mailcode OES 04-2, Boston, Massachusetts 02109-3912, (617) 918-1884. Dated: May 18...
75 FR 21292 - Proposed CERCLA Administrative Cost Recovery Settlement Agreement; AVX Corporation
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-23
... public inspection at 5 Post Office Square, Suite 100, Boston, MA 02109. DATES: Comments must be submitted... Office Square, Suite 100, Mailcode LIB01-2, Boston, MA 02109-3912, by appointment, (617) 918- 1440..., 5 Post Office Square, Suite 100, Mailcode ORA18-1, Boston, MA 02109-3912 and should refer to: In re...
Heuristic-driven graph wavelet modeling of complex terrain
NASA Astrophysics Data System (ADS)
Cioacǎ, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai-Sorin; Pǎtru-Stupariu, Ileana; Nǎpǎrus, Magdalena; Stoicescu, Ioana; Peringer, Alexander; Buttler, Alexandre; Golay, François
2015-03-01
We present a novel method for building a multi-resolution representation of large digital surface models. The surface points coincide with the nodes of a planar graph which can be processed using a critically sampled, invertible lifting scheme. To drive the lazy wavelet node partitioning, we employ an attribute-aware cost function based on the generalized quadric error metric. The resulting algorithm can be applied to multivariate data by storing additional attributes at the graph's nodes. We discuss how the cost computation mechanism can be coupled with the lifting scheme and examine the results by evaluating the root mean square error. The algorithm is experimentally tested using two multivariate LiDAR sets representing terrain surface and vegetation structure with different sampling densities.
System identification and model reduction using modulating function techniques
NASA Technical Reports Server (NTRS)
Shen, Yan
1993-01-01
Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended into MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high-noise rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.
A Study of the Thermal Environment Developed by a Traveling Slipper at High Velocity
2013-03-01
Power Partition Function: the next partition function takes the same formulation as the powered function, but now the exponent is squared. … the function, and note the squared term in the exponent (Equations 4.27, 4.36). Thus far the three partition functions each give a predicted … It was hypothesized that the function would fall somewhere between the first exponential decay function and the power function; however, by squaring the exponent …
Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT
Nguyen, Thu L. N.; Shin, Yoan
2016-01-01
Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank Euclidean distance matrix completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained in our scheme achieves a lower complexity and can perform better if we use it as an initial guess for an iterative local search of another higher-precision localization scheme. Simulation results show the effectiveness of our approach. PMID:27213378
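To make the squared-distance formulation concrete, here is a minimal sketch (not the paper's Newton scheme) that fits unknown sensor positions to the observed entries of a squared Euclidean distance matrix by direct minimization of an s-stress-style cost; the network size, anchor count, and near-truth initialization are all illustrative assumptions.

```python
# Hedged sketch: fit sensor positions to observed entries of a squared
# Euclidean distance matrix, with a few known anchors and a near-truth
# initialization to keep the toy problem well behaved.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
pos = rng.uniform(0, 1, size=(12, 2))          # true 2-D positions
n_anchor = 4                                   # first four nodes are known
D2 = np.sum((pos[:, None] - pos[None, :]) ** 2, axis=-1)
observed = np.triu(rng.uniform(size=D2.shape) < 0.7, 1)  # known entries

def cost(x):
    p = np.vstack([pos[:n_anchor], x.reshape(-1, 2)])
    E2 = np.sum((p[:, None] - p[None, :]) ** 2, axis=-1)
    return np.sum((E2 - D2)[observed] ** 2)    # misfit on observed entries

x0 = (pos[n_anchor:] + 0.05 * rng.normal(size=(8, 2))).ravel()
res = minimize(cost, x0, method="BFGS")
err = np.abs(res.x.reshape(-1, 2) - pos[n_anchor:]).max()
print("final cost:", res.fun, "max position error:", err)
```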
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To efficiently address this difficulty, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computation complexity.
Landslide risk in the San Francisco Bay region
Coe, J.A.; Crovelli, R.A.
2008-01-01
We have used historical records of damaging landslides triggered by rainstorms, and a newly developed Probabilistic Landslide Assessment Cost Estimation System (PLACES), to estimate the numbers and direct costs of future landslides in the San Francisco Bay region. The estimated annual cost of future landslides in the entire region is about US $15 million (year 2000 $). The estimated annual cost is highest for San Mateo County ($3.32 million) and lowest for Solano County ($0.18 million). Normalizing costs by dividing by the percentage of land area with slopes equal to or greater than about 10° indicates that San Francisco County will have the highest cost per square km ($7,400), whereas Santa Clara County will have the lowest cost per square km ($230). These results indicate that the San Francisco Bay region has one of the highest levels of landslide risk in the United States. Compared to landslide cost estimates from the rest of the world, the risk level in the Bay region seems high, but not exceptionally high.
7 CFR 3550.117 - WWD grant purposes.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Construction and/or partitioning off a portion of the dwelling for a bathroom, not to exceed 4.6 square meters (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells...
7 CFR 3550.117 - WWD grant purposes.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Construction and/or partitioning off a portion of the dwelling for a bathroom, not to exceed 4.6 square meters (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells...
Sail Plan Configuration Optimization for a Modern Clipper Ship
NASA Astrophysics Data System (ADS)
Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz
2002-11-01
We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular arc cross sections. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid in a large range of wind conditions and therefore a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results to those obtained previously by user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving force maximization, ratio of driving force to heeling force maximization).
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
....S. EPA Region 1, 5 Post Office Square, Suite 100, Boston, MA 02109-3912. DATES: Comments must be... at U.S. EPA Region 1, OSRR Records and Information Center, 5 Post Office Square, Suite 100, Mailcode... be addressed to Audrey Zucker, U.S. EPA Region 1, 5 Post Office Square, Suite 100, Mailcode OES04-2...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-12
... received will be available for public inspection at 5 Post Office Square, Boston, MA 02109-3912. DATES... Sherman, Senior Enforcement Counsel, U.S. Environmental Protection Agency, 5 Post Office Square, Suite 100... Protection Agency, 5 Post Office Square, Suite 100 (OES04-3), Boston, MA 02109-3912 (Telephone No. 617-918...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-26
... inspection at 5 Post Office Square, Boston, Massachusetts 02109- 3912. DATES: Comments must be submitted on... Enforcement Counsel, U.S. Environmental Protection Agency, Region 1, 5 Post Office Square, Suite 100 (OES 04-3... Agency, Region 1, 5 Post Office Square, Suite 100 (OES 04-3), Boston, MA 02109-3912 (telephone no. (617...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-04
... public inspection at 5 Post Office Square, Suite 100, Boston, MA 02109-3912. DATES: Comments must be... Counsel, U.S. Environmental Protection Agency, Region I, 5 Post Office Square, Suite 100 (OES04-1), Boston..., Region I, 5 Post Office Square, Suite 100, (OES04-1), Boston, Massachusetts 02109-3912 (Telephone No. 617...
ERIC Educational Resources Information Center
Agron, Joe
2001-01-01
Presents comparative data on college and university residence hall construction projects, including construction costs, size, numbers of students, and cost per square foot. Also provides comparative housing cost data over the past decade and identifies types of amenities being added to dormitories. (GR)
Code-modulated interferometric imaging system using phased arrays
NASA Astrophysics Data System (ADS)
Chauhan, Vikas; Greene, Kevin; Floyd, Brian
2016-05-01
Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promise to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
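The squaring-then-demultiplexing arithmetic is easy to verify numerically. The toy sketch below uses rows of a Sylvester Hadamard matrix as the orthogonal codes (so the elementwise product of two code rows is a third orthogonal row), squares the power-combined signal, and recovers one pairwise correlation; the three constant element signals are an illustrative stand-in for slowly varying antenna outputs.

```python
# Hedged toy of code-modulated interferometry with Sylvester Hadamard codes:
# one pairwise correlation survives demultiplexing of the squared sum.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                      # H[i] * H[j] (elementwise) = H[i ^ j]
codes = H[[1, 2, 4]]                 # codes for 3 elements (skip all-ones row)
s = np.array([0.9, -0.4, 0.7])       # element signals over one code period

y = codes.T @ s                      # code-modulated, combined signal (8 chips)
y2 = y ** 2                          # square-law detection

prod_code = H[1] * H[2]              # = H[3]; selects the (1,2) cross term
vis_12 = (y2 @ prod_code) / (2 * len(prod_code))
print(vis_12, "vs true product", s[0] * s[1])
```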
Improved FCG-1 cell technology
NASA Astrophysics Data System (ADS)
Breault, R. D.; Congdon, J. V.; Coykendall, R. D.; Luoma, W. L.
1980-10-01
Fuel cell performance in the ribbed substrate cell configuration consistent with that projected for a commercial power plant is demonstrated. Tests were conducted on subscale cells and on two 20 cell stacks of 4.8 MW demonstrator size cell components. These tests evaluated cell stack materials, processes, components, and assembly configurations. The first task was to conduct a component development effort to introduce improvements in 3.7 square foot, ribbed substrate acid cell repeating parts which represented advances in performance, function, life, and lower cost for application in higher pressure and temperature power plants. Specific areas of change were the electrode substrate, catalyst, matrix, seals, separator plates, and coolers. Full sized ribbed substrate stack components incorporating more stable materials were evaluated at increased pressure (93 psia) and temperature (405 F) conditions. Two 20 cell stacks with a 3.7 square feet, ribbed substrate cell configuration were tested.
ERIC Educational Resources Information Center
Dempsey, William M.
1997-01-01
A Rochester Institute of Technology (New York) program costing model designed to reflect costs more accurately allocates indirect costs according to salaries and wages, modified total direct costs, square footage of space used, credit hours, and student and faculty full-time equivalents. It allows administrators to make relative value judgments…
Numerical scheme approximating solution and parameters in a beam equation
NASA Astrophysics Data System (ADS)
Ferdinand, Robert R.
2003-12-01
We present a mathematical model which describes vibration in a metallic beam about its equilibrium position. This model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse method procedure which involves the minimization of a least-squares cost functional. Numerical results are presented and future work to be done is discussed.
Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
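A minimal sketch of that majorization idea for simple linear regression: each iteration builds "working data" from the weights and the current fit, then takes a plain OLS step, and the iterates converge to the direct WLS solution. Data and weights below are synthetic.

```python
# Hedged sketch of WLS-by-iterated-OLS majorization for linear regression;
# the fixed point of the iteration is the direct WLS solution.
import numpy as np

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
w = rng.uniform(0.1, 1.0, size=50)            # observation weights
wmax = w.max()

beta = np.zeros(2)
for _ in range(300):
    z = (w / wmax) * y + (1 - w / wmax) * (X @ beta)   # working data
    beta = np.linalg.lstsq(X, z, rcond=None)[0]        # ordinary LS step

beta_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print(beta, "vs direct WLS", beta_wls)
```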
Effect of increasing energy cost on arm coordination in elite sprint swimmers.
Komar, J; Leprêtre, P M; Alberty, M; Vantorre, J; Fernandes, R J; Hellard, P; Chollet, D; Seifert, L
2012-06-01
The purpose of this study was to analyze the changes in stroke parameters, motor organization and swimming efficiency with increasing energy cost in aquatic locomotion. Seven elite sprint swimmers performed a 6×300-m incremental swimming test. Stroke parameters (speed, stroke rate and stroke length), motor organization (arm stroke phases and arm coordination index), swimming efficiency (swimming speed squared and hand speed squared) and stroke index were calculated from aerial and underwater side-view cameras. The energy cost of locomotion was assessed by measuring oxygen consumption and blood lactate. Results showed that the increase in energy cost of locomotion was correlated with an increase in the index of coordination and stroke rate, and a decrease in stroke length (p<.05). Furthermore, indicators of swimming efficiency and stroke index did not change significantly with the speed increments (p>.05), indicating that swimmers did not decrease their efficiency despite the increase in energy cost. In parallel, an increase in the index of coordination (IdC) and stroke rate were observed, along with a decrease in stroke length, stroke index and hand speed squared with each increment, revealing an adaptation to the fatigue within the 300 m. Copyright © 2011 Elsevier B.V. All rights reserved.
Cost-Sharing of Ecological Construction Based on Trapezoidal Intuitionistic Fuzzy Cooperative Games
Liu, Jiacai; Zhao, Wenjian
2016-01-01
There exist some fuzziness and uncertainty in the process of ecological construction. The aim of this paper is to develop a direct and effective simplified method for obtaining the cost-sharing scheme when some interested parties form a cooperative coalition to improve the ecological environment of the Min River together. Firstly, we propose the solution concept of the least square prenucleolus of cooperative games with coalition values expressed by trapezoidal intuitionistic fuzzy numbers. Then, based on the square of the distance in numerical value between two trapezoidal intuitionistic fuzzy numbers, we establish a corresponding quadratic programming model to obtain the least square prenucleolus, which can effectively avoid the information distortion and uncertainty enlargement brought about by the subtraction of trapezoidal intuitionistic fuzzy numbers. Finally, we give a numerical example about the cost-sharing of ecological construction in Fujian Province in China to show the validity, applicability, and advantages of the proposed model and method. PMID:27834830
Milne, S C
1996-12-24
In this paper, we give two infinite families of explicit exact formulas that generalize Jacobi's (1829) 4 and 8 squares identities to 4n^2 or 4n(n + 1) squares, respectively, without using cusp forms. Our 24 squares identity leads to a different formula for Ramanujan's tau function τ(n), when n is odd. These results arise in the setting of Jacobi elliptic functions, Jacobi continued fractions, Hankel or Turánian determinants, Fourier series, Lambert series, inclusion/exclusion, the Laplace expansion formula for determinants, and Schur functions. We have also obtained many additional infinite families of identities in this same setting that are analogous to the eta-function identities in appendix I of Macdonald's work [Macdonald, I. G. (1972) Invent. Math. 15, 91-143]. A special case of our methods yields a proof of the two conjectured [Kac, V. G. and Wakimoto, M. (1994) in Progress in Mathematics, eds. Brylinski, J.-L., Brylinski, R., Guillemin, V. & Kac, V. (Birkhäuser Boston, Boston, MA), Vol. 123, pp. 415-456] identities involving representing a positive integer by sums of 4n^2 or 4n(n + 1) triangular numbers, respectively. Our 16 and 24 squares identities were originally obtained via multiple basic hypergeometric series, Gustafson's C_l nonterminating 6φ5 summation theorem, and Andrews' basic hypergeometric series proof of Jacobi's 4 and 8 squares identities. We have (elsewhere) applied symmetry and Schur function techniques to this original approach to prove the existence of similar infinite families of sums of squares identities for n^2 or n(n + 1) squares, respectively. Our sums of more than 8 squares identities are not the same as the formulas of Mathews (1895), Glaisher (1907), Ramanujan (1916), Mordell (1917, 1919), Hardy (1918, 1920), Kac and Wakimoto, and many others.
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1977-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UD(transpose of U), where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and coloured process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
Gram-Schmidt algorithms for covariance propagation
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1975-01-01
This paper addresses the time propagation of triangular covariance factors. Attention is focused on the square-root free factorization, P = UDU/T/, where U is unit upper triangular and D is diagonal. An efficient and reliable algorithm for U-D propagation is derived which employs Gram-Schmidt orthogonalization. Partitioning the state vector to distinguish bias and colored process noise parameters increases mapping efficiency. Cost comparisons of the U-D, Schmidt square-root covariance and conventional covariance propagation methods are made using weighted arithmetic operation counts. The U-D time update is shown to be less costly than the Schmidt method; and, except in unusual circumstances, it is within 20% of the cost of conventional propagation.
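The square-root-free factorization at the heart of both records is compact to implement. Below is a hedged numpy sketch that builds P = U D U^T by peeling rank-1 terms off the trailing column (one standard way to compute the factors, not necessarily the paper's exact recursion), plus a numerical check.

```python
# Hedged sketch of the square-root-free U-D factorization P = U D U^T,
# with U unit upper triangular and D diagonal.
import numpy as np

def ud_factor(P):
    P = P.astype(float).copy()
    n = P.shape[0]
    U, d = np.eye(n), np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])  # peel rank-1 term
    return U, d

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))
P = A @ A.T                                   # a symmetric PD "covariance"
U, d = ud_factor(P)
print(np.allclose(U @ np.diag(d) @ U.T, P))   # True
```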
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-01-01
Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
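For readers who want the shape of a PWLS solve without the CT machinery, the sketch below minimizes a penalized weighted least-squares cost on a toy linear system by Gauss-Seidel sweeps; the toy operator, signal-dependent variances, and quadratic difference penalty are illustrative assumptions, not the authors' sinogram model.

```python
# Hedged sketch of a PWLS solve by Gauss-Seidel on a toy linear system:
# (A^T W A + beta R) x = A^T W y, with W from assumed noise variances.
import numpy as np

rng = np.random.default_rng(7)
A = rng.uniform(size=(60, 30))                 # toy projection operator
x_true = np.convolve(rng.normal(size=30), np.ones(5) / 5, mode="same")
var = 0.01 * (1 + A @ np.abs(x_true))          # signal-dependent variances
y = A @ x_true + rng.normal(size=60) * np.sqrt(var)

W = np.diag(1.0 / var)                         # weights from noise moments
Dm = np.diff(np.eye(30), axis=0)
R = Dm.T @ Dm                                  # quadratic smoothness penalty
H = A.T @ W @ A + 0.5 * R
b = A.T @ W @ y

x = np.zeros(30)
for _ in range(200):                           # Gauss-Seidel sweeps
    for i in range(30):
        x[i] += (b[i] - H[i] @ x) / H[i, i]
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```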
Advanced optical system for scanning-spot photorefractive keratectomy (PRK)
NASA Astrophysics Data System (ADS)
Mrochen, Michael; Wullner, Christian; Semchishen, Vladimir A.; Seiler, Theo
1999-06-01
Purpose: The goal of this presentation is to discuss the use of the Light Shaping Beam Homogenizer in an optical system for scanning-spot PRK. Methods: The basic principle of the LSBH is the transformation of any incident intensity distribution by light scattering on an irregular microlens structure z = f(x,y). The relief of this microlens structure is determined by a defined statistical function, i.e. it is defined by the root-mean-square tilt σ of the surface relief. Therefore, the beam evolution after the LSBH and in the focal plane of an imaging lens was measured for various root-mean-square tilts. Beside this, an optical setup for scanning-spot PRK was assembled according to the theoretical and experimental results. Results: The divergence, homogeneity and the Gaussian radius of the intensity distribution in the treatment plane of the scanning-spot PRK laser system are mainly dependent on the root-mean-square tilt σ of the LSBH, as will be explained by the theoretical description of the LSBH. Conclusions: The LSBH represents a simple, low-cost beam homogenizer with low energy losses for scanning-spot excimer laser systems.
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low resolution images compared with other technologies, and high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and searching algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.
NREL Researchers Test Solar Thermal Technology
… and manufacturing modifications that could lead to significant cost reductions. The major modifications include a larger reflective area (170 square meters) and a low-cost mirror facet design … SAIC's low-cost stretched-membrane heliostat represents a significant advancement …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 KeV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 micron square pixels. For the photon-counting CdTe detector and after resampling to 53 micron square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: A hexagonal pixel array photon-counting CdTe detector after resampling to square pixels provides high-resolution imaging suitable for fluoroscopy.
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
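The estimation problem described above reduces, for a small example, to a transportation linear program. The sketch below recovers a transition matrix from counts at three illustrative sites using the squared Euclidean distance as the cost, then row-normalizes the transport plan into transition probabilities; coordinates and counts are made up, and the totals are kept equal for simplicity.

```python
# Hedged sketch: transition probabilities from counts at two times via a
# transportation linear program with squared Euclidean cost.
import numpy as np
from scipy.optimize import linprog

sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
n1 = np.array([30.0, 50.0, 20.0])     # counts at time 1
n2 = np.array([40.0, 25.0, 35.0])     # counts at time 2
k = len(sites)

C = np.sum((sites[:, None] - sites[None, :]) ** 2, axis=-1)  # squared cost

A_eq = np.zeros((2 * k, k * k))       # row sums = n1, column sums = n2
for i in range(k):
    A_eq[i, i * k:(i + 1) * k] = 1.0
    A_eq[k + i, i::k] = 1.0
b_eq = np.concatenate([n1, n2])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
T = res.x.reshape(k, k)               # optimal transport plan (counts)
P = T / T.sum(axis=1, keepdims=True)  # estimated transition probabilities
print(np.round(P, 3))
```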
Energy-Smart Choices for Schools. An HVAC Comparison Tool. [CD-ROM].
ERIC Educational Resources Information Center
Geothermal Heat Pump Consortium, Inc., Washington, DC.
A CD ROM program provides comparison construction cost capabilities for heating, ventilation, and air conditioning (HVAC) systems in educational facilities. The program combines multiple types of systems with square footage data on low and high construction cost and school size to automatically calculate HVAC comparative construction costs. (GR)
Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei
2016-03-01
We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (Probability Density Functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are called nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate such nuisance parameters by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Fogel, L. J.; Calabrese, P. G.; Walsh, M. J.; Owens, A. J.
1982-01-01
Ways in which autonomous behavior of spacecraft can be extended to treat situations wherein closed-loop control by a human may not be appropriate or even possible are explored. Predictive models that minimize mean squared error and arbitrary cost functions are discussed. A methodology for extracting cyclic components for an arbitrary environment with respect to usual and arbitrary criteria is developed. An approach to prediction and control based on evolutionary programming is outlined. A computer program capable of predicting time series is presented. A design of a control system for a robotic device with partially unknown physical properties is presented.
Li, Yun
2017-01-01
We addressed the fusion estimation problem for nonlinear multisensor systems. Based on the Gauss–Hermite approximation and the weighted least square criterion, an augmented high-dimension measurement from all sensors was compressed into a lower dimension. By combining the low-dimension measurement function with the particle filter (PF), a weighted measurement fusion PF (WMF-PF) is presented. The accuracy of WMF-PF is good, and it has a lower computational cost when compared to centralized fusion PF (CF-PF). An example is given to show the effectiveness of the proposed algorithms. PMID:28956862
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
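The redistribute-to-the-right device the authors generalize is worth seeing in its classic form. The sketch below redistributes each censored observation's mass equally over the observations to its right and reads off a survival estimate, which reproduces the Kaplan-Meier values; the six follow-up values and censoring flags are made up.

```python
# Hedged sketch of the classic redistribute-to-the-right algorithm; the
# implied survival curve matches the Kaplan-Meier estimator.
import numpy as np

t = np.array([2.0, 3.0, 3.5, 5.0, 6.0, 8.0])   # sorted follow-up values
event = np.array([1, 0, 1, 1, 0, 1])           # 1 = observed, 0 = censored

n = len(t)
mass = np.full(n, 1.0 / n)
for i in range(n):
    if event[i] == 0 and i < n - 1:
        mass[i + 1:] += mass[i] / (n - 1 - i)  # redistribute to the right
        mass[i] = 0.0

for ti, ei in zip(t, event):                   # survival at observed values
    if ei:
        print(f"S({ti}) = {mass[t > ti].sum():.3f}")
```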
Measuring the Hall weighting function for square and cloverleaf geometries
NASA Astrophysics Data System (ADS)
Scherschligt, Julia K.; Koon, Daniel W.
2000-02-01
We have directly measured the Hall weighting function—the sensitivity of a four-wire Hall measurement to the position of macroscopic inhomogeneities in Hall angle—for both a square shaped and a cloverleaf specimen. Comparison with the measured resistivity weighting function for a square geometry [D. W. Koon and W. K. Chan, Rev. Sci. Instrum. 69, 12 (1998)] proves that the two measurements sample the same specimen differently. For Hall measurements on both a square and a cloverleaf, the function is nonnegative with its maximum in the center and its minimum of zero at the edges of the square. Converting a square into a cloverleaf is shown to dramatically focus the measurement process onto a much smaller portion of the specimen. While our results agree qualitatively with theory, details are washed out, owing to the finite size of the magnetic probe used.
Ultimate Cost of Building Walls.
ERIC Educational Resources Information Center
Grimm, Clayford T.; Gross, James G.
The need for economic analysis of building walls is discussed, and the factors influencing the ultimate cost of exterior walls are studied. The present worth method is used to analyze three types of exterior non-loadbearing panel or curtain walls. Anticipated costs are expressed in terms of their present value per square foot of wall area. The…
Evaluation of microfabricated deformable mirror systems
NASA Astrophysics Data System (ADS)
Cowan, William D.; Lee, Max K.; Bright, Victor M.; Welsh, Byron M.
1998-09-01
This paper presents recent results for aberration correction and beam steering experiments using polysilicon surface micromachined piston micromirror arrays. Microfabricated deformable mirrors offer a substantial cost reduction for adaptive optic systems. In addition to the reduced mirror cost, microfabricated mirrors typically require low control voltages, thus eliminating high voltage amplifiers. The greatly reduced cost per channel of adaptive optic systems employing microfabricated deformable mirrors promises high order aberration correction at low cost. Arrays of piston micromirrors with 128 active elements were tested. Mirror elements are on a 203 micrometer pitch in a 12 by 12 square grid. The overall array size is 2.4 mm square. The arrays were fabricated in the commercially available DARPA supported MUMPs surface micromachining foundry process. The cost per mirror array in this prototyping process is less than 200 dollars. Experimental results are presented for a hybrid correcting element comprised of a lenslet array and piston micromirror array, and for a piston micromirror array only. Also presented is a novel digital deflection micromirror which requires no digital to analog converters, further reducing the cost of adaptive optics systems.
Optimal Frequency-Domain System Realization with Weighting
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Maghami, Peiman G.
1999-01-01
Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.
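The first stage of the approach, solving a weighted least-squares problem for matrix-fraction coefficients from frequency response data, can be sketched for the SISO case as a single linear solve (a Levi-style linearization). In the toy below, the plant, the model orders, and the exponential weighting profile are all illustrative assumptions.

```python
# Hedged sketch of the weighted LS coefficient step for a SISO polynomial-
# fraction model fit to frequency response data: linearize A(s)H ~ B(s).
import numpy as np

w = np.linspace(0.1, 3.0, 200)
s = 1j * w
H = 1.0 / (s**2 + 0.4 * s + 1.0)       # synthetic frequency response

na, nb = 2, 1                          # denominator (monic) / numerator orders
cols = [-(s**k) * H for k in range(na)] + [s**k for k in range(nb + 1)]
M = np.column_stack(cols)              # unknowns: a_0..a_{na-1}, b_0..b_nb
rhs = (s**na) * H                      # monic term moved to the right side

wt = np.concatenate([np.exp(-0.5 * w)] * 2)    # illustrative exp. weighting
Mw = np.vstack([M.real, M.imag]) * wt[:, None]
rw = np.concatenate([rhs.real, rhs.imag]) * wt

theta = np.linalg.lstsq(Mw, rw, rcond=None)[0]
print("a:", theta[:na].round(3), "b:", theta[na:].round(3))  # ~[1, 0.4], [1, 0]
```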
Squared eigenfunctions for the Sasa-Satsuma equation
NASA Astrophysics Data System (ADS)
Yang, Jianke; Kaup, D. J.
2009-02-01
Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions which satisfy the linearized equation of an integrable equation. They are needed for various studies related to integrable equations, such as the development of its soliton perturbation theory. In this article, squared eigenfunctions are derived for the Sasa-Satsuma equation whose spectral operator is a 3×3 system, while its linearized operator is a 2×2 system. It is shown that these squared eigenfunctions are sums of two terms, where each term is a product of a Jost function and an adjoint Jost function. The procedure of this derivation consists of two steps: First is to calculate the variations of the potentials via variations of the scattering data by the Riemann-Hilbert method. The second one is to calculate the variations of the scattering data via the variations of the potentials through elementary calculations. While this procedure has been used before on other integrable equations, it is shown here, for the first time, that for a general integrable equation, the functions appearing in these variation relations are precisely the squared eigenfunctions and adjoint squared eigenfunctions satisfying, respectively, the linearized equation and the adjoint linearized equation of the integrable system. This proof clarifies this procedure and provides a unified explanation for previous results of squared eigenfunctions on individual integrable equations. This procedure uses primarily the spectral operator of the Lax pair. Thus two equations in the same integrable hierarchy will share the same squared eigenfunctions (except for a time-dependent factor). In the Appendix, the squared eigenfunctions are presented for the Manakov equations whose spectral operator is closely related to that of the Sasa-Satsuma equation.
A Comparison of Lord's Chi Square and Raju's Area Measures in Detection of DIF.
ERIC Educational Resources Information Center
Cohen, Allan S.; Kim, Seock-Ho
1993-01-01
The effectiveness of two statistical tests of the area between item response functions (exact signed area and exact unsigned area) estimated in different samples, a measure of differential item functioning (DIF), was compared with Lord's chi square. Lord's chi square was found to be the most effective in determining DIF. (SLD)
Cost Estimation of Naval Ship Acquisition.
1983-12-01
… one a 9-subsystem model, the other a single total cost model. The models were developed using the linear least squares regression technique … Keywords: Cost estimation; Acquisition; Parametric cost estimate; linear …
Cross-correlation least-squares reverse time migration in the pseudo-time domain
NASA Astrophysics Data System (ADS)
Li, Qingyang; Huang, Jianping; Li, Zhenchun
2017-08-01
The least-squares reverse time migration (LSRTM) method with higher image resolution and amplitude is becoming increasingly popular. However, the LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of the conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relieves the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, thus it can reduce the vertical grid points and memory requirements used during computation, which makes our method more computationally efficient than the standard implementation. Besides, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability for complex models.
NASA Astrophysics Data System (ADS)
Lim, S.; Park, S. K.; Zupanski, M.
2015-04-01
Since the air quality forecast is related to both chemistry and meteorology, a coupled atmosphere-chemistry data assimilation (DA) system is essential to air quality forecasting. Ozone (O3) plays an important role in chemical reactions and is usually assimilated in chemical DA. In tropical cyclones (TCs), O3 usually shows a lower concentration inside the eyewall and an elevated concentration around the eye, impacting atmospheric as well as chemical variables. To identify the impact of O3 observations on TC structure, including atmospheric and chemical information, we employed the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) with an ensemble-based DA algorithm - the maximum likelihood ensemble filter (MLEF). For a TC case that occurred over East Asia, our results indicate that the ensemble forecast is reasonable, accompanied by larger background state uncertainty over the TC and also over eastern China. Similarly, the assimilation of O3 observations impacts atmospheric and chemical variables near the TC and over eastern China. The strongest impact on air quality in the lower troposphere was over China, likely due to pollution advection. In the vicinity of the TC, however, the strongest impact on the adjustment of chemical variables was at higher levels. The impact on atmospheric variables was similar both over China and near the TC. The analysis results are validated using several measures that include the cost function, root-mean-squared error with respect to observations, and degrees of freedom for signal (DFS). All measures indicate a positive impact of DA on the analysis - the cost function and root mean square error decreased by 16.9 and 8.87%, respectively. In particular, the DFS indicates a strong positive impact of observations in the TC area, with a weaker maximum over northeast China.
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
Accelerating the two-point and three-point galaxy correlation functions using Fourier transforms
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2016-01-01
Though Fourier transforms (FTs) are a common technique for finding correlation functions, they are not typically used in computations of the anisotropy of the two-point correlation function (2PCF) about the line of sight in wide-angle surveys, because the line-of-sight direction is not constant on the Cartesian grid. Here we show how FTs can be used to compute the multipole moments of the anisotropic 2PCF. We also show how FTs can be used to accelerate the 3PCF algorithm of Slepian & Eisenstein. In both cases, these FT methods allow one to avoid the computational cost of pair counting, which scales as the square of the number density of objects in the survey. With the upcoming large data sets of the Dark Energy Spectroscopic Instrument, Euclid, and the Large Synoptic Survey Telescope, FT techniques will therefore offer an important complement to simple pair or triplet counts.
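A sketch of the core FT shortcut for the isotropic 2PCF on a periodic box: by the convolution theorem, the inverse FFT of the squared Fourier modes gives the field's autocorrelation at every lattice separation at FFT cost rather than pair-counting cost. The gridding of galaxies into an overdensity field and all names here are assumptions of the sketch:

    import numpy as np

    def xi_from_grid(delta, boxsize, nbins=30):
        # delta: 3-D overdensity field on an n^3 periodic grid
        n = delta.shape[0]
        dk = np.fft.fftn(delta)
        # autocorrelation at every lattice offset (convolution theorem)
        corr = np.fft.ifftn(dk * np.conj(dk)).real / delta.size
        # physical separation of each offset in the periodic box
        idx = np.fft.fftfreq(n) * n
        x, y, z = np.meshgrid(idx, idx, idx, indexing="ij")
        r = np.sqrt(x**2 + y**2 + z**2) * (boxsize / n)
        # average in spherical shells to get xi(r)
        bins = np.linspace(0.0, boxsize / 2, nbins + 1)
        which = np.digitize(r.ravel(), bins)
        xi = np.array([corr.ravel()[which == i].mean()
                       for i in range(1, nbins + 1)])
        return 0.5 * (bins[1:] + bins[:-1]), xi

The anisotropic multipoles require the extra line-of-sight bookkeeping that is the subject of the paper.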
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required formulating a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
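The real-valued KLMS skeleton that Quat-KLMS generalizes is short; a hedged sketch, with a Gaussian kernel standing in for the quaternion-compatible kernels the paper derives (all names are mine):

    import numpy as np

    def klms_predict(x, centers, alphas, width=1.0):
        # f(x) = sum_i alpha_i * k(c_i, x) with a Gaussian kernel
        if not centers:
            return 0.0
        c = np.asarray(centers)
        k = np.exp(-np.sum((c - x) ** 2, axis=1) / (2 * width**2))
        return float(np.dot(alphas, k))

    def klms_train(X, d, eta=0.5, width=1.0):
        # Each sample becomes a kernel center weighted by eta * error.
        centers, alphas = [], []
        for x, target in zip(X, d):
            err = target - klms_predict(x, centers, alphas, width)
            centers.append(x)
            alphas.append(eta * err)
        return centers, np.asarray(alphas)

The quaternion version replaces the Euclidean inner products with quaternion ones and uses the modified HR calculus for the gradient.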
Three filters for visualization of phase objects with large variations of phase gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz
2009-02-20
We propose three amplitude filters for the visualization of phase objects. They interact with the spectra of pure-phase objects in the frequency plane and are based on the tangent and error functions, as well as on an antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial-frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.
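The three profiles are easy to tabulate on a normalized frequency axis. This sketch uses plausible smooth antisymmetric forms of each family; the exact parameterizations in the paper may differ (the steepness a and the normalization are my assumptions):

    import numpy as np
    from scipy.special import erf

    f = np.linspace(-1.0, 1.0, 512)   # normalized spatial frequency
    a = 4.0                           # hypothetical steepness parameter

    h_tan = 0.5 * (1 + np.tanh(a * f))                    # tangent-type filter
    h_erf = 0.5 * (1 + erf(a * f))                        # error-function filter
    h_sqr = 0.5 * (1 + np.sign(f) * np.sqrt(np.abs(f)))   # antisymmetric sqrt

Each array is an amplitude transmittance applied across the frequency plane.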
Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Burken, John; Ishihara, Abraham
2011-01-01
This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
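In outline, the approximation step is a least-squares fit in a Chebyshev basis, for which NumPy has direct support; a generic sketch (the signal and degree are stand-ins, not from the paper):

    import numpy as np
    from numpy.polynomial import chebyshev as cheb

    x = np.linspace(-1.0, 1.0, 200)                       # scaled regressor samples
    y = np.sin(3 * x) + 0.05 * np.random.randn(x.size)    # stand-in nonlinearity

    coef = cheb.chebfit(x, y, deg=8)   # least-squares Chebyshev coefficients
    y_hat = cheb.chebval(x, coef)      # approximation used by the adaptive law
    print("RMS residual:", np.sqrt(np.mean((y - y_hat) ** 2)))

Orthogonality of the basis keeps the normal equations well conditioned, which is the convergence benefit the abstract refers to.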
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function-fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function-fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion, and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
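The key step, as I read it, is the standard sandwich propagation of the full data covariance through the linearized WLS estimator; this sketch assumes the Jacobian is evaluated at the WLS optimum (names and interfaces are mine, not the published software's):

    import numpy as np

    def wls_ice_covariance(J, W, C):
        # J: (n, p) Jacobian of the fit function at the WLS optimum
        # W: (n, n) diagonal weight matrix actually used in the fit
        # C: (n, n) full covariance of the data, including correlations
        A = np.linalg.solve(J.T @ W @ J, J.T @ W)   # linearized estimator map
        return A @ C @ A.T                          # parameter covariance

    # parameter error bars: np.sqrt(np.diag(wls_ice_covariance(J, W, C)))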
Riemannian geometric approach to human arm dynamics, movement optimization, and invariance
NASA Astrophysics Data System (ADS)
Biess, Armin; Flash, Tamar; Liebermann, Dario G.
2011-03-01
We present a generally covariant formulation of human arm dynamics and optimization principles in Riemannian configuration space. We extend the one-parameter family of mean-squared-derivative (MSD) cost functionals from Euclidean to Riemannian space, and we show that they are mathematically identical to the corresponding dynamic costs when formulated in a Riemannian space equipped with the kinetic energy metric. In particular, we derive the equivalence of the minimum-jerk and minimum-torque change models in this metric space. Solutions of the one-parameter family of MSD variational problems in Riemannian space are given by (reparametrized) geodesic paths, which correspond to movements with least muscular effort. Finally, movement invariants are derived from symmetries of the Riemannian manifold. We argue that the geometrical structure imposed on the arm’s configuration space may provide insights into the emerging properties of the movements generated by the motor system.
Goble, Jacob A; Zhang, Yanxin; Shimansky, Yury; Sharma, Siddharth; Dounskaia, Natalia V
2007-09-01
Strategies used by the CNS to optimize arm movements in terms of speed, accuracy, and resistance to fatigue remain largely unknown. A hypothesis is studied that the CNS exploits biomechanical properties of multijoint limbs to increase efficiency of movement control. To test this notion, a novel free-stroke drawing task was used that instructs subjects to make straight strokes in as many different directions as possible in the horizontal plane through rotations of the elbow and shoulder joints. Despite explicit instructions to distribute strokes uniformly, subjects showed biases to move in specific directions. These biases were associated with a tendency to perform movements that included active motion at one joint and largely passive motion at the other joint, revealing a tendency to minimize intervention of muscle torque for regulation of the effect of interaction torque. Other biomechanical factors, such as inertial resistance and kinematic manipulability, were unable to adequately account for these significant biases. Also, minimizations of jerk, muscle torque change, and sum of squared muscle torque were analyzed; however, these cost functions failed to explain the observed directional biases. Collectively, these results suggest that knowledge of biomechanical cost functions regarding interaction torque (IT) regulation is available to the control system. This knowledge may be used to evaluate potential movements and to select movement of "low cost." The preference to reduce active regulation of interaction torque suggests that, in addition to muscle energy, the criterion for movement cost may include neural activity required for movement control.
The impact of financial incentives on physician productivity in medical groups.
Conrad, Douglas A; Sales, Anne; Liang, Su-Ying; Chaudhuri, Anoshua; Maynard, Charles; Pieper, Lisa; Weinstein, Laurel; Gans, David; Piland, Neill
2002-08-01
To estimate the effect of financial incentives in medical groups, both at the level of the individual physician and collectively, on individual physician productivity. Secondary data from 1997 on individual physician and group characteristics from two surveys, the Medical Group Management Association (MGMA) Physician Compensation and Production Survey and the Cost Survey; Area Resource File data on market characteristics; and various sources of state regulatory data. Cross-sectional estimation of individual physician production function models, using ordinary least squares and two-stage least squares regression. Data from respondents completing all items required for the two stages of production function estimation on both MGMA surveys (with RBRVS units as the production measure: 102 groups, 2,237 physicians; and with charges as the production measure: 383 groups, 6,129 physicians). The 102 groups with complete data represent 1.8 percent of the 5,725 MGMA member groups. Individual production-based physician compensation leads to increased productivity, as expected (elasticity = .07, p < .05). The productivity effects of compensation methods based on equal shares of group net income and incentive bonuses are significantly positive (p < .05) and smaller in magnitude. The group-level financial incentive does not appear to be significantly related to physician productivity. Individual physician incentives based on own production do increase physician productivity.
Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad
2018-05-01
The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, by incorporating a single-layer structure of neural networks optimized with genetic algorithms, sequential quadratic programming, and active-set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of neural networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of a one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
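The residual-based cost is straightforward to set up. A toy sketch for Painlevé II, u'' = 2u^3 + x u + alpha, using a small tanh network whose derivatives are available in closed form; BFGS stands in here for the hybrid GA/SQP/active-set optimizers of the paper, and every name and setting is illustrative:

    import numpy as np
    from scipy.optimize import minimize

    alpha, m = 1.0, 10                     # equation parameter, hidden units
    xs = np.linspace(-1.0, 1.0, 40)        # collocation points

    def net(p, x):
        w, b, c = p.reshape(3, m)
        t = np.tanh(np.outer(x, b) + c)    # (n, m) hidden activations
        u = t @ w
        d2u = ((-2.0 * t * (1 - t**2)) * b**2) @ w   # exact second derivative
        return u, d2u

    def cost(p):
        u, d2u = net(p, xs)
        res = d2u - 2 * u**3 - xs * u - alpha   # ODE residual at collocation
        return np.mean(res**2)                  # mean-square error cost

    sol = minimize(cost, 0.1 * np.random.randn(3 * m), method="BFGS")
    print("final cost:", sol.fun)

Boundary or initial conditions would enter as extra penalty terms in the same cost.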
NASA Technical Reports Server (NTRS)
Hardy, E. E.; Skaley, J. E.; Phillips, E. S.
1974-01-01
The goal of this investigation was to develop a low-cost, manual technique for enhancing ERTS-1 imagery and preparing it in a format suitable for users with wide and varied interests in land use and natural resources information. The goals were: to develop enhancement techniques based on concepts and practices extant in the photographic sciences; to provide a means of productive interpretation of the imagery by manual means; to produce a product at low cost; and to provide a product with wide applications, compatible with existing information systems. The cost of preparing the photographically enhanced, enlarged negatives and positives and the diazo materials is about 1 cent per square mile. The cost of creating and mapping a land use classification of twelve use types at a scale of 1:250,000 is only $1 per square mile. The product is understood by users, is economical, and is compatible with existing information systems.
A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment
NASA Astrophysics Data System (ADS)
Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong
Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) and the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while solving high-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the results prove satisfactory.
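Training an LS-SVM reduces to one linear solve of the KKT system, which is what makes it cheap; a hedged single-layer sketch with an RBF kernel (parameter names are mine):

    import numpy as np

    def lssvm_train(X, y, gamma=10.0, width=1.0):
        # Dual system:  [0   1^T        ] [b]   [0]
        #               [1   K + I/gamma] [a] = [y]
        n = X.shape[0]
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        K = np.exp(-d2 / (2 * width**2))          # RBF kernel matrix
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]                    # bias b, dual weights a

The paper's two-layer variant first maps inputs through KPCA, then feeds the reduced features to a linear-programming LS-SVM to gain sparseness.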
Hu, Xinyao; Zhao, Jun; Peng, Dongsheng; Sun, Zhenglong; Qu, Xingda
2018-02-01
Postural control is a complex skill based on the interaction of dynamic sensorimotor processes, and it can be challenging for people with deficits in sensory functions. The foot plantar center of pressure (COP) has often been used for quantitative assessment of postural control. Previously, the foot plantar COP was mainly measured by force plates or by complicated and expensive insole-based measurement systems. Although some low-cost instrumented insoles have been developed, their ability to accurately estimate the foot plantar COP trajectory has not been robust. In this study, a novel individual-specific nonlinear model was proposed to estimate the foot plantar COP trajectories with an instrumented insole based on low-cost force sensitive resistors (FSRs). The model coefficients were determined by a least-square-error approximation algorithm. Model validation was carried out by comparing the estimated COP data with reference data in a variety of postural control assessment tasks. We also compared our data with the COP trajectories estimated by the previously well-accepted weighted-mean approach. Compared with the reference measurements, the average root mean square errors of the COP trajectories of both feet were 2.23 mm (±0.64) (left foot) and 2.72 mm (±0.83) (right foot) along the medial-lateral direction, and 9.17 mm (±1.98) (left foot) and 11.19 mm (±2.98) (right foot) along the anterior-posterior direction. These results are superior to those reported in previous relevant studies and demonstrate that our proposed approach can be used for accurate foot plantar COP trajectory estimation. This study could provide an inexpensive solution to fall risk assessment in home settings or community healthcare centers for the elderly, and it has the potential to help prevent future falls in the elderly.
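The weighted-mean baseline the authors compare against is a one-liner: the COP is the pressure-weighted centroid of the sensor locations. A sketch (sensor layout and units are hypothetical):

    import numpy as np

    def cop_weighted_mean(pressures, positions):
        # pressures: (k,) FSR readings; positions: (k, 2) sensor (x, y) in mm
        w = pressures / np.sum(pressures)
        return w @ positions          # (2,) COP estimate [ML, AP]

The paper's individual-specific model instead fits a nonlinear mapping from FSR readings to COP per subject by least squares, which is what improves on this baseline.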
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2005-01-01
Response surface construction methods using Moving Least Squares (MLS), Kriging, and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
Alber, S A; Schaffner, D W
1992-01-01
A comparison was made between mathematical variations of the square root and Schoolfield models for predicting growth rate as a function of temperature. The statistical consequences of square root and natural logarithm transformations of growth rate used in several variations of the Schoolfield and square root models were examined. Growth rate variances of Yersinia enterocolitica in brain heart infusion broth increased as a function of temperature. The ability of the two data transformations to correct for the heterogeneity of variance was evaluated. A natural logarithm transformation of growth rate was more effective than a square root transformation at correcting for the heterogeneity of variance. The square root model was more accurate than the Schoolfield model when both models used the natural logarithm transformation. PMID:1444367
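A minimal sketch of fitting the square-root (Ratkowsky-type) model under the favored natural-log transformation, sqrt(rate) = b (T - Tmin) so that ln(rate) = 2 ln(b (T - Tmin)); the data values are invented for illustration:

    import numpy as np
    from scipy.optimize import curve_fit

    def ln_sqrt_model(T, b, Tmin):
        return 2.0 * np.log(b * (T - Tmin))

    T = np.array([5, 10, 15, 20, 25, 30], dtype=float)        # degrees C
    rate = np.array([0.02, 0.08, 0.19, 0.35, 0.55, 0.80])     # 1/h (made up)

    (b, Tmin), _ = curve_fit(ln_sqrt_model, T, np.log(rate), p0=[0.03, -5.0])
    print(f"b = {b:.4f}, Tmin = {Tmin:.1f} C")

Fitting on ln(rate) rather than rate itself addresses the growth-rate variance increasing with temperature, which is the paper's point.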
BIOMECHANICS. Why the seahorse tail is square.
Porter, Michael M; Adriaens, Dominique; Hatton, Ross L; Meyers, Marc A; McKittrick, Joanna
2015-07-03
Whereas the predominant shapes of most animal tails are cylindrical, seahorse tails are square prisms. Seahorses use their tails as flexible grasping appendages, in spite of a rigid bony armor that fully encases their bodies. We explore the mechanics of two three-dimensional-printed models that mimic either the natural (square prism) or hypothetical (cylindrical) architecture of a seahorse tail to uncover whether or not the square geometry provides any functional advantages. Our results show that the square prism is more resilient when crushed and provides a mechanism for preserving articulatory organization upon extensive bending and twisting, as compared with its cylindrical counterpart. Thus, the square architecture is better than the circular one in the context of two integrated functions: grasping ability and crushing resistance.
A spectral mimetic least-squares method
Bochev, Pavel; Gerritsma, Marc
2014-09-01
We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.
What is the correct cost functional for variational data assimilation?
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2018-03-01
Variational approaches to data assimilation, and weakly constrained four-dimensional variational assimilation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with maximum a posteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this situation, the MAP estimator (or "most probable path" of the SDE) is obtained by minimising the Onsager-Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or "least squares") functional sometimes being claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice, where SDEs are approximated by discrete-time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggests that even in discrete time, a version of the Onsager-Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
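For a diffusion dX = f(X) dt + sigma dW, the distinction at issue can be written compactly (standard forms, in my notation; the divergence term is what the energy functional omits):

    \begin{aligned}
    J_{\text{energy}}[x] &= \frac{1}{2\sigma^{2}} \int_{0}^{T}
        \bigl\lVert \dot{x}(t) - f\bigl(x(t)\bigr) \bigr\rVert^{2}\, dt, \\
    J_{\text{OM}}[x] &= J_{\text{energy}}[x]
        + \frac{1}{2} \int_{0}^{T} \nabla \cdot f\bigl(x(t)\bigr)\, dt .
    \end{aligned}

Minimizing J_OM gives the most probable path (the MAP estimator); minimizing the energy functional alone does not, except in special cases such as constant divergence of the drift.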
42 CFR 124.705 - Amount of recovery.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH RESOURCES DEVELOPMENT... calculating a reproduction value using construction cost indexes or current costs per square foot for... under the Public Works and Economic Development Act of 1965 (42 U.S.C. 3121, et seq.) or the Local...
42 CFR 124.705 - Amount of recovery.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES HEALTH RESOURCES DEVELOPMENT... calculating a reproduction value using construction cost indexes or current costs per square foot for... under the Public Works and Economic Development Act of 1965 (42 U.S.C. 3121, et seq.) or the Local...
NASA Astrophysics Data System (ADS)
Zhou, Yali; Zhang, Qizhi; Yin, Yixin
2015-05-01
In this paper, active control of impulsive noise with a symmetric α-stable (SαS) distribution is studied. A general step-size-normalized filtered-x least mean square (FxLMS) algorithm is developed based on an analysis of existing algorithms, and the Gaussian distribution function is used to normalize the step size. Compared with existing algorithms, the proposed algorithm needs neither parameter selection and threshold estimation nor cost function selection and complex gradient computation. Computer simulations suggest that the proposed algorithm is effective for attenuating SαS impulsive noise, and the algorithm has been implemented in an experimental ANC system. Experimental results show that the proposed scheme performs well for SαS impulsive noise attenuation.
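A hedged sketch of an FxLMS loop with a Gaussian shrink on the step size, which is my reading of the idea: large (impulsive) errors get an exponentially reduced update, so no hard thresholds are needed. Signal names, buffering, and the exact normalization are illustrative:

    import numpy as np

    def fxlms_gauss(x, d, s_hat, L=32, mu=0.05, sigma=1.0):
        # x: reference signal; d: disturbance at the error sensor;
        # s_hat: estimated secondary-path impulse response
        w = np.zeros(L)                       # adaptive filter weights
        y_hist = np.zeros(len(s_hat))         # recent controller outputs
        xf = np.convolve(x, s_hat)[: len(x)]  # filtered-x signal
        e_out = np.zeros(len(x))
        for n in range(len(x)):
            m = min(L, n + 1)
            xbuf = np.zeros(L)
            xbuf[:m] = x[n - m + 1 : n + 1][::-1]
            y = w @ xbuf                      # anti-noise sample
            y_hist = np.r_[y, y_hist[:-1]]
            e = d[n] - s_hat @ y_hist         # residual after secondary path
            step = mu * np.exp(-e**2 / (2 * sigma**2))   # Gaussian step shrink
            xfbuf = np.zeros(L)
            xfbuf[:m] = xf[n - m + 1 : n + 1][::-1]
            w += step * e * xfbuf             # FxLMS weight update
            e_out[n] = e
        return w, e_out

For Gaussian-scale errors the factor is near 1 and the loop behaves like plain FxLMS; a heavy-tailed spike drives the factor toward 0 and freezes the weights for that sample.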
Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A
2013-01-01
A procedure to improve the convergence rate of affine registration methods for medical brain images when the images differ greatly from the template is presented. The methodology is based on histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is used as the objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. As we show in this work using SPECT and PET brain images, using histogram equalization as a preprocessing step improves the convergence rate of the affine registration algorithm.
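Histogram matching itself is a small quantile-mapping routine; a generic sketch (not the authors' code):

    import numpy as np

    def match_histogram(source, reference):
        # Map source intensities so their CDF matches the reference's.
        s_vals, s_idx, s_cnt = np.unique(
            source.ravel(), return_inverse=True, return_counts=True)
        r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
        s_cdf = np.cumsum(s_cnt) / source.size
        r_cdf = np.cumsum(r_cnt) / reference.size
        mapped = np.interp(s_cdf, r_cdf, r_vals)   # quantile mapping
        return mapped[s_idx].reshape(source.shape)

Bringing the intensity distributions into agreement first makes the SSD cost better behaved, which is why the Gauss-Newton iterations converge faster.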
NASA Technical Reports Server (NTRS)
Wolf, M.; Goldman, H.
1981-01-01
The attributes of various metallization processes were investigated. It is shown that several metallization process sequences will lead to adequate metallization for large-area, high-performance solar cells at a metallization add-on price in the range of $6 to $12 per square meter, or about $0.04 to $0.08/W(peak), assuming 15% efficiency. Conduction layer formation by thick-film silver or by tin or tin/lead solder leads to metallization add-on prices significantly above the $6 to $12 per square meter range. The wet chemical processes of electroless and electrolytic plating, for strike/barrier layer and conduction layer formation respectively, seem to be the most cost-effective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, E. K.; Forristall, R.
2005-11-01
Industrial Solar Technology has assembled a team of experts to develop a large-aperture parabolic trough for the electric power market that moves beyond the cost and operating limitations of 1980s designs based on sagged glass reflectors. IST's structurally efficient space frame design will require nearly 50% less material per square meter than a Solel LS-2 concentrator, and the new trough will rotate around the focal point. This feature eliminates flexhoses that increase pump power, installation and maintenance costs. IST aims to deliver a concentrator module costing less than $100 per square meter that can produce temperatures up to 400 C. The IST concentrator is ideally suited for the application of front-surface film reflectors and ensures that US corporations will manufacture major components, except for the high-temperature receivers.
Development of a Low-Cost Attitude Sensor for Agricultural Vehicles
USDA-ARS?s Scientific Manuscript database
The objective of this research was to develop a low-cost attitude sensor for agricultural vehicles. The attitude sensor was composed of three vibratory gyroscopes and two inclinometers. A sensor fusion algorithm was developed to estimate tilt angles (roll and pitch) by least-squares method. In the a...
Cao, Jiguo; Huang, Jianhua Z.; Wu, Hulin
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function, which is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online. PMID:23155351
Asymptotic Analysis Of The Total Least Squares ESPRIT Algorithm
NASA Astrophysics Data System (ADS)
Ottersten, B. E.; Viberg, M.; Kailath, T.
1989-11-01
This paper considers the problem of estimating the parameters of multiple narrowband signals arriving at an array of sensors. Modern approaches to this problem often involve costly procedures for calculating the estimates. The ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) algorithm was recently proposed as a means for obtaining accurate estimates without requiring a costly search of the parameter space. This method utilizes an array invariance to arrive at a computationally efficient multidimensional estimation procedure. Herein, the asymptotic distribution of the estimation error is derived for the Total Least Squares (TLS) version of ESPRIT. The Cramer-Rao Bound (CRB) for the ESPRIT problem formulation is also derived and found to coincide with the variance of the asymptotic distribution through numerical examples. The method is also compared to least squares ESPRIT and MUSIC as well as to the CRB for a calibrated array. Simulations indicate that the theoretic expressions can be used to accurately predict the performance of the algorithm.
Reduced cost mission design using surrogate models
NASA Astrophysics Data System (ADS)
Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad
2016-01-01
This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 102 are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
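The surrogate construction described here is a least-squares projection onto an orthogonal polynomial basis. A hedged sketch with tensorized Legendre polynomials over inputs scaled to [-1, 1] (the total-degree truncation and all names are my choices):

    import numpy as np
    from itertools import product
    from numpy.polynomial import legendre as leg

    def fit_surrogate(X, y, deg=3):
        # X: (n, d) samples in [-1, 1]^d; y: (n,) responses (e.g. delta-V)
        n, d = X.shape
        idx = [m for m in product(range(deg + 1), repeat=d) if sum(m) <= deg]
        Phi = np.column_stack([
            np.prod([leg.legval(X[:, j], np.eye(deg + 1)[mj])
                     for j, mj in enumerate(m)], axis=0)
            for m in idx])
        coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return idx, coef        # multi-indices and expansion coefficients

Once fit, evaluating the surrogate is a dot product, which is the source of the reported speedup over full propagation; the coefficients also feed the variance-based sensitivity analysis directly.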
A New Algorithm to Optimize Maximal Information Coefficient
Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal-grid-size restriction of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships better, and the statistical power of MIC calculated by ChiMIC is higher than that calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
Toward Large-Area Sub-Arcsecond X-Ray Telescopes
NASA Technical Reports Server (NTRS)
ODell, Stephen L.; Aldcroft, Thomas L.; Allured, Ryan; Atkins, Carolyn; Burrows, David N.; Cao, Jian; Chalifoux, Brandon D.; Chan, Kai-Wing; Cotroneo, Vincenzo; Elsner, Ronald F.;
2014-01-01
The future of x-ray astronomy depends upon the development of x-ray telescopes with larger aperture areas (≈3 square meters) and fine angular resolution (≈1 arcsecond). Combined with the special requirements of nested grazing-incidence optics, the mass and envelope constraints of space-borne telescopes render such advances technologically and programmatically challenging. Achieving this goal will require precision fabrication, alignment, mounting, and assembly of large areas (≈600 square meters) of lightweight (≈1 kilogram/square meter areal density) high-quality mirrors at an acceptable cost (≈$1 million/square meter of mirror surface area). This paper reviews relevant technological and programmatic issues, as well as possible approaches for addressing them, including active (in-space adjustable) alignment and figure correction.
Robust nonlinear canonical correlation analysis: application to seasonal climate forecasting
NASA Astrophysics Data System (ADS)
Cannon, A. J.; Hsieh, W. W.
2008-02-01
Robust variants of nonlinear canonical correlation analysis (NLCCA) are introduced to improve performance on datasets with low signal-to-noise ratios, for example those encountered when making seasonal climate forecasts. The neural network model architecture of standard NLCCA is kept intact, but the cost functions used to set the model parameters are replaced with more robust variants. The Pearson product-moment correlation in the double-barreled network is replaced by the biweight midcorrelation, and the mean squared error (mse) in the inverse mapping networks can be replaced by the mean absolute error (mae). Robust variants of NLCCA are demonstrated on a synthetic dataset and are used to forecast sea surface temperatures in the tropical Pacific Ocean based on the sea level pressure field. Results suggest that adoption of the biweight midcorrelation can lead to improved performance, especially when a strong, common event exists in both predictor/predictand datasets. Replacing the mse by the mae leads to improved performance on the synthetic dataset, but not on the climate dataset except at the longest lead time, which suggests that the appropriate cost function for the inverse mapping networks is more problem dependent.
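The biweight midcorrelation that replaces Pearson's r is compact; a standard-form sketch (the usual tuning constant c = 9):

    import numpy as np

    def bicor(x, y, c=9.0):
        # Robust correlation from Tukey-biweight-downweighted deviations
        def tw(v):
            med = np.median(v)
            mad = np.median(np.abs(v - med))
            u = (v - med) / (c * mad)
            w = (1 - u**2) ** 2 * (np.abs(u) < 1)
            return (v - med) * w
        xt, yt = tw(x), tw(y)
        return np.sum(xt * yt) / (np.linalg.norm(xt) * np.linalg.norm(yt))

Observations beyond c MADs from the median get zero weight, which is what tames the low signal-to-noise predictor/predictand pairs.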
On removing interpolation and resampling artifacts in rigid image registration.
Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce
2013-02-01
We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
NASA Astrophysics Data System (ADS)
Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.
2017-02-01
Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high-resolution micro-CT images are used to develop atlas-based segmentation methods to extract these non-visible anatomical features in clinical CT images. Accurate registration of the high- and low-resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varied registration parameters are the cost function (normalized correlation (NC), mutual information (MI), and mean square error (MSE)), the interpolation method (linear, windowed-sinc, and B-spline), and the sampling percentage (1%, 10%, and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the absolute percentage error in cochlear volume. Using the MI or MSE cost functions with linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with the NC cost function, B-spline interpolation, and 100% sampling can be the foundation of an optimized atlas-based segmentation algorithm for intracochlear structures in clinical CT images.
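The headline DSC metric is easy to compute from two binary label volumes; a generic sketch:

    import numpy as np

    def dice(a, b):
        # Dice similarity coefficient: 2 |A and B| / (|A| + |B|)
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

A DSC of 1 means the registered atlas labels coincide exactly with the reference segmentation; 0.828 indicates substantial but imperfect overlap.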
Diedrich, Karl T; Roberts, John A; Schmidt, Richard H; Parker, Dennis L
2012-12-01
Attributes like length, diameter, and tortuosity of tubular anatomical structures, such as blood vessels in medical images, can be measured from centerlines. This study develops methods for comparing the accuracy and stability of centerline algorithms. Sample data included numeric phantoms simulating arteries and clinical human brain artery images. Centerlines were calculated from segmented phantoms and arteries with shortest-path centerline algorithms developed with different cost functions. The cost functions were the inverse modified distance from edge (MDFEi), the center of mass (COM), the binary-thinned (BT) MDFEi, and the BT-COM. The accuracy of the centerline algorithms was measured by the root mean square error from known centerlines of phantoms. The stability of the centerlines was measured by starting the centerline tree from different points and measuring the differences between trees. The accuracy and stability of the centerlines were visualized by overlaying centerlines on vasculature images. The BT-COM cost function centerline was the most stable in numeric phantoms and human brain arteries. The MDFEi-based centerline was the most accurate in the numeric phantoms. The COM-based centerline correctly handled the "kissing" artery in 16 of 16 arteries in eight subjects, whereas BT-COM was correct in 10 of 16 and MDFEi in 6 of 16. The COM-based centerline algorithm was selected for future use based on its ability to handle arteries where the initial binary vessel segmentation exhibits closed loops. The selected COM centerline was found to measure numerical phantoms to within 2% of the known length.
Development of an Ultrasonic Airflow Measurement Device for Ducted Air
Raine, Andrew B.; Aslam, Nauman; Underwood, Christopher P.; Danaher, Sean
2015-01-01
In this study, an in-duct ultrasonic airflow measurement device has been designed, developed and tested. The airflow measurement results for a small range of airflow velocities and temperatures show that the accuracy was better than 3.5% root mean square (RMS) when tested within a round or square duct, compared to the in-line Venturi tube airflow meter used for reference. This proof-of-concept device has provided evidence that, with further development, it could be a low-cost alternative to pressure differential devices such as the orifice plate airflow meter for monitoring the energy efficiency performance and reliability of ventilation systems. The design uses a number of techniques and design choices to lower the implementation cost of the device compared to traditional airflow meters. The design choices that were found to work well are the single-sided transducer arrangement for a “V”-shaped reflective path and the use of square-wave transmitter pulses ending with the necessary 180°-phase-changed pulse train to suppress transducer ringing. The device is also designed so that it does not have to rely on high-speed analogue-to-digital converters (ADC) and intensive digital signal processing, so it could be implemented using voltage comparators and low-cost microcontrollers. PMID:25954952
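Transit-time ultrasonic flow measurement rests on one relation: the difference of the reciprocal downstream and upstream transit times is proportional to the axial velocity and independent of the speed of sound (hence of temperature). A sketch with assumed geometry names:

    import numpy as np

    def duct_velocity(t_down, t_up, path_len, theta_deg):
        # t_down/t_up: transit times with/against the flow (s)
        # path_len: total acoustic path length (m); theta_deg: angle
        # between the acoustic path and the duct axis
        th = np.radians(theta_deg)
        return path_len / (2.0 * np.cos(th)) * (1.0 / t_down - 1.0 / t_up)

Since t = L / (c ± v cos(theta)) along the two directions, the sound speed c cancels in the difference, which is why no temperature compensation is needed.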
Squared exponential covariance function for prediction of hydrocarbon in seabed logging application
NASA Astrophysics Data System (ADS)
Mukhtar, Siti Mariam; Daud, Hanita; Dass, Sarat Chandra
2016-11-01
Seabed logging (SBL) technology has progressively emerged as one of the most in-demand technologies in the exploration and production (E&P) industry. Hydrocarbon prediction in deep-water areas is a crucial task for any oil and gas company, as drilling is very expensive. Simulation data generated with Computer Simulation Technology (CST) software are used to predict the presence of hydrocarbon, where the models replicate a real SBL environment. These models indicate that hydrocarbon-filled reservoirs are more resistive than the surrounding water-filled sediments. As the hydrocarbon depth increases, it becomes more challenging to differentiate data with and without hydrocarbon. MATLAB is used for data extraction and curve fitting using Gaussian processes (GP). GP methods can be divided into regression and classification problems; this work focuses only on Gaussian process regression (GPR). The most popular covariance function for GPR is the squared exponential (SE), as it provides stable and probabilistic predictions on huge amounts of data. Hence, the SE covariance is used to predict the presence or absence of hydrocarbon in the reservoir from the generated data.
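A minimal GPR sketch with the SE covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2)); hyperparameters are fixed here rather than optimized, and all names are stand-ins:

    import numpy as np

    def se_kernel(a, b, amp=1.0, ls=1.0):
        d2 = (a[:, None] - b[None, :]) ** 2
        return amp**2 * np.exp(-0.5 * d2 / ls**2)

    def gpr_predict(x_tr, y_tr, x_te, noise=0.01):
        # GP posterior mean and pointwise variance for 1-D inputs
        K = se_kernel(x_tr, x_tr) + noise**2 * np.eye(len(x_tr))
        Ks = se_kernel(x_te, x_tr)
        mean = Ks @ np.linalg.solve(K, y_tr)
        cov = se_kernel(x_te, x_te) - Ks @ np.linalg.solve(K, Ks.T)
        return mean, np.diag(cov)

The predictive variance is what makes the GP useful here: it flags depths at which the with- and without-hydrocarbon responses can no longer be separated confidently.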
NASA Astrophysics Data System (ADS)
Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei
2018-01-01
Identification of time-varying modal parameters contributes to structural health monitoring, fault detection, vibration control, etc., of operational time-varying structural systems. However, it is a challenging task, because no more information is available for identifying a time-varying system than for a time-invariant one. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland compactly supported radial basis function is used to achieve sparsity of the Gram matrix. A Gamma-test-based non-parametric approach to selecting the regularization factor is adapted for the proposed estimator, replacing time-consuming n-fold cross-validation. A series of numerical examples illustrates the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment further validates the proposed estimator.
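Compact support is what buys the sparsity: the kernel is exactly zero beyond a finite radius, so most Gram-matrix entries vanish. A sketch of one common Wendland function (which member of the Wendland family the paper uses is not stated here):

    import numpy as np

    def wendland_c2(r, support=1.0):
        # phi(r) = (1 - r)^4 (4 r + 1) on [0, 1], zero outside
        r = np.abs(r) / support
        return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

With such a kernel the Gram matrix can be stored and factorized as a sparse matrix, cutting both memory and solve time relative to a Gaussian kernel.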
Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz
2015-10-06
In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides originating from eight model proteins. A matrix of 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, and that GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All sources of the model's error were identified, allowing further application of the developed methodology in proteomics.
First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients
NASA Technical Reports Server (NTRS)
Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard
1996-01-01
The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is established independently of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
Probabilistic estimation of numbers and costs of future landslides in the San Francisco Bay region
Crovelli, R.A.; Coe, J.A.
2009-01-01
We used historical records of damaging landslides triggered by rainstorms and a newly developed Probabilistic Landslide Assessment Cost Estimation System (PLACES) to estimate the numbers and direct costs of future landslides in the 10-county San Francisco Bay region. Historical records of damaging landslides in the region are incomplete. Therefore, our estimates of numbers and costs of future landslides are minimal estimates. The estimated mean annual number of future damaging landslides for the entire 10-county region is about 65. Santa Cruz County has the highest estimated mean annual number of damaging future landslides (about 18), whereas Napa, San Francisco, and Solano Counties have the lowest estimated mean numbers of damaging landslides (about 1 each). The estimated mean annual cost of future landslides in the entire region is about US $14.80 million (year 2000 $). The estimated mean annual cost is highest for San Mateo County ($3.24 million) and lowest for Solano County ($0.18 million). The annual per capita cost for the entire region will be about $2.10. Santa Cruz County will have the highest annual per capita cost at $8.45, whereas San Francisco County will have the lowest per capita cost at $0.31. Normalising costs by dividing by the percentage of land area with slopes equal to or greater than 17% indicates that San Francisco County will have the highest cost per square km ($7,101), whereas Santa Clara County will have the lowest cost per square km ($229). These results indicate that the San Francisco Bay region has one of the highest levels of landslide risk in the United States. Compared with landslide cost estimates from the rest of the world, the risk level in the Bay region seems high, but not exceptionally high.
Ngodock, Hans; Carrier, Matthew; Fabre, Josette; Zingarelli, Robert; Souopgui, Innocent
2017-07-01
This study presents the theoretical framework for variational data assimilation of acoustic pressure observations into an acoustic propagation model, namely the range-dependent acoustic model (RAM). RAM uses the split-step Padé algorithm to solve the parabolic equation. The assimilation consists of minimizing a weighted least squares cost function that includes discrepancies between the model solution and the observations. The minimization process, which uses the calculus of variations, requires the derivation of the tangent linear and adjoint models of the RAM. The mathematical derivations are presented here; for the sake of brevity, a companion study presents the numerical implementation and results from assimilating simulated acoustic pressure observations.
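The weighted least squares cost function of such a scheme typically has the generic variational form below (my notation, not necessarily the paper's):

    J(u) = \frac{1}{2}\,(u - u_b)^{\mathsf{T}} B^{-1} (u - u_b)
         + \frac{1}{2}\,\bigl(H(u) - y\bigr)^{\mathsf{T}} R^{-1} \bigl(H(u) - y\bigr),

where u is the control (e.g., the acoustic field or environmental parameters), u_b a background estimate, y the observed pressures, H the observation operator sampling the RAM solution, and B, R the background and observation error covariances. The gradient of the second term is what requires the tangent linear and adjoint of the propagation model.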
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of an ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple "low" minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and the exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
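With the exponents fixed on a geometric sequence, the coefficients follow from one linear least-squares solve; a toy sketch with an arbitrary stand-in for the algebraic kernel part (the spacing parameters are invented, and the multiplier, optimized in the method itself, is held fixed here):

    import numpy as np

    x = np.linspace(0.01, 10.0, 400)
    f = 1.0 / (1.0 + x)                  # stand-in for the algebraic part

    s0, ratio, terms = 0.05, 1.7, 12     # geometric exponent sequence
    s = s0 * ratio ** np.arange(terms)
    E = np.exp(-np.outer(x, s))          # design matrix of exponentials
    c, *_ = np.linalg.lstsq(E, f, rcond=None)
    print("max abs error:", np.max(np.abs(E @ c - f)))

In the full method the multiplier (here ratio) is itself adjusted by least squares, which is what pushes the accuracy well beyond fixed approximations.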
A risk-based prospective payment system that integrates patient, hospital and national costs.
Siegel, C; Jones, K; Laska, E; Meisner, M; Lin, S
1992-05-01
We suggest that a desirable form for prospective payment for inpatient care is hospital average cost plus a linear combination of individual patient and national average cost. When the coefficients are chosen to minimize mean squared error loss between payment and costs, the payment has efficiency and access incentives. The coefficient multiplying patient costs is a hospital specific measure of financial risk of the patient. Access is promoted since providers receive higher reimbursements for risky, high cost patients. Historical cost data can be used to obtain estimates of payment parameters. The method is applied to Medicare data on psychiatric inpatients.
NASA Astrophysics Data System (ADS)
Hoffer, Nathan Von
Remote sensing has traditionally been done with satellites and manned aircraft. While these methods can yield useful scientific data, satellites and manned aircraft have limitations in data frequency, processing time, and real-time re-tasking. Small low-cost unmanned aerial vehicles (UAVs) provide greater possibilities for personal scientific research than traditional remote sensing platforms. Precision aerial data requires an accurate vehicle dynamics model for controller development, robust flight characteristics, and fault tolerance. One method of developing a model is system identification (system ID). In this thesis, system ID of a small low-cost fixed-wing T-tail UAV is conducted. The linearized longitudinal equations of motion are derived from first principles. Foundations of recursive least squares (RLS) are presented, along with RLS with an error filtering online learning scheme (EFOL). Sensors, data collection, data consistency checking, and data processing are described. Batch least squares (BLS) and BLS with EFOL are used to identify aerodynamic coefficients of the UAV. Results of these two methods with flight data are discussed.
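For reference, the RLS recursion at the center of such system ID is a few lines; a generic sketch for y[n] = phi[n]^T theta + noise (the regressor layout and tuning values are illustrative):

    import numpy as np

    def rls(Phi, y, lam=0.99, delta=1e3):
        # Phi: (n, p) regressors (e.g. angle of attack, pitch rate, elevator);
        # y: (n,) measured outputs; lam: forgetting factor
        p = Phi.shape[1]
        theta = np.zeros(p)
        P = delta * np.eye(p)                       # large initial covariance
        for phi, yk in zip(Phi, y):
            k = P @ phi / (lam + phi @ P @ phi)     # gain vector
            theta = theta + k * (yk - phi @ theta)  # innovation update
            P = (P - np.outer(k, phi @ P)) / lam    # covariance update
        return theta

Batch least squares solves the same normal equations in one shot; the recursive form matters when estimates must track the vehicle in flight.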
On the Optimization of the Doses of Chemical Fertilizers for Crops
NASA Astrophysics Data System (ADS)
Sala, Florin; Boldea, Marius
2011-09-01
The mono-factorial model, which gives the relation between the yield and the dose of chemical fertilizers, is based on the Mitscherlich function f1(x) = f1(0) + a1(1 - e^(-b1x)). In addition to this function, we can consider f2(x) = f2(0) + a2 tanh(b2x) as the basis for a new mathematical model, where tanh(b2x) is the hyperbolic tangent. In the bi-factorial case, f(x,y) = f(0,0) + a1 tanh(b1x) + a2 tanh(b2y) + a3 tanh(b1x)tanh(b2y) generalizes the last relation. The constants involved in these functions are determined with the least squares method, by comparison with the experimental data. Taking into account both the market value of the products and the cost of fertilizers, we can find the optimal doses for maximizing certain economic indicators, such as revenue or profitability.
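A minimal sketch of the least-squares step for the bi-factorial tanh model, using scipy.optimize.curve_fit on synthetic dose-yield data; the doses, noise level, and parameter values are invented for illustration and are not the paper's experimental data.

import numpy as np
from scipy.optimize import curve_fit

# Bi-factorial yield model from the abstract:
# f(x, y) = f0 + a1*tanh(b1*x) + a2*tanh(b2*y) + a3*tanh(b1*x)*tanh(b2*y)
def yield_model(xy, f0, a1, b1, a2, b2, a3):
    x, y = xy
    return (f0 + a1 * np.tanh(b1 * x) + a2 * np.tanh(b2 * y)
            + a3 * np.tanh(b1 * x) * np.tanh(b2 * y))

# Synthetic doses (kg/ha) and yields, standing in for field data.
rng = np.random.default_rng(1)
x = rng.uniform(0, 200, 40)        # hypothetical N dose
y = rng.uniform(0, 150, 40)        # hypothetical P dose
truth = yield_model((x, y), 2000, 1500, 0.02, 900, 0.03, 400)
obs = truth + rng.normal(0, 50, x.size)

params, _ = curve_fit(yield_model, (x, y), obs,
                      p0=[2000, 1000, 0.01, 1000, 0.01, 100])
print(params)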
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handling large samples in test-of-fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
Barker, Jeffrey W.; Rosso, Andrea L.; Sparto, Patrick J.; Huppert, Theodore J.
2016-01-01
Abstract. Functional near-infrared spectroscopy (fNIRS) is a relatively low-cost, portable, noninvasive neuroimaging technique for measuring task-evoked hemodynamic changes in the brain. Because fNIRS can be applied to a wide range of populations, such as children or infants, and under a variety of study conditions, including those involving physical movement, gait, or balance, fNIRS data are often confounded by motion artifacts. Furthermore, the high sampling rate of fNIRS leads to high temporal autocorrelation due to systemic physiology. These two factors can reduce the sensitivity and specificity of detecting hemodynamic changes. In a previous work, we showed that these factors could be mitigated by autoregressive-based prewhitening followed by the application of an iterative reweighted least squares algorithm offline. This current work extends these same ideas to real-time analysis of brain signals by modifying the linear Kalman filter, resulting in an algorithm for online estimation that is robust to systemic physiology and motion artifacts. We evaluated the performance of the proposed method via simulations of evoked hemodynamics that were added to experimental resting-state data, which provided realistic fNIRS noise. Last, we applied the method post hoc to data from a standing balance task. Overall, the new method showed good agreement with the analogous offline algorithm, in which both methods outperformed ordinary least squares methods. PMID:27226974
The Impact of Financial Incentives on Physician Productivity in Medical Groups
Conrad, Douglas A; Sales, Anne; Liang, Su-Ying; Chaudhuri, Anoshua; Maynard, Charles; Pieper, Lisa; Weinstein, Laurel; Gans, David; Piland, Neill
2002-01-01
Objective: To estimate the effect of financial incentives in medical groups, both at the level of the individual physician and collectively, on individual physician productivity. Data Sources/Study Setting: Secondary data from 1997 on individual physician and group characteristics from two surveys: the Medical Group Management Association (MGMA) Physician Compensation and Production Survey and the Cost Survey; Area Resource File data on market characteristics; and various sources of state regulatory data. Study Design: Cross-sectional estimation of individual physician production function models, using ordinary least squares and two-stage least squares regression. Data Collection: Data from respondents completing all items required for the two stages of production function estimation on both MGMA surveys (with RBRVS units as the production measure: 102 groups, 2,237 physicians; and with charges as the production measure: 383 groups, 6,129 physicians). The 102 groups with complete data represent 1.8 percent of the 5,725 MGMA member groups. Principal Findings: Individual production-based physician compensation leads to increased productivity, as expected (elasticity=.07, p<.05). The productivity effects of compensation methods based on equal shares of group net income and incentive bonuses are significantly positive (p<.05) and smaller in magnitude. The group-level financial incentive does not appear to be significantly related to physician productivity. Conclusions: Individual physician incentives based on own production do increase physician productivity. PMID:12236389
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high-resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35 m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (an open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners, and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square-corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. (Figure: three sample photographs of the square with the created planform and control points.)
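The perspective-correction step can be illustrated with a small homography sketch: the four marked corners of the floating square define a projective map to a metric square, which then rectifies every other marked pixel. This is a generic direct-linear-transform formulation under assumed corner coordinates, not the authors' MATLAB code, and it omits the radial-distortion and multi-photo weighting steps.

import numpy as np

def homography_from_square(pixel_corners, side=1.0):
    # Direct linear transform mapping the four marked pixel corners of the
    # floating square to a metric square of the given side length.
    src = np.asarray(pixel_corners, float)                 # 4x2 image coords
    dst = np.array([[0, 0], [side, 0], [side, side], [0, side]], float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)            # null-space vector, reshaped to H

def rectify(H, pts):
    # Map pixel coordinates to metric coordinates via the homography.
    pts = np.asarray(pts, float)
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Hypothetical corner pixels of a 0.5 m square in one photo, then two
# marked stream-outline pixels rectified into metric coordinates.
H = homography_from_square([[102, 310], [455, 298], [470, 640], [95, 655]],
                           side=0.5)
print(rectify(H, [[220, 400], [300, 480]]))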
Desktop Nanofabrication with Massively Multiplexed Beam Pen Lithography
Liao, Xing; Brown, Keith A.; Schmucker, Abrin L.; Liu, Guoliang; He, Shu; Shim, Wooyoung; Mirkin, Chad A.
2013-01-01
The development of a lithographic method that can rapidly define nanoscale features across centimeter-scale surfaces has been a long-standing goal of the nanotechnology community. If such a ‘desktop nanofab’ could be implemented in a low-cost format, it would bring the possibility of point-of-use nanofabrication for rapidly prototyping diverse functional structures. Here we report the development of a new tool that is capable of writing arbitrary patterns composed of diffraction-unlimited features over square-centimeter areas that are in registry with existing patterns and nanostructures. Importantly, this instrument is based on components that are inexpensive compared to the combination of state-of-the-art nanofabrication tools that approach its capabilities. This tool can be used to prototype functional electronic devices in a mask-free fashion, in addition to providing a unique platform for performing high-throughput nano- to macroscale photochemistry with relevance to biology and medicine. PMID:23868336
Manifold Learning by Preserving Distance Orders.
Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz
2014-03-01
Nonlinear dimensionality reduction is essential for the analysis and interpretation of high-dimensional data sets. In this manuscript, we propose a distance-order-preserving manifold learning algorithm that extends the basic mean-squared-error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms on synthetic datasets, using the commonly used residual variance metric and the proposed percentage-of-violated-distance-orders metric. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J. (Principal Investigator)
1982-01-01
Snow reflectance in all six TM reflective bands (1, 2, 3, 4, 5, and 7) was simulated using a delta-Eddington model. Snow reflectance in bands 4, 5, and 7 appears sensitive to grain size. It appears that the TM filters resemble a "square wave" closely enough that a square wave can be assumed in calculations. Integrated band reflectance over the actual response functions was calculated using sensor data supplied by Santa Barbara Research Center. Differences between integrating over the actual response functions and the equivalent square wave were negligible. Tables are given which show (1) sensor saturation radiance as a percentage of the solar constant, integrated through the band response function; (2) comparisons of integrations through the sensor response function with integrations over the equivalent square wave; and (3) calculations of integrated reflectance for snow over all reflective TM bands, and for water and ice clouds with thickness of 1 mm water equivalent over TM bands 5 and 7. These calculations look encouraging for snow/cloud discrimination with TM bands 5 and 7.
NLINEAR - NONLINEAR CURVE FITTING PROGRAM
NASA Technical Reports Server (NTRS)
Everhart, J. L.
1994-01-01
A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived, which is solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted. Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
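The linearized normal-equation iteration that the abstract describes corresponds to a Gauss-Newton step; a compact Python sketch is given below under an invented exponential-decay example (NLINEAR itself is Fortran 77 and includes statistical output this sketch omits).

import numpy as np

def gauss_newton(f, jac, p0, x, y, w=None, n_iter=20):
    # Minimize chi-square = sum(w * (y - f(x, p))**2) by repeatedly solving
    # the normal equations from the first-order Taylor expansion of the
    # residuals; meaningful initial estimates p0 are required to converge.
    p = np.asarray(p0, float)
    w = np.ones_like(y) if w is None else w
    for _ in range(n_iter):
        r = y - f(x, p)                      # residuals
        J = jac(x, p)                        # (N, n) Jacobian wrt parameters
        JW = J * w[:, None]
        dp = np.linalg.solve(JW.T @ J, JW.T @ r)   # n linear equations
        p = p + dp
        if np.linalg.norm(dp) < 1e-10:
            break
    return p

# Illustrative fit: y = a * exp(-b * x) with synthetic noisy data.
f = lambda x, p: p[0] * np.exp(-p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(-p[1] * x),
                                    -p[0] * x * np.exp(-p[1] * x)])
x = np.linspace(0, 4, 50)
rng = np.random.default_rng(2)
y = f(x, [2.0, 1.3]) + rng.normal(0, 0.02, x.size)
print(gauss_newton(f, jac, [1.0, 1.0], x, y))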
NASA Technical Reports Server (NTRS)
Montgomery, Edward E., IV; Heaton, Andrew F.; Garbe, Gregory P.
2003-01-01
Solar sails are a near-term, low-thrust, propellantless propulsion technology suitable for orbital maneuvering, station keeping, and attitude control applications for small payloads. Furthermore, these functions can be highly integrated, reducing mass, cost, and complexity. The solar sail concept is based on momentum exchange with solar flux reflected from a large, deployed thin membrane. Thrust performance falls off as the square of the distance to the sun. In comparison to conventional chemical systems, there are missions where solar sails are vastly more economical and others where they are far less so. The less attractive applications involve large payloads, outer solar system transfers, and short trip times. However, for inclination changes and station keeping at locations requiring constant thrust, the solar sail is the only economical option for missions of more than a few weeks' duration. We compare the locations and energies required for these applications between solar sails, advanced electric propulsion, and conventional rockets. We address the effect on mass fraction to understand solar sail mission cost and capability. Finally, the benefit of potential applications to near-term science missions is reported.
Promising Results from Three NASA SBIR Solar Array Technology Development Programs
NASA Technical Reports Server (NTRS)
Eskenazi, Mike; White, Steve; Spence, Brian; Douglas, Mark; Glick, Mike; Pavlick, Ariel; Murphy, David; O'Neill, Mark; McDanal, A. J.; Piszczor, Michael
2005-01-01
Results from three NASA SBIR solar array technology programs are presented. The programs discussed are: 1) Thin Film Photovoltaic UltraFlex Solar Array; 2) Low Cost/Mass Electrostatically Clean Solar Array (ESCA); and 3) Stretched Lens Array SquareRigger (SLASR). The purpose of the Thin Film UltraFlex (TFUF) Program is to mature and validate the use of advanced flexible thin film photovoltaic blankets as the electrical subsystem element within an UltraFlex solar array structural system. In this program operational prototype flexible array segments, using United Solar amorphous silicon cells, are being manufactured and tested for the flight-qualified UltraFlex structure. In addition, large (e.g. 10 kW GEO) TFUF wing systems are being designed and analyzed. Thermal cycle and electrical test and analysis results from the TFUF program are presented. The purpose of the second program, Low Cost/Mass Electrostatically Clean Solar Array (ESCA) System, is to develop an electrostatically clean solar array meeting NASA's design requirements and to ready this technology for commercialization and use on the NASA MMS and GED missions. The ESCA designs developed use flight-proven materials and processes to create an ESCA system that yields low cost, low mass, high reliability, and high power density, and is adaptable to any cell type and coverglass thickness. All program objectives, which included developing specifications, creating ESCA concepts, concept analysis and trade studies, producing detailed designs of the most promising ESCA treatments, manufacturing ESCA demonstration panels, and LEO (2,000 cycles) and GEO (1,350 cycles) thermal cycle testing of the down-selected designs, were successfully achieved. The purpose of the third program, "High Power Platform for the Stretched Lens Array," is to develop an extremely lightweight, high-efficiency, high-power, high-voltage, and low-stowed-volume solar array suitable for very high power (multi-kW to MW) applications. These objectives are achieved by combining two cutting-edge technologies, the SquareRigger solar array structure and the Stretched Lens Array (SLA). The SLA SquareRigger solar array is termed SLASR. All program objectives, which included developing specifications, creating preliminary designs for a near-term SLASR, detailed structural, mass, power, and sizing analyses, and fabrication and power testing of a functional flight-like SLASR solar blanket, were successfully achieved.
Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...
2016-08-04
This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
Thomas, Phillip S; Carrington, Tucker
2017-05-28
We propose a method for solving the vibrational Schrödinger equation with which one can compute spectra for molecules with more than ten atoms. It uses sum-of-product (SOP) basis functions stored in a canonical polyadic tensor format and generated by evaluating matrix-vector products. By doing a sequence of partial optimizations, in each of which the factors in a SOP basis function for a single coordinate are optimized, the rank of the basis functions is reduced as matrix-vector products are computed. This is better than using an alternating least squares method to reduce the rank, as is done in the reduced-rank block power method. Partial optimization is better because it speeds up the calculation by about an order of magnitude and allows one to significantly reduce the memory cost. We demonstrate the effectiveness of the new method by computing vibrational spectra of two molecules, ethylene oxide (C2H4O) and cyclopentadiene (C5H6), with 7 and 11 atoms, respectively.
NASA Astrophysics Data System (ADS)
Cotar, Codina; Friesecke, Gero; Klüppelberg, Claudia
2018-06-01
We prove rigorously that the exact N-electron Hohenberg-Kohn density functional converges in the strongly interacting limit to the strictly correlated electrons (SCE) functional, and that the absolute value squared of the associated constrained-search wavefunction tends weakly, in the sense of probability measures, to a minimizer of the multi-marginal optimal transport problem with Coulomb cost associated to the SCE functional. This extends our previous work for N = 2 (Cotar et al., Commun Pure Appl Math 66:548-599, 2013). The correct limit problem has been derived in the physics literature by Seidl (Phys Rev A 60:4387-4395, 1999) and Seidl, Gori-Giorgi and Savin (Phys Rev A 75:042511, 2007); in these papers the lack of a rigorous proof was pointed out. We also give a mathematical counterexample to this type of result, by replacing the constraint of given one-body density (an infinite-dimensional quadratic expression in the wavefunction) by an infinite-dimensional quadratic expression in the wavefunction and its gradient. Connections with the Lavrentiev phenomenon in the calculus of variations are indicated.
Bread Basket: a gaming model for estimating home-energy costs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An instructional manual for answering the twenty variables in COLORADO ENERGY's computerized program for estimating home-energy costs. The program generates home-energy cost estimates based on individual household data, such as total square footage, number of windows and doors, number and variety of appliances, and heating system design, and prints out detailed costs, showing the percentage of the total household budget that energy costs will amount to over a twenty-year span. Using the program, homeowners and policymakers alike can predict the effects of rising energy prices on total spending by Colorado households.
2015-08-18
techniques of measuring energy loss due to envelope inefficiencies from the built environment. A multi-sensor hardware device attached to the roof of a...at this installation, recommends specific energy conservation measures (ECMs), and quantifies significant potential return on investment. ERDC/CERL...to several thousand square feet, total building square feet was used as a metric to measure the cost effectiveness of handheld versus mobile
Go Green, Save Green with ENERGY STAR[R]
ERIC Educational Resources Information Center
Hatcher, Caterina
2010-01-01
Did you know that the nation's 17,450 K-12 school districts spend more on energy than on computers and textbooks combined? Energy costs represent a typical school district's second largest operating expense after salaries. Schools that have earned the ENERGY STAR--EPA's mark of superior energy performance--cost 40 cents per square foot less to…
ERIC Educational Resources Information Center
Collins, Michael T.
2011-01-01
The purpose of this study is to develop a costing model for maintenance and operations expenditures among 16 single-campus California community college districts and assess the impact of a variety of variables including size of student enrollment, physical plant age, acreage, gross square footage, and general obligation facility bonds on district…
Opondo, Everisto; Wanzala, Peter; Makokha, Ansellimo
2013-01-01
A prospective quasi-experimental study was undertaken at the Thika Level 5 Hospital. The study aimed to compare the costs of managing femoral shaft fracture by surgery with those of skeletal traction. Sixty-nine (46.6%) patients were enrolled in group A and managed surgically by intramedullary nailing, while 79 (53.4%) patients were enrolled in group B and managed by skeletal traction. Exclusion criteria included patients with pathological fractures and previous femoral fractures. Data were collected by evaluating patients' inpatient bills using a standardized questionnaire. The questionnaire included the cost of haematological and radiological tests, bed fees, theatre fees, and physiotherapy costs. The data were compiled and analyzed using SPSS version 16. Pearson's chi-square and odds ratios were used to measure associations and analyze risk, respectively. A higher proportion of patients (88.4%) in group A were hospitalized for less than one month compared to 20 patients (30.4%) in group B (p < 0.001). The total cost of treatment in group A was significantly lower than in group B. Nineteen (27.9%) patients who underwent surgery paid a total bill of Ksh 5000-7500, compared to 7 (10.4%) who were treated by traction. The financial cost benefit of surgery was further complemented by better functional outcomes. The data indicate a cost advantage of managing femoral shaft fracture by surgery compared to traction. Furthermore, the longer hospital stay in the traction group is associated with more malunion, limb deformity, and shortening.
Printed wide-slot antenna design with bandwidth and gain enhancement on low-cost substrate.
Samsuzzaman, M; Islam, M T; Mandeep, J S; Misran, N
2014-01-01
This paper presents a printed wide-slot antenna design and prototyping on available low-cost polymer resin composite material fed by a microstrip line with a rotated square slot for bandwidth enhancement and defected ground structure for gain enhancement. An I-shaped microstrip line is used to excite the square slot. The rotated square slot is embedded in the middle of the ground plane, and its diagonal points are implanted in the middle of the strip line and ground plane. To increase the gain, four L-shaped slots are etched in the ground plane. The measured results show that the proposed structure retains a wide impedance bandwidth of 88.07%, which is 20% better than the reference antenna. The average gain is also increased, which is about 4.17 dBi with a stable radiation pattern in the entire operating band. Moreover, radiation efficiency, input impedance, current distribution, axial ratio, and parametric studies of S11 for different design parameters are also investigated using the finite element method-based simulation software HFSS.
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
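A minimal Python sketch of the Filtered-X LMS idea follows; for simplicity the simulated error uses the secondary-path estimate as the true path, and the tap count, step size, and tonal disturbance are illustrative assumptions rather than values from the paper.

import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=0.01):
    # Filtered-X LMS: adapt control filter w so the secondary-path output
    # cancels the disturbance d at the error sensor. s_hat is the estimated
    # actuator-to-sensor impulse response; x is the reference signal.
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)        # recent reference samples
    xf_buf = np.zeros(n_taps)       # recent filtered-reference samples
    y_buf = np.zeros(len(s_hat))    # recent actuator outputs
    xs_buf = np.zeros(len(s_hat))   # reference samples filtered by s_hat
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                               # actuator command
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] + s_hat @ y_buf                 # residual at the sensor
        xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[n]
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = s_hat @ xs_buf
        w -= mu * e[n] * xf_buf                     # LMS weight update
    return w, e

# Tonal disturbance example: the residual e should decay toward zero.
t = np.arange(5000)
x = np.sin(0.1 * t)                 # reference correlated with disturbance
d = 0.8 * np.sin(0.1 * t + 0.5)     # disturbance at the error sensor
s_hat = np.array([0.0, 0.9, 0.2])   # assumed secondary-path model
w, e = fxlms(x, d, s_hat)
print("residual RMS, last 500 samples:", np.sqrt(np.mean(e[-500:] ** 2)))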
A Simple Introduction to Moving Least Squares and Local Regression Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao Veerabhadra
In this brief note, a highly simplified introduction to estimating functions over a set of particles is presented. The note starts from Global Least Squares fitting, going on to Moving Least Squares estimation (MLS) and, finally, Local Regression Estimation (LRE).
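To complement the note, here is a one-dimensional moving least squares sketch in Python: at each query point a local polynomial is fitted by weighted least squares with a distance-decaying weight. The Gaussian weight, bandwidth, and sample function are illustrative choices, not prescriptions from the note.

import numpy as np

def mls_estimate(x_query, x_pts, f_pts, h=0.3, degree=1):
    # Moving least squares: local weighted polynomial fit at each query point.
    est = np.empty(len(x_query))
    for i, xq in enumerate(x_query):
        w = np.exp(-((x_pts - xq) / h) ** 2)       # Gaussian weight function
        V = np.vander(x_pts - xq, degree + 1)      # local polynomial basis
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(V * sw[:, None], sw * f_pts, rcond=None)
        est[i] = coef[-1]                          # constant term = value at xq
    return est

# Scattered "particles" sampling a smooth function.
rng = np.random.default_rng(1)
x_pts = np.sort(rng.uniform(0, 2 * np.pi, 60))
f_pts = np.sin(x_pts) + rng.normal(0, 0.05, 60)
x_query = np.linspace(0, 2 * np.pi, 9)
print(mls_estimate(x_query, x_pts, f_pts))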
Low-cost evacuated-tube solar collector
NASA Astrophysics Data System (ADS)
1981-02-01
A prototype design for an evacuated-tube air-cooled solar collector module was completed. A product cost study, based on the production of 60,000 of the prototype modules per year (approximately 1,000,000 square feet annually), estimates that the module as shipped would have a cost at inventory of $7.09 to $7.40 per square foot of aperture. Computer programs were developed to predict the optical and thermal performance of the module. Antireflective coatings (porous aluminum oxide) formed by spraying or dipping were demonstrated, but they degraded more rapidly when exposed to a high-humidity ambient than acid-etched films. A selective black chromium oxide multi-layered graded film was vapor deposited which had an absorptivity of about 0.9 and an emissivity of 0.03. When the film was heated to temperatures of 400 C in a gettered vacuum for as little as 24 hours, however, irreversible changes took place both between and within coating layers, which resulted in alpha decreasing to about 0.73 and epsilon increasing to 0.14.
The GLC8 - A miniature low cost ring laser gyroscope
NASA Astrophysics Data System (ADS)
Godart, D.-F.; Peghaire, J.-P.
SAGEM is enlarging its family of ring laser gyros (RLG), which already includes a triangular 32-cm path-length gyro and a square 16-cm path-length gyro, in order to meet the increasing demand for low-cost, medium-accuracy strap-down inertial measurement units for applications such as short- and medium-range tactical missiles as well as aided navigation systems for aircraft and land vehicles. Based on the experience acquired over the past 13 years in the RLG field, and especially in mirror manufacturing, SAGEM developed the GLC8, which has a square 8-cm path-length cavity and a central piezoelectric dither. It incorporates two cathodes and a single anode, and is technologically designed to minimize production costs while optimizing the ratio of performance to overall device size. This gyro is characterized by bias and scale-factor stabilities better than 0.5 deg/h and 100 ppm (1 sigma), respectively, and has an operating lifetime compatible with the most demanding relevant applications and a high robustness to mechanical environments.
NASA Astrophysics Data System (ADS)
Liu, L. H.; Tan, J. Y.
2007-02-01
A least-squares collocation meshless method is employed for solving radiative heat transfer in absorbing, emitting and scattering media. The least-squares collocation meshless method for radiative transfer is based on the discrete ordinates equation. A moving least-squares approximation is applied to construct the trial functions. In addition to the collocation points used to construct the trial functions, a number of auxiliary points are adopted to form the total residuals of the problem. The least-squares technique is used to obtain the solution of the problem by minimizing the sum of the residuals over all collocation and auxiliary points. Three numerical examples are studied to illustrate the performance of this new solution method. The numerical results are compared with other benchmark approximate solutions. By comparison, the results show that the least-squares collocation meshless method is efficient, accurate and stable, and can be used for solving radiative heat transfer in absorbing, emitting and scattering media.
Generalized adjustment by least squares ( GALS).
Elassal, A.A.
1983-01-01
The least-squares principle is universally accepted as the basis for adjustment procedures in the allied fields of geodesy, photogrammetry and surveying. A prototype software package for Generalized Adjustment by Least Squares (GALS) is described. The package is designed to perform all least-squares-related functions in a typical adjustment program. GALS is capable of supporting development of adjustment programs of any size or degree of complexity. -Author
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived incorporating SNR uncertainty. In particular, the soft masking method which weights the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
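The two gain functions singled out by the abstract (the binary MAP/IdBM gain and the Wiener-type soft mask) are simple enough to state in a few lines of Python; the per-bin SNR estimates xi and the spectrum values below are assumed for illustration.

import numpy as np

def binary_mask_gain(xi):
    # MAP/IdBM gain: keep a bin only if its local SNR exceeds 0 dB (xi > 1).
    return (xi > 1.0).astype(float)

def wiener_gain(xi):
    # Soft mask weighting by P(local SNR > 0 dB) in the paper's model,
    # shown there to coincide with the Wiener gain xi / (1 + xi).
    return xi / (1.0 + xi)

# Applying a gain to a noisy magnitude-squared spectrum (illustrative values).
noisy_power = np.array([4.0, 0.5, 9.0, 1.1])
xi = np.array([3.0, 0.2, 8.0, 0.9])          # assumed per-bin SNR estimates
print(binary_mask_gain(xi) * noisy_power)
print(wiener_gain(xi) * noisy_power)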
A Demonstration System for Capturing Geothermal Energy from Mine Waters beneath Butte, Montana
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blackketter, Donald
2015-06-01
Executive Summary: An innovative 50-ton ground-source heat pump (GSHP) system was installed to provide space heating and cooling for a 56,000 square foot (5,200 square meter) building in Butte, Montana, in conjunction with its heating and chiller systems. Butte is a location with winter conditions much colder than the national average. The GSHP uses flooded mine waters at 78 F (25 C) as the heat source and heat sink. The heat transfer performance and efficiency of the system were analyzed using data from January through July 2014. This analysis indicated that for typical winter conditions in Butte, Montana, the GSHP could deliver about 88% of the building's annual heating needs. Compared with a baseline natural-gas/electric system, the system demonstrated at least 69% site energy savings, 38% source energy savings, 39% carbon dioxide emissions reduction, and a savings of $17,000 per year (40%) in utility costs. Assuming a $10,000 per ton cost for installing a production system, the payback period at natural gas costs of $9.63/MMBtu and electricity costs of $0.08/kWh would be in the range of 40 to 50 years. At higher utility prices, or lower installation costs, the payback period would obviously be reduced.
Cost, Energy, and Environmental Impact of Automated Electric Taxi Fleets in Manhattan.
Bauer, Gordon S; Greenblatt, Jeffery B; Gerke, Brian F
2018-04-17
Shared automated electric vehicles (SAEVs) hold great promise for improving transportation access in urban centers while drastically reducing transportation-related energy consumption and air pollution. Using taxi-trip data from New York City, we develop an agent-based model to predict the battery range and charging infrastructure requirements of a fleet of SAEVs operating on Manhattan Island. We also develop a model to estimate the cost and environmental impact of providing service and perform extensive sensitivity analysis to test the robustness of our predictions. We estimate that costs will be lowest with a battery range of 50-90 mi, with either 66 chargers per square mile rated at 11 kW, or 44 chargers per square mile rated at 22 kW. We estimate that the cost of service provided by such an SAEV fleet will be $0.29-$0.61 per revenue mile, an order of magnitude lower than the cost of service of present-day Manhattan taxis and $0.05-$0.08/mi lower than that of an automated fleet composed of any currently available hybrid or internal combustion engine vehicle (ICEV). We estimate that such an SAEV fleet drawing power from the current NYC power grid would reduce GHG emissions by 73% and energy consumption by 58% compared to an automated fleet of ICEVs.
Simultaneous prediction of muscle and contact forces in the knee during gait.
Lin, Yi-Chung; Walter, Jonathan P; Banks, Scott A; Pandy, Marcus G; Fregly, Benjamin J
2010-03-22
Musculoskeletal models are currently the primary means for estimating in vivo muscle and contact forces in the knee during gait. These models typically couple a dynamic skeletal model with individual muscle models but rarely include articular contact models due to their high computational cost. This study evaluates a novel method for predicting muscle and contact forces simultaneously in the knee during gait. The method utilizes a 12 degree-of-freedom knee model (femur, tibia, and patella) combining muscle, articular contact, and dynamic skeletal models. Eight static optimization problems were formulated using two cost functions (one based on muscle activations and one based on contact forces) and four constraint sets (each composed of different combinations of inverse dynamic loads). The estimated muscle and contact forces were evaluated using in vivo tibial contact force data collected from a patient with a force-measuring knee implant. When the eight optimization problems were solved with added constraints to match the in vivo contact force measurements, root-mean-square errors in predicted contact forces were less than 10 N. Furthermore, muscle and patellar contact forces predicted by the two cost functions became more similar as more inverse dynamic loads were used as constraints. When the contact force constraints were removed, estimated medial contact forces were similar and lateral contact forces lower in magnitude compared to measured contact forces, with estimated muscle forces being sensitive and estimated patellar contact forces relatively insensitive to the choice of cost function and constraint set. These results suggest that optimization problem formulation coupled with knee model complexity can significantly affect predicted muscle and contact forces in the knee during gait. Further research using a complete lower limb model is needed to assess the importance of this finding to the muscle and contact force estimation process.
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction were optimised for effective, good-quality motion correction and then compared with each other. The first (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the correction is done in projection space, whereas the second (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupted projection datasets, generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets and both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and a mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
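Because the mutual information cost proved best here, a generic histogram-based mutual information sketch may be useful; this is a textbook estimator in Python, not the authors' implementation, and the bin count and test images are arbitrary choices.

import numpy as np

def mutual_information(a, b, bins=32):
    # Histogram estimate of MI (in nats) between two images, e.g. a measured
    # projection and its reprojection; higher MI means better alignment.
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = h / h.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

# A motion search would evaluate MI over candidate shifts and keep the best.
rng = np.random.default_rng(2)
measured = rng.random((64, 64))
reprojected = measured + rng.normal(0, 0.1, (64, 64))
print(mutual_information(measured, reprojected))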
NASA Technical Reports Server (NTRS)
Cai, Zhiqiang; Manteuffel, Thomas A.; McCormick, Stephen F.
1996-01-01
In this paper, we study the least-squares method for the generalized Stokes equations (including linear elasticity) based on the velocity-vorticity-pressure formulation in d = 2 or 3 dimensions. The least-squares functional is defined in terms of the sum of the L^2- and H^(-1)-norms of the residual equations, weighted appropriately by the Reynolds number. Our approach for establishing ellipticity of the functional does not use ADN theory, but is founded more on basic principles. We also analyze the case where the H^(-1)-norm in the functional is replaced by a discrete functional to make the computation feasible. We show that the resulting algebraic equations can be uniformly preconditioned by well-known techniques.
Aircraft Airframe Cost Estimation Using a Random Coefficients Model
1979-12-01
approach will also be used here. 2 Model Formulation. Several different types of equations could be used for the basic form of the CER, such as linear ...5) Marcotte developed several CERs for fighter aircraft airframes using the log-linear model. A plot of the residuals from the CER for recurring...of the natural logarithm. Ordinary Least Squares: The ordinary least squares procedure starts with the equation for the general linear model.
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
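The quantization rule described above fits in a short Python sketch: a new sample within eps_q of an existing center updates that center's coefficient; otherwise it is added as a new center. Kernel width, step size, and quantization threshold below are illustrative choices, not values from the paper.

import numpy as np

def qklms(X, y, eta=0.5, sigma=1.0, eps_q=0.5):
    # Quantized kernel LMS with a Gaussian kernel and online vector
    # quantization of the input space.
    centers, alphas = [], []
    pred = np.zeros(len(y))
    for n, (x_n, y_n) in enumerate(zip(X, y)):
        if centers:
            C = np.asarray(centers)
            dists = np.linalg.norm(C - x_n, axis=1)
            k = np.exp(-dists ** 2 / (2 * sigma ** 2))
            pred[n] = np.dot(alphas, k)
            j = int(np.argmin(dists))
        else:
            pred[n], j, dists = 0.0, -1, np.array([np.inf])
        err = y_n - pred[n]
        if dists[j] <= eps_q:
            alphas[j] += eta * err        # merge into the closest center
        else:
            centers.append(np.array(x_n, float))
            alphas.append(eta * err)      # allocate a new center
    return centers, alphas, pred

# Static function estimation example: y = sin(3x) on scattered inputs.
rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(1000, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.05, 1000)
centers, alphas, pred = qklms(X, y)
print("network size:", len(centers))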
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Mohammadi, Vahid
2017-08-01
In this research, we investigate the numerical solution of nonlinear Schrödinger equations in two and three dimensions. The numerical meshless method used here is the RBF-FD technique. The main advantage of this method is the approximation of the required derivatives based on the finite difference technique at each local support domain Ωi. At each Ωi, we need to solve a small linear system of algebraic equations with a conditionally positive definite matrix of order 1 (the interpolation matrix). This scheme is efficient, and its computational cost is the same as that of the moving least squares (MLS) approximation. A challenging issue is choosing a suitable shape parameter for the interpolation matrix. To overcome this matter, an algorithm established by Sarra (2012) is applied. This algorithm computes the condition number of the local interpolation matrix using the singular value decomposition (SVD) to obtain the smallest and largest singular values of that matrix. Moreover, an explicit method based on a Runge-Kutta formula of fourth-order accuracy is applied for approximating the time variable. It also decreases the computational cost at each time step, since no nonlinear system must be solved. On the other hand, to compare the RBF-FD method with another meshless technique, the moving kriging least squares (MKLS) approximation is considered for the studied model. Our results demonstrate the ability of the present approach for solving the applicable model investigated in the current research work.
High-resolution inkjet printing of all-polymer transistor circuits.
Sirringhaus, H; Kawase, T; Friend, R H; Shimoda, T; Inbasekaran, M; Wu, W; Woo, E P
2000-12-15
Direct printing of functional electronic materials may provide a new route to low-cost fabrication of integrated circuits. However, to be useful it must allow continuous manufacturing of all circuit components by successive solution deposition and printing steps in the same environment. We demonstrate direct inkjet printing of complete transistor circuits, including via-hole interconnections, based on solution-processed polymer conductors, insulators, and self-organizing semiconductors. We show that the use of substrate surface energy patterning to direct the flow of water-based conducting polymer inkjet droplets enables high-resolution definition of practical channel lengths of 5 micrometers. High mobilities of 0.02 square centimeters per volt second and on-off current switching ratios of 10^5 were achieved.
The Quételet index revisited in children and adults.
Chiquete, Erwin; Ruiz-Sandoval, José L; Ochoa-Guzmán, Ana; Sánchez-Orozco, Laura V; Lara-Zaragoza, Erika B; Basaldúa, Nancy; Ruiz-Madrigal, Bertha; Martínez-López, Erika; Román, Sonia; Godínez-Gutiérrez, Sergio A; Panduro, Arturo
2014-02-01
The body mass index (BMI) is based on the original concept that body weight increases as a function of height squared. As an indicator of obesity, the modern BMI assumption postulates that adiposity also increases as a function of height in states of positive energy balance. Our aim was to evaluate the BMI concept across different adiposity magnitudes, in both children and adults. We studied 975 individuals who underwent anthropometric evaluation: 474 children and 501 adults. Tetrapolar bioimpedance analysis was used to assess body fat and lean mass. BMI significantly correlated with percentage of body fat (%BF; children: r=0.893; adults: r=0.878) and with total fat mass (children: r=0.967; adults: r=0.953). In children, body weight, fat mass, %BF and waist circumference progressively increased as a function of height squared. In adults, body weight increased as a function of height squared, but %BF actually decreased with increasing height both in men (r=-0.406; p<0.001) and women (r=-0.413; p<0.001). Most of the BMI variance in adults was explained by a positive correlation of total lean mass with height squared (r^2=0.709), and by a negative correlation of BMI with total fat mass (r=-0.193). Body weight increases as a function of height squared. However, adiposity progressively increases as a function of height only in children. BMI is not an ideal indicator of obesity in adults, since it is significantly influenced by lean mass, even in obese individuals.
Report on Utilities Usage and Cost, 1980-81 to 1984-85.
ERIC Educational Resources Information Center
Alabama State Commission on Higher Education, Montgomery.
The consumption and cost of energy and other types of utilities by state college campuses were analyzed by the Alabama Commission on Higher Education. A focus of attention has been changes in energy usage per square foot from year to year as an indicator of the institutions' energy conservation and, over time, of the changing characteristics of…
NASA Astrophysics Data System (ADS)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-11-01
In industrial settings, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emission, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and mitigate the ill-posed nature of the AT inverse problem. Taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to minimize the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, an AdaBoost.RT-based BP neural network algorithm is developed for predicting the temperature distribution on the refined grid from the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving robustness and RA.
Integrated polymer polarization rotator based on tilted laser ablation
NASA Astrophysics Data System (ADS)
Poulopoulos, Giannis; Kalavrouziotis, Dimitrios; Missinne, Jeroen; Bosman, Erwin; Van Steenberge, Geert; Apostolopoulos, Dimitrios; Avramopoulos, Hercules
2017-02-01
The ubiquitous need for compact, low-cost, mass-producible photonic devices for next-generation photonics-enabled applications necessitates the development of integrated components exhibiting functionalities that are, to date, carried out by free-space elements or standard fiber equipment. The polarization rotator is a typical example of this tendency, as it is a crucial part of the PBS operation of future transceiver modules that leverage polarization multiplexing schemes for increasing optical network capacity. Up to now, a variety of integrated polarization-rotating concepts have been proposed and reported, relying mainly on special waveguide cross-section configurations for achieving the rotation. Nevertheless, most of those concepts employ SiPh or III-V integration platforms, significantly increasing the fabrication complexity required for customizing the waveguide cross-section, which in turn leads either to prohibitively increased cost or to compromised quality and performance. In this manuscript we present the extensive design of a low-cost integrated polymer polarization rotator employing a right-trapezoidal waveguide interfaced to standard square polymer waveguides. First, the cross-section of the waveguide is defined by calculating and analyzing the components of the hybrid modes excited in the waveguide structure, using a finite difference mode solver. Mode overlaps between the fundamental polymer mode and each hybrid mode reveal the optimum lateral offset between the square polymer waveguide and the trapezoidal waveguide that ensures both minimum interface loss and maximized polarization rotation performance. The required trapezoidal waveguide length is obtained through EigenMode Expansion (EME) propagation simulations, while more than 95% maximum theoretical conversion efficiency is reported over the entire C-band, resulting in more than 13 dB polarization extinction ratio. The polarization rotator design relies on the development of angled polymer waveguide sidewalls, employing the tilted laser ablation technology currently available at CMST. The aforementioned simulation steps therefore adhere fully to the respective design rules, taking into account the anticipated fabrication variations.
Stein, Koen W H; Werner, Jan
2013-01-01
Osteocytes harbour much potential for paleobiological studies. Synchrotron radiation and spectroscopic analyses are providing fascinating data on osteocyte density, size and orientation in fossil taxa. However, such studies may be costly and time consuming. Here we describe an uncomplicated and inexpensive method to measure osteocyte lacunar densities in bone thin sections. We report on cell lacunar densities in the long bones of various extant and extinct tetrapods, with a focus on sauropodomorph dinosaurs, and how lacunar densities can help us understand bone formation rates in the iconic sauropod dinosaurs. Ordinary least square and phylogenetic generalized least square regressions suggest that sauropodomorphs have lacunar densities higher than scaled up or comparably sized mammals. We also found normal mammalian-like osteocyte densities for the extinct bovid Myotragus, questioning its crocodilian-like physiology. When accounting for body mass effects and phylogeny, growth rates are a main factor determining the density of the lacunocanalicular network. However, functional aspects most likely play an important role as well. Observed differences in cell strategies between mammals and dinosaurs likely illustrate the convergent nature of fast growing bone tissues in these groups.
Stein, Koen W. H.; Werner, Jan
2013-01-01
Osteocytes harbour much potential for paleobiological studies. Synchrotron radiation and spectroscopic analyses are providing fascinating data on osteocyte density, size and orientation in fossil taxa. However, such studies may be costly and time consuming. Here we describe an uncomplicated and inexpensive method to measure osteocyte lacunar densities in bone thin sections. We report on cell lacunar densities in the long bones of various extant and extinct tetrapods, with a focus on sauropodomorph dinosaurs, and how lacunar densities can help us understand bone formation rates in the iconic sauropod dinosaurs. Ordinary least square and phylogenetic generalized least square regressions suggest that sauropodomorphs have lacunar densities higher than scaled up or comparably sized mammals. We also found normal mammalian-like osteocyte densities for the extinct bovid Myotragus, questioning its crocodilian-like physiology. When accounting for body mass effects and phylogeny, growth rates are a main factor determining the density of the lacunocanalicular network. However, functional aspects most likely play an important role as well. Observed differences in cell strategies between mammals and dinosaurs likely illustrate the convergent nature of fast growing bone tissues in these groups. PMID:24204748
NASA Astrophysics Data System (ADS)
El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.
2007-11-01
We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging times. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely MotionPakII and Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA and the sensors' stochastic model parameters are temperature dependent. Also, the Kaiser window FIR low-pass filter is used to investigate the effect of de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
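To make the Allan variance computation concrete, here is a minimal numpy sketch; the synthetic signal, sampling rate and cluster sizes are invented for illustration and do not come from the study above:

    import numpy as np

    def allan_variance(x, m):
        """Non-overlapping Allan variance for cluster size m (averaging time tau = m*dt)."""
        n_clusters = len(x) // m
        means = x[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        # half the mean squared difference of successive cluster averages
        return 0.5 * np.mean(np.diff(means) ** 2)

    rng = np.random.default_rng(0)
    dt = 0.01                                                        # 100 Hz static record
    x = rng.normal(0.0, 0.1, 200_000) + 1e-6 * np.arange(200_000)    # white noise + slow drift
    for m in (10, 100, 1000):
        print(f"tau = {m * dt:7.2f} s   Allan deviation = {np.sqrt(allan_variance(x, m)):.3e}")

Plotting the Allan deviation against tau on log-log axes then reveals the familiar noise regimes (white noise falling, drift rising at long averaging times).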
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on the average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
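The wavefront quantities discussed here are easy to compute directly. Below is an illustrative Python sketch (using scipy's built-in Reverse Cuthill-McKee, not the Sloan or hybrid algorithms of the paper) that measures the mean square wavefront of a random sparse symmetric matrix before and after reordering:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    def mean_square_wavefront(A):
        """Mean square wavefront of a sparse symmetric matrix (0-based indexing)."""
        A = sp.csr_matrix(A)
        n = A.shape[0]
        wf = np.zeros(n)
        for k in range(n):
            cols = A.indices[A.indptr[k]:A.indptr[k + 1]]
            first = cols.min() if cols.size else k
            # row k is active at every elimination step i with first <= i <= k
            wf[min(first, k):k + 1] += 1
        return float(np.mean(wf ** 2))

    # random sparse symmetric test problem
    B = sp.random(500, 500, density=0.01, random_state=1)
    A = (B + B.T + sp.identity(500)).tocsr()
    perm = reverse_cuthill_mckee(A, symmetric_mode=True)
    Ap = A[perm, :][:, perm]
    print("mean square wavefront before:", mean_square_wavefront(A))
    print("mean square wavefront after RCM:", mean_square_wavefront(Ap))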
Sampling functions for geophysics
NASA Technical Reports Server (NTRS)
Giacaglia, G. E. O.; Lunquist, C. A.
1972-01-01
A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1)^2 sampling points on a sphere is given, for which the (N + 1)^2 spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
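As an illustration of the span claim, the following sketch (with an invented degree and random sampling points; scipy.special.sph_harm supplies the harmonics) checks that the (N + 1)^2 spherical harmonics through degree N, evaluated at (N + 1)^2 generic points, form a full-rank and hence invertible transformation:

    import numpy as np
    from scipy.special import sph_harm

    N = 3
    K = (N + 1) ** 2                        # number of harmonics through degree N
    rng = np.random.default_rng(15)
    az = rng.uniform(0, 2 * np.pi, K)       # K random sampling points on the sphere
    pol = np.arccos(rng.uniform(-1, 1, K))

    # matrix of spherical harmonics evaluated at the sampling points
    A = np.column_stack([sph_harm(m, n, az, pol)
                         for n in range(N + 1) for m in range(-n, n + 1)])
    print("rank:", np.linalg.matrix_rank(A), "of", K)   # full rank -> same linear manifold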
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
In the framework of the IAG African Geoid Project, there are a lot of large data gaps in its gravity database. These gaps are filled initially using unequal weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature parameter of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps as appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research along with their obtained accuracy is given and thoroughly discussed.
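A minimal numpy sketch of gap filling by least-squares prediction with a generalized Hirvonen covariance follows; the covariance parameters, noise level and one-dimensional geometry are invented for illustration and are not the African gravity database values:

    import numpy as np

    def hirvonen(r, c0=1.0, d=50.0, p=1.0):
        """Generalized Hirvonen covariance; p plays the role of the curvature parameter."""
        return c0 / (1.0 + (r / d) ** 2) ** p

    rng = np.random.default_rng(2)
    # observed anomalies on a 1-D profile (km), with a data gap between 120 and 180
    x_obs = np.concatenate([rng.uniform(0, 120, 40), rng.uniform(180, 300, 40)])
    g_obs = np.sin(x_obs / 30.0) + 0.05 * rng.normal(size=x_obs.size)

    x_new = np.linspace(0, 300, 61)                 # prediction grid spanning the gap
    Css = hirvonen(np.abs(x_obs[:, None] - x_obs[None, :]))
    Cps = hirvonen(np.abs(x_new[:, None] - x_obs[None, :]))
    noise = 0.05 ** 2 * np.eye(x_obs.size)          # unequal weights would enter here
    g_pred = Cps @ np.linalg.solve(Css + noise, g_obs)
    print(np.round(g_pred[20:41:5], 3))             # predictions inside and around the gap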
Low-Cost Linear Optical Sensors.
ERIC Educational Resources Information Center
Kinsey, Kenneth F.; Meisel, David D.
1994-01-01
Discusses the properties and application of three light-to-voltage optical sensors. The sensors have been used for sensing diffraction patterns, the inverse-square law, and as a fringe counter with an interferometer. (MVL)
Courtyard Provides Space for New College Bookstore.
ERIC Educational Resources Information Center
Ferreri, Joseph P.; McAninch, Harold D.
1983-01-01
An open-air courtyard converted into a three-level bookstore retains its attractiveness with skylighted malls on two sides. Despite construction obstacles, the cost was a reasonable $55 per square foot. (MLF)
Xanthium strumarium L. seed hull as a zero cost alternative for Rhodamine B dye removal.
Khamparia, Shraddha; Jaspal, Dipika Kaur
2017-07-15
Treatment of polluted water has been considered one of the most important aspects of environmental science. The present study explores the decolorization potential of a low-cost natural adsorbent, Xanthium strumarium L. seed hull, for the adsorption of a toxic xanthene dye, Rhodamine B (RHB). Characterization of the adsorbent by Energy Dispersive Spectroscopy (EDS) revealed a high carbon content. Appreciable decolorization took place, confirmed by shifts in peaks in the Fourier Transform Infrared Spectroscopy (FTIR) analysis. Isothermal studies indicated multilayer adsorption following the Freundlich isotherm. The rate of adsorption followed second-order kinetics, indicating a chemical phenomenon during the process, with film diffusion dominating as the rate-governing step. Moreover, the paper aims at correlating the chemical arena with the mathematical aspect, providing in-depth information on the studied treatment process. For proper assessment and validation, the experimental data were statistically treated by applying different error functions, namely the Chi-square test (χ2), sum of absolute errors (EABS) and normalized standard deviation (NSD). The practical applicability of the low-cost adsorbent was further evaluated by continuous column mode studies, with 72.2% dye recovery. Xanthium strumarium L. proved to be an environment-friendly, low-cost natural adsorbent for decolorizing RHB from aquatic systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
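For reference, the three error functions named above are often written as follows in the adsorption literature; this is a hedged sketch with invented uptake values, and the exact conventions may differ from the paper:

    import numpy as np

    # observed equilibrium uptake (mg/g) and values predicted by a fitted isotherm
    q_exp = np.array([12.1, 18.4, 23.0, 26.2, 28.1])
    q_mod = np.array([11.6, 18.9, 23.5, 25.8, 28.6])

    chi_square = np.sum((q_exp - q_mod) ** 2 / q_mod)     # Chi-square test statistic
    eabs = np.sum(np.abs(q_exp - q_mod))                  # sum of absolute errors
    n = q_exp.size
    nsd = 100 * np.sqrt(np.sum(((q_exp - q_mod) / q_exp) ** 2) / (n - 1))

    print(f"chi2 = {chi_square:.4f}   EABS = {eabs:.2f}   NSD = {nsd:.2f}%")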
Note: An inexpensive square waveform ion funnel driver
NASA Astrophysics Data System (ADS)
Hoffman, Nathan M.; Opačić, Bojana; Reilly, Peter T. A.
2017-01-01
An inexpensive frequency variable square waveform generator (WFG) was developed to use with existing sinusoidal waveform driven ion funnels. The developed WFG was constructed using readily available low voltage DC power supplies and discrete components placed in printed circuit boards. As applied to ion funnels, this WFG represents considerable cost savings over commercially available products without sacrificing performance. Operation of the constructed pulse generator has been demonstrated for a 1 nF ion funnel at an operating frequency of 1 MHz while switching 48 Vp-p.
Corruption costs lives: evidence from a cross-country study.
Li, Qiang; An, Lian; Xu, Jing; Baliamoune-Lutz, Mina
2018-01-01
This paper investigates the effect of corruption on health outcomes by using cross-country panel data covering about 150 countries for the period of 1995 to 2012. We employ ordinary least squares (OLS), fixed-effects and two-stage least squares (2SLS) estimation methods, and find that corruption significantly increases mortality rates, and reduces life expectancy and immunization rates. The results are consistent across different regions, gender, and measures of corruption. The findings suggest that reducing corruption can be an effective method to improve health outcomes.
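As a sketch of why 2SLS can correct the bias that OLS suffers when the regressor is endogenous, here is a minimal simulated example; all coefficients, the instrument and the confounder are invented:

    import numpy as np

    rng = np.random.default_rng(13)
    n = 2000
    z = rng.normal(size=n)                        # instrument
    u = rng.normal(size=n)                        # confounder driving endogeneity
    corruption = 0.8 * z + u + rng.normal(size=n)
    mortality = 2.0 * corruption - 1.5 * u + rng.normal(size=n)   # true effect: 2.0

    X = np.column_stack([np.ones(n), corruption])
    print("OLS (biased):", np.linalg.lstsq(X, mortality, rcond=None)[0][1])

    # stage 1: project the endogenous regressor on the instrument
    Z = np.column_stack([np.ones(n), z])
    corr_hat = Z @ np.linalg.lstsq(Z, corruption, rcond=None)[0]
    # stage 2: regress the outcome on the fitted values
    X2 = np.column_stack([np.ones(n), corr_hat])
    print("2SLS:", np.linalg.lstsq(X2, mortality, rcond=None)[0][1])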
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators and 1,258 representatives demonstrate the procedure's…
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
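One common least-squares form of the criterion (an assumption here, not necessarily the exact variant used in the paper) combines the residual sum of squares with the parameter count:

    import numpy as np

    def aic(rss, n, k):
        """AIC for a least squares fit: n observations, k estimated parameters (common form)."""
        return n * np.log(rss / n) + 2 * k

    print(aic(rss=12.5, n=100, k=6))   # lower is better when comparing dimensionalities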
ERIC Educational Resources Information Center
Helmreich, James E.; Krog, K. Peter
2018-01-01
We present a short, inquiry-based learning course on concepts and methods underlying ordinary least squares (OLS), least absolute deviation (LAD), and quantile regression (QR). Students investigate squared, absolute, and weighted absolute distance functions (metrics) as location measures. Using differential calculus and properties of convex…
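The central fact the course builds on is easy to verify numerically: the squared-distance loss is minimized by the mean, the absolute-distance loss by the median. A small grid-search sketch with invented data:

    import numpy as np

    data = np.array([1.0, 2.0, 2.5, 3.0, 50.0])       # one gross outlier
    sse = lambda c: np.sum((data - c) ** 2)           # squared distance -> the mean
    sad = lambda c: np.sum(np.abs(data - c))          # absolute distance -> the median

    grid = np.linspace(0, 60, 60001)
    print("argmin squared loss:", grid[np.argmin([sse(c) for c in grid])], "mean:", data.mean())
    print("argmin absolute loss:", grid[np.argmin([sad(c) for c in grid])], "median:", np.median(data))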
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to the model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion, so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speeds of the Newton and super-Halley schemes are either similar or slightly superior to that of the GN scheme close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but it may, with further improvement of geophysical data acquisition, become an argument for more accurate higher-order methods like those applied in this paper.
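The difference between the schemes can be seen on a toy nonlinear least-squares problem: Gauss-Newton keeps only J^T J, while Newton adds the residual-curvature term. A hedged sketch using an exponential fit, unrelated to CSEM data:

    import numpy as np

    # toy problem: fit y = a*exp(b*t); residuals r(m) = a*exp(b*t) - y
    t = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * t)

    def residuals_and_derivs(m):
        a, b = m
        e = np.exp(b * t)
        r = a * e - y                                   # residual vector
        J = np.column_stack([e, a * t * e])             # dr/da, dr/db
        H2 = np.zeros((t.size, 2, 2))                   # second derivatives of each residual
        H2[:, 0, 1] = H2[:, 1, 0] = t * e               # d2r/(da db)
        H2[:, 1, 1] = a * t ** 2 * e                    # d2r/db2
        return r, J, H2

    for name, full_hessian in (("Gauss-Newton", False), ("Newton", True)):
        m = np.array([1.5, -1.0])                       # same starting model for both
        for _ in range(12):
            r, J, H2 = residuals_and_derivs(m)
            H = J.T @ J
            if full_hessian:                            # add the residual-curvature term
                H += np.einsum("i,ijk->jk", r, H2)
            m -= np.linalg.solve(H, J.T @ r)
        r, _, _ = residuals_and_derivs(m)
        print(f"{name}: a = {m[0]:.4f}, b = {m[1]:.4f}, final cost = {0.5 * r @ r:.2e}")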
NASA Astrophysics Data System (ADS)
Farhadi, L.; Abdolghafoorian, A.
2015-12-01
The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of the fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e., moisture and temperature) with respect to observations, and on the parameter estimates with respect to prior values, over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function is an approximation of the posterior uncertainty of the parameter estimates. Uncertainty of the estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the First Order Second Moment (FOSM) method. Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions with different spatial scales, using remotely sensed measurements of surface moisture and temperature states.
The stress intensity factor for the double cantilever beam
NASA Technical Reports Server (NTRS)
Fichter, W. B.
1983-01-01
Fourier transforms and the Wiener-Hopf technique are used in conjunction with plane elastostatics to examine the singular crack tip stress field in the double cantilever beam (DCB) specimen. In place of the Dirac delta function, a family of functions which duplicates the important features of the concentrated forces without introducing unmanageable mathematical complexities is used as a loading function. With terms of order h^2/a^2 retained in the series expansion, the dimensionless stress intensity factor is found to be K h^(1/2)/P = 12^(1/2) (a/h + 0.6728 + 0.0377 h^2/a^2), in which P is the magnitude of the concentrated forces per unit thickness, a is the distance from the crack tip to the points of load application, and h is the height of each cantilever beam. The result is similar to that obtained by Gross and Srawley by fitting a line to discrete results from their boundary collocation analysis.
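Evaluating the quoted series result is straightforward; the following sketch uses invented load and geometry values (P per unit thickness in N/m, a and h in meters):

    import numpy as np

    def dcb_stress_intensity(P, a, h):
        """K from the series result K*sqrt(h)/P = sqrt(12)*(a/h + 0.6728 + 0.0377*h^2/a^2)."""
        return np.sqrt(12.0) * P / np.sqrt(h) * (a / h + 0.6728 + 0.0377 * h ** 2 / a ** 2)

    # example: P = 100 N per unit thickness, crack length a = 50 mm, arm height h = 10 mm
    print(f"K = {dcb_stress_intensity(P=100.0, a=0.05, h=0.01):.3e} Pa*sqrt(m)")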
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from applying the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
Capillas Pérez, R; Cabré Aguilar, V; Gil Colomé, A M; Gaitano García, A; Torra i Bou, J E
2000-01-01
The discovery of moist environment dressings as alternatives to traditional treatments based on exposing wounds to air opened new expectations for the care and treatment of chronic wounds. Over the years, these expectations have led to the availability of new moist environment dressings which have made it possible to improve the care provided to patients suffering from this kind of wound, as well as providing important cost-benefit-effectiveness considerations when selecting which type of treatment should be employed. The lack of comparative analyses between traditional treatments and moist environment treatments for chronic wounds among patients receiving primary health care led the authors to compare these treatment options in patients suffering from venous leg ulcers or pressure ulcers. The authors designed a randomized clinical trial involving patients receiving ambulatory care in order to compare the effectiveness and cost-benefit of traditional versus moist environment dressings in the treatment of patients suffering from stage II or III pressure ulcers or venous leg ulcers. In this trial, variables related to the effectiveness of both treatments, as well as their costs, were analyzed. Seventy wounds were included: 41 were venous leg ulcers, of which 21 received moist environment treatment and 20 received traditional cures; the other 29 were pressure ulcers, of which 15 received moist environment dressings and 14 received traditional dressings. No statistically significant differences were found in the defining variables for these lesions between the treatment groups. In the venous leg ulcer group, healing one square centimeter of the initial wound surface required an average of 18.13 days, 16.33 treatment sessions and 10,616 pesetas with traditional treatment, compared with an average of 18.22 days, 4.54 treatment sessions and 2,409 pesetas with moist environment dressings. In the pressure ulcer group, healing one square centimeter of the initial wound surface required an average of 12.18 days, 12.1 treatment sessions and 15,490 pesetas with traditional treatment, compared with an average of 7.12 days, 1.86 treatment sessions and 2,610 pesetas with moist environment dressings. The results of this randomized clinical trial demonstrated that moist environment treatment was more effective and had a better cost-benefit ratio than traditional treatment for pressure ulcers and venous leg ulcers in patients cared for by nursing personnel in primary health care centers, in agreement with the publications consulted by the authors.
WEBnm@ v2.0: Web server and services for comparing protein flexibility.
Tiwari, Sandhya P; Fuglebakk, Edvin; Hollup, Siv M; Skjærven, Lars; Cragnolini, Tristan; Grindhaug, Svenn H; Tekle, Kidane M; Reuter, Nathalie
2014-12-30
Normal mode analysis (NMA) using elastic network models is a reliable and cost-effective computational method to characterise protein flexibility and by extension, their dynamics. Further insight into the dynamics-function relationship can be gained by comparing protein motions between protein homologs and functional classifications. This can be achieved by comparing normal modes obtained from sets of evolutionary related proteins. We have developed an automated tool for comparative NMA of a set of pre-aligned protein structures. The user can submit a sequence alignment in the FASTA format and the corresponding coordinate files in the Protein Data Bank (PDB) format. The computed normalised squared atomic fluctuations and atomic deformation energies of the submitted structures can be easily compared on graphs provided by the web user interface. The web server provides pairwise comparison of the dynamics of all proteins included in the submitted set using two measures: the Root Mean Squared Inner Product and the Bhattacharyya Coefficient. The Comparative Analysis has been implemented on our web server for NMA, WEBnm@, which also provides recently upgraded functionality for NMA of single protein structures. This includes new visualisations of protein motion, visualisation of inter-residue correlations and the analysis of conformational change using the overlap analysis. In addition, programmatic access to WEBnm@ is now available through a SOAP-based web service. Webnm@ is available at http://apps.cbu.uib.no/webnma . WEBnm@ v2.0 is an online tool offering unique capability for comparative NMA on multiple protein structures. Along with a convenient web interface, powerful computing resources, and several methods for mode analyses, WEBnm@ facilitates the assessment of protein flexibility within protein families and superfamilies. These analyses can give a good view of how the structures move and how the flexibility is conserved over the different structures.
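Of the two comparison measures, the Root Mean Squared Inner Product is simple to state in code. A sketch with random orthonormal mode sets (the real tool computes modes from elastic network models, which is not reproduced here):

    import numpy as np

    def rmsip(U, V, k=10):
        """Root Mean Squared Inner Product between two sets of normal modes.
        U, V: (3N, n_modes) arrays with orthonormal mode vectors in columns."""
        O = U[:, :k].T @ V[:, :k]               # k x k matrix of mode inner products
        return np.sqrt(np.sum(O ** 2) / k)

    # two random orthonormal mode sets of a 3N = 300 dimensional system
    rng = np.random.default_rng(3)
    U, _ = np.linalg.qr(rng.normal(size=(300, 10)))
    V, _ = np.linalg.qr(rng.normal(size=(300, 10)))
    print("identical sets:", rmsip(U, U))       # 1.0 by construction
    print("unrelated sets:", round(rmsip(U, V), 3))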
Life Cycle Assessment and Cost Analysis of Water and ...
Changes in drinking and wastewater infrastructure need to incorporate a holistic view of the water service sustainability tradeoffs and potential benefits when considering shifts towards new treatment technology, decentralized systems, energy recovery and reuse of treated wastewater. The main goal of this study is to determine the influence of scale on the energy and cost performance of different transitional membrane bioreactors (MBR) in decentralized wastewater treatment (WWT) systems by performing a life cycle assessment (LCA) and cost analysis. LCA is a tool used to quantify sustainability-related metrics from a systems perspective. The study calculates the environmental and cost profiles of both aerobic MBRs (AeMBR) and anaerobic MBRs (AnMBR), which not only recover energy from waste, but also produce recycled water that can displace potable water for uses such as irrigation and toilet flushing. MBRs represent an intriguing technology to provide decentralized WWT services while maximizing resource recovery. A number of scenarios for these WWT technologies are investigated for different scale systems serving various population density and land area combinations to explore the ideal application potentials. MBR systems are examined from 0.05 million gallons per day (MGD) to 10 MGD and serve land use types from high density urban (100,000 people per square mile) to semi-rural single family (2,000 people per square mile). The LCA and cost model was built with ex
Regression analysis on the variation in efficiency frontiers for prevention stage of HIV/AIDS.
Kamae, Maki S; Kamae, Isao; Cohen, Joshua T; Neumann, Peter J
2011-01-01
To investigate how the cost effectiveness of preventing HIV/AIDS varies across possible efficiency frontiers (EFs) by taking into account potentially relevant external factors, such as prevention stage, and how the EFs can be characterized using regression analysis given uncertainty of the QALY-cost estimates. We reviewed cost-effectiveness estimates for the prevention and treatment of HIV/AIDS published from 2002-2007 and catalogued in the Tufts Medical Center Cost-Effectiveness Analysis (CEA) Registry. We constructed efficiency frontier (EF) curves by plotting QALYs against costs, using methods used by the Institute for Quality and Efficiency in Health Care (IQWiG) in Germany. We stratified the QALY-cost ratios by prevention stage, country of study, and payer perspective, and estimated EF equations using log and square-root models. A total of 53 QALY-cost ratios were identified for HIV/AIDS in the Tufts CEA Registry. Plotted ratios stratified by prevention stage were visually grouped into a cluster consisting of primary/secondary prevention measures and a cluster consisting of tertiary measures. Correlation coefficients for each cluster were statistically significant. For each cluster, we derived two EF equations - one based on the log model, and one based on the square-root model. Our findings indicate that stratification of HIV/AIDS interventions by prevention stage can yield distinct EFs, and that the correlation and regression analyses are useful for parametrically characterizing EF equations. Our study has certain limitations, such as the small number of included articles and the potential for study populations to be non-representative of countries of interest. Nonetheless, our approach could help develop a deeper appreciation of cost effectiveness beyond the deterministic approach developed by IQWiG.
Orthogonality catastrophe and fractional exclusion statistics
NASA Astrophysics Data System (ADS)
Ares, Filiberto; Gupta, Kumar S.; de Queiroz, Amilcar R.
2018-02-01
We show that the N-particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N-body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.
Gemperline, Paul J; Cash, Eric
2003-08-15
A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared (NIR) spectroscopy. Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
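A minimal sketch of the idea of soft (penalized) nonnegativity inside alternating least squares follows; the two-component simulated data, penalty weight and inner iteration scheme are invented for illustration and are not the authors' algorithm:

    import numpy as np

    rng = np.random.default_rng(4)
    # simulate 2-component mixture data: D (times x wavelengths) = C @ S.T + noise
    t = np.linspace(0, 1, 50)[:, None]
    C_true = np.hstack([np.exp(-3 * t), 1 - np.exp(-3 * t)])       # concentration profiles
    w = np.linspace(0, 1, 80)[None, :]
    S_true = np.vstack([np.exp(-((w - 0.3) / 0.10) ** 2),
                        np.exp(-((w - 0.7) / 0.15) ** 2)]).T       # spectra (wavelengths x 2)
    D = C_true @ S_true.T + 0.01 * rng.normal(size=(50, 80))

    def penalized_ls(A, B, lam, n_inner=5):
        """Solve min_X ||B - A X||^2 + lam * ||min(X, 0)||^2 column-wise.
        The nonnegativity penalty acts on iteratively re-selected negative entries."""
        X = np.linalg.lstsq(A, B, rcond=None)[0]
        for _ in range(n_inner):
            W = (X < 0).astype(float)                  # penalize only negative entries
            for j in range(B.shape[1]):
                X[:, j] = np.linalg.solve(A.T @ A + lam * np.diag(W[:, j]), A.T @ B[:, j])
        return X

    C = np.abs(rng.normal(size=(50, 2)))               # random start
    for _ in range(50):                                # alternating least squares
        S = penalized_ls(C, D, lam=10.0).T             # spectra estimate (80 x 2)
        C = penalized_ls(S, D.T, lam=10.0).T           # concentration estimate (50 x 2)
    print("most negative entry after soft constraints:", min(C.min(), S.min()))

Because the constraint is soft, small negative excursions may remain; that slack is exactly what reduces noise-induced distortion compared with hard clipping.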
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1976-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described, and its numerical properties are compared with the numerical properties of the conventional least squares estimator.
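The regression equations referred to above are the conditional mean and covariance of jointly Gaussian vectors. A sketch with an invented, positive-definite joint covariance:

    import numpy as np

    # joint covariance blocks for anomalies g (2) and geodetic data d (3); illustrative numbers
    Sgg = np.array([[4.0, 1.5],
                    [1.5, 3.0]])
    Sgd = np.array([[1.2, 0.8, 0.3],
                    [0.5, 1.1, 0.9]])
    Sdd = np.array([[2.0, 0.4, 0.1],
                    [0.4, 2.5, 0.6],
                    [0.1, 0.6, 1.8]])

    d = np.array([0.7, -0.2, 1.1])              # a realization of the data vector
    K = Sgd @ np.linalg.inv(Sdd)                # collocation (regression) gain
    g_hat = K @ d                               # conditional mean of the anomalies
    P = Sgg - K @ Sgd.T                         # conditional (posterior) covariance
    print("estimate:", np.round(g_hat, 3))
    print("posterior std:", np.round(np.sqrt(np.diag(P)), 3))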
A Christoffel function weighted least squares algorithm for collocation approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayan, Akil; Jakeman, John D.; Zhou, Tao
Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
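A hedged one-dimensional sketch of the sampling-and-weighting idea (Legendre polynomials on [-1, 1], whose equilibrium measure is the arcsine density; the target function and problem sizes are invented):

    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(6)
    f = lambda x: np.exp(x) * np.sin(3 * x)          # target function on [-1, 1]
    N = 21                                           # polynomial degrees 0..20
    M = 60                                           # number of samples

    def orthonormal_vandermonde(x):
        V = legendre.legvander(x, N - 1)
        return V * np.sqrt(2 * np.arange(N) + 1)     # orthonormal w.r.t. uniform on [-1, 1]

    # sample from the (arcsine) equilibrium measure of [-1, 1]
    x = np.cos(np.pi * rng.random(M))
    V = orthonormal_vandermonde(x)
    w = N / np.sum(V ** 2, axis=1)                   # Christoffel-function weights
    sw = np.sqrt(w)
    coef = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)[0]

    xt = np.linspace(-1, 1, 1000)
    print("max approximation error:", np.max(np.abs(orthonormal_vandermonde(xt) @ coef - f(xt))))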
Kim, Dong-Ju; Kim, Hyo-Joong; Seo, Ki-Won; Kim, Ki-Hyun; Kim, Tae-Wong; Kim, Han-Ki
2015-01-01
We report on an indium-free and cost-effective Cu2O/Cu/Cu2O multilayer mesh electrode grown by room temperature roll-to-roll sputtering as a viable alternative to ITO electrodes for the cost-effective production of large-area flexible touch screen panels (TSPs). By using a low resistivity metallic Cu interlayer and a patterned mesh structure, we obtained Cu2O/Cu/Cu2O multilayer mesh electrodes with a low sheet resistance of 15.1 Ohm/square and high optical transmittance of 89% as well as good mechanical flexibility. Outer/inner bending test results showed that the Cu2O/Cu/Cu2O mesh electrode had a mechanical flexibility superior to that of conventional ITO films. Using the diamond-patterned Cu2O/Cu/Cu2O multilayer mesh electrodes, we successfully demonstrated flexible film-film type and rigid glass-film-film type TSPs. The TSPs with the Cu2O/Cu/Cu2O mesh electrode were used to perform zoom in/out functions and multi-touch writing, indicating that these electrodes are promising cost-efficient transparent electrodes to substitute for conventional ITO electrodes in large-area flexible TSPs. PMID:26582471
Henriksen, Lisa; Schleicher, Nina C; Barker, Dianne C; Liu, Yawen; Chaloupka, Frank J
2016-10-01
To examine disparities in the price of tobacco and nontobacco products in pharmacies compared with other types of stores. We recorded the prices of Marlboro, Newport, the cheapest cigarettes, and bottled water in a random sample of licensed tobacco retailers (n = 579) in California in 2014. We collected comparable data from retailers (n = 2603) in school enrollment zones for representative samples of US 8th, 10th, and 12th graders in 2012. Ordinary least squares regressions modeled pretax prices as a function of store type and neighborhood demographics. In both studies, the cheapest cigarettes cost significantly less in pharmacies than other stores; the average estimated difference was $0.47 to $1.19 less in California. We observed similar patterns for premium-brand cigarettes. Conversely, bottled water cost significantly more in pharmacies than elsewhere. Newport cost less in areas with higher proportions of African Americans; other cigarette prices were related to neighborhood income and age. Neighborhood demographics were not related to water prices. Compared with other stores, pharmacies charged customers less for cigarettes and more for bottled water. State and local policies to promote tobacco-free pharmacies would eliminate an important source of discounted cigarettes.
Chen, C P; Wan, J Z
1999-01-01
A fast learning algorithm is proposed to find the optimal weights of flat neural networks (especially the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-squares method. This formulation makes it easy to update the weights instantly for both a newly added pattern and a newly added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time-series data sets including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models that require more complex architectures and more costly training. The results indicate that the proposed model is very attractive for real-time processes.
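A minimal sketch of the flat-network idea, training the output weights of a random functional-link expansion with a single linear least-squares solve (the prediction task, feature count and map parameters are invented):

    import numpy as np

    rng = np.random.default_rng(7)
    # one-step-ahead prediction of a chaotic series (logistic map) as a toy task
    s = np.empty(500)
    s[0] = 0.2
    for k in range(499):
        s[k + 1] = 3.9 * s[k] * (1 - s[k])
    X, y = s[:-1, None], s[1:]

    def expand(X, W, b):
        """Functional-link expansion: the input plus fixed random enhancement nodes."""
        return np.hstack([X, np.tanh(X @ W + b)])

    W, b = rng.normal(size=(1, 20)), rng.normal(size=20)
    H = expand(X, W, b)
    beta = np.linalg.lstsq(H, y, rcond=None)[0]     # weights from one linear LS solve

    y_hat = H @ beta
    print("training RMSE:", np.sqrt(np.mean((y_hat - y) ** 2)))

Because only the output weights are learned, adding a pattern or an enhancement node amounts to growing H by a row or a column and updating the same linear solve, which is the stepwise update the abstract describes.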
Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-08-01
Tomography is a very cost-effective method for studying the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations from the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the international reference ionosphere 2012 (IRI-2012) are used as the object function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. A root mean square error (RMSE) of 0.17 × 10^11 electrons/m^3 is also computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.
ERIC Educational Resources Information Center
Bortolazzo, Julio L.
San Joaquin Delta College (California), planning on an enrollment increase of more than 10% annually, has estimated its minimum facility needs for an enrollment of approximately 7500 students by 1972. The gross cost per square foot is expected to be $25.00 for general construction and $38.50 for special construction. For an estimated total of…
Sparse and stable Markowitz portfolios.
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-07-28
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
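A small sketch of the penalized formulation: the l1 term is made smooth by splitting w = u - v with u, v >= 0, so sum(u + v) equals the l1 norm at the optimum. Returns, penalty strength and target return are simulated, and scipy's general-purpose SLSQP stands in for the authors' solver:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)
    T, n = 250, 8
    R = 0.01 * rng.normal(size=(T, n)) + 0.0004      # simulated daily returns of n assets
    rho = 0.0005                                     # required mean portfolio return
    lam = 0.001                                      # l1 penalty strength

    def objective(z):
        w = z[:n] - z[n:]
        err = R @ w - rho
        return err @ err / T + lam * z.sum()         # least-squares fit + l1 penalty

    cons = [{"type": "eq", "fun": lambda z: z[:n].sum() - z[n:].sum() - 1.0},  # budget
            {"type": "eq", "fun": lambda z: R.mean(axis=0) @ (z[:n] - z[n:]) - rho}]
    res = minimize(objective, x0=np.full(2 * n, 0.1), bounds=[(0, None)] * 2 * n,
                   constraints=cons, method="SLSQP")
    w = res.x[:n] - res.x[n:]
    print("weights:", np.round(w, 3), " active positions:", int(np.sum(np.abs(w) > 1e-4)))

Raising lam trades off tracking error against sparsity, mirroring the paper's observation that the penalty both stabilizes the problem and limits the number of active positions.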
NASA Astrophysics Data System (ADS)
1990-01-01
The Rayovac TANDEM is an advanced technology combination work light and general purpose flashlight that incorporates several NASA technologies. The TANDEM functions as two lights in one. It features a long range spotlight and wide angle floodlight; simple one-hand electrical switching changes the beam from spot to flood. TANDEM developers made particular use of NASA's extensive research in ergonomics in the TANDEM's angled handle, convenient shape and different orientations. The shatterproof, water resistant plastic casing also draws on NASA technology, as does the shape and beam distance of the square diffused flood. TANDEM's heavy duty magnet that permits the light to be affixed to any metal object borrows from NASA research on rare earth magnets that combine strong magnetic capability with low cost. Developers used a NASA-developed ultrasonic welding technique in the light's interior.
Optimal landing of a helicopter in autorotation
NASA Technical Reports Server (NTRS)
Lee, A. Y. N.
1985-01-01
Gliding descent in autorotation is a maneuver used by helicopter pilots in the event of engine failure. The landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem, using the OH-58A helicopter. Helicopter vertical and horizontal velocities, vertical and horizontal displacements, and the rotor angular speed were modeled. An empirical approximation for the induced velocity in the vortex-ring state was provided. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resultant two-point boundary value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.
Anisotropic mean-square displacements in two-dimensional colloidal crystals of tilted dipoles
NASA Astrophysics Data System (ADS)
Froltsov, V. A.; Likos, C. N.; Löwen, H.; Eisenmann, C.; Gasser, U.; Keim, P.; Maret, G.
2005-03-01
Superparamagnetic colloidal particles confined to a flat horizontal air-water interface in an external magnetic field, which is tilted relative to the interface, form anisotropic two-dimensional crystals resulting from their mutual dipole-dipole interactions. Using real-space experiments and harmonic lattice theory we explore the mean-square displacements of the particles in the directions parallel and perpendicular to the in-plane component of the external magnetic field as a function of the tilt angle. We find that the anisotropy of the mean-square displacement behaves nonmonotonically as a function of the tilt angle and does not correlate with the structural anisotropy of the crystal.
Functional Generalized Structured Component Analysis.
Suk, Hye Won; Hwang, Heungsun
2016-12-01
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that represent infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.
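The building block, a penalized least squares fit on a spline basis expansion, can be sketched as follows (truncated power basis and a generic second-difference ridge penalty; this is an illustration, not the paper's exact criterion):

    import numpy as np

    rng = np.random.default_rng(12)
    t = np.sort(rng.uniform(0, 1, 60))               # irregular measurement occasions
    y = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=60)

    knots = np.linspace(0, 1, 12)
    B = np.maximum(0, t[:, None] - knots[None, :]) ** 3      # truncated cubic terms
    B = np.hstack([np.ones((60, 1)), t[:, None], t[:, None] ** 2, t[:, None] ** 3, B])

    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)     # second-difference penalty matrix
    lam = 1e-4
    coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)  # penalized least squares
    print("fit RMSE:", np.sqrt(np.mean((B @ coef - y) ** 2)))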
Four-square. Practice profitability stands on four foundations.
Mefford, Daniel D
2003-09-01
A medical practice's profitability stands on four legs: physician productivity, accounts receivable, overhead costs and ancillary revenue. The author describes where weakness can occur in each of these foundations and how to remedy such structural defects.
Stability Criteria Analysis for Landing Craft Utility
2017-12-01
Nomenclature fragment recovered from the report front matter: m^2 (square meter); m/s (meters per second); m/s^2 (meters per second squared); n (vertical displacement of the sea water free surface); n3 (ship's heave displacement); n5 (ship's pitch angle); p(ξ) (Rayleigh distribution probability function); POSSE (Program of Ship Salvage Engineering); pk… (spectrum constant); γ (JONSWAP wave spectrum peak factor); Γ(λ) (gamma probability function); Δ (ship's displacement); Δω (small frequency).
Assessment of parametric uncertainty for groundwater reactive transport modeling.
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of the formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the Differential Evolution Adaptive Metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Accumulated energy norm for full waveform inversion of marine data
NASA Astrophysics Data System (ADS)
Shin, Changsoo; Ha, Wansoo
2017-12-01
Macro-velocity models are important for imaging the subsurface structure. However, the conventional objective functions of full waveform inversion in the time and the frequency domain have a limited ability to recover the macro-velocity model because of the absence of low-frequency information. In this study, we propose new objective functions that can recover the macro-velocity model by minimizing the difference between the zero-frequency components of the square of seismic traces. Instead of the seismic trace itself, we use the square of the trace, which contains low-frequency information. We apply several time windows to the trace and obtain zero-frequency information of the squared trace for each time window. The shape of the new objective functions shows that they are suitable for local optimization methods. Since we use the acoustic wave equation in this study, this method can be used for deep-sea marine data, in which elastic effects can be ignored. We show that the zero-frequency components of the square of the seismic traces can be used to recover macro-velocities from synthetic and field data.
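The proposed misfit is easy to state in code: per time window, compare the zero-frequency component (the sum) of the squared traces. A sketch with an invented Ricker-wavelet example:

    import numpy as np

    def accumulated_energy_misfit(d_obs, d_syn, n_windows=4):
        """Sum over time windows of the squared difference between the
        zero-frequency components (sums) of the squared traces."""
        misfit = 0.0
        for w_obs, w_syn in zip(np.array_split(d_obs ** 2, n_windows),
                                np.array_split(d_syn ** 2, n_windows)):
            misfit += (w_obs.sum() - w_syn.sum()) ** 2
        return misfit

    t = np.linspace(0, 2, 1000)
    ricker = lambda t0: (1 - 2 * (np.pi * 10 * (t - t0)) ** 2) \
        * np.exp(-(np.pi * 10 * (t - t0)) ** 2)
    print(accumulated_energy_misfit(ricker(0.8), ricker(0.8)))   # 0 for a matching model
    print(accumulated_energy_misfit(ricker(0.8), ricker(1.0)))   # grows with the time shift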
Adams, J; Adler, C; Ahammed, Z; Allgower, C; Amonett, J; Anderson, B D; Anderson, M; Averichev, G S; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellwied, R; Berger, J; Bichsel, H; Billmeier, A; Bland, L C; Blyth, C O; Bonner, B E; Boucham, A; Brandin, A; Bravar, A; Cadman, R V; Caines, H; Calderónde la Barca Sánchez, M; Cardenas, A; Carroll, J; Castillo, J; Castro, M; Cebra, D; Chaloupka, P; Chattopadhyay, S; Chen, Y; Chernenko, S P; Cherney, M; Chikanian, A; Choi, B; Christie, W; Coffin, J P; Cormier, T M; Corral, M M; Cramer, J G; Crawford, H J; Derevschikov, A A; Didenko, L; Dietel, T; Draper, J E; Dunin, V B; Dunlop, J C; Eckardt, V; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Fachini, P; Faine, V; Faivre, J; Fatemi, R; Filimonov, K; Finch, E; Fisyak, Y; Flierl, D; Foley, K J; Fu, J; Gagliardi, C A; Gagunashvili, N; Gans, J; Gaudichet, L; Germain, M; Geurts, F; Ghazikhanian, V; Grachov, O; Grigoriev, V; Guedon, M; Guertin, S M; Gushin, E; Hallman, T J; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Heppelmann, S; Herston, T; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Horsley, M; Huang, H Z; Humanic, T J; Igo, G; Ishihara, A; Ivanshin, Yu I; Jacobs, P; Jacobs, W W; Janik, M; Johnson, I; Jones, P G; Judd, E G; Kaneta, M; Kaplan, M; Keane, D; Kiryluk, J; Kisiel, A; Klay, J; Klein, S R; Klyachko, A; Kollegger, T; Konstantinov, A S; Kopytine, M; Kotchenda, L; Kovalenko, A D; Kramer, M; Kravtsov, P; Krueger, K; Kuhn, C; Kulikov, A I; Kunde, G J; Kunz, C L; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Lansdell, C P; Lasiuk, B; Laue, F; Lauret, J; Lebedev, A; Lednický, R; Leontiev, V M; LeVine, M J; Li, Q; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, L; Liu, Z; Liu, Q J; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Ludlam, T; Lynn, D; Ma, J; Magestro, D; Majka, R; Margetis, S; Markert, C; Martin, L; Marx, J; Matis, H S; Matulenko, Yu A; McShane, T S; Meissner, F; Melnick, Yu; Meschanin, A; Messer, M; Miller, M L; Milosevich, Z; Minaev, N G; Mitchell, J; Moore, C F; Morozov, V; de Moura, M M; Munhoz, M G; Nelson, J M; Nevski, P; Nikitin, V A; Nogach, L V; Norman, B; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Paic, G; Pandey, S U; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Perevoztchikov, V; Peryt, W; Petrov, V A; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potrebenikova, E; Prindle, D; Pruneau, C; Putschke, J; Rai, G; Rakness, G; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevski, O V; Romero, J L; Rose, A; Roy, C; Rykov, V; Sakrejda, I; Salur, S; Sandweiss, J; Savin, I; Schambach, J; Scharenberg, R P; Schmitz, N; Schroeder, L S; Schüttauf, A; Schweda, K; Seger, J; Seliverstov, D; Seyboth, P; Shahaliev, E; Shestermanov, K E; Shimanskii, S S; Simon, F; Skoro, G; Smirnov, N; Snellings, R; Sorensen, P; Sowinski, J; Spinka, H M; Srivastava, B; Stephenson, E J; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Struck, C; Suaide, A A P; Sugarbaker, E; Suire, C; Sumbera, M; Surrow, B; Symons, T J M; de Toledo, A Szanto; Szarwas, P; Tai, A; Takahashi, J; Tang, A H; Thein, D; Thomas, J H; Thompson, M; Tikhomirov, V; Tokarev, M; Tonjes, M B; Trainor, T A; Trentalange, S; Tribble, R E; Trofimov, V; Tsai, O; Ullrich, T; Underwood, D G; Van Buren, G; Vander Molen, A M; Vasilevski, I M; Vasiliev, A N; Vigdor, S E; Voloshin, S A; Wang, F; Ward, H; 
Watson, J W; Wells, R; Westfall, G D; Whitten, C; Wieman, H; Willson, R; Wissink, S W; Witt, R; Wood, J; Xu, N; Xu, Z; Yakutin, A E; Yamamoto, E; Yang, J; Yepes, P; Yurevich, V I; Zanevski, Y V; Zborovský, I; Zhang, H; Zhang, W M; Zoulkarneev, R; Zubarev, A N
2003-05-02
The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged particle pairs and identified charged pion pairs in Au+Au collisions at √(s_NN) = 130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization.
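To make the pair counting concrete, here is an illustrative sketch of one common balance function convention (sums over ordered pairs i != j, normalized by the positive and negative multiplicities); the toy event generator and all numbers are invented:

    import numpy as np

    def balance_function(y, q, bins):
        """Charge balance function B(|dy|) from rapidities y and charges q (+1/-1)."""
        pos, neg = y[q > 0], y[q < 0]
        def pair_hist(a, b, same):
            d = np.abs(a[:, None] - b[None, :])
            if same:
                d = d[~np.eye(a.size, dtype=bool)]   # drop self-pairs, keep both orderings
            return np.histogram(d.ravel(), bins=bins)[0].astype(float)
        n_pm = pair_hist(pos, neg, same=False)       # opposite-sign pairs
        n_pp = pair_hist(pos, pos, same=True)
        n_mm = pair_hist(neg, neg, same=True)
        return 0.5 * ((n_pm - n_pp) / pos.size + (n_pm - n_mm) / neg.size)

    # toy event: each positive particle has a balancing partner created nearby in rapidity
    rng = np.random.default_rng(14)
    y0 = rng.uniform(-2, 2, 500)
    y = np.concatenate([y0, y0 + 0.3 * rng.normal(size=500)])
    q = np.concatenate([np.ones(500), -np.ones(500)])
    print(np.round(balance_function(y, q, np.linspace(0, 2, 11)), 3))  # peaks at small |dy|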
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
NASA Astrophysics Data System (ADS)
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decomposition in the projection domain yields a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. A variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
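As a hedged illustration of the two data fidelity terms compared above, the sketch below evaluates a Gaussian-matched WLS term and a Poisson-matched KL term on simulated low-count data; the forward-model values and all names are invented for the example, not taken from the authors' implementation.

```python
import numpy as np

def wls_term(y, s, var):
    """Weighted least squares data fidelity (Gaussian noise model):
    sum_i (y_i - s_i)^2 / (2 var_i)."""
    return np.sum((y - s) ** 2 / (2.0 * var))

def kl_term(y, s):
    """Kullback-Leibler data fidelity (Poisson noise model):
    sum_i s_i - y_i + y_i log(y_i / s_i), with y log(y/s) -> 0 at y = 0."""
    y = np.asarray(y, dtype=float)
    s = np.asarray(s, dtype=float)
    mask = y > 0
    return np.sum(s - y) + np.sum(y[mask] * np.log(y[mask] / s[mask]))

# Toy comparison in the low-photon-count regime, where the Gaussian
# approximation underlying WLS degrades while KL stays Poisson-matched.
rng = np.random.default_rng(0)
expected = np.full(1000, 5.0)              # hypothetical forward-model counts
observed = rng.poisson(expected)
print(wls_term(observed, expected, var=expected))
print(kl_term(observed, expected))
```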
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
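The record above contrasts MSE with the minimum error entropy criterion. As a rough sketch (not the authors' FPGA design), the snippet below estimates the quadratic information potential of the errors, whose maximization is equivalent to minimizing the error entropy, and performs batch MEE-style gradient updates for a linear filter; the O(N^2) pairwise kernel sums are the independent blocks that make the algorithm parallelizable. All sizes and the learning rate are invented.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Quadratic information potential V(e) = (1/N^2) sum_ij G_sigma(e_i - e_j).
    Maximizing V is equivalent to minimizing Renyi's quadratic error entropy."""
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]                    # all pairwise differences
    return np.exp(-diffs ** 2 / (2.0 * sigma ** 2)).mean()

def mee_update(w, X, d, lr=0.1, sigma=1.0):
    """One batch gradient-ascent step on V for a linear filter w over a
    window of inputs X (rows) and targets d."""
    e = d - X @ w
    diffs = e[:, None] - e[None, :]
    k = np.exp(-diffs ** 2 / (2.0 * sigma ** 2)) * diffs
    # dV/dw accumulates G(e_ij) * e_ij * (x_i - x_j) over all pairs (i, j).
    grad = (k[:, :, None] * (X[:, None, :] - X[None, :, :])).sum(axis=(0, 1))
    grad /= (sigma ** 2 * len(e) ** 2)
    return w + lr * grad

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 4))           # 64-sample window, 4 filter taps
w_true = np.array([0.5, -1.0, 0.25, 0.0])
d = X @ w_true + 0.1 * rng.standard_normal(64)
w = np.zeros(4)
for _ in range(200):
    w = mee_update(w, X, d)
print(w)   # near w_true (MEE ignores the error mean; a bias is fixed separately)
```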
Yu, Tzy-Chyi; Zhou, Huanxue
2015-09-01
Evaluate performance of techniques used to handle missing cost-to-charge ratio (CCR) data in the USA Healthcare Cost and Utilization Project's Nationwide Inpatient Sample. Four techniques to replace missing CCR data were evaluated: deleting discharges with missing CCRs (complete case analysis), reweighting as recommended by Healthcare Cost and Utilization Project, reweighting by adjustment cells and hot deck imputation by adjustment cells. Bias and root mean squared error of these techniques on hospital cost were evaluated in five disease cohorts. Similar mean cost estimates would be obtained with any of the four techniques when the percentage of missing data is low (<10%). When total cost is the outcome of interest, a reweighting technique to avoid underestimation from dropping observations with missing data should be adopted.
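For illustration only, here is a minimal pandas sketch of hot-deck imputation by adjustment cells, one of the four techniques evaluated above; the cell variables and column names (`region`, `bedsize`, `ccr`) are hypothetical, not the study's actual strata.

```python
import numpy as np
import pandas as pd

def hot_deck_ccr(df, cell_cols, ccr_col="ccr", seed=0):
    """Each record with a missing cost-to-charge ratio receives the CCR of
    a randomly drawn donor from the same adjustment cell."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    for _, idx in out.groupby(cell_cols).groups.items():
        block = out.loc[idx, ccr_col]
        donors = block.dropna().to_numpy()
        missing = block.index[block.isna()]
        if len(donors) and len(missing):
            out.loc[missing, ccr_col] = rng.choice(donors, size=len(missing))
    return out

demo = pd.DataFrame({"region": ["NE", "NE", "NE", "S", "S"],
                     "bedsize": ["large"] * 5,
                     "ccr": [0.45, np.nan, 0.50, 0.30, np.nan]})
print(hot_deck_ccr(demo, ["region", "bedsize"]))
```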
Nikola Tesla Educational Opportunity School.
ERIC Educational Resources Information Center
Design Cost Data, 2001
2001-01-01
Describes the architectural design, costs, general description, and square footage data for the Nikola Tesla Educational Opportunity School in Colorado Springs, Colorado. A floor plan and photos are included along with a list of manufacturers and suppliers used for the project. (GR)
Excess flow valve benefit/cost analysis.
DOT National Transportation Integrated Search
1994-12-31
The Office of Pipeline Safety (OPS) is adopting regulations requiring the installation of Excess Flow Valves (EFVs) on all new or renewed single-family residential gas services that operate at pressures that are always 10 psig (pounds per square inch...
Least squares reverse time migration of controlled order multiples
NASA Astrophysics Data System (ADS)
Liu, Y.
2016-12-01
Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders to minimize the difference between Born-modeling-predicted multiples and specific-order multiples from observational data in order to attenuate the crosstalk. This method is denoted as the least-squares reverse time migration of controlled order multiples (LSRTM-CM). Our numerical examples demonstrated that the LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant Nos. 41430321 and 41374138).
From direct-space discrepancy functions to crystallographic least squares.
Giacovazzo, Carmelo
2015-01-01
Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach leads also to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach here described includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
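A minimal numpy illustration of the two criteria discussed above, PRD as a relative, energy-dependent measure and RMSE in the signal's own units; the toy EEG signal is invented for the example.

```python
import numpy as np

def rmse(x, x_rec):
    """Root-mean-square error in the signal's own units (e.g., microvolts),
    directly comparable to allowable-noise guidelines."""
    return np.sqrt(np.mean((x - x_rec) ** 2))

def prd(x, x_rec):
    """Percentage root-mean-square difference: a global, relative indicator
    whose clinical meaning depends on the signal's energy."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

t = np.linspace(0, 1, 512)
eeg = 50e-6 * np.sin(2 * np.pi * 10 * t)      # toy 10 Hz rhythm, ~50 uV
recon = eeg + 2e-6 * np.random.default_rng(1).standard_normal(512)
print(f"RMSE = {rmse(eeg, recon) * 1e6:.2f} uV, PRD = {prd(eeg, recon):.2f} %")
```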
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced least-squares fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the least-squares fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through least-squares fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components: R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
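A rough sketch of the core idea under simplifying assumptions: solve a least-squares system while keeping only the dominant SVD modes of the derivative matrix. The fixed mode count below stands in for the automatic identification and ordering described above; the test matrix is invented.

```python
import numpy as np

def svd_enhanced_lstsq(J, dy, n_modes):
    """Least-squares solve of J dx ~= dy using only the leading SVD modes
    of the derivative (response) matrix J."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)   # s sorted descending
    k = min(n_modes, len(s))
    return Vt[:k].T @ ((U[:, :k].T @ dy) / s[:k])

# Toy ill-conditioned fit: a near-degenerate column creates one weak mode.
rng = np.random.default_rng(2)
J = rng.standard_normal((100, 20))
J[:, -1] = J[:, 0] + 1e-8 * rng.standard_normal(100)
x_true = rng.standard_normal(20)
dy = J @ x_true + 0.01 * rng.standard_normal(100)

x_trunc = svd_enhanced_lstsq(J, dy, n_modes=19)
x_full = np.linalg.lstsq(J, dy, rcond=-1)[0]
print(np.linalg.norm(x_trunc - x_true))   # modest: weak mode discarded
print(np.linalg.norm(x_full - x_true))    # large: weak mode amplifies noise
```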
Household response to environmental incentives for rain garden adoption
NASA Astrophysics Data System (ADS)
Newburn, David A.; Alberini, Anna
2016-02-01
A decentralized approach to encourage the voluntary adoption of household stormwater management practices is increasingly needed to mitigate urban runoff and to comply with more stringent water quality regulations. We analyze the household response to a hypothetical rebate program to incentivize rain garden adoption using household survey data from the Baltimore-Washington corridor. We asked respondents whether the household would adopt a rain garden without a rebate or when offered a randomly assigned rebate. An interval-data model is used to estimate household demand on the willingness to pay (WTP) for a rain garden as a function of demographic factors, gardening activities, environmental attitudes, and other household characteristics. Estimation results indicate that mean WTP for a rain garden in our sample population is approximately $6.72 per square foot, corresponding to almost three-fourths of the installation cost. The expected adoption rate more than tripled when comparing no rebate versus a government rebate set at one-third of the installation cost, indicating that economic incentives matter. There is substantial heterogeneity in the WTP among households. Higher levels of WTP are estimated for households with higher environmental concern for the Chesapeake Bay and local streams, garden experience, higher income, and non-senior citizen adults. We conclude that a cost-share rebate approach is likely to significantly affect household adoption decisions, and the partial contributions paid by households can assist with lowering the substantial compliance costs for local governments to meet water quality requirements.
Plowes, Nicola J.R; Adams, Eldridge S
2005-01-01
Lanchester's models of attrition describe casualty rates during battles between groups as functions of the numbers of individuals and their fighting abilities. Originally developed to describe human warfare, Lanchester's square law has been hypothesized to apply broadly to social animals as well, with important consequences for their aggressive behaviour and social structure. According to the square law, the fighting ability of a group is proportional to the square of the number of individuals, but rises only linearly with fighting ability of individuals within the group. By analyzing mortality rates of fire ants (Solenopsis invicta) fighting in different numerical ratios, we provide the first quantitative test of Lanchester's model for a non-human animal. Casualty rates of fire ants were not consistent with the square law; instead, group fighting ability was an approximately linear function of group size. This implies that the relative numbers of casualties incurred by two fighting groups are not strongly affected by relative group sizes and that battles do not disproportionately favour group size over individual prowess. PMID:16096093
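For readers unfamiliar with the model being tested, the sketch below integrates the classical Lanchester aimed-fire (square-law) equations, under which doubling group size helps far more than doubling individual fighting ability. This is the standard textbook model, not the authors' statistical analysis, and the rate constants are arbitrary; the fire-ant data described above did not follow this square-law prediction.

```python
def battle(n_a, n_b, alpha, beta, dt=0.001):
    """Integrate Lanchester's aimed-fire equations dA/dt = -beta*B,
    dB/dt = -alpha*A until one side is annihilated; returns survivors.
    The square law conserves alpha*A^2 - beta*B^2 (approximately here)."""
    a, b = float(n_a), float(n_b)
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# Doubling numbers (~173 survivors) beats doubling prowess (~71 survivors):
print(battle(n_a=200, n_b=100, alpha=1.0, beta=1.0))
print(battle(n_a=100, n_b=100, alpha=2.0, beta=1.0))
```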
de Almeida, Maurício Liberal; Saatkamp, Cassiano Junior; Fernandes, Adriana Barrinha; Pinheiro, Antonio Luiz Barbosa; Silveira, Landulfo
2016-09-01
Urea and creatinine are commonly used as biomarkers of renal function. Abnormal concentrations of these biomarkers are indicative of pathological processes such as renal failure. This study aimed to develop a model based on Raman spectroscopy to estimate the concentration values of urea and creatinine in human serum. Blood sera from 55 clinically normal subjects and 47 patients with chronic kidney disease undergoing dialysis were collected, and concentrations of urea and creatinine were determined by spectrophotometric methods. A Raman spectrum was obtained with a high-resolution dispersive Raman spectrometer (830 nm). A spectral model was developed based on partial least squares (PLS), where the concentrations of urea and creatinine were correlated with the Raman features. Principal components analysis (PCA) was used to discriminate dialysis patients from normal subjects. The PLS model showed r = 0.97 and r = 0.93 for urea and creatinine, respectively. The root mean square errors of cross-validation (RMSECV) for the model were 17.6 and 1.94 mg/dL, respectively. PCA showed high discrimination between dialysis and normality (95 % accuracy). The Raman technique was able to determine the concentrations with low error and to discriminate dialysis from normal subjects, consistent with a rapid and low-cost test.
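A toy scikit-learn sketch of the modelling pipeline described above, PLS regression evaluated by cross-validated RMSECV, using synthetic spectra in place of the real Raman data; the peak shape, concentration range, and component count are invented.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in for Raman spectra: a peak whose height tracks urea.
rng = np.random.default_rng(3)
n, p = 100, 400
urea = rng.uniform(10, 200, n)                     # mg/dL, toy range
wavenum = np.linspace(0, 1, p)
peak = np.exp(-(wavenum - 0.5) ** 2 / 0.002)
spectra = np.outer(urea, peak) + rng.standard_normal((n, p))

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, urea, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - urea) ** 2))
r = np.corrcoef(pred, urea)[0, 1]
print(f"r = {r:.3f}, RMSECV = {rmsecv:.2f} mg/dL")
```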
Diaphragm motion quantification in megavoltage cone-beam CT projection images.
Chen, Mingqing; Siochi, R Alfredo
2010-05-01
To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: The product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MVCBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.
Demonstration of the feasibility of automated silicon solar cell fabrication
NASA Technical Reports Server (NTRS)
Thornhill, J. W.; Taylor, W. E.
1976-01-01
An analysis of estimated costs indicates that for an annual output of 4,747,000 hexagonal cells (38 mm on a side) a total factory cost of $0.866 per cell could be achieved. For cells with 14% efficiency at AM0 intensity (1353 watts per square meter), this annual production rate is equivalent to 3,373 kilowatts and a manufacturing cost of $1.22 per watt of electrical output. A laboratory model of such a facility was operated to produce a series of demonstration runs, producing hexagonal cells, 2 x 2 cm cells, and 2 x 4 cm cells.
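The stated figures are mutually consistent, as this short check shows (numbers taken directly from the abstract above).

```python
# Reproducing the cost arithmetic stated in the abstract.
cells_per_year = 4_747_000
cost_per_cell = 0.866              # dollars per cell
kilowatts_per_year = 3_373         # stated equivalent power output

factory_cost = cells_per_year * cost_per_cell        # ~= $4.11 million
cost_per_watt = factory_cost / (kilowatts_per_year * 1e3)
print(f"${cost_per_watt:.2f} per watt")              # -> $1.22, as quoted
```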
NASA Astrophysics Data System (ADS)
Cheng, Jian; Zhang, Fan; Liu, Tiegang
2018-06-01
In this paper, a class of new high order reconstructed DG (rDG) methods based on the compact least-squares (CLS) reconstruction [23,24] is developed for simulating two dimensional steady-state compressible flows on hybrid grids. The proposed method combines the advantages of the DG discretization with the flexibility of the compact least-squares reconstruction, which exhibits superior potential in enhancing the level of accuracy and reducing the computational cost compared to the underlying DG methods with respect to the same number of degrees of freedom. To be specific, a third-order compact least-squares rDG(p1p2) method and a fourth-order compact least-squares rDG(p2p3) method are developed and investigated in this work. In this compact least-squares rDG method, the low order degrees of freedom are evolved through the underlying DG(p1) method and DG(p2) method, respectively, while the high order degrees of freedom are reconstructed through the compact least-squares reconstruction, in which the constitutive relations are built by requiring the reconstructed polynomial and its spatial derivatives on the target cell to conserve the cell averages and the corresponding spatial derivatives on the face-neighboring cells. The large sparse linear system resulting from the compact least-squares reconstruction can be solved relatively efficiently when it is coupled with the temporal discretization in steady-state simulations. A number of test cases are presented to assess the performance of the high order compact least-squares rDG methods, which demonstrates their potential to be an alternative approach for high order numerical simulations of steady-state compressible flows.
Design of a colorimetric sensing platform using reflection mode plasmonic colour filters
NASA Astrophysics Data System (ADS)
Mudachathi, Renilkumar; Tanaka, Takuo
2017-08-01
Plasmonic nanostructures fabricated from inexpensive and abundant aluminum show intense, narrow reflection peaks that respond strongly to external stimuli, providing a simple yet powerful detection mechanism well suited to the development of low-cost, low-power sensors such as colorimetric sensors, which transduce external stimuli or environmental changes into visible colour changes. Such low-cost, disposable sensors are in great demand for point-of-care and home health care diagnostic applications. We present the design of a colorimetric sensing platform based on reflection mode plasmonic colour filters on both silicon and glass substrates, which demonstrates a sharp colour change for varying ambient refractive index. The sensor is essentially a plasmonic metamaterial in which an aluminum square plate hovering on a PMMA nanopillar, against the background of a perforated aluminum reflector, forms the unit cell, which is arranged periodically in a 2D square lattice. The metasurface has two distinct absorption peaks in the visible region leaving a strong reflection band that responds strongly to ambient refractive index changes, providing a means for the realization of a low-cost colorimetric sensing platform.
Correspondence between spanning trees and the Ising model on a square lattice
NASA Astrophysics Data System (ADS)
Viswanathan, G. M.
2017-06-01
An important problem in statistical physics concerns the fascinating connections between partition functions of lattice models studied in equilibrium statistical mechanics on the one hand and graph theoretical enumeration problems on the other hand. We investigate the nature of the relationship between the number of spanning trees and the partition function of the Ising model on the square lattice. The spanning tree generating function T(z) gives the spanning tree constant when evaluated at z = 1, while giving the lattice Green function when differentiated. It is known that for the infinite square lattice the partition function Z(K) of the Ising model evaluated at the critical temperature K = K_c is related to T(1). Here we show that this idea in fact generalizes to all real temperatures. We prove that [Z(K) sech(2K)]² = k exp[T(k)], where k = 2 tanh(2K) sech(2K). The identical Mahler measure connects the two seemingly disparate quantities T(z) and Z(K). In turn, the Mahler measure is determined by the random walk structure function. Finally, we show that the above correspondence does not generalize in a straightforward manner to nonplanar lattices.
NASA Technical Reports Server (NTRS)
Periaux, J.
1979-01-01
The numerical simulation of the transonic flows of idealized fluids and of incompressible viscous fluids by nonlinear least squares methods is presented. The nonlinear equations, the boundary conditions, and the various constraints controlling the two types of flow are described. The standard iterative methods for solving a quasi-elliptic nonlinear partial differential equation are reviewed, with emphasis placed on two examples: the fixed point method applied to the Gelder functional in the case of compressible subsonic flows, and the Newton method used in the technique of decomposition of the lifting potential. The new abstract least squares method is discussed. It consists of substituting for the nonlinear equation a minimization problem in a Sobolev functional space of H⁻¹ type.
Integral Equations and Scattering Solutions for a Square-Well Potential.
ERIC Educational Resources Information Center
Bagchi, B.; Seyler, R. G.
1979-01-01
Derives Green's functions and integral equations for scattering solutions subject to a variety of boundary conditions. Exact solutions are obtained for the case of a finite spherical square-well potential, and properties of these solutions are discussed. (Author/HM)
Dynamic Polymorphic Reconfiguration to Effectively Cloak a Circuit’s Function
2011-03-24
[List-of-figures fragment: SEMA of separate square-and-multiply trace with key (E B5), RSA Version B; separate square-and-multiply trace after signal processing, RSA Version B; appendix B, RSA traces.]
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and that performance, as indicated by the root mean square errors, is acceptable for the intended application; however, model response for individual years, particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
A sigmoidal model for biosorption of heavy metal cations from aqueous media.
Özen, Rümeysa; Sayar, Nihat Alpagu; Durmaz-Sam, Selcen; Sayar, Ahmet Alp
2015-07-01
A novel multi-input single output (MISO) black-box sigmoid model is developed to simulate the biosorption of heavy metal cations by the fission yeast from aqueous medium. Validation and verification of the model are done through statistical chi-squared hypothesis tests, and the model is evaluated by uncertainty and sensitivity analyses. The simulated results are in agreement with the data of the studied system, in which Schizosaccharomyces pombe biosorbs Ni(II) cations at various process conditions. Experimental data were obtained originally for this work using dead cells of an adapted variant of S. pombe and represented by Freundlich isotherms. A process optimization scheme is proposed using the present model to build a novel application of a cost-merit objective function which would be useful to predict optimal operation conditions.
Evaluation of the Williams-type model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The Williams-type yield model is based on multiple regression analysis of historical time series data at CRD level pooled to regional level (groups of similar CRDs). Basic variables considered in the analysis include USDA yield, monthly mean temperature, monthly precipitation, soil texture and topographic information, and variables derived from these. Technological trend is represented by piecewise linear and/or quadratic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-1979) demonstrate that biases are small and that performance based on root mean square error appears to be acceptable for the intended AgRISTARS large area applications. The model is objective, adequate, timely, simple, and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Sparse and stable Markowitz portfolios
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-01-01
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio. PMID:19617537
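A minimal sketch of the penalized objective described above, a least-squares fit to a target return with an L1 penalty on the portfolio weights, using synthetic returns. The paper additionally imposes a sum-to-one budget constraint, which is omitted here for brevity, and the penalty strength is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic daily returns in percent: T observations of N assets.
rng = np.random.default_rng(4)
T, N = 250, 50
R = rng.standard_normal((T, N)) + 0.04
rho = 0.05                               # target portfolio return (percent)
y = np.full(T, rho)

# min_w (1/2T)||y - R w||^2 + tau ||w||_1, where tau tunes sparsity.
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100_000)
w = lasso.fit(R, y).coef_
print(f"active positions: {(np.abs(w) > 1e-10).sum()} of {N}")
print(f"short positions (allowed but few): {(w < -1e-10).sum()}")
```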
Drew, L.J.
1979-01-01
In this study the selection of the optimum type of drilling pattern to be used when exploring for elliptical shaped targets is examined. The rhombic pattern is optimal when the targets are known to have a preferred orientation. Situations can also be found where a rectangular pattern is as efficient as the rhombic pattern. A triangular or square drilling pattern should be used when the orientations of the targets are unknown. The way in which the optimum hole spacing varies as a function of (1) the cost of drilling, (2) the value of the targets, (3) the shape of the targets, and (4) the target occurrence probabilities was determined for several examples. Bayes' rule was used to show how target occurrence probabilities can be revised within a multistage pattern drilling scheme.
1980-08-01
If the mean of the response variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ(Yᵢ − Ȳ)², summed over i = 1, …, n (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best" ... discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of
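A small numeric companion to equations (2.6)-(2.8): computing SSTO, SSE, and the R² criterion for a toy linear fit (the data are invented).

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination R^2 = 1 - SSE/SSTO (eq. 2.8); the
    regression sum of squares follows from SSR = SSTO - SSE (eq. 2.7)."""
    ssto = np.sum((y - y.mean()) ** 2)   # total sum of squares, eq. 2.6
    sse = np.sum((y - y_hat) ** 2)       # error sum of squares
    return 1.0 - sse / ssto

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + rng.standard_normal(50)
y_hat = np.polyval(np.polyfit(x, y, 1), x)
print(r_squared(y, y_hat))               # close to 1 for this strong trend
```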
Upper Kalamazoo watershed land cover inventory. [based on remote sensing
NASA Technical Reports Server (NTRS)
Richason, B., III; Enslin, W.
1973-01-01
Approximately 1000 square miles of the eastern portion of the watershed were inventoried based on remote sensing imagery. The classification scheme, imagery and interpretation procedures, and a cost analysis are discussed. The distributions of land cover within the area are tabulated.
The distance function effect on k-nearest neighbor classification for medical datasets.
Hu, Li-Yu; Huang, Min-Wei; Ke, Shih-Wen; Tsai, Chih-Fong
2016-01-01
K-nearest neighbor (k-NN) classification is a conventional non-parametric classifier, which has been used as the baseline classifier in many pattern classification problems. It is based on measuring the distances between the test data and each of the training data to decide the final classification output. Although the Euclidean distance function is the most widely used distance metric in k-NN, no study has examined the classification performance of k-NN under different distance functions, especially for various medical domain problems. Therefore, the aim of this paper is to investigate whether the distance function can affect the k-NN performance over different medical datasets. Our experiments are based on three different types of medical datasets containing categorical, numerical, and mixed types of data, and four different distance functions, including Euclidean, cosine, Chi square, and Minkowsky, are used during k-NN classification individually. The experimental results show that using the Chi square distance function is the best choice for the three different types of datasets. However, the cosine and Euclidean (and Minkowsky) distance functions perform the worst over the mixed type of datasets. In this paper, we demonstrate that the chosen distance function can affect the classification accuracy of the k-NN classifier. For medical domain datasets including categorical, numerical, and mixed types of data, k-NN based on the Chi square distance function performs the best.
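As a hedged illustration of swapping the distance function in k-NN, the sketch below plugs a Chi-square distance into scikit-learn's classifier; the synthetic count data and label rule are invented, and the small eps guard against division by zero is an implementation detail, not part of the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def chi_square_dist(x, y, eps=1e-10):
    """Chi-square distance between non-negative feature vectors:
    sum_i (x_i - y_i)^2 / (x_i + y_i)."""
    return np.sum((x - y) ** 2 / (x + y + eps))

rng = np.random.default_rng(6)
X = rng.poisson(5.0, size=(200, 10)).astype(float)   # non-negative counts
labels = (X[:, 0] + X[:, 1] > 10).astype(int)        # toy label rule

knn = KNeighborsClassifier(n_neighbors=5, metric=chi_square_dist,
                           algorithm="brute")
knn.fit(X[:150], labels[:150])
print("accuracy:", knn.score(X[150:], labels[150:]))
```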
Zaman, Khalid
2018-02-01
Renewable energy sources are considered a vital factor in promoting global green business. Assessing the environmental cost of doing business is a prerequisite for analyzing sustainable policies that help eco-minded entrepreneurs produce healthier goods. This study examines the impact of renewable energy sources (i.e., hydro energy, biofuel energy, and wind energy) on the environmental cost of doing business in a panel of BRICS (Brazil, Russian Federation, India, China, and South Africa) countries for the period 1995-2015. The study employed principal component analysis to construct an "integrated environmental index" from three alternative and plausible factors: carbon dioxide emissions, fossil fuel energy consumption, and chemicals used in the manufacturing process. The environmental index is used as an interactive term with three cost of doing business indicators, namely the business disclosure index, the cost of business start-up procedures, and the logistics performance index, to form environmental cost of doing business (ECDB) indicators. The results of a three-stage least squares (3SLS) estimator show that foreign direct investment (FDI) inflows supported green business while trade openness deteriorated the environment, which partially validates the "pollution haven hypothesis" (PHH) in the panel of countries. There is no evidence for the environmental Kuznets curve (EKC) hypothesis; however, there is a monotonic decreasing relationship between per capita income and the ECDB indicators. Hydro energy supports a sustainable business environment, while biofuel consumption worsens the environmental cost associated with business start-up procedures. Finally, wind energy subsequently affected the ECDB indicators in the panel of BRICS countries. The overall results conclude that growth factors and energy sources both have a considerable impact on the cost of doing business; there is therefore a pressing need to formulate sustainable policies that attract green business across countries.
Cost analysis of aquatic biomass systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-07-25
A cost analysis of aquatic biomass systems was conducted to provide the U.S. Department of Energy with engineering cost information on which to base decisions in the area of planning and executing research and development programs dealing with aquatic biomass as an alternative energy resource. Calculations show that several hundred 100 square mile aquatic biomass farms, the size selected by DOE staff for this analysis, would be needed to provide meaningful supplies of energy. With this background, specific engineering analyses were conducted on two original design concepts for 100 square mile aquatic biomass energy farms. These systems were an open-ocean system and a land-based system; outstanding experts in all aspects of this project were called upon to participate and provide information in projecting the costs for harvested aquatic biomass for these systems. It was found that the projections of costs for harvested open-ocean biomass, utilizing optimistic assumptions of scientific and engineering design parameters, appear to be above any practical costs to be considered for energy. One of the major limitations is the need to provide upwelled sub-surface water containing needed nutrients, for which pumping energy is required. It is concluded from this analysis that large scale land-based aquatic biomass farms may merit development, but perhaps within a much narrower range than heretofore investigated. Aquatic plants which appear to have potential for development as an energy resource are the so-called emersed plants, or angiosperms, including many types of freshwater weeds such as duckweed, Hydrilla, and water hyacinths. It is recommended that substantially greater basic and applied knowledge of these aquatic plants be developed, especially on growth rates and nutrient requirements.
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous formulations. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
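A compact sketch of variable projection on a two-exponential model: the inner linear solve eliminates the linear coefficients, so the outer optimizer sees only the nonlinear parameters. The model, data, and starting point are invented, and the paper's matrix-decomposition refinement is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model y ~ a1*exp(-b1*t) + a2*exp(-b2*t): linear in (a1, a2),
# nonlinear in (b1, b2).
rng = np.random.default_rng(7)
t = np.linspace(0, 4, 200)
y = 3.0 * np.exp(-1.5 * t) + 1.0 * np.exp(-0.3 * t) \
    + 0.01 * rng.standard_normal(t.size)

def projected_residual(b):
    """For fixed nonlinear parameters b, solve the inner linear
    least-squares problem and return the projected residual."""
    Phi = np.exp(-np.outer(t, b))        # basis matrix, columns exp(-b_j t)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ a - y

sol = least_squares(projected_residual, x0=[1.0, 0.1])
Phi = np.exp(-np.outer(t, sol.x))
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("nonlinear:", sol.x, "linear:", a)
```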
Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas
2015-06-01
The spokes method combined with parallel transmission is a promising technique to mitigate the B1+ inhomogeneity at ultra-high field in 2D imaging. To date, however, the spoke placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and, in parallel for all of them, optimizes the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2*-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s.
Robust Nonrigid Multimodal Image Registration using Local Frequency Maps*
Jian, Bing; Vemuri, Baba C.; Marroquin, José L.
2008-01-01
Automatic multi-modal image registration is central to numerous tasks in medical imaging today and has a vast range of applications, e.g., image guidance and atlas construction. In this paper, we present a novel multi-modal 3D non-rigid registration algorithm wherein the 3D images to be registered are represented by their corresponding local frequency maps, efficiently computed using the Riesz transform as opposed to the popularly used Gabor filters. The non-rigid registration between these local frequency maps is formulated in a statistically robust framework involving the minimization of the integral squared error, a.k.a. L2E (L2 error). This error is expressed as the squared difference between the true density of the residual (which is the squared difference between the non-rigidly transformed reference and the target local frequency representations) and a Gaussian or mixture-of-Gaussians density approximation of the same. The non-rigid transformation is expressed in a B-spline basis to achieve the desired smoothness in the transformation as well as computational efficiency. The key contributions of this work are (i) the use of the Riesz transform to achieve better efficiency in computing the local frequency representation in comparison to Gabor filter-based approaches, (ii) a new mathematical model for local-frequency-based non-rigid registration, and (iii) analytic computation of the gradient of the robust non-rigid registration cost function to achieve efficient and accurate registration. The proposed non-rigid L2E-based registration is a significant extension of research reported in the literature to date. We present experimental results for registering several real data sets with synthetic and real non-rigid misalignments. PMID:17354721
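For intuition about the L2E criterion used above, here is the standard closed-form version for a single Gaussian model fitted to contaminated data, a stand-in for the residual-density fitting in the registration cost rather than the authors' code; the data and mixture proportions are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def l2e(params, x):
    """L2E criterion for a Gaussian model: the integrated squared error
    between model and true density reduces (up to a constant) to
    1/(2 sigma sqrt(pi)) - (2/n) sum_i phi(x_i; mu, sigma)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)            # log-parametrize to keep sigma > 0
    return 1.0 / (2 * sigma * np.sqrt(np.pi)) - 2.0 * norm.pdf(x, mu, sigma).mean()

rng = np.random.default_rng(10)
x = np.concatenate([rng.normal(0, 1, 900), rng.normal(8, 1, 100)])  # 10% outliers
res = minimize(l2e, x0=[np.median(x), 0.0], args=(x,))
print(res.x[0], np.exp(res.x[1]))        # near (0, 1) despite the outliers
```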
Optimal secondary source position in exterior spherical acoustical holophony
NASA Astrophysics Data System (ADS)
Pasqual, A. M.; Martin, V.
2012-02-01
Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Moreover, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position significantly improves the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.
Moussaoui, Ahmed; Bouziane, Touria
2016-01-01
The local radial point interpolation method (LRPIM) is a meshless method that allows simple implementation of the essential boundary conditions and is less costly than moving least squares (MLS) methods. It avoids the singularity associated with a polynomial basis by using radial basis functions. In this paper, we present a study of a 2D problem of an elastic homogeneous rectangular plate using LRPIM. Our numerical investigations concern the influence of different shape parameters on the domain of convergence and on accuracy, using the thin plate spline radial basis function. We also compare numerical results for different materials and characterize the convergence domain by giving maximum and minimum values as a function of the number of distributed nodes. The analytical solution for the deflection confirms the numerical results. The essential points of the method are: •The LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate.•The convergence of the LRPIM method depends on a number of parameters derived from the local weak form and sub-domains.•The effect of the number of distributed nodes is studied by varying the material and the radial basis function (TPS).
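The thin plate spline basis mentioned above can be tried directly with SciPy's radial basis interpolator; this is a generic scattered-data sketch with invented nodes, not an LRPIM solver.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Thin-plate-spline radial basis interpolation over scattered 2D nodes,
# the same basis family LRPIM uses to avoid polynomial-basis singularity.
rng = np.random.default_rng(11)
nodes = rng.uniform(0, 1, size=(80, 2))            # scattered nodes on a plate
vals = np.sin(np.pi * nodes[:, 0]) * np.sin(np.pi * nodes[:, 1])
interp = RBFInterpolator(nodes, vals, kernel="thin_plate_spline")
grid = np.array([[0.5, 0.5], [0.25, 0.75]])
print(interp(grid))                                # ~ [1.0, 0.5]
```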
NASA Astrophysics Data System (ADS)
Vargas, S. A., Jr.; Tweedie, C. E.; Oberbauer, S. F.
2013-12-01
The need to improve the spatial and temporal scaling and extrapolation of plot-level measurements of ecosystem structure and function to the landscape level has been identified as a persistent research challenge in the arctic terrestrial sciences. Although there has been a range of advances in remote sensing capabilities on satellite, fixed wing, helicopter, and unmanned aerial vehicle platforms over the past decade, these present costly, logistically challenging (especially in the Arctic), and technically demanding solutions for applications in an arctic environment. Here, we present a relatively low cost alternative to these platforms that uses kite aerial photography (KAP). Specifically, we demonstrate how digital elevation models (DEMs) were derived from this system for a coastal arctic landscape near Barrow, Alaska. DEMs of this area acquired from other remote sensing platforms such as Terrestrial Laser Scanning (TLS), Airborne Laser Scanning, and satellite imagery were also used in this study to determine the accuracy and validity of the results. DEMs interpolated using the KAP system were comparable to DEMs derived from the other platforms. For remote sensing of areas of interest ranging from an acre to a square kilometer, KAP has proven to be a low cost solution from which derived products that interface ground and satellite platforms can be developed by users with access to low-tech solutions and a limited knowledge of remote sensing.
Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model
NASA Astrophysics Data System (ADS)
Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar
2016-07-01
We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
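The gradient-consistency issue highlighted above can be demonstrated on a toy nonlinear inversion: with a Tikhonov-regularized cost and its fully coupled analytic gradient, a finite-difference check agrees, whereas dropping coupling terms from the gradient (as in a one-way coupled adjoint) would break this agreement. All operators, sizes, and the regularization weight below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 30, 60
A = rng.standard_normal((m, n))
L = np.eye(n) - np.eye(n, k=1)             # first-difference regularization operator
alpha = 1e-2
q_true = np.sin(np.linspace(0, np.pi, n))
d = A @ np.tanh(q_true) + 0.01 * rng.standard_normal(m)

def cost(q):
    """Tikhonov-regularized misfit J(q) = 0.5||F(q) - d||^2 + 0.5*alpha*||Lq||^2
    with a toy nonlinear forward map F(q) = A tanh(q)."""
    r = A @ np.tanh(q) - d
    return 0.5 * r @ r + 0.5 * alpha * np.sum((L @ q) ** 2)

def grad(q):
    """Analytic gradient consistent with the discretized cost."""
    r = A @ np.tanh(q) - d
    return (1 - np.tanh(q) ** 2) * (A.T @ r) + alpha * L.T @ (L @ q)

# Directional finite-difference check of gradient/cost consistency.
q0, v, h = rng.standard_normal(n), rng.standard_normal(n), 1e-6
fd = (cost(q0 + h * v) - cost(q0 - h * v)) / (2 * h)
print(abs(fd - grad(q0) @ v) / abs(fd))    # tiny: gradient matches the cost
```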
NASA Astrophysics Data System (ADS)
Dumlao, Morphy C.; Xiao, Dan; Zhang, Daming; Fletcher, John; Donald, William A.
2017-04-01
Active capillary dielectric barrier discharge ionization (DBDI) is emerging as a compact, low-cost, and robust method to form intact ions of small molecules for detection in near real time by portable mass spectrometers. Here, we demonstrate that by using a 10 kHz, 2.5 kVp-p high-voltage square-wave alternating current plasma, active capillary DBDI can consume less than 1 μW of power. In contrast, the power consumed using a sine or triangle alternating current waveform is more than two orders of magnitude higher than that for the square waveform to obtain a similar voltage for plasma generation. Moreover, the plasma obtained using a square waveform can be significantly more homogeneous than that obtained using sine and triangle waveforms. Protonated dimethyl methylphosphonate (DMMP) and deprotonated perfluorooctanoic acid (PFOA) can be detected at about the same or higher abundances using square-wave DBDI mass spectrometry compared with the use of sine and triangle waveforms. By use of benzylammonium thermometer ions, the extent of internal energy deposition using square, sine, or triangle waveform excited plasmas is essentially the same at the optimum voltages for ion detection. Using an H-bridge circuit driving a transformer optimized to reduce losses, square-wave active capillary DBDI can be continuously powered for 50 h by a common 9 V battery (PP3).
NASA Astrophysics Data System (ADS)
Smith, Eric Ryan; Farrow, Darcie A.; Jonas, David M.
2005-07-01
Four-wave-mixing nonlinear-response functions are given for intermolecular and intramolecular vibrations of a perpendicular dimer and intramolecular vibrations of a square-symmetric molecule containing a doubly degenerate state. A two-dimensional particle-in-a-box model is used to approximate the electronic wave functions and obtain harmonic potentials for nuclear motion. Vibronic interactions due to symmetry-lowering distortions along Jahn-Teller active normal modes are discussed. Electronic dephasing due to nuclear motion along both symmetric and asymmetric normal modes is included in these response functions, but population transfer between states is not. As an illustration, these response functions are used to predict the pump-probe polarization anisotropy in the limit of impulsive excitation.
Tirados, Inaki; Esterhuizen, Johan; Rayaisse, Jean Baptiste; Diarrassouba, Abdoulaye; Kaba, Dramane; Mpiana, Serge; Vale, Glyn A.; Solano, Philippe; Lehane, Michael J.; Torr, Stephen J.
2011-01-01
Palpalis-group tsetse, particularly the subspecies of Glossina palpalis and G. fuscipes, are the most important transmitters of human African trypanosomiasis (HAT), transmitting >95% of cases. Traps and insecticide-treated targets are used to control tsetse, but more cost-effective baits might be developed through a better understanding of the fly's host-seeking behaviour. Electrocuting grids were used to assess the numbers of G. palpalis palpalis and G. fuscipes quanzensis attracted to, and landing on, square or oblong targets of black cloth varying in size from 0.01 m² to 1.0 m². For both species, increasing the size of a square target from 0.01 m² (dimensions = 0.1×0.1 m) to 1.0 m² (1.0×1.0 m) increased the catch ∼4×; however, the numbers of tsetse killed per unit area of target declined with target size, suggesting that the most cost-efficient targets are not the largest. For G. f. quanzensis, horizontal oblongs (1 m wide × 0.5 m high) caught ∼1.8× more tsetse than vertical ones (0.5 m wide × 1.0 m high), but the opposite applied for G. p. palpalis. Shape preference was consistent over the range of target sizes. For G. p. palpalis, square targets caught as many tsetse as the oblong; while the evidence is less strong, the same appears to apply to G. f. quanzensis. The results suggest that targets used to control G. p. palpalis and G. f. quanzensis should be square, and that the most cost-effective designs, as judged by the numbers of tsetse caught per area of target, are likely to be in the region of 0.25×0.25 m. The preference of G. p. palpalis for vertical oblongs is unique amongst tsetse species, and it is suggested that this response might be related to its anthropophagic behaviour and hence its importance as a vector of HAT. PMID:21829734
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
Distortion product otoacoustic emission (DPOAE) signals can be used in the diagnosis of hearing loss, so they have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE, evoked by continuously sweeping tones, is presented based on the least-squares-fit (LSF) algorithm. We used a weighted error function as the loss function, with locally defined weighting matrices, to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated by least squares using the local weighting matrices. Simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates the DPOAE and stimuli more accurately and stably, which facilitates extraction of a clearer DPOAE fine structure.
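A minimal numerical sketch of the two-stage scheme described above (the sinusoidal design matrix, the block-wise grouping rule, and the inverse-variance weights are our illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "DPOAE" signal: a single tone in noise whose variance drifts,
# so residual-based local weights should help.
fs, f = 8000.0, 1234.0
t = np.arange(2048) / fs
noise_sd = np.linspace(0.5, 2.0, t.size)        # heteroscedastic noise
y = 0.8 * np.cos(2 * np.pi * f * t + 0.3) + noise_sd * rng.standard_normal(t.size)

# Design matrix for amplitude/phase of the known frequency (LSF-style).
X = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])

# Stage 1: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

# Stage 2: group residuals into local blocks, estimate each block's noise
# variance, and refit with inverse-variance (locally weighted) least squares.
block = 256
w = np.empty_like(y)
for i in range(0, y.size, block):
    w[i:i + block] = 1.0 / np.var(resid[i:i + block])

Xw = X * w[:, None]
beta_lwlse = np.linalg.solve(X.T @ Xw, Xw.T @ y)

print("OLS estimate   :", beta_ols)
print("LWLSE estimate :", beta_lwlse)
```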
Richards, Selena; Miller, Robert; Gemperline, Paul
2008-02-01
An extension to the penalty alternating least squares (P-ALS) method, called multi-way penalty alternating least squares (NWAY P-ALS), is presented. Optionally, hard constraints (no deviation from predefined constraints) or soft constraints (small deviations from predefined constraints) were applied through a row-wise penalty least squares function. NWAY P-ALS was applied to multi-batch near-infrared (NIR) data acquired from the base-catalyzed esterification reaction of acetic anhydride in order to resolve the concentration and spectral profiles of 1-butanol and the other reaction constituents. Application of the NWAY P-ALS approach resulted in a reduction of the number of active constraints at the solution point, while the batch column-wise augmentation allowed hard constraints in the spectral profiles and resolved rank deficiency problems of the measurement matrix. The results were compared with multi-way multivariate curve resolution (MCR)-ALS results using hard and soft constraints to determine whether any advantages had been gained through using the weighted least squares function of NWAY P-ALS over the MCR-ALS resolution.
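To make the alternating-least-squares machinery concrete, here is a toy soft-constrained ALS on a bilinear model D ≈ C Sᵀ, with a penalty row pulling the initial concentrations toward known values (a schematic of the penalty idea only; the two-component system, the penalty weight, and the clipping-style non-negativity are invented for the example and are not the published NWAY P-ALS algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bilinear data: two components with first-order kinetics and Gaussian
# spectra, D (times x wavelengths) = C_true @ S_true.T + noise.
t = np.linspace(0, 10, 60)
wl = np.linspace(0, 1, 80)
C_true = np.column_stack([np.exp(-0.4 * t), 1 - np.exp(-0.4 * t)])
S_true = np.column_stack([np.exp(-((wl - 0.3) / 0.08) ** 2),
                          np.exp(-((wl - 0.7) / 0.10) ** 2)])
D = C_true @ S_true.T + 0.01 * rng.standard_normal((t.size, wl.size))

# Soft constraint: we "know" the initial concentrations (1, 0); enforce
# them with a penalty row instead of a hard reset (small deviations allowed).
c0, penalty = np.array([1.0, 0.0]), 10.0

C = np.abs(rng.standard_normal((t.size, 2)))   # random start
for _ in range(200):
    # S-step: least squares for the spectra, then clip to non-negativity.
    S = np.linalg.lstsq(C, D, rcond=None)[0].T.clip(min=0)
    # C-step: ordinary rows fit D; one augmented row pulls C[0] toward c0.
    for i in range(t.size):
        A, b = S, D[i]
        if i == 0:
            A = np.vstack([S, penalty * np.eye(2)])
            b = np.concatenate([D[0], penalty * c0])
        C[i] = np.linalg.lstsq(A, b, rcond=None)[0].clip(min=0)

print("recovered C[0]:", C[0])          # ~ (1, 0) up to scale/permutation
print("residual norm :", np.linalg.norm(D - C @ S.T))
```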
Code of Federal Regulations, 2014 CFR
2014-10-01
... excess of double the square footage of the original facility and all physical improvements. Constructing... square footage of the original facility and all physical improvements. Department means the Department of...) Results in substantial functional limitation in 3 or more of the following major life activities: (1) Self...
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean-square sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method in which the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional low-pass filters.
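A minimal sketch of the idea, representing a noisy exponential in Legendre space and denoising by truncating the coefficient vector (the data, the order K, and the final log-linear readout are illustrative choices, not the authors' parameter-retrieval algorithm):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)

# Noisy single exponential sampled uniformly; rescale time to [-1, 1],
# the natural domain of Legendre polynomials.
t = np.linspace(0, 5, 400)
x = 2.0 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)
u = 2 * (t - t[0]) / (t[-1] - t[0]) - 1

# Represent the data by its first K Legendre coefficients (a least-squares
# projection); reconstructing from them filters the noise without the
# phase shift of a causal low-pass filter.
K = 12
coef = legendre.legfit(u, x, deg=K - 1)
x_filtered = legendre.legval(u, coef)

# A crude time-constant readout from the denoised curve (early samples only,
# where the exponential dominates the residual noise).
slope = np.polyfit(t[:200], np.log(np.clip(x_filtered[:200], 1e-6, None)), 1)[0]
print(f"true tau = {1/1.3:.3f}, estimated tau = {-1.0/slope:.3f}")
```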
NASA Astrophysics Data System (ADS)
Sein, Lawrence T.
2011-08-01
Hammett parameters σ' were determined from vertical ionization potentials, vertical electron affinities, adiabatic ionization potentials, adiabatic electron affinities, and HOMO and LUMO energies of a series of N,N'-bis(3',4'-substituted-phenyl)-1,4-quinonediimines computed at the B3LYP/6-311+G(2d,p) level on B3LYP/6-31G* molecular geometries. These parameters were then least-squares fit as a function of literature Hammett parameters. For N,N'-bis(4'-substituted-phenyl)-1,4-quinonediimines, the least-squares fits demonstrated excellent linearity, with the square of Pearson's correlation coefficient (r²) greater than 0.98 for all isomers. For N,N'-bis(3'-substituted-3'-aminophenyl)-1,4-quinonediimines, the least-squares fits were less linear, with r² approximately 0.70 for all isomers when derived from calculated vertical ionization potentials, but r² from calculated vertical electron affinities was usually greater than 0.90.
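The regression step amounts to an ordinary linear least-squares fit of a computed descriptor against literature σ values; a sketch with invented numbers (the descriptor values below are not from the paper):

```python
import numpy as np

# Hypothetical computed vertical ionization potentials (eV) vs. literature
# Hammett sigma values; both arrays invented for illustration.
sigma_lit = np.array([-0.27, -0.17, 0.0, 0.06, 0.23, 0.45, 0.66])
vip_calc = np.array([7.10, 7.18, 7.31, 7.36, 7.50, 7.66, 7.83])

slope, intercept = np.polyfit(sigma_lit, vip_calc, 1)
r = np.corrcoef(sigma_lit, vip_calc)[0, 1]
print(f"VIP = {slope:.3f} * sigma + {intercept:.3f}, r^2 = {r**2:.3f}")
```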
ESTCP Cost and Performance Report (UX-9909)
2006-06-01
ERIC Educational Resources Information Center
Smith, J. McCree
Three methods for the preparation of maintenance budgets are discussed--(1) a traditional method, inconclusive and obsolete, based on gross square footage, (2) the formula approach method based on building classification (wood-frame, masonry-wood, masonry-concrete) with maintenance cost factors for each type plus custodial service rates by type of…
Active Engine Mount Technology for Automobiles
NASA Technical Reports Server (NTRS)
Rahman, Z.; Spanos, J.
1996-01-01
We present a narrow-band tracking controller using a variant of the Least Mean Square (LMS) algorithm [1,2,3] for suppressing automobile engine/drive-train vibration disturbances. The algorithm presented here has a simple structure and may be implemented in a low-cost microcontroller.
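A generic two-weight narrow-band LMS canceller illustrates the structure (this is the textbook algorithm, not the specific variant of [1,2,3]; the disturbance frequency, step size, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Narrow-band disturbance at the engine firing frequency plus broadband noise.
fs, f0 = 1000.0, 30.0
n = np.arange(5000)
d = np.sin(2 * np.pi * f0 * n / fs + 0.7) + 0.1 * rng.standard_normal(n.size)

# Two-weight LMS with a synchronous sine/cosine reference pair: the filter
# learns the amplitude and phase of the tone and subtracts it.
w = np.zeros(2)
mu = 0.01
e = np.empty(n.size)
for k in range(n.size):
    x = np.array([np.sin(2 * np.pi * f0 * k / fs),
                  np.cos(2 * np.pi * f0 * k / fs)])
    y = w @ x                 # controller output (tone estimate)
    e[k] = d[k] - y           # residual vibration
    w += 2 * mu * e[k] * x    # LMS weight update

print("residual RMS, first 500 samples :", np.sqrt(np.mean(e[:500] ** 2)))
print("residual RMS, last 500 samples  :", np.sqrt(np.mean(e[-500:] ** 2)))
```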
Regression Analysis: Instructional Resource for Cost/Managerial Accounting
ERIC Educational Resources Information Center
Stout, David E.
2015-01-01
This paper describes a classroom-tested instructional resource, grounded in principles of active learning and constructivism, that embraces two primary objectives: "demystify" for accounting students technical material from statistics regarding ordinary least-squares (OLS) regression analysis--material that students may find obscure or…
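The classroom use case is the classic mixed-cost decomposition: regress total overhead on an activity driver to recover fixed and variable components. A sketch with invented figures:

```python
import numpy as np

# Classic cost-accounting use of OLS: estimate fixed and variable cost
# from activity data (machine hours vs. total overhead; figures invented).
hours = np.array([400, 520, 610, 700, 860, 930], dtype=float)
cost = np.array([9100, 10150, 11000, 11650, 13200, 13700], dtype=float)

b, a = np.polyfit(hours, cost, 1)   # slope = variable rate, intercept = fixed
print(f"cost = {a:,.0f} + {b:.2f} * machine_hours")
print("predicted cost at 800 h:", round(a + b * 800))
```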
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meikle, T.; Ballek, L.; Briggs, B.
This study investigates the cost effectiveness of three separate reclamation methods utilized in the long-term establishment of Big Sage (Artemisia tridentata var. wyomingensis). Direct seeding and planting with four cubic inch and ten cubic inch containerized stock were compared using five 36 square meter plots per treatment within a fenced randomized block. Seed plots were hand broadcast at a rate of 2 kilograms per hectare and mulched with certified weed-free wheat straw. Containerized stock plots were planted at a density of one per square meter. Controls with no seeding or planting were established to differentiate actual plant production/reproduction from seed bank recruitment and migration from replaced topsoil and surrounding native areas. Stem density (stems/m²), plant height (cm), and plant reproduction (seedlings/m²) data will be gathered every spring and fall for three years (1994-1997). Final analysis of the data will relate establishment success to cost efficiency. This initial report on the study reviews only seedling establishment based on first-year data.
Investigation of low cost, high reliability sealing techniques for hybrid microcircuits, phase 1
NASA Technical Reports Server (NTRS)
Perkins, K. L.; Licari, J. J.
1976-01-01
A preliminary investigation was made to determine the feasibility of using adhesive package sealing for hybrid microcircuits. The major effort consisted of: (1) surveying representative hybrid manufacturers to assess the current use of adhesives for package sealing; (2) making a cost comparison of metallurgical versus adhesive package sealing; (3) determining the seal integrity of gold-plated flatpack-type packages sealed with selected adhesives under thermal shock, temperature cycling, mechanical shock, and constant acceleration test environments; and (4) defining a more comprehensive study to continue the evaluation of adhesives for package sealing. Results showed that 1.27 cm square gold-plated flatpack-type packages sealed with the film adhesives and the paste adhesive retained their seal integrity after all tests, and that similarly prepared 2.54 cm square packages retained their seal integrity after all tests except the 10,000 g constant acceleration test. It is concluded that these results are encouraging, but by no means sufficient to establish the suitability of adhesives for sealing high-reliability hybrid microcircuits.
NASA Astrophysics Data System (ADS)
Samuel, R.; Thacker, C. M.; Maricq, A. V.; Gale, B. K.
2014-09-01
We present a new fabrication protocol for pneumatically controlled microvalve arrays (consisting of hundreds of microvalves) in PDMS substrates. The protocol utilizes rapid and cost-effective fabrication of molds using laser cutting of adhesive vinyl tapes and replica molding of PDMS; hence the protocol is fast, simple, and avoids cleanroom use. The results show that effective doormat-style microvalves can be easily fabricated in arrays by manipulating the stiffness of the actuating membrane through varying the valve-chamber area and shape. Three frequently used valve-chamber shapes (circle, square, and capsule) were tested and each showed advantages in different situations: circular valve chambers were best for small valves, square chambers for medium-sized valves, and capsule chambers for larger valves. An application of this protocol is demonstrated in the fabrication of a microfluidic 32-well plate for high-throughput manipulation of C. elegans for biomedical research.
DEMONSTRATION AND TESTING OF AN EER OPTIMIZER SYSTEM FOR DX AIR-CONDITIONERS
2017-10-07
[Fragmentary front matter: an acronym list (…Performance-Based Maintenance; PCS, Power Current Sensor; PLC, Programmable Logic Controller; ppm, Parts Per Million; PSIG, Pounds per Square Inch Gauge; PVS, Power…), demonstration sites (Patrick AFB, Cape Canaveral AFS, Jonathan Dickinson Military Tracking Annex, Malabar Annex, Ramey Solar Observatory), and a BLCC life-cycle cost summary (annual O&M cost, annual FD&D monitoring, energy savings $12,317, O&M net savings $493).]
Laviana, Aaron A; Tan, Hung-Jui; Hu, Jim C; Weizer, Alon Z; Chang, Sam S; Barocas, Daniel A
2018-03-01
To perform a bicenter, retrospective study of perioperative outcomes of retroperitoneal versus transperitoneal robotic-assisted laparoscopic partial nephrectomy (RALPN) and assess costs using time-driven activity-based costing (TDABC). We identified 355 consecutive patients who underwent RALPN at University of California Los Angeles and the University of Michigan during 2009-2016. We matched according to RENAL nephrometry score, date, and institution for 78 retroperitoneal versus 78 transperitoneal RALPN. Unadjusted analyses were performed using McNemar's chi-squared or paired t test, and adjusted analyses were performed using multivariable repeated measures regression analysis. From multivariable models, predicted probabilities were derived according to approach. Cost analysis was performed using TDABC. Patients treated with retroperitoneal versus transperitoneal RALPN were similar in age (P = 0.490), sex (P = 0.715), BMI (P = 0.273), and comorbidity (P = 0.393). Most tumors were posterior or lateral in both the retroperitoneal (92.3%) and transperitoneal (85.9%) groups. Retroperitoneal RALPN was associated with shorter operative times (167.0 versus 191.1 min, P = 0.001) and length of stay (LOS) (1.8 versus 2.7 days, P < 0.001). There were no differences in renal function preservation or cancer control. In adjusted analyses, retroperitoneal RALPN was 17.6 min shorter (P < 0.001) and had a 76% lower probability of LOS of at least 2 days (P < 0.001). Utilizing TDABC, transperitoneal RALPN added $2337 in cost when factoring in disposable equipment, operative time, LOS, and personnel. In two high-volume, tertiary centers, retroperitoneal RALPN is associated with reduced operative times and shortened LOS in posterior and lateral tumors, while sharing similar clinicopathologic outcomes, which may translate into lower healthcare costs. Further investigation into anterior tumors is needed.
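TDABC itself is simple arithmetic: each procedure's cost is the sum over resources of minutes consumed times the resource's capacity cost rate. A sketch using the operative times and LOS reported above but invented cost rates and staffing times (not the study's figures):

```python
# Time-driven activity-based costing: cost = sum over resources of
# (minutes used) x (capacity cost rate per minute). Rates and the nurse
# and ward-bed times below are invented for illustration.
capacity_cost_per_min = {"OR room": 35.0, "surgeon": 25.0, "nurse": 8.0,
                         "ward bed": 1.5}

def tdabc_cost(minutes_used: dict) -> float:
    """Sum each resource's time multiplied by its capacity cost rate."""
    return sum(capacity_cost_per_min[r] * m for r, m in minutes_used.items())

retroperitoneal = {"OR room": 167, "surgeon": 167, "nurse": 334,
                   "ward bed": 1.8 * 24 * 60}
transperitoneal = {"OR room": 191, "surgeon": 191, "nurse": 382,
                   "ward bed": 2.7 * 24 * 60}

delta = tdabc_cost(transperitoneal) - tdabc_cost(retroperitoneal)
print(f"transperitoneal minus retroperitoneal: ${delta:,.0f}")
```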
Confirmatory factor analysis of the female sexual function index.
Opperman, Emily A; Benson, Lindsay E; Milhausen, Robin R
2013-01-01
The Female Sexual Functioning Index (Rosen et al., 2000) was designed to assess the key dimensions of female sexual functioning using six domains: desire, arousal, lubrication, orgasm, satisfaction, and pain. A full-scale score was proposed to represent women's overall sexual function. The fifth revision to the Diagnostic and Statistical Manual (DSM) is currently underway and includes a proposal to combine desire and arousal problems. The objective of this article was to evaluate and compare four models of the Female Sexual Functioning Index: (a) single-factor model, (b) six-factor model, (c) second-order factor model, and (d) five-factor model combining the desire and arousal subscales. Cross-sectional and observational data from 85 women were used to conduct a confirmatory factor analysis on the Female Sexual Functioning Index. Local and global goodness-of-fit measures, the chi-square test of differences, squared multiple correlations, and regression weights were used. The single-factor model fit was not acceptable. The original six-factor model was confirmed, and good model fit was found for the second-order and five-factor models. Delta chi-square tests of differences supported best fit for the six-factor model, validating usage of the six domains. However, when revisions are made to the DSM-5, the Female Sexual Functioning Index can adapt to reflect these changes and remain a valid assessment tool for women's sexual functioning, as the five-factor structure was also supported.
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration, which can handle lateral velocity variation and turning waves. With little extra computation cost, Kirchhoff-type migration can obtain multiple outputs that have the same phase but different amplitudes, compared with other migration methods. The ratio of these amplitudes is helpful in computing quantities such as reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally variant velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, the upwind scheme is used to calculate travel times, and the Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computation cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least squares to aid in obtaining a successful migration.
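The damped least-squares smoothing mentioned last can be sketched in one dimension: minimize ||m − v||² + λ²||Dm||² over the smoothed velocity m, with D a second-difference operator (a generic Tikhonov formulation; the profile and λ are illustrative, not the thesis's implementation):

```python
import numpy as np

# Blocky 1-D velocity profile with a sharp interface.
v = np.concatenate([np.full(50, 2000.0), np.full(50, 3500.0)])

# Second-difference (roughening) operator D, shape (n-2, n).
n = v.size
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 50.0
# Normal equations of min ||m - v||^2 + lam^2 ||D m||^2:
# (I + lam^2 D^T D) m = v.
m = np.linalg.solve(np.eye(n) + lam**2 * (D.T @ D), v)
print("max |second difference| before:", np.abs(D @ v).max())
print("max |second difference| after :", np.abs(D @ m).max())
```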
Optimisation of active suspension control inputs for improved vehicle ride performance
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2016-07-01
A collocation-type control variable optimisation method is used in the paper to analyse the extent to which the fully active suspension (FAS) can improve vehicle ride comfort while preserving the wheel holding ability. The method is first applied for a cosine-shaped bump road disturbance of different heights, and for both quarter-car and full 10 degree-of-freedom vehicle models. A nonlinear anti-wheel-hop constraint is considered, and the influence of the bump preview time period is analysed. The analysis is then extended to the case of a square- or cosine-shaped pothole with different lengths, and the quarter-car model. In this case, the cost function is extended with FAS energy consumption and wheel damage resilience costs. The FAS action is found to be such as to provide a wheel hop over the pothole, in order to avoid or minimise the damage at the pothole trailing edge. In the case of a long pothole, when the FAS cannot provide the wheel hop, the wheel travels over the pothole bottom and then hops over the pothole trailing edge. The numerical optimisation results are accompanied by a simplified algebraic analysis.
Sampling for Soil Carbon Stock Assessment in Rocky Agricultural Soils
NASA Technical Reports Server (NTRS)
Beem-Miller, Jeffrey P.; Kong, Angela Y. Y.; Ogle, Stephen; Wolfe, David
2016-01-01
Coring methods commonly employed in soil organic C (SOC) stock assessment may not accurately capture soil rock fragment (RF) content or soil bulk density (ρb) in rocky agricultural soils, potentially biasing SOC stock estimates. Quantitative pits are considered less biased than coring methods but are invasive and often cost-prohibitive. We compared fixed-depth and mass-based estimates of SOC stocks (0.3-meter depth) for hammer, hydraulic push, and rotary coring methods relative to quantitative pits at four agricultural sites ranging in RF content from less than 0.01 to 0.24 cubic meters per cubic meter. Sampling costs were also compared. Coring methods significantly underestimated RF content at all rocky sites, but significant differences (p < 0.05) in SOC stocks between pits and corers were only found with the hammer method using the fixed-depth approach at the less than 0.01 cubic meters per cubic meter RF site (pit, 5.80 kilograms C per square meter; hammer, 4.74 kilograms C per square meter) and at the 0.14 cubic meters per cubic meter RF site (pit, 8.81 kilograms C per square meter; hammer, 6.71 kilograms C per square meter). The hammer corer also underestimated ρb at all sites, as did the hydraulic push corer at the 0.21 cubic meters per cubic meter RF site. No significant differences in mass-based SOC stock estimates were observed between pits and corers. Our results indicate that (i) calculating SOC stocks on a mass basis can overcome biases in RF and ρb estimates introduced by sampling equipment and (ii) a quantitative pit is the optimal sampling method for establishing reference soil masses, followed by rotary and then hydraulic push corers.
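The arithmetic behind the mass-based correction can be seen in a toy calculation (all numbers invented): a corer that underestimates bulk density biases a fixed-depth stock directly, whereas reporting the stock against a reference fine-soil mass removes that dependence.

```python
# Why mass-based SOC stocks resist coring bias (illustrative numbers only).
c_conc = 0.020            # kg C per kg fine soil (from lab analysis)
rho_b_true = 1300.0       # kg/m^3, true fine-earth bulk density (from pit)
rho_b_corer = 1050.0      # kg/m^3, hypothetical hammer-corer underestimate
depth = 0.3               # m

# Fixed-depth approach: the biased bulk density propagates directly.
stock_true = c_conc * rho_b_true * depth
stock_corer = c_conc * rho_b_corer * depth
print(f"fixed-depth: pit {stock_true:.2f}, corer {stock_corer:.2f} kg C/m^2")

# Mass-based approach: stocks are reported per reference fine-soil mass
# (established here by the quantitative pit), so the corer's density error
# no longer enters the stock estimate.
ref_mass = rho_b_true * depth          # kg fine soil per m^2
print(f"mass-based : {c_conc * ref_mass:.2f} kg C/m^2 for both methods")
```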
Marko, John F
2009-05-01
The Gauss linking number (Ca) of two flexible polymer rings which are tethered to one another is investigated. For ideal random walks, the mean linking-squared varies with the square root of polymer length, while for self-avoiding walks, linking-squared increases logarithmically with polymer length. The free-energy cost of linking of polymer rings is therefore strongly dependent on the degree of self-avoidance, i.e., on intersegment excluded volume. Scaling arguments and numerical data are used to determine the free-energy cost of fixed linking number in both the fluctuation and large-Ca regimes; for ideal random walks, for |Ca| > N^(1/4), the free energy of catenation is found to grow ∝ |Ca/N^(1/4)|^(4/3). When excluded volume interactions between segments are present, the free energy rapidly approaches a linear dependence on Gauss linking (dF/dCa ≈ 3.7 k_B T), suggestive of a novel "catenation condensation" effect. These results are used to show that condensation of long entangled polymers along their length, so as to increase excluded volume while decreasing the number of statistical segments, can drive disentanglement if a mechanism is present to permit topology change. For chromosomal DNA molecules, lengthwise condensation is therefore an effective means to bias topoisomerases to eliminate catenations between replicated chromatids. The results for mean-square catenation are also used to provide a simple approximate estimate for the "knotting length," or number of segments required to have a knot along a single circular polymer, explaining why the knotting length ranges from approximately 300 for an ideal random walk to 10^6 for a self-avoiding walk.
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
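One simple way to generate such an orthonormal discrete basis (equivalent, up to sign, to the Gram/discrete Tchebycheff polynomials) is QR factorization of a Vandermonde matrix on the uniform grid; the least-squares coefficients then reduce to inner products. A sketch (the data and degree are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Uniformly spaced data with noise.
x = np.linspace(-1, 1, 101)
y = np.sin(2.0 * x) + 0.02 * rng.standard_normal(x.size)

# Columns of Q are polynomials of degree 0..deg, orthonormal over the
# discrete points -- the numerically stable analogue of fitting raw powers.
deg = 7
V = np.vander(x, deg + 1, increasing=True)
Q, R = np.linalg.qr(V)

# Least-squares coefficients are plain inner products; no normal equations,
# and adding one more degree leaves the lower coefficients unchanged.
c = Q.T @ y
y_fit = Q @ c
print("RMS residual:", np.sqrt(np.mean((y - y_fit) ** 2)))
```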
Shang, Jianyuan; Geva, Eitan
2007-04-26
The quenching rate of a fluorophore attached to a macromolecule can be rather sensitive to its conformational state. The decay of the corresponding fluorescence lifetime autocorrelation function can therefore provide unique information on the time scales of conformational dynamics. The conventional way of measuring the fluorescence lifetime autocorrelation function involves evaluating it from the distribution of delay times between photoexcitation and photon emission. However, the time resolution of this procedure is limited by the time window required for collecting enough photons in order to establish this distribution with sufficient signal-to-noise ratio. Yang and Xie have recently proposed an approach for improving the time resolution, which is based on the argument that the autocorrelation function of the delay time between photoexcitation and photon emission is proportional to the autocorrelation function of the square of the fluorescence lifetime [Yang, H.; Xie, X. S. J. Chem. Phys. 2002, 117, 10965]. In this paper, we show that the delay-time autocorrelation function is equal to the autocorrelation function of the square of the fluorescence lifetime divided by the autocorrelation function of the fluorescence lifetime. We examine the conditions under which the delay-time autocorrelation function is approximately proportional to the autocorrelation function of the square of the fluorescence lifetime. We also investigate the correlation between the decay of the delay-time autocorrelation function and the time scales of conformational dynamics. The results are demonstrated via applications to a two-state model and an off-lattice model of a polypeptide.
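In compact form, the paper's central relation (our rendering of the sentence above; C_x denotes the autocorrelation function of x, τ the fluorescence lifetime, and normalization conventions are left implicit):

```latex
C_{\text{delay}}(t)
  \;=\; \frac{\bigl\langle \tau^{2}(0)\,\tau^{2}(t) \bigr\rangle}
             {\bigl\langle \tau(0)\,\tau(t) \bigr\rangle}
  \;=\; \frac{C_{\tau^{2}}(t)}{C_{\tau}(t)}
  \qquad\text{vs. the earlier approximation } C_{\text{delay}}(t) \propto C_{\tau^{2}}(t).
```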
Contragenic functions on spheroidal domains
NASA Astrophysics Data System (ADS)
García-Ancona, Raybel; Morais, Joao; Porter, R. Michael
2018-05-01
We construct bases of polynomials for the spaces of square-integrable harmonic functions which are orthogonal to the monogenic and antimonogenic $\mathbb{R}^3$-valued functions defined in a prolate or oblate spheroid.
Environmental regulations and energy for home heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, A.S.; Fishelson, G.; Gardner, J.L.
1975-01-01
A cost/benefit study of environmental policies supports banning coal as an urban fuel. In an analysis of the Chicago area, a coal ban resulted in costs exceeding benefits in only 16 of 172 square miles. In 54 areas benefits were double costs. Benefits include improved air quality, health, and savings on cleaning supplies, and showed no income or racial preferences. As coal use declines, natural gas and oil will increase in demand and price. Two methods for increasing natural gas price would be Federal deregulation of wellhead gas and a fuel policy allowing price increases in response to local shortages.
Direct measurement of the resistivity weighting function
NASA Astrophysics Data System (ADS)
Koon, D. W.; Chan, Winston K.
1998-12-01
We have directly measured the resistivity weighting function—the sensitivity of a four-wire resistance measurement to local variations in resistivity—for a square specimen of photoconducting material. This was achieved by optically perturbing the local resistivity of the specimen while measuring the effect of this perturbation on its four-wire resistance. The weighting function we measure for a square geometry with electrical leads at its corners agrees well with calculated results, displaying two symmetric regions of negative weighting which disappear when van der Pauw averaging is performed.
Response functions for sine- and square-wave modulations of disparity.
NASA Technical Reports Server (NTRS)
Richards, W.
1972-01-01
Depth sensations cannot be elicited by modulations of disparity that are more rapid than about 6 Hz, regardless of the modulation amplitude. Vergence tracking also fails at similar modulation rates, suggesting that this portion of the oculomotor system is limited by the behavior of disparity detectors. For sinusoidal modulations of disparity between 1/2 to 2 deg of disparity, most depth-response functions exhibit a low-frequency decrease that is not observed with square-wave modulations of disparity.
Investigation of test methods, material properties and processes for solar cell encapsulants
NASA Technical Reports Server (NTRS)
Willis, P. B.; Baum, B.
1983-01-01
The goal of the program is to identify, test, evaluate, and recommend encapsulation materials and processes for the fabrication of cost-effective and long-life solar modules. Of the $18 (1948 $) per square meter allocated for the encapsulation components, approximately 50% of the cost ($9/sq m) may be taken by the load-bearing component. Due to the proportionally high cost of this element, lower-cost materials were investigated. Wood-based products were found to be the lowest-cost structural materials for module construction; however, they require protection from rainwater and humidity in order to acquire dimensional stability. The cost of a wood-product-based substrate must, therefore, include raw material costs plus the cost of additional processing to impart hygroscopic inertness. This protection is provided by a two-step, or split, process in which a flexible laminate containing the cell string is prepared first in a vacuum process and then adhesively attached with a back cover film to the hardboard in a subsequent step.
NASA Astrophysics Data System (ADS)
Nakano, Kousuke; Sakai, Tomohiro
2018-01-01
We report on the performance of density functional theory (DFT) with the Tran-Blaha modified Becke-Johnson exchange potential and the random phase approximation dielectric function for optical constants of semiconductors in the ultraviolet-visible (UV-Vis) light region. We calculate optical bandgaps Eg, refractive indices n, and extinction coefficients k of 70 semiconductors listed in the Handbook of Optical Constants of Solids [(Academic Press, 1985), Vol. 1; (Academic Press, 1991), Vol. 2; and (Academic Press, 1998), Vol. 3] and compare the results with experimental values. The results show that the calculated bandgaps and optical constants agree well with the experimental values to within 0.440 eV for Eg, 0.246-0.299 for n, and 0.207-0.598 for k in root mean squared error (RMSE). The small values of the RMSEs indicate that the optical constants of semiconductors in the UV-Vis region can be quantitatively predicted even by a low-cost DFT calculation of this type.
Sparse Regression as a Sparse Eigenvalue Problem
NASA Technical Reports Server (NTRS)
Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai
2008-01-01
We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic ×10³ speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate, and more flexible in terms of choice of regularization.
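For orientation, a brute-force forward-selection baseline in the spirit of ORMP/forward-GSLS, without the partitioned-matrix-inverse speed-ups that are the paper's contribution (problem sizes and data are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sparse ground truth: y = A x + noise with 3 active columns out of 30.
n, p, k = 100, 30, 3
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[[3, 11, 22]] = [1.5, -2.0, 0.8]
y = A @ x_true + 0.05 * rng.standard_normal(n)

# Greedy forward selection: at each step add the column that most reduces
# the residual sum of squares of the refitted least-squares solution.
support = []
for _ in range(k):
    best_j, best_rss = None, np.inf
    for j in range(p):
        if j in support:
            continue
        S = support + [j]
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        rss = np.sum((y - A[:, S] @ coef) ** 2)
        if rss < best_rss:
            best_j, best_rss = j, rss
    support.append(best_j)

coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
print("selected columns:", sorted(support))   # expect [3, 11, 22]
print("coefficients    :", coef)
```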
Reclamation of Bay wetlands and disposal of dredge spoils: meeting two goals simultaneously
Hostettler, Frances D.; Pereira, Wilfred E.; Kvenvolden, Keith A.; Jones, David R.; Murphy, Fred
1997-01-01
San Francisco Bay is one of the world's largest urbanized estuarine systems, with a watershed that drains about 40 percent of the State of California. Its freshwater and saltwater marshes comprise approximately 125 square kilometers (48 square miles), compared to 2,200 square kilometers (850 square miles) before California began rapid development in 1850. This staggering reduction in tidal wetlands of approximately 95 percent has resulted in significant loss of habitat for many species of fish and wildlife. The need for wetlands is well documented: healthy and adequate wetlands are critical to the proper functioning of an estuarine ecosystem like San Francisco Bay.
Least-squares sequential parameter and state estimation for large space structures
NASA Technical Reports Server (NTRS)
Thau, F. E.; Eliazov, T.; Montgomery, R. C.
1982-01-01
This paper presents the formulation of simultaneous state and parameter estimation problems for flexible structures in terms of least-squares minimization problems. The approach combines an on-line order determination algorithm with least-squares algorithms for finding estimates of modal approximation functions, modal amplitudes, and modal parameters. It combines previous results on separable nonlinear least squares estimation with a regression analysis formulation of the state estimation problem. The technique makes use of sequential Householder transformations, which allows for sequential accumulation of the matrices required during the identification process. The technique is used to identify the modal parameters of a flexible beam.
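The sequential-accumulation idea can be sketched with a QR (Householder-based) update: keep only the triangular factor R and the rotated right-hand side, and re-triangularize as each block of measurements arrives (a generic information-QR recursion, not the paper's exact implementation; the problem sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Batch problem A theta ~= b, with measurement rows arriving sequentially.
p = 4
theta_true = rng.standard_normal(p)

R = np.zeros((p, p))      # accumulated triangular factor
qtb = np.zeros(p)         # accumulated Q^T b

for _ in range(50):       # 50 incoming blocks of 5 measurements each
    A_new = rng.standard_normal((5, p))
    b_new = A_new @ theta_true + 0.01 * rng.standard_normal(5)
    # Re-triangularize the stacked system with an orthogonal QR step;
    # only the small (p+5) x p stack is ever factored.
    Q, R = np.linalg.qr(np.vstack([R, A_new]), mode="reduced")
    qtb = Q.T @ np.concatenate([qtb, b_new])

theta = np.linalg.solve(R, qtb)
print("max parameter error:", np.abs(theta - theta_true).max())
```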
NASA Astrophysics Data System (ADS)
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-08-01
Lithium-ion battery systems employed in high-power-demand systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). In a series of two papers, we propose a system of algorithms based on a weighted recursive least quadratic squares parameter estimator that is able to determine the battery impedance and diffusion parameters for accurate state estimation. The functionality was proven on different battery chemistries with different aging conditions. The first paper investigates the general requirements on BMS for HEV/EV applications. In parallel, the commonly used methods for battery monitoring are reviewed to elaborate their strengths and weaknesses in terms of the identified requirements for on-line applications. Special emphasis is placed on real-time capability and memory-optimized code for cost-sensitive industrial or automotive applications in which low-cost microcontrollers must be used. Therefore, a battery model is presented which includes the influence of the Butler-Volmer kinetics on the charge-transfer process. Lastly, the mass transport process inside the battery is modeled in a novel state-space representation.
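A generic exponentially weighted recursive least squares loop shows the kind of estimator meant here (the two-parameter regressor, forgetting factor, and noise level are invented; this is not the authors' exact weighted estimator):

```python
import numpy as np

rng = np.random.default_rng(7)

# Linear-in-parameters measurement y = phi^T theta + noise; theta could
# stand for impedance parameters such as ohmic and charge-transfer terms.
theta_true = np.array([0.05, 0.8])
lam = 0.99                       # forgetting factor tracks slow aging drift
P = np.eye(2) * 1e3              # large initial covariance (weak prior)
theta = np.zeros(2)

for k in range(2000):
    phi = rng.standard_normal(2)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    # Weighted recursive least squares update with exponential forgetting.
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam

print("estimated parameters:", theta)   # ~ [0.05, 0.8]
```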
ERIC Educational Resources Information Center
Slessor, Catherine
2000-01-01
Discusses a joint venture project that succeeded in designing a large, new, award-winning addition to a college campus at a cost of just $30 per square foot. Design features, floor plans, and photographs are included. (GR)
1961-1968 New Construction Report.
ERIC Educational Resources Information Center
National Association of Physical Plant Administrators of Universities and Colleges, Richmond, IN.
137 NAPPA colleges and universities provided data for this summary. Projects are summarized by thirteen building classifications. Under each classification the following information headings are used--(1) name of institution, (2) project completion date, (3) gross square feet, (4) net assignable area, (5) construction costs, (6) number of stories,…
ERIC Educational Resources Information Center
Bers, Trudy; Gelfman, Arnold; Knapp, Jolene
2008-01-01
This article describes several community colleges that are taking a more business-like approach, trimming costs, improving efficiencies, and pursuing next-generation innovation--all while keeping the focus squarely where it should be: on learning. At Florida Keys Community College (FKCC), John Keho says his college is taking some strong--though…
Abe, Takumi; Tsuji, Taishi; Kitano, Naruki; Muraki, Toshiaki; Hotta, Kazushi; Okura, Tomohiro
2015-01-01
The purpose of this study was to investigate whether the degree of improvement in cognitive function achieved with an exercise intervention in community-dwelling older Japanese women is affected by the participant's baseline cognitive function and age. Eighty-eight women (mean age: 70.5±4.2 years) participated in a prevention program for long-term care. They completed the Square-Stepping Exercise (SSE) program once a week, 120 minutes/session, for 11 weeks. We assessed participants' cognitive function using five cognitive tests (5-Cog) before and after the intervention. We defined cognitive function as the 5-Cog total score and the change in cognitive function as the 5-Cog post-score minus the pre-score. We divided participants into four groups based on age (≤69 years or ≥70 years) and baseline cognitive function level (above vs. below the median). We conducted two-way analysis of variance. All four groups improved significantly in cognitive function after the intervention. There were no baseline cognitive function level × age interactions and no significant main effects of age, although significant main effects of baseline cognitive function level (P = 0.004, η² = 0.09) were observed. The Square-Stepping Exercise is an effective exercise for improving cognitive function. These results suggest that older adults with cognitive decline are more likely to improve their cognitive function with exercise than those who start the intervention with high cognitive function. Furthermore, during an exercise intervention, baseline cognitive function level may have more of an effect than a participant's age on the degree of cognitive improvement.
Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares
Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai
2013-01-01
Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923
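A DPLS soft sensor can be sketched by augmenting the regressor matrix with lagged copies of the measured inputs and fitting an ordinary PLS regression (synthetic data; the lag of 30 follows the abstract, but the plant model, inputs, and component count are invented):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)

# Synthetic plant: "humidity" depends on current and past values of two
# readily measured inputs (e.g., stack current, air flow) -- invented data.
T, lag = 1000, 30
U = rng.standard_normal((T, 2))
y = 0.6 * U[:, 0] + 0.3 * np.roll(U[:, 1], 5) + 0.05 * rng.standard_normal(T)

# Dynamic PLS: stack lagged copies of the inputs, then fit ordinary PLS
# regression on the augmented matrix (first `lag` wrapped rows are dropped).
X = np.hstack([np.roll(U, k, axis=0) for k in range(lag + 1)])[lag:]
y_lag = y[lag:]

split = 700
pls = PLSRegression(n_components=5).fit(X[:split], y_lag[:split])
y_hat = pls.predict(X[split:]).ravel()
rmse = np.sqrt(np.mean((y_lag[split:] - y_hat) ** 2))
print(f"test RMSE: {rmse:.4f}")
```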
Optimal measurement of ice-sheet deformation from surface-marker arrays
NASA Astrophysics Data System (ADS)
Macayeal, D. R.
Surface strain rate is best observed by fitting a strain-rate ellipsoid to the measured movement of a stake network or other collection of surface features, using a least squares procedure. The error of the resulting fit varies as 1/(L Δt √N), where L is the stake separation, Δt is the time period between the initial and final stake surveys, and N is the number of stakes in the network. This relation suggests that if N is sufficiently high, the traditional practice of revisiting stake-network sites on successive field seasons may be replaced by a less costly single-year operation. A demonstration using Ross Ice Shelf data shows that reasonably accurate measurements are obtained from 12 stakes after only 4 days of deformation. It is possible for the least squares procedure to aid airborne photogrammetric surveys because reducing the time interval between survey and re-survey permits better surface-feature recognition.
A square-plate ultrasonic linear motor operating in two orthogonal first bending modes.
Chen, Zhijiang; Li, Xiaotian; Chen, Jianguo; Dong, Shuxiang
2013-01-01
A novel square-plate piezoelectric ultrasonic linear motor operated in two orthogonal first bending vibration modes (B₁) is proposed. The piezoelectric vibrator of the linear motor is simply made of a single PZT ceramic plate (size: 15 × 15 × 2 mm) poled in its thickness direction. The top surface electrode of the square ceramic plate was divided into four active areas along its two diagonal lines for exciting the two orthogonal B₁ modes. The achieved driving force and speed of the linear motor are 1.8 N and 230 mm/s, respectively, under one pair of orthogonal voltage drives of 150 Vp-p at the resonance frequency of 92 kHz. The proposed linear motor has advantages over conventional ultrasonic linear motors, such as a relatively large driving force, a very simple working mode and structure, and low fabrication cost.
Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei
2016-03-01
Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus. Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms, averaged ultraviolet spectra, absorbance at 193, 205, 225, and 283 nm, and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by partial least-squares discriminant analysis models constructed from the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatogram datasets and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high-throughput, and low-cost methods for discrimination studies.
Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco
2014-01-01
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
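A compact sketch of the LMMSE step on simulated DCs with an exponential (Gauss-Markov) spatial correlation (station geometry, correlation distance, and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)

# Reference stations along a line; the true DC field has exponential
# (Gauss-Markov) spatial correlation with decorrelation distance d_c.
pos = np.array([0.0, 40.0, 90.0, 150.0, 210.0])     # km
d_c, sigma_dc, sigma_n = 120.0, 1.0, 0.3            # km, m, m

C = sigma_dc**2 * np.exp(-np.abs(pos[:, None] - pos[None, :]) / d_c)

# One realization of true DCs plus measurement noise.
dc_true = np.linalg.cholesky(C + 1e-12 * np.eye(5)) @ rng.standard_normal(5)
y = dc_true + sigma_n * rng.standard_normal(5)

# LMMSE estimate of the true DC vector from the noisy measurements:
# x_hat = C (C + sigma_n^2 I)^{-1} y.
x_hat = C @ np.linalg.solve(C + sigma_n**2 * np.eye(5), y)

print("raw error RMS   :", np.sqrt(np.mean((y - dc_true) ** 2)))
print("LMMSE error RMS :", np.sqrt(np.mean((x_hat - dc_true) ** 2)))
```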
NASA Technical Reports Server (NTRS)
Carrier, Alain C.; Aubrun, Jean-Noel
1993-01-01
New frequency response measurement procedures, on-line modal tuning techniques, and off-line modal identification algorithms are developed and applied to the modal identification of the Advanced Structures/Controls Integrated Experiment (ASCIE), a generic segmented optics telescope test-bed representative of future complex space structures. The frequency response measurement procedure uses all the actuators simultaneously to excite the structure and all the sensors to measure the structural response, so that all the transfer functions are measured simultaneously. Structural responses to sinusoidal excitations are measured and analyzed to calculate spectral responses. The spectral responses in turn are analyzed as the spectral data become available and, which is new, the results are used to maintain high quality measurements. Data acquisition, processing, and checking procedures are fully automated. As the acquisition of the frequency response progresses, an on-line algorithm keeps track of the actuator force distribution that maximizes the structural response to automatically tune to a structural mode when approaching a resonant frequency. This tuning is insensitive to delays, ill-conditioning, and nonproportional damping. Experimental results show that it is useful for modal surveys even in high modal density regions. For thorough modeling, a constructive procedure is proposed to identify the dynamics of a complex system from its frequency response, with the minimization of a least-squares cost function as a desirable objective. This procedure relies on off-line modal separation algorithms to extract modal information and on least-squares parameter subset optimization to combine the modal results and globally fit the modal parameters to the measured data. The modal separation algorithms resolved a modal density of 5 modes/Hz in the ASCIE experiment. They promise to be useful in many challenging applications.
Samalin, Ludovic; Boyer, Laurent; Murru, Andrea; Pacchiarotti, Isabella; Reinares, María; Bonnin, Caterina Mar; Torrent, Carla; Verdolini, Norma; Pancheri, Corinna; de Chazeron, Ingrid; Boucekine, Mohamed; Geoffroy, Pierre-Alexis; Bellivier, Frank; Llorca, Pierre-Michel; Vieta, Eduard
2017-03-01
Many patients with bipolar disorder (BD) experience residual symptoms during their inter-episodic periods. The study aimed to analyse the relationship between residual depressive symptoms, sleep disturbances and self-reported cognitive impairment as determinants of psychosocial functioning in a large sample of euthymic BD patients. This was a cross-sectional study of 468 euthymic BD outpatients. We evaluated the residual depressive symptoms with the Bipolar Depression Rating Scale, the sleep disturbances with the Pittsburgh Sleep Quality Index, the perceived cognitive performance using visual analogic scales, and functioning with the Functioning Assessment Short Test. Structural equation modelling (SEM) was used to describe the relationships among the residual depressive symptoms, sleep disturbances, perceived cognitive performance and functioning. SEM showed good fit with normed chi-square = 2.46, comparative fit index = 0.94, root mean square error of approximation = 0.05 and standardized root mean square residuals = 0.06. This model revealed that residual depressive symptoms (path coefficient = 0.37) and perceived cognitive performance (path coefficient = 0.27) were the most important features significantly related to psychosocial functioning. Sleep disturbances were indirectly associated with functioning via residual depressive symptoms and perceived cognitive performance (path coefficient = 0.23). This study contributes to a better understanding of the determinants of psychosocial functioning during the inter-episodic periods of BD patients. These findings should facilitate decision-making in therapeutics to improve the functional outcomes of BD during this period. Copyright © 2017 Elsevier B.V. All rights reserved.
2015-07-01
[Figure and text fragments: FIG. 5 shows the variation of the root mean square (RMS) displacement of the protein's center of mass with temperature; the surrounding text examines the radius of gyration and global motion via this RMS displacement, and tracks local and global physical quantities during the course of simulation, including the energy of each residue, its mobility, and its mean square displacement.]
Terahertz emission from thermally-managed square intrinsic Josephson junction microstrip antennas
NASA Astrophysics Data System (ADS)
Klemm, Richard; Davis, Andrew; Wang, Qing
We show for thin square microstrip antennas that the transverse magnetic electromagnetic cavity modes are greatly restricted in number due to the point group symmetry of a square. For the ten lowest frequency emissions, we present plots of the orthonormal wave functions and of the angular distributions of the emission power obtained from the uniform Josephson current source and from the excitation of an electromagnetic cavity mode excited in the intrinsic Josephson junctions between the layers of a highly anisotropic layered superconductor.
Mercier Franco, Luís Fernando; Castier, Marcelo; Economou, Ioannis G
2017-12-07
We show that the Zwanzig first-order perturbation theory can be obtained directly from a truncated Taylor series expansion of a two-body perturbation theory and that such truncation provides a more accurate prediction of thermodynamic properties than the full two-body perturbation theory. This unexpected result is explained by the quality of the resulting approximation for the fluid radial distribution function. We prove that the first-order and the two-body perturbation theories are based on different approximations for the fluid radial distribution function. To illustrate the calculations, the square-well fluid is adopted. We develop an analytical expression for the two-body perturbed Helmholtz free energy for the square-well fluid. The equation of state obtained using such an expression is compared to the equation of state obtained from the first-order approximation. The vapor-liquid coexistence curve and the supercritical compressibility factor of a square-well fluid are calculated using both equations of state and compared to Monte Carlo simulation data. Finally, we show that the approximation for the fluid radial distribution function given by the first-order perturbation theory provides closer values to the ones calculated via Monte Carlo simulations. This explains why such theory gives a better description of the fluid thermodynamic behavior.
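For reference, the Zwanzig high-temperature expansion reads (standard textbook form in our notation, with U₁ the perturbation potential and ⟨·⟩₀ a reference-fluid average); the first-order theory keeps only the first correction term:

```latex
A \;=\; A_{0}
  \;+\; \underbrace{\langle U_{1} \rangle_{0}}_{\text{first order (Zwanzig)}}
  \;-\; \frac{\beta}{2}\Bigl(\langle U_{1}^{2}\rangle_{0} - \langle U_{1}\rangle_{0}^{2}\Bigr)
  \;+\; \cdots
```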
Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting
2018-01-21
Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines all formed perfect polynomial functions in phantom tests mimicking human bodies, as identified by recent NIRS studies. More importantly, our study shows that among second- to sixth-order polynomials, the fourth-order polynomial function gave distinguished performance with stable and low-computation-burden fitting calibration (R-square > 0.99 for all probes), as evaluated by R-square, the sum of squares due to error, and the residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
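Fourth-order polynomial baseline removal reduces to a least-squares polynomial fit and a subtraction; a sketch on synthetic drift (all drift and signal parameters invented):

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic 3.5 h recording: slow polynomial-like drift plus a small
# oscillatory "hemodynamic" component and noise (all values invented).
t_h = np.linspace(0.0, 3.5, 5000)                  # time in hours
drift = 0.05 * t_h**4 - 0.30 * t_h**3 + 0.40 * t_h**2 + 0.20 * t_h
signal = 0.02 * np.sin(2 * np.pi * 12.0 * t_h)     # ~12 cycles per hour
y = drift + signal + 0.005 * rng.standard_normal(t_h.size)

# Fit a fourth-order polynomial baseline and subtract it.
coeffs = np.polyfit(t_h, y, deg=4)
baseline = np.polyval(coeffs, t_h)
y_corrected = y - baseline

ss_res = np.sum((y - baseline) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"baseline fit R-square: {1 - ss_res / ss_tot:.4f}")
```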
Griffiths, Robert I; Gleeson, Michelle L; Danese, Mark D; O'Hagan, Anthony
2012-01-01
To assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data. By using Surveillance, Epidemiology, and End Results-Medicare, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 months (partitions) after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring and also added censoring to patients receiving chemotherapy to simulate comparing a newer to older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months. The cumulative, 48-month, average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with increased cost of $25,325 (95% CI $17,549-$32,827) compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 ($20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups. IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
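A schematic of the IPW estimator on simulated censored costs: estimate the censoring survival function K(t) by Kaplan-Meier, treating censoring as the event, then run least squares on complete cases weighted by 1/K at the death time (all data and the single-covariate design are invented; ties and small-sample corrections are ignored):

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated cumulative costs with random right-censoring (all invented).
n = 2000
chemo = rng.integers(0, 2, n)
cost = 40000.0 + 25000.0 * chemo + 10000.0 * rng.standard_normal(n)
censor_time = rng.uniform(0.0, 48.0, n)    # months of potential follow-up
death_time = rng.uniform(0.0, 40.0, n)     # months until death
observed = death_time <= censor_time       # full cost history observed?

# Kaplan-Meier estimate of K(t) = P(still uncensored at t): the "event"
# here is censoring, and observed deaths censor the censoring process.
time = np.minimum(death_time, censor_time)
order = np.argsort(time)
t_sorted = time[order]
cens_event = (~observed)[order].astype(float)
at_risk = n - np.arange(n)
km = np.cumprod(1.0 - cens_event / at_risk)

def K(t):
    idx = np.searchsorted(t_sorted, t, side="right") - 1
    return np.where(idx < 0, 1.0, km[np.clip(idx, 0, n - 1)])

# IPW least squares: complete cases only, each weighted by 1/K(death time).
w = observed / np.clip(K(death_time), 1e-3, None)
X = np.column_stack([np.ones(n), chemo])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * cost))
print(f"IPW estimate of chemotherapy effect: ${beta[1]:,.0f}")  # ~ $25,000
```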
Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.
Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante
2014-10-01
In this paper, the well known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to the state-of-the-art boosting algorithms, in particular those using ELM as base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem has been presented as an unbiased and alternative approach to the already existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns, according to the order of the targets, enables the classifier to tackle ordinal regression problems further. The proposed method has been validated by an experimental study by comparing it with already existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
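A compact sketch of the ingredients: a weighted-ridge ELM base learner solved in closed form, SAMME-style stagewise weights, and an ordinal cost matrix that penalizes distant-class confusions more heavily (toy data; this schematic is not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(12)

# Toy ordinal problem: 3 ordered classes cut from a 1-D latent variable.
n, K = 600, 3
z = rng.standard_normal(n)
y = np.digitize(z + 0.3 * rng.standard_normal(n), [-0.5, 0.5])  # 0, 1, 2
X = np.column_stack([z, rng.standard_normal(n)])

# Misclassification cost grows with the class distance |i - j|.
cost = np.abs(np.arange(K)[:, None] - np.arange(K)[None, :]).astype(float)

def elm_fit(X, Y, w, n_hidden=40, ridge=1e-2):
    """Weighted-ridge ELM: random hidden layer, closed-form output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    H = np.tanh(X @ W)
    beta = np.linalg.solve(H.T @ (w[:, None] * H) + ridge * np.eye(n_hidden),
                           H.T @ (w[:, None] * Y))
    return W, beta

def elm_predict(model, X):
    W, beta = model
    return np.argmax(np.tanh(X @ W) @ beta, axis=1)

Y = np.eye(K)[y]                      # one-hot targets
w = np.ones(n) / n
models, alphas = [], []
for _ in range(10):                   # SAMME boosting rounds
    model = elm_fit(X, Y, w)
    pred = elm_predict(model, X)
    miss = pred != y
    err = np.sum(w * miss) / np.sum(w)
    if err >= 1 - 1 / K:              # SAMME validity condition
        break
    alpha = np.log((1 - err) / max(err, 1e-10)) + np.log(K - 1)
    # Cost-sensitive reweighting: heavier penalty for distant-class errors.
    w *= np.exp(alpha * miss * cost[y, pred])
    w /= w.sum()
    models.append(model)
    alphas.append(alpha)

votes = np.zeros((n, K))
for model, alpha in zip(models, alphas):
    votes[np.arange(n), elm_predict(model, X)] += alpha
print("training accuracy:", np.mean(np.argmax(votes, axis=1) == y))
```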
Kim, Dong-Ju; Shin, Hae-In; Ko, Eun-Hye; Kim, Ki-Hyun; Kim, Tae-Woong; Kim, Han-Ki
2016-01-01
We report fabrication of large area Ag nanowire (NW) film coated using a continuous roll-to-roll (RTR) slot die coater as a viable alternative to conventional ITO electrodes for cost-effective and large-area flexible touch screen panels (TSPs). By controlling the flow rate of shear-thinning Ag NW ink in the slot die, we fabricated Ag NW percolating network films with different sheet resistances (30–70 Ohm/square), optical transmittance values (89–90%), and haze (0.5–1%) percentages. Outer/inner bending, twisting, and rolling tests as well as dynamic fatigue tests demonstrated that the mechanical flexibility of the slot-die coated Ag NW films was superior to that of conventional ITO films. Using diamond-shape patterned Ag NW layer electrodes (50 Ohm/square, 90% optical transmittance), we fabricated 12-inch flexible film-film type and rigid glass-film-film type TSPs. Successful operation of flexible TSPs with Ag NW electrodes indicates that slot-die-coated large-area Ag NW films are promising low cost, high performance, and flexible transparent electrodes for cost-effective large-area flexible TSPs and can be substituted for ITO films, which have high sheet resistance and are brittle. PMID:27677410
An access alternative for mobile satellite networks
NASA Technical Reports Server (NTRS)
Wu, W. W.
1988-01-01
Conceptually, this paper discusses strategies of digital satellite communication networks for a very large number of low-density traffic stations. These stations can be either aeronautical, land mobile, or maritime. The techniques can be applied to international, domestic, regional, and special purpose satellite networks. The applications can be commercial, scientific, military, emergency, navigational or educational. The key strategy is the use of a non-orthogonal access method, which tolerates overlapping signals. With n being either time or frequency partitions, and with a single overlapping signal allowed, a low-cost mobile satellite system can be designed with n²(n² + n + 1) terminals.
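For concreteness, the terminal count quoted above is trivial to tabulate (a minimal sketch; the sample values of n are arbitrary):

```python
def max_terminals(n: int) -> int:
    """Terminals supported with n time or frequency partitions and a single
    overlapping signal allowed: n^2 * (n^2 + n + 1)."""
    return n ** 2 * (n ** 2 + n + 1)

for n in (4, 8, 16):
    print(n, max_terminals(n))   # n = 16 already gives 69,888 terminals
```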
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Jiangye
Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels from images, because they are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of manually labeled training data.
A Novel Low-Cost, Large Curvature Bend Sensor Based on a Bowden-Cable
Jeong, Useok; Cho, Kyu-Jin
2016-01-01
Bend sensors have been developed based on conductive ink, optical fiber, and electronic textiles. Each type has advantages and disadvantages in terms of performance, ease of use, and cost. This study proposes a new and low-cost bend sensor that can measure a wide range of accumulated bend angles with large curvatures. This bend sensor utilizes a Bowden-cable, which consists of a coil sheath and an inner wire. Displacement changes of the Bowden-cable’s inner wire, when the shape of the sheath changes, have been considered to be a position error in previous studies. However, this study takes advantage of this position error to detect the bend angle of the sheath. The bend angle of the sensor can be calculated from the displacement measurement of the sensing wire using a Hall-effect sensor or a potentiometer. Simulations and experiments have shown that the accumulated bend angle of the sensor is linearly related to the sensor signal, with an R-square value up to 0.9969 and a root mean square error of 2% of the full sensing range. The proposed sensor is not affected by a bend curvature of up to 80.0 m−1, unlike previous bend sensors. The proposed sensor is expected to be useful for various applications, including motion capture devices, wearable robots, surgical devices, or generally any device that requires an affordable and low-cost bend sensor. PMID:27347959
NASA Astrophysics Data System (ADS)
Bukoski, J. J.; Broadhead, J. S.; Donato, D.; Murdiyarso, D.; Gregoire, T. G.
2016-12-01
Mangroves provide extensive ecosystem services that support both local livelihoods and international environmental goals, including coastal protection, water filtration, biodiversity conservation and the sequestration of carbon (C). While voluntary C market projects that seek to preserve and enhance forest C stocks offer a potential means of generating finance for mangrove conservation, their implementation faces barriers due to the high costs of quantifying C stocks through measurement, reporting and verification (MRV) activities. To streamline MRV activities in mangrove C forestry projects, we develop predictive models for (i) biomass-based C stocks, and (ii) soil-based C stocks for the mangroves of the Asia-Pacific. We use linear mixed effect models to account for spatial correlation in modeling the expected C as a function of stand attributes. The most parsimonious biomass model predicts total biomass C stocks as a function of both basal area and the interaction between latitude and basal area, whereas the most parsimonious soil C model predicts soil C stocks as a function of the logarithmic transformations of both latitude and basal area. Random effects are specified by site for both models, and are found to explain a substantial proportion of variance within the estimation datasets. The root mean square error (RMSE) of the biomass C model is approximated at 24.6 Mg/ha (18.4% of mean biomass C in the dataset), whereas the RMSE of the soil C model is estimated at 4.9 mg C/cm³ (14.1% of mean soil C). A substantial proportion of the variation in soil C, however, is explained by the random effects and thus the use of the SOC model may be most valuable for sites in which field measurements of soil C exist.
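A model of this shape can be fit in a few lines with standard mixed-model tooling. The sketch below uses synthetic data and hypothetical column names; it only mirrors the reported structure (log-transformed latitude and basal area, random intercept by site), not the authors' dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "soil_c": rng.normal(35, 5, 120),   # mg C / cm^3 (synthetic)
    "lat": rng.uniform(1, 25, 120),     # absolute latitude, degrees
    "ba": rng.uniform(5, 60, 120),      # basal area, m^2 / ha
    "site": rng.integers(0, 10, 120),   # grouping factor for random effects
})

# Linear mixed model with log predictors and a random intercept per site.
model = sm.MixedLM.from_formula("soil_c ~ np.log(lat) + np.log(ba)",
                                groups="site", data=df)
fit = model.fit()
rmse = np.sqrt(np.mean((df["soil_c"] - fit.fittedvalues) ** 2))
```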
Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria
NASA Technical Reports Server (NTRS)
Shelton, Joey Dewayne
2004-01-01
The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo Simulations and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case that minimized Design, Development, Test and Evaluation cost when compared to the weights from the Gross Liftoff Weight minimization case. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case when compared to the cost from the case that minimized Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors and how these parameters differ when cost and weight are optimized separately. A key finding was that for a Space Shuttle Main Engine thrust level the oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight rather than at 5.2 for the maximum specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to 3000 pounds per square inch for the Space Shuttle Main Engine.
Inverse optimal self-tuning PID control design for an autonomous underwater vehicle
NASA Astrophysics Data System (ADS)
Rout, Raja; Subudhi, Bidyadhar
2017-01-01
This paper presents a new approach to path following control design for an autonomous underwater vehicle (AUV). A NARMAX model of the AUV is derived first, and its parameters are then adapted online using the recursive extended least squares algorithm. An adaptive Proportional-Integral-Derivative (PID) controller is developed using the derived parameters to accomplish the path following task of an AUV. The gain parameters of the PID controller are tuned using an inverse optimal control technique, which avoids solving the Hamilton-Jacobi equation while still satisfying an error cost function. Simulation studies were pursued to verify the efficacy of the proposed control algorithm. From the obtained results, it is envisaged that the proposed NARMAX model-based self-tuning adaptive PID control provides good path following performance even in the presence of uncertainty arising from ocean currents or hydrodynamic parameters.
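The online parameter-adaptation step is a standard recursive least-squares update. A minimal sketch follows, with a generic regressor vector standing in for the NARMAX regressors (which, in the extended variant, would also include past residual terms); all numbers are illustrative:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step: gain K, parameter update, and
    covariance update with forgetting factor lam."""
    Pphi = P @ phi
    K = Pphi / (lam + phi @ Pphi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

rng = np.random.default_rng(3)
theta = np.zeros(4)               # parameter estimate
P = np.eye(4) * 1e3               # large initial covariance
true = np.array([0.8, -0.2, 0.5, 0.1])
for _ in range(500):
    phi = rng.normal(size=4)      # regressor (past outputs/inputs in practice)
    y = phi @ true + rng.normal(0, 0.01)
    theta, P = rls_update(theta, P, phi, y)
```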
A Concept for a Mobile Remote Manipulator System
NASA Technical Reports Server (NTRS)
Mikulus, M. M., Jr.; Bush, H. G.; Wallsom, R. E.; Jensen, J. K.
1985-01-01
A conceptual design for a Mobile Remote Manipulator System (MRMS) is presented. This concept does not require continuous rails for mobility (only guide pins at truss hardpoints) and is very compact, being only one bay square. The MRMS proposed is highly maneuverable and is able to move in any direction along the orthogonal guide pin array under complete control at all times. The proposed concept would greatly enhance the safety and operational capabilities of astronauts performing EVA functions such as structural assembly, payload transport and attachment, space station maintenance, repair or modification, and future spacecraft construction or servicing. The MRMS drive system conceptual design presented is a reasonably simple mechanical device which can be designed to exhibit high reliability. Developmentally, all components of the proposed MRMS either exist or are considered to be completely state of the art designs requiring minimal development, features which should enhance reliability and minimize costs.
NASA Astrophysics Data System (ADS)
Singh, Mandeep; Khare, Kedar
2018-05-01
We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from a large field-of-view digital holographic microscopy data. The complementary phase information in addition to the usual absorption information already available in the form of bright field microscopy can make the methodology attractive to the biomedical user community.
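A drastically simplified 1D sketch of that cost structure — an L2 data-fit term plus a Huber penalty on first differences, minimized here by plain gradient descent rather than the adaptive alternating scheme of the paper — looks like this (all sizes and parameters are hypothetical):

```python
import numpy as np

def huber_grad(z, delta):
    """Derivative of the Huber penalty: quadratic near zero, linear in the tails."""
    return np.where(np.abs(z) <= delta, z, delta * np.sign(z))

def roi_reconstruct(A, b, delta=0.1, mu=0.05, step=1e-3, iters=500):
    """Gradient descent on ||Ax - b||^2 + mu * Huber(Dx), with D the
    first-difference operator."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g_data = 2 * A.T @ (A @ x - b)
        dx = np.diff(x)
        g_pen = np.zeros_like(x)
        g_pen[:-1] -= huber_grad(dx, delta)   # left element of each pair
        g_pen[1:] += huber_grad(dx, delta)    # right element of each pair
        x -= step * (g_data + mu * g_pen)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(80, 60))                 # generic measurement operator
b = A @ np.repeat([0.0, 1.0, 0.0], 20) + rng.normal(0, 0.05, 80)
x_hat = roi_reconstruct(A, b)
```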
Interval Predictor Models for Data with Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Lacerda, Marcio J.; Crespo, Luis G.
2017-01-01
An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
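A constant-spread special case of an IPM can already be posed as a small linear program: choose center coefficients and the smallest half-spread s such that every observation lies inside the band. This sketch is far simpler than the semidefinite/SOS formulation in the paper, but it illustrates the "minimal spread containing all observations" idea on synthetic data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(11)
x = rng.uniform(-1, 1, 40)
y = 0.5 * x ** 2 + 0.2 * x + rng.uniform(-0.1, 0.1, 40)
Phi = np.column_stack([np.ones_like(x), x, x ** 2])   # polynomial basis
n, k = Phi.shape

# Variables: k center coefficients theta plus the half-spread s; minimize s
# subject to |y_i - theta^T phi_i| <= s for every observation.
c = np.zeros(k + 1)
c[-1] = 1.0
A_ub = np.vstack([np.hstack([Phi, -np.ones((n, 1))]),
                  np.hstack([-Phi, -np.ones((n, 1))])])
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * k + [(0, None)])
theta, s = res.x[:k], res.x[-1]   # interval model: theta^T phi(x) +/- s
```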
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
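For reference, the CMA baseline against which such methods are compared admits a very compact implementation. The sketch below equalizes a toy QPSK stream; the tap count, step size, and channel taps are arbitrary choices, not values from the paper:

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Baud-spaced CMA: w <- w + mu * y * (R2 - |y|^2) * conj(u)."""
    w = np.zeros(n_taps, complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    y_out = np.zeros(len(x) - n_taps, complex)
    for k in range(len(y_out)):
        u = x[k:k + n_taps]
        y = w @ u
        w += mu * y * (R2 - abs(y) ** 2) * np.conj(u)
        y_out[k] = y
    return w, y_out

# Hypothetical QPSK stream through a short FIR channel.
rng = np.random.default_rng(5)
s = (rng.integers(0, 2, 5000) * 2 - 1
     + 1j * (rng.integers(0, 2, 5000) * 2 - 1)) / np.sqrt(2)
x = np.convolve(s, [1.0, 0.4 + 0.2j, 0.1], mode="full")[:5000]
w, y = cma_equalize(x)
```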
Practical low-cost stereo head-mounted display
NASA Astrophysics Data System (ADS)
Pausch, Randy; Dwivedi, Pramod; Long, Allan C., Jr.
1991-08-01
A high-resolution head-mounted display has been developed from substantially cheaper components than previous systems. Monochrome displays provide 720 by 280 pixels to each eye in a one-inch-square region positioned approximately one inch from each eye. The display hardware is the Private Eye, manufactured by Reflection Technologies, Inc. The tracking system uses the Polhemus Isotrak, providing (x, y, z, azimuth, elevation, and roll) information on the user's head position and orientation 60 times per second. In combination with a modified Nintendo Power Glove, this system provides a full-functionality virtual reality/simulation system. Using two host 80386 computers, real-time wire frame images can be produced. Other virtual reality systems require roughly $250,000 in hardware, while this one requires only $5,000. Stereo is particularly useful for this system because shading or occlusion cannot be used as depth cues.
Gain weighted eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, John B.; Andrisani, Dominick, II
1994-01-01
This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
H∞ state estimation of stochastic memristor-based neural networks with time-varying delays.
Bao, Haibo; Cao, Jinde; Kurths, Jürgen; Alsaedi, Ahmed; Ahmad, Bashir
2018-03-01
This paper addresses the problem of H∞ state estimation for a class of stochastic memristor-based neural networks with time-varying delays. Under the framework of Filippov solutions, the stochastic memristor-based neural networks are transformed into systems with interval parameters. The present paper is the first to investigate the H∞ state estimation problem for continuous-time Itô-type stochastic memristor-based neural networks. By means of Lyapunov functionals and stochastic techniques, sufficient conditions are derived to ensure that the estimation error system is asymptotically stable in the mean square with a prescribed H∞ performance. An explicit expression for the state estimator gain is given in terms of linear matrix inequalities (LMIs). Compared with other results, our results reduce the control gain and control cost effectively. Finally, numerical simulations are provided to demonstrate the efficiency of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Simons, Rainee N.
2002-01-01
The paper presents a novel on-wafer, antenna far field pattern measurement technique for microelectromechanical systems (MEMS) based reconfigurable patch antennas. The measurement technique significantly reduces the time and the cost associated with the characterization of printed antennas, fabricated on a semiconductor wafer or dielectric substrate. To measure the radiation patterns, the RF probe station is modified to accommodate an open-ended rectangular waveguide as the rotating linearly polarized sampling antenna. The open-ended waveguide is attached through a coaxial rotary joint to a Plexiglas(Trademark) arm and is driven along an arc by a stepper motor. Thus, the spinning open-ended waveguide can sample the relative field intensity of the patch as a function of the angle from bore sight. The experimental results include the measured linearly polarized and circularly polarized radiation patterns for MEMS-based frequency reconfigurable rectangular and polarization reconfigurable nearly square patch antennas, respectively.
Sieve estimation in semiparametric modeling of longitudinal data with informative observation times.
Zhao, Xingqiu; Deng, Shirong; Liu, Li; Liu, Lei
2014-01-01
Analyzing irregularly spaced longitudinal data often involves modeling possibly correlated response and observation processes. In this article, we propose a new class of semiparametric mean models that allows for the interaction between the observation history and covariates, leaving patterns of the observation process to be arbitrary. For inference on the regression parameters and the baseline mean function, a spline-based least squares estimation approach is proposed. The consistency, rate of convergence, and asymptotic normality of the proposed estimators are established. Our new approach is different from the usual approaches relying on the model specification of the observation scheme, and it can be easily used for predicting the longitudinal response. Simulation studies demonstrate that the proposed inference procedure performs well and is more robust. The analyses of bladder tumor data and medical cost data are presented to illustrate the proposed method.
NASA Astrophysics Data System (ADS)
Carraro, L.; Simonetta, M.; Benetti, G.; Tramonte, A.; Capelli, G.; Benedetti, M.; Randone, E. M.; Ylisaukko-oja, A.; Keränen, K.; Facchinetti, T.; Giuliani, G.
2017-02-01
LUMENTILE (LUMinous ElectroNic TILE) is a project funded by the European Commission with the goal of developing a luminous tile with novel functionalities, capable of changing its color and interacting with the user. Applications include interior/exterior tiles for wall and floor covering, high-efficiency luminaires, and advertising in the form of giant video screens. High overall electrical efficiency of the tile is mandatory, as several million square meters are foreseen to be installed each year. Demand is for high uniformity of the illumination of the top tile surface, and for high optical extraction efficiency. These features are achieved by smart light management, using a new approach based on a light-guiding slab and spatially selective light extraction obtained using both diffusion and/or reflection from the top and bottom interfaces of the optical layer. Planar and edge configurations for the RGB LEDs are considered and compared. A square shape with side length from 20 cm to 60 cm is considered for the tiles. The electronic circuit layout must optimize the electrical efficiency and be compatible with low-cost roll-to-roll production on flexible substrates. LED heat management is tackled by using dedicated solutions that allow operation in thermally harsh environments. An approach based on OLEDs has also been considered, but it still needs improvement in emitted power and ruggedness.
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
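The core inversion step maps directly onto SciPy's sparse LSQR solver. The following sketch uses a random sparse matrix as a stand-in for the PSF-derived convolution matrix, purely to show the regularized call:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

# Hypothetical sparse convolution matrix C (PSF contributions) and observed
# 2D spectrograph image flattened into b; damp adds Tikhonov regularization.
rng = np.random.default_rng(6)
C = sprandom(2000, 500, density=0.01, random_state=6, format="csr")
true_spectrum = rng.uniform(0, 1, 500)
b = C @ true_spectrum + rng.normal(0, 0.01, 2000)

result = lsqr(C, b, damp=1e-2, atol=1e-8, btol=1e-8)
spectrum, istop, itn = result[0], result[1], result[2]
```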
Costa, Márcio Holsbach
2017-12-01
Feedback cancellation in a hearing aid is essential for achieving a high maximum stable gain to compensate for the losses in people with severe to profound hearing impairment. The performance of adaptive feedback cancellers has been studied by assuming that the feedback path can be modeled as a linear system. However, limited dynamic range, low-cost loudspeakers, and nonlinear power amplifiers may distort the hearing aid output signal. In this way, linear-based predictions of the canceller performance may deviate significantly from its actual behavior. This work presents a theoretical performance analysis of a Least Mean Square based shadow filter that is applied to set up the coefficients of a feedback canceller, which is subject to a static saturation-type nonlinearity at the output of the direct path. Deterministic recursive equations are derived to predict the mean square feedback error and the mean coefficient vector evolution between updates of the feedback canceller. These models are defined as functions of the canceller parameters and input signal statistics. Comparisons with Monte Carlo simulations show the provided models are highly accurate under the considered assumptions. The developed models allow inferences about the potential impact of an overdriven loudspeaker on the transient performance of the direct method feedback canceller, serving as insightful tools for understanding the involved mechanisms. Copyright © 2017 Elsevier Ltd. All rights reserved.
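A toy version of the setting — LMS identification of a feedback path whose driving signal passes through a static saturation — can be written as follows. The path coefficients, step size, and clipping level are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def saturate(v, limit=1.0):
    """Static saturation nonlinearity standing in for an overdriven loudspeaker."""
    return np.clip(v, -limit, limit)

rng = np.random.default_rng(7)
f_true = np.array([0.20, -0.10, 0.05])   # hypothetical feedback path
w = np.zeros(3)                           # shadow-filter estimate
mu = 0.01
u = rng.normal(size=5000)                 # hearing-aid output before the amplifier
for k in range(3, 5000):
    uk = saturate(u[k - 3:k][::-1])       # clipped signal actually emitted
    d = f_true @ uk + rng.normal(0, 0.001)
    e = d - w @ uk                        # feedback estimation error
    w += mu * e * uk                      # standard LMS update
```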
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, X. R.; Wang, X.
2016-03-01
When using the genetic algorithm to solve the problem of too-short-arc (TSA) orbit determination, the usual outlier-editing methods are no longer applicable, because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. A comparison of the results of different loss functions shows that the least median of squares and least trimmed squares methods can greatly improve the robustness of TSA determination, and both have a high breakdown point.
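Swapping robust losses into a GA fitness function is indeed simple; a sketch of the two losses mentioned, usable as drop-in scoring functions, follows (the predict callback is a hypothetical placeholder for an orbit propagator):

```python
import numpy as np

def lms_fitness(residuals):
    """Least median of squares: robust to up to ~50% outliers."""
    return np.median(residuals ** 2)

def lts_fitness(residuals, trim=0.8):
    """Least trimmed squares: sum over the smallest trim-fraction of
    squared residuals, discarding the largest ones."""
    r2 = np.sort(residuals ** 2)
    return r2[: int(trim * len(r2))].sum()

def fitness(candidate_params, observations, predict, loss=lms_fitness):
    """Inside a GA, this simply replaces the usual sum of squares when
    scoring how well a candidate orbit fits the observations."""
    return loss(observations - predict(candidate_params))
```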
Effectiveness of buffered propionic-acid preservatives for large hay packages
USDA-ARS?s Scientific Manuscript database
Most hay producers realize that hays packaged in large-round or large-square bales are particularly sensitive to spontaneous heating, dry matter losses, and negative changes in forage quality. During the last two decades, this has become an important dilemma for hay producers because the cost and av...
The Task Group on Limited War. Volume 3
1958-09-01
modern two-story building with over 350,000 square feet of work space. At present, the Laboratory employs approximately 1,100 people, of which 400...election, mishandling of various disabled veterans' and survivors' claims by the Ministry of National Defense cost the government votes in several
7 CFR 3550.117 - WWD grant purposes.
Code of Federal Regulations, 2013 CFR
2013-01-01
... (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells... for individuals to: (a) Extend service lines from the system to their residence. (b) Connect service lines to residence's plumbing. (c) Pay reasonable charges or fees for connecting to a system. (d) Pay...
7 CFR 3550.117 - WWD grant purposes.
Code of Federal Regulations, 2012 CFR
2012-01-01
... (48 square feet) in size. (f) Pay reasonable costs for closing abandoned septic tanks and water wells... for individuals to: (a) Extend service lines from the system to their residence. (b) Connect service lines to residence's plumbing. (c) Pay reasonable charges or fees for connecting to a system. (d) Pay...
Array automated assembly, phase 2
NASA Technical Reports Server (NTRS)
Taylor, W. E.
1978-01-01
An analysis was made of cost tradeoffs for shaping modified square wafers from cylindrical crystals. Tests were conducted of the effectiveness of texture etching for removal of surface damage on sawed wafers. A single step texturing etch appeared adequate for removal of surface damage on wafers cut with multiple blade reciprocating slurry saws.
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights for designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and a lower reduced chi-square value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low.
NASA Astrophysics Data System (ADS)
Roncoroni, Alan; Medo, Matus
2016-12-01
Models of spatial firm competition assume that customers are distributed in space and transportation costs are associated with their purchases of products from a small number of firms that are also placed at definite locations. It has been long known that the competition equilibrium is not guaranteed to exist if the most straightforward linear transportation costs are assumed. We show by simulations and also analytically that if periodic boundary conditions in a plane are assumed, the equilibrium exists for a pair of firms at any distance. When a larger number of firms is considered, we find that their total equilibrium profit is inversely proportional to the square root of the number of firms. We end with a numerical investigation of the system's behavior for a general transportation cost exponent.
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as x⃗), knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x⃗. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid the approximations present in traditional adjustment procedures based on chi-square minimization and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from thermal through resonance to continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance, and continuum evaluation, as well as multigroup cross section data assimilation, will be presented.
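A minimal random-walk Metropolis sampler makes the pdf(posterior) ∝ pdf(prior) × likelihood recipe concrete; the toy prior, likelihood, and data below are illustrative only, not a nuclear-data evaluation:

```python
import numpy as np

def metropolis(log_prior, log_like, x0, steps=10000, scale=0.1, seed=0):
    """Random-walk Metropolis sampling of posterior ∝ prior × likelihood,
    the Monte-Carlo alternative to GLS sketched above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    logp = log_prior(x) + log_like(x)
    chain = np.empty((steps, x.size))
    for i in range(steps):
        prop = x + scale * rng.normal(size=x.size)
        logp_prop = log_prior(prop) + log_like(prop)
        if np.log(rng.random()) < logp_prop - logp:   # accept/reject
            x, logp = prop, logp_prop
        chain[i] = x
    return chain

# Toy example: Gaussian prior on one model parameter, Gaussian likelihood
# around three hypothetical measurements.
data = np.array([1.02, 0.98, 1.05])
chain = metropolis(lambda t: -0.5 * (t ** 2).sum(),
                   lambda t: -0.5 * (((data - t[0]) / 0.05) ** 2).sum(),
                   x0=[0.5])
posterior_mean = chain[2000:, 0].mean()   # discard burn-in
```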
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
ERIC Educational Resources Information Center
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than a global optimum.
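A bare-bones real-coded GA for a nonlinear least-squares objective can be sketched as follows; the selection, crossover, and mutation choices here are one simple option among many, not the article's specific algorithm:

```python
import numpy as np

def ga_least_squares(residual_fn, bounds, pop=60, gens=200, seed=0):
    """Minimal real-coded GA: truncation selection, blend crossover,
    Gaussian mutation, minimizing the sum of squared residuals."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        sse = np.array([np.sum(residual_fn(p) ** 2) for p in P])
        elite = P[np.argsort(sse)[: pop // 2]]              # keep the best half
        pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
        alpha = rng.random((pop - len(elite), 1))
        children = alpha * elite[pairs[:, 0]] + (1 - alpha) * elite[pairs[:, 1]]
        children += rng.normal(0, 0.05, children.shape) * (hi - lo)
        P = np.vstack([elite, np.clip(children, lo, hi)])
    return P[np.argmin([np.sum(residual_fn(p) ** 2) for p in P])]

# Toy exponential-decay fit, where gradient methods can stall at local minima.
t = np.linspace(0, 4, 50)
y = 2.0 * np.exp(-1.3 * t) + np.random.default_rng(1).normal(0, 0.02, 50)
best = ga_least_squares(lambda p: y - p[0] * np.exp(-p[1] * t),
                        bounds=[(0, 5), (0, 5)])
```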
Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
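The representation-and-truncation part of the idea is easy to reproduce with NumPy's Legendre module. This sketch fits a noisy exponential in Legendre space and filters it by truncating the expansion; full parameter retrieval, as in the paper, requires an additional step not shown here:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Noisy single exponential on [0, 1], rescaled to the Legendre domain [-1, 1].
rng = np.random.default_rng(8)
t = np.linspace(0, 1, 512)
y = 3.0 * np.exp(-5.0 * t) + rng.normal(0, 0.05, 512)
u = 2 * t - 1                                  # map time axis to [-1, 1]

# Represent the data in Legendre space; a low order captures the decay.
coeffs = L.legfit(u, y, deg=12)

# "Filtering" = truncating the expansion, then evaluating in the time domain.
y_filtered = L.legval(u, coeffs)
rms_noise_removed = np.sqrt(np.mean((y - y_filtered) ** 2))
```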
Hansen, Ryan N; Pham, An T; Lovelace, Belinda; Balaban, Stela; Wan, George J
2017-10-01
Recovery from obstetrics and gynecology (OB/GYN) surgery, including hysterectomy and cesarean section delivery, aims to restore function while minimizing hospital length of stay (LOS) and medical expenditures. Our analyses compare OB/GYN surgery patients who received combination intravenous (IV) acetaminophen and IV opioid analgesia with those who received IV opioid-only analgesia and estimate differences in LOS, hospitalization costs, and opioid consumption. We performed a retrospective analysis of the Premier Database between January 2009 and June 2015, comparing OB/GYN surgery patients who received postoperative pain management with combination IV acetaminophen and IV opioids with those who received only IV opioids starting on the day of surgery and continuing up to the second postoperative day. We performed instrumental variable 2-stage least-squares regressions controlling for patient and hospital covariates to compare the LOS, hospitalization costs, and daily opioid doses (morphine equivalent dose) of IV acetaminophen recipients with that of opioid-only analgesia patients. We identified 225 142 OB/GYN surgery patients who were eligible for our study of whom 89 568 (40%) had been managed with IV acetaminophen and opioids. Participants averaged 36 years of age and were predominantly non-Hispanic Caucasians (60%). Multivariable regression models estimated statistically significant differences in hospitalization cost and opioid use with IV acetaminophen associated with $484.4 lower total hospitalization costs (95% CI = -$760.4 to -$208.4; P = 0.0006) and 8.2 mg lower daily opioid use (95% CI = -10.0 to -6.4), whereas the difference in LOS was not significant, at -0.09 days (95% CI = -0.19 to 0.01; P = 0.07). Compared with IV opioid-only analgesia, managing post-OB/GYN surgery pain with the addition of IV acetaminophen is associated with decreased hospitalization costs and reduced opioid use.
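The two-stage least-squares estimator itself reduces to two ordinary least-squares fits. The sketch below uses synthetic data and a made-up instrument purely to illustrate the mechanics; a real 2SLS analysis would also need corrected standard errors, which are omitted here:

```python
import numpy as np

def two_stage_ls(y, X_exog, x_endog, z_instr):
    """Manual 2SLS: stage 1 regresses the endogenous treatment on the
    instrument(s) plus exogenous covariates; stage 2 replaces the
    treatment with its fitted values."""
    Z = np.column_stack([X_exog, z_instr])
    g, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)
    x_hat = Z @ g                                  # first-stage fitted values
    X2 = np.column_stack([X_exog, x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta                                    # last element: treatment effect

# Hypothetical cohort: iv_apap is the treatment (IV acetaminophen); the
# instrument z could be, e.g., a hospital-level usage rate.
rng = np.random.default_rng(10)
n = 2000
X_exog = np.column_stack([np.ones(n), rng.integers(18, 60, n)])  # intercept, age
z = rng.random(n)
iv_apap = (0.3 * z + 0.7 * rng.random(n) > 0.5).astype(float)
cost = 8000 - 480 * iv_apap + 20 * X_exog[:, 1] + rng.normal(0, 500, n)
beta = two_stage_ls(cost, X_exog, iv_apap, z)
```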
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J. (Principal Investigator)
1982-01-01
The sample LANDSAT-4 TM tape (7 bands) of the NE Arkansas/Tennessee area was received and displayed. Snow reflectance in all 6 TM reflective bands, i.e. 1, 2, 3, 4, 5, and 7, was simulated using Wiscombe and Warren's (1980) delta-Eddington model. Snow reflectance in bands 4, 5, and 7 appears sensitive to grain size. One of the objectives is to interpret the surface optical grain size of snow, for spectral extension of albedo. Although TM data of the study area have not yet been received, simulation results are encouraging. It also appears that the TM filters resemble a "square-wave" closely enough to permit assuming a square-wave in calculations. Integrated band reflectance over the actual response functions was simulated, using sensor data supplied by Santa Barbara Research Center. Differences between integrating over the actual response functions and the equivalent square wave were negligible.
Siudem, Grzegorz; Fronczak, Agata; Fronczak, Piotr
2016-10-10
In this paper, we provide the exact expression for the coefficients in the low-temperature series expansion of the partition function of the two-dimensional Ising model on the infinite square lattice. This is equivalent to exact determination of the number of spin configurations at a given energy. With these coefficients, we show that the ferromagnetic-to-paramagnetic phase transition in the square lattice Ising model can be explained through equivalence between the model and the perfect gas of energy clusters model, in which the passage through the critical point is related to the complete change in the thermodynamic preferences on the size of clusters. The combinatorial approach reported in this article is very general and can be easily applied to other lattice models.
Algorithms for Nonlinear Least-Squares Problems
1988-09-01
F(x) = Σᵢ fᵢ(x)², where each fᵢ(x) is a smooth function mapping ℝⁿ to ℝ; J is the m × n Jacobian matrix of f, and g is the gradient of the nonlinear least-squares objective. [A garbled convergence bound follows in the original, involving ∇²F(x*), J(xₖ)ᵀJ(xₖ), and an O(xₖ − x*) remainder term.] For more convergence results and detailed convergence analysis for the Gauss-Newton method, see, e.g., ... for a class of nonlinear least-squares problems that includes zero-residual problems. The function Jₖ† is the pseudo-inverse of Jₖ (see, e.g.
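The Gauss-Newton iteration discussed in the report is compact enough to state directly. A sketch for a zero-residual toy problem, the class for which the method converges fastest, follows (all data and starting values are illustrative):

```python
import numpy as np

def gauss_newton(r, J, x0, iters=50, tol=1e-10):
    """Gauss-Newton for min ||r(x)||^2: solve J(x_k) dx = -r(x_k) in the
    least-squares sense (equivalently, dx = -J_k^+ r(x_k) via the pseudo-inverse)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        dx, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Zero-residual exponential fit: data generated exactly by the model.
t = np.linspace(0, 2, 30)
y = 1.5 * np.exp(-0.7 * t)
r = lambda x: x[0] * np.exp(-x[1] * t) - y
J = lambda x: np.column_stack([np.exp(-x[1] * t),
                               -x[0] * t * np.exp(-x[1] * t)])
x_star = gauss_newton(r, J, x0=[1.0, 1.0])   # converges to (1.5, 0.7)
```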
Approximating a retarded-advanced differential equation that models human phonation
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2017-11-01
In [1, 2, 3] we obtained the numerical solution of a linear mixed type functional differential equation (MTFDE), introduced initially in [4], considering the autonomous and non-autonomous cases by collocation, least squares, and finite element methods with a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed type equation with symmetric delay and advance which models human phonation. The preliminary results are promising: we obtain an accuracy comparable with the previous results.
Four-Dimensional Golden Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.
2015-02-25
The Golden search technique is a method to search a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets, to within an arbitrarily small distance, the minimum. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness of fit parameter such as chi-square, the convergence does not depend on the noise being correctly estimated or the function correctly following the chi-square statistic. And, (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
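The one-dimensional building block of the technique is the classic golden-section bracket; a minimal sketch follows (the multi-dimensional version repeats this subdivision over each parameter range):

```python
import numpy as np

def golden_section(f, a, b, tol=1e-8):
    """Classic 1D golden-section search: shrink the bracket [a, b] by the
    golden ratio each iteration, keeping the minimum inside."""
    invphi = (np.sqrt(5) - 1) / 2            # 1/phi, about 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Works on a long, shallow chi-square-like surface, no derivatives needed.
xmin = golden_section(lambda x: (x - 2.0) ** 4 + 1.0, -10, 10)
```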
NASA Astrophysics Data System (ADS)
Klees, R.; Slobbe, D. C.; Farahani, H. H.
2018-03-01
The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters involved in each method were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much stronger than expected from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
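The first of the three estimators — Tikhonov regularisation of the noise covariance inside the standard weighted least-squares formula — can be sketched as follows, with a synthetic ill-conditioned covariance whose singular values decay gradually to machine precision (all sizes and values are hypothetical):

```python
import numpy as np

def wls_regularized(A, y, C, alpha):
    """Weighted least squares with a Tikhonov-regularised noise covariance:
    x_hat = (A^T (C + aI)^-1 A)^-1 A^T (C + aI)^-1 y, using solves, not inverses."""
    Creg = C + alpha * np.eye(C.shape[0])
    CiA = np.linalg.solve(Creg, A)
    Ciy = np.linalg.solve(Creg, y)
    return np.linalg.solve(A.T @ CiA, A.T @ Ciy)

rng = np.random.default_rng(9)
U, _ = np.linalg.qr(rng.normal(size=(100, 100)))
s = np.logspace(0, -16, 100)                   # gradually decaying spectrum
C = (U * s) @ U.T                              # ill-conditioned noise covariance
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
noise = U @ (np.sqrt(s) * rng.normal(size=100))  # noise with covariance C
y = A @ x_true + noise
x_hat = wls_regularized(A, y, C, alpha=1e-8)
```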
Huang, Chenyu; Ogawa, Rei
2014-05-01
Joint scar contractures are characterized by tight bands of soft tissue that bridge the 2 ends of the joint like a web. Classical treatment methods such as Z-plasties are mainly based on 2-dimensional designs. Our square flap method is an alternative surgical method that restores the span of the web in a stereometric fashion, thereby reconstructing joint function. In total, 20 Japanese patients with joint scar contractures on the axillary (n = 10) or first digital web (n = 10) underwent square flap surgery. The maximum range of motion and commissure length were measured before and after surgery. A theoretical stereometric geometrical model of the square flap was established to compare it to the classical single (60 degree), 4-flap (45 degree), and 5-flap (60 degree) Z-plasties in terms of theoretical web reconstruction efficacy. All cases achieved 100% contracture release. The maximum range of motion and web space improved after square flap surgery (P = 0.001). Stereometric geometrical modeling revealed that the standard square flap (α = 45 degree; β = 90 degree) yields a larger flap area, length/width ratio, and postsurgical commissure length than the Z-plasties. It can also be adapted by varying angles α and β, although certain angle thresholds must be met to obtain the stereometric advantages of this method. When used to treat joint scar contractures, the square flap method can fully span the web space in a stereometric manner, thus yielding a close-to-original shape and function. Compared with the classical Z-plasties, it also provides sufficient anatomical blood supply while imposing the least physiological tension on the adjacent skin.
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
A multilevel modelling approach to analysis of patient costs under managed care.
Carey, K
2000-07-01
The growth of the managed care model of health care delivery in the USA has led to broadened interest in the performance of health care providers. This paper uses multilevel modelling to analyse the effects of managed care penetration on patient level costs for a sample of 24 medical centres operated by the Veterans Health Administration (VHA). The appropriateness of a two level approach to this problem over ordinary least squares (OLS) is demonstrated. Results indicate a modicum of difference in institutions' performance after controlling for patient effects. Facilities more heavily penetrated by the managed care model may be more effective at controlling costs of their sicker patients. Copyright 2000 John Wiley & Sons, Ltd.
Solar space- and water-heating system at Stanford University. Central Food Services Building
NASA Astrophysics Data System (ADS)
1980-05-01
The closed-loop drain-back system is described as offering dependability of gravity drain-back freeze protection, low maintenance, minimal costs, and simplicity. The system features an 840 square-foot collector and storage capacity of 1550 gallons. The acceptance testing and the predicted system performance data are briefly described. Solar performance calculations were performed using a computer design program (FCHART). Bidding, costs, and economics of the system are reviewed. Problems are discussed and solutions and recommendations given. An operation and maintenance manual is given.
Feature Detection and Curve Fitting Using Fast Walsh Transforms for Shock Tracking: Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2017-01-01
Walsh functions form an orthonormal basis set consisting of square waves. Square waves make the system well suited for detecting and representing functions with discontinuities. Given a uniform distribution of 2^p cells on a one-dimensional element, it has been proven that the inner product of the Walsh Root function for group p with every polynomial of degree ≤ (p − 1) across the element is identically zero. It has also been proven that the magnitude and location of a discontinuous jump, as represented by a Heaviside function, are explicitly identified by its Fast Walsh Transform (FWT) coefficients. These two proofs enable an algorithm that quickly provides a Weighted Least Squares fit to distributions across the element that include a discontinuity. The detection of a discontinuity enables analytic relations to locally describe its evolution and provide increased accuracy. Time-accurate examples are provided for advection, the Burgers equation, and Riemann problems (diaphragm burst) in closed tubes and de Laval nozzles. New algorithms to detect up to two C⁰ and/or C¹ discontinuities within a single element are developed for application to the Riemann problem, in which a contact discontinuity and shock wave form after the diaphragm bursts.
Showler, A T; Robinson, J R C
2008-10-01
The standard practice of two or three preemptive insecticide applications at the start of pinhead (1-2-mm-diameter) squaring followed by threshold-triggered (when 10% of randomly selected squares have oviposition punctures) insecticide applications for boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), control does not provide reliable protection of cotton, Gossypium hirsutum L., lint production. This study, conducted during 2004 and 2005, showed that three to six fewer spray applications in a "proactive" approach, in which spraying began at the start of large (5.5-8-mm-diameter) square formation and continued at approximately 7-d intervals while large squares were abundant, resulted in fewer infested squares and 1.4- to 1.7-fold more lint than the standard treatment. Fewer sprays and increased yield made proactive spraying significantly more profitable than the standard approach, which resulted in relatively low or negative economic returns. Harvest at 75% boll-split in the proactive spray regime of 2005 resulted in four-fold greater economic return than cotton harvested at 40% boll-split because of improved protection of large squares and the elimination of late-season sprays inherent to standard spray regime despite the cost of an extra irrigation in the 75% boll-split treatments. The earlier, 40% harvest trigger does not avoid high late-season boll weevil pressure, which exerts less impact on bolls, the predominant form of fruiting body at that time, than on squares. Proactive spraying and harvest timing are based on an important relationship between nutrition, boll weevil reproduction, and economic inputs; therefore, the tactic of combining proaction with harvest at 75% boll-split is applicable where boll weevils are problematic regardless of climate or region, or whether an eradication program is ongoing.
NASA Astrophysics Data System (ADS)
Horiuchi, Toshiyuki; Watanabe, Jun; Suzuki, Yuta; Iwasaki, Jun-ya
2017-05-01
Two-dimensional code marks are often used for production management. In particular, in the production lines of liquid-crystal-display panels and other devices, data on fabrication processes such as the production number and process conditions are written on each substrate or device in detail and used for quality management. For this reason, lithography systems specialized for code-mark printing have been developed. However, conventional systems using lamp projection exposure or laser scan exposure are very expensive. Therefore, the development of a low-cost exposure system using light emitting diodes (LEDs) and optical fibers with squared ends arrayed in a matrix is strongly expected. In past research, the feasibility of such a new exposure system was demonstrated using a handmade system equipped with 100 LEDs with a central wavelength of 405 nm, a 10×10 matrix of optical fibers with 1 mm square ends, and a 10X projection lens. Based on this progress, a new method for fabricating large-scale arrays of finer fibers with squared ends was developed in this paper. At most 40 plastic optical fibers were arranged in a linear gap of an arraying instrument and simultaneously squared by heating them on a hotplate at 120°C for 7 min. Fiber sizes were homogeneous within 496 ± 4 μm. In addition, the average light leak was improved from 34.4 to 21.3% by adopting the new method in place of the conventional one-by-one squaring method. Square matrix arrays necessary for printing code marks will be obtained by stacking the newly fabricated linear arrays.
Simulation-Based Approach to Determining Electron Transfer Rates Using Square-Wave Voltammetry.
Dauphin-Ducharme, Philippe; Arroyo-Currás, Netzahualcóyotl; Kurnik, Martin; Ortega, Gabriel; Li, Hui; Plaxco, Kevin W
2017-05-09
The efficiency with which square-wave voltammetry differentiates faradaic and charging currents makes it a particularly sensitive electroanalytical approach, as evidenced by its ability to measure nanomolar or even picomolar concentrations of electroactive analytes. Because of the relative complexity of the potential sweep it uses, however, the extraction of detailed kinetic and mechanistic information from square-wave data remains challenging. In response, we demonstrate here a numerical approach by which square-wave data can be used to determine electron transfer rates. Specifically, we fit the height and shape of voltammograms collected over a range of square-wave frequencies and amplitudes to simulated voltammograms parameterized by the heterogeneous rate constant and the electron transfer coefficient. As validation of the approach, we have used it to determine electron transfer kinetics in both freely diffusing and diffusionless surface-tethered species, obtaining in all cases kinetics in good agreement with values derived using non-square-wave methods.
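By way of illustration, here is a minimal sketch of the fitting loop such an approach implies, using scipy's least_squares. The simulate_swv stand-in below is a toy placeholder (not real electrochemistry), and the "measured" data are synthetic; only the wrapper structure is the point.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_swv(k0, alpha, freq, amp):
    """Toy stand-in for a digital SWV simulator -- NOT real electrochemistry.
    Replace with a proper solver; only its (k0, alpha, freq, amp) -> peak
    current interface matters to the fitting wrapper below."""
    return amp * (k0 / (k0 + freq)) ** alpha

def residuals(params, conditions, measured):
    k0, alpha = params
    model = np.array([simulate_swv(k0, alpha, f, a) for f, a in conditions])
    return model - measured

conditions = [(f, 0.05) for f in (10.0, 25.0, 60.0, 150.0, 400.0)]  # Hz, V
rng = np.random.default_rng(0)
truth = np.array([simulate_swv(80.0, 0.45, f, a) for f, a in conditions])
measured = truth + 1e-4 * rng.standard_normal(truth.size)

fit = least_squares(residuals, x0=[10.0, 0.5],
                    bounds=([1e-3, 0.0], [1e4, 1.0]),
                    args=(conditions, measured))
print(fit.x)   # recovers the (k0, alpha) used to generate the toy data
```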
NASA Astrophysics Data System (ADS)
Dong, Shidu; Yang, Xiaofan; He, Bo; Liu, Guojin
2006-11-01
Radiance coming from the interior of an uncooled infrared camera has a significant effect on the measured temperature of the object. This paper presents a three-phase compensation scheme for coping with this effect. The first phase acquires the calibration data and forms the calibration function by least-squares fitting. Likewise, the second phase obtains the compensation data and builds the compensation function by fitting. With the aid of these functions, the third phase determines the temperature of the object in question at any given ambient temperature. It is known that acquiring the compensation data of a camera is very time-consuming. To obtain the compensation data at a reasonable time cost, we propose a transplantable scheme. The idea of this scheme is to calculate the ratio between the central pixel's responsivity of the child camera to the radiance from the interior and that of the mother camera, and then to determine the compensation data of the child camera using this ratio and the compensation data of the mother camera. Experimental results show that either the child camera or the mother camera can measure the temperature of the object with an error of no more than 2°C.
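As a rough illustration of the three phases, here is a sketch with made-up calibration and compensation data; the paper's actual data, polynomial orders, and functional forms are not reproduced.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical calibration data: detector signal vs. blackbody temperature.
t_blackbody = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # deg C
signal      = np.array([1.02, 1.41, 1.83, 2.29, 2.78])    # a.u.
calib = Polynomial.fit(signal, t_blackbody, deg=2)         # phase 1

# Hypothetical compensation data: signal offset vs. ambient temperature.
t_ambient = np.array([10.0, 20.0, 30.0, 40.0])             # deg C
offset    = np.array([0.05, 0.11, 0.19, 0.30])             # a.u.
comp = Polynomial.fit(t_ambient, offset, deg=2)            # phase 2

def object_temperature(raw_signal, ambient):
    """Phase 3: remove the interior-radiance offset, then invert the
    signal-to-temperature calibration."""
    return calib(raw_signal - comp(ambient))

print(object_temperature(1.95, 25.0))
```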
Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods
NASA Astrophysics Data System (ADS)
Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.
2012-03-01
In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means of performing the examinations is to lower the milliampere-seconds (mAs) parameter as far as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and greatly degrade CT perfusion maps if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are then estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low-dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simpler prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square error (MSE) of 40% at a low radiation dose of 43 mA.
NASA Astrophysics Data System (ADS)
Becker, Roland; Vexler, Boris
2005-06-01
We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.
Realization of a thermal cloak-concentrator using a metamaterial transformer.
Liu, Ding-Peng; Chen, Po-Jung; Huang, Hsin-Haou
2018-02-06
By combining rotating squares with auxetic properties, we developed a metamaterial transformer capable of realizing metamaterials with tunable functionalities. We investigated the use of a metamaterial transformer-based thermal cloak-concentrator that can change from a cloak to a concentrator when the device configuration is transformed. We established that the proposed dual-functional metamaterial can either thermally protect a region (cloak) or focus heat flux in a small region (concentrator). The dual functionality was verified by finite element simulations and validated by experiments with a specimen composed of copper, epoxy, and rotating squares. This work provides an effective and efficient method for controlling the gradient of heat, in addition to providing a reference for other thermal metamaterials to possess such controllable functionalities by adapting the concept of a metamaterial transformer.
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, Xin-ran; Wang, Xin
2017-04-01
When the genetic algorithm is used to solve the problem of too-short-arc (TSA) orbit determination, the original method for outlier deletion is no longer applicable, owing to the difference in computing process between the genetic algorithm and the classical method. In the genetic algorithm, robust estimation is realized by introducing different loss functions into the fitness function, which solves the outlier problem of TSA orbit determination. Compared with the classical method, the genetic algorithm is greatly simplified by introducing different loss functions. A comparison of calculations with multiple loss functions shows that the least median of squares (LMS) and least trimmed squares (LTS) estimators can greatly improve the robustness of TSA orbit determination and have a high breakdown point.
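A minimal sketch of the two robust fitness functions named above, assuming the residuals have already been computed for a candidate orbit; the genetic algorithm itself is not shown.

```python
import numpy as np

def lms_fitness(residuals):
    """Least median of squares: minimize the median squared residual."""
    return np.median(residuals ** 2)

def lts_fitness(residuals, trim_fraction=0.5):
    """Least trimmed squares: sum only the h smallest squared residuals."""
    r2 = np.sort(residuals ** 2)
    h = max(1, int(np.ceil(trim_fraction * r2.size)))
    return np.sum(r2[:h])

# A genetic algorithm would then minimize lms_fitness(residuals(orbit))
# over candidate orbital elements; gross outliers barely move either score,
# which is what gives these estimators their high breakdown point.
```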
NASA Astrophysics Data System (ADS)
Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.
2013-09-01
Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike other step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of outperforming the state-of-practice in terms of computational cost and accuracy.
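A small illustration of the matrix-inversion-free coefficient computation mentioned above: Chebyshev coefficients obtained from discrete inner products at Chebyshev-Gauss nodes. This is illustrative only, not the MCPI library itself.

```python
import numpy as np

def chebyshev_coefficients(f, n_coeffs, n_nodes=64):
    """Approximate f on [-1, 1] by sum_k c_k T_k(x) using discrete
    orthogonality: no linear solve or matrix inversion required."""
    j = np.arange(n_nodes)
    theta = np.pi * (j + 0.5) / n_nodes       # Chebyshev-Gauss angles
    fx = f(np.cos(theta))                     # samples at the nodes
    k = np.arange(n_coeffs)[:, None]
    c = (2.0 / n_nodes) * (np.cos(k * theta) @ fx)
    c[0] *= 0.5                               # T_0 normalization
    return c

coeffs = chebyshev_coefficients(np.exp, 12)
x = 0.3
approx = sum(c * np.cos(k * np.arccos(x)) for k, c in enumerate(coeffs))
print(approx, np.exp(x))                      # near machine-precision match
```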
NASA Astrophysics Data System (ADS)
Parise, M.
2018-01-01
A highly accurate analytical solution is derived to the electromagnetic problem of a short vertical wire antenna located on a stratified ground. The derivation consists of three steps. First, the integration path of the integrals describing the fields of the dipole is deformed and wrapped around the pole singularities and the two vertical branch cuts of the integrands located in the upper half of the complex plane. This allows us to decompose the radiated field into its three contributions, namely the above-surface ground wave, the lateral wave, and the trapped surface waves. Next, the square root terms responsible for the branch cuts are extracted from the integrands of the branch-cut integrals. Finally, the extracted square roots are replaced with their rational representations according to Newton's square root algorithm, and the residue theorem is applied to give explicit expressions, in series form, for the fields. The rigorous integration procedure and the convergence of the square root algorithm ensure that the obtained formulas converge to the exact solution. Numerical simulations are performed to show the validity and robustness of the developed formulation, as well as its advantages in terms of time cost over standard numerical integration procedures.
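For reference, a sketch of the Newton (Babylonian) square-root iteration invoked above: each iterate is a rational function of the argument, which is what permits replacing the square roots in the integrands by rational representations.

```python
import numpy as np

def newton_sqrt(z, n_iter=6, x0=1.0):
    """Rational approximation to sqrt(z): truncating the Newton iteration
    after n_iter steps yields a rational function of z."""
    x = x0
    for _ in range(n_iter):
        x = 0.5 * (x + z / x)     # x_{k+1} = (x_k + z / x_k) / 2
    return x

z = 2.0 + 1.0j                     # also works off the branch cut
print(newton_sqrt(z), np.sqrt(z))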
Bridging the District-Charter Divide to Help More Students Succeed
ERIC Educational Resources Information Center
Lake, Robin; Yatsko, Sarah; Gill, Sean; Opalka, Alice
2017-01-01
In cities where public charter schools serve a large share of students, the costs of ongoing sector divisions and hostility across district and charter lines fall squarely on students and families. Exercising choice and accessing good schools in "high-choice cities" can be difficult for many families, especially some of the most…
Sparse matrix methods based on orthogonality and conjugacy
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1973-01-01
A matrix having a high percentage of zero elements is called sparse. In the solution of systems of linear equations or linear least squares problems involving large sparse matrices, significant savings in computer cost can be achieved by taking advantage of the sparsity. The conjugate gradient algorithm and a set of related algorithms are described.
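A minimal conjugate gradient solver in the spirit of the algorithms described, exploiting sparsity only through matrix-vector products; this is an illustration, not Lawson's original code.

```python
import numpy as np
from scipy.sparse import diags

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Sparse tridiagonal test problem: only the nonzeros are ever touched.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```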
Gonzaga, Fabiano Barbieri; Pasquini, Celio
2010-06-18
A low-cost absorption spectrophotometer for the short-wave near-infrared spectral region (850-1050 nm) is described. The spectrophotometer is basically composed of a conventional dichroic lamp, a long-pass filter, a sample cell, and a Czerny-Turner type polychromator coupled to a 1024-pixel non-cooled photodiode array. A preliminary evaluation of the spectrophotometer showed good repeatability of the first derivative of the spectra at a constant room temperature, and the possibility of assigning some spectral regions to different C-H stretching third overtones. Finally, the spectrophotometer was successfully applied to the analysis of diesel samples and the determination of some of their quality parameters using partial least squares calibration models. The values found for the root mean square error of prediction using external validation were 0.5 for the cetane index and from 2.5 to 5.0 degrees C for the temperatures achieved during distillation when obtaining 10, 50, 85, and 90% (v/v) of the distilled sample, respectively. 2010 Elsevier B.V. All rights reserved.
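A hedged sketch of a partial least squares calibration of the kind used above, with synthetic arrays standing in for the diesel NIR spectra and the laboratory reference values (the paper's data are not reproduced here).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))           # 60 spectra x 200 wavelengths
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)  # synthetic property

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((y_te - y_hat) ** 2))  # RMSE of prediction
print(rmsep)
```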
PbSe Nanocrystal Solids for n- and p-Channel Thin Film Field-Effect Transistors
NASA Astrophysics Data System (ADS)
Talapin, Dmitri V.; Murray, Christopher B.
2005-10-01
Initially poorly conducting PbSe nanocrystal solids (quantum dot arrays or superlattices) can be chemically ``activated'' to fabricate n- and p-channel field effect transistors with electron and hole mobilities of 0.9 and 0.2 square centimeters per volt-second, respectively; with current modulations of about 10^3 to 10^4; and with current density approaching 3 × 10^4 amperes per square centimeter. Chemical treatments engineer the interparticle spacing, electronic coupling, and doping while passivating electronic traps. These nanocrystal field-effect transistors allow reversible switching between n- and p-transport, providing options for complementary metal oxide semiconductor circuits and enabling a range of low-cost, large-area electronic, optoelectronic, thermoelectric, and sensing applications.
Recent Developments: PKI Square Dish for the Soleras Project
NASA Technical Reports Server (NTRS)
Rogers, W. E.
1984-01-01
The Square Dish solar collectors are subjected to rigorous design attention regarding corrosion at the site and certification of the collector structure. The microprocessor controls and tracking mechanisms are improved in the areas of fail-safe operation, durability, and low parasitic power requirements. Prototype testing demonstrates a performance efficiency of approximately 72% at a 730 F outlet temperature. Studies are conducted that include developing formal engineering design studies, developing formal engineering design drawings and fabrication details, establishing subcontracts for fabrication of major components, and developing a rigorous quality control system. The improved design is more cost-effective to produce, and the extensive manuals developed for assembly and operation/maintenance result in faster field assembly and ease of operation.
Some Results on Mean Square Error for Factor Score Prediction
ERIC Educational Resources Information Center
Krijnen, Wim P.
2006-01-01
For the confirmatory factor model, a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^(1/2) Λ_ρ′ Ψ_ρ^…
ERIC Educational Resources Information Center
Osler, James Edward
2013-01-01
This paper discusses the implementation of the Tri-Squared Test as an advanced statistical measure used to verify and validate the research outcomes of Educational Technology software. A mathematical and epistemological rationale is provided for the transformative process of qualitative data into quantitative outcomes through the Tri-Squared Test…
Vehicle Sprung Mass Estimation for Rough Terrain
2011-03-01
…distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999)… developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung… mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended…
The Least-Squares Estimation of Latent Trait Variables.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi
This paper presents a new method for estimating a given latent trait variable by the least-squares approach. The beta weights are obtained recursively with the help of Fourier series and expressed as functions of item parameters of response curves. The values of the latent trait variable estimated by this method and by maximum likelihood method…
Squared Euclidean distance: a statistical test to evaluate plant community change
Raymond D. Ratliff; Sylvia R. Mori
1993-01-01
The concepts and a procedure for evaluating plant community change using the squared Euclidean distance (SED) resemblance function are described. Analyses are based on the concept that Euclidean distances constitute a sample from a population of distances between sampling units (SUs) for a specific number of times and SUs. With different times, the distances will be...
Innovative Use of Thighplasty to Improve Prosthesis Fit and Function in a Transfemoral Amputee.
Kuiken, Todd A; Fey, Nicholas P; Reissman, Timothy; Finucane, Suzanne B; Dumanian, Gregory A
2018-01-01
Excess residual limb fat is a common problem that can impair prosthesis control and negatively impact gait. In the general population, thighplasty and liposuction are commonly performed for cosmetic reasons but not specifically to improve function in amputees. The objective of this study was to determine if these procedures could enhance prosthesis fit and function in an overweight above-knee amputee. We evaluated the use of these techniques on a 50-year-old transfemoral amputee who was overweight. The patient underwent presurgical imaging and tests to measure her residual limb tissue distribution, socket-limb interface stiffness, residual femur orientation, lower-extremity function, and prosthesis satisfaction. A medial thighplasty procedure with circumferential liposuction was performed, during which 2,812 g (6.2 lb) of subcutaneous fat and skin was removed from her residual limb. Imaging was repeated 5 months postsurgery; functional assessments were repeated 9 months postsurgery. The patient demonstrated notable improvements in socket fit and in performing most functional and walking tests. Her comfortable walking speed increased 13.3%, and her scores for the Sit-to-Stand and Four Square Step tests improved by over 20%. Femur alignment in her socket changed from 8.13 to 4.14 degrees, and analysis showed a marked increase in the socket-limb interface stiffness. This study demonstrates the potential of using a routine plastic surgery procedure to modify the intrinsic properties of the limb and to improve functional outcomes in overweight or obese transfemoral amputees. This technique is a potentially attractive option compared with repeated socket refabrications, which can be time-consuming and costly.
Anomalous structural transition of confined hard squares.
Gurin, Péter; Varga, Szabolcs; Odriozola, Gerardo
2016-11-01
Structural transitions are examined in quasi-one-dimensional systems of freely rotating hard squares confined between two parallel walls. We find two competing phases: one is a fluid in which the squares have two sides parallel to the walls, while the second is a solidlike structure with a zigzag arrangement of the squares. Using the transfer matrix method, we show that the configuration space consists of subspaces of fluidlike and solidlike phases, which are connected by low-probability microstates of mixed structures. The existence of these connecting states makes the thermodynamic quantities continuous and precludes the possibility of a true phase transition. However, the thermodynamic functions indicate a strong tendency toward a phase transition, and our replica exchange Monte Carlo simulation study detects several important markers of a first-order phase transition. Distinguishing a phase transition from a structural change is practically impossible with simulations and experiments in systems such as confined hard squares.
NASA Astrophysics Data System (ADS)
Sturrock, P. A.
2008-01-01
Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
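A minimal sketch of the underlying idea: when the measurement pdf is not normal, the null distribution of the chi-square-like statistic can be calibrated by Monte Carlo rather than read from the chi-square table. The asymmetric pdf below is illustrative only, not the GALLEX-GNO error model.

```python
import numpy as np

rng = np.random.default_rng(1)

def statistic(x, sigma):
    """Chi-square-like consistency statistic against a constant value."""
    mean = np.average(x, weights=1.0 / sigma**2)
    return np.sum(((x - mean) / sigma) ** 2)

n = 20
sigma = np.full(n, 1.0)
observed = statistic(rng.standard_normal(n), sigma)   # stand-in "data"

# Null distribution under an asymmetric (mean-centered exponential) pdf:
sims = np.array([statistic(rng.exponential(1.0, n) - 1.0, sigma)
                 for _ in range(20000)])
p_value = np.mean(sims >= observed)
print(observed, p_value)
```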
Unusual square roots in the ghost-free theory of massive gravity
NASA Astrophysics Data System (ADS)
Golovnev, Alexey; Smirnov, Fedor
2017-06-01
A crucial building block of the ghost free massive gravity is the square root function of a matrix. This is a problematic entity from the viewpoint of existence and uniqueness properties. We accurately describe the freedom of choosing a square root of a (non-degenerate) matrix. It has discrete and (in special cases) continuous parts. When continuous freedom is present, the usual perturbation theory in terms of matrices can be critically ill defined for some choices of the square root. We consider the new formulation of massive and bimetric gravity which deals directly with eigenvalues (in disguise of elementary symmetric polynomials) instead of matrices. It allows for a meaningful discussion of perturbation theory in such cases, even though certain non-analytic features arise.
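An elementary illustration of the discrete freedom described above: for a diagonalizable matrix with distinct nonzero eigenvalues, each choice of sign for each eigenvalue's root yields a distinct matrix square root.

```python
import numpy as np
from itertools import product

A = np.array([[5.0, 2.0],
              [2.0, 2.0]])                 # symmetric, positive definite
w, V = np.linalg.eigh(A)                   # eigendecomposition A = V w V^T

roots = []
for signs in product([1.0, -1.0], repeat=len(w)):
    S = V @ np.diag(np.array(signs) * np.sqrt(w)) @ V.T
    roots.append(S)
    assert np.allclose(S @ S, A)           # every branch squares back to A

print(f"{len(roots)} distinct square roots of this 2x2 matrix")
```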
Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers
NASA Astrophysics Data System (ADS)
Samiei-Esfahany, Sami; Hanssen, Ramon F.
2012-01-01
The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS, we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form to assess the impact of the distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
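A sketch of the textbook PPP setup (two measurements, 1.5 and 1.0, with a fully correlated relative error) combined by generalized least squares. The covariance construction here is the standard one that triggers the puzzle, not necessarily the paper's exact simulation.

```python
import numpy as np

x = np.array([1.5, 1.0])        # two measurements of the same quantity
rel_common = 0.20               # 20% fully correlated (common) error
rel_indep = 0.10                # 10% independent error on each point

# Covariance built from the *measured* values (this is what triggers PPP):
C = np.outer(rel_common * x, rel_common * x) + np.diag((rel_indep * x) ** 2)

ones = np.ones(2)
Cinv = np.linalg.inv(C)
mu = (ones @ Cinv @ x) / (ones @ Cinv @ ones)   # GLS estimate
print(mu)   # ~0.88, below both measurements -- the "puzzling" answer
```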
Geodesic regression on orientation distribution functions with its application to an aging study.
Du, Jia; Goh, Alvina; Kushnarev, Sergey; Qiu, Anqi
2014-02-15
In this paper, we treat orientation distribution functions (ODFs) derived from high angular resolution diffusion imaging (HARDI) as elements of a Riemannian manifold and present a method for geodesic regression on this manifold. In order to find the optimal regression model, we pose this as a least-squares problem involving the sum-of-squared geodesic distances between observed ODFs and their model fitted data. We derive the appropriate gradient terms and employ gradient descent to find the minimizer of this least-squares optimization problem. In addition, we show how to perform statistical testing for determining the significance of the relationship between the manifold-valued regressors and the real-valued regressands. Experiments on both synthetic and real human data are presented. In particular, we examine aging effects on HARDI via geodesic regression of ODFs in normal adults aged 22 years old and above. © 2013 Elsevier Inc. All rights reserved.
Comparing least-squares and quantile regression approaches to analyzing median hospital charges.
Olsen, Cody S; Clark, Amy E; Thomas, Andrea M; Cook, Lawrence J
2012-07-01
Emergency department (ED) and hospital charges obtained from administrative data sets are useful descriptors of injury severity and the burden to EDs and the health care system. However, charges are typically positively skewed due to costly procedures, long hospital stays, and complicated or prolonged treatment for few patients. The median is not affected by extreme observations and is useful in describing and comparing distributions of hospital charges. A least-squares analysis employing a log transformation is one approach for estimating median hospital charges, corresponding confidence intervals (CIs), and differences between groups; however, this method requires certain distributional properties. An alternate method is quantile regression, which allows estimation and inference related to the median without making distributional assumptions. The objective was to compare the log-transformation least-squares method to the quantile regression approach for estimating median hospital charges, differences in median charges between groups, and associated CIs. The authors performed simulations using repeated sampling of observed statewide ED and hospital charges and charges randomly generated from a hypothetical lognormal distribution. The median and 95% CI and the multiplicative difference between the median charges of two groups were estimated using both least-squares and quantile regression methods. Performance of the two methods was evaluated. In contrast to least squares, quantile regression produced estimates that were unbiased and had smaller mean square errors in simulations of observed ED and hospital charges. Both methods performed well in simulations of hypothetical charges that met least-squares method assumptions. When the data did not follow the assumed distribution, least-squares estimates were often biased, and the associated CIs had lower than expected coverage as sample size increased. Quantile regression analyses of hospital charges provide unbiased estimates even when lognormal and equal variance assumptions are violated. These methods may be particularly useful in describing and analyzing hospital charges from administrative data sets. © 2012 by the Society for Academic Emergency Medicine.
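A hedged sketch of the comparison on synthetic right-skewed "charges," using statsmodels; the variable names, group effect, and noise level are all illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n)                    # e.g., injured vs. not
charges = np.exp(7.0 + 0.4 * group + rng.normal(0, 1.2, n))  # lognormal

X = sm.add_constant(group.astype(float))

# OLS on log(cost): exp(coef) estimates the multiplicative median ratio
# only if the lognormal / equal-variance assumptions hold.
ols = sm.OLS(np.log(charges), X).fit()
print("OLS ratio:", np.exp(ols.params[1]))

# Quantile regression at q = 0.5: no distributional assumption needed.
qr = sm.QuantReg(charges, X).fit(q=0.5)
median_ratio = (qr.params[0] + qr.params[1]) / qr.params[0]
print("QuantReg ratio:", median_ratio)
```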
NASA Technical Reports Server (NTRS)
Kelly, Kenneth C.; Huang, John
1999-01-01
A highly successful Earth orbiting synthetic aperture radar (SAR) system, known as the SIR-C mission, was carried into orbit in 1994 on a U.S. Shuttle (Space Transportation System) mission. The radar system was mounted in the cargo bay with no need to fold, or in any other way reduce the size of, the antennas for launch. Weight and size were not limited for the L-Band, C-Band, and X-Band radar systems of the SIR-C radar imaging mission; the set of antennas weighed 10,500 kg, with the L-Band antenna having the major share of the weight. This paper treats designing an L-Band antenna functionally similar to that used for SIR-C, but at a fraction of the cost and at a weight on the order of 250 kg. Further, the antenna must be folded to fit into the small payload shroud of low-cost booster rocket systems. Over 31 square meters of antenna area is required. This low-weight, foldable, electronic scanning antenna is for the proposed LightSAR radar system, which is to be placed in Earth orbit on a small, dedicated spacecraft at the lowest possible cost for an efficient L-Band radar imaging system. This LightSAR spacecraft radar is to be continuously available for at least five operational years, and have the ability to map or repeat-map any area on earth within a few days of any request. A microstrip patch array, with microstrip transmission lines heavily employed in the aperture and in the corporate feed network, was chosen as the low-cost approach for this active dual-polarization, 80 MHz (6.4%) bandwidth antenna design.
Impact of a comprehensive population health management program on health care costs.
Grossmeier, Jessica; Seaverson, Erin L D; Mangen, David J; Wright, Steven; Dalal, Karl; Phalen, Chris; Gold, Daniel B
2013-06-01
Assess the influence of participation in a population health management (PHM) program on health care costs. A quasi-experimental study relied on logistic and ordinary least squares regression models to compare the costs of program participants with those of nonparticipants, while controlling for differences in health care costs and utilization, demographics, and health status. Propensity score models were developed and analyses were weighted by inverse propensity scores to control for selection bias. Study models yielded an estimated savings of $60.65 per wellness participant per month and $214.66 per disease management participant per month. Program savings were combined to yield an integrated return-on-investment of $3 in savings for every dollar invested. A PHM program yielded a positive return on investment after 2 years of wellness program and 1 year of integrated disease management program launch.
Aerodynamic parameter estimation via Fourier modulating function techniques
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1995-01-01
Parameter estimation algorithms are developed in the frequency domain for systems modeled by input/output ordinary differential equations. The approach is based on Shinbrot's method of moment functionals utilizing Fourier based modulating functions. Assuming white measurement noises for linear multivariable system models, an adaptive weighted least squares algorithm is developed which approximates a maximum likelihood estimate and cannot be biased by unknown initial or boundary conditions in the data owing to a special property attending Shinbrot-type modulating functions. Application is made to perturbation equation modeling of the longitudinal and lateral dynamics of a high performance aircraft using flight-test data. Comparative studies are included which demonstrate potential advantages of the algorithm relative to some well established techniques for parameter identification. Deterministic least squares extensions of the approach are made to the frequency transfer function identification problem for linear systems and to the parameter identification problem for a class of nonlinear-time-varying differential system models.
Fabrication Method for LOBSTER-Eye Optics in <110> Silicon
NASA Technical Reports Server (NTRS)
Chervenak, James; Collier, Michael; Mateo, Jennette
2013-01-01
Soft x-ray optics can use narrow slots to direct x-rays into a desirable pattern on a focal plane. While square-pack, square-pore, slumped optics exist for this purpose, they are costly. Silicon (Si) is being examined as a possible low-cost replacement. A fabrication method was developed for narrow slots in <110> Si demonstrating the feasibility of stacked slot optics to replace micropores. Current micropore optics exist that have 20-micron-square pores on 26-micron pitch in glass with a depth of 1 mm and an extent of several square centimeters. Among several proposals to emulate the square pore optics are stacked slot chips with etched vertical slots. When the slots in the stack are positioned orthogonally to each other, the component will approach the soft x-ray focusing observed in the micropore optics. A specific improvement Si provides is that it can have narrower sidewalls between slots to permit greater throughput of x-rays through the optics. In general, Si can have more variation in slot geometry (width, length). Further, the sidewalls can be coated with high-Z materials to enhance reflection and potentially reduce the surface roughness of the reflecting surface. Narrow, close-packed deep slots in <110> Si have been produced using potassium hydroxide (KOH) etching and a patterned silicon nitride (SiN) mask. The achieved slot geometries have sufficient wall smoothness, as observed through scanning electron microscope (SEM) imaging, to enable evaluation of these slot plates as an optical element for soft x-rays. Etches of different angles to the crystal plane of Si were evaluated to identify a specific range of etch angles that will enable low-undercut slots in the Si <110> material. These slots with the narrow sidewalls are demonstrated to several hundred microns in depth, and a technical path to 500-micron deep slots in a precision geometry of narrow, close-packed slots is feasible. Although intrinsic stress in ultrathin-wall Si is observed, slots with walls approaching 1.5 microns can be achieved (a significant improvement over the 6-micron walls in micropore optics). The major advantages of this technique are the potential for higher x-ray throughput (due to narrow slot walls) and lower cost over the existing slumped micropore glass plates. KOH etching of smooth sidewalls has been demonstrated for many applications, suggesting its feasibility for implementation in x-ray optics. Si cannot be slumped like the micropore optics, so the focusing will be achieved with millimeter-scale slot plates that populate a spherical dome. Large-scale production is possible for Si parts in a way that is more difficult to achieve for micropore parts.
Ramaswamy, Sai K; Mosher, Gretchen A
2017-07-31
Workplace injuries in the grain handling industry are common, yet little research has characterized worker injuries in grain elevators across all hazard types. Learning from past injuries is essential for preventing future occurrences, but the lack of injury information for the grain handling industry hinders this effort. The present study addresses this knowledge gap by using data from over 7000 workers' compensation claims reported from 2008 to 2016 by commercial grain handling facilities in the U.S. to characterize injury costs and severity. The total amount paid for each claim was used as a measure of injury severity. The effects of employee age and tenure, cause of injury, and body part injured on the cost of work-related injuries were investigated. Contingency tables were used to classify the variable pairs. The chi-square test and chi-square residuals were employed to evaluate the relationship between the variable pairs and identify the at-risk groups. Results showed that the employee age and tenure, cause of injury, and body part injured have a significant influence on the cost paid for the claim. Several at-risk groups were identified as a result of the analyses. Findings from the study will assist commercial grain elevators in the development of targeted safety interventions and assist grain elevator safety managers in mitigating financial and social losses from occupational injuries. Copyright© by the American Society of Agricultural Engineers.
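A minimal sketch of this kind of contingency-table analysis with made-up counts (the claims data are not public): a chi-square test plus Pearson residuals to flag at-risk cells.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: cause of injury; columns: claim-cost tier (illustrative counts).
table = np.array([[120,  60,  20],    # slips/falls
                  [ 80,  90,  50],    # machinery
                  [200,  70,  10]])   # struck-by

chi2, p, dof, expected = chi2_contingency(table)
residuals = (table - expected) / np.sqrt(expected)  # Pearson residuals
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")
print(residuals.round(2))   # |residual| > 2 suggests an at-risk group
```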
Demonstration of Minimally Machined Honeycomb Silicon Carbide Mirrors
NASA Technical Reports Server (NTRS)
Goodman, William
2012-01-01
Honeycomb silicon carbide composite mirrors are made from a carbon fiber preform that is molded into a honeycomb shape using a rigid mold. The carbon fiber honeycomb is densified by polymer infiltration pyrolysis or through a reaction with liquid silicon. A chemical vapor deposition, or chemical vapor composite (CVC), process is used to deposit a polishable silicon or silicon carbide cladding on the honeycomb structure. Alternatively, the cladding may be replaced by a freestanding, replicated CVC SiC facesheet that is bonded to the honeycomb. The resulting carbon-fiber-reinforced silicon carbide honeycomb structure is a ceramic matrix composite material with high stiffness and mechanical strength, high thermal conductivity, and low CTE (coefficient of thermal expansion). This innovation enables rapid, inexpensive manufacturing. The web thickness of the new material is less than 1 millimeter, and core geometries can be tailored. These parameters are based on precursor carbon-carbon honeycomb material made and patented by Ultracor. It is estimated at the time of this reporting that HoneySiC(Trademark) will have a net production cost on the order of $38,000 per square meter. This includes an Ultracor raw material cost of about $97,000 per square meter and a Trex silicon carbide deposition cost of $27,000 per square meter. Even at double this price, HoneySiC would beat NASA's goal of $100,000 per square meter. Cost savings are estimated to be 40 to 100 times those of current mirror technologies. The organic-rich prepreg material has a density of 56 kilograms per cubic meter. A charred carbon-carbon panel (volatile organics burnt off) has a density of 270 kilograms per cubic meter. Therefore, it is estimated that a HoneySiC panel would have a density of no more than 900 kilograms per cubic meter, which is about half that of beryllium and about one-third the density of bulk silicon carbide. It is also estimated that larger mirrors could be produced in a matter of weeks. Each cell is completely uniform, maintaining the shape of the inserted mandrel. Furthermore, the layup creates pressure that ensures node bond strength. Each node is a composite laminate using only the inherent resin system to form the bond. This contrasts starkly with the other known method of producing composite honeycomb, in which individual corrugations are formed, cured, and then bonded together in a secondary process. By varying the size of the mandrels within the layup, varying degrees of density can be achieved. Typical sizes are 3/8 and 3/16 in. (approximately 10 and 5 millimeters). Cell sizes up to 1 in. (approximately 25 millimeters) have been manufactured. Similarly, the shape of the core can be altered for a flexible honeycomb structure.
Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization
Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
Compact and low-cost humanoid hand powered by nylon artificial muscles.
Wu, Lianjun; Jung de Andrade, Monica; Saharan, Lokesh Kumar; Rome, Richard Steven; Baughman, Ray H; Tadesse, Yonas
2017-02-03
This paper focuses on design, fabrication and characterization of a biomimetic, compact, low-cost and lightweight 3D printed humanoid hand (TCP Hand) that is actuated by twisted and coiled polymeric (TCP) artificial muscles. The TCP muscles were recently introduced and provided unprecedented strain, mechanical work, and lifecycle (Haines et al 2014 Science 343 868-72). The five-fingered humanoid hand is under-actuated and has 16 degrees of freedom (DOF) in total (15 for fingers and 1 at the palm). In the under-actuated hand designs, a single actuator provides coupled motions at the phalanges of each finger. Two different designs are presented along with the essential elements consisting of actuators, springs, tendons and guide systems. Experiments were conducted to investigate the performance of the TCP muscles in response to the power input (power magnitude, type of wave form such as pulsed or square wave, and pulse duration) and the resulting actuation stroke and force generation. A kinematic model of the flexor tendons was developed to simulate the flexion motion and compare with experimental results. For fast finger movements, short high-power pulses were employed. Finally, we demonstrated the grasping of various objects using the humanoid TCP hand showing an array of functions similar to a natural hand.
Ellingwood, Nathan D; Yin, Youbing; Smith, Matthew; Lin, Ching-Long
2016-04-01
Faster and more accurate methods for registration of images are important for research involved in conducting population-based studies that utilize medical imaging, as well as for improvements in clinical applications. We present a novel computation- and memory-efficient multi-level method on graphics processing units (GPU) for performing registration of two computed tomography (CT) volumetric lung images. We developed a computation- and memory-efficient Diffeomorphic Multi-level B-Spline Transform Composite (DMTC) method to implement nonrigid mass-preserving registration of two CT lung images on the GPU. The framework consists of a hierarchy of B-Spline control grids of increasing resolution. A similarity criterion known as the sum of squared tissue volume difference (SSTVD) was adopted to preserve lung tissue mass. The use of SSTVD requires the calculation of the tissue volume, the Jacobian, and their derivatives, which makes its implementation on the GPU challenging due to memory constraints. The DMTC method enabled reduced computation and memory storage of variables, with minimal communication between the GPU and the Central Processing Unit (CPU), owing to the ability to pre-compute values. The method was assessed on six healthy human subjects. The resultant GPU-generated displacement fields were compared against the previously validated CPU counterpart fields, showing good agreement with an average normalized root mean square error (nRMS) of 0.044±0.015. Runtime and performance speedup are compared among single-threaded CPU, multi-threaded CPU, and GPU algorithms. The best performance speedup occurs at the highest resolution in the GPU implementation for the SSTVD cost and cost-gradient computations, with a speedup of 112 times that of the single-threaded CPU version and 11 times over the twelve-threaded version when considering average time per iteration, using a Nvidia Tesla K20X GPU. The proposed GPU-based DMTC method outperforms its multi-threaded CPU version in terms of runtime. The GPU version reduced total registration time to 2.9 min, compared to 12.8 min for the twelve-threaded CPU version and 112.5 min for a single-threaded CPU. Furthermore, the GPU implementation discussed in this work can be adapted for use with other cost functions that require calculation of the first derivatives. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
C/SCSC overview: approach, implementation, use. [Cost/Schedule Control Systems Criteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turf, Larry
1979-01-01
An overview of the Cost/Schedule Control Systems Criteria, known as C/SCSC or C/S Squared, is presented. In the mid-1960s, several DOD service agencies embarked on a new performance measurement concept to track cost and schedule performance on major DOD programs. The performance measurement concept of C/SCSC has expanded from DOD use to the Department of Energy (PMS), NASA (533 reports), and private industry such as shipbuilding, utilities, and construction. This paper describes the C/SCSC, covering the events leading to the C/SCSC requirement and how to approach the requirement, and discusses implementing and using the system. Many government publications, directives, and instructions on the subject are listed in the publication.
What do you measure when you measure the Hall effect?
NASA Astrophysics Data System (ADS)
Koon, D. W.; Knickerbocker, C. J.
1993-02-01
A formalism for calculating the sensitivity of Hall measurements to local inhomogeneities of the sample material or the magnetic field is developed. This Hall weighting function g(x,y) is calculated for various placements of current and voltage probes on square and circular laminar samples. Unlike the resistivity weighting function, it is nonnegative throughout the entire sample, provided all probes lie at the edge of the sample. Singularities arise in the Hall weighting function near the current and voltage probes, except in the case where these probes are located at the corners of a square. Implications of the results for cross, clover, and bridge samples, and for metal-insulator transition and quantum Hall studies, are discussed.
Estimating gene function with least squares nonnegative matrix factorization.
Wang, Guoli; Ochs, Michael F
2007-01-01
Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition, to guide the algorithm to a local minimum in normalized chi2, rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
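A hedged sketch of uncertainty-weighted multiplicative updates in the spirit of least squares NMF: each element of the reconstruction error is weighted by 1/sigma^2, driving a chi-squared rather than Euclidean cost. This illustrates the weighting idea, not the authors' exact algorithm.

```python
import numpy as np

def ls_nmf(X, sigma, rank, n_iter=500, eps=1e-12):
    """Weighted Lee-Seung-style multiplicative updates for X ~ W @ H,
    with per-element weights 1 / sigma^2."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    Wt = 1.0 / (sigma ** 2)                  # per-element weights
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ (Wt * X)) / (W.T @ (Wt * WH) + eps)
        WH = W @ H
        W *= ((Wt * X) @ H.T) / ((Wt * WH) @ H.T + eps)
    return W, H

# Synthetic test: noisy low-rank "expression" matrix with known sigma.
rng = np.random.default_rng(1)
truth = rng.random((30, 4)) @ rng.random((4, 25))
sigma = 0.05 * np.ones_like(truth)
X = truth + sigma * rng.standard_normal(truth.shape)
W, H = ls_nmf(np.clip(X, 0, None), sigma, rank=4)
chi2 = np.sum(((X - W @ H) / sigma) ** 2) / X.size
print(chi2)   # roughly of order 1 for a good weighted fit
```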
Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data
NASA Astrophysics Data System (ADS)
Lurcock, P. C.; Channell, J. E.; Lee, D.
2012-12-01
The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
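A minimal dynamic time warping sketch of the least-cost-path idea described above: a cost matrix between two series is filled, then the least-cost monotonic path maps one record's axis onto the other's. This is illustrative; the Match and simulated-annealing protocols themselves are more elaborate.

```python
import numpy as np

def dtw_path(a, b):
    """Return the least-cost monotonic alignment path between series a, b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the least-cost path from (n, m) to (1, 1).
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.append((0, 0))
    return path[::-1]

t = np.linspace(0, 1, 80)
target = np.sin(2 * np.pi * 3 * t)
warped = np.sin(2 * np.pi * 3 * t ** 1.5)   # nonlinearly stretched copy
print(dtw_path(warped, target)[:5])
```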
NASA Astrophysics Data System (ADS)
Gsponer, Andre
2009-01-01
The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the self-energy of a point electric charge is worked out in detail: the Coulomb potential and field are defined as Colombeau generalized functions, and integrals of nonlinear expressions corresponding to products of distributions (such as the square of the Coulomb field and the square of the delta function) are calculated. Finally, the methods introduced in Gsponer (2007 Eur. J. Phys. 28 267, 2007 Eur. J. Phys. 28 1021 and 2007 Eur. J. Phys. 28 1241), to deal with point-like singularities in classical electrodynamics are confirmed.
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
Cost function approach for estimating derived demand for composite wood products
T. C. Marcin
1991-01-01
A cost function approach was examined, using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
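For reference, a standard translog cost function and its Shephard's-lemma cost-share equations are shown below (the textbook form; the paper's exact specification and restrictions may differ):

```latex
\ln C = \alpha_0 + \sum_i \alpha_i \ln p_i
      + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
      + \beta_y \ln y + \tfrac{1}{2}\beta_{yy} (\ln y)^2
      + \sum_i \delta_{iy} \ln p_i \ln y,
\qquad
s_i = \frac{\partial \ln C}{\partial \ln p_i}
    = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \delta_{iy} \ln y .
```

Here C is total cost, the p_i are input prices, and y is output; symmetry (gamma_ij = gamma_ji) and linear homogeneity in prices are typically imposed as the parameter restrictions.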
2014-01-01
Background Prior to the 2007/09 Canadian Health Measures Survey, there was no nationally representative clinical data on the oral health of Canadians experiencing cost barriers to dental care. The aim of this study was to determine the oral health status and dental treatment needs of Canadians reporting cost barriers to dental care. Methods A secondary data analysis of the 2007/09 Canadian Health Measures Survey was undertaken using a sample of 5,586 Canadians aged 6 to 79. Chi square tests were conducted to test the association between reporting cost barriers to care and oral health outcomes. Logistic regressions were conducted to identify predictors of reporting cost barriers. Results Individuals who reported cost barriers to dental care had poorer oral health and more treatment needs compared to their counterparts. Conclusions Avoiding dental care and/or foregoing recommended treatment because of cost may contribute to poor oral health. This study substantiates the potential likelihood of progressive dental problems caused by an inability to treat existing conditions due to financial barriers. PMID:24962622
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequalities for stochastic analysis, we derive sufficient conditions that guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Corresponding results are also obtained for GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate the mean-square exponential stability analysis.
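For orientation, the stability notion involved is the standard one (generic symbols; τ is the maximal delay and φ the initial data): the trivial solution is mean-square exponentially stable if there exist constants M ≥ 1 and λ > 0 such that

```latex
\mathbb{E}\,\|x(t;\phi)\|^2 \;\le\; M\, e^{-\lambda t}\,
\sup_{-\tau \le s \le 0} \mathbb{E}\,\|\phi(s)\|^2 ,
\qquad t \ge 0 .
```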
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
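In the Euclidean case the regularized least-squares problem has a familiar closed form; a minimal numpy sketch (generic names, not the authors' algorithm):

```python
import numpy as np

def tikhonov_ls(A, b, alpha):
    """Euclidean Tikhonov-regularized least squares:
    minimize ||A x - b||_2^2 + alpha * ||x||_2^2,
    solved via the normal equations (A^T A + alpha I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

Replacing the Euclidean norms by polyhedral norms (e.g., the 1- or infinity-norm) removes this closed form and turns the problem into a mathematical program, typically a linear program, which is the direction of the generalizations discussed in the entry above.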
26 CFR 1.42-16 - Eligible basis reduced by federal grants.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the owner has agreed to maintain as public housing units (PH-units) in the building; (2) Are made with... difference between the rents received from a building's PH-unit tenants and a pro rata portion of the building's actual operating costs that are reasonably allocable to the PH-units (based on square footage...
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
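One classical special case of this kind of cost-constrained allocation (shown here for concreteness; it is not necessarily the paper's more general model) is the two-level design with cluster cost c1, per-subject cost c2, and intraclass correlation ρ, where the variance-minimizing group size has a closed form:

```python
import math

def optimal_allocation(budget, c_cluster, c_subject, icc):
    """Cost-optimal subjects-per-cluster for a two-level design
    (classical closed form n* = sqrt(c1 (1 - icc) / (c2 icc));
    assumes compound symmetry with intraclass correlation `icc`)."""
    n = math.sqrt(c_cluster * (1 - icc) / (c_subject * icc))
    n = max(1, round(n))                       # integral number of units
    clusters = int(budget // (c_cluster + c_subject * n))
    return clusters, n

# e.g., optimal_allocation(100_000, c_cluster=500, c_subject=20, icc=0.05)
```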
Window Treatment Phase I and Other Energy II Conservation Measures.
ERIC Educational Resources Information Center
Donohue, Philip E.
Six different energy-saving treatments for large window areas were tested by Tompkins-Cortland Community College (TCCC) to coordinate energy saving with building design. The TCCC building has an open space design with 33,000 square feet of external glass and other features causing heating problems and high energy costs. Phase I of the…
An autosampler was built to pull cotton swab heads mounted into a 3-foot long, square Al rod in ambient air through the He ionizing beam of a Direct Analysis in Real Time (DART) ion source interfaced to an orthogonal acceleration, time-of-flight mass spectrometer. The cost of th...
2005-01-01
non-steroidal anti-inflammatory drugs (NSAIDs), oral antihistamines, gastrointestinal agents, and oral ... National Defense Authorization Act; NSAID: non-steroidal anti-inflammatory drug; OLS: ordinary least squares; p-value: probability value; P&T: Pharmacy ... antihypertensives, non-steroidal anti-inflammatory drugs (NSAIDs), oral antihistamines, gastrointestinal agents, and oral hy-
Design and Evaluation of Energy Efficient Modular Classroom Structures.
ERIC Educational Resources Information Center
Brown, G. Z.; And Others
This paper describes a study that developed innovations that would enable modular builders to improve the energy performance of their classrooms without increasing their first cost. The Modern Building Systems' classroom building conforms to the stringent Oregon and Washington energy codes, and, at $18 per square foot, it is at the low end of the…
26 CFR 1.42-16 - Eligible basis reduced by federal grants.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the owner has agreed to maintain as public housing units (PH-units) in the building; (2) Are made with... difference between the rents received from a building's PH-unit tenants and a pro rata portion of the building's actual operating costs that are reasonably allocable to the PH-units (based on square footage...
26 CFR 1.42-16 - Eligible basis reduced by federal grants.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the owner has agreed to maintain as public housing units (PH-units) in the building; (2) Are made with... difference between the rents received from a building's PH-unit tenants and a pro rata portion of the building's actual operating costs that are reasonably allocable to the PH-units (based on square footage...
26 CFR 1.42-16 - Eligible basis reduced by federal grants.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the owner has agreed to maintain as public housing units (PH-units) in the building; (2) Are made with... difference between the rents received from a building's PH-unit tenants and a pro rata portion of the building's actual operating costs that are reasonably allocable to the PH-units (based on square footage...
26 CFR 1.42-16 - Eligible basis reduced by federal grants.
Code of Federal Regulations, 2014 CFR
2014-04-01
... the owner has agreed to maintain as public housing units (PH-units) in the building; (2) Are made with... difference between the rents received from a building's PH-unit tenants and a pro rata portion of the building's actual operating costs that are reasonably allocable to the PH-units (based on square footage...
System 6 alternatives: an economic analysis
Bruce G. Hansen; Hugh W. Reynolds
1984-01-01
Three System 6 mill-size alternatives were designed and evaluated to determine their overall economic potential for producing standard-size hardwood blanks. Internal rates of return ranged from about 15 to 35 percent after taxes. Cost per square foot of blanks ranged from about $0.88 to $1.19, depending on mill size and the amount of new investment required.
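The internal rate of return used here is the discount rate at which the net present value of a cash-flow stream is zero; a minimal bisection sketch (the cash flows in the comment are illustrative, not the study's data):

```python
def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-8):
    """Internal rate of return by bisection: the rate r with
    sum(cf_t / (1 + r)**t) == 0. Assumes NPV changes sign on [lo, hi]."""
    npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo) * npv(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# e.g., an initial outlay followed by ten equal annual returns:
# irr([-1000] + [200] * 10)  ->  about 0.15 (15 percent)
```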
Closed-form analysis of fiber-matrix interface stresses under thermo-mechanical loadings
NASA Technical Reports Server (NTRS)
Naik, Rajiv A.; Crews, John H., Jr.
1992-01-01
Closed form techniques for calculating fiber matrix (FM) interface stresses, using repeating square and diamond regular arrays, were presented for a unidirectional composite under thermo-mechanical loadings. An Airy stress function micromechanics approach from the literature, developed for calculating overall composite moduli, was extended in the present study to compute FM interface stresses for a unidirectional graphite/epoxy (AS4/3501-6) composite under thermal, longitudinal, transverse, transverse shear, and longitudinal shear loadings. Comparison with finite element results indicates excellent agreement of the FM interface stresses for the square array. Under thermal and longitudinal loading, the square array has the same FM peak stresses as the diamond array. The square array predicted higher stress concentrations under transverse normal and longitudinal shear loadings than the diamond array. Under transverse shear loading, the square array had a higher stress concentration while the diamond array had a higher radial stress concentration. Stress concentration factors under transverse shear and longitudinal shear loadings were very sensitive to fiber volume fraction. The present analysis provides a simple way to calculate accurate FM interface stresses for both the square and diamond array configurations.
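For orientation, an Airy stress function analysis rests on the standard plane-elasticity relations (shown here in polar coordinates; these are textbook identities, not the paper's specific solution): the stress function Φ satisfies the biharmonic equation and generates the stresses directly,

```latex
\nabla^4 \Phi = 0, \qquad
\sigma_{rr} = \frac{1}{r}\frac{\partial \Phi}{\partial r}
            + \frac{1}{r^2}\frac{\partial^2 \Phi}{\partial \theta^2}, \qquad
\sigma_{\theta\theta} = \frac{\partial^2 \Phi}{\partial r^2}, \qquad
\sigma_{r\theta} = -\frac{\partial}{\partial r}\!\left(\frac{1}{r}\frac{\partial \Phi}{\partial \theta}\right).
```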
Fast function-on-scalar regression with penalized basis expansions.
Reiss, Philip T; Huang, Lei; Mennes, Maarten
2010-01-01
Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
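A minimal sketch of the penalized-least-squares idea for function-on-scalar regression (the grid-level second-difference penalty and the Sylvester-equation route are illustrative simplifications, not the paper's implementation):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def pols_fosr(X, Y, lam):
    """Penalized OLS for function-on-scalar regression (sketch).

    X   : (n, p) scalar covariates (include a column of ones)
    Y   : (n, T) functional responses sampled on a common grid
    lam : roughness penalty weight
    Minimizes ||Y - X B||_F^2 + lam * ||B D^T||_F^2 over B (p, T),
    where D is the second-difference operator.  The normal equations
    X^T X B + lam * B D^T D = X^T Y form a Sylvester equation.
    """
    T = Y.shape[1]
    D = np.diff(np.eye(T), n=2, axis=0)        # (T-2, T) second differences
    return solve_sylvester(X.T @ X, lam * (D.T @ D), X.T @ Y)
```

Each row of the returned B is one estimated coefficient function evaluated on the grid; larger lam yields smoother curves, which is the generalized-ridge reading the abstract describes.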
Möltgen, C-V; Herdling, T; Reich, G
2013-11-01
This study demonstrates an approach, using science-based calibration (SBC), for direct coating thickness determination on heart-shaped tablets in real time. Near-infrared (NIR) spectra were collected during four full industrial pan coating operations. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film up to a film thickness of 28 μm. SBC permits calibration of the NIR spectral data without costly determined reference values, because it combines classical methods for estimating the coating signal with statistical methods for estimating the noise. The approach enabled the use of NIR to measure the film thickness increase from around 8 to 28 μm in four independent batches in real time. The developed model provided a spectroscopic limit of detection for the coating thickness of 0.64 ± 0.03 μm root-mean-square (RMS). Commonly used statistical calibration methods, such as partial least squares (PLS), require sufficiently varying reference values for calibration. For thin non-functional coatings this is a challenge, because the quality of the model depends on the accuracy of the selected calibration standards. The simple SBC approach eliminates many of the problems associated with conventional statistical methods and offers an alternative for multivariate calibration.
A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
NASA Astrophysics Data System (ADS)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
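For orientation, least-squares collocation (the method found most robust above) predicts the signal at an unobserved point p from the data vector d using the signal covariance C and the noise covariance D (standard formulation; symbols generic):

```latex
\hat{v}(p) \;=\; C_{pd}\,\bigl(C_{dd} + D\bigr)^{-1}\, d ,
```

where C_{pd} is the cross-covariance between the signal at p and the observations, and C_{dd} the covariance among the observations. Its built-in noise term and covariance-based weighting are what give the method its robustness to input errors.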
On a cost functional for H2/H(infinity) minimization
NASA Technical Reports Server (NTRS)
Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis
1990-01-01
A cost functional is proposed and investigated which is motivated by minimizing the energy in a structure using only collocated feedback. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero sum differential game.
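A closely related quantity, given the authors, is the γ-entropy of a stable transfer matrix G with ||G||_∞ < γ, which likewise overbounds the squared H2 norm (the abstract does not spell out its functional, so this is shown only as the standard object of this type):

```latex
I(G;\gamma) \;=\; -\frac{\gamma^2}{2\pi}\int_{-\infty}^{\infty}
\ln\Bigl|\det\bigl(I - \gamma^{-2}\, G(j\omega)^{*} G(j\omega)\bigr)\Bigr|\, d\omega
\;\;\ge\;\; \|G\|_2^2 ,
```

the inequality following from \(-\ln(1-x) \ge x\) applied to the squared singular values of \(G(j\omega)/\gamma\).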
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo
2016-01-01
Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324
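A numerical sketch of the square-versus-hexagon aperture comparison described above (the grid resolution, the single-axis MTF profile, and the search range are illustrative assumptions; the paper's full criterion is two-dimensional):

```python
import numpy as np

APOTHEM = 30e-3            # mm; hexagonal pixel apothem (30 um)
NYQUIST = 9.26             # cycles/mm; Nyquist of the resampled grid

# Rasterize a regular hexagon on a fine grid: a point is inside if its
# projections onto the three face normals are all within one apothem.
n, half = 2048, 1.5        # grid points and half-width of the field in mm
x = np.linspace(-half, half, n)
X, Y = np.meshgrid(x, x, indexing="ij")
hexmask = ((np.abs(X) <= APOTHEM) &
           (np.abs(0.5 * X + np.sqrt(3) / 2 * Y) <= APOTHEM) &
           (np.abs(0.5 * X - np.sqrt(3) / 2 * Y) <= APOTHEM))

otf = np.abs(np.fft.fftshift(np.fft.fft2(hexmask)))
otf /= otf.max()                                   # normalize so MTF(0) = 1
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=x[1] - x[0]))
band = (freqs >= 0) & (freqs <= NYQUIST)
hex_mtf = otf[n // 2, band]                        # profile along one axis

# A square pixel of side a has aperture MTF |sinc(a f)| along an axis;
# pick the side minimizing RMSE against the hexagon up to Nyquist.
candidates = np.linspace(0.040, 0.070, 301)        # candidate sides in mm
rmse = [np.sqrt(np.mean((np.abs(np.sinc(a * freqs[band])) - hex_mtf) ** 2))
        for a in candidates]
print("optimal square pixel ~ %.1f um" % (1e3 * candidates[np.argmin(rmse)]))
```

Under the paper's full criterion this optimization yielded 54 μm; the one-axis toy version above should land in the same neighborhood.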
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Kenny S K; Lee, Louis K Y; Xing, L
2015-06-15
Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, seven levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. The root-mean-square error was monotonically decreasing and converged after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
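The analytic core is an ordinary least-squares fit of beamlet weights, with non-positive weights pruned between levels; a minimal sketch (the function name and the pruning loop are illustrative, under the assumptions stated in the comments):

```python
import numpy as np

def optimize_fluence(B, d_target, levels=7):
    """Least-squares beamlet weighting with level-wise pruning (sketch).

    B        : (n_voxels, n_beamlets) pre-calculated beamlet dose vectors
    d_target : (n_voxels,) prescribed target dose distribution
    At each level, minimizes ||B w - d_target||^2 analytically, then
    keeps only beamlets with positive weights for the next level.
    """
    active = np.arange(B.shape[1])
    w = np.zeros(B.shape[1])
    for _ in range(levels):
        sol, *_ = np.linalg.lstsq(B[:, active], d_target, rcond=None)
        keep = sol > 0
        w[:] = 0.0
        w[active] = np.where(keep, sol, 0.0)
        if keep.all():                    # nothing pruned: converged
            break
        active = active[keep]
    return w
```

Because each level is a closed-form least-squares solve, the cost is dominated by a handful of matrix factorizations, consistent with the roughly 10-second runtimes reported above.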
Ouagal, M; Berkvens, D; Hendrikx, P; Fecher-Bourgeois, F; Saegerman, C
2012-12-01
In sub-Saharan Africa, most epidemiological surveillance networks for animal diseases were temporarily funded by foreign aid. It should be possible for national public funds to ensure the sustainability of such decision support tools. Taking the epidemiological surveillance network for animal diseases in Chad (REPIMAT) as an example, this study aims to estimate the network's cost by identifying the various costs and expenditures for each level of intervention. The network cost was estimated on the basis of an analysis of the operational organisation of REPIMAT, additional data collected in surveys and interviews with network field workers and a market price listing for Chad. These costs were then compared with those of other epidemiological surveillance networks in West Africa. The study results indicate that REPIMAT costs account for 3% of the State budget allocated to the Ministry of Livestock. In Chad in general, as in other West African countries, fixed costs outweigh variable costs at every level of intervention. The cost of surveillance principally depends on what is needed for surveillance at the local level (monitoring stations) and at the intermediate level (official livestock sectors and regional livestock delegations) and on the cost of the necessary equipment. In African countries, the cost of surveillance per square kilometre depends on livestock density.
Functional Relationships and Regression Analysis.
ERIC Educational Resources Information Center
Preece, Peter F. W.
1978-01-01
Using a degenerate multivariate normal model for the distribution of organismic variables, the form of least-squares regression analysis required to estimate a linear functional relationship between variables is derived. It is suggested that the two conventional regression lines may be considered to describe functional, not merely statistical,…
NASA Technical Reports Server (NTRS)
Aksay, Ilhan A. (Inventor); Pan, Shuyang (Inventor); Prud'Homme, Robert K. (Inventor)
2016-01-01
A nanocomposite composition having a silicone elastomer matrix having therein a filler loading of greater than 0.05 weight percent, based on total nanocomposite weight, wherein the filler is functional graphene sheets (FGS) having a surface area of from 300 square meters per gram to 2630 square meters per gram; and a method for producing the nanocomposite and uses thereof.
An Analysis of Advertising Effectiveness for U.S. Navy Recruiting
1997-09-01
This thesis estimates the effect of Navy television advertising on enlistment rates of high-quality male recruits (Armed Forces Qualification Test...). Joint television (Joint advertising is for all Armed Forces), Joint journal, and Joint direct mail advertising are also explored. Enlistments are modeled as a function of several factors, including advertising, recruiters, and economic conditions. Regression analyses (ordinary least squares and two-stage least squares) explore the
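A minimal two-stage least squares sketch (generic variable names; the thesis's actual specification with its full set of controls is richer):

```python
import numpy as np

def two_stage_ls(y, X, Z):
    """2SLS: project the endogenous regressors X onto the instruments Z,
    then run OLS of y on the fitted values.
    y : (n,), X : (n, k), Z : (n, m) with m >= k instruments."""
    # First stage: fitted values of X from the instrument regression.
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Second stage: OLS of y on the fitted regressors.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta
```

The first stage purges the advertising variables of their correlation with the error term (advertising budgets may respond to recruiting shortfalls), which is the usual rationale for preferring 2SLS over plain OLS in this setting.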