Weighted spline based integration for reconstruction of freeform wavefront.
Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra
2018-02-10
In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from slope data has been implemented. The slope data of a freeform surface contain noise arising from the machining process, which introduces reconstruction error. We have proposed a weighted cubic spline based least squares integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted into a piecewise polynomial. The fitted coefficients are determined by using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least squares technique to reconstruct the freeform wavefront. Simulation studies show improved results using the proposed technique as compared to the existing cubic spline-based integration (CSLI) and Southwell methods. The proposed reconstruction method has been experimentally implemented in a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI, which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology applications.
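A minimal one-dimensional sketch of the smoothing-then-integration idea (with a synthetic profile and an arbitrary smoothing factor standing in for the paper's locally assigned weights; the actual WCSLI operates on 2-D Shack-Hartmann slope maps with a least-squares integrator) could look like:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Simplified 1-D analogue: fit a weighted cubic smoothing spline to noisy
# slope data, then integrate the fitted spline to recover the wavefront.
# All values here are synthetic, not measurement data from the paper.
x = np.linspace(-1.0, 1.0, 200)                  # normalized pupil coordinate
true_wavefront = 0.5 * x**3 - 0.2 * x**2         # synthetic freeform profile
slope = np.gradient(true_wavefront, x)           # ideal slope data
noisy_slope = slope + np.random.normal(0.0, 0.02, x.size)

weights = np.ones_like(x)                        # per-sample weights (uniform here)
spline = UnivariateSpline(x, noisy_slope, w=weights, k=3, s=0.05)

wavefront = spline.antiderivative()(x)           # integrate the fitted slope
wavefront -= wavefront.mean()                    # remove the piston term
```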
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Based on optimization techniques yields closer correlation with data than traditional method. Involves no assumptions regarding the γ′_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
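As an illustration of the least-squares idea only (not the report's partially constrained optimizer), a Prony series with preselected, log-spaced relaxation times can be fitted by non-negative linear least squares; all values below are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# Prony series: G(t) = G_inf + sum_i g_i * exp(-t / tau_i).
# With the relaxation times tau_i fixed, the weights g_i and G_inf follow
# from a non-negative least-squares solve.
t = np.logspace(-3, 3, 100)                        # time points (s)
G_data = 2.0 + 5.0*np.exp(-t/0.01) + 3.0*np.exp(-t/1.0) + 1.0*np.exp(-t/50.0)

tau = np.logspace(-4, 4, 12)                       # assumed relaxation times
A = np.hstack([np.ones((t.size, 1)),               # column for G_inf
               np.exp(-t[:, None] / tau[None, :])])
coeffs, residual = nnls(A, G_data)
G_inf, g = coeffs[0], coeffs[1:]                   # equilibrium modulus and Prony weights
```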
Doran, Kara S.; Howd, Peter A.; Sallenger, Asbury H.
2016-01-04
Recent studies, and most of their predecessors, use tide gage data to quantify SL acceleration, ASL(t). In the current study, three techniques were used to calculate acceleration from tide gage data, and of those examined, it was determined that the two techniques based on sliding a regression window through the time series are more robust compared to the technique that fits a single quadratic form to the entire time series, particularly if there is temporal variation in the magnitude of the acceleration. The single-fit quadratic regression method has been the most commonly used technique in determining acceleration in tide gage data. The inability of the single-fit method to account for time-varying acceleration may explain some of the inconsistent findings between investigators. Properly quantifying ASL(t) from field measurements is of particular importance in evaluating numerical models of past, present, and future SLR resulting from anticipated climate change.
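A sketch of the sliding-window regression approach, with an arbitrary window length and simulated tide-gauge data rather than the records analysed in the study, might look like:

```python
import numpy as np

# Fit a quadratic to each window of the sea-level series; the acceleration
# at the window centre is twice the quadratic coefficient.
years = np.arange(1900, 2015)
sea_level = 2.0*(years - 1900) + 0.01*(years - 1900)**2 \
            + np.random.normal(0, 5, years.size)          # mm, synthetic

half_window = 15                                   # years on each side of the centre
acceleration = np.full(years.size, np.nan)
for i in range(half_window, years.size - half_window):
    sl = slice(i - half_window, i + half_window + 1)
    c2, c1, c0 = np.polyfit(years[sl], sea_level[sl], 2)
    acceleration[i] = 2.0 * c2                     # mm/yr^2 at the window centre
```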
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
Neutron/Gamma-ray discrimination through measures of fit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek
2015-07-01
Statistical tests and their underlying measures of fit can be utilized to separate neutron/gamma-ray pulses in a mixed radiation field. In this article, first the application of a sample statistical test is explained. Fit measurement-based methods require true pulse shapes to be used as reference for discrimination. This requirement makes practical implementation of these methods difficult; typically another discrimination approach should be employed to capture samples of neutrons and gamma-rays before running the fit-based technique. In this article, we also propose a technique to eliminate this requirement. These approaches are applied to several sets of mixed neutron and gamma-ray pulses obtained through different digitizers using stilbene scintillator in order to analyze them and measure their discrimination quality. (authors)
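A minimal sketch of the measure-of-fit idea, using synthetic reference shapes and a plain sum-of-squares misfit (the article's statistical tests and digitizer data are not reproduced here):

```python
import numpy as np

# Compare each measured pulse against normalized reference neutron and gamma
# pulse shapes and assign it to whichever reference gives the smaller misfit.
t = np.linspace(0.0, 1.0, 256)                            # time axis (a.u.)
ref_gamma = np.exp(-t / 0.05)                             # fast-decaying reference
ref_neutron = 0.7*np.exp(-t/0.05) + 0.3*np.exp(-t/0.4)    # reference with slower tail

def misfit(pulse, reference):
    pulse = pulse / pulse.max()                           # amplitude normalization
    return np.sum((pulse - reference) ** 2)

def classify(pulse):
    return "neutron" if misfit(pulse, ref_neutron) < misfit(pulse, ref_gamma) else "gamma"

test_pulse = 5.0 * ref_neutron + np.random.normal(0.0, 0.02, t.size)  # synthetic pulse
print(classify(test_pulse))
```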
The anatomy of floating shock fitting. [shock waves computation for flow field
NASA Technical Reports Server (NTRS)
Salas, M. D.
1975-01-01
The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.
Optical assembly of microsnap-fits fabricated by two-photon polymerization
NASA Astrophysics Data System (ADS)
Köhler, Jannis; Kutlu, Yunus; Zyla, Gordon; Ksouri, Sarah I.; Esen, Cemal; Gurevich, Evgeny L.; Ostendorf, Andreas
2017-10-01
To respond to current demands of nano- and microtechnologies, e.g., miniaturization and integration, different bottom-up strategies have been developed. These strategies are based on picking, placing, and assembly of multiple components to produce microsystems with desired features. This paper covers the fabrication of arbitrary-shaped microcomponents by two-photon polymerization and the trapping, moving, and aligning of these structures by the use of a holographic optical tweezer. The main focus is on the assembly technique based on a cantilever microsnap-fit. More precisely, mechanical properties are characterized by optical forces and a suitable geometry of the snap-fit is designed. As a result of these investigations, a fast and simple assembly technique is developed. Furthermore, disassembly is provided by an optimized design. These findings suggest that the microsnap-fit is suitable for the assembly of miniaturized systems and could broaden the application opportunities of bottom-up strategies.
Kwon, Seung Yong; Pham, Tuyen Danh; Park, Kang Ryoung; Jeong, Dae Sik; Yoon, Sungsoo
2016-06-11
Fitness classification is a technique to assess the quality of banknotes in order to determine whether they are usable. Banknote classification techniques are useful in preventing problems that arise from the circulation of substandard banknotes (such as recognition failures, or bill jams in automated teller machines (ATMs) or bank counting machines). By and large, fitness classification continues to be carried out by humans, and this can cause the problem of varying fitness classifications for the same bill by different evaluators, and requires a lot of time. To address these problems, this study proposes a fuzzy system-based method that can reduce the processing time needed for fitness classification, and can determine the fitness of banknotes through an objective, systematic method rather than subjective judgment. Our algorithm was implemented in an actual banknote counting machine. Based on the results of tests on 3856 banknotes in United States currency (USD), 3956 in Korean currency (KRW), and 2300 banknotes in Indian currency (INR) using visible light reflection (VR) and near-infrared light transmission (NIRT) imaging, the proposed method was found to yield higher accuracy than prevalent banknote fitness classification methods. Moreover, it was confirmed that the proposed algorithm can operate in real time, not only in a normal PC environment, but also in an embedded system environment of a banknote counting machine.
A modified active appearance model based on an adaptive artificial bee colony.
Abdulameer, Mohammed Hasan; Sheikh Abdullah, Siti Norul Huda; Othman, Zulaiha Ali
2014-01-01
Active appearance model (AAM) is one of the most popular model-based approaches that have been extensively used to extract features by highly accurate modeling of human faces under various physical and environmental circumstances. However, in such an active appearance model, fitting the model to the original image is a challenging task. The state of the art shows that optimization methods can be applied to resolve this problem; however, applying optimization is itself another common problem. Hence, in this paper we propose an AAM-based face recognition technique, which is capable of resolving the fitting problem of AAM by introducing a new adaptive ABC algorithm. The adaptation increases the efficiency of fitting compared with the conventional ABC algorithm. We have used three datasets in our experiments: the CASIA dataset, the property 2.5D face dataset, and the UBIRIS v1 images dataset. The results have revealed that the proposed face recognition technique performs effectively in terms of accuracy of face recognition.
Activity Detection and Retrieval for Image and Video Data with Limited Training
2015-06-10
applications. Here we propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the ... automata. For our second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity...
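A rough sketch of the mixture-of-Gaussians thresholding step mentioned in the snippet, using scikit-learn on synthetic grey-level data (the report's automata-based multi-threshold scheme is not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a 2-component Gaussian mixture to image grey levels and place the
# threshold where the posterior probabilities of the two components are
# equal.  The "image" below is a synthetic bimodal intensity sample.
pixels = np.concatenate([np.random.normal(60, 10, 5000),
                         np.random.normal(170, 20, 5000)]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)

grid = np.linspace(0, 255, 1000).reshape(-1, 1)
post = gmm.predict_proba(grid)
threshold = grid[np.argmin(np.abs(post[:, 0] - post[:, 1]))][0]
print(threshold)
```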
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
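For the α = 2 case, the graphical inverse-rate technique reduces to a straight-line fit extrapolated to the time axis; a short sketch on synthetic precursor rates (the failure time and coefficient A are made-up values) is:

```python
import numpy as np

# For alpha = 2 the inverse rate 1/Omega_dot decays linearly in time, so a
# linear fit extrapolated to zero inverse rate gives the forecast onset time.
t = np.linspace(0.0, 9.0, 50)                  # observation times (days)
t_failure, A = 10.0, 2.0                       # assumed "true" values
rate = 1.0 / (A * (t_failure - t))             # accelerating precursor rate
rate_obs = rate * (1.0 + np.random.normal(0.0, 0.05, t.size))

slope, intercept = np.polyfit(t, 1.0 / rate_obs, 1)
t_forecast = -intercept / slope                # time where inverse rate reaches zero
print(t_forecast)
```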
McCann, Cooper; Repasky, Kevin S.; Morin, Mikindra; ...
2017-05-23
Hyperspectral image analysis has benefited from an array of methods that take advantage of the increased spectral depth compared to multispectral sensors; however, the focus of these developments has been on supervised classification methods. Lack of a priori knowledge regarding land cover characteristics can make unsupervised classification methods preferable under certain circumstances. An unsupervised classification technique is presented in this paper that utilizes physically relevant basis functions to model the reflectance spectra. These fit parameters used to generate the basis functions allow clustering based on spectral characteristics rather than spectral channels and provide both noise and data reduction. Histogram splitting of the fit parameters is then used as a means of producing an unsupervised classification. Unlike current unsupervised classification techniques that rely primarily on Euclidian distance measures to determine similarity, the unsupervised classification technique uses the natural splitting of the fit parameters associated with the basis functions creating clusters that are similar in terms of physical parameters. The data set used in this work utilizes the publicly available data collected at Indian Pines, Indiana. This data set provides reference data allowing for comparisons of the efficacy of different unsupervised data analysis. The unsupervised histogram splitting technique presented in this paper is shown to be better than the standard unsupervised ISODATA clustering technique with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. Finally, this improvement is also seen as an improvement of kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA.
Changes in skill and physical fitness following training in talent-identified volleyball players.
Gabbett, Tim; Georgieff, Boris; Anderson, Steve; Cotton, Brad; Savovic, Darko; Nicholson, Lee
2006-02-01
This study investigated the effect of a skill-based training program on measurements of skill and physical fitness in talent-identified volleyball players. Twenty-six talented junior volleyball players (mean +/- SE age, 15.5 +/- 0.2 years) participated in an 8-week skill-based training program that included 3 skill-based court sessions per week. Skills sessions were designed to develop passing, setting, serving, spiking, and blocking technique and accuracy as well as game tactics and positioning skills. Coaches used a combination of technical and instructional coaching, coupled with skill-based games to facilitate learning. Subjects performed measurements of skill (passing, setting, serving, and spiking technique and accuracy), standard anthropometry (height, standing-reach height, body mass, and sum of 7 skinfolds), lower-body muscular power (vertical jump, spike jump), upper-body muscular power (overhead medicine-ball throw), speed (5- and 10-m sprint), agility (T-test), and maximal aerobic power (multistage fitness test) before and after training. Training induced significant (p < 0.05) improvements in spiking, setting, and passing accuracy and spiking and passing technique. Compared with pretraining, there were significant (p < 0.05) improvements in 5- and 10-m speed and agility. There were no significant differences between pretraining and posttraining for body mass, skinfold thickness, lower-body muscular power, upper-body muscular power, and maximal aerobic power. These findings demonstrate that skill-based volleyball training improves spiking, setting, and passing accuracy and spiking and passing technique, but has little effect on the physiological and anthropometric characteristics of players.
Video markers tracking methods for bike fitting
NASA Astrophysics Data System (ADS)
Rajkiewicz, Piotr; Łepkowska, Katarzyna; Cygan, Szymon
2015-09-01
Sports cycling has become increasingly popular over recent years. Obtaining and maintaining a proper position on the bike has been shown to be crucial for performance, comfort and injury avoidance. Various techniques of bike fitting are available - from rough settings based on body dimensions to professional services making use of sophisticated equipment and expert knowledge. Modern fitting techniques use mainly joint angles as a criterion of proper position. In this work we examine the performance of two proposed methods for dynamic cyclist position assessment based on video data recorded during stationary cycling. The proposed methods are intended for home use, to help amateur cyclists improve their position on the bike, and therefore no professional equipment is used. As a result of data processing, ranges of angles in selected joints are provided. Finally, strengths and weaknesses of both proposed methods are discussed.
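The joint-angle criterion can be illustrated with a small helper that computes the knee angle from three tracked marker positions; the pixel coordinates below are made-up frame data, not output of the proposed tracking methods:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# hypothetical marker positions (pixels) for one video frame
hip, knee, ankle = (320, 180), (400, 330), (360, 470)
print(joint_angle(hip, knee, ankle))   # knee angle for this frame
```

Repeating this over all frames of a pedalling sequence gives the range of joint angles reported to the user.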
NASA Astrophysics Data System (ADS)
Dai, Qianwei; Lin, Fangpeng; Wang, Xiaoping; Feng, Deshan; Bayless, Richard C.
2017-05-01
An integrated geophysical investigation was performed at S dam, located in the Dadu basin in China, to assess the condition of the dam curtain. The key methodology of the integrated technique was the flow-field fitting method, which allowed identification of the hydraulic connections between the dam foundation and surface water sources (upstream and downstream), and location of the anomalous leakage outlets in the dam foundation. Limitations of the flow-field fitting method were complemented with resistivity logging to identify internal erosion which had not yet developed into seepage pathways. The results of the flow-field fitting method and resistivity logging were consistent when compared with data provided by seismic tomography, borehole television, water injection tests, and rock quality designation.
Mostafavi, Kamal; Tutunea-Fatan, O Remus; Bordatchev, Evgueni V; Johnson, James A
2014-12-01
The strong advent of computer-assisted technologies in modern orthopedic surgery prompts the expansion of computationally efficient techniques built on the broad base of computer-aided engineering tools that are readily available. However, one of the common challenges faced during the current developmental phase remains the lack of reliable frameworks allowing a fast and precise conversion of the anatomical information acquired through computed tomography to a format that is acceptable to computer-aided engineering software. To address this, this study proposes an integrated and automatic framework capable of extracting and then postprocessing the original imaging data to a common planar and closed B-Spline representation. The core of the developed platform relies on the approximation of the discrete computed tomography data by means of an original two-step B-Spline fitting technique based on successive deformations of the control polygon. In addition to its rapidity and robustness, the developed fitting technique was validated to produce accurate representations that do not deviate by more than 0.2 mm with respect to alternate representations of the bone geometry obtained through different contact-based data acquisition or data processing methods. © IMechE 2014.
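As a stand-in for the paper's two-step control-polygon deformation scheme, a closed planar B-spline can be fitted to a noisy contour with SciPy's smoothing parametric spline; the contour below is synthetic, not CT-derived:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Fit a closed (periodic) smoothing B-spline to a noisy planar contour and
# resample it densely.  An ellipse with noise stands in for a bone slice.
theta = np.linspace(0.0, 2*np.pi, 200, endpoint=False)
contour_x = 20.0*np.cos(theta) + np.random.normal(0.0, 0.2, theta.size)   # mm
contour_y = 12.0*np.sin(theta) + np.random.normal(0.0, 0.2, theta.size)   # mm

tck, u = splprep([contour_x, contour_y], s=10.0, per=True)   # closed spline fit
x_fit, y_fit = splev(np.linspace(0.0, 1.0, 400), tck)        # resampled curve
```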
Hybrid PSO-ASVR-based method for data fitting in the calibration of infrared radiometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Sen; Li, Chengwei, E-mail: heikuanghit@163.com
2016-06-15
The present paper describes a hybrid particle swarm optimization-adaptive support vector regression (PSO-ASVR)-based method for data fitting in the calibration of infrared radiometer. The proposed hybrid PSO-ASVR-based method is based on PSO in combination with Adaptive Processing and Support Vector Regression (SVR). The optimization technique involves setting parameters in the ASVR fitting procedure, which significantly improves the fitting accuracy. However, its use in the calibration of infrared radiometer has not yet been widely explored. Bearing this in mind, the PSO-ASVR-based method, which is based on the statistical learning theory, is successfully used here to get the relationship between the radiation of a standard source and the response of an infrared radiometer. Main advantages of this method are the flexible adjustment mechanism in data processing and the optimization mechanism in a kernel parameter setting of SVR. Numerical examples and applications to the calibration of infrared radiometer are performed to verify the performance of PSO-ASVR-based method compared to conventional data fitting methods.
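A much-simplified sketch of the hybrid idea, with a small particle swarm tuning SVR hyperparameters on synthetic calibration data (the paper's adaptive-processing stage is omitted, and the training error stands in for a proper validation criterion):

```python
import numpy as np
from sklearn.svm import SVR

# Particles explore (log C, log gamma, log epsilon); the swarm minimizes the
# SVR residual on a synthetic radiance-response calibration curve.
# In practice a cross-validation error would be a safer fitness.
rng = np.random.default_rng(0)
radiance = np.linspace(1.0, 10.0, 60).reshape(-1, 1)
response = 0.8 * radiance.ravel()**1.1 + rng.normal(0.0, 0.05, 60)

def cost(p):
    C, gamma, eps = np.exp(p)
    model = SVR(C=C, gamma=gamma, epsilon=eps).fit(radiance, response)
    return np.mean((model.predict(radiance) - response) ** 2)

n_particles, n_iter = 15, 30
lo, hi = np.log([1e-1, 1e-3, 1e-3]), np.log([1e3, 1e1, 1e0])
pos = rng.uniform(lo, hi, (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)   # PSO update
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

best_C, best_gamma, best_eps = np.exp(gbest)
```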
Research on modified the estimates of NOx emissions combined the OMI and ground-based DOAS technique
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Li*, Ang; Xie, Pinhua; Hu, Zhaokun; Wu, Fengcheng; Xu, Jin
2017-04-01
A new method to calibrate nitrogen dioxide (NO2) lifetimes and emissions from point sources using satellite measurements, based on mobile passive differential optical absorption spectroscopy (DOAS) and multi-axis differential optical absorption spectroscopy (MAX-DOAS), is described. It is based on using the Exponentially-Modified Gaussian (EMG) fitting method to correct the line densities along the wind direction by fitting the mobile passive DOAS NO2 vertical column density (VCD). An effective lifetime and emission rate are then determined from the parameters of the fit. The obtained results were then compared with the results acquired by fitting OMI (Ozone Monitoring Instrument) NO2 using the above fitting method; the NOx emission rates were about 195.8 mol/s and 160.6 mol/s, respectively. The latter may be lower than the former because of the low spatial resolution of the satellite.
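A sketch of the EMG fit of line densities along the wind direction, using a common EMG plume parameterization and synthetic data (an effective lifetime would then follow from the fitted e-folding distance divided by the wind speed, and the emission rate from the fitted total amount divided by the lifetime):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Exponentially modified Gaussian: exponential decay (e-folding distance x0)
# convolved with a Gaussian of width sigma, plus a background b; a is the
# total integrated amount.  Parameters and data below are illustrative.
def emg(x, a, x0, mu, sigma, b):
    arg = sigma**2 / (2.0*x0**2) - (x - mu) / x0
    return (a/(2.0*x0)) * np.exp(arg) * erfc((sigma/x0 - (x - mu)/sigma)/np.sqrt(2.0)) + b

x = np.linspace(-30.0, 80.0, 200)                       # km along wind direction
line_density = emg(x, 200.0, 15.0, 0.0, 8.0, 5.0) + np.random.normal(0.0, 2.0, x.size)

popt, pcov = curve_fit(emg, x, line_density, p0=[180.0, 12.0, 0.0, 7.0, 3.0])
a_fit, x0_fit = popt[0], popt[1]                        # amount and e-folding distance
```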
Dahl, Bjørn E; Dahl, Jon E; Rønold, Hans J
2018-02-01
Suboptimal adaptation of fixed dental prostheses (FDPs) can lead to technical and biological complications. It is unclear if the computer-aided design/computer-aided manufacturing (CAD/CAM) technique improves adaptation of FDPs compared with FDPs made using the lost-wax and metal casting technique. Three-unit FDPs were manufactured by CAD/CAM based on a digital impression of a typodont model. The FDPs were made from one of five materials: pre-sintered zirconium dioxide; hot isostatically pressed zirconium dioxide; lithium disilicate glass-ceramic; milled cobalt-chromium; and laser-sintered cobalt-chromium. The FDPs made using the lost-wax and metal casting technique were used as reference. The fit of the FDPs was analysed using the triple-scan method. The fit was evaluated for both single abutments and three-unit FDPs. The average cement space varied between 50 μm and 300 μm. Insignificant differences in internal fit were observed between the CAD/CAM-manufactured FDPs, and none of the FDPs had cement spaces that were statistically significantly different from those of the reference FDP. For all FDPs, the cement space at a marginal band 0.5-1.0 mm from the preparation margin was less than 100 μm. The milled cobalt-chromium FDP had the closest fit. The cement space of FDPs produced using the CAD/CAM technique was similar to that of FDPs produced using the conventional lost-wax and metal casting technique. © 2017 Eur J Oral Sci.
Hwang, Dae-Hee; Shetty, Gautam M; Kim, Jong In; Kwon, Jae Ho; Song, Jae-Kwang; Muñoz, Michael; Lee, Jun Seop; Nha, Kyung-Wook
2013-01-01
The purpose of this prospective, randomized, computed tomography-based study was to investigate whether the press-fit technique reduces tunnel volume enlargement (TVE) and improves the clinical outcome after anterior cruciate ligament reconstruction at a minimum follow-up of 1 year compared with conventional technique. Sixty-nine patients undergoing primary ACL reconstruction using hamstring autografts were randomly allocated to either the press-fit technique group (group A) or conventional technique group (group B). All patients were evaluated for TVE and tunnel widening using computed tomography scanning, for functional outcome using International Knee Documentation Committee and Lysholm scores, for rotational stability using the pivot-shift test, and for anterior laxity using the KT-2000 arthrometer at a minimum of 1-year follow-up. There were no significant differences in TVE between the 2 groups. In group A, in which the press-fit technique was used, mean volume enlargement in the femoral tunnel was 65% compared with 71.5% in group B (P = .84). In group A, 57% (20 of 35) of patients developed femoral TVE compared with 67% (23 of 34) of patients in group B (P = .27). Both groups showed no significant difference for functional outcome (mean Lysholm score P = .73, International Knee Documentation Committee score P = .15), or knee laxity (anterior P = .78, rotational P = .22) at a minimum follow-up of 1 year. In a comparison of press-fit and conventional techniques, there were no significant differences in TVE and clinical outcome at short-term follow-up. Level II, therapeutic study, prospective randomized clinical trial. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria
2017-10-01
Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class-enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R-code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information of model fit indices in automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate fit-criteria assessment plot's utility. Results: Fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. Fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: Fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phase of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster group-based trajectory modelling's adequate use.
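The class-enumeration step that such a plot summarizes can be illustrated by fitting models with increasing numbers of latent classes and plotting several criteria together; Gaussian mixtures and simulated data stand in here for the group-based trajectory models and the R implementation described in the paper:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture

# Fit models with 1..6 latent classes and display BIC and AIC side by side,
# which is the kind of per-class fit-criteria overview the plot condenses.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)]).reshape(-1, 1)

ks = range(1, 7)
bic, aic = [], []
for k in ks:
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bic.append(gm.bic(data))
    aic.append(gm.aic(data))

plt.plot(list(ks), bic, "o-", label="BIC")
plt.plot(list(ks), aic, "s-", label="AIC")
plt.xlabel("number of latent classes")
plt.ylabel("criterion value")
plt.legend()
plt.show()
```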
TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Shi, J; Yang, Y
Purpose: To develop a simple but robust method for the early detection and evaluation of renal functions using dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with a 20 µL Ketamine/Xylazine cocktail, and then received a 200 µL injection of the iodinated contrast agent Iopamidol via the tail vein. Cone beam CT was acquired following contrast injection once per minute and up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 µm voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R²>0.8 for >85% of pixels within the kidney contour) and ROI-based (R²>0.9 for all regions) analysis. Three different functional regions: renal pelvis, medulla and cortex, were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed the half-life T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze the renal functions for different functional regions. Future studies will be performed to investigate the sensitivity of this technique in the detection of radiation induced kidney dysfunction.
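A sketch of the four-parameter double-exponential fit and the derived half-lives, applied to a simulated enhancement curve rather than the animal data (the exact functional form used in the study is not given in the abstract, so the wash-in/wash-out model below is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-exponential wash-in/wash-out model: amplitude and rate constant for
# each phase; half-lives follow from T1/2 = ln(2)/k.
def enhancement(t, a_in, k_in, a_out, k_out):
    return a_in*(1.0 - np.exp(-k_in*t)) - a_out*(1.0 - np.exp(-k_out*t))

t = np.linspace(0.0, 25.0, 26)                            # minutes after injection
signal = enhancement(t, 900.0, 0.7, 500.0, 0.04) + np.random.normal(0.0, 10.0, t.size)

popt, pcov = curve_fit(enhancement, t, signal, p0=[800.0, 0.5, 400.0, 0.05])
a_in, k_in, a_out, k_out = popt
t_half_in, t_half_out = np.log(2)/k_in, np.log(2)/k_out   # wash-in / wash-out half-lives
```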
Panzica, Martin; Janzik, Janne; Bobrowitsch, Evgenij; Krettek, Christian; Hawi, Nael; Hurschler, Christof; Jagodzinski, Michael
2015-11-01
To date, various surgical techniques to treat posterolateral knee instability have been described. Recent studies recommended an anatomical and isometric reconstruction of the posterolateral corner addressing the key structures, such as the lateral collateral ligament (LCL), popliteus tendon (POP) and popliteofibular ligament (PFL). Two clinically established autologous or local reconstruction methods of the posterolateral complex were tested with knot-bone cylinder press-fit fixation to assess the efficacy of each reconstruction technique in comparison with the intact knee. The knot-bone cylinder press-fit fixation for both anatomic and isometric reconstruction techniques of the posterolateral complex shows biomechanical stability equal to the intact posterolateral knee structures. This was a controlled laboratory study. Two surgical techniques (Larson: fibula-based semitendinosus autograft for LCL and PFL reconstruction; Kawano: biceps femoris and iliotibial tract autograft for LCL, PFL and POP reconstruction) with press-fit fixation were used for restoration of posterolateral knee stability. Seven cadaveric knees (66 ± 3.4 years) were tested under three conditions: intact knee, sectioned state and reconstructed knee for each surgical technique. Biomechanical stress tests were performed for every state at 30° and 90° knee flexion for anterior-posterior translation (60 N), internal-external and varus-valgus rotation (5 Nm) at 0°, 30° and 90° using a kinemator (Kuka robot). At 30° and 90° knee flexion, no significant differences between the four knee states were registered for anterior-posterior translation loading. Internal-external and varus-valgus rotational loading showed significantly higher instability for the sectioned state than for the intact or reconstructed posterolateral structures (p < 0.05). There were no significant differences between the intact and reconstructed knee states for internal-external rotation, varus-valgus rotation and anterior-posterior translation at any flexion angle (p > 0.05). Comparing both reconstruction techniques, significantly higher varus/valgus stability was registered for the fibula-based Larson technique at 90° knee flexion (p < 0.05). Both PLC reconstructions showed biomechanical stability equal to the intact posterolateral knee structures when using knot-bone cylinder press-fit fixation. We registered restoration of the rotational and varus-valgus stability with both surgical techniques. The anterior-posterior translational stability was not influenced significantly. The Larson technique showed significantly higher varus/valgus stability in 90° flexion. The latter is easier to perform and takes half the preparation time, but needs grafting of the semitendinosus tendon. The Kawano reconstruction technique is an interesting alternative in cases of missing autografts.
Jadhav, Vivek Dattatray; Motwani, Bhagwan K; Shinde, Jitendra; Adhapure, Prasad
2017-01-01
The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. This study was divided into three parts. In Part I, the marginal fit of full metal crowns made by both casting techniques in the vertical direction was checked, in Part II, the fit of sectional metal crowns in the horizontal direction made by both casting techniques was checked, and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. A conventional technique was compared with an accelerated technique. In Part I of the study, the marginal fit of the full metal crowns as well as in Part II, the horizontal fit of sectional metal crowns made by both casting techniques was determined, and in Part III, the surface roughness of castings made with the same techniques was compared. The results of the t-test and independent sample test do not indicate statistically significant differences in the marginal discrepancy detected between the two casting techniques. For the marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness.
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
Passive fit and accuracy of three dental implant impression techniques.
Al Quran, Firas A; Rashdan, Bashar A; Zomar, AbdelRahman A Abu; Weiner, Saul
2012-02-01
To reassess the accuracy of three impression techniques relative to the passive fit of the prosthesis. An edentulous maxillary cast was fabricated in epoxy resin with four dental implants embedded and secured with heat-cured acrylic resin. Three techniques were tested: closed tray, open tray nonsplinted, and open tray splinted. One light-cured custom acrylic tray was fabricated for each impression technique, and transfer copings were attached to the implants. Fifteen impressions for each technique were prepared with medium-bodied consistency polyether. Subsequently, the impressions were poured in type IV die stone. The distances between the implants were measured using a digital micrometer. The statistical analysis of the data was performed with ANOVA and a one-sample t test at a 95% confidence interval. The lowest mean difference in dimensional accuracy was found within the direct (open tray) splinted technique. Also, the one-sample t test showed that the direct splinted technique has the least statistically significant difference from the direct nonsplinted and indirect (closed tray) techniques. All discrepancies were less than 100 μm. Within the limitations of this study, the best accuracy of the definitive prosthesis was achieved when the impression copings were splinted with autopolymerized acrylic resin, sectioned, and rejoined. However, the errors associated with all of these techniques were less than 100 μm, and based on the current definitions of passive fit, they all would be clinically acceptable.
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
Hongtao, Li; Shichao, Chen; Yanjun, Han; Yi, Luo
2013-01-14
A feedback method combined with a fitting technique based on variable separation mapping is proposed to design freeform optical systems for an extended LED source with prescribed illumination patterns, especially with uniform illuminance distribution. The feedback process performs well with extended sources, while the fitting technique contributes not only to reducing the number of sub-surface pieces in discontinuous freeform lenses, which may cause losses in manufacture, but also to reducing the number of feedback iterations. It is shown that the light control efficiency can be improved by 5%, while keeping a high uniformity of 82%, with only two feedback iterations and one fitting operation. Furthermore, the polar angle θ and azimuthal angle φ are used to specify the light direction from the light source, and the (θ,φ)-(x,y) based mapping and feedback strategy ensures that even if a few discontinuous sections along the equi-φ plane exist in the system, they are perpendicular to the base plane, making the surfaces eligible for manufacturing by injection molding.
NASA Astrophysics Data System (ADS)
Gugsa, Solomon A.; Davies, Angela
2005-08-01
Characterizing an aspheric micro lens is critical for understanding the performance and providing feedback to the manufacturing. We describe a method to find the best-fit conic of an aspheric micro lens using a least squares minimization and Monte Carlo analysis. Our analysis is based on scanning white light interferometry measurements, and we compare the standard rapid technique, where a single measurement is taken of the apex of the lens, to the more time-consuming stitching technique, where more surface area is measured. Both are corrected for tip/tilt based on a planar fit to the substrate. Four major parameters and their uncertainties are estimated from the measurement and a chi-square minimization is carried out to determine the best-fit conic constant. The four parameters are the base radius of curvature, the aperture of the lens, the lens center, and the sag of the lens. A probability distribution is chosen for each of the four parameters based on the measurement uncertainties and a Monte Carlo process is used to iterate the minimization process. Eleven measurements were taken, and data are also chosen randomly from the group during the Monte Carlo simulation to capture the measurement repeatability. A distribution of best-fit conic constants results, where the mean is a good estimate of the best-fit conic and the distribution width represents the combined measurement uncertainty. We also compare the Monte Carlo process for the stitched and unstitched data. Our analysis allows us to analyze the residual surface error in terms of Zernike polynomials and determine uncertainty estimates for each coefficient.
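A condensed sketch of the least-squares conic fit with a Monte Carlo loop over an assumed radius-of-curvature uncertainty (the full analysis also perturbs the aperture, lens centre and sag, and resamples the eleven measurements; all numbers below are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

# Standard conic sag equation z(r) for base radius R and conic constant k.
def conic_sag(r, R, k):
    return r**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + k) * r**2 / R**2)))

r = np.linspace(0.0, 0.4, 100)                      # mm, within the lens aperture
z_meas = conic_sag(r, R=1.0, k=-0.6) + np.random.normal(0.0, 2e-5, r.size)  # synthetic sag

def fit_k(R):
    # least-squares estimate of k for a given (perturbed) base radius
    res = least_squares(lambda p: conic_sag(r, R, p[0]) - z_meas,
                        x0=[0.0], bounds=(-3.0, 3.0))
    return res.x[0]

R_samples = np.random.normal(1.0, 0.002, 500)       # assumed radius uncertainty
k_samples = np.array([fit_k(R) for R in R_samples])
k_best, k_uncertainty = k_samples.mean(), k_samples.std()
```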
Gunsoy, S; Ulusoy, M
2016-01-01
The purpose of this study was to evaluate the internal and marginal fit of chrome-cobalt (Co-Cr) crowns fabricated with laser sintering, computer-aided design (CAD) and computer-aided manufacturing, and conventional methods. Polyamide master and working models were designed and fabricated. The models were initially designed with a software application for three-dimensional (3D) CAD (Maya, Autodesk Inc.). All models were produced by a 3D printer (EOSINT P380 SLS, EOS). 128 1-unit Co-Cr fixed dental prostheses were fabricated with four different techniques: conventional lost-wax method, milled wax with lost-wax method (MWLW), direct laser metal sintering (DLMS), and milled Co-Cr (MCo-Cr). The cement film thickness of the marginal and internal gaps was measured by an observer using a stereomicroscope after taking digital photos at ×24 magnification. The best fit rates, according to the means and standard deviations of all measurements (in μm), were found with DLMS in both premolar (65.84) and molar (58.38) models. A significant difference was found between DLMS and the rest of the fabrication techniques (P < 0.05). No significant difference was found between MCo-Cr and MWLW among the fabrication techniques in both premolar and molar models (P > 0.05). Based on the results, DLMS was the best-fitting fabrication technique for single crowns. The best fit was found at the margin; the largest gap was found at the occlusal surface. All groups were within the clinically acceptable misfit range.
High-precision gauging of metal rings
NASA Astrophysics Data System (ADS)
Carlin, Mats; Lillekjendlie, Bjorn
1994-11-01
Raufoss AS designs and produces air brake fittings for trucks and buses on the international market. One of the critical components in the fittings is a small, circular metal ring, which goes through 100% dimension control. This article describes a low-price, high-accuracy solution developed at SINTEF Instrumentation based on image metrology and a subpixel resolution algorithm. The measurement system consists of a PC plug-in transputer video board, a CCD camera, telecentric optics and a machine vision strobe. We describe the measurement technique in some detail, as well as the robust statistical techniques found to be essential in the real-life environment.
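Once subpixel edge points of the ring have been extracted, a geometric fit gives centre and diameter; a minimal algebraic (Kasa) least-squares circle fit on synthetic edge points, offered only as a generic illustration of such gauging, looks like:

```python
import numpy as np

# Kasa fit: write the circle as x^2 + y^2 = 2*cx*x + 2*cy*y + c with
# c = r^2 - cx^2 - cy^2, and solve the resulting linear least-squares problem.
theta = np.random.uniform(0.0, 2*np.pi, 400)
x = 12.5 + 4.0*np.cos(theta) + np.random.normal(0.0, 0.01, theta.size)  # mm, edge points
y = 8.0 + 4.0*np.sin(theta) + np.random.normal(0.0, 0.01, theta.size)

A = np.column_stack([2*x, 2*y, np.ones_like(x)])
b = x**2 + y**2
(cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
radius = np.sqrt(c + cx**2 + cy**2)
print(cx, cy, 2.0 * radius)          # centre and fitted diameter
```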
Tandon, Nikhil; Kalra, Sanjay; Balhara, Yatan Pal Singh; Baruah, Manash P.; Chadha, Manoj; Chandalia, Hemraj B.; Chowdhury, Subhankar; Jothydev, Kesavadev; Kumar, Prasanna K. M.; V., Madhu S.; Mithal, Ambrish; Modi, Sonal; Pitale, Shailesh; Sahay, Rakesh; Shukla, Rishi; Sundaram, Annamalai; Unnikrishnan, Ambika G.; Wangnoo, Subhash K.
2015-01-01
As injectable therapies such as human insulin, insulin analogs, and glucagon-like peptide-1 receptor agonists are used to manage diabetes, correct injection technique is vital for the achievement of glycemic control. The Forum for Injection Technique India acknowledged this need for the first time in India and worked to develop evidence-based recommendations on insulin injection technique, to assist healthcare practitioners in their clinical practice. PMID:25932385
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
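A "transform both sides" sketch of the Box-Cox idea applied to a nonlinear concentration-response fit; the four-parameter log-logistic model, the simulated data and the bounds are illustrative assumptions, not the protocols analysed in the paper:

```python
import numpy as np
from scipy import stats
from scipy.special import boxcox
from scipy.optimize import curve_fit

# Estimate a Box-Cox lambda from the responses, then fit the model with both
# the observed and the predicted responses transformed by the same lambda,
# which helps stabilize variance and non-normality in the residuals.
def loglogistic(conc, bottom, top, ec50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

conc = np.repeat([0.1, 0.3, 1.0, 3.0, 10.0, 30.0], 5)
resp = loglogistic(conc, 5.0, 100.0, 2.0, 2.5) * np.random.lognormal(0.0, 0.08, conc.size)

resp_t, lam = stats.boxcox(resp)                       # lambda estimated from the data

def model_t(conc, bottom, top, ec50, slope):
    return boxcox(loglogistic(conc, bottom, top, ec50, slope), lam)

p0 = [5.0, 100.0, 2.0, 2.0]
bounds = ([0.1, 50.0, 0.01, 0.1], [30.0, 200.0, 50.0, 10.0])   # keeps predictions positive
params, cov = curve_fit(model_t, conc, resp_t, p0=p0, bounds=bounds)
```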
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects, such as device switching effects, which generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling not only in the frequency domain but also for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation of the model.
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.
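The Random Decrement signature itself is simple to compute: average the response segments that follow each crossing of a trigger level; a sketch on a simulated single-degree-of-freedom response is below (the subsequent least-squares fit for the mass, damping and stiffness matrices is not shown):

```python
import numpy as np

# Random Decrement: segments starting at each upward crossing of the trigger
# level are ensemble-averaged; the average approximates the free decay
# (homogeneous response) of the system.
def random_decrement(x, trigger, seg_len):
    idx = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    idx = idx[idx + seg_len <= x.size]
    segments = np.array([x[i:i + seg_len] for i in idx])
    return segments.mean(axis=0)

# simulated randomly excited single-degree-of-freedom response (synthetic)
dt, n = 0.01, 20000
omega, zeta = 2*np.pi*2.0, 0.03
x, v = np.zeros(n), 0.0
for i in range(1, n):
    a = -2*zeta*omega*v - omega**2*x[i-1] + np.random.normal(0.0, 1.0)
    v += a*dt
    x[i] = x[i-1] + v*dt

signature = random_decrement(x, trigger=x.std(), seg_len=500)
```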
Fitting Neuron Models to Spike Trains
Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
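A minimal sketch of the basic fit the abstract builds on: a cumulative-Gaussian psychometric function fitted by maximum likelihood with the Nelder-Mead simplex. The stimulus levels and response counts are made-up data, and the bias-reduction step itself is not reproduced.

```python
# Hedged sketch: ML fit of a cumulative-Gaussian psychometric function (Nelder-Mead).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

levels = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])   # stimulus levels (illustrative)
n_trials = np.full(7, 20)
n_yes = np.array([1, 3, 6, 11, 15, 18, 20])                  # "yes" responses per level

def neg_log_likelihood(params):
    mu, log_sigma = params
    p = norm.cdf(levels, loc=mu, scale=np.exp(log_sigma))
    p = np.clip(p, 1e-9, 1 - 1e-9)                           # avoid log(0)
    return -np.sum(n_yes * np.log(p) + (n_trials - n_yes) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print("threshold (mu):", mu_hat, "spread (sigma):", sigma_hat)
```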
Application of a first impression triage in the Japan railway west disaster.
Hashimoto, Atsunori; Ueda, Takahiro; Kuboyama, Kazutoshi; Yamada, Taihei; Terashima, Mariko; Miyawaki, Atsushi; Nakao, Atsunori; Kotani, Joji
2013-01-01
On April 25, 2005, a Japanese express train derailed into a building, resulting in 107 deaths and 549 injuries. We used "First Impression Triage (FIT)", our new triage strategy based on general inspection and palpation without counting pulse/respiratory rates, and determined the feasibility of FIT in the chaotic situation of treating a large number of injured people in a brief time period. The subjects included 39 patients who required hospitalization among 113 victims transferred to our hospital. After initial assessment with FIT by an emergency physician, patients were retrospectively reassessed with the preexisting modified Simple Triage and Rapid Treatment (START) methodology, based on Injury Severity Score, probability of survival, and ICU stay. FIT resulted in a shorter waiting time for triage. FIT designations comprised 11 red (immediate) and 28 yellow (delayed), while START assigned six to red and 32 to yellow. There were no statistical differences between FIT and START in the accuracy rate calculated by means of probability of survival and ICU stay. Overall validity and reliability of FIT determined by outcome assessment were similar to those of START. FIT would be a simple and accurate technique to quickly triage a large number of patients.
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
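A minimal sketch of the separable (variable-projection) least-squares idea applied here: the nonlinear search runs only over the rate constants, while the linear amplitudes are solved exactly by linear least squares at every candidate point. The two-exponential model below is a generic stand-in, not the multi-tracer PET compartment model itself.

```python
# Hedged sketch of separable least squares: nonlinear search over rates only,
# linear amplitudes recovered by lstsq inside the objective.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 60.0, 121)
true_rates, true_amps = np.array([0.05, 0.5]), np.array([2.0, 1.0])
rng = np.random.default_rng(0)
y = sum(a * np.exp(-k * t) for a, k in zip(true_amps, true_rates))
y = y + 0.02 * rng.standard_normal(t.size)

def projected_cost(log_rates):
    rates = np.exp(log_rates)                          # keep rates positive
    basis = np.exp(-np.outer(t, rates))                # columns: exp(-k_j t)
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)   # linear parameters solved exactly
    return np.sum((basis @ amps - y) ** 2)

fit = minimize(projected_cost, x0=np.log([0.01, 1.0]), method="Nelder-Mead")
print("estimated rates:", np.exp(fit.x))
```

Because the amplitudes never enter the nonlinear search, the dimensionality of that search drops from four parameters to two in this toy case, which is the benefit the paper exploits for the much larger multi-tracer problem.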
Application of separable parameter space techniques to multi-tracer PET compartment modeling
NASA Astrophysics Data System (ADS)
Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.
2016-02-01
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
NASA Astrophysics Data System (ADS)
McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.
2016-12-01
Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets based on the Landsat surface reflectance data product as a calibration target was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allows inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
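A minimal sketch of the reduction step described above: each pixel's spectrum is projected onto a small basis by linear least squares, and a histogram of one fitted coefficient is inspected for natural splits. The toy basis (constant, slope, three Gaussians) and the synthetic spectra are assumptions; the paper's nine biophysically motivated basis functions are not reproduced.

```python
# Hedged sketch: basis-function fit of spectra followed by a parameter histogram.
import numpy as np

wavelengths = np.linspace(400.0, 1000.0, 80)            # 80 bands, as in the abstract

def basis_matrix(wl):
    centers = [550.0, 680.0, 860.0]
    cols = [np.ones_like(wl), (wl - wl.mean()) / 300.0]
    cols += [np.exp(-0.5 * ((wl - c) / 40.0) ** 2) for c in centers]
    return np.column_stack(cols)                         # shape (80, 5)

B = basis_matrix(wavelengths)

# Fake set of pixel spectra; in practice these are calibrated reflectances.
rng = np.random.default_rng(1)
coeffs_true = rng.normal(size=(1000, B.shape[1]))
spectra = coeffs_true @ B.T + 0.01 * rng.standard_normal((1000, B.shape[0]))

# One lstsq call fits all pixels at once: solve B @ C ~= spectra.T
coeffs, *_ = np.linalg.lstsq(B, spectra.T, rcond=None)   # shape (5, 1000)
hist, edges = np.histogram(coeffs[2], bins=50)           # histogram of one fitted parameter
print("coefficients per pixel:", coeffs.T.shape, "histogram peak bin:", hist.argmax())
```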
Techniques for estimating selected streamflow characteristics of rural unregulated streams in Ohio
Koltun, G.F.; Whitehead, Matthew T.
2002-01-01
This report provides equations for estimating mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and streamflow quartiles (the 25th-, 50th-, and 75th-percentile streamflows) as a function of selected basin characteristics for rural, unregulated streams in Ohio. The equations were developed from streamflow statistics and basin-characteristics data for as many as 219 active or discontinued streamflow-gaging stations on rural, unregulated streams in Ohio with 10 or more years of homogenous daily streamflow record. Streamflow statistics and basin-characteristics data for the 219 stations are presented in this report. Simple equations (based on drainage area only) and best-fit equations (based on drainage area and at least two other basin characteristics) were developed by means of ordinary least-squares regression techniques. Application of the best-fit equations generally involves quantification of basin characteristics that require or are facilitated by use of a geographic information system. In contrast, the simple equations can be used with information that can be obtained without use of a geographic information system; however, the simple equations have larger prediction errors than the best-fit equations and exhibit geographic biases for most streamflow statistics. The best-fit equations should be used instead of the simple equations whenever possible.
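A minimal sketch of the "simple" (drainage-area-only) style of equation described above, fitted by ordinary least squares. The synthetic areas and flows, and the assumption that the regression is done in log space, are illustrative only.

```python
# Hedged sketch: OLS regression of a streamflow statistic on drainage area in log space.
import numpy as np

rng = np.random.default_rng(2)
drainage_area = rng.uniform(5.0, 500.0, 60)                              # mi^2 (illustrative)
mean_flow = 1.2 * drainage_area ** 0.95 * rng.lognormal(0.0, 0.15, 60)   # cfs (illustrative)

# log10(Q) = b0 + b1 * log10(A), fitted by ordinary least squares
X = np.column_stack([np.ones(60), np.log10(drainage_area)])
b, *_ = np.linalg.lstsq(X, np.log10(mean_flow), rcond=None)
print(f"log10(Q) = {b[0]:.3f} + {b[1]:.3f} * log10(A)")
```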
Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung
2018-01-01
In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447
The Accuracy of Four Impression-making Techniques in Angulated Implants Based on Vertical Gap
Saboury, Abolfazl; Neshandar Asli, Hamid; Dalili Kajan, Zahra
2017-01-01
Statement of the Problem: Precision of the impression taken from implant positions significantly determines accurate fit of implant-supported prostheses. An imprecise impression may produce prosthesis misfit. Purpose: This study aimed to evaluate the accuracy of four impression-making techniques for angulated implants by stereomicroscope through measuring the vertical marginal gaps between the cemented metal framework and the implant analog. Materials and Method: A definitive cast with two 15° mesially angulated implants served as the standard reference for making all the impressions and later for accuracy evaluation. Four groups of five samples were evaluated: (1) closed-tray snap-fit transfer, (2) open-tray nonsplinted impression coping, (3) metal splinted impression coping, and (4) fabricated acrylic resin transfer cap. A gold-palladium framework was fabricated over the angulated implant abutments, the fit of which was used as reference. The gaps between the metal framework and the implant analogs were measured in sample groups. Corresponding means for each technique and the definitive cast were compared by using ANOVA and post hoc tests. Results: The mean marginal gap was 38.16±0µm in the definitive cast, 89±19.74µm in group 1, 78.66±20.63µm in group 2, 54.16±24.29µm in group 3, and 55.83±18.30µm in group 4. ANOVA revealed significant differences between the definitive cast and groups 1 and 2, but not with groups 3 and 4 (p<0.05). Conclusion: Vertical gap measurements showed that the metal splinted impression coping and fabricated acrylic resin transfer cap techniques produced more accurate impressions than the closed-tray snap-fit transfer and open-tray nonsplinted impression coping techniques. The fabricated acrylic resin transfer cap technique seems to be a reliable impression-making method. PMID:29201973
Fit of screw-retained fixed implant frameworks fabricated by different methods: a systematic review.
Abduo, Jaafar; Lyons, Karl; Bennani, Vincent; Waddell, Neil; Swain, Michael
2011-01-01
The aim of this study was to review the published literature investigating the accuracy of fit of fixed implant frameworks fabricated using different materials and methods. A comprehensive electronic search was performed through PubMed (MEDLINE) using Boolean operators to combine key words. The search was limited to articles written in English and published through May 2010. In addition, a manual search through articles and reference lists retrieved from the electronic search and peer-reviewed journals was also conducted. A total of 248 articles were retrieved, and 26 met the specified inclusion criteria for the review. The selected articles assessed the fit of fixed implant frameworks fabricated by different techniques. The investigated fabrication approaches were one-piece casting, sectioning and reconnection, spark erosion with an electric discharge machine, computer-aided design/computer-assisted manufacturing (CAD/CAM), and framework bonding to prefabricated abutment cylinders. Cast noble metal frameworks have a predictable fit, and additional fit refinement treatment is not indicated in well-controlled conditions. Base metal castings do not provide a satisfactory level of fit unless additional refinement treatment is performed, such as sectioning and laser welding or spark erosion. Spark erosion, framework bonding to prefabricated abutment cylinders, and CAD/CAM have the potential to provide implant frameworks with an excellent fit; CAD/CAM is the most consistent and least technique-sensitive of these methods.
A CAD System for Evaluating Footwear Fit
NASA Astrophysics Data System (ADS)
Savadkoohi, Bita Ture; de Amicis, Raffaele
With the great growth in footwear demand, the footwear manufacturing industry must provide footwear that fulfills consumers' requirements better than its competitors in order to achieve commercial success. Accurate fit is an important factor in shoe comfort and functionality. Footwear fitting has long relied on manual measurement, but the development of 3D acquisition devices and the advent of powerful 3D visualization and modeling techniques for automatically analyzing, searching, and interpreting models have now made automatic determination of different foot dimensions feasible. In this paper, we propose an approach for finding footwear fit within a shoe-last database. We first align the 3D models using "weighted" Principal Component Analysis (WPCA). After solving the alignment problem, we use an efficient algorithm for cutting the 3D model in order to find the footwear fit from the shoe-last database.
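A minimal sketch of weighted PCA alignment as described above: a weighted mean and weighted covariance are computed over the 3D vertices, and the covariance eigenvectors give the model's principal axes. The point cloud and the per-vertex weights are stand-ins; the paper's actual weighting scheme is not reproduced.

```python
# Hedged sketch: weighted PCA alignment of a 3D point cloud.
import numpy as np

rng = np.random.default_rng(3)
points = rng.normal(size=(5000, 3)) * np.array([30.0, 10.0, 4.0])   # toy foot/last point cloud
weights = rng.uniform(0.5, 1.5, size=5000)                           # stand-in vertex weights

w = weights / weights.sum()
mean = (points * w[:, None]).sum(axis=0)
centered = points - mean
cov = (centered * w[:, None]).T @ centered                           # weighted covariance
eigvals, eigvecs = np.linalg.eigh(cov)
axes = eigvecs[:, ::-1]                                               # principal axes, largest first
aligned = centered @ axes                                             # model rotated into the PCA frame
print("weighted principal axis scales:", np.sqrt(eigvals[::-1]))
```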
NASA Technical Reports Server (NTRS)
Bedewi, Nabih E.; Yang, Jackson C. S.
1987-01-01
Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The mathematics of the technique is presented in addition to the results of computer simulations conducted to demonstrate the prediction of the response of the system and the random forcing function initially introduced to excite the system.
49 CFR 393.67 - Liquid fuel tanks.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., by brazing, by silver soldering, or by techniques which provide heat resistance and mechanical... soldering with a lead-based or other soft solder. (2) Fittings. The fuel tank body must have flanges or...
49 CFR 393.67 - Liquid fuel tanks.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., by brazing, by silver soldering, or by techniques which provide heat resistance and mechanical... soldering with a lead-based or other soft solder. (2) Fittings. The fuel tank body must have flanges or...
49 CFR 393.67 - Liquid fuel tanks.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., by brazing, by silver soldering, or by techniques which provide heat resistance and mechanical... soldering with a lead-based or other soft solder. (2) Fittings. The fuel tank body must have flanges or...
49 CFR 393.67 - Liquid fuel tanks.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., by brazing, by silver soldering, or by techniques which provide heat resistance and mechanical... soldering with a lead-based or other soft solder. (2) Fittings. The fuel tank body must have flanges or...
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within the martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte-Carlo simulations.
Application of separable parameter space techniques to multi-tracer PET compartment modeling
Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J
2016-01-01
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888
Anan, Mohammad Tarek M.; Al-Saadi, Mohannad H.
2015-01-01
Objective: The aim of this study was to compare the fit accuracies of metal partial removable dental prosthesis (PRDP) frameworks fabricated by the traditional technique (TT) or the light-curing modeling material technique (LCMT). Materials and methods: A metal model of a Kennedy class III modification 1 mandibular dental arch with two edentulous spaces of different spans, short and long, was used for the study. Thirty identical working casts were used to produce 15 PRDP frameworks each by TT and by LCMT. Every framework was transferred to a metal master cast to measure the gap between the metal base of the framework and the crest of the alveolar ridge of the cast. Gaps were measured at three points on each side by a USB digital intraoral camera at ×16.5 magnification. Images were transferred to a graphics editing program. A single examiner performed all measurements. The two-tailed t-test was performed at the 5% significance level. Results: The mean gap value was significantly smaller in the LCMT group compared to the TT group. The mean value of the short edentulous span was significantly smaller than that of the long edentulous span in the LCMT group, whereas the opposite result was obtained in the TT group. Conclusion: Within the limitations of this study, it can be concluded that the fit of the LCMT-fabricated frameworks was better than the fit of the TT-fabricated frameworks. The framework fit can differ according to the span of the edentate ridge and the fabrication technique for the metal framework. PMID:26236129
Örtorp, Anders; Jönsson, David; Mouhsen, Alaa; Vult von Steyern, Per
2011-04-01
This study sought to evaluate and compare the marginal and internal fit in vitro of three-unit FDPs in Co-Cr made using four fabrication techniques, and to conclude in which area the largest misfit is present. An epoxy resin master model was produced. The impression was first made with silicone, and master and working models were then produced. A total of 32 three-unit Co-Cr FDPs were fabricated with four different production techniques: conventional lost-wax method (LW), milled wax with lost-wax method (MW), milled Co-Cr (MC), and direct laser metal sintering (DLMS). Each of the four groups consisted of eight FDPs (test groups). The FDPs were cemented onto their casts and sectioned in a standardized manner. The cement film thickness of the marginal and internal gaps was measured with a stereomicroscope; digital photos were taken at 12× magnification and then analyzed using measurement software. Statistical analyses were performed with one-way ANOVA and Tukey's test. Based on the means (SDs) in μm for all measurement points, the best fit was in the DLMS group at 84 (60), followed by MW at 117 (89), LW at 133 (89), and MC at 166 (135). Significant differences were present between MC and DLMS (p<0.05). The regression analyses presented differences within the parameters: production technique, tooth size, position and measurement point (p < 0.05). Best fit was found in the DLMS group followed by MW, LW and MC. In all four groups, best fit in both abutments was along the axial walls and in the deepest part of the chamfer preparation. The greatest misfit was present occlusally in all specimens. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)
2002-01-01
We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
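A minimal stand-in for the RKHS smoothing idea described above, using plain kernel ridge regression rather than the paper's locally-based kernel PLS (no PLS components and no local weighting are reproduced). The RBF kernel, length scale, and regularization value are assumptions.

```python
# Hedged sketch: kernel ridge smoothing of noisy curve data in an RKHS.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 100)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

def rbf_kernel(a, b, length_scale=0.1):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

lam = 1e-2                                        # smoothing (regularization) parameter
K = rbf_kernel(x, x)
alpha = np.linalg.solve(K + lam * np.eye(x.size), y)
y_smooth = K @ alpha                              # fitted smooth curve at the data points
print("residual RMS:", np.sqrt(np.mean((y_smooth - y) ** 2)))
```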
The Helmet Fit Index--An intelligent tool for fit assessment and design customisation.
Ellena, Thierry; Subic, Aleksandar; Mustafa, Helmy; Pang, Toh Yen
2016-07-01
Helmet safety benefits are reduced if the headgear is poorly fitted on the wearer's head. At present, there are no industry standards available to assess objectively how a specific protective helmet fits a particular person. A proper fit is typically defined as a small and uniform distance between the helmet liner and the wearer's head shape, with a broad coverage of the head area. This paper presents a novel method to investigate and compare fitting accuracy of helmets based on 3D anthropometry, reverse engineering techniques and computational analysis. The Helmet Fit Index (HFI) that provides a fit score on a scale from 0 (excessively poor fit) to 100 (perfect fit) was compared with subjective fit assessments of surveyed cyclists. Results in this study showed that quantitative (HFI) and qualitative (participants' feelings) data were related when comparing three commercially available bicycle helmets. Findings also demonstrated that females and Asian people have lower fit scores than males and Caucasians, respectively. The HFI could provide detailed understanding of helmet efficiency regarding fit and could be used during helmet design and development phases. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
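A minimal sketch of a distance-based fit measure in the spirit described above: gaps between head-surface points and the helmet liner are collected with a nearest-neighbour query and summarized. The unit-sphere geometry, the weights, and the 0-100 mapping are hypothetical placeholders, not the published HFI formula.

```python
# Hedged sketch: gap statistics between a head surface and a helmet liner surface.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)
head_pts = rng.normal(size=(2000, 3))
head_pts /= np.linalg.norm(head_pts, axis=1, keepdims=True)          # toy unit "head" surface
liner_pts = head_pts * rng.uniform(1.02, 1.10, size=(2000, 1))        # liner slightly outside

gaps = cKDTree(liner_pts).query(head_pts)[0]       # nearest liner distance per head vertex
mean_gap, gap_spread = gaps.mean(), gaps.std()

# Hypothetical score: highest when the gap is small and uniform (NOT the published HFI).
score = 100.0 * np.exp(-(mean_gap / 0.05 + gap_spread / 0.02))
print(f"mean gap {mean_gap:.3f}, spread {gap_spread:.3f}, fit score {score:.1f}/100")
```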
NASA Astrophysics Data System (ADS)
Lukman, Iing; Ibrahim, Noor A.; Daud, Isa B.; Maarof, Fauziah; Hassan, Mohd N.
2002-03-01
Survival analysis algorithms are often applied in the data mining process. Cox regression is one of the survival analysis tools that has been used in many areas, and it can be used to analyze the failure times of aircraft crashes. Another survival analysis tool is competing risks, where more than one cause of failure acts simultaneously. Lunn and McNeil analyzed competing risks in the survival model using Cox regression with censored data. The modified Lunn-McNeil technique is a simplification of the Lunn-McNeil technique. The Kalbfleisch-Prentice technique involves fitting models separately for each type of failure, treating the other failure types as censored. To compare the two techniques (the modified Lunn-McNeil and Kalbfleisch-Prentice), a simulation study was performed. Samples with various sizes and censoring percentages were generated and fitted using both techniques. The study compared the inference of the models using the root mean square error (RMSE), power tests, and Schoenfeld residual analysis. The power tests in this study were the likelihood ratio test, the Rao score test, and the Wald statistic. The Schoenfeld residual analysis was conducted to check the proportionality of the model through its covariates. The estimated parameters were computed for the cause-specific hazard situation. Results showed that the modified Lunn-McNeil technique was better than the Kalbfleisch-Prentice technique based on the RMSE measurement and Schoenfeld residual analysis. However, the Kalbfleisch-Prentice technique was better than the modified Lunn-McNeil technique based on the power test measurements.
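A minimal sketch of the Kalbfleisch-Prentice approach described above: one Cox model is fitted per failure cause, with failures from the other cause treated as censored. The lifelines package is assumed to be available, and the covariate, coefficients, and exponential failure times are synthetic.

```python
# Hedged sketch: cause-specific Cox fits, other causes treated as censored.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)                                   # a single covariate
t1 = rng.exponential(1.0 / np.exp(0.5 * x))              # latent time to cause 1
t2 = rng.exponential(1.0 / np.exp(-0.3 * x))             # latent time to cause 2
time = np.minimum(t1, t2)
cause = np.where(t1 <= t2, 1, 2)

for k in (1, 2):
    df = pd.DataFrame({"time": time, "event": (cause == k).astype(int), "x": x})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")   # other cause counts as censoring
    print(f"cause {k}: coefficient for x =", float(cph.params_["x"]))
```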
Simultaneous fits in ISIS on the example of GRO J1008-57
NASA Astrophysics Data System (ADS)
Kühnel, Matthias; Müller, Sebastian; Kreykenbohm, Ingo; Schwarm, Fritz-Walter; Grossberger, Christoph; Dauser, Thomas; Pottschmidt, Katja; Ferrigno, Carlo; Rothschild, Richard E.; Klochkov, Dmitry; Staubert, Rüdiger; Wilms, Joern
2015-04-01
Parallel computing and steadily increasing computation speed have led to a new tool for analyzing multiple datasets and datatypes: fitting several datasets simultaneously. With this technique, physically connected parameters of individual datasets can be treated as a single parameter by implementing this connection directly into the fit. We discuss the terminology, implementation, and possible issues of simultaneous fits based on the X-ray data analysis tool Interactive Spectral Interpretation System (ISIS). While all data modeling tools in X-ray astronomy allow, in principle, fitting data from multiple datasets individually, the syntax used in these tools is often not well suited to this task. Applying simultaneous fits to the transient X-ray binary GRO J1008-57, we find that the spectral shape depends only on X-ray flux. We determine time-independent parameters, such as the folding energy E_fold, with unprecedented precision.
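A minimal sketch of the generic idea of a simultaneous fit: two datasets share one physical parameter (here a common decay constant) while keeping individual normalizations, and the residuals of both are concatenated into a single least-squares problem. This illustrates the concept only and does not use ISIS or its syntax; the data are synthetic.

```python
# Hedged sketch: simultaneous fit of two datasets with one tied (shared) parameter.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 50)
shared_decay = 0.7
y1 = 5.0 * np.exp(-shared_decay * t) + 0.1 * rng.standard_normal(t.size)
y2 = 2.0 * np.exp(-shared_decay * t) + 0.1 * rng.standard_normal(t.size)

def residuals(params):
    amp1, amp2, decay = params                     # 'decay' is shared by both datasets
    r1 = amp1 * np.exp(-decay * t) - y1
    r2 = amp2 * np.exp(-decay * t) - y2
    return np.concatenate([r1, r2])                # one residual vector for both datasets

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0])
print("amplitudes and shared decay:", fit.x)
```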
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. A widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and a maximum likelihood fitting technique is introduced and studied. Such an approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
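A minimal sketch of fitting 1-on-1 damage-probability data by maximum likelihood rather than a linear fit of damage frequencies. The error-function threshold model, fluences, and counts below are assumptions, not the paper's kernel-based parametrization.

```python
# Hedged sketch: binomial maximum-likelihood fit of damage probability vs. fluence.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

fluence = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])    # J/cm^2 (illustrative)
n_sites = np.full(6, 20)                                  # irradiated sites per fluence
n_damaged = np.array([0, 1, 5, 12, 18, 20])

def neg_log_likelihood(params):
    thr, log_width = params
    p = norm.cdf((fluence - thr) / np.exp(log_width))     # smooth threshold curve
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(n_damaged * np.log(p) + (n_sites - n_damaged) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[7.0, 0.0], method="Nelder-Mead")
print("LIDT estimate (threshold):", fit.x[0], "J/cm^2")
```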
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. Developments of more stable ultrasound contrast agents (UCA) are leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation with determination coefficient larger than 0.95 and 0.99, respectively.
Apps to promote physical activity among adults: a review and content analysis.
Middelweerd, Anouk; Mollee, Julia S; van der Wal, C Natalie; Brug, Johannes; Te Velde, Saskia J
2014-07-25
In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. On average, the reviewed apps included 5 behavior change techniques (range 2-8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found. The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions.
Measurements of strain at plate boundaries using space based geodetic techniques
NASA Technical Reports Server (NTRS)
Robaudo, Stefano; Harrison, Christopher G. A.
1993-01-01
We have used the space-based geodetic techniques of Satellite Laser Ranging (SLR) and VLBI to study strain along subduction and transform plate boundaries and have interpreted the results using a simple elastic dislocation model. Six stations located behind island arcs were analyzed as representative of subduction zones, while 13 sites located on either side of the San Andreas fault were used for the transcurrent zones. The deformation length scale was then calculated for both tectonic margins by fitting the relative strain to an exponentially decreasing function of distance from the plate boundary. Results show that space-based data for the transcurrent boundary along the San Andreas fault help to better define the deformation length scale in the area while fitting the elastic half-space Earth model well. For subduction-type boundaries, the analysis indicates that there is no single scale length which uniquely describes the deformation. This is mainly due to the difference in subduction characteristics for the different areas.
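A minimal sketch of the fit described above: relative strain as an exponentially decreasing function of distance from the plate boundary, fitted with scipy's curve_fit. The site distances and strain values are made up for illustration.

```python
# Hedged sketch: exponential decay fit of relative strain vs. distance from the boundary.
import numpy as np
from scipy.optimize import curve_fit

distance_km = np.array([10.0, 30.0, 60.0, 100.0, 150.0, 250.0, 400.0])
relative_strain = np.array([0.95, 0.80, 0.60, 0.42, 0.28, 0.12, 0.04])

def strain_model(d, scale_length):
    return np.exp(-d / scale_length)

popt, pcov = curve_fit(strain_model, distance_km, relative_strain, p0=[100.0])
print(f"deformation length scale ~ {popt[0]:.1f} km "
      f"(+/- {np.sqrt(pcov[0, 0]):.1f})")
```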
NASA Astrophysics Data System (ADS)
Pinheiro da Silva, L.; Auvergne, M.; Toublanc, D.; Rowe, J.; Kuschnig, R.; Matthews, J.
2006-06-01
Context: Fitting photometry algorithms can be very effective provided that an accurate model of the instrumental point spread function (PSF) is available. When high-precision time-resolved photometry is required, however, the use of point-source star images as empirical PSF models can be unsatisfactory, due to the limits in their spatial resolution. Theoretically-derived models, on the other hand, are limited by the unavoidable assumption of simplifying hypotheses, while the use of analytical approximations is restricted to regularly-shaped PSFs. Aims: This work investigates an innovative technique for space-based fitting photometry, based on the reconstruction of an empirical but properly-resolved PSF. The aim is the exploitation of arbitrary star images, including those produced under intentional defocus. The cases of both MOST and COROT, the first space telescopes dedicated to time-resolved stellar photometry, are considered in the evaluation of the effectiveness and performance of the proposed methodology. Methods: PSF reconstruction is based on a set of star images, periodically acquired and presenting relative subpixel displacements due to motion of the acquisition system, in this case the jitter of the satellite attitude. Higher resolution is achieved through the solution of the inverse problem. The approach can be regarded as a special application of super-resolution techniques, though a specialized procedure is proposed to better meet the specificities of the PSF determination problem. The application of such a model to fitting photometry is illustrated by numerical simulations for COROT and on a complete set of observations from MOST. Results: We verify that, in both scenarios, significantly better resolved PSFs can be estimated, leading to corresponding improvements in photometric results. For COROT, indeed, subpixel reconstruction enabled the successful use of fitting algorithms despite its rather complex PSF profile, which could hardly be modeled otherwise. For MOST, whose direct-imaging PSF is closer to the ordinary, comparisons to other models and photometry techniques were carried out and confirmed the potential of PSF reconstruction in real observational conditions.
Dynamics in the Fitness-Income plane: Brazilian states vs World countries
Operti, Felipe G.; Pugliese, Emanuele; Andrade, José S.; Pietronero, Luciano
2018-01-01
In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export basket. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview about Brazil as a whole and the comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDPp), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the ranking obtained through two other techniques, namely Endogenous Fitness and Economic Complexity Index. PMID:29874265
Dynamics in the Fitness-Income plane: Brazilian states vs World countries.
Operti, Felipe G; Pugliese, Emanuele; Andrade, José S; Pietronero, Luciano; Gabrielli, Andrea
2018-01-01
In this paper we introduce a novel algorithm, called Exogenous Fitness, to calculate the Fitness of subnational entities and we apply it to the states of Brazil. In the last decade, several indices were introduced to measure the competitiveness of countries by looking at the complexity of their export basket. Tacchella et al (2012) developed a non-monetary metric called Fitness. In this paper, after an overview about Brazil as a whole and the comparison with the other BRIC countries, we introduce a new methodology based on the Fitness algorithm, called Exogenous Fitness. Combining the results with the Gross Domestic Product per capita (GDPp), we look at the dynamics of the Brazilian states in the Fitness-Income plane. Two regimes are distinguishable: one with high predictability and the other with low predictability, showing a deep analogy with the heterogeneous dynamics of the World countries. Furthermore, we compare the ranking of the Brazilian states according to the Exogenous Fitness with the ranking obtained through two other techniques, namely Endogenous Fitness and Economic Complexity Index.
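A minimal sketch of the underlying nonlinear Fitness-Complexity iteration of Tacchella et al. (2012) on a binary entity-product matrix (1 if the entity competitively exports the product). The Exogenous Fitness variant for subnational entities adds steps not reproduced here, and the random matrix is a stand-in for real export data.

```python
# Hedged sketch: Fitness-Complexity iteration on a binary entity x product matrix.
import numpy as np

rng = np.random.default_rng(8)
M = (rng.random((20, 50)) < 0.3).astype(float)     # 20 entities x 50 products
M[0, M.sum(axis=0) == 0] = 1.0                     # guard: every product has an exporter

fitness = np.ones(M.shape[0])
complexity = np.ones(M.shape[1])
for _ in range(200):
    f_tilde = M @ complexity                        # F_c ~ sum_p M_cp Q_p
    q_tilde = 1.0 / (M.T @ (1.0 / fitness))         # Q_p ~ 1 / sum_c M_cp / F_c
    fitness = f_tilde / f_tilde.mean()              # normalize at every iteration
    complexity = q_tilde / q_tilde.mean()

ranking = np.argsort(-fitness)
print("entities ranked by Fitness:", ranking[:5], "...")
```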
NASA Astrophysics Data System (ADS)
Lasche, George; Coldwell, Robert; Metzger, Robert
2017-09-01
A new application (known as "VRF", or "Visual RobFit") for analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak-search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, it adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks that are masked by larger, overlapping peaks that would not otherwise be possible. The application and method are briefly described and two examples are presented.
High-throughput electrical characterization for robust overlay lithography control
NASA Astrophysics Data System (ADS)
Devender, Devender; Shen, Xumin; Duggan, Mark; Singh, Sunil; Rullan, Jonathan; Choo, Jae; Mehta, Sohan; Tang, Teck Jung; Reidy, Sean; Holt, Jonathan; Kim, Hyung Woo; Fox, Robert; Sohn, D. K.
2017-03-01
Realizing sensitive, high throughput and robust overlay measurement is a challenge in current 14nm and advanced upcoming nodes with the transition to 300mm and upcoming 450mm semiconductor manufacturing, where slight deviation in overlay has significant impact on reliability and yield [1]. The exponentially increasing number of critical masks in multi-patterning litho-etch, litho-etch (LELE) and subsequent LELELE semiconductor processes requires even tighter overlay specification [2]. Here, we discuss limitations of current image- and diffraction-based overlay measurement techniques to meet these stringent processing requirements due to sensitivity, throughput and low contrast [3]. We demonstrate a new electrical measurement based technique where resistance is measured for a macro with intentional misalignment between two layers. Overlay is quantified by a parabolic fitting model to resistance where minima and inflection points are extracted to characterize overlay control and process window, respectively. Analyses using transmission electron microscopy show good correlation between actual overlay performance and overlay obtained from fitting. Additionally, excellent correlation of overlay from electrical measurements to existing image- and diffraction-based techniques is found. We also discuss challenges of integrating the electrical measurement based approach in semiconductor manufacturing from a Back End of Line (BEOL) perspective. Our findings open up a new pathway for accessing simultaneous overlay as well as process window and margins from a robust, high throughput and electrical measurement approach.
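A minimal sketch of the parabolic-fit step described above: resistance is measured on a macro with programmed misalignment between two layers, a parabola is fitted, and its vertex gives the overlay estimate. The offsets and resistance values are illustrative numbers.

```python
# Hedged sketch: parabola fit of resistance vs. programmed misalignment; vertex = overlay.
import numpy as np

offset_nm = np.array([-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0])        # programmed offsets
resistance = np.array([152.0, 138.0, 130.0, 127.5, 129.0, 136.0, 150.0])  # ohms (illustrative)

a, b, c = np.polyfit(offset_nm, resistance, 2)     # R(x) ~ a x^2 + b x + c
overlay_estimate = -b / (2.0 * a)                  # vertex (resistance minimum) of the parabola
print(f"overlay estimate: {overlay_estimate:.2f} nm of programmed offset")
```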
NASA Technical Reports Server (NTRS)
Chang, T. S.
1974-01-01
A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
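A minimal sketch of replacing table interpolation with an orthogonal-polynomial least-squares surface fit, here a 2D Chebyshev basis fitted by linear least squares. The tabulated "flow property" values and the degree choice are synthetic assumptions.

```python
# Hedged sketch: orthogonal-polynomial (Chebyshev) least-squares surface fit.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(9)
x = rng.uniform(-1.0, 1.0, 400)                    # e.g. normalized pressure coordinate
y = rng.uniform(-1.0, 1.0, 400)                    # e.g. normalized entropy coordinate
z = np.exp(0.5 * x) * np.cos(2.0 * y) + 0.01 * rng.standard_normal(400)

deg = (4, 4)                                       # polynomial degree in each direction
V = C.chebvander2d(x, y, deg)                      # design matrix of Chebyshev products
coef, *_ = np.linalg.lstsq(V, z, rcond=None)

# Evaluate the fitted surface at a query point instead of double interpolation.
z_query = C.chebvander2d(np.array([0.3]), np.array([-0.2]), deg) @ coef
print("surface value at (0.3, -0.2):", z_query[0])
```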
Santos, José; Monteagudo, Ángel
2017-03-27
The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but how it evolved towards its current form has not been clearly determined. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the nature of the fitness landscape considered in the error minimization theory does not explain why the canonical code ended its evolution in a location that is not a localized deep minimum of the huge fitness landscape.
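A minimal sketch of the fitness sharing technique named above: each individual's raw fitness is divided by a niche count so that crowded regions of the search space are penalized and the population spreads across the landscape. The bit-string problem, Hamming-distance niche, and parameters are illustrative, not the genetic-code landscape of the paper.

```python
# Hedged sketch: fitness sharing on a toy bit-string population.
import numpy as np

rng = np.random.default_rng(10)
pop = rng.integers(0, 2, size=(30, 40))            # 30 individuals, 40-bit genomes
raw_fitness = pop.sum(axis=1).astype(float)        # toy objective: count of ones

def shared_fitness(pop, raw, sigma_share=10.0, alpha=1.0):
    # Hamming distances between all pairs of individuals
    d = (pop[:, None, :] != pop[None, :, :]).sum(axis=2)
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)                    # includes self (distance 0)
    return raw / niche_count                        # crowded individuals are penalized

print("raw best:", raw_fitness.max(),
      "shared best:", shared_fitness(pop, raw_fitness).max())
```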
The marginal fit of E.max Press and E.max CAD lithium disilicate restorations: A critical review.
Mounajjed, Radek; M Layton, Danielle; Azar, Basel
2016-12-01
This critical review aimed to assess the vertical marginal gap that was present when E.max lithium disilicate-based restoration (Press and CAD) are fabricated in-vitro. Published articles reporting vertical marginal gap measurements of in-vitro restorations that had been fabricated from E.Max lithium disilicate were sought with an electronic search of MEDLINE (PubMed) and hand search of selected dental journals. The outcomes were reviewed qualitatively. The majority of studies that compared the marginal fit of E.max press and E.max CAD restorations, found that the E.max lithium disilicate restorations fabricated with the press technique had significantly smaller marginal gaps than those fabricated with CAD technique. This research indicates that E.max lithium disilicate restorations fabricated with the press technique have measurably smaller marginal gaps when compared with those fabricated with CAD techniques within in-vitro environments. The marginal gaps achieved by the restorations across all groups were within a clinically acceptable range.
The aggregated unfitted finite element method for elliptic problems
NASA Astrophysics Data System (ADS)
Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.
2018-07-01
Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.
Castillo-Oyagüe, Raquel; Lynch, Christopher D; Turrión, Andrés S; López-Lozano, José F; Torres-Lagares, Daniel; Suárez-García, María-Jesús
2013-01-01
This study evaluated the marginal misfit and microleakage of cement-retained implant-supported crown copings. Single crown structures were constructed with: (1) laser-sintered Co-Cr (LS); (2) vacuum-cast Co-Cr (CC) and (3) vacuum-cast Ni-Cr-Ti (CN). Samples of each alloy group were randomly luted in standard fashion onto machined titanium abutments using: (1) GC Fuji PLUS (FP); (2) Clearfil Esthetic Cement (CEC); (3) RelyX Unicem 2 Automix (RXU) and (4) DentoTemp (DT) (n=15 each). After 60 days of water ageing, vertical discrepancy was SEM-measured and cement microleakage was scored using a digital microscope. Misfit data were subjected to two-way ANOVA and Student-Newman-Keuls multiple comparisons tests. Kruskal-Wallis and Dunn's tests were run for microleakage analysis (α=0.05). Regardless of the cement type, LS samples exhibited the best fit, whilst CC and CN performed equally well. Despite the framework alloy and manufacturing technique, FP and DT provide comparably better fit and greater microleakage scores than did CEC and RXU, which showed no differences. DMLS of Co-Cr may be a reliable alternative to the casting of base metal alloys to obtain well-fitted implant-supported crowns, although all the groups tested were within the clinically acceptable range of vertical discrepancy. No strong correlations were found between misfit and microleakage. Notwithstanding the framework alloy, definitive resin-modified glass-ionomer (FP) and temporary acrylic/urethane-based (DT) cements demonstrated comparably better marginal fit and greater microleakage scores than did 10-methacryloxydecyl-dihydrogen phosphate-based (CEC) and self-adhesive (RXU) dual-cure resin agents. Copyright © 2012 Elsevier Ltd. All rights reserved.
Research and development of LANDSAT-based crop inventory techniques
NASA Technical Reports Server (NTRS)
Horvath, R.; Cicone, R. C.; Malila, W. A. (Principal Investigator)
1982-01-01
A wide spectrum of technology pertaining to the inventory of crops using LANDSAT without in situ training data is addressed. Methods considered include Bayesian-based through-the-season methods, estimation technology based on analytical profile fitting methods, and expert-based, computer-aided methods. Although the research was conducted using U.S. data, the adaptation of the technology to the Southern Hemisphere, especially Argentina, was considered.
Analysis of thickness dependent on crystallization kinetics in thin isotactic-polysterene films
NASA Astrophysics Data System (ADS)
Khairuddin
2016-11-01
Crystallization kinetics of thin isotactic polystyrene (it-PS) films has been studied. Thin it-PS films with thicknesses of 338, 533, 712, 1096, 1473, and 2185 Å were prepared by spin casting. The it-PS crystals were grown on a Linkam hot stage in the temperature range 130-240°C at intervals of 10°C. Crystal growth was measured by optical microscopy in the lateral direction. A substantial thickness dependence of the crystallization rate was found. Analysis using a fitting technique based on the Lauritzen-Hoffman theory of crystal growth showed that the fitting technique could not resolve the mechanism controlling the thickness dependence of the crystallization rate. The possible reasons are that the crystallization rate varies with the type of crystal (smooth, rough, overgrowth terrace) and that the crystallization rate changes with crystallization time.
Raster and vector processing for scanned linework
Greenlee, David D.
1987-01-01
An investigation of raster editing techniques, including thinning, filling, and node detecting, was performed by using specialized software. The techniques were based on encoding the state of the 3-by-3 neighborhood surrounding each pixel into a single byte. A prototypical method for converting the edited raster linework into vectors was also developed. Once vector representations of the lines were formed, they were formatted as a Digital Line Graph, and further refined by deletion of nonessential vertices and by smoothing with a curve-fitting technique.
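A minimal sketch of the neighborhood-encoding step described above: the 8 neighbours of each pixel are packed into a single byte, which can then index lookup tables for thinning, filling, or node detection. The clockwise-from-upper-left bit ordering is an assumption.

```python
# Hedged sketch: pack the 3x3 neighbourhood state of each pixel into one byte.
import numpy as np

def neighbourhood_codes(img):
    """img: 2D array of 0/1 pixels; returns one code byte per pixel."""
    img = (img > 0).astype(np.uint8)
    p = np.pad(img, 1)
    # 8 neighbours, clockwise from the upper-left corner (assumed ordering).
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(img, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = p[1 + dy: 1 + dy + img.shape[0], 1 + dx: 1 + dx + img.shape[1]]
        code |= (neighbour << bit).astype(np.uint8)
    return code

line = np.zeros((5, 5), dtype=np.uint8)
line[2, :] = 1                                      # a horizontal one-pixel-wide line
codes = neighbourhood_codes(line)
print(codes[2, 2])                                  # interior pixel: east and west neighbour bits set
```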
Dahl, Bjørn Einar; Rønold, Hans Jacob; Dahl, Jon E
2017-03-01
Whether single crowns produced by computer-aided design and computer-aided manufacturing (CAD-CAM) have an internal fit comparable to crowns made by lost-wax metal casting technique is unknown. The purpose of this in vitro study was to compare the internal fit of single crowns produced with the lost-wax and metal casting technique with that of single crowns produced with the CAD-CAM technique. The internal fit of 5 groups of single crowns produced with the CAD-CAM technique was compared with that of single crowns produced in cobalt-chromium with the conventional lost-wax and metal casting technique. Comparison was performed using the triple-scan protocol; scans of the master model, the crown on the master model, and the intaglio of the crown were superimposed and analyzed with computer software. The 5 groups were milled presintered zirconia, milled hot isostatic pressed zirconia, milled lithium disilicate, milled cobalt-chromium, and laser-sintered cobalt-chromium. The cement space in both the mesiodistal and buccopalatal directions was statistically smaller (P<.05) for crowns made by the conventional lost-wax and metal casting technique compared with that of crowns produced by the CAD-CAM technique. Single crowns made using the conventional lost-wax and metal casting technique have better internal fit than crowns produced using the CAD-CAM technique. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Objective fitting of hemoglobin dynamics in traumatic bruises based on temperature depth profiling
NASA Astrophysics Data System (ADS)
Vidovič, Luka; Milanič, Matija; Majaron, Boris
2014-02-01
Pulsed photothermal radiometry (PPTR) allows noninvasive measurement of laser-induced temperature depth profiles. The obtained profiles provide information on the depth distribution of absorbing chromophores, such as melanin and hemoglobin. We apply this technique to objectively characterize the mass diffusion and decomposition rate of extravasated hemoglobin during the bruise healing process. In the present study, we introduce objective fitting of PPTR data obtained over the course of the bruise healing process. By applying Monte Carlo simulation of laser energy deposition and simulation of the corresponding PPTR signal, quantitative analysis of the underlying bruise healing processes is possible. Objective fitting enables an objective comparison between the simulated and experimental PPTR signals. In this manner, we avoid reconstruction of laser-induced depth profiles and thus the inherent loss of information in that process. This approach enables us to determine the value of hemoglobin mass diffusivity, which has been controversial in the existing literature. Such information will be a valuable addition to existing bruise age determination techniques.
Kim, Eun-Ha; Lee, Du-Hyeong; Kwon, Sung-Min; Kwon, Tae-Yub
2017-03-01
Although new digital manufacturing techniques are attracting interest in dentistry, few studies have comprehensively investigated the marginal fit of fixed dental prostheses fabricated with such techniques. The purpose of this in vitro microcomputed tomography (μCT) study was to evaluate the marginal fit of cobalt-chromium (Co-Cr) alloy copings fabricated by casting and 3 different computer-aided design and computer-aided manufacturing (CAD-CAM)-based processing techniques and alloy systems. Single Co-Cr metal crowns were fabricated using 4 different manufacturing techniques: casting (control), milling, selective laser melting, and milling/sintering. Two different commercial alloy systems were used for each fabrication technique (a total of 8 groups; n=10 for each group). The marginal discrepancy and absolute marginal discrepancy of the crowns were determined with μCT. For each specimen, the values were determined from 4 different regions (sagittal buccal, sagittal lingual, coronal mesial, and coronal distal) by using imaging software and recorded as the average of the 4 readings. For each parameter, the results were statistically compared with 2-way analysis of variance and appropriate post hoc analysis (using Tukey or Student t test) (α=.05). The milling and selective laser melting groups showed significantly larger marginal discrepancies than the control groups (70.4 ±12.0 and 65.3 ±10.1 μm, respectively; P<.001), whereas the milling/sintering groups exhibited significantly smaller values than the controls (P=.004). The milling groups showed significantly larger absolute marginal discrepancy than the control groups (137.4 ±29.0 and 139.2 ±18.9 μm, respectively; P<.05). In the selective laser melting and milling/sintering groups, the absolute marginal discrepancy values were material-specific (P<.05). Nonetheless, the milling/sintering groups yielded statistically comparable (P=.935) or smaller (P<.001) absolute marginal discrepancies to the control groups. The findings of this in vitro μCT study showed that the marginal fit values of the Co-Cr alloy greatly depended on the fabrication methods and, occasionally, the alloy systems. Fixed dental prostheses produced by using the milling/sintering technique can be considered clinically acceptable in terms of marginal fit. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Apps to promote physical activity among adults: a review and content analysis
2014-01-01
Background In May 2013, the iTunes and Google Play stores contained 23,490 and 17,756 smartphone applications (apps) categorized as Health and Fitness, respectively. The quality of these apps, in terms of applying established health behavior change techniques, remains unclear. Methods The study sample was identified through systematic searches in iTunes and Google Play. Search terms were based on Boolean logic and included AND combinations for physical activity, healthy lifestyle, exercise, fitness, coach, assistant, motivation, and support. Sixty-four apps were downloaded, reviewed, and rated based on the taxonomy of behavior change techniques used in the interventions. Mean and ranges were calculated for the number of observed behavior change techniques. Using nonparametric tests, we compared the number of techniques observed in free and paid apps and in iTunes and Google Play. Results On average, the reviewed apps included 5 behavior change techniques (range 2–8). Techniques such as self-monitoring, providing feedback on performance, and goal-setting were used most frequently, whereas some techniques such as motivational interviewing, stress management, relapse prevention, self-talk, role models, and prompted barrier identification were not. No differences in the number of behavior change techniques between free and paid apps, or between the app stores were found. Conclusions The present study demonstrated that apps promoting physical activity applied an average of 5 out of 23 possible behavior change techniques. This number was not different for paid and free apps or between app stores. The most frequently used behavior change techniques in apps were similar to those most frequently used in other types of physical activity promotion interventions. PMID:25059981
Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin
2011-03-01
Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improve the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practice, artifacts superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. In order to identify the optimal background detrending technique for NPS estimation, four methods for artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was separately applied within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared according to the statistical variance of the NPS results and the suppression of the low-frequency systematic rise. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in suppressing the low-frequency systematic rise and reducing the variance of the NPS estimate for the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image increased the NPS variance above the low-frequency components because of the side-lobe effects of the frequency response of the boxcar filtering function. Subtracting two uniform exposure images gave the worst result for the smoothness of the NPS curve, although it was effective in suppressing the low-frequency systematic rise. Subtraction of a 2-D first-order fit to the image was also effective for background detrending, but it was inferior to subtraction of a 2-D second-order polynomial fit for the authors' digital x-ray system. As a result of this study, the authors verified that it is necessary and feasible to obtain a better NPS estimate by appropriate background trend removal. Subtraction of a 2-D second-order polynomial fit to the image was the most appropriate technique for background detrending, without consideration of processing time.
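To make the preferred detrending step concrete, the following sketch (an illustration assuming a uniform-exposure region of interest supplied as a NumPy array, not the authors' code) fits a 2-D second-order polynomial surface by least squares, subtracts it, and forms a simple periodogram-style NPS estimate from the residual.

```python
# Illustrative sketch: 2-D second-order polynomial background detrending before NPS estimation.
import numpy as np

def detrend_second_order(roi):
    """Fit and subtract a 2-D quadratic surface from a region of interest."""
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix for f(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*x*y
    A = np.column_stack([np.ones(roi.size), x.ravel(), y.ravel(),
                         x.ravel()**2, y.ravel()**2, (x * y).ravel()])
    coeffs, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    trend = (A @ coeffs).reshape(ny, nx)
    return roi - trend

def nps_2d(roi, px=0.1, py=0.1):
    """Periodogram-style NPS estimate of a detrended ROI (pixel pitch in mm)."""
    residual = detrend_second_order(roi)
    ny, nx = residual.shape
    return np.abs(np.fft.fftshift(np.fft.fft2(residual)))**2 * px * py / (nx * ny)
```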
Directly manipulated free-form deformation image registration.
Tustison, Nicholas J; Avants, Brian B; Gee, James C
2009-03-01
Previous contributions to both the research and open source software communities detailed a generalization of a fast scalar field fitting technique for cubic B-splines based on the work originally proposed by Lee. One advantage of our proposed generalized B-spline fitting approach is its immediate application to a class of nonrigid registration techniques frequently employed in medical image analysis. Specifically, these registration techniques fall under the rubric of free-form deformation (FFD) approaches, in which the object to be registered is embedded within a B-spline object. The deformation of the B-spline object describes the transformation of the image registration solution. Representative of this class of techniques, and often cited within the relevant community, is the formulation of Rueckert, who employed cubic splines with normalized mutual information to study breast deformation. Similar techniques from various groups provided incremental novelty in the form of disparate explicit regularization terms, as well as the employment of various image metrics and tailored optimization methods. For several algorithms, the underlying gradient-based optimization retained the essential characteristics of Rueckert's original contribution. The contribution which we provide in this paper is two-fold: 1) the observation that the generic FFD framework is intrinsically susceptible to problematic energy topographies and 2) that the standard gradient used in FFD image registration can be modified to a well-understood preconditioned form which substantially improves performance. This is demonstrated with theoretical discussion and comparative evaluation experiments.
MOPET: a context-aware and user-adaptive wearable system for fitness training.
Buttussi, Fabio; Chittaro, Luca
2008-02-01
Cardiovascular disease, obesity, and lack of physical fitness are increasingly common and negatively affect people's health, requiring medical assistance and decreasing people's wellness and productivity. In the last years, researchers as well as companies have been increasingly investigating wearable devices for fitness applications with the aim of improving user's health, in terms of cardiovascular benefits, loss of weight or muscle strength. Dedicated GPS devices, accelerometers, step counters and heart rate monitors are already commercially available, but they are usually very limited in terms of user interaction and artificial intelligence capabilities. This significantly limits the training and motivation support provided by current systems, making them poorly suited for untrained people who are more interested in fitness for health rather than competitive purposes. To better train and motivate users, we propose the mobile personal trainer (MOPET) system. MOPET is a wearable system that supervises a physical fitness activity based on alternating jogging and fitness exercises in outdoor environments. By exploiting real-time data coming from sensors, knowledge elicited from a sport physiologist and a professional trainer, and a user model that is built and periodically updated through a guided autotest, MOPET can provide motivation as well as safety and health advice, adapted to the user and the context. To better interact with the user, MOPET also displays a 3D embodied agent that speaks, suggests stretching or strengthening exercises according to user's current condition, and demonstrates how to correctly perform exercises with interactive 3D animations. By describing MOPET, we show how context-aware and user-adaptive techniques can be applied to the fitness domain. In particular, we describe how such techniques can be exploited to train, motivate, and supervise users in a wearable personal training system for outdoor fitness activity.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
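The first adaptation can be illustrated with a minimal sketch; the variable conventions (a binary spike-train array and a vector of per-bin spike probabilities from the fitted model) are assumptions, and a two-sample KS test against simulated rescaled ISIs stands in for the exponential reference distribution.

```python
# Sketch of the simulation-based reference distribution for discrete-time rescaling.
import numpy as np
from scipy.stats import ks_2samp

def rescaled_isis(spikes, p):
    """Rescale ISIs of a binary spike train given per-bin spike probabilities p."""
    q = -np.log(1.0 - np.clip(p, 1e-12, 1 - 1e-12))   # integrated hazard per bin
    spike_bins = np.flatnonzero(spikes)
    cum = np.cumsum(q)
    return np.diff(cum[spike_bins])                    # rescaled interspike intervals

def discrete_time_ks(spikes, p, n_sim=200, rng=np.random.default_rng(0)):
    """Compare rescaled ISIs of the data with those of spike trains simulated from the model."""
    data_tau = rescaled_isis(spikes, p)
    sim_tau = np.concatenate([rescaled_isis(rng.random(p.size) < p, p)
                              for _ in range(n_sim)])
    return ks_2samp(data_tau, sim_tau)
```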
Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method
Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.
2012-01-01
Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
An expert fitness diagnosis system based on elastic cloud computing.
Tseng, Kevin C; Wu, Chia-Chuan
2014-01-01
This paper presents an expert diagnosis system based on cloud computing. It classifies a user's fitness level based on supervised machine learning techniques. This system is able to learn and make customized diagnoses according to the user's physiological data, such as age, gender, and body mass index (BMI). In addition, an elastic algorithm based on the Poisson distribution is presented to allocate computation resources dynamically. It predicts the required resources in the future according to the exponential moving average of past observations. The experimental results show that Naïve Bayes is the best classifier with the highest accuracy (90.8%) and that the elastic algorithm is able to closely capture the trend of requests generated from the Internet and thus assign corresponding computation resources to ensure the quality of service.
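A hedged sketch of the two ingredients described above follows; the feature set (age, gender, BMI), the labels, and the smoothing factor are illustrative assumptions rather than the system's actual configuration.

```python
# Sketch: Naive Bayes fitness classification plus an exponential moving average (EMA)
# of past request counts used to provision compute resources.
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[25, 0, 21.5], [62, 1, 29.3], [34, 0, 24.1], [48, 1, 31.0]])  # age, gender, BMI
y = np.array(["fit", "unfit", "fit", "unfit"])                               # fitness labels
model = GaussianNB().fit(X, y)
print(model.predict([[40, 0, 27.0]]))

def ema_forecast(requests, alpha=0.3):
    """Predict the next interval's request load from past observations."""
    level = requests[0]
    for r in requests[1:]:
        level = alpha * r + (1 - alpha) * level
    return level
```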
A fast, model-independent method for cerebral cortical thickness estimation using MRI.
Scott, M L J; Bromiley, P A; Thacker, N A; Hutchinson, C E; Jackson, A
2009-04-01
Several algorithms for measuring the cortical thickness in the human brain from MR image volumes have been described in the literature, the majority of which rely on fitting deformable models to the inner and outer cortical surfaces. However, the constraints applied during the model fitting process in order to enforce spherical topology and to fit the outer cortical surface in narrow sulci, where the cerebrospinal fluid (CSF) channel may be obscured by partial voluming, may introduce bias in some circumstances, and greatly increase the processor time required. In this paper we describe an alternative, voxel based technique that measures the cortical thickness using inversion recovery anatomical MR images. Grey matter, white matter and CSF are identified through segmentation, and edge detection is used to identify the boundaries between these tissues. The cortical thickness is then measured along the local 3D surface normal at every voxel on the inner cortical surface. The method was applied to 119 normal volunteers, and validated through extensive comparisons with published measurements of both cortical thickness and rate of thickness change with age. We conclude that the proposed technique is generally faster than deformable model-based alternatives, and free from the possibility of model bias, but suffers no reduction in accuracy. In particular, it will be applicable in data sets showing severe cortical atrophy, where thinning of the gyri leads to points of high curvature, and so the fitting of deformable models is problematic.
Calibrating White Dwarf Asteroseismic Fitting Techniques
NASA Astrophysics Data System (ADS)
Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.
2017-03-01
The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the period spectrum with those from theoretical model grids to learn the inner structure of that star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from the uncertainties in the constitutive physics, and of the fitting techniques. In this work, we discuss results of numerical experiments in which we used different independently calculated model grids (white dwarf cooling models from WDEC and fully evolutionary LPCODE-PUL models) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure, so we can assess how well our models and fitting techniques are able to recover the interior structure as well as the stellar parameters.
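One simple way to score a model against an observed period spectrum is the root-mean-square of closest-period differences; the sketch below uses this metric as an illustrative assumption, not necessarily the statistic used in the paper.

```python
# Sketch: rank models in a grid by how closely their periods match the observed ones.
import numpy as np

def period_fit_quality(observed_periods, model_periods):
    observed = np.asarray(observed_periods, dtype=float)
    model = np.asarray(model_periods, dtype=float)
    closest = np.array([model[np.argmin(np.abs(model - p))] for p in observed])
    return np.sqrt(np.mean((observed - closest) ** 2))

# The model in a grid with the smallest value would be reported as the best fit (synthetic example).
grid = {"model_A": [120.3, 250.1, 390.7], "model_B": [118.0, 255.4, 402.2]}
best = min(grid, key=lambda k: period_fit_quality([119.1, 252.0, 395.5], grid[k]))
```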
NASA Astrophysics Data System (ADS)
Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal
2014-06-01
This study presents Artificial Intelligence (AI)-based modeling of total bed material load, with the aim of improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for developing and validating the applied techniques. In order to assess the applied techniques in relation to traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also found that the k-fold test is a practical but high-cost technique for complete scanning of the applied data and avoiding over-fitting.
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
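The bootstrap step can be sketched as follows; the specific time-dependent model form, lambda(t) = lam0*exp(-alpha*t), and the resampling scheme are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch: bootstrap resampling with a nonlinear fit of a time-dependent inactivation model,
# with percentile confidence intervals of the fitted coefficients.
import numpy as np
from scipy.optimize import curve_fit

def log_survival(t, lam0, alpha):
    # ln(C/C0) for a first-order model whose rate coefficient decays in time
    return -(lam0 / alpha) * (1.0 - np.exp(-alpha * t))

def bootstrap_fit(t, ln_c, n_boot=1000, rng=np.random.default_rng(1)):
    """t and ln_c are NumPy arrays of sampling times and normalized log concentrations."""
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(t), len(t))          # resample with replacement
        try:
            p, _ = curve_fit(log_survival, t[idx], ln_c[idx], p0=[0.1, 0.05], maxfev=5000)
            estimates.append(p)
        except RuntimeError:
            continue                                   # skip non-converged resamples
    estimates = np.array(estimates)
    return estimates.mean(axis=0), np.percentile(estimates, [2.5, 97.5], axis=0)
```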
Austen, Emily J.; Weis, Arthur E.
2016-01-01
Our understanding of selection through male fitness is limited by the resource demands and indirect nature of the best available genetic techniques. Applying complementary, independent approaches to this problem can help clarify evolution through male function. We applied three methods to estimate selection on flowering time through male fitness in experimental populations of the annual plant Brassica rapa: (i) an analysis of mating opportunity based on flower production schedules, (ii) genetic paternity analysis, and (iii) a novel approach based on principles of experimental evolution. Selection differentials estimated by the first method disagreed with those estimated by the other two, indicating that mating opportunity was not the principal driver of selection on flowering time. The genetic and experimental evolution methods exhibited striking agreement overall, but a slight discrepancy between the two suggested that negative environmental covariance between age at flowering and male fitness may have contributed to phenotypic selection. Together, the three methods enriched our understanding of selection on flowering time, from mating opportunity to phenotypic selection to evolutionary response. The novel experimental evolution method may provide a means of examining selection through male fitness when genetic paternity analysis is not possible. PMID:26911957
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torello, David; Kim, Jin-Yeon; Qu, Jianmin
2015-03-31
This research considers the effects of diffraction, attenuation, and the nonlinearity of generating sources on measurements of nonlinear ultrasonic Rayleigh wave propagation. A new theoretical framework for correcting measurements made with air-coupled and contact piezoelectric receivers for the aforementioned effects is provided based on analytical models and experimental considerations. A method for extracting the nonlinearity parameter β₁₁ is proposed based on a nonlinear least squares curve-fitting algorithm that is tailored for Rayleigh wave measurements. Quantitative experiments are conducted to confirm the predictions for the nonlinearity of the piezoelectric source and to demonstrate the effectiveness of the curve-fitting procedure. These experiments are conducted on aluminum 2024 and 7075 specimens, and a β₁₁(7075)/β₁₁(2024) ratio of 1.363 agrees well with previous literature and earlier work.
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.
1993-01-01
New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
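The core idea can be sketched with generic SciPy tools: minimize an EDF discrepancy statistic, here the Kolmogorov-Smirnov statistic, over the three Weibull parameters with Powell's derivative-free method. This is an illustration, not the authors' implementation, and the failure data are synthetic.

```python
# Sketch: three-parameter Weibull estimation by minimizing the KS statistic with Powell's method.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def ks_statistic(params, data):
    shape, loc, scale = params
    if shape <= 0 or scale <= 0 or loc >= data.min():
        return np.inf                                  # keep the search in a valid region
    x = np.sort(data)
    n = len(x)
    cdf = weibull_min.cdf(x, shape, loc=loc, scale=scale)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

failure_data = np.array([212., 245., 261., 280., 297., 310., 333., 358., 390.])
result = minimize(ks_statistic, x0=[2.0, 150.0, 150.0], args=(failure_data,), method="Powell")
shape_hat, loc_hat, scale_hat = result.x
```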
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have a major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach based on fitness measures that constitute an a-priori gauging of classification efficacy for each subspace is investigated. Methods for generation of fitness measures, generation of input subspaces, and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
Reverse engineering the gap gene network of Drosophila melanogaster.
Perkins, Theodore J; Jaeger, Johannes; Reinitz, John; Glass, Leon
2006-05-01
A fundamental problem in functional genomics is to determine the structure and dynamics of genetic networks based on expression data. We describe a new strategy for solving this problem and apply it to recently published data on early Drosophila melanogaster development. Our method is orders of magnitude faster than current fitting methods and allows us to fit different types of rules for expressing regulatory relationships. Specifically, we use our approach to fit models using a smooth nonlinear formalism for modeling gene regulation (gene circuits) as well as models using logical rules based on activation and repression thresholds for transcription factors. Our technique also allows us to infer regulatory relationships de novo or to test network structures suggested by the literature. We fit a series of models to test several outstanding questions about gap gene regulation, including regulation of and by hunchback and the role of autoactivation. Based on our modeling results and validation against the experimental literature, we propose a revised network structure for the gap gene system. Interestingly, some relationships in standard textbook models of gap gene regulation appear to be unnecessary for or even inconsistent with the details of gap gene expression during wild-type development.
Treatment of oroantral fistulas using bony press-fit technique.
Er, Nuray; Tuncer, Hakan Yusuf; Karaca, Ciğdem; Copuroğlu, Seçil
2013-04-01
The objective of this study was to determine the effectiveness of the bony press-fit technique in closing oroantral communications (OACs) and oroantral fistulas (OAFs) and in identifying potential intraoral donor sites. Ten patients, 4 with OACs and 6 with OAFs, were treated with autogenous bone grafts using the bony press-fit technique. In 9 patients, dental extractions caused OACs or OAFs; in 1 patient, an OAC appeared after cyst enucleation. Donor sites included the chin (3 patients), buccal exostosis (1 patient), maxillary tuberosity (2 patients), ramus (1 patient), and the lateral wall of the maxillary sinus (3 patients). The preoperative evaluation of the patients, surgical technique, and postoperative management were examined. In all 10 patients, a stable press fit of the graft was achieved. Additional fixation methods were not needed. In 2 patients, mucosal dehiscence developed, but healed spontaneously. In 2 patients, dental implant surgery was performed in the grafted area. Treatment of 10 patients with OACs or OAFs was performed, with a 100% success rate. The bony press-fit technique can be used to safely close OACs or OAFs, and it presents some advantages compared with other techniques. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.
2016-05-01
The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discuss the fitting of potentiometric titration curve data using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acids, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa < 6. The methodology showed good reproducibility and stability, with standard deviations below 5%. The nature of the groups was independent of small variations in experimental conditions, i.e. the mass of carbon dots titrated and the initial concentration of the HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.
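A greatly simplified sketch of such a fit is given below; writing the dissociated acid-group content as a sum of independent monoprotic sites is an assumption, not the authors' full speciation model, and curve_fit defaults to the Levenberg-Marquardt algorithm only when no parameter bounds are supplied.

```python
# Sketch: fit site concentrations and pKa values of surface acid groups to a titration-derived curve.
import numpy as np
from scipy.optimize import curve_fit

def dissociated_groups(ph, *params):
    """Sum over sites of c_i / (1 + 10**(pKa_i - pH)), in mmol per gram."""
    c = np.array(params[0::2])
    pka = np.array(params[1::2])
    return np.sum(c[:, None] / (1.0 + 10.0 ** (pka[:, None] - ph[None, :])), axis=0)

ph = np.linspace(3, 11, 40)
measured = dissociated_groups(ph, 1.2, 4.2, 1.7, 6.5, 2.1, 9.4)    # synthetic example data
p0 = [1.0, 4.0, 1.0, 7.0, 1.0, 9.0]                                 # 3 sites: (c_i, pKa_i)
popt, pcov = curve_fit(dissociated_groups, ph, measured, p0=p0)     # unbounded -> Levenberg-Marquardt
```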
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
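A simplified illustration of the idea follows; the permutation-based envelope stands in for the zero-mean Gaussian-process approximation developed in the paper and is only meant to convey how a cumulative-residual trend can be judged against natural variation.

```python
# Sketch: cumulative sum of residuals along an ordering covariate, compared with permuted paths.
import numpy as np

def cumulative_residual_check(covariate, residuals, n_realizations=1000,
                              rng=np.random.default_rng(0)):
    order = np.argsort(covariate)
    observed = np.cumsum(residuals[order])             # observed cumulative-residual process
    sup_obs = np.max(np.abs(observed))
    sups = np.array([np.max(np.abs(np.cumsum(rng.permutation(residuals))))
                     for _ in range(n_realizations)])  # envelope from reshuffled residuals
    p_value = np.mean(sups >= sup_obs)
    return observed, p_value
```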
Fitting of the Thomson scattering density and temperature profiles on the COMPASS tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stefanikova, E.; Division of Fusion Plasma Physics, KTH Royal Institute of Technology, SE-10691 Stockholm; Peterka, M.
2016-11-15
A new technique for fitting the full radial profiles of electron density and temperature obtained by the Thomson scattering diagnostic in H-mode discharges on the COMPASS tokamak is described. The technique combines the conventionally used modified hyperbolic tangent function for fitting the edge transport barrier (pedestal) and a modification of a Gaussian function for fitting the core plasma. The low number of parameters of this combined function, and their straightforward interpretability and controllability, provide a robust method for obtaining physically reasonable profile fits. Deconvolution with the diagnostic instrument function is applied to the profile fit, taking into account the dependence on the actual magnetic configuration.
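A hedged sketch of a combined pedestal/core fit is shown below. The modified hyperbolic tangent (mtanh) form is the one commonly used for edge transport barriers, but the Gaussian core term, parameter names, and initial guesses are illustrative assumptions rather than the exact COMPASS parameterization, and the instrument-function deconvolution is omitted.

```python
# Sketch: mtanh pedestal plus Gaussian core profile, fitted with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def mtanh(x, slope):
    ex, emx = np.exp(x), np.exp(-x)
    return ((1.0 + slope * x) * ex - emx) / (ex + emx)

def profile(r, height, r_ped, width, core_amp, core_width, offset, slope):
    """Pedestal described by mtanh plus a Gaussian contribution in the core."""
    pedestal = 0.5 * height * (mtanh((r_ped - r) / (2.0 * width), slope) + 1.0)
    core = core_amp * np.exp(-(r / core_width) ** 2)
    return pedestal + core + offset

# Usage with measured data (placeholders):
# r, ne = ...  radial positions and Thomson scattering densities for one time slice
# p0 = [3e19, 0.72, 0.01, 1e19, 0.4, 1e18, 0.05]
# popt, pcov = curve_fit(profile, r, ne, p0=p0)
```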
[Tibial press-fit fixation of flexor tendons for reconstruction of the anterior cruciate ligament].
Ettinger, M; Liodakis, E; Haasper, C; Hurschler, C; Breitmeier, D; Krettek, C; Jagodzinski, M
2012-09-01
Press-fit fixation of hamstring tendon autografts for anterior cruciate ligament reconstruction is an interesting technique because no hardware is necessary. This study compares the biomechanical properties of press-fit fixations to an interference screw fixation. Twenty-eight human cadaveric knees were used for hamstring tendon explantation. An additional bone block was harvested from the tibia. We used 28 porcine femora for graft fixation. Constructs were cyclically stretched and then loaded until failure. Maximum load to failure, stiffness and elongation during failure testing and cyclic loading were investigated. The maximum load to failure was 970±83 N for the press-fit tape fixation (T), 572±151 N for the bone bridge fixation (TS), 544±109 N for the interference screw fixation (I), 402±77 N for the press-fit suture fixation (S) and 290±74 N for the bone block fixation technique (F). The T fixation had a significantly better maximum load to failure compared to all other techniques (p<0.001). This study demonstrates that a tibial press-fit technique which uses an additional bone block has better maximum load to failure results compared to a simple interference screw fixation.
The contemporary mindfulness movement and the question of nonself.
Samuel, Geoffrey
2015-08-01
Mindfulness-based stress reduction (MBSR), mindfulness-based cognitive therapy (MBCT), and other "mindfulness"-based techniques have rapidly gained a significant presence within contemporary society. Clearly these techniques, which derive or are claimed to derive from Buddhist meditational practices, meet genuine human needs. However, questions are increasingly raised regarding what these techniques meant in their original context(s), how they have been transformed in relation to their new Western and global field of activity, what might have been lost (or gained) on the way, and how the entire contemporary mindfulness phenomenon might be understood. The article points out that first-generation mindfulness practices, such as MBSR and MBCT, derive from modernist versions of Buddhism, and omit or minimize key aspects of the Buddhist tradition, including the central Buddhist philosophical emphasis on the deconstruction of the self. Nonself (or no self) fits poorly into the contemporary therapeutic context, but is at the core of the Buddhist enterprise from which contemporary "mindfulness" has been abstracted. Instead of focussing narrowly on the practical efficacy of the first generation of mindfulness techniques, we might see them as an invitation to explore the much wider range of practices available in the traditions from which they originate. Rather, too, than simplifying and reducing these practices to fit current Western conceptions of knowledge, we might seek to incorporate more of their philosophical basis into our Western adaptations. This might lead to a genuine and productive expansion of both scientific knowledge and therapeutic possibilities. © The Author(s) 2014.
Heat Transfer Search Algorithm for Non-convex Economic Dispatch Problems
NASA Astrophysics Data System (ADS)
Hazra, Abhik; Das, Saborni; Basu, Mousumi
2018-06-01
This paper presents the Heat Transfer Search (HTS) algorithm for the non-linear economic dispatch problem. The HTS algorithm is based on the laws of thermodynamics and heat transfer. The proficiency of the suggested technique has been demonstrated on three dissimilar, complicated economic dispatch problems: with valve-point effect; with prohibited operating zones; and with multiple fuels with valve-point effect. Test results acquired from the suggested technique for the economic dispatch problems have been compared with those acquired from other reported evolutionary techniques. It is observed that the suggested HTS yields superior solutions.
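For orientation, the kind of objective such dispatch algorithms minimize can be sketched as a fuel-cost function with the valve-point effect, which adds a rectified-sine ripple to each unit's quadratic cost; the coefficient values below are placeholders, not one of the benchmark systems used in the paper.

```python
# Sketch: economic dispatch fuel cost with valve-point loading.
import numpy as np

def dispatch_cost(P, a, b, c, e, f, P_min):
    """Total fuel cost of the generation vector P ($/h), including the valve-point effect."""
    quadratic = a * P**2 + b * P + c
    valve_point = np.abs(e * np.sin(f * (P_min - P)))
    return np.sum(quadratic + valve_point)

P = np.array([300.0, 150.0, 80.0])                      # unit outputs in MW
a = np.array([0.0016, 0.0021, 0.0050]); b = np.array([7.9, 7.8, 7.9])
c = np.array([561.0, 310.0, 78.0]);     e = np.array([300.0, 200.0, 150.0])
f = np.array([0.0315, 0.042, 0.063]);   P_min = np.array([100.0, 50.0, 20.0])
print(dispatch_cost(P, a, b, c, e, f, P_min))
```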
[Arthroscopic reconstruction of anterior cruciate ligament with press-fit technique].
Halder, A M
2010-08-01
Problems related to the use of interference screws for fixation of bone-patellar tendon-bone grafts for anterior cruciate ligament (ACL) replacement have led to increasing interest in press-fit techniques. Most of the described techniques use press-fit fixation on either the femoral or tibial side. Therefore an arthroscopic technique was developed which achieves bone-patellar tendon-bone graft fixation by press-fit on both sides without the need for supplemental fixation material. The first consecutive 40 patients were examined clinically with a KT-1000 arthrometer and radiologically after a mean of 28.7 months (range 20-40 months) postoperatively. The mean difference in side-to-side laxity was 1.3 mm (SD 2.2 mm) and the results according to the International Knee Documentation Committee (IKDC) score were as follows: 7 A, 28 B, 5 C, 0 D. The presented press-fit technique avoids all complications related to the use of interference screws. It achieves primary stable fixation of the bone-patellar tendon-bone graft thereby allowing early functional rehabilitation. However, fixation strength depends on bone quality and the arthroscopic procedure is demanding. The results showed reliable stabilization of the operated knees.
High-accuracy peak picking of proteomics data using wavelet techniques.
Lange, Eva; Gröpl, Clemens; Reinert, Knut; Kohlbacher, Oliver; Hildebrandt, Andreas
2006-01-01
A new peak picking algorithm for the analysis of mass spectrometric (MS) data is presented. It is independent of the underlying machine or ionization method, and is able to resolve highly convoluted and asymmetric signals. The method uses the multiscale nature of spectrometric data by first detecting the mass peaks in the wavelet-transformed signal before a given asymmetric peak function is fitted to the raw data. In an optional third stage, the resulting fit can be further improved using techniques from nonlinear optimization. In contrast to currently established techniques (e.g. SNAP, Apex) our algorithm is able to separate overlapping peaks of multiply charged peptides in ESI-MS data of low resolution. Its improved accuracy with respect to peak positions makes it a valuable preprocessing method for MS-based identification and quantification experiments. The method has been validated on a number of different annotated test cases, where it compares favorably in both runtime and accuracy with currently established techniques. An implementation of the algorithm is freely available in our open source framework OpenMS.
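The two-stage idea can be sketched with generic SciPy tools (OpenMS provides its own implementation): candidate peak positions are located on the wavelet-transformed signal, and an asymmetric peak shape, here a bi-Gaussian chosen for illustration, is then fitted to the raw data around each candidate.

```python
# Sketch: wavelet-based candidate detection followed by asymmetric peak fitting.
import numpy as np
from scipy.signal import find_peaks_cwt
from scipy.optimize import curve_fit

def bi_gaussian(x, amp, center, sigma_left, sigma_right):
    sigma = np.where(x < center, sigma_left, sigma_right)
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def pick_peaks(mz, intensity, widths=np.arange(2, 20)):
    peaks = []
    for idx in find_peaks_cwt(intensity, widths):       # candidates from the CWT ridge lines
        lo, hi = max(idx - 15, 0), min(idx + 15, len(mz))
        p0 = [intensity[idx], mz[idx], 0.02, 0.02]
        try:
            popt, _ = curve_fit(bi_gaussian, mz[lo:hi], intensity[lo:hi], p0=p0)
            peaks.append(popt)
        except RuntimeError:
            continue                                     # skip candidates that do not converge
    return peaks
```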
Sabonghy, Eric Peter; Wood, Robert Michael; Ambrose, Catherine Glauber; McGarvey, William Christopher; Clanton, Thomas Oscar
2003-03-01
Tendon transfer techniques in the foot and ankle are used for tendon ruptures, deformities, and instabilities. This fresh cadaver study compares the tendon fixation strength in 10 paired specimens by performing a tendon to tendon fixation technique or using 7 x 20-25 mm bioabsorbable interference-fit screw tendon fixation technique. Load at failure of the tendon to tendon fixation method averaged 279N (Standard Deviation 81N) and the bioabsorbable screw 148N (Standard Deviation 72N) [p = 0.0008]. Bioabsorbable interference-fit screws in these specimens show decreased fixation strength relative to the traditional fixation technique. However, the mean bioabsorbable screw fixation strength of 148N provides physiologic strength at the tendon-bone interface.
A study of data analysis techniques for the multi-needle Langmuir probe
NASA Astrophysics Data System (ADS)
Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.
2018-06-01
In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. It is shown that each technique has pros and cons with respect to the m-NLP implementation. Even though the linear fitting technique seems to be better than measurements from incoherent scatter radar and in situ instruments, m-NLPs can be longer and can be cleaned during operation to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a higher number of probes are deployed.
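The linear-fit technique can be sketched, with heavy hedging, as follows: for cylindrical probes in the orbital-motion-limited (OML) regime, the square of the collected electron current is approximately linear in the bias voltage, so a straight-line fit of I^2 against V across the fixed-bias needles yields the electron density from the slope alone. The geometric prefactor used here follows from the textbook OML expression and should be checked against the instrument literature; the numerical values are invented.

```python
# Sketch: electron density from the slope of I^2 vs V for a set of fixed-bias needle probes.
import numpy as np

E = 1.602e-19      # elementary charge (C)
ME = 9.109e-31     # electron mass (kg)

def electron_density(bias_volts, currents, probe_area):
    """Density (m^-3) from a linear fit of squared probe currents against bias voltage."""
    slope, _ = np.polyfit(bias_volts, np.asarray(currents) ** 2, 1)
    return (np.pi / probe_area) * np.sqrt(slope * ME / (2.0 * E ** 3))

# Example with assumed numbers: four needles biased at different voltages.
biases = np.array([2.0, 4.0, 6.0, 8.0])                    # volts relative to payload
currents = np.array([0.9e-6, 1.25e-6, 1.5e-6, 1.75e-6])    # amperes
print(electron_density(biases, currents, probe_area=2.0e-5))
```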
A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique
Hastrup, Ole F.; Akal, Tuncay
1980-09-01
SACLANT ASW Research Centre Memorandum SM-139, September 1980.
Choi, Jung-Han
2011-01-01
This study aimed to evaluate the effect of different screw-tightening sequences, torques, and methods on the strains generated on an internal-connection implant (Astra Tech) superstructure with good fit. An edentulous mandibular master model and a metal framework directly connected to four parallel implants with a passive fit to each other were fabricated. Six stone casts were made from a dental stone master model by a splinted impression technique to represent a well-fitting situation with the metal framework. Strains generated by four screw-tightening sequences (1-2-3-4, 4-3-2-1, 2-4-3-1, and 2-3-1-4), two torques (10 and 20 Ncm), and two methods (one-step and two-step) were evaluated. In the two-step method, screws were tightened to the initial torque (10 Ncm) in a predetermined screw-tightening sequence and then to the final torque (20 Ncm) in the same sequence. Strains were recorded twice by three strain gauges attached to the framework (superior face midway between abutments). Deformation data were analyzed using multiple analysis of variance at a .05 level of statistical significance. In all stone casts, strains were produced by connection of the superstructure, regardless of screw-tightening sequence, torque, and method. No statistically significant differences in superstructure strains were found based on screw-tightening sequences (range, -409.8 to -413.8 μm/m), torques (-409.7 and -399.1 μm/m), or methods (-399.1 and -410.3 μm/m). Within the limitations of this in vitro study, screw-tightening sequence, torque, and method were not critical factors for the strain generated on a well-fitting internal-connection implant superstructure by the splinted impression technique. Further studies are needed to evaluate the effect of screw-tightening techniques on the preload stress in various different clinical situations.
Community Involvement Components in Culturally-Oriented Teacher Preparation.
ERIC Educational Resources Information Center
Mahan, James M.
At Indiana University, preservice teachers participate in required community-based multicultural programs that allow them to become directly involved with community characteristics, values, needs, and achievements. It is hoped that this experience will help them to adapt curriculum and instructional techniques to fit community realities and…
Gaikwad, Bhushan Satish; Nazirkar, Girish; Dable, Rajani; Singh, Shailendra
2018-01-01
The present study aims to compare and evaluate the marginal fit and axial wall adaptability of Co-Cr copings fabricated by metal laser sintering (MLS) and lost-wax (LW) techniques using a stereomicroscope. A stainless steel master die assembly was fabricated simulating a prepared crown; 40 replicas of the master die were fabricated in gypsum type IV and randomly divided into two equal groups. Group A copings were fabricated by the LW technique and Group B copings by the MLS technique. The copings were seated on their respective gypsum dies and the marginal fit was measured using a stereomicroscope and image analysis software. For evaluation of axial wall adaptability, the coping and die assemblies were embedded in autopolymerizing acrylic resin and sectioned vertically. The discrepancies between the dies and copings were measured along the axial wall on each half. The data were subjected to statistical analysis using the unpaired t-test. The mean values of marginal fit for copings in Group B (MLS) were lower (24.6 μm) than for the copings in Group A (LW) (39.53 μm), and the difference was statistically significant (P<0.05). The mean axial wall discrepancy value was lower for Group B (31.03 μm) as compared with Group A (54.49 μm), and the difference was statistically significant (P<0.05). The copings fabricated by the MLS technique had better marginal fit and axial wall adaptability in comparison with copings fabricated by the LW technique. However, the marginal fit values of copings fabricated by both techniques were within the clinically acceptable limit (<50 μm).
Response Surface Methods for Spatially-Resolved Optical Measurement Techniques
NASA Technical Reports Server (NTRS)
Danehy, P. M.; Dorrington, A. A.; Cutler, A. D.; DeLoach, R.
2003-01-01
Response surface methods (or methodology), RSM, have been applied to improve data quality for two vastly different spatially-resolved optical measurement techniques. In the first application, modern design of experiments (MDOE) methods, including RSM, are employed to map the temperature field in a direct-connect supersonic combustion test facility at NASA Langley Research Center. The laser-based measurement technique known as coherent anti-Stokes Raman spectroscopy (CARS) is used to measure temperature at various locations in the combustor. RSM is then used to develop temperature maps of the flow. Even though the temperature fluctuations at a single point in the flowfield have a standard deviation on the order of 300 K, RSM provides analytic fits to the data having 95% confidence interval half-width uncertainties in the fit as low as +/-30 K. Methods of optimizing future CARS experiments are explored. The second application of RSM is to quantify the shape of a 5-meter diameter, ultra-light, inflatable space antenna at NASA Langley Research Center.
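A minimal sketch of a response-surface fit is given below; the quadratic form and the ordinary least squares solution are assumptions for illustration, not the specific RSM model used at Langley.

```python
# Sketch: second-order response surface fitted to scattered point temperatures T(x, y).
import numpy as np

def fit_response_surface(x, y, t):
    """Return a callable quadratic surface fitted by least squares to points (x, y, t)."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    def surface(xq, yq):
        return (coeffs[0] + coeffs[1] * xq + coeffs[2] * yq
                + coeffs[3] * xq * yq + coeffs[4] * xq**2 + coeffs[5] * yq**2)
    return surface
```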
Structure, Nanomechanics and Dynamics of Dispersed Surfactant-Free Clay Nanocomposite Films
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Zhao, Jing; Snyder, Chad; Karim, Alamgir; National Institute of Standards and Technology Collaboration
Natural montmorillonite particles were dispersed as tactoids in thin films of polycaprolactone (PCL) through a flow-coating technique assisted by ultrasonication. Wide-angle X-ray scattering (WAXS), grazing-incidence wide-angle X-ray scattering (GI-WAXS), and transmission electron microscopy (TEM) were used to confirm the level of dispersion. These characterization techniques were used in conjunction with measurements of the films' nanomechanical properties via strain-induced elastic buckling instability for mechanical measurements (SIEBIMM), a high-throughput technique for characterizing thin-film mechanical properties. The linear strengthening trend of the elastic modulus enhancement was fitted with the Halpin-Tsai (HT) model, which correlates the nanoparticle geometric effects with the mechanical behavior based on continuum theories. The overall aspect ratio of dispersed tactoids obtained through HT model fitting is in reasonable agreement with digital electron microscope image analysis. Moreover, the glass transition behavior of the composites was characterized using broadband dielectric relaxation spectroscopy. The segmental relaxation behavior indicates that the associated mechanical property changes are due to the continuum filler effect rather than the interfacial confinement effect.
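Fitting the Halpin-Tsai relation to modulus-versus-loading data in order to back out an effective filler aspect ratio can be sketched as follows; the matrix and filler moduli, the data points, and the shape factor zeta = 2*(aspect ratio) for platelet-like fillers are illustrative assumptions.

```python
# Sketch: fit an effective aspect ratio with the Halpin-Tsai composite modulus relation.
import numpy as np
from scipy.optimize import curve_fit

E_MATRIX = 0.35e9    # Pa, assumed PCL matrix modulus
E_FILLER = 170e9     # Pa, assumed clay platelet modulus

def halpin_tsai(phi, aspect_ratio):
    zeta = 2.0 * aspect_ratio
    ratio = E_FILLER / E_MATRIX
    eta = (ratio - 1.0) / (ratio + zeta)
    return E_MATRIX * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

phi = np.array([0.00, 0.01, 0.02, 0.04])                    # filler volume fraction
E_measured = np.array([0.35e9, 0.43e9, 0.52e9, 0.71e9])     # SIEBIMM moduli (synthetic)
popt, _ = curve_fit(halpin_tsai, phi, E_measured, p0=[20.0])
print("effective aspect ratio:", popt[0])
```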
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, the latter will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of the automatic fitting method, which combines geostatistical principles and optimization techniques in order to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, use has been made of the cross-validation method, and the best model has been introduced to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies have been presented.
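The weighted least squares objective can be sketched for a single spherical structure plus nugget, with SciPy's dual_annealing standing in for the simulated-annealing optimizer; this is an illustration under assumed weights (pair counts over the squared model value), not the paper's full nested-model program.

```python
# Sketch: weighted least squares variogram fitting with a simulated-annealing style optimizer.
import numpy as np
from scipy.optimize import dual_annealing

def spherical(h, nugget, sill, rng_a):
    h = np.asarray(h, dtype=float)
    inside = nugget + sill * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
    return np.where(h < rng_a, inside, nugget + sill)

def wls_objective(params, lags, gamma_exp, n_pairs):
    model = spherical(lags, *params)
    return np.sum(n_pairs * (gamma_exp - model) ** 2 / np.maximum(model, 1e-12) ** 2)

lags = np.array([10., 20., 30., 40., 60., 80., 100.])
gamma_exp = np.array([0.22, 0.41, 0.55, 0.66, 0.78, 0.82, 0.84])    # experimental variogram (synthetic)
n_pairs = np.array([120, 240, 300, 330, 310, 280, 250])
bounds = [(0.0, 0.5), (0.1, 1.5), (10.0, 200.0)]                     # nugget, sill, range
result = dual_annealing(wls_objective, bounds, args=(lags, gamma_exp, n_pairs), seed=7)
```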
Benchmarking of state-of-the-art needle detection algorithms in 3D ultrasound data volumes
NASA Astrophysics Data System (ADS)
Pourtaherian, Arash; Zinger, Svitlana; de With, Peter H. N.; Korsten, Hendrikus H. M.; Mihajlovic, Nenad
2015-03-01
Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, i.e. for biopsy guidance, regional anesthesia or for brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to the poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing the Gabor transformation. Both algorithms utilize supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle on the selected voxels. The major differences between the two approaches are in extracting the feature vectors for classification and selecting the criterion for fitting. We evaluate the performance of the two techniques using manually-annotated ground truth in several ex-vivo situations of different complexities, containing three different needle types with various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, leading to the development of improved techniques for more reliable and accurate localization. Benchmarking shows that the Gabor features are better capable of distinguishing the needle voxels in all datasets. Moreover, it is shown that the complete processing chain of the Gabor-based method outperforms the line filtering in accuracy and stability of the detection results.
a Genetic Algorithm Based on Sexual Selection for the Multidimensional 0/1 Knapsack Problems
NASA Astrophysics Data System (ADS)
Varnamkhasti, Mohammad Jalali; Lee, Lai Soon
In this study, a new technique is presented for choosing mate chromosomes during sexual selection in a genetic algorithm. The population is divided into groups of males and females. During sexual selection, the female chromosome is selected by tournament selection, while the male chromosome is selected based on the Hamming distance from the selected female chromosome, its fitness value, or its active genes. Computational experiments are conducted on the proposed technique and the results are compared with some selection mechanisms commonly used for solving multidimensional 0/1 knapsack problems published in the literature.
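The mate-selection step can be sketched in a few lines. In the sketch below, the female parent is chosen by tournament selection and the male parent by Hamming distance from her; whether the most similar or the most dissimilar male should be preferred is an assumption here (the most dissimilar is taken, to promote diversity), and the fitness- and active-gene-based alternatives mentioned in the abstract are omitted.

```python
# Minimal sketch of sexual selection with Hamming-distance mate choice.
import random

def tournament(population, fitness, k=3):
    """Pick the fittest of k randomly drawn individuals."""
    contenders = random.sample(range(len(population)), k)
    return max(contenders, key=lambda i: fitness[i])

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def select_parents(females, male_pool, female_fitness):
    """Return one (female, male) pair for crossover."""
    female = females[tournament(females, female_fitness)]
    male = max(male_pool, key=lambda m: hamming(m, female))  # most dissimilar male
    return female, male

# Hypothetical binary chromosomes for a 0/1 knapsack instance.
random.seed(0)
females = [[random.randint(0, 1) for _ in range(10)] for _ in range(6)]
males = [[random.randint(0, 1) for _ in range(10)] for _ in range(6)]
female_fitness = [sum(f) for f in females]  # placeholder fitness values
print(select_parents(females, males, female_fitness))
```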
Proceedings of the NASA Workshop on Surface Fitting
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1982-01-01
Surface fitting techniques and their utilization are addressed. Surface representation, approximation, and interpolation are discussed, along with statistical estimation problems associated with surface fitting.
Actinide electronic structure and atomic forces
NASA Astrophysics Data System (ADS)
Albers, R. C.; Rudin, Sven P.; Trinkle, Dallas R.; Jones, M. D.
2000-07-01
We have developed a new method[1] of fitting tight-binding parameterizations based on functional forms developed at the Naval Research Laboratory.[2] We have applied these methods to actinide metals and report our success using them (see below). The fitting procedure uses first-principles local-density-approximation (LDA) linear augmented plane-wave (LAPW) band structure techniques[3] to first calculate an electronic-structure band structure and total energy for fcc, bcc, and simple cubic crystal structures for the actinide of interest. The tight-binding parameterization is then chosen to fit the detailed energy eigenvalues of the bands along symmetry directions, and the symmetry of the parameterization is constrained to agree with the correct symmetry of the LDA band structure at each eigenvalue and k-vector that is fit to. By fitting to a range of different volumes and the three different crystal structures, we find that the resulting parameterization is robust and appears to accurately calculate other crystal structures and properties of interest.
Nelson, Neha; K S, Jyothi; Sunny, Kiran
2017-03-01
The margins of copings for crowns and retainers of fixed partial dentures affect the progress of microleakage and dental caries. Failures occur due to altered fit, which is also influenced by the method of fabrication. An in-vitro study was conducted to determine which of cast base metal, copy-milled zirconia, computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia, and direct metal laser sintered copings showed the best marginal accuracy and internal fit. Forty extracted maxillary premolars were mounted on an acrylic model and reduced occlusally using a milling machine up to a final tooth height of 4 mm from the cementoenamel junction. Axial reduction was accomplished on a surveyor and a chamfer finish line was given. The impressions and dies were made for fabrication of copings, which were luted on the prepared teeth under standardized loading, embedded in self-cure acrylic resin, sectioned, and observed using a scanning electron microscope for internal gap and marginal accuracy. The copings fabricated using the direct metal laser sintering technique exhibited the best marginal accuracy and internal fit. Comparison of means between the four groups by ANOVA and post-hoc Tukey HSD tests showed a statistically significant difference between all the groups (p<0.05). It was concluded that the copings fabricated using the direct metal laser sintering technique exhibited the best marginal accuracy and internal fit. Additive digital technologies such as direct metal laser sintering could be cost-effective for the clinician, minimize failures related to fit, and increase the longevity of teeth and prostheses. Copyright© 2017 Dennis Barber Ltd.
Analysis technique for controlling system wavefront error with active/adaptive optics
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
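The linear-optics idea underlying such wavefront control can be illustrated with ordinary least squares: the measured wavefront error is projected onto a matrix whose columns are actuator influence functions, and the solution gives the correcting actuator commands. The sketch below uses random matrices as stand-ins for real influence functions and is not SigFit's implementation.

```python
# Generic sketch of linear wavefront control: WFE ~ A @ commands, solved by
# least squares; the correction is the fitted surface subtracted from the WFE.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_actuators = 500, 12

# Hypothetical influence matrix: column j = surface response to unit command j.
A = rng.normal(size=(n_nodes, n_actuators))
wfe = rng.normal(size=n_nodes)          # measured wavefront error at the nodes

commands, residual, *_ = np.linalg.lstsq(A, wfe, rcond=None)
corrected = wfe - A @ commands          # residual WFE after applying commands
print("RMS before: %.3f  after: %.3f" % (wfe.std(), corrected.std()))
```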
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conley, A.; Goldhaber, G.; Wang, L.
We present measurements of Ωm and ΩΛ from a blind analysis of 21 high redshift supernovae using a new technique (CMAGIC) for fitting the multicolor lightcurves of Type Ia supernovae, first introduced in Wang et al. (2003). CMAGIC takes advantage of the remarkably simple behavior of Type Ia supernovae on color-magnitude diagrams, and has several advantages over current techniques based on maximum magnitudes. Among these are a reduced sensitivity to host galaxy dust extinction, a shallower luminosity-width relation, and the relative simplicity of the fitting procedure. This allows us to provide a cross check of previous supernova cosmology results, despite the fact that current data sets were not observed in a manner optimized for CMAGIC. We describe the details of our novel blindness procedure, which is designed to prevent experimenter bias. The data are broadly consistent with the picture of an accelerating Universe, and agree with a flat Universe within 1.7σ, including systematics. We also compare the CMAGIC results directly with those of a maximum magnitude fit to the same SNe, finding that CMAGIC favors more acceleration at the 1.6σ level, including systematics and the correlation between the two measurements. A fit for w assuming a flat Universe yields a value which is consistent with a cosmological constant within 1.2σ.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure applies weights based on the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure applies weights based on the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s.
Multiplexed absorption tomography with calibration-free wavelength modulation spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk
2014-04-14
We propose a multiplexed absorption tomography technique, which uses calibration-free wavelength modulation spectroscopy with tunable semiconductor lasers for the simultaneous imaging of temperature and species concentration in harsh combustion environments. Compared with the commonly used direct absorption spectroscopy (DAS) counterpart, the present variant enjoys better signal-to-noise ratios and requires no baseline fitting, a particularly desirable feature for high-pressure applications, where adjacent absorption features overlap and interfere severely. We present proof-of-concept numerical demonstrations of the technique using realistic phantom models of harsh combustion environments and prove that the proposed techniques outperform currently available tomography techniques based on DAS.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
NASA Astrophysics Data System (ADS)
Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin
2013-11-01
The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.
Alghamdi, Manal; Al-Mallah, Mouaz; Keteyian, Steven; Brawner, Clinton; Ehrman, Jonathan; Sakr, Sherif
2017-01-01
Machine learning is becoming a popular and important approach in the field of medical research. In this study, we investigate the relative performance of various machine learning methods such as Decision Tree, Naïve Bayes, Logistic Regression, Logistic Model Tree and Random Forests for predicting incident diabetes using medical records of cardiorespiratory fitness. In addition, we apply different techniques to uncover potential predictors of diabetes. This FIT project study used data from 32,555 patients who were free of any known coronary artery disease or heart failure, who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009, and who had a complete 5-year follow-up. At the completion of the fifth year, 5,099 of those patients had developed diabetes. The dataset contained 62 attributes classified into four categories: demographic characteristics, disease history, medication use history, and stress test vital signs. We developed an ensemble-based predictive model using 13 attributes that were selected based on their clinical importance, Multiple Linear Regression, and Information Gain Ranking methods. The negative effect of class imbalance on the constructed model was handled by the Synthetic Minority Oversampling Technique (SMOTE). The overall performance of the predictive classifier was improved by an ensemble machine learning approach using the Vote method with three decision-tree learners (Naïve Bayes Tree, Random Forest, and Logistic Model Tree), achieving high prediction accuracy (AUC = 0.92). The study shows the potential of ensembling and SMOTE approaches for predicting incident diabetes using cardiorespiratory fitness data.
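The shape of that pipeline, minority-class oversampling followed by a soft-voting ensemble, can be sketched with scikit-learn and imbalanced-learn. In the sketch below, RandomForest, GaussianNB and LogisticRegression stand in for the paper's Random Forest, Naïve Bayes Tree and Logistic Model Tree, and a synthetic imbalanced dataset replaces the FIT registry data.

```python
# Sketch of SMOTE oversampling followed by a soft-voting ensemble classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, n_features=13,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training set only, then train the voting ensemble.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ensemble.fit(X_bal, y_bal)
print("AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
```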
Teaching Mitochondrial Genetics & Disease: A GENA Project Curriculum Intervention
ERIC Educational Resources Information Center
Reardon, Ryan A.; Sharer, J. Daniel
2012-01-01
This report describes a novel, inquiry-based learning plan developed as part of the GENA educational outreach project. Focusing on mitochondrial genetics and disease, this interactive approach utilizes pedigree analysis and laboratory techniques to address non-Mendelian inheritance. The plan can be modified to fit a variety of educational goals…
Material Database for Additive Manufacturing Techniques
2017-12-01
has been used to fabricate prototypes, tooling, fixtures, and forms to test design fit [2]. 3-D printing allows free complexity and integration of...2016. 2. Orndorff, W., “3D Printing saves maintainers money at Hill,” Ogden Air Logistics Complex, Hill Air Force Base News, 12 December 2014. 3. P
ERIC Educational Resources Information Center
Maruyama, Geoffrey
1992-01-01
A Lewinian orientation to educational problems fits current innovative thinking in education (e.g., models for making education multicultural), and provides the bases of important applied work on cooperative learning techniques and constructive ways of structuring conflict within educational settings. Lewinian field theory provides a broad…
Genetic programming based ensemble system for microarray data classification.
Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To
2015-01-01
Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.
NASA Astrophysics Data System (ADS)
Hu, Jianqiang; Liu, Ahdi; Zhou, Chu; Zhang, Xiaohui; Wang, Mingyuan; Zhang, Jin; Feng, Xi; Li, Hong; Xie, Jinlin; Liu, Wandong; Yu, Changxuan
2017-08-01
A new integrated technique for fast and accurate measurement of the quasi-optics, especially for the microwave/millimeter wave diagnostic systems of fusion plasma, has been developed. Using the LabVIEW-based comprehensive scanning system, we can realize not only automatic but also fast and accurate measurement, which will help to eliminate the effects of temperature drift and standing wave/multi-reflection. With the Matlab-based asymmetric two-dimensional Gaussian fitting method, all the desired parameters of the microwave beam can be obtained. This technique can be used in the design and testing of microwave diagnostic systems such as reflectometers and the electron cyclotron emission imaging diagnostic systems of the Experimental Advanced Superconducting Tokamak.
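The beam-characterization step can be illustrated with an asymmetric (elliptical) two-dimensional Gaussian fit. The sketch below uses SciPy rather than the paper's MATLAB routine, simulates the beam data, and assumes a common parameterization (amplitude, centre, two widths, rotation angle, offset) that may differ from the authors' exact model.

```python
# Sketch of an asymmetric 2D Gaussian fit to a simulated beam profile.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, theta, offset):
    x, y = coords
    a = (np.cos(theta)**2) / (2*sx**2) + (np.sin(theta)**2) / (2*sy**2)
    b = -np.sin(2*theta) / (4*sx**2) + np.sin(2*theta) / (4*sy**2)
    c = (np.sin(theta)**2) / (2*sx**2) + (np.cos(theta)**2) / (2*sy**2)
    return offset + amp * np.exp(-(a*(x-x0)**2 + 2*b*(x-x0)*(y-y0) + c*(y-y0)**2))

x, y = np.meshgrid(np.linspace(-20, 20, 81), np.linspace(-20, 20, 81))
true_params = (1.0, 1.5, -2.0, 4.0, 7.0, 0.3, 0.05)
rng = np.random.default_rng(0)
data = gauss2d((x.ravel(), y.ravel()), *true_params) + 0.01 * rng.normal(size=x.size)

p0 = (1.0, 0.0, 0.0, 5.0, 5.0, 0.0, 0.0)      # rough starting guess
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), data, p0=p0)
print("fitted beam widths (sx, sy):", popt[3], popt[4])
```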
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, having the lowest error value.
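The LOOCV pricing-error criterion itself is straightforward to sketch: each quote is left out in turn, the remaining quotes are fitted with the candidate interpolant, and the prediction error at the omitted strike is accumulated. The option quotes and the degree-2 comparison below are invented for illustration.

```python
# Sketch of a leave-one-out cross-validation pricing error for polynomial
# interpolation of option prices across strikes.
import numpy as np

strikes = np.array([80., 85., 90., 95., 100., 105., 110., 115., 120.])
prices = np.array([21.0, 16.4, 12.2, 8.6, 5.7, 3.5, 2.0, 1.1, 0.6])

def loocv_rmse(x, y, degree):
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i                 # leave quote i out
        coeffs = np.polyfit(x[mask], y[mask], degree)
        errors.append((np.polyval(coeffs, x[i]) - y[i]) ** 2)
    return np.sqrt(np.mean(errors))

for degree in (2, 4):
    print("degree", degree, "LOOCV RMSE:", loocv_rmse(strikes, prices, degree))
```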
The effect of various veneering techniques on the marginal fit of zirconia copings.
Torabi, Kianoosh; Vojdani, Mahroo; Giti, Rashin; Taghva, Masumeh; Pardis, Soheil
2015-06-01
This study aimed to evaluate the fit of zirconia ceramics before and after veneering, using 3 different veneering processes (layering, press-over, and CAD-on techniques). Thirty standardized zirconia CAD/CAM frameworks were constructed and divided into three groups of 10 each. The first group was veneered using the traditional layering technique. Press-over and CAD-on techniques were used to veneer second and third groups. The marginal gap of specimens was measured before and after veneering process at 18 sites on the master die using a digital microscope. Paired t-test was used to evaluate mean marginal gap changes. One-way ANOVA and post hoc tests were also employed for comparison among 3 groups (α=.05). Marginal gap of 3 groups was increased after porcelain veneering. The mean marginal gap values after veneering in the layering group (63.06 µm) was higher than press-over (50.64 µm) and CAD-on (51.50 µm) veneered groups (P<.001). Three veneering methods altered the marginal fit of zirconia copings. Conventional layering technique increased the marginal gap of zirconia framework more than pressing and CAD-on techniques. All ceramic crowns made through three different veneering methods revealed clinically acceptable marginal fit.
An interactive user-friendly approach to surface-fitting three-dimensional geometries
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1988-01-01
A surface-fitting technique has been developed which addresses two problems with existing geometry packages: computer storage requirements and the time required of the user for the initial setup of the geometry model. Coordinates of cross sections are fit using segments of general conic sections. The next step is to blend the cross-sectional curve-fits in the longitudinal direction using general conics to fit specific meridional half-planes. Provisions are made to allow the fitting of fuselages and wings so that entire wing-body combinations may be modeled. This report includes the development of the technique along with a User's Guide for the various menus within the program. Results for the modeling of the Space Shuttle and a proposed Aeroassist Flight Experiment geometry are presented.
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physicochemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
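The fitting step common to all 12 candidate circuits can be sketched as a complex nonlinear least-squares problem in which the real and imaginary parts of the impedance are fitted simultaneously. The sketch below uses a simple Randles-type circuit (ohmic resistance in series with a parallel R-C) and synthetic data purely as an example; it is not one of the paper's candidate circuits.

```python
# Sketch of fitting an equivalent-circuit impedance model to an EIS spectrum.
import numpy as np
from scipy.optimize import least_squares

def z_model(params, omega):
    r_ohm, r_ct, c_dl = params
    z_rc = r_ct / (1 + 1j * omega * r_ct * c_dl)   # charge transfer || double layer
    return r_ohm + z_rc

def residuals(params, omega, z_meas):
    diff = z_model(params, omega) - z_meas
    return np.concatenate([diff.real, diff.imag])   # fit real and imaginary parts

freq = np.logspace(-1, 4, 60)
omega = 2 * np.pi * freq
rng = np.random.default_rng(0)
z_meas = z_model([0.01, 0.05, 0.2], omega) \
         + 1e-4 * (rng.normal(size=omega.size) + 1j * rng.normal(size=omega.size))

fit = least_squares(residuals, x0=[0.02, 0.02, 0.1], args=(omega, z_meas),
                    bounds=([0, 0, 0], [1, 1, 10]))
print("R_ohm, R_ct, C_dl =", fit.x)
```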
Fit Analysis of Different Framework Fabrication Techniques for Implant-Supported Partial Prostheses.
Spazzin, Aloísio Oro; Bacchi, Atais; Trevisani, Alexandre; Farina, Ana Paula; Dos Santos, Mateus Bertolini
2016-01-01
This study evaluated the vertical misfit of implant-supported frameworks made using different techniques to obtain passive fit. Thirty three-unit fixed partial dentures were fabricated in cobalt-chromium alloy (n = 10) using three fabrication methods: one-piece casting, framework cemented on prepared abutments, and laser welding. The vertical misfit between the frameworks and the abutments was evaluated with an optical microscope using the single-screw test. Data were analyzed using one-way analysis of variance and Tukey test (α = .05). The one-piece casted frameworks presented significantly higher vertical misfit values than those found for framework cemented on prepared abutments and laser welding techniques (P < .001 and P < .003, respectively). Laser welding and framework cemented on prepared abutments are effective techniques to improve the adaptation of three-unit implant-supported prostheses. These techniques presented similar fit.
PSO-tuned PID controller for coupled tank system via priority-based fitness scheme
NASA Astrophysics Data System (ADS)
Jaafar, Hazriq Izzuan; Hussien, Sharifah Yuslinda Syed; Selamat, Nur Asmiza; Abidin, Amar Faiz Zainal; Aras, Mohd Shahrieel Mohd; Nasir, Mohamad Na'im Mohd; Bohari, Zul Hasrizal
2015-05-01
The Coupled Tank System (CTS) is widely used in industrial applications, especially in the chemical process industries. The overall process requires liquids to be pumped, stored in a tank, and pumped again to another tank. The level of liquid in each tank needs to be controlled and the flow between the two tanks must be regulated. This paper presents the development of an optimal PID controller for controlling the desired liquid level of the CTS. Two methods of the Particle Swarm Optimization (PSO) algorithm are tested for optimizing the PID controller parameters: standard Particle Swarm Optimization (PSO) and the Priority-based Fitness Scheme in Particle Swarm Optimization (PFPSO). Simulation is conducted within the Matlab environment to verify the performance of the system in terms of settling time (Ts), steady-state error (SSE) and overshoot (OS). It is demonstrated that implementing PSO via the Priority-based Fitness Scheme (PFPSO) is a potential technique to control the desired liquid level and improve system performance compared with standard PSO.
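A bare-bones version of PSO-based PID tuning is sketched below: each particle is a (Kp, Ki, Kd) triple, its fitness is an ITAE-style cost from simulating a toy single-tank level model, and the swarm update follows the standard PSO rule. The coupled-tank dynamics and the priority-based fitness scheme of the paper are not reproduced, and all constants are illustrative.

```python
# Minimal sketch of PID tuning with particle swarm optimization.
import numpy as np

def simulate(gains, setpoint=10.0, dt=0.1, steps=600):
    """ITAE-style cost of a PID loop around a toy single-tank level model."""
    kp, ki, kd = gains
    h, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for k in range(steps):
        err = setpoint - h
        integ += err * dt
        deriv = (err - prev_err) / dt
        q_in = np.clip(kp * err + ki * integ + kd * deriv, 0.0, 50.0)  # pump limits
        h += dt * (q_in - 2.0 * np.sqrt(max(h, 0.0)))                  # toy tank dynamics
        prev_err = err
        cost += (k * dt) * abs(err) * dt                               # ITAE
    return cost

rng = np.random.default_rng(1)
n_particles, n_iter = 20, 40
pos = rng.uniform(0.0, 10.0, size=(n_particles, 3))     # candidate (Kp, Ki, Kd)
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([simulate(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([simulate(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best PID gains (Kp, Ki, Kd):", gbest)
```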
Cai, Jing; Tyree, Melvin T
2010-07-01
The objective of this study was to quantify the relationship between vulnerability to cavitation and vessel diameter within a species. We measured vulnerability curves (VCs: percentage loss of hydraulic conductivity versus tension) in aspen stems and measured vessel-size distributions. Measurements were done on seed-grown, 4-month-old aspen (Populus tremuloides Michx) grown in a greenhouse. VCs of stem segments were measured using a centrifuge technique and by a staining technique that allowed a VC to be constructed based on vessel diameter size-classes (D). Vessel-based VCs were also fitted to Weibull cumulative distribution functions (CDF), which provided best-fit values of Weibull CDF constants (c and b) and P50, the tension causing 50% loss of hydraulic conductivity. We show that P50 = 6.166·D^(-0.3134) (R² = 0.995) and that b and 1/c are both linear functions of D with R² > 0.95. The results are discussed in terms of models of VCs based on vessel D size-classes and in terms of concepts such as the 'pit area hypothesis' and vessel pathway redundancy.
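Fitting a Weibull CDF to a vulnerability curve and recovering P50 from the fitted b and c constants can be sketched as follows; the PLC-versus-tension points are invented and the parameterization PLC = 100·(1 - exp(-(T/b)^c)) is one common form of the Weibull VC.

```python
# Sketch of fitting a Weibull CDF vulnerability curve and deriving P50.
import numpy as np
from scipy.optimize import curve_fit

def weibull_vc(tension, b, c):
    """Percentage loss of conductivity (PLC) as a Weibull CDF of tension."""
    return 100.0 * (1.0 - np.exp(-(tension / b) ** c))

tension = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # MPa (illustrative)
plc = np.array([3.0, 12.0, 30.0, 52.0, 71.0, 85.0, 93.0, 97.0])

(b, c), _ = curve_fit(weibull_vc, tension, plc, p0=[2.0, 2.0])
p50 = b * np.log(2.0) ** (1.0 / c)      # tension at 50% loss of conductivity
print("b = %.2f, c = %.2f, P50 = %.2f MPa" % (b, c, p50))
```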
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data was obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
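The core quantity, a time delay estimated from the slope of the cross-power spectrum phase, can be sketched with SciPy; the synthetic signals, the frequency band, and the sign convention tied to SciPy's cross-spectrum definition are assumptions of this sketch, and the paper's adaptive, cost-function-driven iteration is not reproduced.

```python
# Sketch of estimating a time delay from the cross-power spectrum phase slope.
import numpy as np
from scipy.signal import csd

fs = 2000.0
rng = np.random.default_rng(0)
x = rng.normal(size=20000)
delay_samples = 7
y = np.roll(x, delay_samples) + 0.1 * rng.normal(size=x.size)   # delayed, noisy copy

f, pxy = csd(x, y, fs=fs, nperseg=1024)
phase = np.unwrap(np.angle(pxy))

# Fit a straight line to phase(f) over a band where coherence is high;
# the slope gives the delay (sign depends on the cross-spectrum convention).
band = (f > 50) & (f < 600)
slope, _ = np.polyfit(f[band], phase[band], 1)
print("estimated delay: %.4f ms (true %.4f ms)"
      % (-slope / (2 * np.pi) * 1e3, delay_samples / fs * 1e3))
```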
Biomechanical characterization of double-bundle femoral press-fit fixation techniques.
Ettinger, M; Haasper, C; Hankemeier, S; Hurschler, C; Breitmeier, D; Krettek, C; Jagodzinski, M
2011-03-01
Press-fit fixation of patellar tendon bone anterior cruciate ligament autografts is an interesting technique because no hardware is necessary. To date, no biomechanical data exist describing an implant-free double-bundle press-fit procedure. The purpose of this study was to characterize the biomechanical properties of three double-bundle press-fit fixations. In a controlled laboratory study, the patellar, quadriceps and hamstring tendons of 10 human cadavers (age: 49.2 ± 18.5 years) were used. An inside-out press-fit fixation with a knot in the semitendinosus and gracilis tendons combined with an additional bone block (SG) and a fixation with two quadriceps tendon bone block grafts (QU) were compared with press-fit fixation of two bone patellar tendon bone block (PT) grafts in 30 porcine femora. Constructs were cyclically stretched and then loaded until failure. Maximum load to failure, stiffness and elongation during failure testing and cyclical loading were investigated. The maximum load to failure was 703 ± 136 N for SG fixation, 632 ± 130 N for QU and 656 ± 127 N for PT fixation. Stiffness of the constructs averaged 138 ± 26 N/mm for SG, 159 ± 74 N/mm for QU, and 154 ± 50 N/mm for PT fixation. Elongation during initial cyclical loading was 1.2 ± 1.4 mm for SG, 2.0 ± 1.4 mm for QU, and 1.0 ± 0.6 mm for PT (significantly larger for PT and QU in the first 5 cycles compared with cycles 15-20, P < 0.01). All investigated double-bundle fixation techniques were equal in terms of maximum load to failure, stiffness, and elongation. Unlike with single-bundle press-fit fixation techniques that have been published, no difference was observed between pure tendon combined with an additional bone block and tendon bone grafts. All techniques exhibited larger elongation during initial cyclical loading. All three press-fit fixation techniques that were investigated exhibit comparable biomechanical properties. Preconditioning of the constructs is critical.
Editorial Commentary: Polyurethane Meniscal Scaffold: A Perfect Fit or Flop?
Barber, F Alan
2018-05-01
The goal of using a synthetic scaffold to establish a biomechanically functioning meniscus or provide an equivalent meniscus substitute is not achieved by the polycaprolactone-polyurethane Actifit scaffold. Recent research, that did not include a control group, shows that the revision rate is significant, and any improvements in patient outcomes could reflect the associated reconstructive surgery. Based on these data and similar published reports, it is premature to conclude that this implant is clinically indicated. The technique is currently more flop than fit. Copyright © 2018 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation
Golkar, Ehsan; Prabuwono, Anton Satria; Patel, Ahmed
2012-01-01
This paper presents a novel, real-time defect detection system, based on a best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been performed, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously. PMID:23202186
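The best-fit polynomial idea can be sketched in a few lines: fit a low-order polynomial to a measured height profile and flag points whose residuals exceed a robust threshold as candidate defects. The profile, polynomial degree and threshold below are illustrative, not the paper's feature-extraction pipeline.

```python
# Sketch of residual-based defect detection with a best-fit polynomial.
import numpy as np

x = np.linspace(0.0, 100.0, 400)
rng = np.random.default_rng(0)
profile = 0.002 * (x - 50.0) ** 2 + 0.05 * rng.normal(size=x.size)  # nominal curvature + noise
profile[180:195] += 1.5                                             # simulated popped-up blob

coeffs = np.polyfit(x, profile, deg=3)           # best-fit polynomial of the outer surface
residual = profile - np.polyval(coeffs, x)
threshold = 4.0 * np.median(np.abs(residual))    # robust residual threshold
defect_idx = np.flatnonzero(np.abs(residual) > threshold)
print("defect detected near x =", x[defect_idx].min(), "to", x[defect_idx].max())
```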
A Review of Developments in Computer-Based Systems to Image Teeth and Produce Dental Restorations
Rekow, E. Dianne; Erdman, Arthur G.; Speidel, T. Michael
1987-01-01
Computer-aided design and manufacturing (CAD/CAM) make it possible to automate the creation of dental restorations. Currently practiced techniques are described. Three automated systems currently under development are described and compared. Advances in computer-aided design and computer-aided manufacturing (CAD/CAM) provide a new option for dentistry, creating an alternative technique for producing dental restorations. It is possible to create dental restorations that are automatically produced and meet or exceed current requirements for fit and occlusion.
Single-ended retroreflection sensors for absorption spectroscopy in high-temperature environments
NASA Astrophysics Data System (ADS)
Melin, Scott T.; Wang, Ze; Neal, Nicholas J.; Rothamer, David A.; Sanders, Scott T.
2017-04-01
Novel single-ended sensor arrangements are demonstrated for in situ absorption spectroscopy in combustion and related test articles. A single-ended optical access technique based on back-reflection from a polished test article surface is presented. H2O vapor absorption spectra were measured at 10 kHz in a homogeneous-charge compression-ignition engine using a sensor of this design collecting back-reflection from a polished piston surface. The measured spectra show promise for high-repetition-rate measurements in practical combustion devices. A second sensor was demonstrated based on a modification to this optical access technique. The sensor incorporates a nickel retroreflective surface as back-reflector to reduce sensitivity to beam steering and misalignment. In a propane-fired furnace, H2O vapor absorption spectra were obtained over the range 7315-7550 cm-1 at atmospheric pressure and temperatures up to 775 K at 20 Hz using an external-cavity diode laser spectrometer. Gas properties of temperature and mole fraction were obtained from this furnace data using a band-shape spectral fitting technique. The temperature accuracy of the band-shape fitting was demonstrated to be ±1.3 K for furnace measurements at atmospheric pressure. These results should extend the range of applications in which absorption spectroscopy sensors are attractive candidates.
NASA Astrophysics Data System (ADS)
Riley, P.
2016-12-01
The southward component of the interplanetary magnetic field plays a key role in many space weather-related phenomena. However, thus far, it has proven difficult to predict it with any degree of fidelity. In this talk I outline the difficulties in making such forecasts, and describe several promising techniques that may ultimately prove successful. In particular, I focus on predictions of magnetic fields embedded within interplanetary coronal mass ejections (ICMEs), which are the cause of most large, non-recurrent geomagnetic storms. I discuss three specific techniques that are already producing modest, but promising results. First, a pattern recognition approach, which matches observed coherent rotations in the magnetic field with historical intervals of similar variations, then forecasts future variations based on the historical data. Second, a novel flux rope fitting technique that uses an MCMC algorithm to find a best fit to the partially observed ICME. And third, an empirical modular CME model (based on the approach outlined by N. Savani and colleagues), which links several ad hoc models of coronal properties of the flux rope, its kinematics and geometry in the corona, dynamic evolution, and time of transit to 1 AU. We highlight the uncertainties associated with these predictions, and, in particular, identify those that we believe can be reduced in the future.
Data to Pictures to Data: Outreach Imaging Software and Metadata
NASA Astrophysics Data System (ADS)
Levay, Z.
2011-07-01
A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography are now complying with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.
Vasanawala, Shreyas S; Yu, Huanzhou; Shimakawa, Ann; Jeng, Michael; Brittain, Jean H
2012-01-01
MR imaging of hepatic iron overload can be achieved by estimating T2* values using multiple-echo sequences. The purpose of this work is to develop and clinically evaluate a weighted least squares algorithm based on the T2* Iterative Decomposition of water and fat with Echo Asymmetry and Least-squares estimation (IDEAL) technique for volumetric estimation of hepatic T2* in the setting of iron overload. The weighted least squares T2* IDEAL technique improves T2* estimation by automatically decreasing the impact of later, noise-dominated echoes. The technique was evaluated in 37 patients with iron overload. Each patient underwent (i) a standard 2D multiple-echo gradient echo sequence for T2* assessment with nonlinear exponential fitting, and (ii) a 3D T2* IDEAL technique, with and without a weighted least squares fit. Regression and Bland-Altman analysis demonstrated strong correlation between conventional 2D and T2* IDEAL estimation. In cases of severe iron overload, T2* IDEAL without weighted least squares reconstruction resulted in a relative overestimation of T2* compared with weighted least squares. Copyright © 2011 Wiley-Liss, Inc.
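The weighting concept, down-weighting later, noise-dominated echoes in an exponential fit, can be sketched with a simple magnitude-signal example. The sketch below weights each echo by its measured signal via curve_fit's sigma argument and fits a plain monoexponential; the actual T2* IDEAL reconstruction additionally separates water and fat and uses its own automatic weighting, which is not reproduced.

```python
# Sketch of weighted vs. unweighted monoexponential T2* fitting.
import numpy as np
from scipy.optimize import curve_fit

def decay(te, s0, t2star):
    return s0 * np.exp(-te / t2star)

te = np.array([0.9, 2.2, 3.5, 4.8, 6.1, 7.4, 8.7, 10.0])    # echo times (ms), illustrative
rng = np.random.default_rng(0)
signal = np.abs(decay(te, 100.0, 2.0) + rng.normal(scale=1.5, size=te.size))

# curve_fit's sigma is a per-point uncertainty: small sigma = large weight,
# so 1/signal gives the later, weaker echoes less influence on the fit.
sigma = 1.0 / np.maximum(signal, 1e-3)
popt_w, _ = curve_fit(decay, te, signal, p0=[80.0, 5.0], sigma=sigma)
popt_u, _ = curve_fit(decay, te, signal, p0=[80.0, 5.0])
print("weighted T2* = %.2f ms, unweighted T2* = %.2f ms" % (popt_w[1], popt_u[1]))
```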
Wong, Martin C S; Ching, Jessica Y L; Chan, Victor C W; Sung, Joseph J Y
2015-09-04
Faecal immunochemical tests (FITs) and colonoscopy are two common screening tools for colorectal cancer (CRC). Most cost-effectiveness studies focused on survival as the outcome, and were based on modeling techniques instead of real world observational data. This study evaluated the cost-effectiveness of these two tests to detect colorectal neoplastic lesions based on data from a 5-year community screening service. The incremental cost-effectiveness ratio (ICER) was assessed based on the detection rates of neoplastic lesions, and costs including screening compliance, polypectomy, colonoscopy complications, and staging of CRC detected. A total of 5,863 patients received yearly FIT and 4,869 received colonoscopy. Compared with FIT, colonoscopy detected notably more adenomas (23.6% vs. 1.6%) and advanced lesions or cancer (4.2% vs. 1.2%). Using FIT as control, the ICER of screening colonoscopy in detecting adenoma, advanced adenoma, CRC and a composite endpoint of either advanced adenoma or stage I CRC was US$3,489, US$27,962, US$922,762 and US$23,981 respectively. The respective ICER was US$3,597, US$439,513, -US$2,765,876 and US$32,297 among lower-risk subjects; whilst the corresponding figure was US$3,153, US$14,852, US$184,162 and US$13,919 among higher-risk subjects. When compared to FIT, colonoscopy is considered cost-effective for screening adenoma, advanced neoplasia, and a composite endpoint of advanced neoplasia or stage I CRC.
Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2010-01-01
A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…
USDA-ARS?s Scientific Manuscript database
Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...
E-Learning Personalization Based on Hybrid Recommendation Strategy and Learning Style Identification
ERIC Educational Resources Information Center
Klasnja-Milicevic, Aleksandra; Vesin, Boban; Ivanovic, Mirjana; Budimac, Zoran
2011-01-01
Personalized learning occurs when e-learning systems make deliberate efforts to design educational experiences that fit the needs, goals, talents, and interests of their learners. Researchers had recently begun to investigate various techniques to help teachers improve e-learning systems. In this paper, we describe a recommendation module of a…
NASA Astrophysics Data System (ADS)
Parente, Mario; Makarewicz, Heather D.; Bishop, Janice L.
2011-04-01
This study advances curve-fitting modeling of absorption bands of reflectance spectra and applies this new model to spectra of Martian meteorites ALH 84001 and EETA 79001 and data from the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). This study also details a recently introduced automated parameter initialization technique. We assess the performance of this automated procedure by comparing it to the currently available initialization method and perform a sensitivity analysis of the fit results to variation in initial guesses. We explore the issues related to the removal of the continuum, offer guidelines for continuum removal when modeling the absorptions and explore different continuum-removal techniques. We further evaluate the suitability of curve fitting techniques using Gaussians/Modified Gaussians to decompose spectra into individual end-member bands. We show that nonlinear least squares techniques such as the Levenberg-Marquardt algorithm achieve comparable results to the MGM model (Sunshine and Pieters, 1993; Sunshine et al., 1990) for meteorite spectra. Finally we use Gaussian modeling to fit CRISM spectra of pyroxene and olivine-rich terrains on Mars. Analysis of CRISM spectra of two regions shows that the pyroxene-dominated rock spectra measured at Juventae Chasma were modeled well with low-Ca pyroxene, while the pyroxene-rich spectra acquired at Libya Montes required both low-Ca and high-Ca pyroxene for a good fit.
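A much-simplified version of such a decomposition fits a small number of Gaussian bands plus a continuum to natural-log reflectance by nonlinear least squares (Levenberg-Marquardt, as in the paper). The two-band synthetic spectrum, linear continuum and starting guesses below are illustrative and omit the MGM's specific continuum and initialization choices.

```python
# Sketch of decomposing a log-reflectance spectrum into Gaussian bands
# plus a linear continuum with Levenberg-Marquardt least squares.
import numpy as np
from scipy.optimize import curve_fit

def model(wl, c0, c1, a1, mu1, s1, a2, mu2, s2):
    continuum = c0 + c1 * wl
    band1 = a1 * np.exp(-0.5 * ((wl - mu1) / s1) ** 2)
    band2 = a2 * np.exp(-0.5 * ((wl - mu2) / s2) ** 2)
    return continuum + band1 + band2               # ln(reflectance)

wl = np.linspace(0.8, 2.6, 200)                    # wavelength in micrometres
true_params = (-0.1, -0.02, -0.35, 1.05, 0.12, -0.25, 1.95, 0.25)
rng = np.random.default_rng(0)
log_refl = model(wl, *true_params) + 0.005 * rng.normal(size=wl.size)

p0 = (-0.1, 0.0, -0.2, 1.0, 0.1, -0.2, 2.0, 0.2)   # rough initial guesses
popt, _ = curve_fit(model, wl, log_refl, p0=p0)    # default method is 'lm'
print("fitted band centres (um):", popt[3], popt[6])
```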
KNGEOID14: A national hybrid geoid model in Korea
NASA Astrophysics Data System (ADS)
Kang, S.; Sung, Y. M.; KIM, H.; Kim, Y. S.
2016-12-01
This study briefly describes the construction of a national hybrid geoid model in Korea, KNGEOID14, which can be used as an accurate vertical datum in and around Korea. The hybrid geoid model is determined by fitting the gravimetric geoid to the geometric geoid undulations from GNSS/Leveling data, which represent the local vertical level. For developing the gravimetric geoid model, we determined all frequency parts (long, middle and short) of the gravimetric geoid using all available data with an optimal remove-restore technique based on the EGM2008 reference surface. In the remove-restore technique, the EGM2008 model to degree 360 and the RTM reduction method were used for calculating the long- and short-frequency parts of the gravimetric geoid, respectively. The gravity data compiled for modeling the middle-frequency part (the residual geoid) comprised 8,866 gravity points on land and ocean areas. The DEM data, gridded at 100 m × 100 m, were used for the short-frequency part, which is the topographic effect on the geoid generated by the RTM method. The accuracy of the gravimetric geoid model, evaluated by comparison with GNSS/Leveling data, was about -0.362 m ± 0.055 m. Finally, we developed the national hybrid geoid model in Korea, KNGEOID14, by correcting the gravimetric geoid with a correction term obtained by fitting about 1,200 GNSS/Leveling points on Korean bench marks. The correction term is modeled using the difference between GNSS/Leveling-derived geoidal heights and gravimetric geoidal heights. The stochastic model used in the calculation of the correction term is the LSC technique based on a second-order Markov covariance function. The post-fit error (mean and std. dev.) of the KNGEOID14 model was evaluated as 0.001 m ± 0.033 m. Based on the results of this study, accurate orthometric heights at any point in Korea can be easily and precisely calculated by combining the geoidal height from KNGEOID14 and the ellipsoidal height from GPS observations.
Age of the magnetically active WW Psa and TX Psa members of the β Pictoris association
NASA Astrophysics Data System (ADS)
Messina, S.; Santallo, R.; Tan, T. G.; Elliott, P.; Feiden, G. A.; Buccino, A.; Mauas, P.; Petrucci, R.; Jofré, E.
2017-05-01
Context. There are a variety of different techniques available to estimate the ages of pre-main-sequence stars. Components of physical pairs, thanks to their strict coevality and the mass difference, such as the binary system analyzed in this paper, are best suited to test the effectiveness of these different techniques. Aims: We consider the system WW Psa + TX Psa whose membership of the 25-Myr β Pictoris association has been well established by earlier works. We aim to investigate which age-dating technique provides the best agreement between the age of the system and that of the association. Methods: We have photometrically monitored WW Psa and TX Psa and measured their rotation periods as P = 2.37 d and P = 1.086 d, respectively. We have retrieved their Li equivalent widths from the literature and measured their effective temperatures and luminosities. We investigated whether the ages of these stars derived using three independent techniques, that is based on rotation, Li equivalent widths, and the position in the HR diagram are consistent with the age of the β Pictoris association. Results: We find that the rotation periods and the Li contents of both stars are consistent with the distribution of other bona fide members of the cluster. On the contrary, the isochronal fitting provides similar ages for both stars, but a factor of about four younger than the quoted age of the association, or about 30% younger when the effects of magnetic fields are included. Conclusions: We explore the origin of the discrepant age inferred from isochronal fitting, including the possibilities that either the two components may be unresolved binaries or that the basic stellar parameters of both components are altered by enhanced magnetic activity. The latter is found to be the more reasonable cause, suggesting that age estimates based on Li content are more reliable than isochronal fitting for pre-main-sequence stars with pronounced magnetic activity.
McCaig, Duncan; Bhatia, Sudeep; Elliott, Mark T; Walasek, Lukasz; Meyer, Caroline
2018-05-07
Text-mining offers a technique to identify and extract information from a large corpus of textual data. As an example, this study presents the application of text-mining to assess and compare interest in fitness tracking technology across eating disorder and health-related online communities. A list of fitness tracking technology terms was developed, and communities (i.e., 'subreddits') on a large online discussion platform (Reddit) were compared regarding the frequency with which these terms occurred. The corpus used in this study comprised all comments posted between May 2015 and January 2018 (inclusive) on six subreddits-three eating disorder-related, and three relating to either fitness, weight-management, or nutrition. All comments relating to the same 'thread' (i.e., conversation) were concatenated, and formed the cases used in this study (N = 377,276). Within the eating disorder-related subreddits, the findings indicated that a 'pro-eating disorder' subreddit, which is less recovery focused than the other eating disorder subreddits, had the highest frequency of fitness tracker terms. Across all subreddits, the weight-management subreddit had the highest frequency of the fitness tracker terms' occurrence, and MyFitnessPal was the most frequently mentioned fitness tracker. The technique exemplified here can potentially be used to assess group differences to identify at-risk populations, generate and explore clinically relevant research questions in populations who are difficult to recruit, and scope an area for which there is little extant literature. The technique also facilitates methodological triangulation of research findings obtained through more 'traditional' techniques, such as surveys or interviews. © 2018 Wiley Periodicals, Inc.
Accuracy of 3 different impression techniques for internal connection angulated implants.
Tsagkalidis, George; Tortopidis, Dimitrios; Mpikos, Pavlos; Kaisarlis, George; Koidis, Petros
2015-10-01
Making implant impressions with different angulations requires a more precise and time-consuming impression technique. The purpose of this in vitro study was to compare the accuracy of nonsplinted, splinted, and snap-fit impression techniques of internal connection implants with different angulations. An experimental device was used to allow a clinical simulation of impression making by means of open and closed tray techniques. Three different impression techniques (nonsplinted, acrylic-resin splinted, and indirect snap-fit) for 6 internal-connected implants at different angulations (0, 15, 25 degrees) were examined using polyether. Impression accuracy was evaluated by measuring the differences in 3-dimensional (3D) position deviations between the implant body/impression coping before the impression procedure and the coping/laboratory analog positioned within the impression, using a coordinate measuring machine. Data were analyzed by 2-way ANOVA. Means were compared with the least significant difference criterion at P<.05. Results showed that at 25 degrees of implant angulation, the highest accuracy was obtained with the splinted technique (mean ±SE: 0.39 ±0.05 mm) and the lowest with the snap-fit technique (0.85 ±0.09 mm); at 15 degrees of angulation, there were no significant differences among splinted (0.22 ±0.04 mm) and nonsplinted technique (0.15 ±0.02 mm) and the lowest accuracy obtained with the snap-fit technique (0.95 ±0.15 mm); and no significant differences were found between nonsplinted and splinted technique at 0 degrees of implant placement. Splinted impression technique exhibited a higher accuracy than the other techniques studied when increased implant angulations at 25 degrees were involved. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Empirical predictions of hypervelocity impact damage to the space station
NASA Technical Reports Server (NTRS)
Rule, W. K.; Hayashida, K. B.
1991-01-01
A family of user-friendly, DOS PC based, Microsoft BASIC programs written to provide spacecraft designers with empirical predictions of space debris damage to orbiting spacecraft is described. The spacecraft wall configuration is assumed to consist of multilayer insulation (MLI) placed between a Whipple style bumper and the pressure wall. Predictions are based on data sets of experimental results obtained from simulating debris impacts on spacecraft using light gas guns on Earth. A module of the program facilitates the creation of the data base of experimental results that are used by the damage prediction modules of the code. The user has the choice of three different prediction modules to predict damage to the bumper, the MLI, and the pressure wall. One prediction module is based on fitting low order polynomials through subsets of the experimental data. Another prediction module fits functions based on nondimensional parameters through the data. The last prediction technique is a unique approach that is based on weighting the experimental data according to the distance from the design point.
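The distance-weighting idea of that last module can be sketched as inverse-distance-weighted interpolation over normalized impact parameters; the experiment table, parameter choice and weighting exponent below are invented, not the program's data or exact scheme.

```python
# Sketch of damage prediction by inverse-distance weighting of experiments.
import numpy as np

# Hypothetical experiments: columns = (velocity km/s, obliquity deg, diameter cm);
# response = pressure-wall hole diameter (cm).
X = np.array([[6.0, 0.0, 0.32], [6.5, 30.0, 0.40], [7.0, 45.0, 0.48],
              [5.5, 15.0, 0.36], [6.8, 0.0, 0.50]])
y = np.array([0.9, 1.3, 1.8, 1.1, 1.7])

def predict(design_point, X, y, power=2.0):
    scale = X.max(axis=0) - X.min(axis=0)                 # normalize each parameter
    d = np.linalg.norm((X - design_point) / scale, axis=1)
    if np.any(d < 1e-12):                                 # exact match with an experiment
        return y[np.argmin(d)]
    w = 1.0 / d ** power                                  # closer experiments weigh more
    return np.sum(w * y) / np.sum(w)

print("predicted hole diameter:", predict(np.array([6.4, 20.0, 0.42]), X, y))
```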
The effect of various veneering techniques on the marginal fit of zirconia copings
Torabi, Kianoosh; Vojdani, Mahroo; Giti, Rashin; Pardis, Soheil
2015-01-01
PURPOSE This study aimed to evaluate the fit of zirconia ceramics before and after veneering, using 3 different veneering processes (layering, press-over, and CAD-on techniques). MATERIALS AND METHODS Thirty standardized zirconia CAD/CAM frameworks were constructed and divided into three groups of 10 each. The first group was veneered using the traditional layering technique. Press-over and CAD-on techniques were used to veneer second and third groups. The marginal gap of specimens was measured before and after veneering process at 18 sites on the master die using a digital microscope. Paired t-test was used to evaluate mean marginal gap changes. One-way ANOVA and post hoc tests were also employed for comparison among 3 groups (α=.05). RESULTS Marginal gap of 3 groups was increased after porcelain veneering. The mean marginal gap values after veneering in the layering group (63.06 µm) was higher than press-over (50.64 µm) and CAD-on (51.50 µm) veneered groups (P<.001). CONCLUSION Three veneering methods altered the marginal fit of zirconia copings. Conventional layering technique increased the marginal gap of zirconia framework more than pressing and CAD-on techniques. All ceramic crowns made through three different veneering methods revealed clinically acceptable marginal fit. PMID:26140175
Zhang, Dongjing; Zheng, Xiaoying; Xi, Zhiyong; Bourtzis, Kostas; Gilles, Jeremie R. L.
2015-01-01
The mosquito species Aedes albopictus is a major vector of the human diseases dengue and chikungunya. Due to the lack of efficient and sustainable methods to control this mosquito species, there is an increasing interest in developing and applying the sterile insect technique (SIT) and the incompatible insect technique (IIT), separately or in combination, as population suppression approaches. Ae. albopictus is naturally double-infected with two Wolbachia strains, wAlbA and wAlbB. A new triple Wolbachia-infected strain (i.e., a strain infected with wAlbA, wAlbB, and wPip), known as HC and expressing strong cytoplasmic incompatibility (CI) in appropriate matings, was recently developed. In the present study, we compared several fitness traits of three Ae. albopictus strains (triple-infected, double-infected and uninfected), all of which were of the same genetic background (“Guangzhou City, China”) and were reared under the same conditions. Investigation of egg-hatching rate, survival of pupae and adults, sex ratio, duration of larval stages (development time from L1 to pupation), time to emergence (development time from L1 to adult emergence), wing length, female fecundity and adult longevity indicated that the presence of Wolbachia had only a minimal effect on host fitness. Based on this evidence, the HC strain is currently under consideration for mass rearing and application in a combined SIT-IIT strategy to control natural populations of Ae. albopictus in mainland China. PMID:25849812
NASA Astrophysics Data System (ADS)
Conley, A.; Goldhaber, G.; Wang, L.; Aldering, G.; Amanullah, R.; Commins, E. D.; Fadeyev, V.; Folatelli, G.; Garavini, G.; Gibbons, R.; Goobar, A.; Groom, D. E.; Hook, I.; Howell, D. A.; Kim, A. G.; Knop, R. A.; Kowalski, M.; Kuznetsova, N.; Lidman, C.; Nobili, S.; Nugent, P. E.; Pain, R.; Perlmutter, S.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Strovink, M.; Thomas, R. C.; Wood-Vasey, W. M.; Supernova Cosmology Project
2006-06-01
We present measurements of Ωm and ΩΛ from a blind analysis of 21 high-redshift supernovae using a new technique (CMAGIC) for fitting the multicolor light curves of Type Ia supernovae, first introduced by Wang and coworkers. CMAGIC takes advantage of the remarkably simple behavior of Type Ia supernovae on color-magnitude diagrams and has several advantages over current techniques based on maximum magnitudes. Among these are a reduced sensitivity to host galaxy dust extinction, a shallower luminosity-width relation, and the relative simplicity of the fitting procedure. This allows us to provide a cross-check of previous supernova cosmology results, despite the fact that current data sets were not observed in a manner optimized for CMAGIC. We describe the details of our novel blindness procedure, which is designed to prevent experimenter bias. The data are broadly consistent with the picture of an accelerating universe and agree with a flat universe within 1.7 σ, including systematics. We also compare the CMAGIC results directly with those of a maximum magnitude fit to the same supernovae, finding that CMAGIC favors more acceleration at the 1.6 σ level, including systematics and the correlation between the two measurements. A fit for w assuming a flat universe yields a value that is consistent with a cosmological constant within 1.2 σ.
Zimmermann, Moritz; Valcanaia, Andre; Neiva, Gisele; Mehl, Albert; Fasbinder, Dennis
2017-11-30
Several methods for the evaluation of fit of computer-aided design/computer-assisted manufacture (CAD/CAM)-fabricated restorations have been described. In this study, digital models were recorded with an intraoral scanning device and were measured using a new three-dimensional (3D) computer technique to evaluate restoration internal fit. The aim of the study was to evaluate the internal adaptation and fit of chairside CAD/CAM-fabricated zirconia-reinforced lithium silicate ceramic crowns fabricated with different post-milling protocols. The null hypothesis was that different post-milling protocols did not influence the fitting accuracy of zirconia-reinforced lithium silicate restorations. A master all-ceramic crown preparation was completed on a maxillary right first molar on a typodont. Twenty zirconia-reinforced lithium silicate ceramic crowns (Celtra Duo, Dentsply Sirona) were designed and milled using a chairside CAD/CAM system (CEREC Omnicam, Dentsply Sirona). The 20 crowns were randomly divided into two groups based on post-milling protocols: no manipulation after milling (Group MI) and oven firing and glazing after milling (Group FG). A 3D computer method was used to evaluate the internal adaptation of the crowns. This was based on a subtractive analysis of a digital scan of the crown preparation and a digital scan of the thickness of the cement space over the crown preparation as recorded by a polyvinylsiloxane (PVS) impression material. The preparation scan and PVS scan were matched in 3D and a 3D difference analysis was performed with a software program (OraCheck, Cyfex). Three areas of internal adaptation and fit were selected for analysis: margin (MA), axial wall (AX), and occlusal surface (OC). Statistical analysis was performed using the 80th percentile and one-way ANOVA with post-hoc Scheffé test (P = .05). The closest internal adaptation of the crowns was measured at the axial wall with 102.0 ± 11.7 µm for group MI-AX and 106.3 ± 29.3 µm for group FG-AX. The largest internal adaptation of the crowns was measured for the occlusal surface with 258.9 ± 39.2 µm for group MI-OC and 260.6 ± 55.0 µm for group FG-OC. No statistically significant differences were found for the post-milling protocols (P > .05). The 3D difference pattern was visually analyzed for each area with a color-coded scheme. Post-milling processing did not affect the internal adaptation of zirconia-reinforced lithium silicate crowns fabricated with a chairside CAD/CAM technique. The new 3D computer technique for the evaluation of restoration fit appears highly promising and could be applied in clinical studies.
Effect of various putty-wash impression techniques on marginal fit of cast crowns.
Nissan, Joseph; Rosner, Ofir; Bukhari, Mohammed Amin; Ghelfan, Oded; Pilo, Raphael
2013-01-01
Marginal fit is an important clinical factor that affects restoration longevity. The accuracy of three polyvinyl siloxane putty-wash impression techniques was compared by marginal fit assessment using the nondestructive method. A stainless steel master cast containing three abutments with three metal crowns matching the three preparations was used to make 45 impressions: group A = single-step technique (putty and wash impression materials used simultaneously), group B = two-step technique with a 2-mm relief (putty as a preliminary impression to create a 2-mm wash space followed by the wash stage), and group C = two-step technique with a polyethylene spacer (plastic spacer used with the putty impression followed by the wash stage). Accuracy was assessed using a toolmaker microscope to measure and compare the marginal gaps between each crown and finish line on the duplicated stone casts. Each abutment was further measured at the mesial, buccal, and distal aspects. One-way analysis of variance was used for statistical analysis. P values and Scheffe post hoc contrasts were calculated. Significance was determined at .05. One-way analysis of variance showed significant differences among the three impression techniques in all three abutments and at all three locations (P < .001). Group B yielded dies with minimal gaps compared to groups A and C. The two-step impression technique with 2-mm relief was the most accurate regarding the crucial clinical factor of marginal fit.
Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba
2014-11-01
Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity. However, the same techniques discussed could also have application to catastrophes resulting from other incidents, such as natural disasters or industrial accidents. Further, the high sample throughput enabled by the techniques discussed could be employed for conventional environmental studies and compliance monitoring, potentially decreasing costs and/or increasing the quantity of data available to decision-makers. Published by Elsevier Ltd.
Comments on "Different techniques for finding best-fit parameters"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.; Triplett, Laurie A.
2014-07-01
A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often-used method that depends on gradients and converges when successive iterations do not change chi-square by more than a specified amount. We point out that in cases where the sought-after parameter weakly affects the fit, and in cases where the overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is bracketed within a specified range, and that range can be made arbitrarily small. It does not depend on the value of chi-square.
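A minimal sketch of the Golden Search (golden-section) idea applied to a one-parameter chi-square minimization; the model, data, and tolerance below are illustrative, not taken from the report.

```python
import numpy as np

def golden_search(f, a, b, tol=1e-6):
    """Golden-section minimization of f on [a, b].

    Converges when the bracketing interval is smaller than tol,
    independent of the value of chi-square itself.
    """
    invphi = (np.sqrt(5.0) - 1.0) / 2.0          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Toy example: chi-square of an overall scale factor applied to a fixed model.
x = np.linspace(0.0, 10.0, 50)
model = np.exp(-x / 3.0)
rng = np.random.default_rng(0)
data = 2.7 * model + rng.normal(0.0, 0.05, x.size)
sigma = 0.05

def chi2(scale):
    return np.sum(((data - scale * model) / sigma) ** 2)

print(golden_search(chi2, 0.1, 10.0))   # should land near 2.7
```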
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
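The knot-elimination idea can be sketched as follows, with scipy's make_lsq_spline standing in for the FORTRAN backward-elimination program and a simple relative-increase stopping rule in place of the statistical tests used by the author; the data and threshold are illustrative.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def fit_sse(x, y, interior, k=3):
    """Least-squares B-spline fit for a given set of interior knots."""
    t = np.r_[[x[0]] * (k + 1), np.sort(interior), [x[-1]] * (k + 1)]
    spl = make_lsq_spline(x, y, t, k=k)
    return spl, np.sum((y - spl(x)) ** 2)

def backward_knot_elimination(x, y, interior, k=3, rel_tol=0.05):
    """Drop interior knots one at a time while the fit barely degrades."""
    interior = list(interior)
    spl, sse = fit_sse(x, y, interior, k)
    while interior:
        trials = []
        for i in range(len(interior)):
            cand = interior[:i] + interior[i + 1:]
            _, s = fit_sse(x, y, cand, k)
            trials.append((s, i))
        s_best, i_best = min(trials)
        if s_best > (1.0 + rel_tol) * sse:   # removal hurts too much: stop
            break
        del interior[i_best]
        spl, sse = fit_sse(x, y, interior, k)
    return spl, interior

# Toy data: a smooth curve sampled with noise, deliberately over-supplied with knots.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)
start_knots = list(np.linspace(0.1, 0.9, 17))
spl, kept = backward_knot_elimination(x, y, start_knots)
print("knots kept:", np.round(kept, 2))
```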
Fitting multidimensional splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
NASA Astrophysics Data System (ADS)
Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.
2016-04-01
Application of optimization techniques to the identification of inelastic material parameters has substantially increased in recent years. The complex stress-strain paths and high nonlinearity, typical of this class of problems, require the development of robust and efficient techniques for inverse problems able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, of the Nelder-Mead downhill simplex algorithm, of Particle Swarm Optimization (PSO), and of a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique proved to be the best strategy, combining the good performance of PSO in approaching the basin of attraction of the global minimum with the efficiency demonstrated by the Nelder-Mead algorithm in obtaining the minimum itself.
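A minimal global-local sketch of the PSO-Nelder-Mead hybrid, with a toy multi-basin fitness surface standing in for the deep-drawing misfit function; the swarm parameters and hand-off rule are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def pso(f, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm: returns the best position found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

# Irregular ("multi-basin") fitness surface standing in for the misfit between
# simulated and measured deep-drawing responses.
def fitness(p):
    a, b = p
    return (a - 1.2) ** 2 + (b - 0.8) ** 2 + 0.3 * np.sin(5 * a) * np.cos(5 * b)

x0 = pso(fitness, bounds=[(-2, 3), (-2, 3)])          # global exploration
res = minimize(fitness, x0, method="Nelder-Mead")     # local refinement
print(x0, res.x, res.fun)
```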
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan
2008-02-01
One of the most challenging problems in medical imaging is to "see" a tumour embedded into tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. Despite the efforts made in recent years, this problem has not yet been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by application of the forward model on virtual tumours with known geometry, and thus known fluorophore distribution, embedded into simulated tissues. The fitting procedure produces the best matching between the real and virtual data, and thus provides the initial estimation of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning application.
Kinematic modelling of disc galaxies using graphics processing units
NASA Astrophysics Data System (ADS)
Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.
2016-01-01
With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks, and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO4·5H2O solution were decomposed and corrected using curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
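One plausible reading of the residual-feedback idea is sketched below with two Lorentzian peaks and scipy's curve_fit: each peak is repeatedly refit against the data minus the current estimate of the other peak. The peak shape, the alternating refit loop, and all numbers are assumptions, not the published algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentz(x, amp, cen, wid):
    """Single Lorentzian peak."""
    return amp * wid**2 / ((x - cen) ** 2 + wid**2)

def fit_overlapping(x, y, p0_list, n_rounds=5):
    """Decompose overlapping peaks by repeated single-peak fits.

    Each round refits every peak against the data minus the current estimate
    of all other peaks (the residual "fed back" to that peak). This is one
    plausible reading of the error-compensation idea, not the authors' exact
    algorithm.
    """
    params = [list(p0) for p0 in p0_list]
    for _ in range(n_rounds):
        for i in range(len(params)):
            others = sum(lorentz(x, *params[j]) for j in range(len(params)) if j != i)
            popt, _ = curve_fit(lorentz, x, y - others, p0=params[i])
            params[i] = list(popt)
    total = sum(lorentz(x, *p) for p in params)
    return params, y - total

# Synthetic Cu/Fe-like overlap near 324 nm (illustrative numbers only).
x = np.linspace(321.0, 327.0, 600)
y = lorentz(x, 1.0, 324.7, 0.15) + lorentz(x, 0.6, 324.3, 0.20)
y += np.random.default_rng(2).normal(0.0, 0.01, x.size)
params, resid = fit_overlapping(x, y, [[0.8, 324.6, 0.2], [0.5, 324.2, 0.2]])
print(params, np.sum(resid**2))
```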
Kulkarni, Nagraj S.; Bruce Warmack, Robert J.; Radhakrishnan, Bala; ...
2014-09-23
Tracer diffusivities provide the most fundamental information on diffusion in materials and are the foundation of robust diffusion databases. Compared to traditional radiotracer techniques that utilize radioactive isotopes, the secondary ion mass spectrometry (SIMS) based thin-film technique for tracer diffusion is based on the use of enriched stable isotopes that can be accurately profiled using SIMS. Experimental procedures and techniques that are utilized for the measurement of tracer diffusion coefficients are presented for pure magnesium, which presents some unique challenges due to the ease of oxidation. The development of a modified Shewmon-Rhines diffusion capsule for annealing Mg and an ultra-high vacuum (UHV) system for sputter deposition of Mg isotopes are discussed. Optimized conditions for accurate SIMS depth profiling in polycrystalline Mg are provided. An automated procedure for the correction of heat-up and cool-down times during tracer diffusion annealing is discussed. The non-linear fitting of SIMS depth profile data using the thin film Gaussian solution to obtain the tracer diffusivity along with the background tracer concentration and tracer film thickness is discussed. An Arrhenius fit of the Mg self-diffusion data obtained using the low-temperature SIMS measurements from this study and the high-temperature radiotracer measurements of Shewmon and Rhines (1954) was found to be a good representation of both types of diffusion data that cover a broad range of temperatures between 250-627 °C (523-900 K).
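The thin-film Gaussian fit mentioned above can be sketched as follows, assuming the standard thin-film solution C(x) = Q/sqrt(pi D t) · exp(-x^2/4Dt) plus a constant background; the anneal time, units, and synthetic profile are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

T_ANNEAL = 24 * 3600.0          # effective (corrected) anneal time, s (illustrative)

def thin_film(x, q, dcoef, bg):
    """Thin-film Gaussian solution plus constant background tracer level.

    x     : depth (cm), q : deposited tracer amount per area (arb. units),
    dcoef : tracer diffusivity (cm^2/s), bg : natural-abundance background.
    """
    return q / np.sqrt(np.pi * dcoef * T_ANNEAL) * np.exp(
        -x**2 / (4.0 * dcoef * T_ANNEAL)) + bg

# Synthetic SIMS depth profile (isotope fraction vs depth), illustrative only.
rng = np.random.default_rng(3)
depth = np.linspace(0.0, 4e-4, 120)                   # 0-4 um expressed in cm
true = thin_film(depth, q=3e-5, dcoef=1e-13, bg=0.10)
profile = true * (1.0 + 0.03 * rng.normal(size=depth.size))

popt, pcov = curve_fit(thin_film, depth, profile,
                       p0=[1e-5, 1e-13, 0.1],
                       bounds=([0.0, 1e-16, 0.0], [np.inf, 1e-9, 1.0]))
q_fit, d_fit, bg_fit = popt
print(f"D = {d_fit:.2e} cm^2/s, background = {bg_fit:.3f}")
```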
Anatomic Double Bundle single tunnel Foreign Material Free ACL-Reconstruction – a technical note
Felmet, Gernot
2011-01-01
Summary The anterior cruciate ligament (ACL) consists of two bundles, the anteromedial (AM) and posterolateral bundle (PL). Double bundle reconstructions appear to give better rotational stability. The usual technique is to make two tunnels in the femur and two in the tibia. This is difficult and in small knees may not even be possible. We have developed a foreign material free press fit fixation for double bundle ACL reconstruction using a single femoral tunnel (R). This is based on the ALL PRESS FIT ACL reconstruction. It is suitable for the most common medium and, otherwise difficult, small sizes of knees. Method: Using diamond-edged wet-grinding hollow reamers, bone cylinders in different diameters are harvested from the implantation tunnels of the tibia and femur and used for the press fit fixation. Using the press fit technique, the graft is first fixed in the tibia. It is then similarly fixed under tension on the femoral side with the knee in 120 degree flexion. This is called Bottom To Top Fixation (BTT). On extending the knee, the graft tension is self-adapting. Depending on the size of the individual knee, the diameter of the femoral bone plug is varied from 8 to 13 mm to achieve an anatomic spread with a double bundle-like insertion. The tibia tunnel can be applied with two 7 or 8 mm diameter tunnels overlapping to form a semi-oval tunnel of 10 to 13 mm. Results: Since May 2003 we have carried out ACL-reconstructions with hamstring grafts without foreign material using the ALL PRESS FIT technique. Initially, an 8 mm press fit fixation was used proximally with good results. Since April 2008, the range of diameters was increased up to 13 mm. The results of the Lachman tests have been good to excellent. Results of the Pivot shift test suggested more stability with broader femoral diameters of 9.5 to 13 mm. Conclusions: The foreign material free fixation of hamstring grafts in the ALL PRESS FIT Bottom To Top Fixation is a successful method for ACL Reconstruction. The Diamond Instruments and tubed guiding devices are precise, reliable and easy to manage. On this basis a double bundle reconstruction is achieved using a single tunnel. A broad anatomic femoral insertion with autogenous bone plugs inserted near the cortex seems to improve rotational stability. PMID:23738263
Application of nonlinear regression in the development of a wide range formulation for HCFC-22
NASA Astrophysics Data System (ADS)
Kamei, A.; Beyerlein, S. W.; Jacobsen, R. T.
1995-09-01
An equation of state has been developed for HCFC-22 for temperatures from the triple point (115.73 K) to 550 K, at pressures up to 60 MPa. Based on comparisons between experimental data and calculated properties, the accuracy of the wide-range equation of state is ±0.1% in density, ±0.3% in speed of sound, and ±1.0% in isobaric heat capacity, except in the critical region. Nonlinear fitting techniques were used to fit a liquid equation of state based on P-ρ-T, speed of sound, and isobaric heat capacity data. Properties calculated from the liquid equation of state were then used to expand the range of validity of the wide range equation of state for HCFC-22.
Oyagüe, Raquel Castillo; Sánchez-Turrión, Andrés; López-Lozano, José Francisco; Montero, Javier; Albaladejo, Alberto; Suárez-García, María Jesús
2012-07-01
This study evaluated the vertical discrepancy of implant-fixed 3-unit structures. Frameworks were constructed with laser-sintered Co-Cr, and vacuum-cast Co-Cr, Ni-Cr-Ti, and Pd-Au. Samples of each alloy group were randomly luted in standard fashion using resin-modified glass-ionomer, self-adhesive, and acrylic/urethane-based cements (n = 12 each). Discrepancies were SEM analyzed. Three-way ANOVA and Student-Newman-Keuls tests were run (P < 0.05). Laser-sintered structures achieved the best fit per cement tested. Within each alloy group, resin-modified glass-ionomer and acrylic/urethane-based cements produced comparably lower discrepancies than the self-adhesive agent. The abutment position did not yield significant differences. All misfit values could be considered clinically acceptable.
NASA Astrophysics Data System (ADS)
McCann, Cooper Patrick
Low-cost flight-based hyperspectral imaging systems have the potential to provide valuable information for ecosystem and environmental studies as well as aid in land management and land health monitoring. This thesis describes (1) a bootstrap method of producing mesoscale, radiometrically-referenced hyperspectral data using the Landsat surface reflectance (LaSRC) data product as a reference target, (2) biophysically relevant basis functions to model the reflectance spectra, (3) an unsupervised classification technique based on natural histogram splitting of these biophysically relevant parameters, and (4) local and multi-temporal anomaly detection. The bootstrap method extends standard processing techniques to remove uneven illumination conditions between flight passes, allowing the creation of radiometrically self-consistent data. Through selective spectral and spatial resampling, LaSRC data is used as a radiometric reference target. Advantages of the bootstrap method include the need for minimal site access, no ancillary instrumentation, and automated data processing. Data from a flight on 06/02/2016 are compared with concurrently collected ground-based reflectance spectra as a means of validation, achieving an average error of 2.74%. Fitting reflectance spectra using basis functions, based on biophysically relevant spectral features, allows both noise and data reductions while shifting information from spectral bands to biophysical features. Histogram splitting is used to determine a clustering based on natural splittings of these fit parameters. The Indian Pines reference data enabled comparisons of the efficacy of this technique to established techniques. The splitting technique is shown to be an improvement over the ISODATA clustering technique with an overall accuracy of 34.3/19.0% before merging and 40.9/39.2% after merging. This improvement is also seen as an improvement of kappa before/after merging of 24.8/30.5 for the histogram splitting technique compared to 15.8/28.5 for ISODATA. Three hyperspectral flights over the Kevin Dome area, covering 1843 ha, acquired 06/21/2014, 06/24/2015, and 06/26/2016, are examined with different methods of anomaly detection. Detection of anomalies within a single data set is examined to determine, on a local scale, areas that are significantly different from the surrounding area. Additionally, the detection and identification of persistent anomalies and non-persistent anomalies were investigated across multiple data sets.
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
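A minimal sketch of the trim-point update and residual-monitoring idea, with a quadratic curve fit of a single steady-state output versus power setting standing in for the piecewise linear engine model; the variable names, threshold, and seeded fault below are illustrative, not the NASA architecture.

```python
import numpy as np

# Nominal steady-state engine data: power setting vs sensed output (illustrative).
rng = np.random.default_rng(4)
power = np.linspace(20.0, 100.0, 40)                    # % power setting
egt_nominal = 400.0 + 3.5 * power + 0.01 * power**2     # "true" trim curve
egt_meas = egt_nominal + rng.normal(0.0, 2.0, power.size)

# Update the model trim-point information with a simple quadratic fit.
coeffs = np.polyfit(power, egt_meas, deg=2)
trim_model = np.poly1d(coeffs)

def anomaly_flags(power_stream, egt_stream, threshold=15.0):
    """Flag samples whose residual from the trim model exceeds a threshold."""
    residual = np.asarray(egt_stream) - trim_model(np.asarray(power_stream))
    return np.abs(residual) > threshold, residual

# Streaming data containing a seeded fault (offset) in the last few samples.
p_stream = np.array([30.0, 55.0, 80.0, 80.0, 80.0])
egt_stream = trim_model(p_stream) + np.array([1.0, -2.0, 3.0, 25.0, 28.0])
flags, resid = anomaly_flags(p_stream, egt_stream)
print(flags, np.round(resid, 1))
```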
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
Kattner, Florian; Cochrane, Aaron; Green, C Shawn
2017-09-01
The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
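A minimal sketch of a trial-continuous psychometric fit, assuming a logistic psychometric function whose threshold decays exponentially with trial number and a single maximum-likelihood fit over every trial's binary outcome; this particular parameterization is an assumption, not necessarily the authors'.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def p_correct(stim, trial, theta0, theta_inf, tau, slope, guess=0.5):
    """Psychometric function whose threshold drifts over trials."""
    theta_t = theta_inf + (theta0 - theta_inf) * np.exp(-trial / tau)
    return guess + (1.0 - guess) * expit(slope * (stim - theta_t))

def neg_log_lik(params, stim, trial, correct):
    theta0, theta_inf, tau, slope = params
    p = np.clip(p_correct(stim, trial, theta0, theta_inf, tau, slope), 1e-6, 1 - 1e-6)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

# Simulated perceptual-learning session: threshold improves from 8 to 3 units.
rng = np.random.default_rng(5)
n_trials = 800
trial = np.arange(n_trials)
stim = rng.uniform(0.0, 12.0, n_trials)
p_true = p_correct(stim, trial, theta0=8.0, theta_inf=3.0, tau=200.0, slope=1.0)
correct = (rng.random(n_trials) < p_true).astype(float)

fit = minimize(neg_log_lik, x0=[6.0, 4.0, 150.0, 0.8],
               args=(stim, trial, correct), method="Nelder-Mead")
print(fit.x)   # estimates of theta0, theta_inf, tau, slope
```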
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
ERIC Educational Resources Information Center
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Reducing Router Forwarding Table Size Using Aggregation and Caching
ERIC Educational Resources Information Center
Liu, Yaoqing
2013-01-01
The fast growth of global routing table size has been causing concerns that the Forwarding Information Base (FIB) will not be able to fit in existing routers' expensive line-card memory, and upgrades will lead to a higher cost for network operators and customers. FIB Aggregation, a technique that merges multiple FIB entries into one, is probably…
Draenert, Florian Guy; Huetzen, Dominic; Kämmerer, Peer; Wagner, Wilfried
2011-09-01
Bone transplants are mostly prepared with cutting drills, chisels, and rasps. These techniques are difficult for unexperienced surgeons, and the implant interface is less precise due to unstandardized preparation. Cylindrical bone transplants are a known alternative. Current techniques include fixation methods with osteosynthesis screws or the dental implant. A new bone cylinder transplant technique is presented using a twin-drill principle resulting in a customized pressfit of the transplant without fixation devices and combining this with the superior grinding properties of a diamond coating. New cylindrical diamond hollow drills are used for customized press fit bone transplants in a case series of five patients for socket reconstruction in the front and molar region of maxilla and mandibula with and without simultaneous implant placement. The technical approach was successful without intra or postoperative complications during the acute healing phase. The customized press fit completes a technological trias of bone cylinder transplant techniques adding to the assisted press fit with either osteosynthesis screws or the dental implant itself. © 2009 Wiley Periodicals, Inc.
Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP
2016-01-01
Purpose To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results The PDFFs determined with use of both reconstructions correlated very strongly (r=0.91). However, small mean bias between reconstructions demonstrated divergent results (3.9%; CI 2.7%-5.1%). For both reconstructions, there was linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion Liver fat quantification using the PDFF derived from MRI performed at 7.1 Tesla is feasible. PDFF has strong correlations with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806
Papaspyridakos, Panos; Hirayama, Hiroshi; Chen, Chun-Jung; Ho, Chung-Han; Chronopoulos, Vasilios; Weber, Hans-Peter
2016-09-01
The aim of this study was to assess the effect of connection type and impression technique on the accuracy of fit of implant-supported fixed complete-arch dental prostheses (IFCDPs). An edentulous mandibular cast with five implants was fabricated to serve as master cast (control) for both implant- and abutment-level baselines. A titanium one-piece framework for an IFCDP was milled at abutment level and used for accuracy of fit measurements. Polyether impressions were made using a splinted and non-splinted technique at the implant and abutment level leading to four test groups, n = 10 each. Hence, four groups of test casts were generated. The impression accuracy was evaluated indirectly by assessing the fit of the IFCDP framework on the generated casts of the test groups, clinically and radiographically. Additionally, the control and all test casts were digitized with a high-resolution reference scanner (IScan D103i, Imetric, Courgenay, Switzerland) and standard tessellation language datasets were generated and superimposed. Potential correlations between the clinical accuracy of fit data and the data from the digital scanning were investigated. To compare the accuracy of casts of the test groups versus the control at the implant and abutment level, Fisher's exact test was used. Of the 10 casts of test group I (implant-level splint), all 10 presented with accurate clinical fit when the framework was seated on its respective cast, while only five of 10 casts of test group II (implant-level non-splint) showed adequate fit. All casts of group III (abutment-level splint) presented with accurate fit, whereas nine of 10 of the casts of test group IV (abutment-level non-splint) were accurate. Significant 3D deviations (P < 0.05) were found between group II and the control. No statistically significant differences were found between groups I, III, and IV compared with the control. Implant connection type (implant level vs. abutment level) and impression technique did affect the 3D accuracy of implant impressions only with the non-splint technique (P < 0.05). For one-piece IFCDPs, the implant-level splinted impression technique showed to be more accurate than the non-splinted approach, whereas at the abutment-level, no difference in the accuracy was found. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Uncovering the Nutritional Landscape of Food
Kim, Seunghyeon; Sung, Jaeyun; Foo, Mathias; Jin, Yong-Su; Kim, Pan-Jun
2015-01-01
Recent progresses in data-driven analysis methods, including network-based approaches, are revolutionizing many classical disciplines. These techniques can also be applied to food and nutrition, which must be studied to design healthy diets. Using nutritional information from over 1,000 raw foods, we systematically evaluated the nutrient composition of each food in regards to satisfying daily nutritional requirements. The nutrient balance of a food was quantified and termed nutritional fitness; this measure was based on the food’s frequency of occurrence in nutritionally adequate food combinations. Nutritional fitness offers a way to prioritize recommendable foods within a global network of foods, in which foods are connected based on the similarities of their nutrient compositions. We identified a number of key nutrients, such as choline and α-linolenic acid, whose levels in foods can critically affect the nutritional fitness of the foods. Analogously, pairs of nutrients can have the same effect. In fact, two nutrients can synergistically affect the nutritional fitness, although the individual nutrients alone may not have an impact. This result, involving the tendency among nutrients to exhibit correlations in their abundances across foods, implies a hidden layer of complexity when exploring for foods whose balance of nutrients within pairs holistically helps meet nutritional requirements. Interestingly, foods with high nutritional fitness successfully maintain this nutrient balance. This effect expands our scope to a diverse repertoire of nutrient-nutrient correlations, which are integrated under a common network framework that yields unexpected yet coherent associations between nutrients. Our nutrient-profiling approach combined with a network-based analysis provides a more unbiased, global view of the relationships between foods and nutrients, and can be extended towards nutritional policies, food marketing, and personalized nutrition. PMID:25768022
Held, Jürgen; Manser, Tanja
2005-02-01
This article outlines how a Palm- or Newton-based PDA (personal digital assistant) system for online event recording was used to record and analyze concurrent events. We describe the features of this PDA-based system, called the FIT-System (flexible interface technique), and its application to the analysis of concurrent events in complex behavioral processes--in this case, anesthesia work processes. The patented FIT-System has a unique user interface design allowing the user to design an interface template with a pencil and paper or using a transparency film. The template usually consists of a drawing or sketch that includes icons or symbols that depict the observer's representation of the situation to be observed. In this study, the FIT-System allowed us to create a design for fast, intuitive online recording of concurrent events using a set of 41 observation codes. An analysis of concurrent events leads to a description of action density, and our results revealed a characteristic distribution of action density during the administration of anesthesia in the operating room. This distribution indicated the central role of the overlapping operations in the action sequences of medical professionals as they deal with the varying requirements of this complex task. We believe that the FIT-System for online recording of concurrent events in complex behavioral processes has the potential to be useful across a broad spectrum of research areas.
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
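The augmentation idea can be sketched as follows, with a Nadaraya-Watson kernel smoother standing in for the locally parametric residual fit and a fixed mixing parameter in place of the data-driven choice described by Mays, Birch, and Starnes; the calibration data and bandwidth are illustrative.

```python
import numpy as np

def kernel_smooth(x_train, r_train, x_eval, bandwidth):
    """Nadaraya-Watson smoother of residuals (stand-in for local regression)."""
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2)
    return (w @ r_train) / w.sum(axis=1)

def model_robust_fit(x, y, x_eval, degree=2, lam=0.5, bandwidth=0.1):
    """MRR-style prediction: parametric fit + lam * smoothed residual fit."""
    coeffs = np.polyfit(x, y, degree)            # predetermined parametric model
    para = np.polyval(coeffs, x_eval)
    resid = y - np.polyval(coeffs, x)            # what the parametric fit misses
    nonpara = kernel_smooth(x, resid, x_eval, bandwidth)
    return para + lam * nonpara

# Toy pressure-transducer style calibration data with a mild unmodeled feature.
rng = np.random.default_rng(6)
volts = np.linspace(0.0, 1.0, 60)
pressure = 2.0 + 30.0 * volts + 4.0 * volts**2 + 1.5 * np.sin(6 * volts)
pressure += rng.normal(0.0, 0.2, volts.size)

grid = np.linspace(0.0, 1.0, 200)
pred = model_robust_fit(volts, pressure, grid, degree=2, lam=0.7, bandwidth=0.05)
print(pred[:5])
```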
Lifting degeneracy in holographic characterization of colloidal particles using multi-color imaging.
Ruffner, David B; Cheong, Fook Chiong; Blusewicz, Jaroslaw M; Philips, Laura A
2018-05-14
Micrometer sized particles can be accurately characterized using holographic video microscopy and Lorenz-Mie fitting. In this work, we explore some of the limitations in holographic microscopy and introduce methods for increasing the accuracy of this technique with the use of multiple wavelengths of laser illumination. Large high index particle holograms have near degenerate solutions that can confuse standard fitting algorithms. Using a model based on diffraction from a phase disk, we explain the source of these degeneracies. We introduce multiple color holography as an effective approach to distinguish between degenerate solutions and provide improved accuracy for the holographic analysis of sub-visible colloidal particles.
Sakr, Sherif; Elshawi, Radwa; Ahmed, Amjad M; Qureshi, Waqas T; Brawner, Clinton A; Keteyian, Steven J; Blaha, Michael J; Al-Mallah, Mouaz H
2017-12-19
Prior studies have demonstrated that cardiorespiratory fitness (CRF) is a strong marker of cardiovascular health. Machine learning (ML) can enhance the prediction of outcomes through classification techniques that classify the data into predetermined categories. The aim of this study is to present an evaluation and comparison of how machine learning techniques can be applied to medical records of cardiorespiratory fitness and how the various techniques differ in terms of capabilities of predicting medical outcomes (e.g. mortality). We used data from 34,212 patients free of known coronary artery disease or heart failure who underwent clinician-referred exercise treadmill stress testing at Henry Ford Health Systems between 1991 and 2009 and had a complete 10-year follow-up. Seven machine learning classification techniques were evaluated: Decision Tree (DT), Support Vector Machine (SVM), Artificial Neural Networks (ANN), Naïve Bayesian Classifier (BC), Bayesian Network (BN), K-Nearest Neighbor (KNN) and Random Forest (RF). To handle the imbalanced dataset, the Synthetic Minority Over-Sampling Technique (SMOTE) was used. Two sets of experiments were conducted, with and without the SMOTE sampling technique. On average over different evaluation metrics, the SVM classifier showed the lowest performance, while other models such as BN, BC, and DT performed better. The RF classifier showed the best performance (AUC = 0.97) among all models trained using the SMOTE sampling. The results show that various ML techniques can significantly vary in terms of their performance for the different evaluation metrics. It is also not necessarily the case that the more complex the ML model, the higher the prediction accuracy that can be achieved. The prediction performance of all models trained with SMOTE is much better than the performance of models trained without SMOTE. The study shows the potential of machine learning methods for predicting all-cause mortality using cardiorespiratory fitness data.
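A minimal sketch of the SMOTE-plus-classifier workflow using scikit-learn and imbalanced-learn on synthetic, imbalanced data; the features and sample sizes stand in for the fitness records and are not the study's data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE

# Synthetic, imbalanced stand-in for the fitness/mortality records
# (features could be METs, age, resting heart rate, ...; label = 10-yr death).
X, y = make_classification(n_samples=20000, n_features=12,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training folds, never the held-out test data.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_bal, y_bal)
proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, proba), 3))
```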
NASA Astrophysics Data System (ADS)
Levay, Z. G.
2004-12-01
A new, freely-available accessory for Adobe's widely-used Photoshop image editing software makes it much more convenient to produce presentable images directly from FITS data. It merges a fully-functional FITS reader with an intuitive user interface and includes fully interactive flexibility in scaling data. Techniques for producing attractive images from astronomy data using the FITS plugin will be presented, including the assembly of full-color images. These techniques have been successfully applied to producing colorful images for public outreach with data from the Hubble Space Telescope and other major observatories. Now it is much less cumbersome for students or anyone not experienced with specialized astronomical analysis software, but reasonably familiar with digital photography, to produce useful and attractive images.
NASA Astrophysics Data System (ADS)
Pahlavani, P.; Gholami, A.; Azimi, S.
2017-09-01
This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected for all reference points in four directions and two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. These RSS readings at all reference points, together with the known positions of the reference points, were prepared for the training phase of the proposed MLFF neural network. Finally, the average positioning error for this network using 30% check and validation data was computed to be approximately 2.20 meters.
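A minimal fingerprinting sketch with scikit-learn's MLPRegressor; scikit-learn does not offer Levenberg-Marquardt training, so the 'lbfgs' solver is used here as a stand-in, and the path-loss data are synthetic rather than the paper's survey.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic offline (calibration) phase: RSS from 4 access points at reference
# points on a floor plan; a simple log-distance path-loss model plus noise.
rng = np.random.default_rng(7)
aps = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0], [20.0, 15.0]])
ref_xy = np.column_stack([rng.uniform(0, 20, 400), rng.uniform(0, 15, 400)])
dist = np.linalg.norm(ref_xy[:, None, :] - aps[None, :, :], axis=2)
rss = -40.0 - 25.0 * np.log10(dist + 1.0) + rng.normal(0.0, 2.0, dist.shape)

# Online phase: the MLFF network maps an RSS vector to a position estimate.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                                 max_iter=2000, random_state=0))
net.fit(rss, ref_xy)

test_rss = rss[:5] + rng.normal(0.0, 2.0, (5, 4))
err = np.linalg.norm(net.predict(test_rss) - ref_xy[:5], axis=1)
print("mean positioning error [m]:", round(err.mean(), 2))
```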
NASA Technical Reports Server (NTRS)
Brooks, D. R.
1980-01-01
Orbit dynamics of the solar occultation technique for satellite measurements of the Earth's atmosphere are described. A one-year mission is simulated and the orbit and mission design implications are discussed in detail. Geographical coverage capabilities are examined parametrically for a range of orbit conditions. The hypothetical mission is used to produce a simulated one-year data base of solar occultation measurements; each occultation event is assumed to produce a single number, or 'measurement', and some statistical properties of the data set are examined. A simple model is fitted to the data to demonstrate a procedure for examining global distributions of atmospheric constituents with the solar occultation technique.
Burnette, Dylan T; Sengupta, Prabuddha; Dai, Yuhai; Lippincott-Schwartz, Jennifer; Kachar, Bechara
2011-12-27
Superresolution imaging techniques based on the precise localization of single molecules, such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), achieve high resolution by fitting images of single fluorescent molecules with a theoretical Gaussian to localize them with a precision on the order of tens of nanometers. PALM/STORM rely on photoactivated proteins or photoswitching dyes, respectively, which makes them technically challenging. We present a simple and practical way of producing point localization-based superresolution images that does not require photoactivatable or photoswitching probes. Called bleaching/blinking assisted localization microscopy (BaLM), the technique relies on the intrinsic bleaching and blinking behaviors characteristic of all commonly used fluorescent probes. To detect single fluorophores, we simply acquire a stream of fluorescence images. Fluorophore bleach or blink-off events are detected by subtracting from each image of the series the subsequent image. Similarly, blink-on events are detected by subtracting from each frame the previous one. After image subtractions, fluorescence emission signals from single fluorophores are identified and the localizations are determined by fitting the fluorescence intensity distribution with a theoretical Gaussian. We also show that BaLM works with a spectrum of fluorescent molecules in the same sample. Thus, BaLM extends single molecule-based superresolution localization to samples labeled with multiple conventional fluorescent probes.
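The frame-subtraction and Gaussian-fitting steps can be sketched as follows on synthetic frames; the window size, noise levels, and single-event assumption are illustrative simplifications of the BaLM pipeline, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma**2))
            + offset).ravel()

def localize_event(diff_image, window=7):
    """Fit a 2D Gaussian around the brightest pixel of a difference image."""
    j, i = np.unravel_index(np.argmax(diff_image), diff_image.shape)
    half = window // 2
    sub = diff_image[j - half:j + half + 1, i - half:i + half + 1]
    yy, xx = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
    p0 = [sub.max(), half, half, 1.5, np.median(sub)]
    popt, _ = curve_fit(gauss2d, (xx, yy), sub.ravel(), p0=p0)
    return i - half + popt[1], j - half + popt[2]        # sub-pixel (x, y)

# Two synthetic frames: a fluorophore bleaches between frame 1 and frame 2.
rng = np.random.default_rng(8)
yy, xx = np.mgrid[0:64, 0:64]
spot = 200.0 * np.exp(-((xx - 30.4) ** 2 + (yy - 21.7) ** 2) / (2 * 1.6**2))
frame1 = 100.0 + spot + rng.normal(0.0, 3.0, (64, 64))
frame2 = 100.0 + rng.normal(0.0, 3.0, (64, 64))

bleach_off = frame1 - frame2          # bleach/blink-off events light up here
print(localize_event(bleach_off))     # should be close to (30.4, 21.7)
```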
Multi-filter spectrophotometry of quasar environments
NASA Technical Reports Server (NTRS)
Craven, Sally E.; Hickson, Paul; Yee, Howard K. C.
1993-01-01
A many-filter photometric technique for determining redshifts and morphological types, by fitting spectral templates to spectral energy distributions, has good potential for application in surveys. Despite success in studies performed on simulated data, the results have not been fully reliable when applied to real, low signal-to-noise data. We are investigating techniques to improve the fitting process.
Barker, Fiona; Mackenzie, Emma; de Lusignan, Simon
2016-11-01
To observe and analyse the range and nature of behaviour change techniques (BCTs) employed by audiologists during hearing-aid fitting consultations to encourage and enable hearing-aid use. Non-participant observation and qualitative thematic analysis using the behaviour change technique taxonomy (version 1) (BCTTv1). Ten consultations across five English NHS audiology departments. Audiologists engage in behaviours to ensure the hearing-aid is fitted to prescription and is comfortable to wear. They provide information, equipment, and training in how to use a hearing-aid including changing batteries, cleaning, and maintenance. There is scope for audiologists to use additional BCTs: collaborating with patients to develop a behavioural plan for hearing-aid use that includes goal-setting, action-planning and problem-solving; involving significant others; providing information on the benefits of hearing-aid use or the consequences of non-use and giving advice about using prompts/cues for hearing-aid use. This observational study of audiologist behaviour in hearing-aid fitting consultations has identified opportunities to use additional behaviour change techniques that might encourage hearing-aid use. This information defines potential intervention targets for further research with the aim of improving hearing-aid use amongst adults with acquired hearing loss.
Jig-Shape Optimization of a Low-Boom Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2018-01-01
A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using the two-step approach. First, starting design variables are computed using the least squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on in-house object-oriented optimization tool.
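The first step, least-squares surface fitting to obtain starting design variables, might be sketched as follows; the quadratic basis, grid-point data, and variable names are assumptions for illustration, not the in-house object-oriented tool.

```python
import numpy as np

def fit_surface(x, y, z, degree=2):
    """Least-squares polynomial surface fit z ~ f(x, y) of total degree <= degree."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    def surf(xq, yq):
        B = np.column_stack([xq**i * yq**j for i, j in terms])
        return B @ coeffs
    return coeffs, surf

# Target shape deflections at structural grid points (illustrative data).
rng = np.random.default_rng(9)
x = rng.uniform(0.0, 10.0, 300)          # chordwise station
y = rng.uniform(0.0, 4.0, 300)           # spanwise station
z = 0.02 * x + 0.15 * y + 0.01 * x * y + 0.02 * y**2 + rng.normal(0, 1e-3, 300)

coeffs, surf = fit_surface(x, y, z, degree=2)
print(coeffs)                             # starting design variables for step two
print(surf(np.array([5.0]), np.array([2.0])))
```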
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint-time frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.
NASA Astrophysics Data System (ADS)
Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.
1996-01-01
A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is natural separation of relaxation and source terms and allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.
Jeon, Young-Chan; Jeong, Chang-Mo
2017-01-01
PURPOSE The purpose of this study was to compare the fit of cast gold crowns fabricated from the conventional and the digital impression technique. MATERIALS AND METHODS An artificial tooth in a master model and abutment teeth in ten patients were restored with cast gold crowns fabricated from the digital and the conventional impression technique. The forty silicone replicas were cut in three sections; each section was evaluated in nine points. The measurements were carried out using a measuring microscope and I-Solution software. Data from the silicone replicas were analyzed and all tests were performed with an α-level of 0.05. RESULTS 1. The average gaps of cast gold crowns fabricated from the digital impression technique were significantly larger than those of the conventional impression technique. 2. In the marginal and internal axial gaps of the cast gold crowns, no statistically significant differences were found between the two impression techniques. 3. The internal occlusal gaps of cast gold crowns fabricated from the digital impression technique were significantly larger than those of the conventional impression technique. CONCLUSION Both types of prostheses presented clinically acceptable fit. The prostheses fabricated from the digital impression technique showed larger gaps with respect to the occlusal surface. PMID:28243386
Three-dimensional simulation of human teeth and its application in dental education and research
Koopaie, Maryam; Kolahdouz, Sajad
2016-01-01
Background: A comprehensive database, comprising geometry and properties of human teeth, is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, a cross-section picture of the three-dimensional model of the teeth was used. CT-Scan images were used in the first method. The space between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image using Matlab (R2009b) as image processing software. The images were transferred to Solidworks 2015 software. The tooth border curve was fitted with B-spline curves, using a least-squares curve-fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and the optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and the finite element and training software through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitated the learning of the dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing implants and dental prostheses are possible. PMID:28491836
NASA Astrophysics Data System (ADS)
Maglevanny, I. I.; Smolar, V. A.
2016-01-01
We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so so-called "data gaps" can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality of different interpolation schemes, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on preliminary log-log scaling data transforms, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piecewise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and also that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
NASA Astrophysics Data System (ADS)
Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.
2017-04-01
Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
NASA Astrophysics Data System (ADS)
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them have applied these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. A background correction simulation experiment indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method after background correction. All of these background correction methods acquire larger SBR values than that acquired before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273 respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, and the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940 respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
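As a minimal sketch of the general idea, the snippet below estimates a smooth continuous background by cubic-spline interpolation through anchor points chosen as windowed minima, then subtracts it from the spectrum. The anchor-point selection and the synthetic spectrum are assumptions for illustration and do not reproduce the paper's exact procedure.

    # Sketch of spline-interpolation background correction for a LIBS spectrum,
    # assuming the continuous background is smooth relative to the emission lines.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def correct_background(wavelength, intensity, window=50):
        # One anchor point (the local minimum) per window of channels; these
        # points are assumed to sample the continuous background only.
        idx = [i + np.argmin(intensity[i:i + window])
               for i in range(0, len(intensity) - window, window)]
        spline = CubicSpline(wavelength[idx], intensity[idx])
        background = spline(wavelength)
        return intensity - background, background

    # Example with synthetic data: a broad background plus two emission lines.
    wl = np.linspace(200.0, 800.0, 4000)
    spec = 50 * np.exp(-(wl - 400) ** 2 / 2e5) \
         + 200 * np.exp(-(wl - 324.7) ** 2 / 0.01) \
         + 150 * np.exp(-(wl - 327.4) ** 2 / 0.01) \
         + np.random.normal(0, 1, wl.size)
    corrected, bg = correct_background(wl, spec)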
Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J
2016-06-01
The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
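The sketch below illustrates one plausible way to structure a "two plus number of groupings" fit: a shared equilibrium signal and apparent relaxation time, plus one inversion amplitude per grouping. The parameterization, the variable names (ti, signal, group) and the starting values are assumptions for illustration; the paper's exact IG model may differ.

    # Illustrative inversion-group-style fit: shared A and T1*, one amplitude B_g
    # per inversion grouping, giving 2 + (number of groupings) parameters.
    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, ti, signal, group):
        a, t1_star = params[0], params[1]
        b = params[2:]                      # one amplitude per inversion grouping
        model = a - b[group] * np.exp(-ti / t1_star)
        return model - signal

    def ig_fit(ti, signal, group, n_groups):
        # ti: inversion times, signal: measured magnitudes, group: integer index
        # of the inversion grouping each sample belongs to (hypothetical inputs).
        x0 = np.concatenate(([signal.max(), 1000.0],
                             2 * signal.max() * np.ones(n_groups)))
        fit = least_squares(residuals, x0, args=(ti, signal, group))
        a, t1_star = fit.x[0], fit.x[1]
        b_mean = fit.x[2:].mean()
        t1 = t1_star * (b_mean / a - 1.0)   # standard Look-Locker correction
        return t1, fit.x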
ERIC Educational Resources Information Center
Mitchell, James K.; Carter, William E.
2000-01-01
Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…
Parametrization of electron impact ionization cross sections for CO, CO2, NH3 and SO2
NASA Technical Reports Server (NTRS)
Srivastava, Santosh K.; Nguyen, Hung P.
1987-01-01
The electron impact ionization and dissociative ionization cross section data of CO, CO2, CH4, NH3, and SO2, measured in the laboratory, were parameterized utilizing an empirical formula based on the Born approximation. For this purpose, a chi-squared minimization technique was employed which provided an excellent fit to the experimental data.
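A hedged sketch of the fitting mechanics follows: an illustrative Born-type parameterization is fitted to measured cross sections by chi-squared minimization. The functional form, the threshold value and the synthetic data are placeholders, not the formula or data of the paper.

    # Chi-squared fit of a Born-approximation-inspired form to measured
    # electron-impact ionization cross sections (illustrative only).
    import numpy as np
    from scipy.optimize import curve_fit

    def born_form(energy, a, b1, b2, threshold=14.0):
        # sigma(E) = [a*ln(E/I) + b1*(1-I/E) + b2*(1-I/E)^2] / (I*E), I = threshold
        x = 1.0 - threshold / energy
        return (a * np.log(energy / threshold) + b1 * x + b2 * x ** 2) / (threshold * energy)

    # Energies in eV; cross sections and uncertainties in arbitrary units.
    energy = np.array([20, 30, 50, 70, 100, 150, 200, 300, 500, 1000], dtype=float)
    sigma = born_form(energy, 120.0, 80.0, -30.0) + np.random.normal(0, 0.002, energy.size)
    sigma_err = np.full(energy.size, 0.002)

    # curve_fit with absolute_sigma=True minimizes chi^2 = sum(((data-model)/err)^2)
    popt, pcov = curve_fit(born_form, energy, sigma, p0=[100.0, 50.0, 0.0],
                           sigma=sigma_err, absolute_sigma=True)
    chi2 = np.sum(((sigma - born_form(energy, *popt)) / sigma_err) ** 2)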
Next generation initiation techniques
NASA Technical Reports Server (NTRS)
Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans
1993-01-01
Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
Genetic algorithm enhanced by machine learning in dynamic aperture optimization
NASA Astrophysics Data System (ADS)
Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert
2018-05-01
With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
Coorssen, Jens R; Yergey, Alfred L
2015-12-03
Molecular mechanisms underlying health and disease function at least in part based on the flexibility and fine-tuning afforded by protein isoforms and post-translational modifications. The ability to effectively and consistently resolve these protein species or proteoforms, as well as assess quantitative changes is therefore central to proteomic analyses. Here we discuss the pros and cons of currently available and developing analytical techniques from the perspective of the full spectrum of available tools and their current applications, emphasizing the concept of fitness-for-purpose in experimental design based on consideration of sample size and complexity; this necessarily also addresses analytical reproducibility and its variance. Data quality is considered the primary criterion, and we thus emphasize that the standards of Analytical Chemistry must apply throughout any proteomic analysis.
Time-Resolved Transposon Insertion Sequencing Reveals Genome-Wide Fitness Dynamics during Infection.
Yang, Guanhua; Billings, Gabriel; Hubbard, Troy P; Park, Joseph S; Yin Leung, Ka; Liu, Qin; Davis, Brigid M; Zhang, Yuanxing; Wang, Qiyao; Waldor, Matthew K
2017-10-03
Transposon insertion sequencing (TIS) is a powerful high-throughput genetic technique that is transforming functional genomics in prokaryotes, because it enables genome-wide mapping of the determinants of fitness. However, current approaches for analyzing TIS data assume that selective pressures are constant over time and thus do not yield information regarding changes in the genetic requirements for growth in dynamic environments (e.g., during infection). Here, we describe structured analysis of TIS data collected as a time series, termed pattern analysis of conditional essentiality (PACE). From a temporal series of TIS data, PACE derives a quantitative assessment of each mutant's fitness over the course of an experiment and identifies mutants with related fitness profiles. In so doing, PACE circumvents major limitations of existing methodologies, specifically the need for artificial effect size thresholds and enumeration of bacterial population expansion. We used PACE to analyze TIS samples of Edwardsiella piscicida (a fish pathogen) collected over a 2-week infection period from a natural host (the flatfish turbot). PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a cutoff at a terminal sampling point, and it identified subpopulations of mutants with distinct fitness profiles, one of which informed the design of new live vaccine candidates. Overall, PACE enables efficient mining of time series TIS data and enhances the power and sensitivity of TIS-based analyses. IMPORTANCE Transposon insertion sequencing (TIS) enables genome-wide mapping of the genetic determinants of fitness, typically based on observations at a single sampling point. Here, we move beyond analysis of endpoint TIS data to create a framework for analysis of time series TIS data, termed pattern analysis of conditional essentiality (PACE). We applied PACE to identify genes that contribute to colonization of a natural host by the fish pathogen Edwardsiella piscicida. PACE uncovered more genes that affect E. piscicida's fitness in vivo than were detected using a terminal sampling point, and its clustering of mutants with related fitness profiles informed design of new live vaccine candidates. PACE yields insights into patterns of fitness dynamics and circumvents major limitations of existing methodologies. Finally, the PACE method should be applicable to additional "omic" time series data, including screens based on clustered regularly interspaced short palindromic repeats with Cas9 (CRISPR/Cas9). Copyright © 2017 Yang et al.
Irvine, Michael A; Hollingsworth, T Déirdre
2018-05-26
Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
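The following is a minimal sketch of an adaptive-tolerance approximate Bayesian computation (ABC) loop, in which the tolerance is tightened each generation to a quantile of the previously accepted distances. It only illustrates the general idea; the paper's scheme, kernel density summaries and distance metric are more elaborate, and the toy Poisson example is an assumption.

    # Adaptive-tolerance ABC rejection sketch (illustrative only).
    import numpy as np

    def abc_adaptive(simulate, distance, prior_sample, data, n_particles=200,
                     n_generations=5, quantile=0.5, rng=None):
        rng = rng or np.random.default_rng()
        tolerance = np.inf
        particles, distances = [], []
        for _ in range(n_generations):
            particles, distances = [], []
            while len(particles) < n_particles:
                theta = prior_sample(rng)
                d = distance(simulate(theta, rng), data)
                if d < tolerance:
                    particles.append(theta)
                    distances.append(d)
            tolerance = np.quantile(distances, quantile)   # adapt for next generation
        return np.array(particles), tolerance

    # Toy example: infer the mean of a Poisson count model.
    data = np.random.default_rng(1).poisson(4.0, size=50)
    post, tol = abc_adaptive(
        simulate=lambda th, rng: rng.poisson(th, size=data.size),
        distance=lambda sim, obs: abs(sim.mean() - obs.mean()),
        prior_sample=lambda rng: rng.uniform(0.0, 10.0),
        data=data)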
NASA Astrophysics Data System (ADS)
Mede, Kyle; Brandt, Timothy D.
2017-03-01
We present the Exoplanet Simple Orbit Fitting Toolbox (ExoSOFT), a new, open-source suite to fit the orbital elements of planetary or stellar-mass companions to any combination of radial velocity and astrometric data. To explore the parameter space of Keplerian models, ExoSOFT may be operated with its own multistage sampling approach or interfaced with third-party tools such as emcee. In addition, ExoSOFT is packaged with a collection of post-processing tools to analyze and summarize the results. Although only a few systems have been observed with both radial velocity and direct imaging techniques, this number will increase, thanks to upcoming spacecraft and ground-based surveys. Providing both forms of data enables simultaneous fitting that can help break degeneracies in the orbital elements that arise when only one data type is available. The dynamical mass estimates this approach can produce are important when investigating the formation mechanisms and subsequent evolution of substellar companions. ExoSOFT was verified through fitting to artificial data and was implemented using the Python and Cython programming languages; it is available for public download at https://github.com/kylemede/ExoSOFT under GNU General Public License v3.
Impression of multiple implants using photogrammetry: Description of technique and case presentation
Peñarrocha-Oltra, David; Agustín-Panadero, Rubén; Bagán, Leticia; Giménez, Beatriz
2014-01-01
Aim: To describe a technique for registering the positions of multiple dental implants using a system based on photogrammetry. A case is presented in which a prosthetic treatment was performed using this technique. Study Design: Three Euroteknika® dental implants were placed to rehabilitate a 55-year-old male patient with right posterior maxillary edentulism. Three months later, the positions of the implants were registered using a photogrammetry-based stereo-camera (PICcamera®). After processing patient and implant data, special abutments (PICabutment®) were screwed onto each implant. The PICcamera® was then used to capture images of the implant positions, automatically taking 150 images in less than 60 seconds. From this information a file was obtained describing the relative positions – angles and distances – of each implant in vector form. Information regarding the soft tissues was obtained from an alginate impression that was cast in plaster and scanned. A Cr-Co structure was obtained using CAD/CAM, and its passive fit was verified in the patient’s mouth using the Sheffield test and the screw resistance test. Results and Conclusions: Twelve months after loading, peri-implant tissues were healthy and no marginal bone loss was observed. The clinical application of this new system using photogrammetry to record the position of multiple dental implants facilitated the rehabilitation of a patient with posterior maxillary edentulism by means of a prosthesis with optimal fit. The prosthetic process was accurate, fast, simple to apply and comfortable for the patient. Key words: Dental implants, photogrammetry, dental impression technique, CAD/CAM. PMID:24608216
Experimental characterization of wingtip vortices in the near field using smoke flow visualizations
NASA Astrophysics Data System (ADS)
Serrano-Aguilera, J. J.; García-Ortiz, J. Hermenegildo; Gallardo-Claros, A.; Parras, L.; del Pino, C.
2016-08-01
In order to predict the axial development of the wingtip vortices strength, an accurate theoretical model is required. Several experimental techniques have been used to that end, e.g. PIV or hot-wire anemometry, but they imply a significant cost and effort. For this reason, we have performed experiments using the smoke-wire technique to visualize smoke streaks in six planes perpendicular to the main stream flow direction. Using this visualization technique, we obtained quantitative information regarding the vortex velocity field by means of Batchelor's model for two chord-based Reynolds numbers, Re_c = 3.33×10^4 and 10^5. Therefore, this theoretical vortex model has been introduced in the integration of ordinary differential equations which describe the temporal evolution of streak lines as a function of two parameters: the swirl number, S, and the virtual axial origin, z̄_0. We have applied two different procedures to minimize the distance between experimental and theoretical flow patterns: individual curve fitting at six different control planes in the streamwise direction, and global curve fitting over all the control planes simultaneously. Both sets of results have been compared with those provided by del Pino et al. (Phys Fluids 23(013):602, 2011b. doi: 10.1063/1.3537791), finding good agreement. Finally, we have observed a weak influence of the Reynolds number on the values of S and z̄_0 at low-to-moderate Re_c. This experimental technique is proposed as a low cost alternative to characterize wingtip vortices based on flow visualizations.
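The sketch below shows the structure of the "global" fitting procedure: a single pair (S, z̄_0) is chosen to minimize the summed distance between measured streak points and model-predicted streaks over all control planes at once. The predict_streak function is a placeholder for the integration of the streakline equations with Batchelor's vortex model, and the plane positions and data are illustrative assumptions.

    # Global least-squares fit of (swirl number, virtual axial origin) to streak
    # patterns in several control planes (illustrative sketch).
    import numpy as np
    from scipy.optimize import minimize

    def predict_streak(swirl, z0, plane_z, n_points=100):
        # Placeholder model: returns (n_points, 2) predicted streak coordinates
        # in the given control plane for parameters (swirl, z0).
        theta = np.linspace(0.0, 2.0 * np.pi, n_points)
        radius = 1.0 / np.sqrt(swirl * (plane_z + z0))
        return np.column_stack((radius * np.cos(theta), radius * np.sin(theta)))

    def global_misfit(params, planes, measured):
        swirl, z0 = params
        if swirl <= 0.0 or min(planes) + z0 <= 0.0:
            return 1e12                      # keep the optimizer in the valid region
        total = 0.0
        for plane_z, points in zip(planes, measured):
            pred = predict_streak(swirl, z0, plane_z, n_points=len(points))
            total += np.sum((pred - points) ** 2)
        return total

    # planes: streamwise positions of the six control planes (assumed units);
    # measured: list of (n_i, 2) arrays of digitized streak coordinates per plane.
    planes = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
    measured = [predict_streak(0.3, 1.2, z) + np.random.normal(0, 0.01, (100, 2))
                for z in planes]
    result = minimize(global_misfit, x0=[0.2, 1.0], args=(planes, measured),
                      method="Nelder-Mead")
    swirl_fit, z0_fit = result.x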
Increased anteversion of press-fit femoral stems compared with anatomic femur.
Emerson, Roger H
2012-02-01
With contemporary canal-filling press-fit stems, there is no adjustability of stem position in the canal and therefore the canal anatomy determines stem version. Stem version will affect head/neck impingement, polyethylene wear from edge loading, and hip stability, but despite this, the postoperative version of a canal-filling press-fit stem is unclear. Is there a difference between the version of the nonoperated femur and the final version of a canal-filling press-fit femoral component? Could a difference create an alignment problem for the hip replacement? Sixty-four hips were studied with fluoroscopy and 46 nonarthritic and 41 arthritic hips were studied with MRI. A standardized fluoroscopic technique for determining preoperative and postoperative femoral version was developed with the patient supine on a fracture table undergoing supine total hip arthroplasty. To validate the methods, the results were compared with two selected series of axial MRI views of the hip comparing the version of the head with the version of the canal at the base of the neck. For the operated hips, the mean anatomic hip version was less than the stem version: 18.9° versus 27.0°. The difference on average was 8.1° of increased anteversion (SD, 7.4°). Both MRI series showed the femoral neck was more anteverted on average than the femoral head, thereby explaining the operative findings. With a canal-filling press-fit femoral component there is wide variation of postoperative component anteversion with most stems placed in increased anteversion compared with the anatomic head. The surgical technique may need to adjust for this if causing intraoperative impingement or instability.
Dong, Zhengchao; Zhang, Yudong; Liu, Feng; Duan, Yunsuo; Kangarlu, Alayar; Peterson, Bradley S
2014-11-01
Proton magnetic resonance spectroscopic imaging ((1)H MRSI) has been used for the in vivo measurement of intramyocellular lipids (IMCLs) in human calf muscle for almost two decades, but the low spectral resolution between extramyocellular lipids (EMCLs) and IMCLs, partially caused by the magnetic field inhomogeneity, has hindered the accuracy of spectral fitting. The purpose of this paper was to enhance the spectral resolution of (1)H MRSI data from human calf muscle using the SPREAD (spectral resolution amelioration by deconvolution) technique and to assess the influence of improved spectral resolution on the accuracy of spectral fitting and on in vivo measurement of IMCLs. We acquired MRI and (1)H MRSI data from calf muscles of three healthy volunteers. We reconstructed spectral lineshapes of the (1)H MRSI data based on field maps and used the lineshapes to deconvolve the measured MRS spectra, thereby eliminating the line broadening caused by field inhomogeneities and improving the spectral resolution of the (1)H MRSI data. We employed Monte Carlo (MC) simulations with 200 noise realizations to measure the variations of spectral fitting parameters and used an F-test to evaluate the significance of the differences of the variations between the spectra before SPREAD and after SPREAD. We also used Cramer-Rao lower bounds (CRLBs) to assess the improvements of spectral fitting after SPREAD. The use of SPREAD enhanced the separation between EMCL and IMCL peaks in (1)H MRSI spectra from human calf muscle. MC simulations and F-tests showed that the use of SPREAD significantly reduced the standard deviations of the estimated IMCL peak areas (p < 10^(-8)), and the CRLBs were strongly reduced (by ~37%). Copyright © 2014 John Wiley & Sons, Ltd.
3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.
Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef
2016-11-01
In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
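A simplified sketch of the core geometric step is given below: all valid (x, y, z) points of an AFM topography are fitted to a sphere by linear least squares, and the cap's contact angle and footprint are derived from the fitted centre and radius, with the substrate assumed at z = 0. The full procedure in the paper also handles truncated caps, masking and tilt correction, which are omitted here, and the synthetic droplet is an illustrative assumption.

    # Algebraic 3D sphere fit to AFM cap points and derived cap geometry.
    import numpy as np

    def fit_spherical_cap(x, y, z):
        # Sphere equation rearranged linearly: x^2+y^2+z^2 = 2*x0*x + 2*y0*y + 2*z0*z + c
        A = np.column_stack((2 * x, 2 * y, 2 * z, np.ones_like(x)))
        b = x ** 2 + y ** 2 + z ** 2
        (x0, y0, z0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + x0 ** 2 + y0 ** 2 + z0 ** 2)
        height = z0 + r                              # apex height above the substrate
        footprint = np.sqrt(max(r ** 2 - z0 ** 2, 0.0))
        contact_angle = np.degrees(np.arccos(-z0 / r))
        return dict(center=(x0, y0, z0), radius=r, height=height,
                    footprint_radius=footprint, contact_angle_deg=contact_angle)

    # Example: synthetic nanodroplet cap (radius 500 nm, contact angle ~30 deg).
    rng = np.random.default_rng(0)
    xs, ys = rng.uniform(-200, 200, 5000), rng.uniform(-200, 200, 5000)
    r_true, z0_true = 500.0, -500.0 * np.cos(np.radians(30.0))
    zs = np.sqrt(np.clip(r_true ** 2 - xs ** 2 - ys ** 2, 0, None)) + z0_true
    mask = zs > 0                                    # keep only points on the cap
    geom = fit_spherical_cap(xs[mask], ys[mask], zs[mask])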
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
NASA Astrophysics Data System (ADS)
Chen, Lei; Zhang, Liguo; Tang, Yixian; Zhang, Hong
2018-04-01
In this paper, the principle of the exponent Knothe model is introduced in detail, and the variation of mining subsidence with time is analysed based on the formulas for subsidence, subsidence velocity and subsidence acceleration. Five scenes of radar images and six levelling measurements were collected to extract ground deformation characteristics of a coal mining area in this study. The unknown parameters of the exponent Knothe model were then estimated by combining the levelling data with line-of-sight deformation information obtained by the InSAR technique. Comparing the fitting and prediction results obtained from the combined InSAR and levelling data with those obtained from levelling alone shows that the combined approach gives clearly better fitting and prediction accuracy. The InSAR measurements can therefore significantly improve the fitting and prediction accuracy of the exponent Knothe model.
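As a hedged sketch of the parameter estimation step, the snippet below fits an exponent Knothe time function, w(t) = w_max (1 - exp(-c t))^k, to a merged set of levelling subsidence values and InSAR line-of-sight displacements projected to the vertical. The model form, the simple LOS projection with an assumed incidence angle, and the placeholder data values are illustrative assumptions, not the paper's formulation or measurements.

    # Joint levelling + InSAR fit of an exponent Knothe time function (sketch).
    import numpy as np
    from scipy.optimize import curve_fit

    def knothe_exponent(t, w_max, c, k):
        return w_max * (1.0 - np.exp(-c * t)) ** k

    # Illustrative placeholder values: t in days, displacements in mm.
    t_level = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 300.0])
    w_level = np.array([0.0, -35.0, -120.0, -240.0, -330.0, -380.0])   # levelling (vertical)
    t_insar = np.array([30.0, 65.0, 100.0, 135.0, 170.0])
    d_los = np.array([-12.0, -38.0, -75.0, -120.0, -165.0])            # InSAR line of sight
    w_insar = d_los / np.cos(np.radians(34.0))   # assumed radar incidence angle of 34 deg

    t_all = np.concatenate((t_level, t_insar))
    w_all = np.concatenate((w_level, w_insar))
    popt, _ = curve_fit(knothe_exponent, t_all, -w_all, p0=[400.0, 0.01, 1.5],
                        bounds=([0, 0, 0.1], [2000, 1, 10]))
    w_max, c, k = popt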
On Using Surrogates with Genetic Programming.
Hildebrandt, Torsten; Branke, Jürgen
2015-01-01
One way to accelerate evolutionary algorithms with expensive fitness evaluations is to combine them with surrogate models. Surrogate models are efficiently computable approximations of the fitness function, derived by means of statistical or machine learning techniques from samples of fully evaluated solutions. But these models usually require a numerical representation, and therefore cannot be used with the tree representation of genetic programming (GP). In this paper, we present a new way to use surrogate models with GP. Rather than using the genotype directly as input to the surrogate model, we propose using a phenotypic characterization. This phenotypic characterization can be computed efficiently and allows us to define approximate measures of equivalence and similarity. Using a stochastic, dynamic job shop scenario as an example of simulation-based GP with an expensive fitness evaluation, we show how these ideas can be used to construct surrogate models and improve the convergence speed and solution quality of GP.
Nickel speciation in several serpentine (ultramafic) topsoils via bulk synchrotron-based techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebecker, Matthew G.; Chaney, Rufus L.; Sparks, Donald L.
2017-07-01
Serpentine soils have elevated concentrations of trace metals including nickel, cobalt, and chromium compared to non-serpentine soils. Identifying the nickel-bearing minerals allows for prediction of the potential mobility of nickel. Synchrotron-based techniques can identify the solid-phase chemical forms of nickel with minimal sample treatment. Element concentrations are known to vary among soil particle sizes in serpentine soils. Sonication is a useful method to physically disperse sand, silt and clay particles in soils. Synchrotron-based techniques and sonication were employed to identify nickel species in discrete particle size fractions in several serpentine (ultramafic) topsoils to better understand solid-phase nickel geochemistry. Nickel commonly resided in primary serpentine parent material such as layered-phyllosilicate and chain-inosilicate minerals and was associated with iron oxides. In the clay fractions, nickel was associated with iron oxides and primary serpentine minerals, such as lizardite. Linear combination fitting (LCF) was used to characterize nickel species. Total metal concentration did not correlate with nickel speciation and is not an indicator of the major nickel species in the soil. Differences in soil texture were related to different nickel speciation for several particle size fractionated samples. A discussion on LCF illustrates the importance of choosing standards based not only on statistical methods such as Target Transformation but also on sample mineralogy and particle size. Results from the F-test (Hamilton test), which is an underutilized tool in the literature for LCF in soils, highlight its usefulness for determining the appropriate number of standards to use in LCF. EXAFS shell fitting illustrates that the destructive interference commonly found for light and heavy elements in layered double hydroxides and in phyllosilicates can also occur in inosilicate minerals, causing similar structural features and leading to false positive results in LCF.
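The snippet below sketches the two statistical ingredients mentioned above: linear combination fitting of a sample spectrum as a non-negative sum of standard spectra, and a Hamilton-style F-test to decide whether adding one more standard is justified. The synthetic spectra and significance threshold are illustrative assumptions; real analyses fit normalized XANES/EXAFS over a defined energy range.

    # LCF with non-negative least squares plus an F-test for an extra standard.
    import numpy as np
    from scipy.optimize import nnls
    from scipy.stats import f as f_dist

    def lcf(sample, standards):
        # standards: (n_points, n_standards) matrix of reference spectra
        weights, _ = nnls(standards, sample)
        residual = sample - standards @ weights
        return weights, float(np.sum(residual ** 2))

    def f_test_extra_standard(rss_small, rss_big, n_points, n_small, n_big):
        # Compare a fit with n_small standards to a nested fit with n_big standards.
        df1, df2 = n_big - n_small, n_points - n_big
        f_value = ((rss_small - rss_big) / df1) / (rss_big / df2)
        p_value = 1.0 - f_dist.cdf(f_value, df1, df2)
        return f_value, p_value

    rng = np.random.default_rng(3)
    energy = np.linspace(0.0, 1.0, 300)
    std = np.column_stack([np.sin((i + 1) * np.pi * energy) ** 2 for i in range(3)])
    sample = std @ np.array([0.6, 0.4, 0.0]) + rng.normal(0, 0.01, energy.size)

    w2, rss2 = lcf(sample, std[:, :2])       # two-standard fit
    w3, rss3 = lcf(sample, std)              # three-standard fit
    f_val, p_val = f_test_extra_standard(rss2, rss3, energy.size, 2, 3)
    keep_third = p_val < 0.05                # third standard kept only if significant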
High-k shallow traps observed by charge pumping with varying discharging times
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Szu-Han; Chen, Ching-En; Tseng, Tseung-Yuen
2013-11-07
In this paper, we investigate the influence of falling time and base level time on high-k bulk shallow traps measured by the charge pumping technique in n-channel metal-oxide-semiconductor field-effect transistors with HfO2/metal gate stacks. N_T-V_high-level characteristic curves with different duty ratios indicate that the electron detrapping time dominates the value of N_T for the extra contribution of I_cp traps, where N_T is the number of traps and I_cp is the charge pumping current. By fitting the discharge formula at different temperatures, the results show that the extra contribution of I_cp traps at high voltage are in fact high-k bulk shallow traps. This is also verified through a comparison of different interlayer thicknesses and different Ti_xN_(1-x) metal gate concentrations. Next, N_T-V_high-level characteristic curves with different falling times (t_falling-time) and base level times (t_base-level) show that the extra contribution of I_cp traps decreases with an increase in t_falling-time. By fitting the discharge formula for different t_falling-time, the results show that electrons trapped in high-k bulk shallow traps first discharge to the channel and then to source and drain during t_falling-time. This current cannot be measured by the charge pumping technique. Subsequent measurements of N_T by the charge pumping technique at t_base-level reveal a remainder of electrons trapped in high-k bulk shallow traps.
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
Nuclear Electric Vehicle Optimization Toolset (NEVOT)
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Kos, Larry D.; Qualls, A. Lou; Greene, Sherrell
2004-01-01
The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major nuclear electric propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a genetic algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be considered through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
Turbulence profiling for adaptive optics tomographic reconstructors
NASA Astrophysics Data System (ADS)
Laidlaw, Douglas J.; Osborn, James; Wilson, Richard W.; Morris, Timothy J.; Butterley, Timothy; Reeves, Andrew P.; Townson, Matthew J.; Gendron, Éric; Vidal, Fabrice; Morel, Carine
2016-07-01
To approach optimal performance, advanced Adaptive Optics (AO) systems deployed on ground-based telescopes must have accurate knowledge of atmospheric turbulence as a function of altitude. Stereo-SCIDAR is a high-resolution stereoscopic instrument dedicated to this measurement. Here, its profiles are directly compared to internal AO telemetry atmospheric profiling techniques for CANARY (Vidal et al. 2014), a Multi-Object AO (MOAO) pathfinder on the William Herschel Telescope (WHT), La Palma. In total, twenty datasets are analysed across July and October of 2014. Levenberg-Marquardt fitting algorithms dubbed Direct Fitting and Learn 2 Step (L2S; Martin 2014) are used in the recovery of profile information via covariance matrices, respectively attaining average Pearson product-moment correlation coefficients with stereo-SCIDAR of 0.2 and 0.74. By excluding the measure of covariance between orthogonal Wavefront Sensor (WFS) slopes, these results have revised values of 0.65 and 0.2. A data analysis technique that combines L2S and SLODAR is subsequently introduced that achieves a correlation coefficient of 0.76.
Compact multi-band fluorescent microscope with an electrically tunable lens for autofocusing
Wang, Zhaojun; Lei, Ming; Yao, Baoli; Cai, Yanan; Liang, Yansheng; Yang, Yanlong; Yang, Xibin; Li, Hui; Xiong, Daxi
2015-01-01
Autofocusing is a routine technique for redressing the focus drift that occurs in time-lapse microscopic image acquisition. To date, most automatic microscopes are designed around a distance detection scheme to fulfill the autofocusing operation, which may suffer from the low contrast of the reflected signal due to the refractive index mismatch at the water/glass interface. To achieve high autofocusing speed with minimal motion artifacts, we developed a compact multi-band fluorescent microscope with an electrically tunable lens (ETL) device for autofocusing. A modified searching algorithm based on equidistant scanning and curve fitting is proposed, which no longer requires a single-peak focus curve and thus efficiently restrains the impact of external disturbance. This technique enables us to achieve an autofocusing time of down to 170 ms and a reproducibility of over 97%. The imaging head of the microscope has dimensions of 12 cm × 12 cm × 6 cm. This portable instrument can easily fit inside standard incubators for real-time imaging of living specimens. PMID:26601001
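A simplified sketch of autofocusing by equidistant scanning plus curve fitting is given below: a focus metric is evaluated at equally spaced tunable-lens settings and a parabola is fitted around the best sample, with the vertex taken as the refined focus. The acquire callable, the lens_settings array and the sharpness metric are assumptions; the paper's modified search is more robust to multi-peak focus curves than this basic version.

    # Equidistant-scan-and-fit autofocus sketch.
    import numpy as np

    def focus_metric(image):
        # Variance of a Laplacian-like second difference as a sharpness measure.
        d2 = image[2:, 1:-1] + image[:-2, 1:-1] + image[1:-1, 2:] + image[1:-1, :-2] \
             - 4.0 * image[1:-1, 1:-1]
        return float(np.var(d2))

    def autofocus(acquire, lens_settings):
        # acquire(setting) -> 2D image array (hypothetical camera/ETL interface)
        scores = np.array([focus_metric(acquire(s)) for s in lens_settings])
        i = int(np.argmax(scores))
        lo, hi = max(i - 2, 0), min(i + 3, len(lens_settings))
        # Quadratic fit around the best sample; the vertex is the refined focus.
        coeffs = np.polyfit(lens_settings[lo:hi], scores[lo:hi], 2)
        vertex = -coeffs[1] / (2.0 * coeffs[0]) if coeffs[0] < 0 else lens_settings[i]
        return float(np.clip(vertex, lens_settings[0], lens_settings[-1]))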
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta from a tin sample with an etched V-shaped groove in its free surface are collected by a soft recovery technique. The produced fragments are then automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the fragment size distributions, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison shows that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
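The sketch below fits a fragment-size distribution with a two-component linear combination of exponentials, as suggested by Poisson-mixture fragmentation statistics, using the complementary cumulative distribution of measured sizes. The number of components, starting values and toy data are assumptions for illustration, not the paper's fitted model.

    # Two-component exponential-mixture fit to a fragment size distribution.
    import numpy as np
    from scipy.optimize import curve_fit

    def mixture_ccdf(s, w, s1, s2):
        # P(size > s) for a mixture of two exponentials with weights w and 1-w.
        return w * np.exp(-s / s1) + (1.0 - w) * np.exp(-s / s2)

    def fit_fragment_sizes(sizes):
        s_sorted = np.sort(sizes)
        ccdf = 1.0 - np.arange(1, len(s_sorted) + 1) / len(s_sorted)
        p0 = [0.5, np.mean(sizes) / 2.0, 2.0 * np.mean(sizes)]
        popt, _ = curve_fit(mixture_ccdf, s_sorted, ccdf, p0=p0,
                            bounds=([0, 1e-9, 1e-9], [1, np.inf, np.inf]))
        return popt   # (weight, mean size of component 1, mean size of component 2)

    # Toy data: mixture of small and large fragments (sizes in micrometers).
    rng = np.random.default_rng(7)
    sizes = np.concatenate((rng.exponential(8.0, 800), rng.exponential(40.0, 200)))
    weight, mean_small, mean_large = fit_fragment_sizes(sizes)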
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza
2018-03-02
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit pendulum data and GNSS data, with standard deviation of residuals smaller than one millimeter. These encouraging results led to the conclusion that GNSS technique can be profitably applied to dam monitoring allowing a denser description, both in space and time, of the dam displacements than the one based on pendulum observations.
Experimental demonstration of deep frequency modulation interferometry.
Isleif, Katharina-Sophie; Gerberding, Oliver; Schwarze, Thomas S; Mehmet, Moritz; Heinzel, Gerhard; Cervantes, Felipe Guzmán
2016-01-25
Experiments for space and ground-based gravitational wave detectors often require a large dynamic range interferometric position readout of test masses with 1 pm/√Hz precision over long time scales. Heterodyne interferometer schemes that achieve such precisions are available, but they require complex optical set-ups, limiting their scalability for multiple channels. This article presents the first experimental results on deep frequency modulation interferometry, a new technique that combines sinusoidal laser frequency modulation in unequal arm length interferometers with a non-linear fit algorithm. We have tested the technique in a Michelson and a Mach-Zehnder Interferometer topology, respectively, demonstrated continuous phase tracking of a moving mirror and achieved a performance equivalent to a displacement sensitivity of 250 pm/Hz at 1 mHz between the phase measurements of two photodetectors monitoring the same optical signal. By performing time series fitting of the extracted interference signals, we measured that the linearity of the laser frequency modulation is on the order of 2% for the laser source used.
Investigation of non-linear contact for a clearance-fit bolt in a graphite/epoxy laminate
NASA Technical Reports Server (NTRS)
Prabhakaran, R.; Naik, R. A.
1986-01-01
Numerous analytical studies have been published for the nonlinear load-contact variations in clearance-fit bolted joints. In these studies, stress distributions have been obtained and failure predictions have been made. However, very little experimental work has been reported regarding the contact or the stresses. This paper describes a fiber-optic technique for measuring the angle of contact in a clearance-fit bolt-loaded hole. Measurements of the contact angle have been made in a quasi-isotropic graphite-epoxy laminate by the optical as well as an electrical technique, and the results have been compared with those obtained from a finite-element analysis. The results from the two experimental techniques show excellent agreement; the finite-element results show some discrepancy, probably due to the interfacial frictions.
NASA Astrophysics Data System (ADS)
Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert
2016-05-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
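As a hedged sketch of the simpler of the two analysis routes, the snippet below turns each ensemble member's aggregate misfit score into a weight and computes a weighted mean and quantile envelope of equivalent sea-level rise. The Gaussian score-to-weight conversion and the toy ensemble values are assumptions for illustration; the paper's scoring and the Bayesian emulation/calibration route are not reproduced here.

    # Score-weighted ensemble averaging and envelope (illustrative sketch).
    import numpy as np

    def weighted_envelope(sea_level_rise, misfit, quantiles=(0.05, 0.5, 0.95)):
        # Down-weight poorly fitting runs; sigma sets how sharply misfit penalizes.
        sigma = np.median(misfit)
        weights = np.exp(-0.5 * (misfit / sigma) ** 2)
        weights /= weights.sum()
        order = np.argsort(sea_level_rise)
        cumw = np.cumsum(weights[order])
        envelope = np.interp(quantiles, cumw, sea_level_rise[order])
        mean = float(np.sum(weights * sea_level_rise))
        return mean, dict(zip(quantiles, envelope))

    # 625-member toy ensemble: equivalent sea-level rise (m) and aggregate misfit.
    rng = np.random.default_rng(11)
    esl = rng.normal(3.3, 1.0, 625)
    misfit = np.abs(esl - 3.0) + rng.gamma(2.0, 0.3, 625)
    mean_esl, envelope = weighted_envelope(esl, misfit)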
Haselhuhn, Klaus; Marotti, Juliana; Tortamano, Pedro; Weiss, Claudia; Suleiman, Lubna; Wolfart, Stefan
2014-12-01
Passive fit of the prosthetic superstructure is important to avoid complications; however, evaluation of passive fit is not possible using conventional procedures. Thus, the aim of this study was to check and locate mechanical stress in bar restorations fabricated using two casting techniques. Fifteen patients received four implants in the interforaminal region of the mandible, and a bar was fabricated using either the cast-on abutment or lost-wax casting technique. The fit accuracy was checked according to the Sheffield's test criteria. Measurements were recorded on the master model with a gap-free, passive fit using foil strain gauges both before and after tightening the prosthetic screws. Data acquisition and processing was analyzed with computer software and submitted to statistical analysis (ANOVA). The greatest axial distortion was at position 42 with the cast-on abutment technique, with a mean distortion of 450 μm/m. The lowest axial distortion occurred at position 44 with the lost-wax casting technique, with a mean distortion of 100 μm/m. The minimal differences between the means of axial distortion do not indicate any significant differences between the techniques (P = 0.2076). Analysis of the sensor axial distortion in relation to the implant position produced a significant difference (P < 0.0001). Significantly higher measurements were recorded in the axial distortion analysis of the distal sensors of implants at the 34 and 44 regions than on the mesial positions at the 32 and 42 regions (P = 0.0481). The measuring technique recorded axial distortion in the implant-supported superstructures. Distortions were present at both casting techniques, with no significant difference between the sides.
Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection
NASA Astrophysics Data System (ADS)
Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei
Automatic thresholding is an important technique for rail defect detection, but traditional methods are not competent enough to fit the characteristics of this application. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, fitting the features that rail images are unimodal and defect proportion is small. MWOC selects a threshold by optimizing the product of object correlation and the weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves misclassification error of 0.85%, and outperforms the other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and valley-emphasis method, for the application of rail defect detection.
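The skeleton below shows a product-criterion threshold search in the spirit of MWOC: every candidate gray level is scored by the product of an object-correlation term and a weight term, and the maximizing level is returned. The two criterion functions are simple stand-ins, not the exact definitions used in the paper, and an 8-bit grayscale image is assumed.

    # Exhaustive product-criterion threshold search (criterion functions are stand-ins).
    import numpy as np

    def object_correlation(image, t):
        # Stand-in: squared correlation between the image and its binarization.
        binary = (image <= t).astype(float)
        if binary.std() == 0.0:
            return 0.0
        return float(np.corrcoef(image.ravel(), binary.ravel())[0, 1] ** 2)

    def defect_weight(image, t):
        # Stand-in weight: favors thresholds that keep the defect proportion small.
        proportion = float((image <= t).mean())
        return float(np.exp(-proportion / 0.05))

    def mwoc_threshold(image):
        levels = np.arange(int(image.min()) + 1, int(image.max()))
        scores = [object_correlation(image, t) * defect_weight(image, t) for t in levels]
        return int(levels[int(np.argmax(scores))])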
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
1991-01-01
Office: MICOM, Huntsville, AL 35805. Contract #: DAAHO1-92-C-R150. Phone: (205) 876-7502. PI: D. Breti Beasley. Title: Infrared Laser Diode Based Infrared ... Techniques will be investigated to design a form-fit, gimbal-mounted 94 GHz/infrared focal plane array dual-mode missile seeker sensor based on low...resolution at 94 GHz and a 128x128 array IR image processing for autonomous target recognition and aimpoint selection. The 94 GHz and infrared electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholey, J. E.; Lin, L.; Ainsley, C. G.
2015-06-15
Purpose: To evaluate the accuracy and limitations of a commercially-available treatment planning system’s (TPS’s) dose calculation algorithm for proton pencil-beam scanning (PBS) and present a novel technique to efficiently derive a clinically-acceptable beam model. Methods: In-air fluence profiles of PBS spots were modeled in the TPS alternately as single- (SG) and double-Gaussian (DG) functions, based on fits to commissioning data. Uniform-fluence, single-energy-layer square fields of various sizes and energies were calculated with both beam models and delivered to water. Dose was measured at several depths. Motivated by observed discrepancies in measured-versus-calculated dose comparisons, a third model was constructed based on double-Gaussian parameters contrived through a novel technique developed to minimize these differences (DGC). Eleven cuboid-dose-distribution-shaped fields with varying range/modulation and field size were subsequently generated in the TPS, using each of the three beam models described, and delivered to water. Dose was measured at the middle of each spread-out Bragg peak. Results: For energies <160 MeV, the DG model fit square-field measurements to <2% at all depths, while the SG model could disagree by >6%. For energies >160 MeV, both SG and DG models fit square-field measurements to <1% at <4 cm depth, but could exceed 6% deeper. By comparison, disagreement with the DGC model was always <3%. For the cuboid plans, calculation-versus-measured percent dose differences exceeded 7% for the SG model, being larger for smaller fields. The DG model showed <3% disagreement for all field sizes in shorter-range beams, although >5% differences for smaller fields persisted in longer-range beams. In contrast, the DGC model predicted measurements to <2% for all beams. Conclusion: Neither the TPS’s SG nor DG models, employed as intended, are ideally suited for routine clinical use. However, via a novel technique to be presented, its DG model can be tuned judiciously to yield acceptable results.
Kamensky, David; Hsu, Ming-Chen; Schillinger, Dominik; Evans, John A.; Aggarwal, Ankush; Bazilevs, Yuri; Sacks, Michael S.; Hughes, Thomas J. R.
2014-01-01
In this paper, we develop a geometrically flexible technique for computational fluid–structure interaction (FSI). The motivating application is the simulation of tri-leaflet bioprosthetic heart valve function over the complete cardiac cycle. Due to the complex motion of the heart valve leaflets, the fluid domain undergoes large deformations, including changes of topology. The proposed method directly analyzes a spline-based surface representation of the structure by immersing it into a non-boundary-fitted discretization of the surrounding fluid domain. This places our method within an emerging class of computational techniques that aim to capture geometry on non-boundary-fitted analysis meshes. We introduce the term “immersogeometric analysis” to identify this paradigm. The framework starts with an augmented Lagrangian formulation for FSI that enforces kinematic constraints with a combination of Lagrange multipliers and penalty forces. For immersed volumetric objects, we formally eliminate the multiplier field by substituting a fluid–structure interface traction, arriving at Nitsche’s method for enforcing Dirichlet boundary conditions on object surfaces. For immersed thin shell structures modeled geometrically as surfaces, the tractions from opposite sides cancel due to the continuity of the background fluid solution space, leaving a penalty method. Application to a bioprosthetic heart valve, where there is a large pressure jump across the leaflets, reveals shortcomings of the penalty approach. To counteract steep pressure gradients through the structure without the conditioning problems that accompany strong penalty forces, we resurrect the Lagrange multiplier field. Further, since the fluid discretization is not tailored to the structure geometry, there is a significant error in the approximation of pressure discontinuities across the shell. This error becomes especially troublesome in residual-based stabilized methods for incompressible flow, leading to problematic compressibility at practical levels of refinement. We modify existing stabilized methods to improve performance. To evaluate the accuracy of the proposed methods, we test them on benchmark problems and compare the results with those of established boundary-fitted techniques. Finally, we simulate the coupling of the bioprosthetic heart valve and the surrounding blood flow under physiological conditions, demonstrating the effectiveness of the proposed techniques in practical computations. PMID:25541566
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1990-01-01
The level of skill in predicting the size of the sunspot cycle is investigated for the two types of precursor techniques, single variate and bivariate fits, both applied to cycle 22. The present level of growth in solar activity is compared to the mean level of growth (cycles 10-21) and to the predictions based on the precursor techniques. It is shown that, for cycle 22, both single variate methods (based on geomagnetic data) and bivariate methods suggest a maximum amplitude smaller than that observed for cycle 19, and possibly for cycle 21. Compared to the mean cycle, cycle 22 is presently behaving as if it were a +2.6 sigma cycle (maximum amplitude of about 225), which means that either it will be the first cycle not to be reliably predicted by the combined precursor techniques or its deviation relative to the mean cycle will substantially decrease over the next 18 months.
Framework based on stochastic L-Systems for modeling IP traffic with multifractal behavior
NASA Astrophysics Data System (ADS)
Salvador, Paulo S.; Nogueira, Antonio; Valadas, Rui
2003-08-01
In a previous work we have introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by biologist A. Lindenmayer as a method to model plant growth. L-Systems are string rewriting techniques, characterized by an alphabet, an axiom (initial string) and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, which describes jointly the packet arrival and the packet size processes. The packet arrival process is modeled through a L-System, where the alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore, a trace of aggregate WAN traffic and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System based traffic model can achieve very good fitting performance in terms of first and second order statistics and queuing behavior.
Fitness for Individuals Who Are Visually Impaired or Deafblind.
ERIC Educational Resources Information Center
Lieberman, Lauren J.
2002-01-01
This article discusses the importance of daily physical activity and examples of how individuals who are visually impaired or deaf-blind can access fitness. It describes techniques for running, bicycling, swimming, exercise training in a health club, aerobics, and fitness at home (jumping rope, yoga, and basketball). (Contains references.) (CR)
How many spectral lines are statistically significant?
NASA Astrophysics Data System (ADS)
Freund, J.
When experimental line spectra are fitted with least squares techniques one frequently does not know whether n or n + 1 lines may be fitted safely. This paper shows how an F-test can be applied in order to determine the statistical significance of including an extra line into the fitting routine.
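A minimal sketch of the nested-model F-test described above, assuming Gaussian errors and least-squares fits; the residual sums of squares, point count, and parameter counts below are placeholder values.

```python
# F-test: does adding an (n+1)-th line significantly reduce the residual sum of squares?
from scipy.stats import f as f_dist

def extra_line_significant(rss_n, rss_n1, n_points, n_params_n, n_params_n1, alpha=0.05):
    """Compare a fit with n lines (rss_n) against a fit with n+1 lines (rss_n1)."""
    df1 = n_params_n1 - n_params_n          # extra parameters introduced by the added line
    df2 = n_points - n_params_n1            # residual degrees of freedom of the larger model
    F = ((rss_n - rss_n1) / df1) / (rss_n1 / df2)
    p_value = 1.0 - f_dist.cdf(F, df1, df2)
    return F, p_value, p_value < alpha

# Example: 200 data points, 3 parameters per line (position, width, amplitude).
print(extra_line_significant(rss_n=1.80, rss_n1=1.55, n_points=200,
                             n_params_n=9, n_params_n1=12))
```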
Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques
Shyu, Conrad; Ytreberg, F. Marty
2010-01-01
This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
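The core idea above, fitting the thermodynamic integration samples and integrating the fit over λ, can be sketched as follows. The λ grid and slope values are invented, and this is not the authors' released software (which is available at the URL given in the abstract).

```python
# Sketch: fit dF/dlambda samples with a polynomial (least squares) or a cubic spline
# and integrate the fit over [0, 1] to estimate the free energy difference.
import numpy as np
from scipy.interpolate import CubicSpline

lam = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])        # non-equidistant lambda values
dF_dlam = np.array([12.1, 8.4, 3.2, -1.5, -4.8, -6.0, -6.4])  # <dU/dlambda> estimates (made up)

# Polynomial regression of modest order (polyfit returns highest-order coefficient first).
coeffs = np.polyfit(lam, dF_dlam, deg=4)
dF_poly = np.polynomial.Polynomial(coeffs[::-1])
antiderivative = dF_poly.integ()
delta_F_poly = antiderivative(1.0) - antiderivative(0.0)

# Cubic-spline interpolation as an alternative.
spline = CubicSpline(lam, dF_dlam)
delta_F_spline = spline.integrate(0.0, 1.0)

print(f"Delta F (polynomial fit): {delta_F_poly:.3f}")
print(f"Delta F (spline interpolation): {delta_F_spline:.3f}")
```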
Evaluation of Two Protocols to Measure the Accuracy of Fixed Dental Prostheses: An In Vitro Study.
Schönberger, Joana; Erdelt, Kurt-Jürgen; Bäumer, Daniel; Beuer, Florian
2017-02-02
The aim of this in vitro study was to compare two measurement protocols of the internal and marginal fit of three-unit zirconia fixed dental prostheses (FDPs). Forty-four FDPs were fabricated for standardized dies by two laboratory CAD/CAM systems: Cercon (n = 22) and Ceramill (n = 22). The fitting was tested using a replica technique (RT = technique 1) with a light-body silicone stabilized with heavy-body material. After producing the replicas, cross-sections were made in the buccolingual and mesiodistal directions. FDPs were cemented on definitive dies, embedded, and sectioned (CST = technique 2). The marginal and internal fits were measured under an optical microscope at 50x magnification with a special software program. Data evaluation was performed according to prior studies at a level of significance of 5%. The mean internal gap width was 51 ± 36 μm for the RT and 52 ± 35 μm for the cross-section technique (CST) (p = 0.74). The mean marginal gap width was 27 ± 18 μm for RT and 30 ± 19 μm for CST (p = 0.19). Statistical tests showed no significant differences (p > 0.05). Both techniques can be used for fit evaluation; however, the noninvasive RT is suitable for clinical use. © 2017 by the American College of Prosthodontists.
Fitting and Reconstruction of Thirteen Simple Coronal Mass Ejections
NASA Astrophysics Data System (ADS)
Al-Haddad, Nada; Nieves-Chinchilla, Teresa; Savani, Neel P.; Lugaz, Noé; Roussev, Ilia I.
2018-05-01
Coronal mass ejections (CMEs) are the main drivers of geomagnetic disturbances, but the effects of their interaction with Earth's magnetic field depend on their magnetic configuration and orientation. Fitting and reconstruction techniques have been developed to determine important geometrical and physical CME properties, such as the orientation of the CME axis, the CME size, and its magnetic flux. In many instances, there is disagreement between different methods but also between fitting from in situ measurements and reconstruction based on remote imaging. This could be due to the geometrical or physical assumptions of the models, but also to the fact that the magnetic field inside CMEs is only measured at one point in space as the CME passes over a spacecraft. In this article we compare three methods that are based on different assumptions for measurements by the Wind spacecraft for 13 CMEs from 1997 to 2015. These CMEs are selected from the interplanetary coronal mass ejections catalog on
Boeddinghaus, Moritz; Breloer, Eva Sabina; Rehmann, Peter; Wöstmann, Bernd
2015-11-01
The purpose of this clinical study was to compare the marginal fit of dental crowns based on three different intraoral digital and one conventional impression methods. Forty-nine teeth of altogether 24 patients were prepared to be treated with full-coverage restorations. Digital impressions were made using three intraoral scanners: Sirona CEREC AC Omnicam (OCam), Heraeus Cara TRIOS and 3M Lava True Definition (TDef). Furthermore, a gypsum model based on a conventional impression (EXA'lence, GC, Tokyo, Japan) was scanned with a standard laboratory scanner (3Shape D700). Based on the dataset obtained, four zirconia copings per tooth were produced. The marginal fit of the copings in the patient's mouth was assessed employing a replica technique. Overall, seven measurement copings did not fit and, therefore, could not be assessed. The marginal gap was 88 μm (68-136 μm) [median/interquartile range] for the TDef, 112 μm (94-149 μm) for the Cara TRIOS, 113 μm (81-157 μm) for the laboratory scanner and 149 μm (114-218 μm) for the OCam. There was a statistically significant difference between the OCam and the other groups (p < 0.05). Within the limitations of this study, it can be concluded that zirconia copings based on intraoral scans and a laboratory scans of a conventional model are comparable to one another with regard to their marginal fit. Regarding the results of this study, the digital intraoral impression can be considered as an alternative to a conventional impression with a consecutive digital workflow when the finish line is clearly visible and it is possible to keep it dry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de
We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. We find from our pixel-by-pixel analysis of the measured dust surface density and dust temperature a worrisome error spread, especially close to star formation sites and low-density regions, where for those “contaminated” pixels the surface densities can be under/overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background typical of the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam-size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as χ² values can then no longer be used as a quality indicator of a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of measured star formation rates with direct and indirect techniques.
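A generic sketch of single-pixel modified-blackbody fitting of far-infrared fluxes, assuming a fixed emissivity index β and optically thin emission; the band set, normalization, and noise level are placeholders rather than the paper's pipeline.

```python
# Fit S_nu ∝ tau0 * (nu/nu0)**beta * B_nu(T) to fluxes in Herschel-like bands
# (flux values are in arbitrary units; only the fitted T and normalization matter here).
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def modified_blackbody(nu, log10_tau0, T, beta=2.0, nu0=1e12):
    b_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return 10.0**log10_tau0 * (nu / nu0)**beta * b_nu

wavelengths_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0])   # hypothetical band centers
nu = c / (wavelengths_um * 1e-6)
flux = modified_blackbody(nu, -3.0, 22.0) * (1 + np.random.normal(0, 0.05, nu.size))

popt, pcov = curve_fit(modified_blackbody, nu, flux, p0=[-2.0, 20.0])
print(f"Fitted dust temperature: {popt[1]:.1f} K, log10(tau0): {popt[0]:.2f}")
```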
Novel grid-based optical Braille conversion: from scanning to wording
NASA Astrophysics Data System (ADS)
Yoosefi Babadi, Majid; Jafari, Shahram
2011-12-01
Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising and converting them into English ASCII text documents inside a computer. The resulting words are verified using the relevant dictionary to provide the final output. The algorithms employed in this article can be easily modified to be implemented on other visual pattern recognition systems and text extraction applications. This technique has several advantages, including simplicity of the algorithm, high speed of execution, the ability to help visually impaired persons and blind people to work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille to understand hard-copy Braille manuscripts.
Diaby, Vakaramoko; Adunlin, Georges; Montero, Alberto J
2014-02-01
Survival modeling techniques are increasingly being used as part of decision modeling for health economic evaluations. As many models are available, it is imperative for interested readers to know about the steps in selecting and using the most suitable ones. The objective of this paper is to propose a tutorial for the application of appropriate survival modeling techniques to estimate transition probabilities, for use in model-based economic evaluations, in the absence of individual patient data (IPD). An illustration of the use of the tutorial is provided based on the final progression-free survival (PFS) analysis of the BOLERO-2 trial in metastatic breast cancer (mBC). An algorithm was adopted from Guyot and colleagues, and was then run in the statistical package R to reconstruct IPD, based on the final PFS analysis of the BOLERO-2 trial. It should be emphasized that the reconstructed IPD represent an approximation of the original data. Afterwards, we fitted parametric models to the reconstructed IPD in the statistical package Stata. Both statistical and graphical tests were conducted to verify the relative and absolute validity of the findings. Finally, the equations for transition probabilities were derived using the general equation for transition probabilities used in model-based economic evaluations, and the parameters were estimated from fitted distributions. The results of the application of the tutorial suggest that the log-logistic model best fits the reconstructed data from the latest published Kaplan-Meier (KM) curves of the BOLERO-2 trial. Results from the regression analyses were confirmed graphically. An equation for transition probabilities was obtained for each arm of the BOLERO-2 trial. In this paper, a tutorial was proposed and used to estimate the transition probabilities for model-based economic evaluation, based on the results of the final PFS analysis of the BOLERO-2 trial in mBC. The results of our study can serve as a basis for any model (Markov) that needs the parameterization of transition probabilities, and only has summary KM plots available.
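The final step described above, turning a fitted parametric survival curve into per-cycle transition probabilities, can be sketched as follows. The event times are simulated stand-ins (not BOLERO-2 data), censoring is ignored for brevity, and the log-logistic parameterization follows scipy's fisk distribution.

```python
# Fit a log-logistic model to (reconstructed) progression times and convert it to
# per-cycle transition probabilities tp(t) = 1 - S(t + u) / S(t).
import numpy as np
from scipy.stats import fisk  # log-logistic distribution

rng = np.random.default_rng(1)
pfs_months = fisk.rvs(c=1.7, scale=7.0, size=300, random_state=rng)  # stand-in IPD

c_hat, loc_hat, scale_hat = fisk.fit(pfs_months, floc=0)  # fix location at zero

def transition_prob(t, cycle_length):
    """Probability of progressing during (t, t + cycle_length], given progression-free at t."""
    s_t = fisk.sf(t, c_hat, loc=0, scale=scale_hat)
    s_next = fisk.sf(t + cycle_length, c_hat, loc=0, scale=scale_hat)
    return 1.0 - s_next / s_t

for month in (0.0, 6.0, 12.0):
    print(f"t = {month:4.1f} mo: tp = {transition_prob(month, cycle_length=1.0):.3f}")
```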
Fitting Flux Ropes to a Global MHD Solution: A Comparison of Techniques. Appendix 1
NASA Technical Reports Server (NTRS)
Riley, Pete; Linker, J. A.; Lionello, R.; Mikic, Z.; Odstrcil, D.; Hidalgo, M. A.; Cid, C.; Hu, Q.; Lepping, R. P.; Lynch, B. J.
2004-01-01
Flux rope fitting (FRF) techniques are an invaluable tool for extracting information about the properties of a subclass of CMEs in the solar wind. However, it has proven difficult to assess their accuracy since the underlying global structure of the CME cannot be independently determined from the data. In contrast, large-scale MHD simulations of CME evolution can provide both a global view as well as localized time series at specific points in space. In this study we apply 5 different fitting techniques to 2 hypothetical time series derived from MHD simulation results. Independent teams performed the analysis of the events in "blind tests", for which no information, other than the time series, was provided. From the results, we infer the following: (1) Accuracy decreases markedly with increasingly glancing encounters; (2) Correct identification of the boundaries of the flux rope can be a significant limiter; and (3) Results from techniques that infer global morphology must be viewed with caution. In spite of these limitations, FRF techniques remain a useful tool for describing in situ observations of flux rope CMEs.
Riegel, Adam C; Chen, Yu; Kapur, Ajay; Apicello, Laura; Kuruvilla, Abraham; Rea, Anthony J; Jamshidi, Abolghassem; Potters, Louis
Optically stimulated luminescent dosimeters (OSLDs) are utilized for in vivo dosimetry (IVD) of modern radiation therapy techniques such as intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT). Dosimetric precision achieved with conventional techniques may not be attainable. In this work, we measured accuracy and precision for a large sample of clinical OSLD-based IVD measurements. Weekly IVD measurements were collected from 4 linear accelerators for 2 years and were expressed as percent differences from planned doses. After outlier analysis, 10,224 measurements were grouped in the following way: overall, modality (photons, electrons), treatment technique (3-dimensional [3D] conformal, field-in-field intensity modulation, inverse-planned IMRT, and VMAT), placement location (gantry angle, cardinality, and central axis positioning), and anatomical site (prostate, breast, head and neck, pelvis, lung, rectum and anus, brain, abdomen, esophagus, and bladder). Distributions were modeled via a Gaussian function. Fitting was performed with least squares, and goodness-of-fit was assessed with the coefficient of determination. Model means (μ) and standard deviations (σ) were calculated. Sample means and variances were compared for statistical significance by analysis of variance and the Levene tests (α = 0.05). Overall, μ ± σ was 0.3 ± 10.3%. Precision for electron measurements (6.9%) was significantly better than for photons (10.5%). Precision varied significantly among treatment techniques (P < .0001) with field-in-field lowest (σ = 7.2%) and IMRT and VMAT highest (σ = 11.9% and 13.4%, respectively). Treatment site models with goodness-of-fit greater than 0.90 (6 of 10) yielded accuracy within ±3%, except for head and neck (μ = -3.7%). Precision varied with treatment site (range, 7.3%-13.0%), with breast and head and neck yielding the best and worst precision, respectively. Placement on the central axis of cardinal gantry angles yielded more precise results (σ = 8.5%) compared with other locations (range, 10.5%-11.4%). Accuracy of ±3% was achievable. Precision ranged from 6.9% to 13.4% depending on modality, technique, and treatment site. Simple, standardized locations may improve IVD precision. These findings may aid development of patient-specific tolerances for OSLD-based IVD. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
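A simplified sketch of the distribution modeling described above: histogram the percent dose differences and fit a Gaussian by least squares to estimate μ and σ. The sample is synthetic and the bin choices are arbitrary.

```python
# Fit a Gaussian to a histogram of percent dose differences via least squares.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu)**2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
percent_diff = rng.normal(0.3, 10.3, size=5000)            # stand-in IVD measurements

counts, edges = np.histogram(percent_diff, bins=60, range=(-40, 40))
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(gaussian, centers, counts, p0=[counts.max(), 0.0, 10.0])
model = gaussian(centers, *popt)
r_squared = 1.0 - np.sum((counts - model)**2) / np.sum((counts - counts.mean())**2)
print(f"mu = {popt[1]:.2f}%, sigma = {abs(popt[2]):.2f}%, R^2 = {r_squared:.3f}")
```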
A Healthy Approach to Fitness Center Security.
ERIC Educational Resources Information Center
Sturgeon, Julie
2000-01-01
Examines techniques for keeping college fitness centers secure while maintaining an inviting atmosphere. Building access control, preventing locker room theft, and suppressing causes for physical violence are discussed. (GR)
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the evolved true statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed, most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
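A sketch of fitting a Richards curve and inverting it to estimate age from foot length, using one common parameterization of the curve (several equivalent forms exist); the data points and starting values are invented, not the red fox measurements.

```python
# Fit a Richards growth curve and invert it to estimate age from a measured length.
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, nu):
    """Richards growth curve: asymptote A, rate k, inflection t0, shape nu."""
    return A * (1.0 + nu * np.exp(-k * (t - t0)))**(-1.0 / nu)

age_days = np.array([5, 10, 15, 20, 30, 40, 50, 60, 70, 80], dtype=float)
foot_mm = np.array([38, 52, 66, 79, 100, 116, 127, 134, 139, 142], dtype=float)

popt, pcov = curve_fit(richards, age_days, foot_mm,
                       p0=[150.0, 0.05, 25.0, 0.5], maxfev=10000)
print("A, k, t0, nu =", np.round(popt, 3))

def age_from_length(length, A, k, t0, nu):
    # Analytic inversion of the fitted curve.
    return t0 - np.log(((A / length)**nu - 1.0) / nu) / k

print("Estimated age for a 110 mm foot:", round(age_from_length(110.0, *popt), 1), "days")
```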
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kojima, Fumio
1988-01-01
The identification of the geometrical structure of the system boundary for a two-dimensional diffusion system is reported. The domain identification problem treated here is converted into an optimization problem based on a fit-to-data criterion and theoretical convergence results for approximate identification techniques are discussed. Results of numerical experiments to demonstrate the efficacy of the theoretical ideas are reported.
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-08-08
The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth.
NASA Astrophysics Data System (ADS)
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
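A compact illustration of the GLUE idea for a constant (first-order) inactivation model, log10(C/C0) = −k·t: sample candidate rate constants, score them with an informal likelihood, and keep the behavioural subset. The observations, likelihood choice, and acceptance threshold are placeholders, not the study's values.

```python
# GLUE-style Monte Carlo parameter estimation for a first-order inactivation model.
import numpy as np

t_days = np.array([0, 7, 14, 28, 42, 56], dtype=float)
obs_log_reduction = np.array([0.0, -0.35, -0.70, -1.45, -2.05, -2.80])  # made-up data

rng = np.random.default_rng(42)
k_samples = rng.uniform(0.0, 0.2, size=20000)              # Monte Carlo draws of k (1/day)

# Informal likelihood: inverse sum of squared errors (one common GLUE choice).
sse = np.array([np.sum((obs_log_reduction + k * t_days)**2) for k in k_samples])
likelihood = 1.0 / sse

behavioural = likelihood > np.quantile(likelihood, 0.95)   # keep the best 5% of draws
weights = likelihood[behavioural] / likelihood[behavioural].sum()
k_best = k_samples[behavioural]

order = np.argsort(k_best)
k_median = np.interp(0.5, np.cumsum(weights[order]), k_best[order])
lo, hi = np.quantile(k_best, [0.05, 0.95])
print(f"k (weighted median) = {k_median:.4f} 1/day, behavioural range = [{lo:.4f}, {hi:.4f}]")
```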
Warenghem, Marc; Henninot, Jean François; Blach, Jean François; Buchnev, Oleksandr; Kaczmarek, Malgosia; Stchakovsky, Michel
2012-03-01
Spectroscopic ellipsometry is a technique especially well suited to measuring the effective optical properties of a composite material. However, when the sample is optically thick and anisotropic, the technique loses its accuracy for two reasons: anisotropy means that two parameters have to be determined (ordinary and extraordinary indices), and optical thickness means a large order of interference. In that case, several dielectric functions can emerge from the fitting procedure with a similar mean square error and no criterion to discriminate the right solution. In this paper, we develop a methodology to overcome that drawback. It combines ellipsometry with refractometry. The same sample is used in a total internal reflection (TIR) setup and in a spectroscopic ellipsometer. The number of parameters to be determined by the fitting procedure is reduced by analysing the two spectra, and the correct final solution is found by using the TIR results both as initial values for the parameters and as a check on the final dielectric function. A prefitting routine is developed to enter the right initial values into the fitting procedure and thus approach the right solution. As an example, this methodology is used to analyse the optical properties of BaTiO3 nanoparticles embedded in a nematic liquid crystal. Such a methodology can also be used to test experimentally the validity of mixing laws, since ellipsometry gives the effective dielectric function, which can then be compared to the dielectric functions of the components of the mixture, as shown for the example of the BaTiO3/nematic composite.
Albiero, Alberto Maria; Benato, Renato
2016-09-01
Complications are frequently reported when combining computer assisted flapless surgery with an immediate loaded prefabricated prosthesis. The authors have combined computer-assisted surgery with the intraoral welding technique to obtain a precise passive fit of the immediate loading prosthesis. An edentulous maxilla was rehabilitated with four computer assisted implants welded together intraorally and immediately loaded with a provisional restoration. A perfect passive fit of the metal framework was obtained that enabled proper osseointegration of implants. Computer assisted preoperative planning has been shown to be effective in reducing the intraoperative time of the intraoral welding technique. No complications were observed at 1 year follow-up. This guided-welded approach is useful to achieve a passive fit of the provisional prosthesis on the inserted implants the same day as the surgery, reducing intraoperative time with respect to the traditional intraoral welding technique. Copyright © 2015 John Wiley & Sons, Ltd. Copyright © 2015 John Wiley & Sons, Ltd.
Hernández, Emilio; Liedo, Pablo; Toledo, Jorge; Montoya, Pablo; Perales, Hugo; Ruiz-Montoya, Lorena
2017-12-05
The sterile insect technique uses males that have been mass-reared in a controlled environment. The insects, once released in the field, must compete to mate. However, the mass-rearing condition supposes a loss of fitness that will be noticeable by wild females. To compare the fitness of wild males and mass-reared males, three competition settings were established. In setting 1, wild males, mass-reared males and wild females were released in field cages. In setting 2, wild females and wild males were released without competition, and in setting 3, mass-reared males and mass-reared females were also released without competition. Male fitness was based on their mating success, fecundity, weight and longevity. The fitness of the females was measured based on weight and several demographic parameters. The highest percentage of mating was between wild males and wild females between 0800 and 0900 h in the competition condition, while the mass-reared males started one hour later. The successful wild males weighed more and showed longer mating times, greater longevity and a higher number of matings than the mass-reared males. Although the mass-reared males showed the lowest percentage of matings, their fecundity when mating with wild females indicated a high fitness. Since the survival and fecundity of wild females that mated with mass-reared males decreased to become similar to those of mass-reared females that mated with mass-reared males, females seem to be influenced by the type of male (wild or mass-reared). © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Sarzaeem, M M; Najafi, F; Razi, M; Najafi, M A
2014-07-01
The gold standard in ACL reconstructions has been the bone-patellar tendon-bone autograft fixed with interference screws. This prospective, randomized clinical trial aimed to compare two methods of fixation for BPTB grafts: press-fit fixation vs. interference screw, over a 12-month follow-up interval. 158 patients with an average age of 29.8 years, between 2011 and 2012, were treated for torn ACL. 82 patients underwent reconstruction with BPTB autograft with a press-fit fixation technique, and in 76 cases an interference screw was used. At the time of final follow-up, 71 patients in the press-fit group and 65 patients in the interference screw group were evaluated in terms of return to pre-injury activity level, pain, knee stability, range of motion, IKDC score and complications. At 12-month follow-up, 59 (83%) and 55 (85%) in the press-fit and screw groups, respectively, had good-to-excellent IKDC scores (p > 0.05). The mean laxity assessed using a KT-1000 arthrometer improved to 2.7 and 2.5 mm in the press-fit and screw groups, respectively. Regarding the Lachman and pivot shift tests, there was a statistically significant improvement in the integrity of the ACL in both groups, but no significant differences were noted between groups. There were no significant differences in terms of femur circumference difference, effusion, knee range of motion, pain and complications. The press-fit technique is an efficient procedure. Its outcome was comparable with the interference screw group. Furthermore, it offers unlimited bone-to-bone healing, no need for removal of hardware, ease of revision and cost effectiveness.
NASA Astrophysics Data System (ADS)
Song, Zhen; Moore, Kevin L.; Chen, YangQuan; Bahl, Vikas
2003-09-01
As an outgrowth of a series of projects focused on mobility of unmanned ground vehicles (UGV), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a "marsupial mothership" for the ODIS vehicles and performs coarse resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D-laser scanner based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in the range of the laser during the scan, the data is matched to a "bumper box" corresponding to where a car bumper is expected, resulting in a point cloud of data corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms are used to determine a line for the data in each stall's "bumper box." The fitting technique uses Hough based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with an acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real-time.
Peñarrocha-Oltra, David; Agustín-Panadero, Rubén; Bagán, Leticia; Giménez, Beatriz; Peñarrocha, María
2014-07-01
To describe a technique for registering the positions of multiple dental implants using a system based on photogrammetry. A case is presented in which a prosthetic treatment was performed using this technique. Three Euroteknika® dental implants were placed to rehabilitate a 55-year-old male patient with right posterior maxillary edentulism. Three months later, the positions of the implants were registered using a photogrammetry-based stereo-camera (PICcamera®). After processing patient and implant data, special abutments (PICabutment®) were screwed onto each implant. The PICcamera® was then used to capture images of the implant positions, automatically taking 150 images in less than 60 seconds. From this information a file was obtained describing the relative positions - angles and distances - of each implant in vector form. Information regarding the soft tissues was obtained from an alginate impression that was cast in plaster and scanned. A Cr-Co structure was obtained using CAD/CAM, and its passive fit was verified in the patient's mouth using the Sheffield test and the screw resistance test. Twelve months after loading, peri-implant tissues were healthy and no marginal bone loss was observed. The clinical application of this new system using photogrammetry to record the position of multiple dental implants facilitated the rehabilitation of a patient with posterior maxillary edentulism by means of a prosthesis with optimal fit. The prosthetic process was accurate, fast, simple to apply and comfortable for the patient.
Molecular Dynamic Simulations of Interaction of an AFM Probe with the Surface of an SCN Sample
NASA Technical Reports Server (NTRS)
Bune, Adris; Kaukler, William; Rose, M. Franklin (Technical Monitor)
2001-01-01
Molecular dynamics (MD) simulations are conducted in order to estimate the forces of probe-substrate interaction in the Atomic Force Microscope (AFM). First, a review of available molecular dynamics techniques is given. Implementation of the MD simulation is based on an object-oriented code developed at the University of Delft. Modeling of the sample material - succinonitrile (SCN) - is based on Lennard-Jones potentials. For the polystyrene probe, an atomic interaction potential is used. Due to the object-oriented structure of the code, modification of an atomic interaction potential is straightforward. Calculation of the melting temperature is used for validation of the code and of the interaction potentials. Various fitting parameters of the probe-substrate interaction potentials are considered, as potentials fitted to certain properties and temperature ranges may not be reliable for others. This research provides a theoretical foundation for the interpretation of actual measurements of interaction forces using the AFM.
Advanced Code-Division Multiplexers for Superconducting Detector Arrays
NASA Astrophysics Data System (ADS)
Irwin, K. D.; Cho, H. M.; Doriese, W. B.; Fowler, J. W.; Hilton, G. C.; Niemack, M. D.; Reintsema, C. D.; Schmidt, D. R.; Ullom, J. N.; Vale, L. R.
2012-06-01
Multiplexers based on the modulation of superconducting quantum interference devices are now regularly used in multi-kilopixel arrays of superconducting detectors for astrophysics, cosmology, and materials analysis. Over the next decade, much larger arrays will be needed. These larger arrays require new modulation techniques and compact multiplexer elements that fit within each pixel. We present a new in-focal-plane code-division multiplexer that provides multiplexing elements with the required scalability. This code-division multiplexer uses compact lithographic modulation elements that simultaneously multiplex both signal outputs and superconducting transition-edge sensor (TES) detector bias voltages. It eliminates the shunt resistor used to voltage bias TES detectors, greatly reduces power dissipation, allows different dc bias voltages for each TES, and makes all elements sufficiently compact to fit inside the detector pixel area. These in-focal plane code-division multiplexers can be combined with multi-GHz readout based on superconducting microresonators to scale to even larger arrays.
Electron Neutrino Appearance in the MINOS Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orchanian, Mhair-armen Hagop
2012-01-01
This thesis describes a search for νe appearance in the two-detector long-baseline MINOS neutrino experiment at Fermilab, based on a data set representing an exposure of 8.2×10²⁰ protons on the NuMI target. The analysis detailed herein represents an increase in sensitivity to the θ₁₃ mixing angle of approximately 25% over previous analyses, due to improvements in the event discriminant and fitting technique. Based on our observation, we constrain the value of θ₁₃ further, finding 2 sin²θ₂₃ sin²2θ₁₃ < 0.12 (0.20) at the 90% confidence level for δ_CP = 0 and the normal (inverted) neutrino mass hierarchy. The best-fit value is 2 sin²θ₂₃ sin²2θ₁₃ = 0.041 +0.047/−0.031 (0.079 +0.071/−0.053) under the same assumptions. We exclude the θ₁₃ = 0 hypothesis at the 89% confidence level.
Design of a dual band metamaterial absorber for Wi-Fi bands
NASA Astrophysics Data System (ADS)
Alkurt, Fatih Özkan; Baǧmancı, Mehmet; Karaaslan, Muharrem; Bakır, Mehmet; Altıntaş, Olcay; Karadaǧ, Faruk; Akgöl, Oǧuzhan; Ünal, Emin
2018-02-01
The goal of this work is the design and fabrication of a dual-band metamaterial-based absorber for Wireless Fidelity (Wi-Fi) bands. Wi-Fi has two operating frequencies, 2.45 GHz and 5 GHz. A dual-band absorber is proposed; the structure consists of two layered unit cells, with different sized square split ring (SSR) resonators located on each layer. Copper is used for the metal layer and resonator structure, and FR-4 is used as the substrate layer. The designed dual-band metamaterial absorber operates in the wireless frequency bands centered at 2.45 GHz and 5 GHz. Finite Integration Technique (FIT) based simulation software was used; according to the FIT-based simulation results, the absorption peak at 2.45 GHz is about 90%, while that at 5 GHz is near 99%. In addition, the proposed structure has potential for energy harvesting applications in future work.
O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin
2017-12-06
Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.
NASA Astrophysics Data System (ADS)
Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao
2018-07-01
In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step to obtain object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems and aim for accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve a more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and best-fitting template based on region-line primitive association analyses, are proposed. Automatic template generation and matching method for PVP extraction from VHR imagery are designed for concept and model validation. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independency and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.
2D Bayesian automated tilted-ring fitting of disc galaxies in large H I galaxy surveys: 2DBAT
NASA Astrophysics Data System (ADS)
Oh, Se-Heon; Staveley-Smith, Lister; Spekkens, Kristine; Kamphuis, Peter; Koribalski, Bärbel S.
2018-01-01
We present a novel algorithm based on a Bayesian method for 2D tilted-ring analysis of disc galaxy velocity fields. Compared to the conventional algorithms based on a chi-squared minimization procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters even with highly multimodal posterior distributions. Moreover, the Bayesian analysis, implemented via Markov Chain Monte Carlo sampling, only requires broad ranges of posterior distributions of the parameters, which makes the fitting procedure fully automated. This feature will be essential when performing kinematic analysis on the large number of resolved galaxies expected to be detected in neutral hydrogen (H I) surveys with the Square Kilometre Array and its pathfinders. The so-called 2D Bayesian Automated Tilted-ring fitter (2DBAT) implements Bayesian fits of 2D tilted-ring models in order to derive rotation curves of galaxies. We explore 2DBAT performance on (a) artificial H I data cubes built based on representative rotation curves of intermediate-mass and massive spiral galaxies, and (b) Australia Telescope Compact Array H I data from the Local Volume H I Survey. We find that 2DBAT works best for well-resolved galaxies with intermediate inclinations (20° < i < 70°), complementing 3D techniques better suited to modelling inclined galaxies.
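The Bayesian sampling idea (though not 2DBAT itself, which fits full 2D tilted-ring models) can be illustrated with a minimal Metropolis sampler applied to a simple arctan rotation-curve model; the mock data, priors, and proposal widths are assumptions.

```python
# Minimal Metropolis sketch: sample the posterior of an arctan rotation-curve model.
import numpy as np

rng = np.random.default_rng(7)
r_kpc = np.linspace(0.5, 15.0, 30)
v_true = 180.0 * (2.0 / np.pi) * np.arctan(r_kpc / 2.5)
v_obs = v_true + rng.normal(0.0, 8.0, r_kpc.size)
v_err = 8.0

def log_posterior(theta):
    v_flat, r_t = theta
    if not (0.0 < v_flat < 500.0 and 0.1 < r_t < 20.0):    # broad uniform priors
        return -np.inf
    model = v_flat * (2.0 / np.pi) * np.arctan(r_kpc / r_t)
    return -0.5 * np.sum(((v_obs - model) / v_err)**2)

theta = np.array([150.0, 1.0])
log_p = log_posterior(theta)
samples = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, [2.0, 0.1])
    log_p_new = log_posterior(proposal)
    if np.log(rng.uniform()) < log_p_new - log_p:           # Metropolis acceptance rule
        theta, log_p = proposal, log_p_new
    samples.append(theta.copy())

samples = np.array(samples[5000:])                          # discard burn-in
print("v_flat percentiles (16/50/84):", np.percentile(samples[:, 0], [16, 50, 84]).round(1))
print("r_t    percentiles (16/50/84):", np.percentile(samples[:, 1], [16, 50, 84]).round(2))
```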
Development and comparison of projection and image space 3D nodule insertion techniques
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan
2016-04-01
This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques. These techniques include projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference) and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules in CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
Fitting and Modeling in the ASC Data Analysis Environment
NASA Astrophysics Data System (ADS)
Doe, S.; Siemiginowska, A.; Joye, W.; McDowell, J.
As part of the AXAF Science Center (ASC) Data Analysis Environment, we will provide to the astronomical community a Fitting Application. We present a design of the application in this paper. Our design goal is to give the user the flexibility to use a variety of optimization techniques (Levenberg-Marquardt, maximum entropy, Monte Carlo, Powell, downhill simplex, CERN-Minuit, and simulated annealing) and fit statistics (χ², Cash, variance, and maximum likelihood); our modular design allows the user easily to add their own optimization techniques and/or fit statistics. We also present a comparison of the optimization techniques to be provided by the Application. The high spatial and spectral resolutions that will be obtained with AXAF instruments require a sophisticated data modeling capability. We will provide not only a suite of astronomical spatial and spectral source models, but also the capability of combining these models into source models of up to four data dimensions (i.e., into source functions f(E,x,y,t)). We will also provide tools to create instrument response models appropriate for each observation.
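A hedged sketch of the modular design described above, with fit statistics and optimizers as interchangeable components; the function names, the toy power-law model, and the mock data are illustrative and do not reflect the ASC application's actual interfaces.

```python
# Pluggable fit statistics and optimizers combined through a single fit() entry point.
import numpy as np
from scipy.optimize import minimize

def chi2_stat(data, model, errors):
    return np.sum(((data - model) / errors)**2)

def cash_stat(data, model, errors=None):
    # Cash statistic for Poisson data: 2 * sum(model - data * ln(model)).
    model = np.clip(model, 1e-12, None)
    return 2.0 * np.sum(model - data * np.log(model))

def fit(stat, optimizer, model_fn, params0, x, data, errors=None):
    objective = lambda p: stat(data, model_fn(x, p), errors)
    return optimizer(objective, params0)

# One possible optimizer wrapper (downhill simplex); others could be swapped in.
simplex = lambda obj, p0: minimize(obj, p0, method="Nelder-Mead").x

# Toy power-law model and mock counts.
model_fn = lambda x, p: p[0] * x**(-p[1])
x = np.linspace(1.0, 10.0, 50)
data = np.random.poisson(model_fn(x, [100.0, 1.5]))

print("chi2 best fit:", fit(chi2_stat, simplex, model_fn, [50.0, 1.0], x, data,
                            errors=np.sqrt(np.clip(data, 1, None))))
print("Cash best fit:", fit(cash_stat, simplex, model_fn, [50.0, 1.0], x, data))
```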
González, Javier; Shirodkar, S P; Ciancio, G
2011-04-01
The excision of large retroperitoneal masses poses a challenge for every surgeon. Sometimes the urologist must face situations that do not fit any conventional approach or technique previously described. Obtaining adequate exposure for safe and oncologically correct management of these masses is based, in many cases, on the mobilization of anatomically adjacent structures to generate a sufficient field in abdominal areas of difficult access. Complex visceral mobilization maneuvers derived from multivisceral transplantation organ procurement surgery provide ancillary techniques that, used properly, facilitate successful resolution. The main purpose of this paper is the description of these surgical maneuvers, which are essential to increase both exposure and vascular control in addressing the ever-dreaded high-volume retroperitoneal masses.
Quantifying the life-history response to increased male exposure in female Drosophila melanogaster.
Edward, Dominic A; Fricke, Claudia; Gerrard, Dave T; Chapman, Tracey
2011-02-01
Precise estimates of costs and benefits, the fitness economics, of mating are of key importance in understanding how selection shapes the coevolution of male and female mating traits. However, fitness is difficult to define and quantify. Here, we used a novel application of an established analytical technique to calculate individual- and population-based estimates of fitness-including those sensitive to the timing of reproduction-to measure the effects on females of increased exposure to males. Drosophila melanogaster females were exposed to high and low frequencies of contact with males, and life-history traits for each individual female were recorded. We then compared different fitness estimates to determine which of them best described the changes in life histories. We predicted that rate-sensitive estimates would be more accurate, as mating influences the rate of offspring production in this species. The results supported this prediction. Increased exposure to males led to significantly decreased fitness within declining but not stable or increasing populations. There was a net benefit of increased male exposure in expanding populations, despite a significant decrease in lifespan. The study shows how a more accurate description of fitness, and new insights can be achieved by considering individual life-history strategies within the context of population growth. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.
Random-growth urban model with geographical fitness
NASA Astrophysics Data System (ADS)
Kii, Masanobu; Akimoto, Keigo; Doi, Kenji
2012-12-01
This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
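A rough illustration of fitness-weighted preferential attachment applied to city growth. Unlike the model in the paper, growth here is allocated stochastically at every step and the fitness is restricted to positive values; all numbers are arbitrary.

```python
# Preferential attachment with per-city fitness: growth probability ∝ fitness × population.
import numpy as np

rng = np.random.default_rng(3)
n_steps, p_new_city = 20000, 0.02

populations = [10.0]
fitness = [rng.uniform(0.5, 1.5)]                 # positive "geographical fitness" per city

for _ in range(n_steps):
    if rng.uniform() < p_new_city:
        populations.append(10.0)                  # a new city is founded
        fitness.append(rng.uniform(0.5, 1.5))
    else:
        # Existing cities grow in proportion to fitness-weighted population.
        weights = np.array(fitness) * np.array(populations)
        idx = rng.choice(len(populations), p=weights / weights.sum())
        populations[idx] += 10.0

sizes = np.sort(np.array(populations))[::-1]
ranks = np.arange(1, sizes.size + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"{len(populations)} cities; rank-size (Zipf-style) slope ≈ {slope:.2f}")
```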
Just-in-Time Training of the Evidence-Based Public Health Framework, Oklahoma, 2016-2017.
Douglas, Malinda R; Lowry, Jon P; Morgan, Latricia A
2018-03-07
Training of practitioners on evidence-based public health has shown to be beneficial, yet overwhelming. Chunking information and proximate practical application are effective techniques to increase retention in adult learning. Evidence-based public health training for practitioners from African American and Hispanic/Latino community agencies and tribes/tribal nations incorporated these 2 techniques. The community-level practitioners alternated attending training and implementing the steps of the evidence-based public health framework as they planned state-funded programs. One year later, survey results showed that participants reported increased confidence in skills that were reinforced by practical and practiced application as compared with posttraining survey results. In addition, at 1 year, reported confidence in skills that were not fortified by proximate application decreased when compared with posttraining confidence levels. All 7 community programs successfully created individualized evidence-based action plans that included evidence-based practices and policies across socioecological levels that fit with the unique culture and climate of their own community.
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Steincamp, James W.; Stewart, Eric T.; Patton, Bruce W.; Pannell, William P.; Newby, Ronald L.; Coffman, Mark E.; Qualls, A. L.; Bancroft, S.; Molvik, Greg
2003-01-01
The Nuclear Electric Vehicle Optimization Toolset (NEVOT) optimizes the design of all major Nuclear Electric Propulsion (NEP) vehicle subsystems for a defined mission within constraints and optimization parameters chosen by a user. The tool uses a Genetic Algorithm (GA) search technique to combine subsystem designs and evaluate the fitness of the integrated design to fulfill a mission. The fitness of an individual is used within the GA to determine its probability of survival through successive generations in which the designs with low fitness are eliminated and replaced with combinations or mutations of designs with higher fitness. The program can find optimal solutions for different sets of fitness metrics without modification and can create and evaluate vehicle designs that might never be conceived of through traditional design techniques. It is anticipated that the flexible optimization methodology will expand present knowledge of the design trade-offs inherent in designing nuclear powered space vehicles and lead to improved NEP designs.
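A generic genetic-algorithm sketch of the fitness-proportional survival idea described above; the objective function is a stand-in, not an NEP vehicle model, and the population size, crossover, and mutation settings are arbitrary.

```python
# Genetic algorithm: fitness-proportional selection, one-point crossover, random mutation.
import numpy as np

rng = np.random.default_rng(11)

def fitness(x):
    # Placeholder "design fitness": peak at x = [1, 2, 3]; higher is better.
    return 1.0 / (1.0 + np.sum((x - np.array([1.0, 2.0, 3.0]))**2))

pop = rng.uniform(-5.0, 5.0, size=(40, 3))        # 40 candidate designs, 3 genes each

for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    probs = scores / scores.sum()
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]   # fitness-proportional survival
    children = parents.copy()
    half = len(pop) // 2
    cut = rng.integers(1, 3)                       # one-point crossover position
    children[:half, cut:], children[half:, cut:] = parents[half:, cut:], parents[:half, cut:]
    children += rng.normal(0.0, 0.1, children.shape) * (rng.uniform(size=children.shape) < 0.2)
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("Best design found:", best.round(2))
```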
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to not look for specific radon progeny values but rather transuranic activity. By using a method that will provide reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity taken over a 10-mo period, an optimization of the fitting function and quality tests for this purpose was attained.
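A conceptual sketch only: fit a filter count-rate decay curve as a short-lived radon-progeny component (single effective half-life) plus a constant long-lived component attributed to transuranics. The effective half-life, count rates, and two-component form are placeholders, not the paper's fitting function.

```python
# Two-component decay-curve fit: decaying radon-progeny term + constant long-lived term.
import numpy as np
from scipy.optimize import curve_fit

def decay_model(t_hr, rn_initial, tru_rate, half_life_hr=0.65):
    """Count rate = radon-progeny term with an assumed effective half-life + constant TRU term."""
    return rn_initial * np.exp(-np.log(2.0) * t_hr / half_life_hr) + tru_rate

t_hr = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
counts_cpm = np.array([420.0, 330.0, 205.0, 82.0, 18.0, 6.5, 5.8])   # synthetic data

popt, pcov = curve_fit(decay_model, t_hr, counts_cpm, p0=[500.0, 5.0])
perr = np.sqrt(np.diag(pcov))
print(f"Long-lived (TRU-like) component: {popt[1]:.1f} ± {perr[1]:.1f} cpm")
```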
Comparative research on activation technique for GaAs photocathodes
NASA Astrophysics Data System (ADS)
Chen, Liang; Qian, Yunsheng; Chang, Benkang; Chen, Xinlong; Yang, Rui
2012-03-01
The properties of GaAs photocathodes mainly depend on the material design and the activation technique. In earlier research, high-low temperature two-step activation was shown to yield higher quantum efficiency than high-temperature single-step activation. However, the variations of the surface barriers for the two activation techniques have not been well studied, so the best activation temperature, Cs-O ratio and activation time for the two-step technique have not been well established. Because the surface photovoltage spectroscopy (SPS) measured before activation depends only on the body parameters of the GaAs photocathode, such as the electron diffusion length, whereas the spectral response current (SRC) after activation depends on both body parameters and surface barriers, the surface escape probability (SEP) can be fitted by comparing the SPS before activation with the SRC after activation. By solving the Schrödinger equation for the tunneling process through the surface barriers, the widths and heights of surface barriers I and II can be fitted from the SEP curves. The fitting results were verified and analyzed by quantitative angle-dependent X-ray photoelectron spectroscopy (ADXPS), which can also probe the surface chemical composition, atomic concentration percentages and layer thickness of GaAs photocathodes. This comparative approach of fitting the surface-barrier parameters from the SPS before activation and the SRC after activation provides a practical real-time, in-system method for studying activation techniques.
NASA Astrophysics Data System (ADS)
Rollett, T.; Möstl, C.; Isavnin, A.; Davies, J. A.; Kubicka, M.; Amerstorfer, U. V.; Harrison, R. A.
2016-06-01
In this study, we present a new method for forecasting arrival times and speeds of coronal mass ejections (CMEs) at any location in the inner heliosphere. This new approach enables the adoption of a highly flexible geometrical shape for the CME front, with an adjustable angular width and an adjustable radius of curvature of its leading edge, i.e., the assumed geometry is elliptical. Using, as input, Solar TErrestrial RElations Observatory (STEREO) heliospheric imager (HI) observations, a new elliptic conversion (ElCon) method is introduced and combined with drag-based model (DBM) fitting to quantify the deceleration or acceleration experienced by CMEs during propagation. The result is then used as input for the Ellipse Evolution Model (ElEvo). Together, ElCon, DBM fitting, and ElEvo form the novel ElEvoHI forecasting utility. To demonstrate the applicability of ElEvoHI, we forecast the arrival times and speeds of 21 CMEs remotely observed from STEREO/HI and compare them to in situ arrival times and speeds at 1 AU. Compared to the commonly used STEREO/HI fitting techniques (Fixed-ϕ, Harmonic Mean, and Self-similar Expansion fitting), ElEvoHI improves the arrival time forecast by about 2 hr, to ±6.5 hr, and the arrival speed forecast by ≈250 km s-1, to ±53 km s-1, depending on the ellipse aspect ratio assumed. In particular, the remarkable improvement of the arrival speed prediction is potentially beneficial for predicting geomagnetic storm strength at Earth.
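The DBM fitting step can be sketched with the standard closed-form drag-based model (the Vršnak-type decelerating solution is assumed here; the actual ElEvoHI code and its inputs are not reproduced). All numerical values below are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def dbm_distance(t, r0, v0, gamma, w=400e3):
    # Closed-form drag-based model for a decelerating CME (v0 > w):
    # r(t) = (1/gamma) * ln(1 + gamma*(v0 - w)*t) + w*t + r0
    # gamma: drag parameter [1/m], w: ambient solar-wind speed [m/s]
    return (1.0 / gamma) * np.log(1.0 + gamma * (v0 - w) * t) + w * t + r0

# Hypothetical time-distance track (s, m), e.g. derived from HI elongations
t = np.linspace(0.0, 30.0, 16) * 3600.0
r_obs = dbm_distance(t, 30 * 6.96e8, 900e3, 2e-11) + np.random.normal(0.0, 1e8, t.size)

bounds = ([1e10, 4.5e5, 1e-12], [5e10, 3e6, 1e-9])   # keep v0 > w and gamma > 0
popt, _ = curve_fit(dbm_distance, t, r_obs, p0=[2e10, 8e5, 1e-11], bounds=bounds)
print("r0 [Rsun] = %.1f, v0 [km/s] = %.0f, gamma [1/km] = %.2e"
      % (popt[0] / 6.96e8, popt[1] / 1e3, popt[2] * 1e3))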
Extracting harmonic signal from a chaotic background with local linear model
NASA Astrophysics Data System (ADS)
Li, Chenlong; Su, Liyun
2017-02-01
In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods based on a local linear (LL) model are put forward. The LL model has been extensively studied and successfully applied to fitting and forecasting chaotic signals in many fields, and we substantially enlarge its modeling capacity here. First, the short-term chaotic signal is predicted with the LL model and the fitting error is obtained. The frequencies are then detected from the fitting error by periodogram; a property of the fitting error, not addressed before, ensures that the detected frequencies match those of the harmonic signal. Second, a two-layer LL model is established to estimate the deterministic harmonic signal in the strong chaotic background. To do this simply and effectively, an efficient backfitting algorithm is developed to select and optimize the parameters that are difficult to search exhaustively. In this method, exploiting the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulations show that the two-layer LL model and its estimation technique are flexible enough to model the deterministic harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the backfitting algorithm converges within 3-5 iterations.
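The frequency-detection step (periodogram of the fitting error) can be sketched as below; the residual series here is a synthetic stand-in rather than the output of an actual local-linear predictor.

import numpy as np
from scipy.signal import periodogram

# err: one-step fitting error left after the local-linear chaotic-background model
# (stand-in: a 7 Hz harmonic buried in noise); fs: sampling frequency in Hz
fs = 100.0
t = np.arange(0, 20, 1.0 / fs)
err = 0.3 * np.sin(2 * np.pi * 7.0 * t) + np.random.normal(0, 0.2, t.size)

freqs, pxx = periodogram(err, fs=fs)
peak = freqs[np.argmax(pxx)]
print("dominant residual frequency ~ %.2f Hz" % peak)   # should recover ~7 Hz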
USDA-ARS?s Scientific Manuscript database
A study was conducted to compare nutrient flows determined by a reticular sampling technique with those made by sampling of digesta from the omasal canal. Six lactating dairy cows fitted with ruminal cannulas were used in a design with a 3 x 2 factorial arrangement of treatments and 4 periods. Trea...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venghaus, Florian; Eisfeld, Wolfgang, E-mail: wolfgang.eisfeld@uni-bielefeld.de
2016-03-21
Robust diabatization techniques are key for the development of high-dimensional coupled potential energy surfaces (PESs) to be used in multi-state quantum dynamics simulations. In the present study we demonstrate that, besides the actual diabatization technique, common problems with the underlying electronic structure calculations can be the reason why a diabatization fails. After giving a short review of the theoretical background of diabatization, we propose a method based on block-diagonalization to analyse the electronic structure data. This analysis tool can be used in three different ways: First, it allows one to detect issues with the ab initio reference data and is used to optimize the setup of the electronic structure calculations. Second, the data from the block-diagonalization are utilized for the development of optimally parametrized diabatic model matrices by identifying the most significant couplings. Third, the block-diagonalization data are used to fit the parameters of the diabatic model, which yields an optimal initial guess for the non-linear fitting required by standard or more advanced energy-based diabatization methods. The new approach is demonstrated by the diabatization of 9 electronic states of the propargyl radical, yielding fully coupled full-dimensional (12D) PESs in closed form.
Gao, Zhan; Desai, Jaydev P.
2009-01-01
This paper presents several experimental techniques and concepts in the process of measuring mechanical properties of very soft tissue in an ex vivo tensile test. Gravitational body force on very soft tissue causes pre-compression and results in a non-uniform initial deformation. The global Digital Image Correlation technique is used to measure the full field deformation behavior of liver tissue in uniaxial tension testing. A maximum stretching band is observed in the incremental strain field when a region of tissue passes from compression and enters a state of tension. A new method for estimating the zero strain state is proposed: the zero strain position is close to, but ahead of the position of the maximum stretching band, or in other words, the tangent of a nominal stress-stretch curve reaches minimum at λ ≳ 1. The approach, to identify zero strain by using maximum incremental strain, can be implemented in other types of image-based soft tissue analysis. The experimental results of ten samples from seven porcine livers are presented and material parameters for the Ogden model fit are obtained. The finite element simulation based on the fitted model confirms the effect of gravity on the deformation of very soft tissue and validates our approach. PMID:20015676
Lam, Philippe; Stern, Al
2010-01-01
We developed several techniques for visualizing the fit between a stopper and a vial in the critical flange area, a location typically hidden from view. Using these tools, it is possible to identify surfaces involved in forming the initial seal immediately after stopper insertion. We present examples illustrating important design elements that can contribute to forming a robust primary package. These techniques can also be used for component screening by facilitating the identification of combinations that do not fit well together so that they can be eliminated early in the selection process.
Ravindran, Sindhu; Jambek, Asral Bahari; Muthusamy, Hariharan; Neoh, Siew-Chin
2015-01-01
A novel clinical decision support system is proposed in this paper for evaluating fetal well-being from the cardiotocogram (CTG) dataset through an Improved Adaptive Genetic Algorithm (IAGA) and an Extreme Learning Machine (ELM). IAGA employs a new scaling technique (called sigma scaling) to avoid premature convergence and applies adaptive crossover and mutation techniques with masking concepts to enhance population diversity. This search algorithm also utilizes three different fitness functions (two single-objective fitness functions and a multi-objective fitness function) to assess its performance. The classification results show that a promising classification accuracy of 94% is obtained with an optimal feature subset using IAGA. The classification results are also compared with those of other feature reduction techniques to substantiate its exhaustive search towards the global optimum. Besides, five other benchmark datasets are used to gauge the strength of the proposed IAGA algorithm.
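The paper's exact sigma-scaling formula is not given in the abstract; one common (Goldberg-style) form is sketched below as an assumption, not as the authors' implementation.

import numpy as np

def sigma_scale(fitness, c=2.0, floor=0.0):
    # Sigma scaling of raw fitness values: pulls selection pressure toward the
    # population mean to reduce premature convergence
    f = np.asarray(fitness, float)
    mu, sigma = f.mean(), f.std()
    if sigma == 0:
        return np.ones_like(f)               # all fitnesses equal: uniform selection
    scaled = 1.0 + (f - mu) / (c * sigma)
    return np.clip(scaled, floor, None)      # no negative selection weights

raw = [0.52, 0.71, 0.69, 0.95, 0.50]
print(sigma_scale(raw))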
Customization of stock eye prosthesis for a pediatric patient by a simplified technique.
Jurel, Sunit Kumar; Talwar, Naina; Chand, Pooran; Singh, Raghuwar D; Gupta, Durga Shanker
2012-05-01
The unfortunate loss or absence of an eye may be caused by a congenital defect, irreparable trauma, a tumor, or a blind eye. The role of the maxillofacial prosthodontist in fabricating an ocular prosthesis to restore facial symmetry and a normal appearance for the anophthalmic patient is therefore essential. A custom-made ocular prosthesis is an excellent alternative for people who lose an eye, especially at a young age; it offers acceptable fit, retention and esthetics but is technically difficult to fabricate. A stock eye, on the other hand, has a compromised fit and poor esthetics. This case report presents a simple technique for customizing a stock eye prosthesis to provide an accurate fit and acceptable esthetics. How to cite this article: Jurel SK, Talwar N, Chand P, Singh RD, Gupta DS. Customization of Stock Eye Prosthesis for a Pediatric Patient by a Simplified Technique. Int J Clin Pediatr Dent 2012;5(2):155-158.
Sparsity based terahertz reflective off-axis digital holography
NASA Astrophysics Data System (ADS)
Wan, Min; Muniraj, Inbarasan; Malallah, Ra'ed; Zhao, Liang; Ryle, James P.; Rong, Lu; Healy, John J.; Wang, Dayong; Sheridan, John T.
2017-05-01
Terahertz radiation lies between the microwave and infrared regions of the electromagnetic spectrum, with frequencies ranging from 0.1 to 10 THz and corresponding wavelengths from 3 mm down to 30 μm. In this paper, a continuous-wave terahertz off-axis digital holographic system is described. A Gaussian fitting method and image normalisation techniques were employed on the recorded hologram to improve the image resolution, and a synthesised contrast-enhanced hologram was then digitally constructed. Numerical reconstruction is achieved by applying the angular spectrum method to the filtered off-axis hologram. A sparsity-based compression technique is introduced before numerical reconstruction in order to reduce the dataset required for hologram reconstruction. Results show that a small fraction of the sparse dataset is sufficient to reconstruct the hologram with good image quality.
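The angular spectrum propagation step mentioned above is a standard FFT-based operation; a minimal sketch follows, with the filtered hologram field, pixel pitch and reconstruction distance as placeholder values rather than the authors' experimental parameters.

import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    # Propagate a complex field u0 by distance z using the angular spectrum method
    # (evanescent components are suppressed)
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)          # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: ~0.3 THz (wavelength ~1 mm), 0.2 mm pixels, 30 mm reconstruction distance
u0 = np.ones((256, 256), complex)                # placeholder filtered hologram field
u_z = angular_spectrum(u0, wavelength=1e-3, dx=0.2e-3, z=30e-3)
print(np.abs(u_z).max())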
Approach to spatial information security based on digital certificate
NASA Astrophysics Data System (ADS)
Cong, Shengri; Zhang, Kai; Chen, Baowen
2005-11-01
With the development of online geographic information system (GIS) applications and spatial information services, spatial information security is becoming more important. This work introduces digital certificates and authorization schemes into GIS to protect crucial spatial information, combining the techniques of role-based access control (RBAC), the public key infrastructure (PKI) and the privilege management infrastructure (PMI). We investigated the spatial information granularity suited for sensitivity marking and a digital certificate model that fits the needs of GIS security, based on a semantic analysis of spatial information. The result is secure, flexible, fine-grained data access in GIS built on widely adopted public-key technologies.
Chandra Observations of Associates of η Carinae. II. Spectra
NASA Astrophysics Data System (ADS)
Evans, Nancy Remage; Schlegel, Eric M.; Waldron, Wayne L.; Seward, Frederick D.; Krauss, Miriam I.; Nichols, Joy; Wolk, Scott J.
2004-09-01
The low-resolution X-ray spectra around η Car covering Trumpler 16 and part of Trumpler 14 have been extracted from a Chandra CCD ACIS image. Various analysis techniques have been applied to the spectra based on their count rates. The spectra with the greatest number of counts (HD 93162 = WR 25, HD 93129 AB, and HD 93250) have been fitted with a wind model, which uses several components with different temperatures and depths in the wind. Weaker spectra have been fitted with Raymond-Smith models. The weakest spectra are simply intercompared with strong spectra. In general, fits produce reasonable parameters based on knowledge of the extinction from optical studies and on the range of temperatures for high- and low-mass stars. Direct comparisons of spectra confirm the consistency of the fitting results and also hardness ratios for cases of unusually large extinction in the clusters. The spectra of the low-mass stars are harder than those of the more massive stars. Stars in the sequence evolving from the main sequence (HD 93250) through the system containing the O supergiant (HD 93129 AB) and then through the Wolf-Rayet stage (HD 93162), presumably ending in the extreme example of η Car, share the property of being unusually luminous and hard in X-rays. For these X-ray-luminous stars, their high mass and evolutionary status (from the very last stages of the main sequence and beyond) is the common feature. Their binary status is mixed, and their magnetic status is still uncertain. Based on observations made with the Chandra X-Ray Observatory.
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul
2014-09-01
In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using a batch filter based on particle filtering (PF). Two genetic operations, arithmetic crossover and residual mutation, are used for GRS in the PF-based batch filter (PF batch filter). For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used, and Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results from the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residual, the 3D error from an external orbit comparison, and the POD repeatability are analyzed for orbit quality assessment. The POD results are externally checked against NASA JPL's orbits obtained with entirely different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results, with mean root mean square (RMS) values at the level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at the level of 5 cm. For POD repeatability, represented as the standard deviations of post-fit residuals and 3D errors over repeated PODs, GRS yields 25% and 13% more robust estimation results than MRRS for the post-fit residual and 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.
Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization
Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali
2014-01-01
Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584
Design of refractive laser beam shapers to generate complex irradiance profiles
NASA Astrophysics Data System (ADS)
Li, Meijie; Meuret, Youri; Duerr, Fabian; Vervaeke, Michael; Thienpont, Hugo
2014-05-01
A Gaussian laser beam must be reshaped to a specific irradiance distribution in many applications in order to ensure optimal system performance. Refractive optics are commonly used for laser beam shaping; a refractive laser beam shaper is typically formed either by two plano-aspheric lenses or by one thick lens with two aspherical surfaces. Ray mapping is a general optical design technique for designing refractive beam shapers based on geometric optics. In principle this design technique allows any rotationally symmetric irradiance profile to be generated, yet in the literature ray mapping has mainly been developed to transform a Gaussian irradiance profile into a uniform one. For more complex profiles, especially those with low intensity in the inner region, such as a Dark Hollow Gaussian (DHG) irradiance profile, the ray mapping technique is not directly applicable in practice: the numerical effort of calculating the aspherical surface points and fitting a surface with sufficient accuracy increases considerably. In this work we evaluate different sampling approaches and surface fitting methods. This allows us to propose and demonstrate a comprehensive numerical approach to efficiently design refractive laser beam shapers that generate rotationally symmetric collimated beams with complex irradiance profiles. Ray tracing analysis for several complex irradiance profiles demonstrates excellent performance of the designed lenses and the versatility of our design procedure.
Efficient and robust analysis of complex scattering data under noise in microwave resonators.
Probst, S; Song, F B; Bushev, P A; Ustinov, A V; Weides, M
2015-02-01
Superconducting microwave resonators are reliable circuits widely used for detection and as test devices for material research. A reliable determination of their external and internal quality factors is crucial for many modern applications, which either require fast measurements or operate in the single photon regime with small signal to noise ratios. Here, we use the circle fit technique with diameter correction and provide a step by step guide for implementing an algorithm for robust fitting and calibration of complex resonator scattering data in the presence of noise. The speedup and robustness of the analysis are achieved by employing an algebraic rather than an iterative fit technique for the resonance circle.
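The non-iterative core of such an analysis is an algebraic circle fit to the complex scattering data; a Kasa-style sketch is given below as a stand-in (it omits the diameter correction and calibration steps described in the paper).

import numpy as np

def algebraic_circle_fit(z):
    # Kasa-style algebraic circle fit to complex S21 data z = x + iy:
    # solve x^2 + y^2 + A*x + B*y + C = 0 by linear least squares
    x, y = z.real, z.imag
    M = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (A, B, C), *_ = np.linalg.lstsq(M, b, rcond=None)
    xc, yc = -A / 2.0, -B / 2.0
    r = np.sqrt(xc ** 2 + yc ** 2 - C)
    return xc, yc, r

# Noisy synthetic resonance circle centered at (0.5, 0.3) with radius 0.2
phi = np.linspace(0, 2 * np.pi, 200)
z = (0.5 + 0.3j) + 0.2 * np.exp(1j * phi) \
    + 0.005 * (np.random.randn(200) + 1j * np.random.randn(200))
print(algebraic_circle_fit(z))   # ~ (0.5, 0.3, 0.2)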
Fire detection behind a wall by using microwave techniques
NASA Astrophysics Data System (ADS)
Alkurt, Fatih Özkan; Baǧmancı, Mehmet; Karaaslan, Muharrem; Bakır, Mehmet; Altıntaş, Olcay; Karadaǧ, Faruk; Akgöl, Oǧuzhan; Ünal, Emin
2018-02-01
In this work, detection of the location of a fire behind a wall using microwave techniques is illustrated. According to Planck's law, a blackbody emits electromagnetic radiation in the microwave region of the electromagnetic spectrum, and these emitted waves penetrate all materials except metals. The radiated waves can be detected using directional, high-gain antennas. The proposed design consists of a simple microstrip patch antenna and a 2×2 microstrip patch antenna array. FIT-based simulation results show that the 2×2 array antenna can absorb the power emitted by a fire source located behind a wall. This contribution can be inspirational for further work.
A Sludge Drum in the APNea System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, D.
1998-11-17
The assay of sludge drums pushes the APNea System to a definite extreme. Even though it seems clear that neutron-based assay should be the method of choice for sludge drums, the difficulties posed by this matrix push any NDA technique to its limits. Special emphasis is given here to the differential die-away technique, which appears to approach the desired sensitivity. A parallel analysis of ethafoam drums is presented, since the ethafoam matrix fits well within the operating range of the APNea System and, having been part of the early PDP trials, has been assayed by many in the NDA community.
Numerical solution of potential flow about arbitrary 2-dimensional multiple bodies
NASA Technical Reports Server (NTRS)
Thompson, J. F.; Thames, F. C.
1982-01-01
A procedure for the finite-difference numerical solution of the lifting potential flow about any number of arbitrarily shaped bodies is given. The solution is based on a technique of automatic numerical generation of a curvilinear coordinate system having coordinate lines coincident with the contours of all bodies in the field, regardless of their shapes and number. The effects of all numerical parameters involved are analyzed and appropriate values are recommended. Comparisons with analytic solutions for single Karman-Trefftz airfoils and a circular cylinder pair show excellent agreement. The technique of application of the boundary-fitted coordinate systems to the numerical solution of partial differential equations is illustrated.
Miao, Zewei; Xu, Ming; Lathrop, Richard G; Wang, Yufei
2009-02-01
A review of the literature revealed that a variety of methods are currently used for fitting net assimilation of CO2-chloroplastic CO2 concentration (A-Cc) curves, resulting in considerable differences in estimating the A-Cc parameters [including maximum ribulose 1.5-bisphosphate carboxylase/oxygenase (Rubisco) carboxylation rate (Vcmax), potential light saturated electron transport rate (Jmax), leaf dark respiration in the light (Rd), mesophyll conductance (gm) and triose-phosphate utilization (TPU)]. In this paper, we examined the impacts of fitting methods on the estimations of Vcmax, Jmax, TPU, Rd and gm using grid search and non-linear fitting techniques. Our results suggested that the fitting methods significantly affected the predictions of Rubisco-limited (Ac), ribulose 1,5-bisphosphate-limited (Aj) and TPU-limited (Ap) curves and leaf photosynthesis velocities because of the inconsistent estimate of Vcmax, Jmax, TPU, Rd and gm, but they barely influenced the Jmax : Vcmax, Vcmax : Rd and Jmax : TPU ratio. In terms of fitting accuracy, simplicity of fitting procedures and sample size requirement, we recommend to combine grid search and non-linear techniques to directly and simultaneously fit Vcmax, Jmax, TPU, Rd and gm with the whole A-Cc curve in contrast to the conventional method, which fits Vcmax, Rd or gm first and then solves for Vcmax, Jmax and/or TPU with V(cmax), Rd and/or gm held as constants.
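The recommended pattern (a coarse grid search to seed a simultaneous nonlinear fit of all parameters) can be sketched generically as below; the model function and data are placeholders, not the actual FvCB/A-Cc equations or measured gas-exchange data.

import numpy as np
from itertools import product
from scipy.optimize import curve_fit

def model(x, p1, p2, p3):
    # Placeholder saturating response standing in for the A-Cc model equations
    return p1 * x / (p2 + x) - p3

x = np.linspace(5, 80, 25)
y = model(x, 60.0, 15.0, 1.5) + np.random.normal(0, 0.5, x.size)

# 1) coarse grid search for a starting point over all parameters
grid = product(np.linspace(20, 100, 5), np.linspace(5, 40, 5), np.linspace(0, 3, 4))
best = min(grid, key=lambda p: np.sum((y - model(x, *p)) ** 2))

# 2) simultaneous nonlinear refinement of all parameters from that seed
popt, _ = curve_fit(model, x, y, p0=best)
print("grid seed:", best, "refined fit:", popt)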
A new methodology for vibration error compensation of optical encoders.
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. When the encoder operates under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behavior can be improved with different techniques that compensate for the error by processing the measurement signals. In this work a new ad hoc methodology is presented to compensate for the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy.
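A common way to fit the Lissajous figure of two distorted quadrature signals is an algebraic (Heydemann-style) ellipse fit by linear least squares, sketched below; the distortion values are synthetic and the paper's full correction and look-up-table procedure is not reproduced.

import numpy as np

def fit_ellipse(x, y):
    # Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    # to the Lissajous figure of the two encoder quadrature signals
    D = np.column_stack([x ** 2, x * y, y ** 2, x, y])
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coef   # coefficients from which gain, offset and phase errors can be derived

# Distorted quadrature signals (unequal gains, offsets, non-quadrature phase)
th = np.linspace(0, 2 * np.pi, 500)
x = 1.00 * np.cos(th) + 0.05
y = 0.92 * np.sin(th + 0.08) - 0.03
print(fit_ellipse(x, y))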
Improvement on Timing Accuracy of LIDAR for Remote Sensing
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.
2018-05-01
The traditional timing discrimination technique for laser rangefinding in remote sensing has limited measurement performance and a relatively large error, and cannot meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by repeatedly amplifying the received signal. The timing information is then sampled, and the timing points are fitted with algorithms in MATLAB software. Finally, the minimum timing error is calculated from the fitted function. In this way the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by the repeated amplification of the received signal and the parameter-fitting algorithm, and a timing accuracy of 4.63 ps is achieved.
Cooperative photometric redshift estimation
NASA Astrophysics Data System (ADS)
Cavuoti, S.; Tortora, C.; Brescia, M.; Longo, G.; Radovich, M.; Napolitano, N. R.; Amaro, V.; Vellucci, C.
2017-06-01
In modern galaxy surveys, photometric redshifts play a central role in a broad range of studies, from gravitational lensing and dark matter distribution to galaxy evolution. Using a dataset of ~25,000 galaxies from the second data release of the Kilo Degree Survey (KiDS) we obtain photometric redshifts with five different methods: (i) Random Forest, (ii) a Multi Layer Perceptron with the Quasi Newton Algorithm, (iii) a Multi Layer Perceptron with an optimization network based on the Levenberg-Marquardt learning rule, (iv) the Bayesian Photometric Redshift model (BPZ) and (v) a classical SED template fitting procedure (Le Phare). We show how SED fitting techniques can provide useful information on the galaxy spectral type, which can be used to improve the capability of machine learning methods by constraining systematic errors and reducing the occurrence of catastrophic outliers. We use such classification to train specialized regression estimators, demonstrating that this hybrid approach, involving SED fitting and machine learning in a single collaborative framework, is capable of improving the overall prediction accuracy of photometric redshifts.
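The routing idea (use the SED-fitting class to train one specialized regressor per spectral type) can be illustrated schematically with scikit-learn; the synthetic photometry, the class labels and the random-forest regressor are stand-ins, not the authors' KiDS data or pipeline.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
mags = rng.normal(20, 1, (3000, 5))                  # synthetic multi-band photometry
sed_class = rng.integers(0, 3, 3000)                 # spectral type from SED template fitting
z_spec = 0.1 * np.abs(mags[:, 0] - mags[:, 2]) + 0.05 * sed_class \
         + rng.normal(0, 0.02, 3000)                 # stand-in spectroscopic redshifts

# Train one specialized regressor per SED class
models = {}
for c in np.unique(sed_class):
    sel = sed_class == c
    models[c] = RandomForestRegressor(n_estimators=100, random_state=0).fit(mags[sel], z_spec[sel])

# Predict by routing each galaxy to the regressor of its SED class
z_phot = np.array([models[c].predict(m[None, :])[0] for m, c in zip(mags, sed_class)])
print("scatter:", np.std(z_phot - z_spec))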
Widuchowski, Wojciech; Widuchowska, Malgorzata; Koczy, Bogdan; Dragan, Szymon; Czamara, Andrzej; Tomaszewski, Wieslaw; Widuchowski, Jerzy
2012-06-27
When anterior cruciate ligament (ACL) reconstruction is to be performed, the decision regarding graft choice and fixation remains one of the most controversial, and multiple techniques for ACL reconstruction are available. To avoid the disadvantages related to fixation devices, a hardware-free, press-fit ACL reconstruction technique was developed. The aim of this study was to evaluate the long-term clinical outcome and osteoarthritis progression after ACL reconstruction with a central-third patellar-tendon autograft fixed to the femur by a press-fit technique. Fifty-two patients met the inclusion/exclusion criteria for this study. The patients were assessed preoperatively and at 15 years after surgery with the International Knee Documentation Committee (IKDC) Knee Ligament Evaluation Form, the Lysholm knee score, the Tegner activity scale and radiographs. Good overall clinical outcomes and self-reported assessments were documented, and remained good at 15 years. The mean Lysholm and Tegner scores improved from 59.7 ± 18.5 and 4.2 ± 1.0 preoperatively to 86.4 ± 5.6 (p = 0.004) and 6.9 ± 1.4 (p = 0.005), respectively, at follow-up. The IKDC subjective score improved from 60.1 ± 9.2 to 80.2 ± 8.1 (p = 0.003). According to the IKDC objective score, 75% of patients had normal or nearly normal knee joints at follow-up. Grade 0 or 1 results were seen in 85% of patients on laxity testing. Degenerative changes were found in 67% of patients. There was no correlation between arthritic changes and knee stability or subjective evaluation (p > 0.05). ACL reconstruction with a patellar tendon autograft fixed to the femur with the press-fit technique achieves good self-reported assessments and clinical ligament evaluation up to 15 years. Advantages of bone-patellar-tendon-bone (BPTB) press-fit fixation include unlimited bone-to-bone healing, cost effectiveness, avoidance of the disadvantages associated with hardware, and ease of revision surgery. The BPTB femoral press-fit fixation technique can be safely applied in clinical practice and enables patients to return to preinjury activities, including high-risk sports.
NASA Astrophysics Data System (ADS)
Lu, Yuzhen; Lu, Renfu
2017-05-01
Three-dimensional (3-D) shape information is valuable for fruit quality evaluation. This study aimed to develop phase analysis techniques for reconstructing the 3-D surface of fruit from pattern images acquired by a structured-illumination reflectance imaging (SIRI) system. Phase-shifted sinusoidal patterns, distorted by the fruit geometry, were acquired and processed through phase demodulation, phase unwrapping and other post-processing procedures to obtain phase difference maps relative to the phase of a reference plane. The phase maps were then transformed into height profiles and 3-D shapes in a world coordinate system based on phase-to-height and in-plane calibrations. A reference-plane-based approach, coupled with curve fitting using polynomials of order 3 or higher, was utilized for phase-to-height calibration, achieving superior accuracies with root-mean-squared errors (RMSEs) of 0.027-0.033 mm over a height measurement range of 0-91 mm. The 3rd-order polynomial curve fitting was further tested on two reference blocks with known heights, resulting in relative errors of 3.75% and 4.16%. In-plane calibration was performed by solving a linear system formed by a number of control points in a calibration object, which yielded an RMSE of 0.311 mm. Tests of the calibrated system for reconstructing the surface of apple samples showed that surface concavities (i.e., stem/calyx regions) could be easily discriminated from bruises in the phase difference maps, reconstructed height profiles and the 3-D shape of the apples. This study has laid a foundation for using SIRI for 3-D shape measurement, and thus expanded the capability of the technique for quality evaluation of horticultural products. Further research is needed to utilize the phase analysis techniques for stem/calyx detection of apples, and to optimize the phase demodulation and unwrapping algorithms for faster and more reliable detection.
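The phase demodulation and unwrapping steps can be sketched with a standard four-step phase-shifting scheme (the number of shifts and the fringe parameters below are assumptions; the paper's calibration chain is not reproduced).

import numpy as np

def demodulate_phase(i1, i2, i3, i4):
    # Wrapped phase from four sinusoidal patterns shifted by 0, 90, 180, 270 degrees
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic 1-D fringes distorted by an object-induced phase
x = np.linspace(0, 8 * np.pi, 512)
obj = 1.5 * np.exp(-((x - 12.0) ** 2) / 10.0)            # stand-in height-induced phase
shots = [1 + np.cos(x + obj + k * np.pi / 2) for k in range(4)]

wrapped = demodulate_phase(*shots)
unwrapped = np.unwrap(wrapped)
reference = np.unwrap(demodulate_phase(*[1 + np.cos(x + k * np.pi / 2) for k in range(4)]))
phase_diff = unwrapped - reference                        # proportional to height after calibration
print(phase_diff.max())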
NASA Astrophysics Data System (ADS)
Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.
2008-03-01
Segmentation of the lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold-based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low-dose CT scans. The purpose of this work is to segment the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and exploits the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. First, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is then selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selecting the best-fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF), and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.
Two imaging techniques for 3D quantification of pre-cementation space for CAD/CAM crowns.
Rungruanganunt, Patchanee; Kelly, J Robert; Adams, Douglas J
2010-12-01
Internal three-dimensional (3D) "fit" of prostheses to prepared teeth is likely more important clinically than "fit" judged only at the level of the margin (i.e. marginal "opening"). This work evaluates two techniques for quantitatively defining 3D "fit", both using pre-cementation space impressions: X-ray microcomputed tomography (micro-CT) and quantitative optical analysis. Both techniques are of interest for comparison of CAD/CAM system capabilities and for documenting "fit" as part of clinical studies. Pre-cementation space impressions were taken of a single zirconia coping on its die using a low viscosity poly(vinyl siloxane) impression material. Calibration specimens of this material were fabricated between the measuring platens of a micrometre. Both calibration curves and pre-cementation space impression data sets were obtained by examination using micro-CT and quantitative optical analysis. Regression analysis was used to compare calibration curves with calibration sets. Micro-CT calibration data showed tighter 95% confidence intervals and was able to measure over a wider thickness range than for the optical technique. Regions of interest (e.g., lingual, cervical) were more easily analysed with optical image analysis and this technique was more suitable for extremely thin impression walls (<10-15μm). Specimen preparation is easier for micro-CT and segmentation parameters appeared to capture dimensions accurately. Both micro-CT and the optical method can be used to quantify the thickness of pre-cementation space impressions. Each has advantages and limitations but either technique has the potential for use as part of clinical studies or CAD/CAM protocol optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study sought to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten working models for the DMLS technique. A total of twenty metal copings were fabricated using the two production techniques, conventional lost-wax and DMLS, with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of each coping was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe, and each sectioned sample was then analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by unpaired t-test and analysis of variance (ANOVA). The results reveal no significant difference in the marginal gap of conventional and DMLS copings (P > 0.05) by ANOVA. The mean internal gap of DMLS copings was significantly greater than that of conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings, while the marginal fit of copings fabricated by the two techniques showed no significant difference.
Some Improved Diagnostics for Failure of The Rasch Model.
ERIC Educational Resources Information Center
Molenaar, Ivo W.
1983-01-01
Goodness of fit tests for the Rasch model are typically large-sample, global measures. This paper offers suggestions for small-sample exploratory techniques for examining the fit of item data to the Rasch model. (Author/JKS)
Defining window-boundaries for genomic analyses using smoothing spline techniques
Beissinger, Timothy M.; Rosa, Guilherme J.M.; Kaeppler, Shawn M.; ...
2015-04-17
High-density genomic data are often analyzed by combining information over windows of adjacent markers. Interpreting data grouped in windows rather than at individual locations may increase statistical power, simplify computation, reduce sampling noise, and reduce the total number of tests performed. However, the use of adjacent-marker information can result in over- or under-smoothing, undesirable window boundary specifications, or highly correlated test statistics. We introduce a method for defining windows based on statistically guided breakpoints in the data, as a foundation for the analysis of multiple adjacent data points. This method involves first fitting a cubic smoothing spline to the data and then identifying the inflection points of the fitted spline, which serve as the boundaries of adjacent windows. This technique does not require prior knowledge of linkage disequilibrium, and therefore can be applied to data collected from individual or pooled sequencing experiments. Moreover, in contrast to existing methods, an arbitrary choice of window size is not necessary, since window sizes are determined empirically and allowed to vary along the genome.
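A minimal sketch of the described procedure (fit a cubic smoothing spline to a position-ordered statistic, then take the inflection points of the fit as window boundaries) is given below; the data and the smoothing parameter are arbitrary stand-ins rather than the authors' settings.

import numpy as np
from scipy.interpolate import UnivariateSpline

pos = np.arange(2000.0)                                              # marker positions (stand-in)
stat = np.sin(pos / 150.0) + np.random.normal(0.0, 0.1, pos.size)    # per-marker statistic

spline = UnivariateSpline(pos, stat, k=3, s=50.0)      # cubic smoothing spline (s chosen ad hoc)
curvature = spline.derivative(2)(pos)                  # second derivative of the fitted spline
boundaries = pos[np.where(np.diff(np.sign(curvature)) != 0)[0]]   # sign changes = inflection points
print("window boundaries near:", boundaries)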
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques
Barzaghi, Riccardo; De Gaetani, Carlo Iapige
2018-01-01
Global Navigation Satellite System (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D’Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit both pendulum data and GNSS data, with standard deviations of the residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description of the dam displacements, both in space and time, than the one based on pendulum observations. PMID:29498650
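The abstract does not specify the analytical model used; as an illustration only, a common choice for such displacement series (a linear trend plus an annual harmonic) can be fitted by linear least squares as sketched below, with synthetic data.

import numpy as np

t = np.linspace(0, 2.5, 300)                               # years
disp = 1.2 * np.sin(2 * np.pi * t + 0.4) + 0.3 * t \
       + np.random.normal(0, 0.5, t.size)                  # synthetic displacement series [mm]

# Design matrix: offset, trend, annual sine/cosine
A = np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(A, disp, rcond=None)
resid = disp - A @ coef
print("std of residuals: %.2f mm" % resid.std())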
Captive and field-tested radio attachments for bald eagles
Buehler, D.A.; Fraser, J.D.; Fuller, M.R.; McAllister, L.S.; Seegar, J.K.D.
1995-01-01
The effects of two radio transmitter attachment techniques on captive Bald Eagles (Haliaeetus leucocephalus), and of one attachment technique on wild Bald Eagles, were studied. A Y-attachment method with a 160-g dummy transmitter was less apt to cause tissue damage on captive birds than an X-attachment method, and loosely fitted transmitters caused less damage than tightly fitted transmitters. Annual survival of wild birds fitted with 65-g transmitters via an X attachment was estimated at 90-95%. As a result of high survival, only five wild birds marked as nestlings were recovered. Two of these birds had superficial pressure sores from tight-fitting harnesses. It is recommended that a 1.3-cm space be left between the transmitter and the bird's back when radio-tagging post-fledging Bald Eagles. Additional space, perhaps up to 2.5 cm, is required for nestlings to allow for added growth and development.
Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad
2014-01-01
Abundant resources and techniques have been used for complete-coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. This study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment using conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites and the marginal gap measurements were documented. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences. Marginal gaps were evaluated by t-test, and analysis of variance and post-hoc analysis were used to compare the two groups as well as the three subgroups. The measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with both the conventional and the accelerated casting technique. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique; the marginal fit between the two casting techniques showed no statistical difference.
Ivanov, R; Marín, E; Villa, J; Aguilar, C Hernández; Pacheco, A Domínguez; Garrido, S Hernández
2016-02-01
In a recent paper published in this journal [R. Ivanov et al., Rev. Sci. Instrum. 86, 064902 (2015)], a methodology free of fitting procedures for determining the thermal effusivity of liquids using the electropyroelectric technique was reported. Here the same measurement principle is extended to the well-known photopyroelectric technique. The theoretical basis and experimental basis of the method are presented and its usefulness is demonstrated with measurements on test samples.
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low-enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active-mode calibration of the Uranium Neutron Collar - Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate the model parameters of the nonlinear Padé equation, which is traditionally used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. We find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
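The exact Padé form used for UNCL calibration is not reproduced in the abstract; the nonlinear-versus-linearized comparison can be illustrated with a simple rational (Padé-type) response R(m) = a*m/(1 + b*m), which linearizes as m/R = 1/a + (b/a)*m. The data below are synthetic.

import numpy as np
from scipy.optimize import curve_fit

def pade(m, a, b):
    # Simple rational (Pade-type) calibration curve: coincidence rate vs 235U linear density
    return a * m / (1.0 + b * m)

m = np.linspace(5, 60, 12)                                              # illustrative linear densities
rate = pade(m, 2.0, 0.01) * (1 + np.random.normal(0, 0.02, m.size))     # noisy coincidence rates

# Nonlinear fit of the rational form
popt_nl, _ = curve_fit(pade, m, rate, p0=[1.0, 0.005])

# Linearized fit after transforming to m/R = 1/a + (b/a)*m
slope, intercept = np.polyfit(m, m / rate, 1)
a_lin, b_lin = 1.0 / intercept, slope / intercept
print("nonlinear:", popt_nl, " linearized:", (a_lin, b_lin))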
Stochastic approach to data analysis in fluorescence correlation spectroscopy.
Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo
2006-09-21
Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts arise. For known fit models and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, a procedure is needed that treats FCS data fitting as a black box and generates reliable fit parameters with accuracy for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and, at the same time, computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with a triplet state, along with the statistical study and the goodness-of-fit criterion for PGSL. The robustness of PGSL for parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as initial guesses to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
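PGSL itself is not packaged in SciPy; as a stand-in for the described hybrid (global stochastic search feeding initial guesses to a local fit), the sketch below uses differential evolution followed by least-squares refinement on a deliberately simplified one-component FCS-like model. The model form and all values are assumptions, not the authors' analysis.

import numpy as np
from scipy.optimize import differential_evolution, least_squares

def g_model(tau, N, tau_d):
    # Simplified one-component 2-D diffusion FCS autocorrelation (no triplet term)
    return 1.0 / (N * (1.0 + tau / tau_d))

tau = np.logspace(-6, 0, 80)
data = g_model(tau, 2.5, 1e-3) + np.random.normal(0, 0.005, tau.size)

# 1) global stochastic search (stand-in for PGSL): no initial guesses required
cost = lambda p: np.sum((data - g_model(tau, *p)) ** 2)
global_fit = differential_evolution(cost, bounds=[(0.1, 100.0), (1e-6, 1.0)], seed=1)

# 2) local refinement seeded by the global result (analogue of the PGSL -> ML hybrid)
local_fit = least_squares(lambda p: data - g_model(tau, *p), x0=global_fit.x)
print("global:", global_fit.x, "refined:", local_fit.x)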
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry requires accurate and efficient computational methods, and different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications, while in atmospheric applications terrain-fitted single-grid techniques are in common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. The grid generation can be tedious, and special attention must be paid to the numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
The training and learning process of transseptal puncture using a modified technique.
Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom
2013-12-01
As transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, its technique, first reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique which uses only a coronary sinus catheter as the landmark to accomplish TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Following the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization, with one experienced trainer as a controller. We analysed the following parameters: single-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 patients (82%), a second attempt was successful in 11 (12%), and in 5 patients the interatrial septum could not be punctured. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves to TS puncture data. The study demonstrated that this technique is a simple, safe, economic, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures will be needed for a trainee to pass the steepest area of the learning curve.
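The specific inverse-curve form used for the learning-curve analysis is not stated in the abstract; a generic inverse model T(n) = plateau + b/n fitted to per-case procedure times, as sketched below with synthetic data, illustrates how a plateau and a learning rate can be estimated.

import numpy as np
from scipy.optimize import curve_fit

def inverse_curve(n, plateau, b):
    # Generic inverse learning curve: procedure time vs case number
    return plateau + b / n

cases = np.arange(1, 41)
times = inverse_curve(cases, 1.2, 10.0) + np.random.normal(0, 0.4, cases.size)   # minutes

popt, _ = curve_fit(inverse_curve, cases, times, p0=[2.0, 5.0])
print("estimated plateau: %.2f min, initial excess: %.1f min" % (popt[0], popt[1]))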
Németh, Károly; Chapman, Karena W; Balasubramanian, Mahalingam; Shyam, Badri; Chupas, Peter J; Heald, Steve M; Newville, Matt; Klingler, Robert J; Winans, Randall E; Almer, Jonathan D; Sandi, Giselle; Srajer, George
2012-02-21
An efficient implementation of simultaneous reverse Monte Carlo (RMC) modeling of pair distribution function (PDF) and EXAFS spectra is reported. This implementation is an extension of the technique established by Krayzman et al. [J. Appl. Cryst. 42, 867 (2009)] in the sense that it enables simultaneous real-space fitting of x-ray PDF with accurate treatment of Q-dependence of the scattering cross-sections and EXAFS with multiple photoelectron scattering included. The extension also allows for atom swaps during EXAFS fits thereby enabling modeling the effects of chemical disorder, such as migrating atoms and vacancies. Significant acceleration of EXAFS computation is achieved via discretization of effective path lengths and subsequent reduction of operation counts. The validity and accuracy of the approach are illustrated on small atomic clusters and on 5500-9000 atom models of bcc-Fe and α-Fe(2)O(3). The accuracy gains of combined simultaneous EXAFS and PDF fits are pointed out against PDF-only and EXAFS-only RMC fits. Our modeling approach may be widely used in PDF and EXAFS based investigations of disordered materials. © 2012 American Institute of Physics.
A New Stellar Atmosphere Grid and Comparisons with HST/STIS CALSPEC Flux Distributions
NASA Astrophysics Data System (ADS)
Bohlin, Ralph C.; Mészáros, Szabolcs; Fleming, Scott W.; Gordon, Karl D.; Koekemoer, Anton M.; Kovács, József
2017-05-01
The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli & Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz & Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new, efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1 and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).
Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.
Schneider, Martin; Iskander, D Robert; Collins, Michael J
2009-02-01
High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
NASA Technical Reports Server (NTRS)
Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.
1984-01-01
A numerical procedure is presented for analyzing a wide variety of heat conduction problems in multilayered bodies having complex geometry. The method is based on a finite difference solution of the heat conduction equation using a body fitted coordinate system transformation. Solution techniques are described for steady and transient problems with and without internal energy generation. Results are found to compare favorably with several well known solutions.
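For orientation, a minimal sketch of an explicit finite-difference (FTCS) solution of one-dimensional transient heat conduction is given below; it assumes a single homogeneous layer and omits the body-fitted coordinate transformation, multilayer interfaces, and internal energy generation treated in the report.

```python
# Minimal sketch of an explicit (FTCS) finite-difference solution of 1D transient
# heat conduction; geometry transformation and layer interfaces are omitted.
import numpy as np

nx, alpha, dx, dt = 51, 1e-5, 0.01, 1.0    # nodes, diffusivity (m^2/s), grid (m), step (s)
assert alpha * dt / dx**2 <= 0.5           # explicit stability criterion

T = np.full(nx, 20.0)                      # initial temperature field (deg C)
T[0], T[-1] = 100.0, 20.0                  # fixed boundary temperatures

for _ in range(5000):                      # march in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(T[::10])                             # sampled temperature profile
```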
Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George
2017-06-26
We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. The traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This would be a drawback in real time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards the higher order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real time measurement possible. However, existing automatic unsupervised segmentation techniques have poor performance when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection that allows ZPF to compute the self-conjugated phase to compensate for most aberrations.
Joyner, Damon; Wengreen, Heidi J; Aguilar, Sheryl S; Spruance, Lori Andersen; Morrill, Brooke A; Madden, Gregory J
2017-04-01
Previously published versions of the healthy eating "FIT Game" were administered by teachers in all grades at elementary schools. The present study evaluated whether the game would retain its efficacy if teachers were relieved of this task; presenting instead all game materials on visual displays in the school cafeteria. Participants were 572 children attending two Title 1 elementary schools (grades K-5). Following a no-intervention baseline period in which fruit and vegetable consumption were measured from food waste, the schools played the FIT Game. In the game, the children's vegetable consumption influenced events in a good versus evil narrative presented in comic book-formatted episodes in the school cafeteria. When daily vegetable-consumption goals were met, new FIT Game episodes were displayed. Game elements included a game narrative, competition, virtual currency, and limited player autonomy. The two intervention phases were separated by a second baseline phase (within-school reversal design). Simulation Modeling Analysis (a bootstrapping technique appropriate to within-group time-series designs) was used to evaluate whether vegetable consumption increased significantly above baseline levels in the FIT Game phases (P < 0.05). Vegetable consumption increased significantly from 21.3 g during the two baseline phases to 42.5 g during the FIT Game phases; a 99.9% increase. The Game did not significantly increase fruit consumption (which was not targeted for change), nor was there a decrease in fruit consumption. Labor-reductions in the FIT Game did not reduce its positive impact on healthy eating.
Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.
Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija
2017-12-02
This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.
Narcotics Misuse Victims: Is Physical Exercise for Their Fitness Needed
NASA Astrophysics Data System (ADS)
Tarigan, B.
2017-03-01
This research aims to find out whether physical exercise is needed to improve the physical fitness of narcotics misuse victims in the Social Rehabilitation Center Pamardi Putera, West Java Province. A survey method and field tests were applied in this research. The population was all members of the rehabilitation centre (BRSPP), and the sampling technique used was purposive sampling. The Indonesian Physical Fitness Test (TKJI) was used as the instrument. The results showed that the physical fitness of narcotics misuse victims is in the ‘low’ category, so regular and measurable physical activity is needed to develop their physical fitness.
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature, several different fit methods are used. In this work, frequently used methods and techniques to fit NTCP models to dose-response data for establishing dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods to estimate the confidence intervals of the model parameters are part of this study. From a critical-volume (CV) model with biologically realistic parameters a primary dataset was generated, serving as the reference for this study and describable by the NTCP model. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus, the 'real' spread in fit results due to statistical spreading in the data is obtained and has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using these methods: employing the covariance matrix, the jackknife method, and estimation directly from the likelihood landscape. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves that resulted from parameters that were within the one standard deviation region in the likelihood space was investigated. Thirdly, many parameter sets and their likelihood were used to create a likelihood-weighted probability distribution of the NTCP. It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
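The Monte Carlo procedure described above can be sketched generically as follows: fit a model to a primary binary-response dataset, simulate many secondary datasets from the fitted model, and refit each one to obtain the spread of the parameters. A plain logistic dose-response model stands in here for the critical-volume NTCP model, and all numbers are synthetic assumptions.

```python
# Minimal sketch: maximum-likelihood fit of a logistic dose-response model,
# followed by Monte Carlo regeneration of secondary datasets to estimate the
# spread of the fitted parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
dose = np.repeat(np.linspace(10, 80, 8), 25)           # Gy, 25 subjects per level
p_true = 1.0 / (1.0 + np.exp(-(dose - 50.0) / 5.0))
resp = rng.random(dose.size) < p_true                  # observed complications

def neg_log_likelihood(theta, dose, resp):
    d50, k = theta
    p = 1.0 / (1.0 + np.exp(-(dose - d50) / k))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (~resp) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[40.0, 3.0], args=(dose, resp),
               method="Nelder-Mead")

# Secondary datasets: simulate from the fitted model and refit each one.
d50_hat, k_hat = fit.x
samples = []
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(dose - d50_hat) / k_hat))
    resp_i = rng.random(dose.size) < p
    samples.append(minimize(neg_log_likelihood, fit.x, args=(dose, resp_i),
                            method="Nelder-Mead").x)
print(np.percentile(np.array(samples), [2.5, 97.5], axis=0))   # parameter spread
```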
NASA Astrophysics Data System (ADS)
Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.
2007-12-01
An accurate assessment of historical local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster will present a new method of tuning a time series when only a modest number of 14C dates are available. The method presented uses multitaper spectral estimation, and it specifically makes use of a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low order polynomials. Many of these models make use of only a small number of 14C dates, each of which is a point estimate with a significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method presented uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving and serving to confirm the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake located in British Columbia as examples.
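For readers unfamiliar with multitaper estimation, the sketch below computes a basic multitaper power spectrum with DPSS (Slepian) tapers; the coherence-based phase tuning against a reference series is not reproduced, and the toy proxy series and parameters are assumptions for illustration only.

```python
# Minimal sketch of multitaper spectral estimation with DPSS tapers: the power
# spectrum is estimated as the average of the tapered periodograms.
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(2)
n, dt = 1024, 1.0
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 1, n)   # toy proxy series

nw, k = 4.0, 7                                # time-bandwidth product, number of tapers
tapers = dpss(n, nw, Kmax=k)                  # shape (k, n)
spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
psd = spectra.mean(axis=0) * dt / n           # average over eigenspectra
freqs = np.fft.rfftfreq(n, dt)
print(freqs[np.argmax(psd)])                  # peak should be near 0.05
```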
Modeling T1 and T2 relaxation in bovine white matter
NASA Astrophysics Data System (ADS)
Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.
2015-10-01
The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D-non-negative least squares (NNLS) algorithm, a 2D-NNLS algorithm, and a four-pool model which was based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons and intra/extracellular water) and incorporated magnetization exchange between adjacent compartments. 1D-NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D-NNLS revealed significantly more complicated T1/T2 distributions for narrow bandwidth than for broad bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fit the results from both inversion conditions using the same parameters. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components having negative amplitude coefficients that cannot be correctly modeled with nonnegative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short T1 component might be used to quantify myelin water.
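A minimal sketch of the 1D-NNLS step is shown below: a multi-echo decay is fitted to a dictionary of exponentials with non-negative amplitudes. Exchange between pools, the central point of the four-pool model above, is deliberately ignored, and the two-pool synthetic signal and T2 values are illustrative assumptions.

```python
# Minimal sketch of 1D-NNLS T2 analysis of a multi-echo decay curve.
import numpy as np
from scipy.optimize import nnls

te = np.arange(1, 321, 10) * 1e-3                            # echo times (s)
t2_grid = np.logspace(np.log10(5e-3), np.log10(2.0), 120)    # candidate T2 values (s)

# Synthetic two-pool signal: ~15% myelin water (T2 ~ 15 ms), 85% IE water (~80 ms).
signal = 0.15 * np.exp(-te / 0.015) + 0.85 * np.exp(-te / 0.080)
signal += np.random.default_rng(3).normal(0, 1e-3, te.size)

A = np.exp(-te[:, None] / t2_grid[None, :])                  # decay basis matrix
amplitudes, resid = nnls(A, signal)                          # non-negative amplitudes
mwf = amplitudes[t2_grid < 0.04].sum() / amplitudes.sum()    # myelin water fraction
print(round(mwf, 3))
```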
Inter-technique validation of tropospheric slant total delays
NASA Astrophysics Data System (ADS)
Kačmařík, Michal; Douša, Jan; Dick, Galina; Zus, Florian; Brenot, Hugues; Möller, Gregor; Pottiaux, Eric; Kapłon, Jan; Hordyniec, Paweł; Václavovic, Pavel; Morel, Laurent
2017-06-01
An extensive validation of line-of-sight tropospheric slant total delays (STD) from Global Navigation Satellite Systems (GNSS), ray tracing in numerical weather prediction model (NWM) fields and microwave water vapour radiometer (WVR) is presented. Ten GNSS reference stations, including collocated sites, and almost 2 months of data from 2013, including severe weather events, were used for comparison. Seven institutions delivered their STDs based on GNSS observations processed using 5 software programs and 11 strategies, enabling comparison of rather different solutions and assessment of the impact of several aspects of the processing strategy. STDs from NWM ray tracing came from three institutions using three different NWMs and ray-tracing software. Inter-technique evaluations demonstrated a good mutual agreement of various GNSS STD solutions compared to NWM and WVR STDs. The mean bias among GNSS solutions not considering post-fit residuals in STDs was -0.6 mm for STDs scaled in the zenith direction and the mean standard deviation was 3.7 mm. Standard deviations of comparisons between GNSS and NWM ray-tracing solutions were typically 10 mm ± 2 mm (scaled in the zenith direction), depending on the NWM model and the GNSS station. Comparing GNSS versus WVR STDs reached standard deviations of 12 mm ± 2 mm also scaled in the zenith direction. Impacts of raw GNSS post-fit residuals and cleaned residuals on optimal reconstruction of GNSS STDs were evaluated in the inter-technique comparison and for GNSS at collocated sites. The use of raw post-fit residuals is not generally recommended as they might contain strong systematic effects, as demonstrated in the case of station LDB0. Simplified STDs reconstructed only from estimated GNSS tropospheric parameters, i.e. without applying post-fit residuals, performed the best in all the comparisons; however, they obviously missed part of the tropospheric signal due to non-linear temporal and spatial variations in the troposphere. Although the post-fit residuals cleaned of visible systematic errors generally showed a slightly worse performance, they contained significant tropospheric signal on top of the simplified model. They are thus recommended for the reconstruction of STDs, particularly during high variability in the troposphere. Cleaned residuals also showed a stable performance during ordinary days while containing promising information about the troposphere at low-elevation angles.
Kalman Filter Tracking on Parallel Architectures
NASA Astrophysics Data System (ADS)
Cerati, Giuseppe; Elmer, Peter; Lantz, Steven; McDermott, Kevin; Riley, Dan; Tadel, Matevž; Wittich, Peter; Würthwein, Frank; Yagil, Avi
2015-12-01
Power density constraints are limiting the performance improvements of modern CPUs. To address this we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within the power density limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle physics. At the High Luminosity LHC, for example, this will be by far the dominant problem. The need for greater parallelism has driven investigations of very different track finding techniques including Cellular Automata or returning to Hough Transform. The most common track finding techniques in use today are however those based on the Kalman Filter [2]. Significant experience has been accumulated with these techniques on real tracking detector systems, both in the trigger and offline. They are known to provide high physics performance, are robust and are exactly those being used today for the design of the tracking system for HL-LHC. Our previous investigations showed that, using optimized data structures, track fitting with Kalman Filter can achieve large speedup both with Intel Xeon and Xeon Phi. We report here our further progress towards an end-to-end track reconstruction algorithm fully exploiting vectorization and parallelization techniques in a realistic simulation setup.
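As a reference point for the technique named above, the following is a minimal Kalman filter for a one-dimensional constant-velocity "track" with noisy position measurements; it shows only the predict/update structure, not the vectorized, parallel implementation discussed in the paper.

```python
# Minimal sketch of a Kalman filter: state is (position, velocity), measurements
# are noisy positions; each step performs a predict and an update.
import numpy as np

dt, q, r = 1.0, 1e-3, 0.25
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
H = np.array([[1.0, 0.0]])                     # measurement model
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])   # process noise
R = np.array([[r]])                            # measurement noise

rng = np.random.default_rng(4)
truth = np.array([0.0, 1.0])
x, P = np.array([0.0, 0.0]), np.eye(2)

for _ in range(50):
    truth = F @ truth
    z = H @ truth + rng.normal(0, np.sqrt(r), 1)
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x, truth)                                # filtered state vs true state
```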
Chi-squared and C statistic minimization for low count per bin data
NASA Astrophysics Data System (ADS)
Nousek, John A.; Shue, David R.
1989-07-01
Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
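A hedged sketch of the comparison is given below: both the chi-squared statistic and the Cash C statistic are minimized with Powell's method for Poisson counts in bins. The power-law "spectrum" and all parameter values are illustrative assumptions, not the simulation set-up of the paper.

```python
# Minimal sketch contrasting chi-squared and Cash C-statistic minimization for
# low-count Poisson data, using Powell's method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
energy = np.linspace(1.0, 10.0, 40)                  # bin centres (arbitrary keV)
model = lambda p, e: p[0] * e ** (-p[1])             # amplitude, photon index
counts = rng.poisson(model([20.0, 1.7], energy))     # low counts per bin

def chi2(p, e, d):
    m = model(p, e)
    var = np.maximum(d, 1.0)                         # common (biased) weighting
    return np.sum((d - m) ** 2 / var)

def cash(p, e, d):
    m = np.maximum(model(p, e), 1e-12)
    return 2.0 * np.sum(m - d * np.log(m))           # C statistic (Cash 1979)

p0 = [10.0, 1.0]
fit_chi2 = minimize(chi2, p0, args=(energy, counts), method="Powell")
fit_cash = minimize(cash, p0, args=(energy, counts), method="Powell")
print(fit_chi2.x, fit_cash.x)                        # C-stat fit is typically less biased
```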
Ivanov, R; Marin, E; Villa, J; Gonzalez, E; Rodríguez, C I; Olvera, J E
2015-06-01
This paper describes an alternative methodology to determine the thermal effusivity of a liquid sample using the recently proposed electropyroelectric technique, without fitting the experimental data with a theoretical model and without having to know the pyroelectric-sensor-related parameters, as in most previously reported approaches. The method is not absolute, because a reference liquid with known thermal properties is needed. Experiments have been performed that demonstrate the high reliability and accuracy of the method with measurement uncertainties smaller than 3%.
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL model through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs that are similar to those of the 2PL for items simulated to fit the 2PL. However, for misfitting items,…
Guess, Petra C; Vagkopoulou, Thaleia; Zhang, Yu; Wolkewitz, Martin; Strub, Joerg R
2014-02-01
The aim of the study was to evaluate the marginal and internal fit of heat-pressed and CAD/CAM fabricated all-ceramic onlays before and after luting as well as after thermo-mechanical fatigue. Seventy-two caries-free, extracted human mandibular molars were randomly divided into three groups (n=24/group). All teeth received an onlay preparation with a mesio-occlusal-distal inlay cavity and an occlusal reduction of all cusps. Teeth were restored with heat-pressed IPS-e.max-Press* (IP, *Ivoclar-Vivadent) and Vita-PM9 (VP, Vita-Zahnfabrik) as well as CAD/CAM fabricated IPS-e.max-CAD* (IC, Cerec 3D/InLab/Sirona) all-ceramic materials. After cementation with a dual-polymerising resin cement (VariolinkII*), all restorations were subjected to mouth-motion fatigue (98 N, 1.2 million cycles; 5°C/55°C). Marginal fit discrepancies were examined on epoxy replicas before and after luting as well as after fatigue at 200× magnification. Internal fit was evaluated by a multiple sectioning technique. For the statistical analysis, a linear model accounting for repeated measurements was fitted. Adhesive cementation of onlays resulted in significantly increased marginal gap values in all groups, whereas thermo-mechanical fatigue had no effect. Marginal gap values of all test groups were equal after fatigue exposure. Internal discrepancies of CAD/CAM fabricated restorations were significantly higher than those of both press-manufactured onlays. Mean marginal gap values of the investigated onlays before and after luting as well as after fatigue were within the clinically acceptable range. Marginal fit was not affected by the investigated heat-press versus CAD/CAM fabrication technique. Press fabrication resulted in a superior internal fit of onlays as compared to the CAD/CAM technique. Clinical requirements of 100 μm for marginal fit were fulfilled by the heat-press as well as by the CAD/CAM fabricated all-ceramic onlays. Superior internal fit was observed with the heat-press manufacturing method. The impact of the present findings on the clinical long-term behaviour of differently fabricated all-ceramic onlays warrants further investigation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Guess, Petra C.; Vagopoulou, Thaleia; Zhang, Yu; Wolkewitz, Martin; Strub, Joerg R.
2015-01-01
Objectives The aim of the study was to evaluate the marginal and internal fit of heat-pressed and CAD/CAM fabricated all-ceramic onlays before and after luting as well as after thermo-mechanical fatigue. Materials and Methods Seventy-two caries-free, extracted human mandibular molars were randomly divided into three groups (n=24/group). All teeth received an onlay preparation with a mesio-occlusal-distal inlay cavity and an occlusal reduction of all cusps. Teeth were restored with heat-pressed IPS-e.max-Press* (IP, *Ivoclar-Vivadent) and Vita-PM9 (VP, Vita-Zahnfabrik) as well as CAD/CAM fabricated IPS-e.max-CAD* (IC, Cerec 3D/InLab/Sirona) all-ceramic materials. After cementation with a dual-polymerizing resin cement (VariolinkII*), all restorations were subjected to mouth-motion fatigue (98 N, 1.2 million cycles; 5°C/55°C). Marginal fit discrepancies were examined on epoxy replicas before and after luting as well as after fatigue at 200× magnification. Internal fit was evaluated by a multiple sectioning technique. For the statistical analysis, a linear model accounting for repeated measurements was fitted. Results Adhesive cementation of onlays resulted in significantly increased marginal gap values in all groups, whereas thermo-mechanical fatigue had no effect. Marginal gap values of all test groups were equal after fatigue exposure. Internal discrepancies of CAD/CAM fabricated restorations were significantly higher than those of both press-manufactured onlays. Conclusions Mean marginal gap values of the investigated onlays before and after luting as well as after fatigue were within the clinically acceptable range. Marginal fit was not affected by the investigated heat-press versus CAD/CAM fabrication technique. Press fabrication resulted in a superior internal fit of onlays as compared to the CAD/CAM technique. Clinical Relevance Clinical requirements of 100 μm for marginal fit were fulfilled by the heat-press as well as by the CAD/CAM fabricated all-ceramic onlays. Superior internal fit was observed with the heat-press manufacturing method. The impact of the present findings on the clinical long-term behaviour of differently fabricated all-ceramic onlays warrants further investigation. PMID:24161516
Barnett, Lisa M; Morgan, Philip J; van Beurden, Eric; Beard, John R
2008-01-01
Background The purpose of this paper was to investigate whether perceived sports competence mediates the relationship between childhood motor skill proficiency and subsequent adolescent physical activity and fitness. Methods In 2000, children's motor skill proficiency was assessed as part of a school-based physical activity intervention. In 2006/07, participants were followed up as part of the Physical Activity and Skills Study and completed assessments for perceived sports competence (Physical Self-Perception Profile), physical activity (Adolescent Physical Activity Recall Questionnaire) and cardiorespiratory fitness (Multistage Fitness Test). Structural equation modelling techniques were used to determine whether perceived sports competence mediated between childhood object control skill proficiency (composite score of kick, catch and overhand throw), and subsequent adolescent self-reported time in moderate-to-vigorous physical activity and cardiorespiratory fitness. Results Of 928 original intervention participants, 481 were located in 28 schools and 276 (57%) were assessed with at least one follow-up measure. Slightly more than half were female (52.4%) with a mean age of 16.4 years (range 14.2 to 18.3 yrs). Relevant assessments were completed by 250 (90.6%) students for the Physical Activity Model and 227 (82.3%) for the Fitness Model. Both hypothesised mediation models had a good fit to the observed data, with the Physical Activity Model accounting for 18% (R2 = 0.18) of physical activity variance and the Fitness Model accounting for 30% (R2 = 0.30) of fitness variance. Sex did not act as a moderator in either model. Conclusion Developing a high perceived sports competence through object control skill development in childhood is important for both boys and girls in determining adolescent physical activity participation and fitness. Our findings highlight the need for interventions to target and improve the perceived sports competence of youth. PMID:18687148
NASA Technical Reports Server (NTRS)
Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.
1987-01-01
The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 Å at high spectral resolution (45 mÅ), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about ±20°.
NASA Astrophysics Data System (ADS)
Demir, I.
2013-12-01
Recent developments in web technologies make it easy to manage and visualize large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to create realistic environments, and interact with data to gain insight from simulations and environmental observations. The floodplain simulation system is a web-based 3D interactive flood simulation environment to create real-world flooding scenarios. The simulation system provides a visually striking platform with realistic terrain information and water simulation. Students can create and modify predefined scenarios, control environmental parameters, and evaluate flood mitigation techniques. The web-based simulation system provides an environment for children and adults to learn about flooding, flood damage, and the effects of development and human activity in the floodplain. The system provides various scenarios customized to fit the age and education level of the users. This presentation provides an overview of the web-based flood simulation system, and demonstrates the capabilities of the system for various flooding and land use scenarios.
Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials.
Finke, Mareike; Billinger, Martin; Büchner, Andreas
Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern, the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine if CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed-loop systems. Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions and classification of the EEG data was performed with shrinkage linear discriminant analysis. Also, the impact of CI artifact removal on classification performance and the possibility to reuse a trained classifier in future sessions were evaluated. Overall, classification performance was above chance level for all participants although performance varied considerably between participants. Also, artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier causes only a small loss in classification performance. Our data provide first evidence that EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact correction weights can be reused without repeating the set-up procedure in every session, which makes the technique more easily applicable. With our present data, we can show successful classification of event-related cortical potential patterns in CI users. In the future, this has the potential to objectify and automate parts of CI fitting procedures.
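A minimal sketch of single-trial classification with shrinkage LDA, in the spirit of the analysis above, is shown below using scikit-learn; the feature matrix is synthetic, and the preprocessing and CI-artifact correction steps are omitted.

```python
# Minimal sketch of single-trial EEG classification with shrinkage LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_features = 200, 64                 # e.g. channels x time points, flattened
X = rng.normal(0, 1, (n_trials, n_features))
y = rng.integers(0, 2, n_trials)               # 0 = standard, 1 = deviant
X[y == 1, :8] += 0.6                           # inject a small "ERP" difference

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # Ledoit-Wolf shrinkage
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(scores.mean())                           # above-chance AUC expected
```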
Airborne geoid mapping of land and sea areas of East Malaysia
NASA Astrophysics Data System (ADS)
Jamil, H.; Kadir, M.; Forsberg, R.; Olesen, A.; Isa, M. N.; Rasidi, S.; Mohamed, A.; Chihat, Z.; Nielsen, E.; Majid, F.; Talib, K.; Aman, S.
2017-02-01
This paper describes the development of a new geoid-based vertical datum from airborne gravity data, by the Department of Survey and Mapping Malaysia, on land and in the South China Sea off the coast of the East Malaysia region, covering an area of about 610,000 square kilometres. More than 107,000 km of flight lines of airborne gravity data over land and marine areas of East Malaysia have been combined to provide a seamless land-to-sea gravity field coverage, with an estimated accuracy of better than 2.0 mGal. The iMAR-IMU processed gravity anomaly data has been used during a 2014-2016 airborne survey to extend a composite gravity solution across a number of minor gaps on selected lines, using a draping technique. The geoid computations were all done with the GRAVSOFT suite of programs from DTU-Space. EGM2008 augmented with a GOCE spherical harmonic model has been used up to spherical harmonic degree N = 720. The gravimetric geoid was first tied at one tide-gauge (in Kota Kinabalu, KK2019) to produce a fitted geoid, my_geoid2017_fit_kk. The fitted geoid was offset from the gravimetric geoid by +0.852 m, based on the comparison at the tide-gauge benchmark KK2019. Consequently, the orthometric height at the six other tide gauge stations was computed from H_GPS-Lev = h_GPS - N_my_geoid2017_fit_kk. Comparison of the conventional (H_Lev) and GPS-levelling heights (H_GPS-Lev) at the six tide gauge locations indicates an RMS height difference of 2.6 cm. The final gravimetric geoid was fitted to the seven tide gauge stations and is known as my_geoid2017_fit_east. The accuracy of the gravimetric geoid is estimated to be better than 5 cm across most of East Malaysia land and marine areas.
Correcting for deformation in skin-based marker systems.
Alexander, E J; Andriacchi, T P
2001-03-01
A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in the limb segment pose estimate by 33% and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
Using framework-based synthesis for conducting reviews of qualitative studies.
Dixon-Woods, Mary
2011-04-14
Framework analysis is a technique used for data analysis in primary qualitative research. Recent years have seen its being adapted to conduct syntheses of qualitative studies. Framework-based synthesis shows considerable promise in addressing applied policy questions. An innovation in the approach, known as 'best fit' framework synthesis, has been published in BMC Medical Research Methodology this month. It involves reviewers in choosing a conceptual model likely to be suitable for the question of the review, and using it as the basis of their initial coding framework. This framework is then modified in response to the evidence reported in the studies in the reviews, so that the final product is a revised framework that may include both modified factors and new factors that were not anticipated in the original model. 'Best fit' framework-based synthesis may be especially suitable in addressing urgent policy questions where the need for a more fully developed synthesis is balanced by the need for a quick answer. Please see related article: http://www.biomedcentral.com/1471-2288/11/29.
Utilization of volume correlation filters for underwater mine identification in LIDAR imagery
NASA Astrophysics Data System (ADS)
Walls, Bradley
2008-04-01
Underwater mine identification persists as a critical technology pursued aggressively by the Navy for fleet protection. As such, new and improved techniques must continue to be developed in order to provide measurable increases in mine identification performance and noticeable reductions in false alarm rates. In this paper we show how recent advances in the Volume Correlation Filter (VCF) developed for ground based LIDAR systems can be adapted to identify targets in underwater LIDAR imagery. Current automated target recognition (ATR) algorithms for underwater mine identification employ spatial based three-dimensional (3D) shape fitting of models to LIDAR data to identify common mine shapes consisting of the box, cylinder, hemisphere, truncated cone, wedge, and annulus. VCFs provide a promising alternative to these spatial techniques by correlating 3D models against the 3D rendered LIDAR data.
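As a simplified illustration of volume correlation, the sketch below cross-correlates a voxelized target model against a 3D volume via FFTs and takes the correlation peak as the detection location. The shapes, noise level, and embedded "mine" are illustrative assumptions, not the VCF formulation of the paper.

```python
# Minimal sketch of 3D template correlation in the Fourier domain.
import numpy as np

rng = np.random.default_rng(7)
volume = rng.normal(0, 0.1, (64, 64, 32))                        # noisy voxel grid
template = np.zeros((8, 8, 8)); template[2:6, 2:6, 2:6] = 1.0    # toy target model
volume[20:28, 30:38, 10:18] += template                          # embed the target

# Zero-pad the template to the volume size and correlate via FFTs.
padded = np.zeros_like(volume)
padded[:8, :8, :8] = template
corr = np.fft.ifftn(np.fft.fftn(volume) * np.conj(np.fft.fftn(padded))).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)   # approximately the embedded target's corner location (20, 30, 10)
```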
Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.
Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla
2018-01-01
To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.
Neutron spectroscopy with scintillation detectors using wavelets
NASA Astrophysics Data System (ADS)
Hartman, Jessica
The purpose of this research was to study neutron spectroscopy using the EJ-299-33A plastic scintillator. This scintillator material provided a novel means of detection for fast neutrons, without the disadvantages of traditional liquid scintillation materials. EJ-299-33A provided a more durable option than these materials, making it less likely to be damaged during handling. Unlike liquid scintillators, this plastic scintillator was manufactured from a non-toxic material, making it safer to use, as well as easier to design detectors. The material was also manufactured with inherent pulse shape discrimination abilities, making it suitable for use in neutron detection. The neutron spectral unfolding technique was developed in two stages. Initial detector response function modeling was carried out through the use of the MCNPX Monte Carlo code. The response functions were developed for a monoenergetic neutron flux. Wavelets were then applied to smooth the response function. The spectral unfolding technique was applied through polynomial fitting and optimization techniques in MATLAB. Verification of the unfolding technique was carried out through the use of experimentally determined response functions. These were measured on the neutron source based on the Van de Graaff accelerator at the University of Kentucky. This machine provided a range of monoenergetic neutron beams between 0.1 MeV and 24 MeV, making it possible to measure the set of response functions of the EJ-299-33A plastic scintillator detector to neutrons of specific energies. The response of a plutonium-beryllium (PuBe) source was measured using the source available at the University of Nevada, Las Vegas. The neutron spectrum reconstruction was carried out using the experimentally measured response functions. Experimental data were collected in the list mode of the waveform digitizer. Post-processing of these data focused on the pulse shape discrimination analysis of the recorded response functions to remove the effects of photons and allow for source characterization based solely on the neutron response. The unfolding technique was performed through polynomial fitting and optimization techniques in MATLAB, and provided an energy spectrum for the PuBe source.
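The unfolding idea can be sketched as a constrained linear inverse problem: given a response matrix R and a measured pulse-height distribution y, solve R·phi ≈ y for a non-negative spectrum phi. The toy response matrix below is an assumption for illustration, not the measured EJ-299-33A responses, and the bounded least-squares solver stands in for the MATLAB fitting used in the work.

```python
# Minimal sketch of spectrum unfolding as non-negative linear least squares.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(8)
n_channels, n_energies = 80, 20
# Toy response: each incident energy produces a smeared, roughly flat recoil spectrum.
R = np.zeros((n_channels, n_energies))
for j in range(n_energies):
    edge = 4 * (j + 1)
    R[:edge, j] = 1.0 / edge

phi_true = np.exp(-np.linspace(0, 3, n_energies))        # decreasing toy spectrum
y = R @ phi_true + rng.normal(0, 0.002, n_channels)      # "measured" distribution

result = lsq_linear(R, y, bounds=(0.0, np.inf))          # non-negative unfolding
print(np.round(result.x, 3))                             # reconstructed spectrum
```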
NASA Astrophysics Data System (ADS)
Jain, Varun; Biesinger, Mark C.; Linford, Matthew R.
2018-07-01
X-ray photoelectron spectroscopy (XPS) is arguably the most important vacuum technique for surface chemical analysis, and peak fitting is an indispensable part of XPS data analysis. Functions that have been widely explored and used in XPS peak fitting include the Gaussian, Lorentzian, Gaussian-Lorentzian sum (GLS), Gaussian-Lorentzian product (GLP), and Voigt functions, where the Voigt function is a convolution of a Gaussian and a Lorentzian function. In this article we discuss these functions from a graphical perspective. Arguments based on convolution and the Central Limit Theorem are made to justify the use of functions that are intermediate between pure Gaussians and pure Lorentzians in XPS peak fitting. Mathematical forms for the GLS and GLP functions are presented with a mixing parameter m. Plots are shown for GLS and GLP functions with mixing parameters ranging from 0 to 1. There are fundamental differences between the GLS and GLP functions. The GLS function better follows the 'wings' of the Lorentzian, while these 'wings' are suppressed in the GLP. That is, these two functions are not interchangeable. The GLS and GLP functions are compared to the Voigt function, where the GLS is shown to be a decent approximation of it. Practically, both the GLS and the GLP functions can be useful for XPS peak fitting. Examples of the uses of these functions are provided herein.
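A hedged sketch of the two line shapes follows, using one common FWHM-based parameterization of the Gaussian-Lorentzian sum (GLS) and product (GLP) with mixing parameter m in [0, 1]; the exact normalization may differ from that used in any particular XPS package.

```python
# Sketch of GLS and GLP peak shapes with mixing parameter m (0 = Gaussian, 1 = Lorentzian).
import numpy as np

def gls(x, e0, fwhm, m):
    """Gaussian-Lorentzian sum: weighted average of the two shapes."""
    g = np.exp(-4.0 * np.log(2.0) * (x - e0) ** 2 / fwhm ** 2)
    l = 1.0 / (1.0 + 4.0 * (x - e0) ** 2 / fwhm ** 2)
    return (1.0 - m) * g + m * l

def glp(x, e0, fwhm, m):
    """Gaussian-Lorentzian product: the Lorentzian 'wings' are suppressed."""
    g = np.exp(-4.0 * np.log(2.0) * (1.0 - m) * (x - e0) ** 2 / fwhm ** 2)
    l = 1.0 / (1.0 + 4.0 * m * (x - e0) ** 2 / fwhm ** 2)
    return g * l

x = np.linspace(280.0, 290.0, 501)            # binding energy axis (eV)
print(gls(x, 285.0, 1.2, 0.3).max(), glp(x, 285.0, 1.2, 0.3).max())  # both peak at 1
```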
DE and NLP Based QPLS Algorithm
NASA Astrophysics Data System (ADS)
Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo
As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems and has achieved many successful applications in engineering. In this paper, a new algorithm of Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. The simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and QPLS based on Sequential Quadratic Programming (SQP) in terms of fitting accuracy and computational costs.
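For illustration, the sketch below applies SciPy's Differential Evolution implementation to a small nonlinear least-squares objective; it shows the role DE plays as the optimizer, not the QPLS inner-relationship fitting itself, and all data are synthetic assumptions.

```python
# Minimal sketch: Differential Evolution used as a global optimizer for a
# small nonlinear least-squares problem.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(9)
t = np.linspace(0.0, 5.0, 60)
y = 2.5 * np.exp(-1.3 * t) + rng.normal(0.0, 0.02, t.size)   # noisy observations

def sse(params):
    a, b = params
    return np.sum((a * np.exp(-b * t) - y) ** 2)

result = differential_evolution(sse, bounds=[(0.0, 10.0), (0.0, 5.0)], seed=0)
print(result.x)   # should be close to (2.5, 1.3)
```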
Multi Sensor Fusion Using Fitness Adaptive Differential Evolution
NASA Astrophysics Data System (ADS)
Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam
The rising popularity of multi-source, multi-sensor networks in real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random. The proposed approach gives better results in the case of optimal sensor allocation. The performance of the proposed approach is compared with an evolutionary algorithm, the coordination generalized particle model (C-GPM).
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, the existing role-oriented process mining methods focus on correctness and integrity of roles while ignoring role complexity of the process model, which directly impacts understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine the simplified process model. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from role complexity metric also provides a guideline for redesigning process models. Finally, we conduct case study and experiments to show that the proposed method is more effective for streamlining the process by comparing with related studies.
Pawlikowski, Marek; Jankowski, Krzysztof; Skalski, Konstanty
2018-05-30
A new constitutive model for human trabecular bone is presented in the present study. As the model is based on indentation tests performed on single trabeculae, it is formulated at the microscale. The constitutive law takes into account non-linear viscoelasticity of the tissue. The elastic response is described by the hyperelastic Mooney-Rivlin model while the viscoelastic effects are considered by means of the hereditary integral in which stress depends on both time and strain. The material constants in the constitutive equation are identified on the basis of the stress relaxation tests and the indentation tests using a curve-fitting procedure. The constitutive model is implemented into the finite element package Abaqus® by means of a UMAT subroutine. The curve-fitting error is low and the viscoelastic behaviour of the tissue predicted by the proposed constitutive model corresponds well to the realistic response of the trabecular bone. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
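A minimal sketch of the curve-fitting step is given below: a two-term exponential relaxation function is fitted to synthetic stress-relaxation data with SciPy. The actual model above couples a Mooney-Rivlin elastic part with a hereditary integral; only the relaxation-kernel identification idea is illustrated, and all values are assumptions.

```python
# Minimal sketch: fitting a two-term exponential relaxation function to
# synthetic stress-relaxation data.
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, s_inf, s1, tau1, s2, tau2):
    return s_inf + s1 * np.exp(-t / tau1) + s2 * np.exp(-t / tau2)

t = np.linspace(0.0, 200.0, 300)                               # seconds
true = relaxation(t, 1.0, 0.6, 5.0, 0.4, 60.0)                 # synthetic "stress" (MPa)
data = true + np.random.default_rng(10).normal(0, 0.01, t.size)

p0 = [0.8, 0.5, 2.0, 0.5, 30.0]                                # initial guess
popt, pcov = curve_fit(relaxation, t, data, p0=p0)
fit_error = np.sqrt(np.mean((relaxation(t, *popt) - data) ** 2))
print(np.round(popt, 3), fit_error)
```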
NASA Astrophysics Data System (ADS)
Dan, Wen-Yan; Di, You-Ying; He, Dong-Hua; Liu, Yu-Pu
2011-02-01
1-Decylammonium hydrochloride was synthesized by the method of liquid phase synthesis. Chemical analysis, elemental analysis, and X-ray single crystal diffraction techniques were applied to characterize its composition and structure. Low-temperature heat capacities of the compound were measured with a precision automated adiabatic calorimeter over the temperature range from 78 to 380 K. Three solid-solid phase transitions have been observed at the peak temperatures of 307.52 ± 0.13, 325.02 ± 0.19, and 327.26 ± 0.07 K. The molar enthalpies and entropies of the three phase transitions were determined based on the analysis of heat capacity curves. Experimental molar heat capacities were fitted to two polynomial equations of the heat capacities as a function of temperature by the least squares method. Smoothed heat capacities and thermodynamic functions of the compound relative to the standard reference temperature 298.15 K were calculated and tabulated at intervals of 5 K based on the fitted polynomials.
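As a generic illustration of the smoothing step, the sketch below fits least-squares polynomials in a reduced temperature variable separately below and above an assumed transition region; the data, break point, and polynomial degree are placeholders, not the published fit.

```python
# Minimal sketch: piecewise least-squares polynomial smoothing of Cp(T).
import numpy as np

rng = np.random.default_rng(11)
T = np.arange(78.0, 381.0, 2.0)                                   # K
Cp = 150.0 + 0.8 * T + 2e-4 * T**2 + rng.normal(0, 1.0, T.size)   # J K^-1 mol^-1

def fit_range(Tr, Cpr, deg=4):
    # Map the range onto [-1, 1] (reduced temperature) before fitting.
    x = (2.0 * Tr - (Tr.max() + Tr.min())) / (Tr.max() - Tr.min())
    coeffs = np.polynomial.polynomial.polyfit(x, Cpr, deg)
    return coeffs, x

low = T < 300.0                                   # assumed transition-region boundary
c_low, x_low = fit_range(T[low], Cp[low])
c_high, x_high = fit_range(T[~low], Cp[~low])

smooth_low = np.polynomial.polynomial.polyval(x_low, c_low)
smooth_high = np.polynomial.polynomial.polyval(x_high, c_high)
print(round(smooth_low[0], 1), round(smooth_high[-1], 1))
```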
Diong, B; Grainger, J; Goldman, M; Nazeran, H
2009-01-01
The forced oscillation technique offers some advantages over spirometry for assessing pulmonary function. It requires only passive patient cooperation; it also provides data in a form, frequency-dependent impedance, which is very amenable to engineering analysis. In particular, the data can be used to obtain parameter estimates for electric circuit-based models of the respiratory system, which can in turn aid the detection and diagnosis of various diseases/pathologies. In this study, we compare the least-squares error performance of the RIC, extended RIC, augmented RIC, augmented RIC+I(p), DuBois, Nagels and Mead models in fitting 3 sets of impedance data. These data were obtained by pseudorandom noise forced oscillation of healthy subjects, mild asthmatics and more severe asthmatics. We found that the aRIC+I(p) and DuBois models yielded the lowest fitting errors (for the healthy subjects group and the 2 asthmatic patient groups, respectively) without also producing unphysiologically large component estimates.
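A minimal sketch of fitting the simplest of these models, the RIC model (resistance R, inertance I, compliance C), to complex impedance data is shown below; the "measured" impedances and units are synthetic assumptions, not patient data.

```python
# Minimal sketch: least-squares fit of the RIC respiratory impedance model
# Z(w) = R + j*(w*I - 1/(w*C)) over real and imaginary parts.
import numpy as np
from scipy.optimize import least_squares

f = np.array([4, 6, 8, 12, 16, 20, 24, 32], dtype=float)    # oscillation frequencies (Hz)
w = 2 * np.pi * f

def ric_impedance(p, w):
    R, I, C = p
    return R + 1j * (w * I - 1.0 / (w * C))

z_meas = ric_impedance([2.5, 0.01, 0.02], w)                 # assumed cmH2O.s/L units
z_meas += np.random.default_rng(12).normal(0, 0.05, w.size) * (1 + 1j)

def residuals(p, w, z):
    r = ric_impedance(p, w) - z
    return np.concatenate([r.real, r.imag])                  # stack real/imag parts

fit = least_squares(residuals, x0=[1.0, 0.005, 0.05], args=(w, z_meas),
                    bounds=([0, 0, 1e-4], [np.inf, np.inf, np.inf]))
print(np.round(fit.x, 4))                                    # estimated R, I, C
```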
Risk factors for unsuccessful acetabular press-fit fixation at primary total hip arthroplasty.
Brulc, U; Antolič, V; Mavčič, B
2017-11-01
The surgeon at primary total hip arthroplasty sometimes cannot achieve sufficient cementless acetabular press-fit fixation and must resort to other fixation methods. Despite a predominant use of cementless cups, this issue is not fully clarified; therefore, we performed a large retrospective study to: (1) identify risk factors related to patient or implant or surgeon for unsuccessful intraoperative press-fit; (2) check for correlation between surgeons' volume of operated cases and the press-fit success rate. Unsuccessful intraoperative press-fit occurs more often in older female patients, with particular implants, due to the learning curve, and with low-volume surgeons. A retrospective observational cohort of prospectively collected intraoperative data (2009-2016) included all primary total hip arthroplasty patients with implant brands that offered acetabular press-fit fixation only. Press-fit was considered successful if the acetabulum was of the same implant brand as the femoral component without additional screws or cement. Logistic regression models for unsuccessful acetabular press-fit included patients' gender/age/operated side, implant, surgeon, approach (posterior n=1206, direct-lateral n=871) and surgery date (i.e. learning curve). In 2077 patients (mean 65.5 years, 1093 females, 1163 right hips), three different implant brands (973 ABG-II™-Stryker, 646 EcoFit™ Implantcast, 458 Procotyl™ L-Wright) were implanted by eight surgeons. Their unsuccessful press-fit fixation rates ranged from 3.5% to 23.7%. Older age (odds ratio 1.01 [95% CI: 0.99-1.02]), female gender (2.87 [95% CI: 2.11-3.91]), right side (1.44 [95% CI: 1.08-1.92]), surgery date (0.90 [95% CI: 1.08-1.92]) and particular implants were significant risk factors only in three surgeons with less successful surgical technique (higher rates of unsuccessful press-fit with Procotyl™-L and EcoFit™ [P=0.01]). Direct-lateral hip approach had a lower rate of unsuccessful press-fit than posterior hip approach (P<0.01), but there was no correlation between surgeons' volume and rate of successful press-fit (Spearman's rho=0.10, P=0.82). A subcohort of 961 patients with 5-7 years of follow-up indicated higher early/late cup revision rates with unsuccessful press-fit. Success of press-fit fixation depends entirely on the surgeon and surgical approach. With proper operative technique, the unsuccessful press-fit fixation rate should be below 5% and the impact of patients' characteristics or implants on press-fit fixation is then insignificant. Findings of huge variability in operative technique between surgeons of the presented study emphasize the need for surgeon-specific data stratification in arthroplasty studies and indicate the possibility of false attribution of clinically observed phenomena to patient-related factors in pooled data of large centers or hip arthroplasty registers. Level III, retrospective observational case-control study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Hearing Aid Fitting in Infants.
ERIC Educational Resources Information Center
Hoover, Brenda M.
2000-01-01
This article examines the latest technological advances in hearing aids and explores the available research to help families and professionals make informed decisions when fitting amplification devices on infants and young children. Diagnostic procedures, evaluation techniques, hearing aid selection, circuit and advanced technology options, and…
Knowledge translation to fitness trainers: A systematic review
2010-01-01
Background This study investigates approaches for translating evidence-based knowledge for use by fitness trainers. Specific questions were: Where do fitness trainers get their evidence-based information? What types of interventions are effective for translating evidence-based knowledge for use by fitness trainers? What are the barriers and facilitators to the use of evidence-based information by fitness trainers in their practice? Methods We describe a systematic review of studies about knowledge translation interventions targeting fitness trainers. Fitness trainers were defined as individuals who provide exercise program design and supervision services to the public. Nurses, physicians, physiotherapists, school teachers, athletic trainers, and sport team strength coaches were excluded. Results Of 634 citations, two studies were eligible for inclusion: a survey of 325 registered health fitness professionals (66% response rate) and a qualitative study of 10 fitness instructors. Both studies identified that fitness trainers obtain information from textbooks, networking with colleagues, scientific journals, seminars, and mass media. Fitness trainers holding higher levels of education are reported to use evidence-based information sources such as scientific journals compared to those with lower education levels, who were reported to use mass media sources. The studies identified did not evaluate interventions to translate evidence-based knowledge for fitness trainers and did not explore factors influencing uptake of evidence in their practice. Conclusion Little is known about how fitness trainers obtain and incorporate new evidence-based knowledge into their practice. Further exploration and specific research is needed to better understand how emerging health-fitness evidence can be translated to maximize its use by fitness trainers providing services to the general public. PMID:20398317
Ruiz, J R; España Romero, V; Castro Piñero, J; Artero, E G; Ortega, F B; Cuenca García, M; Jiménez Pavón, D; Chillón, P; Girela Rejón, Ma J; Mora, J; Gutiérrez, A; Suni, J; Sjöstrom, M; Castillo, M J
2011-01-01
Here we summarize the work developed by the ALPHA (Assessing Levels of Physical Activity) Study and describe the tests included in the ALPHA health-related fitness test battery for children and adolescents. The evidence-based ALPHA-Fitness test battery includes the following tests: 1) the 20 m shuttle run test to assess cardiorespiratory fitness; 2) the handgrip strength and 3) standing broad jump tests to assess musculoskeletal fitness; and 4) body mass index, 5) waist circumference, and 6) skinfold thickness (triceps and subscapular) to assess body composition. Furthermore, we include two versions: 1) the high-priority ALPHA health-related fitness test battery, which comprises all the evidence-based fitness tests except the measurement of skinfold thickness; and 2) the extended ALPHA health-related fitness test battery for children and adolescents, which includes all the evidence-based fitness tests plus the 4 x 10 m shuttle run test to assess motor fitness.
2018-04-25
In this report, intermolecular potentials for 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) are developed using machine learning techniques. Three potentials, based on support vector regression, kernel ridge regression, and a neural network, are fit using symmetry-adapted perturbation theory.
Liu, Y; Allen, R
2002-09-01
The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r=0.83±0.14) in fitting MCAV. An additional five sets of measured ABP of length 236±154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV=SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
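As a rough illustration of the modelling step (not the authors' implementation), the sketch below fits a low-order ARX model by ordinary least squares and then computes a 5 s recovery percentage from its step response; the sampling rate, model orders, exact R5% definition and signals are assumptions, and the data are synthetic placeholders.

```python
# Minimal sketch: ordinary-least-squares ARX fit of pressure (u) to flow (y),
# followed by a "recovery at 5 s" summary of the step response.
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares ARX fit: y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j]."""
    n = max(na, nb)
    rows = [np.concatenate([y[t-na:t][::-1], u[t-nb:t][::-1]]) for t in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]

def step_response(a, b, n_steps):
    y, u = np.zeros(n_steps), np.ones(n_steps)   # unit step in the input
    for t in range(n_steps):
        y[t] = sum(ai * y[t-i-1] for i, ai in enumerate(a) if t-i-1 >= 0) \
             + sum(bj * u[t-j-1] for j, bj in enumerate(b) if t-j-1 >= 0)
    return y

# Synthetic data from a known ARX process (stand-in for measured ABP / MCAV).
fs, rng = 10, np.random.default_rng(0)           # fs in Hz (hypothetical)
u = rng.standard_normal(600)
y = np.zeros(600)
for t in range(2, 600):
    y[t] = 0.5*y[t-1] - 0.2*y[t-2] + 0.4*u[t-1] + 0.1*u[t-2] + 0.01*rng.standard_normal()

a, b = fit_arx(u, y)
resp = step_response(a, b, n_steps=10 * fs)
peak = resp[np.argmax(np.abs(resp))]
# One plausible recovery measure: fraction of the initial deflection recovered
# by 5 s (a real autoregulating response would decay back toward baseline).
r5 = 100.0 * (1.0 - resp[5 * fs - 1] / peak)
print("a =", np.round(a, 3), "b =", np.round(b, 3), "R5%% = %.1f" % r5)
```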
Papadopoulos, Nikos T.; Abd-Alla, Adly M. M.; Cáceres, Carlos; Bourtzis, Kostas
2015-01-01
The Mediterranean fruit fly (medfly), Ceratitis capitata, is a pest of worldwide substantial economic importance, as well as a Tephritidae model for sterile insect technique (SIT) applications. The latter is partially due to the development and utilization of genetic sexing strains (GSS) for this species, such as the Vienna 8 strain, which is currently used in mass rearing facilities worldwide. Improving the performance of such a strain both in mass rearing facilities and in the field could significantly enhance the efficacy of SIT and reduce operational costs. Recent studies have suggested that the manipulation of gut symbionts can have a significant positive effect on the overall fitness of insect strains. We used culture-based approaches to isolate and characterize gut-associated bacterial species of the Vienna 8 strain under mass rearing conditions. We also exploited one of the isolated bacterial species, Enterobacter sp., as dietary supplement (probiotic) to the larval diet, and we assessed its effects on fitness parameters under the standard operating procedures used in SIT operational programs. Probiotic application of Enterobacter sp. resulted in improvement of both pupal and adult productivity, as well as reduced rearing duration, particularly for males, without affecting pupal weight, sex ratio, male mating competitiveness, flight ability and longevity under starvation. PMID:26325068
NASA Astrophysics Data System (ADS)
Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer
2017-03-01
Vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been employed to study the thermal decomposition kinetics of four cyclic nitramines, 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX) and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20), each bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB). Model-fitting and model-free (isoconversional) methods have been applied to determine the decomposition kinetics from the VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally by the Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method from the VST data were 157.1, 203.1, 190.0 and 176.8 kJ mol⁻¹ for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB, respectively. The model-fitting method proved that the mechanism of thermal decomposition of BCHMX/HTPB is controlled by a nucleation model, while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX still in the research stage.
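For readers unfamiliar with the model-free approach, the sketch below shows the basic isothermal isoconversional arithmetic: at a fixed conversion level, ln(t_alpha) plotted against 1/T is linear with slope Ea/R. The temperatures and times are illustrative placeholders, not the VST measurements of this study.

```python
# Minimal sketch of an isothermal, model-free (isoconversional) estimate of the
# apparent activation energy from times-to-fixed-conversion at several
# temperatures. All numbers are illustrative placeholders.
import numpy as np

R = 8.314                                         # J / (mol K)
T = np.array([393.0, 403.0, 413.0, 423.0])        # isothermal test temperatures [K]
t_alpha = np.array([5.2e4, 1.9e4, 7.5e3, 3.1e3])  # time to reach alpha = 0.1 [s]

# ln(t_alpha) = const + Ea/(R*T): the slope versus 1/T gives Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(t_alpha), 1)
Ea = slope * R / 1000.0                           # kJ/mol
print("Apparent activation energy: %.1f kJ/mol" % Ea)
```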
Farina, Ana Paula; Spazzin, Aloísio Oro; Consani, Rafael Leonardo Xediek; Mesquita, Marcelo Ferraz
2014-06-01
Screws can loosen through mechanisms that have not been clearly established. The purpose of this study was to evaluate the influence of the tightening technique (the application of torque and retorque) on the joint stability of titanium and gold prosthetic screws in implant-supported dentures under different fit levels after 1 year of simulated masticatory function by means of mechanical cycling. Ten mandibular implant-supported dentures were fabricated, and 20 cast models were prepared by using the dentures to create 2 fit levels: passive fit and created misfit. The tightening protocol was evaluated according to 4 distinct profiles: without retorque plus titanium screws, without retorque plus gold screws, retorque plus titanium screws, and retorque plus gold screws. In the retorque application, the screws were tightened to 10 Ncm and retightened to 10 Ncm after 10 minutes. The screw joint stability after 1 year of simulated clinical function was measured with a digital torque meter. Data were analyzed statistically by 2-way ANOVA and Tukey honestly significant difference (HSD) post hoc tests (α=.05). The factors of fit level and tightening technique, as well as the interaction between the factors, were statistically significant. The misfit decreased the loosening torque. The retorque application increased joint stability independent of fit level or screw material, which suggests that this procedure should be performed routinely during the tightening of these devices. All tightening techniques revealed reduced loosening torque values that were significantly lower in misfit dentures than in passive-fit dentures. However, the retorque application significantly increased the loosening torque when titanium and gold screws were used. Therefore, this procedure should be performed routinely during screw tightening. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Effectiveness of three just-in-time training modalities for N-95 mask fit testing.
Jones, David; Stoler, Genevieve; Suyama, Joe
2013-01-01
To compare and contrast three different training modalities for fit testing N-95 respirator face masks. Block randomized interventional study. Urban university. Two hundred eighty-nine medical students. Students were randomly assigned to video, lecture, or slide show to evaluate the effectiveness of the methods for fit testing large groups of people. Ease of fit and success of fit for each instructional technique. Mask 1 was a Kimberly-Clark duckbill N-95 respirator mask, and mask 2 was a 3M™ carpenters N-95 respirator mask. "Ease of fit" was defined as the ability to successfully don a mask in less than 30 seconds. "Success of fit" was defined as the ability to correctly don a mask in one try. There were no statistical differences by training modality for either mask regarding ease of fit or success of fit. There were no differences among video presentation, small group demonstration, and self-directed slide show just-in-time training modalities for ease of fit or success of fit N-95 respirator mask fitting. Further study is needed to explore more effective fit training modalities.
Star-Formation Histories of MUSCEL Galaxies
NASA Astrophysics Data System (ADS)
Young, Jason; Kuzio de Naray, Rachel; Xuesong Wang, Sharon
2018-01-01
The MUSCEL program (MUltiwavelength observations of the Structure, Chemistry and Evolution of LSB galaxies) uses combined ground-based/space-based data to determine the spatially resolved star-formation histories of low surface brightness (LSB) galaxies. LSB galaxies are paradoxical in that they are gas rich but have low star-formation rates. Here we present our observations and fitting technique, and the derived histories for select MUSCEL galaxies. It is our aim to use these histories in tandem with velocity fields and metallicity profiles to determine the physical mechanism(s) that give these faint galaxies low star-formation rates despite ample gas supplies.
Frisardi, Gianni; Barone, Sandro; Razionale, Armando V; Paoli, Alessandro; Frisardi, Flavio; Tullio, Antonio; Lumbau, Aurea; Chessa, Giacomo
2012-05-29
A fundamental pre-requisite for the clinical success in dental implant surgery is the fast and stable implant osseointegration. The press-fit phenomenon occurring at implant insertion induces biomechanical effects in the bone tissues, which ensure implant primary stability. In the field of dental surgery, the understanding of the key factors governing the osseointegration process still remains of utmost importance. A thorough analysis of the biomechanics of dental implantology requires a detailed knowledge of bone mechanical properties as well as an accurate definition of the jaw bone geometry. In this work, a CT image-based approach, combined with the Finite Element Method (FEM), has been used to investigate the effect of the drill size on the biomechanics of the dental implant technique. A very accurate model of the human mandible bone segment has been created by processing high resolution micro-CT image data. The press-fit phenomenon has been simulated by FE analyses for different common drill diameters (DA=2.8 mm, DB=3.3 mm, and DC=3.8 mm) with depth L=12 mm. A virtual implant model has been assumed with a cylindrical geometry having height L=11 mm and diameter D=4 mm. The maximum stresses calculated for drill diameters DA, DB and DC have been 12.31 GPa, 7.74 GPa and 4.52 GPa, respectively. High strain values have been measured in the cortical area for the models of diameters DA and DB, while a uniform distribution has been observed for the model of diameter DC. The maximum logarithmic strains, calculated in nonlinear analyses, have been ϵ=2.46, 0.51 and 0.49 for the three models, respectively. This study introduces a very powerful, accurate and non-destructive methodology for investigating the effect of the drill size on the biomechanics of the dental implant technique. Further studies could aim at understanding how different drill shapes can determine the optimal press-fit condition with an equally distributed preload on both the cortical and trabecular structure around the implant.
Oyagüe, Raquel Castillo; Sánchez-Turrión, Andrés; López-Lozano, José Francisco; Suárez-García, M Jesús
2012-02-01
This study aimed to evaluate the vertical misfit and microleakage of laser-sintered and vacuum-cast cement-retained implant-supported frameworks. Three-unit implant-fixed structures were constructed with: (1) laser-sintered Co-Cr (LS); (2) vacuum-cast Co-Cr (CC); and (3) vacuum-cast Pd-Au (CP). Every framework was luted onto 2 prefabricated abutments under constant seating pressure. Each alloy group was randomly divided into three subgroups (n=10) according to the cement used: (1) Ketac Cem Plus (KC); (2) Panavia F 2.0 (PF); and (3) RelyX Unicem 2 Automix (RXU). After 30 days of water ageing, vertical discrepancy was measured by SEM, and marginal microleakage was scored using a digital microscope. Three-way ANOVA and Student-Newman-Keuls tests were run to investigate the effect of alloy/fabrication technique, FDP retainer, and cement type on vertical misfit. Data for marginal microleakage were analysed with Kruskal-Wallis and Dunn's tests (α=0.05). Vertical discrepancy was affected by alloy/manufacturing technique and cement type (p<0.001). Regardless of the luting agent, LS structures showed the best marginal adaptation, followed by CP and CC. Within each alloy group, KC provided the best fit, whilst the use of PF or RXU resulted in no significant differences. Regardless of the framework alloy, KC exhibited the highest microleakage scores, whilst PF and RXU showed values that were comparable to each other. Laser-sintered Co-Cr structures achieved the best fit in the study. Notwithstanding the framework alloy, resin-modified glass-ionomer demonstrated better marginal fit but greater microleakage than did MDP-based and self-adhesive dual-cure resin cements. All groups were within the clinically acceptable misfit range. Laser-sintered Co-Cr may be an alternative to cast base metal and noble alloys to obtain passive-fitting structures. Despite showing higher discrepancies, resin cements displayed lower microleakage than resin-modified glass-ionomer. Further research is necessary to determine whether low microleakage scores may guarantee a suitable seal that could compensate for misfit. Copyright © 2011 Elsevier Ltd. All rights reserved.
Microwave, Millimeter, Submillimeter, and Far Infrared Spectral Databases
NASA Technical Reports Server (NTRS)
Pearson, J. C.; Pickett, H. M.; Drouin, B. J.; Chen, P.; Cohen, E. A.
2002-01-01
The spectrum of most known astrophysical molecules is derived from transitions between a few hundred to a few hundred thousand energy levels populated at room temperature. In the microwave and millimeter wave regions, spectroscopy is almost always performed with traditional microwave techniques. In the submillimeter and far infrared, microwave technique becomes progressively more technologically challenging and infrared techniques become more widely employed as the wavelength gets shorter. Infrared techniques are typically one to two orders of magnitude less precise, but they do generate all the strong features in the spectrum. With microwave technique, it is generally impossible and rarely necessary to measure every single transition of a molecular species, so careful fitting of quantum mechanical Hamiltonians to the measured transitions is required to produce the complete spectral picture of the molecule required by astronomers. The fitting process produces the most precise data possible and is required to interpret heterodyne observations. The drawback of traditional microwave technique is that precise knowledge of the band origins of low-lying excited states is rarely gained. The fitting of data interpolates well for the range of quantum numbers where there is laboratory data, but extrapolation is almost never precise. The majority of high resolution spectroscopic data is millimeter or longer in wavelength, and a very limited number of molecules have ever been studied with microwave techniques at wavelengths shorter than 0.3 millimeters. The situation with infrared technique is similarly dire in the submillimeter and far infrared because the black body sources used are competing with a very significant thermal background, making the signal-to-noise ratio poor. Regardless of the technique used, the data must be archived in a way useful for the interpretation of observations.
NASA Astrophysics Data System (ADS)
Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia
2017-05-01
A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least squares between the measured and calculated values over time, which may encounter some problems such as the overfitting of model parameters and a lack of reproducibility, especially when handling noisy data or error data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model directly dealing with noisy data but not trying to smooth the noise in the image. Also, due to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which could find a balance between fitting to historical data and to the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
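A minimal sketch of the candidate-competition idea (not the authors' implementation): one parameter candidate comes from the closest curve in a historical database, another from a direct iterative fit, and the one with the lower residual against the measured curve is kept. The mono-exponential "kinetic model" and the database entries below are stand-ins, not real PET kinetics.

```python
# Minimal sketch of candidate competition between a database-derived parameter
# set and an iteratively fitted one. All models and values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def model(t, k1, k2):
    """Toy mono-exponential uptake curve standing in for a kinetic model."""
    return k1 / k2 * (1.0 - np.exp(-k2 * t))

t = np.linspace(0.5, 60.0, 30)                        # minutes
measured = model(t, 0.12, 0.05) + 0.01 * np.random.randn(t.size)

# Hypothetical historical database of previously seen parameter sets.
database = [(0.10, 0.04), (0.12, 0.06), (0.20, 0.10)]
db_candidate = min(database, key=lambda p: np.sum((model(t, *p) - measured) ** 2))

# Iterative-fit candidate (may overfit noisy data).
if_candidate, _ = curve_fit(model, t, measured, p0=(0.1, 0.05))

def score(params):
    return np.sum((model(t, *params) - measured) ** 2)

winner = min([db_candidate, tuple(if_candidate)], key=score)
print("database candidate:", db_candidate, " iterative fit:", tuple(if_candidate))
print("selected:", winner)
```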
Use of reconstructed 3D VMEC equilibria to match effects of toroidally rotating discharges in DIII-D
Wingen, Andreas; Wilcox, Robert S.; Cianciosa, Mark R.; ...
2016-10-13
Here, a technique for tokamak equilibrium reconstructions is used for multiple DIII-D discharges, including L-mode and H-mode cases when weakly 3D fields (δB/B ~ 10⁻³) are applied. The technique couples diagnostics to the non-linear, ideal MHD equilibrium solver VMEC, using the V3FIT code, to find the most likely 3D equilibrium based on a suite of measurements. It is demonstrated that V3FIT can be used to find non-linear 3D equilibria that are consistent with experimental measurements of the plasma response to very weak 3D perturbations, as well as with 2D profile measurements. Observations at DIII-D show that plasma rotation larger than 20 krad s⁻¹ changes the relative phase between the applied 3D fields and the measured plasma response. Discharges with low averaged rotation (10 krad s⁻¹) and peaked rotation profiles (40 krad s⁻¹) are reconstructed. Similarities and differences to forward-modeled VMEC equilibria, which do not include rotational effects, are shown. Toroidal phase shifts of up to 30° are found between the measured and forward-modeled plasma responses at the highest values of rotation. The plasma response phases of the reconstructed equilibria, on the other hand, match the measured ones. This is the first time V3FIT has been used to reconstruct weakly 3D tokamak equilibria.
Assessing fitness to stand trial: the utility of the Fitness Interview Test (revised edition).
Zapf, P A; Roesch, R; Viljoen, J L
2001-06-01
In Canada most evaluations of fitness to stand trial are conducted on an inpatient basis. This costs time and money, and deprives those defendants remanded for evaluation of liberty. This research assessed the predictive efficiency of the Fitness Interview Test, revised edition (FIT) as a screening instrument for fitness to stand trial. We compared decisions about fitness to stand trial, based on the FIT, with the results of institution-based evaluations for 2 samples of men remanded for inpatient fitness assessments. The FIT demonstrates excellent utility as a screening instrument. The FIT shows good sensitivity and negative predictive power, which suggests that it can reliably screen those individuals who are clearly fit to stand trial, before they are remanded to an inpatient facility for a fitness assessment. We discuss the implications for evaluating fitness to stand trial, particularly in terms of the need for community-based alternatives to traditional forensic assessments.
Determination of calibration parameters of a VRX CT system using an “Amoeba” algorithm
Jordan, Lawrence M.; DiBianca, Frank A.; Melnyk, Roman; Choudhary, Apoorva; Shukla, Hemant; Laughter, Joseph; Gaber, M. Waleed
2008-01-01
Efforts to improve the spatial resolution of CT scanners have focused mainly on reducing the source and detector element sizes, ignoring losses from the size of the secondary-ionization charge “clouds” created by the detected x-ray photons, i.e., the “physics limit.” This paper focuses on implementing a technique called “projective compression,” which allows further reduction in effective cell size while overcoming the physics limit as well. Projective compression signifies detector geometries in which the apparent cell size is smaller than the physical cell size, allowing large resolution boosts. A realization of this technique has been developed with a dual-arm “variable-resolution x-ray” (VRX) detector. Accurate values of the geometrical parameters are needed to convert VRX outputs to formats ready for optimal image reconstruction by standard CT techniques. The required calibrating data are obtained by scanning a rotating pin and fitting a theoretical parametric curve (using a multi-parameter minimization algorithm) to the resulting pin sinogram. Excellent fits are obtained for both detector-arm sections with an average (maximum) fit deviation of ~0.05 (0.1) detector cell width. Fit convergence and sensitivity to starting conditions are considered. Pre- and post-optimization reconstructions of the alignment pin and a biological subject reconstruction after calibration are shown. PMID:19430581
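The core of the calibration is a multi-parameter simplex ("Amoeba", i.e. Nelder-Mead) fit of a parametric curve to the pin sinogram. The sketch below fits a simple sinusoidal pin trace with SciPy's Nelder-Mead minimizer; the real VRX geometry involves a more elaborate parametric curve, and the data here are synthetic placeholders.

```python
# Minimal sketch: the trace of a rotating pin across the detector is, to first
# order, a sinusoid in rotation angle; its parameters are found with a
# Nelder-Mead ("Amoeba") simplex search. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

angles = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
true = dict(amp=120.0, phase=0.7, offset=256.0)        # detector-cell units
measured = true["amp"] * np.sin(angles + true["phase"]) + true["offset"] \
           + 0.3 * np.random.randn(angles.size)

def sse(params):
    amp, phase, offset = params
    model = amp * np.sin(angles + phase) + offset
    return np.sum((model - measured) ** 2)

fit = minimize(sse, x0=[100.0, 0.0, 250.0], method="Nelder-Mead")
amp, phase, offset = fit.x
rms = np.sqrt(fit.fun / angles.size)
print("amp=%.2f phase=%.3f offset=%.2f, rms deviation=%.3f cells"
      % (amp, phase, offset, rms))
```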
The fixation strength of tibial PCL press-fit reconstructions.
Ettinger, M; Wehrhahn, T; Petri, M; Liodakis, E; Olender, G; Albrecht, U-V; Hurschler, C; Krettek, C; Jagodzinski, M
2012-02-01
A secure tibial press-fit technique in posterior cruciate ligament reconstructions is an interesting option because no hardware is necessary. For anterior cruciate ligament (ACL) reconstruction, a few press-fit procedures have been published. Up to the present point, no biomechanical data exist for a tibial press-fit posterior cruciate ligament (PCL) reconstruction. The purpose of this study was to characterize a press-fit procedure for PCL reconstruction that is biomechanically equivalent to an interference screw fixation. Quadriceps and hamstring tendons of 20 human cadavers (age: 49.2 ± 18.5 years) were used. A press-fit fixation with a knot in the semitendinosus tendon (K) and a quadriceps tendon bone block graft (Q) were compared to an interference screw fixation (I) in 30 porcine femora. In each group, nine constructs were cyclically stretched and then loaded until failure. Maximum load to failure, stiffness, and elongation during failure testing and cyclical loading were investigated. The maximum load to failure was 518 ± 157 N (387-650 N) for the (K) group, 558 ± 119 N (466-650 N) for the (I) group, and 620 ± 102 N (541-699 N) for the (Q) group. The stiffness was 55 ± 27 N/mm (18-89 N/mm) for the (K) group, 117 ± 62 N/mm (69-165 N/mm) for the (I) group, and 65 ± 21 N/mm (49-82 N/mm) for the (Q) group. The stiffness of the (I) group was significantly larger (P = 0.01). The elongation during cyclical loading was significantly larger for all groups from the 1st to the 5th cycle compared to the elongation between the 5th and the 20th cycle (P < 0.03). All techniques exhibited larger elongation during initial loading. Load to failure and stiffness were significantly different between the fixations. The Q fixation showed biomechanical properties equal to those of a pure tendon fixation (I) with an interference screw. All three fixation techniques that were investigated exhibited comparable biomechanical properties. Preconditioning of the constructs is critical. Clinical trials have to investigate the biological effectiveness of these fixation techniques.
NASA Astrophysics Data System (ADS)
Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.
2015-11-01
A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.
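The simpler of the two statistical treatments, score-weighted ensemble averaging, can be illustrated in a few lines. The misfit-to-weight mapping and the numbers below are assumptions for illustration, not the paper's scores or projections.

```python
# Minimal sketch of score-weighted ensemble averaging of a projected quantity
# (e.g. equivalent sea-level rise). Scores, weights, and values are placeholders.
import numpy as np

misfit = np.array([1.2, 0.8, 2.5, 0.9, 1.7])   # aggregate model-data misfit per run
slr = np.array([3.1, 3.6, 2.2, 3.4, 2.8])      # projected sea-level rise per run [m]

weights = np.exp(-0.5 * misfit ** 2)           # one possible score-to-weight map
weights /= weights.sum()

mean = np.sum(weights * slr)
var = np.sum(weights * (slr - mean) ** 2)
print("weighted mean SLR = %.2f m, weighted std = %.2f m" % (mean, np.sqrt(var)))
```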
Joyner, Damon; Aguilar, Sheryl S.; Spruance, Lori Andersen; Morrill, Brooke A.; Madden, Gregory J.
2017-01-01
Abstract Objective: Previously published versions of the healthy eating “FIT Game” were administered by teachers in all grades at elementary schools. The present study evaluated whether the game would retain its efficacy if teachers were relieved of this task; presenting instead all game materials on visual displays in the school cafeteria. Materials and Methods: Participants were 572 children attending two Title 1 elementary schools (grades K-5). Following a no-intervention baseline period in which fruit and vegetable consumption were measured from food waste, the schools played the FIT Game. In the game, the children's vegetable consumption influenced events in a good versus evil narrative presented in comic book-formatted episodes in the school cafeteria. When daily vegetable-consumption goals were met, new FIT Game episodes were displayed. Game elements included a game narrative, competition, virtual currency, and limited player autonomy. The two intervention phases were separated by a second baseline phase (within-school reversal design). Simulation Modeling Analysis (a bootstrapping technique appropriate to within-group time-series designs) was used to evaluate whether vegetable consumption increased significantly above baseline levels in the FIT Game phases (P < 0.05). Results: Vegetable consumption increased significantly from 21.3 g during the two baseline phases to 42.5 g during the FIT Game phases; a 99.9% increase. The Game did not significantly increase fruit consumption (which was not targeted for change), nor was there a decrease in fruit consumption. Conclusion: Labor-reductions in the FIT Game did not reduce its positive impact on healthy eating. PMID:28375645
Buzayan, Muaiyed Mahmoud; Yunus, Norsiah Binti
2014-03-01
One of the considerable challenges for a screw-retained multi-unit implant prosthesis is achieving a passive fit of the prosthesis' superstructure to the implants. This passive fit is considered one of the most vital requirements for the maintenance of osseointegration. On the other hand, misfit of the implant-supported superstructure may lead to unfavourable complications, which can be mechanical or biological in nature. The manifestations of these complications may range from fracture of various components in the implant system, pain and marginal bone loss to loss of osseointegration. Thus, minimizing misfit and optimizing passive fit should be a prerequisite for implant survival and success. The purpose of this article is to present and summarize methods for achieving and improving passive fit. The literature review was performed through the ScienceDirect, PubMed, and Google databases, which were searched in English using the following combinations of keywords: passive fit, implant misfit and framework misfit. Articles were selected on the basis of whether they contained sufficient information related to factors associated with framework misfit, passive fit and techniques for achieving it, the relation between marginal bone changes and misfit, implant impression techniques and the splinting concept. The related references were selected in order to emphasize the importance of achieving passive fit and minimizing misfit. Although the literature presents considerable information regarding framework misfit, it is not consistent on a specific value, or even a range, for the acceptable level of misfit. Moreover, the review revealed that complete passive fit remains an elusive goal for the prosthodontist.
Measurement of contact angle in a clearance-fit pin-loaded hole
NASA Technical Reports Server (NTRS)
Prabhakaran, R.; Naik, R. A.
1986-01-01
A technique which measures load-contact variation in a clearance-fit, pin-loaded hole is presented in detail. A steel instrumented pin, which activates a make-or-break electrical circuit in the pin-hole contact region, was inserted into one aluminum and one polycarbonate specimen. The resulting load-contact variations are indicated schematically. The ability to accurately determine the arc of contact at any load was crucial to this measurement. It is noted that this simple experimental technique is applicable to both conducting and nonconducting materials.
Longoni, Salvatore; Sartori, Matteo; Davide, Roberto
2004-06-01
An important aim of implant-supported prostheses is to achieve a passive fit of the framework with the abutments in order to limit the amount of stress transferred to the bone-implant interface. An efficient and standardized technique is proposed. A definitive screw-retained, implant-supported complete denture was fabricated for an immediately loaded provisional screw-retained implant-supported complete denture. Precise fit was achieved by the use of industrial titanium components, and passivity by an intraoral luting sequence and laser welding.
A Physical Education Dilemma: Team Sports or Physical Fitness.
ERIC Educational Resources Information Center
Gilliam, G. McKenzie; And Others
1988-01-01
A study of 56 fifth graders found the traditional physical education approach (game techniques and fundamentals) was ineffective in improving scores on a health-related physical fitness test. Modification of the same sport (basketball) with conditioning exercises to improve cardiorespiratory and musculoskeletal function, produced improvement in…
Suueeet Kickin' ... Suueeet Sweatin'
ERIC Educational Resources Information Center
Clevenger, Karen
2005-01-01
This article focuses on cardio-kickboxing, one of the hottest fitness trends sweeping the nation's fitness centers. The activity moves beyond the basics of kickboxing and provides participants with skills and techniques that help to enhance form, speed, power, and balance. Cardio-kickboxing is fun because it deviates from more conventional…
Techniques to measure tension in wires or straw tubes
NASA Astrophysics Data System (ADS)
Oh, S. H.; Lin, S.; Wang, C.
2018-01-01
We discuss two different ways of measuring the tension in light wires and straws. The first technique uses an operational amplifier to subtract out the oscillating driving voltage mixed into the output voltage, which also contains the signal. The isolated signal is amplified and displayed on an oscilloscope. In the second technique, an analog switch routes the oscillating voltage to a wire for a fraction of a second and then switches off the voltage. As the voltage is turned off, the induced signal from the wire is routed to an amplifier-rectifier circuit for a fraction of a second to measure the signal size as a function of the driving frequency. The first technique is well suited to measuring a single wire, while the second is well suited to measuring many wires (16 in our case) at a time.
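Both techniques ultimately convert a measured resonant frequency into a tension. For the fundamental mode of a stretched wire of length L and linear mass density μ, f₁ = (1/2L)√(T/μ), so T = μ(2Lf₁)². The sketch below applies this standard relation; the wire parameters are illustrative, not those of the apparatus described.

```python
# Minimal sketch: tension from the measured fundamental resonance of a wire,
# T = mu * (2 * L * f1)**2. The example numbers are illustrative placeholders.
def wire_tension(f1_hz, length_m, lin_density_kg_per_m):
    return lin_density_kg_per_m * (2.0 * length_m * f1_hz) ** 2

# Example: a 1.0 m wire of ~9.5 mg/m linear density resonating at 90 Hz.
T = wire_tension(90.0, 1.0, 9.5e-6)
print("tension = %.3f N (= %.1f gram-equivalent)" % (T, T / 9.81 * 1000.0))
```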
NASA Astrophysics Data System (ADS)
Leja, Joel; Johnson, Benjamin D.; Conroy, Charlie; van Dokkum, Pieter
2018-02-01
Forward modeling of the full galaxy SED is a powerful technique, providing self-consistent constraints on stellar ages, dust properties, and metallicities. However, the accuracy of these results is contingent on the accuracy of the model. One significant source of uncertainty is the contribution of obscured AGN, as they are relatively common and can produce substantial mid-IR (MIR) emission. Here we include emission from dusty AGN tori in the Prospector SED-fitting framework, and fit the UV–IR broadband photometry of 129 nearby galaxies. We find that 10% of the fitted galaxies host an AGN contributing >10% of the observed galaxy MIR luminosity. We demonstrate the necessity of this AGN component in the following ways. First, we compare observed spectral features to spectral features predicted from our model fit to the photometry. We find that the AGN component greatly improves predictions for observed Hα and Hβ luminosities, as well as mid-infrared Akari and Spitzer/IRS spectra. Second, we show that inclusion of the AGN component changes stellar ages and SFRs by up to a factor of 10, and dust attenuations by up to a factor of 2.5. Finally, we show that the strength of our model AGN component correlates with independent AGN indicators, suggesting that these galaxies truly host AGN. Notably, only 46% of the SED-detected AGN would be detected with a simple MIR color selection. Based on these results, we conclude that SED models which fit MIR data without AGN components are vulnerable to substantial bias in their derived parameters.
EMU Suit Performance Simulation
NASA Technical Reports Server (NTRS)
Cowley, Matthew S.; Benson, Elizabeth; Harvill, Lauren; Rajulu, Sudhakar
2014-01-01
Introduction: Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. To verify that new suit designs meet requirements, full prototypes must be built and tested with human subjects. However, numerous design iterations will occur before the hardware meets those requirements. Traditional draw-prototype-test paradigms for research and development are prohibitively expensive with today's shrinking Government budgets. Personnel at NASA are developing modern simulation techniques that focus on a human-centric design paradigm. These new techniques make use of virtual prototype simulations and fully adjustable physical prototypes of suit hardware. This is extremely advantageous and enables comprehensive design down-selections to be made early in the design process. Objectives: The primary objective was to test modern simulation techniques for evaluating the human performance component of two EMU suit concepts, pivoted and planar style hard upper torso (HUT). Methods: This project simulated variations in EVA suit shoulder joint design and subject anthropometry and then measured the differences in shoulder mobility caused by the modifications. These estimations were compared to human-in-the-loop test data gathered during past suited testing using four subjects (two large males, two small females). Results: Results demonstrated that EVA suit modeling and simulation are feasible design tools for evaluating and optimizing suit design based on simulated performance. The suit simulation model was found to be advantageous in its ability to visually represent complex motions and volumetric reach zones in three dimensions, giving designers a faster and deeper comprehension of suit component performance vs. human performance. Suit models were able to discern differing movement capabilities between EMU HUT configurations, generic suit fit concerns, and specific suit fit concerns for crewmembers based on individual anthropometry
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
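As a toy illustration of the regression step (not the JPL model or data), the sketch below fits a multiple linear regression of per-file error rate on workload and novelty and reports the variance explained; all values are hypothetical placeholders.

```python
# Minimal sketch: multiple linear regression of error rate on workload and
# operational novelty, with R^2 as a goodness-of-fit summary. Data are
# hypothetical placeholders standing in for the mission dataset.
import numpy as np

workload = np.array([3, 5, 7, 4, 8, 6, 2, 9], dtype=float)   # subjective 1-10
novelty  = np.array([2, 4, 6, 3, 7, 5, 1, 8], dtype=float)   # subjective 1-10
rate     = np.array([0.01, 0.03, 0.06, 0.02, 0.09, 0.05, 0.01, 0.11])  # errors/file

X = np.column_stack([np.ones_like(workload), workload, novelty])
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((rate - pred) ** 2) / np.sum((rate - rate.mean()) ** 2)
print("coefficients:", np.round(beta, 4), " R^2 = %.2f" % r2)
```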
Towards Evolving Electronic Circuits for Autonomous Space Applications
NASA Technical Reports Server (NTRS)
Lohn, Jason D.; Haith, Gary L.; Colombano, Silvano P.; Stassinopoulos, Dimitris
2000-01-01
The relatively new field of Evolvable Hardware studies how simulated evolution can reconfigure, adapt, and design hardware structures in an automated manner. Space applications, especially those requiring autonomy, are potential beneficiaries of evolvable hardware. For example, robotic drilling from a mobile platform requires high-bandwidth controller circuits that are difficult to design. In this paper, we present automated design techniques based on evolutionary search that could potentially be used in such applications. First, we present a method of automatically generating analog circuit designs using evolutionary search and a circuit construction language. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm, we present experimental results for five design tasks. Second, we investigate the use of coevolution in automated circuit design. We examine fitness evaluation by comparing the effectiveness of four fitness schedules. The results indicate that solution quality is highest with static and co-evolving fitness schedules as compared to the other two dynamic schedules. We discuss these results and offer two possible explanations for the observed behavior: retention of useful information, and alignment of problem difficulty with circuit proficiency.
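For readers unfamiliar with evolutionary search, the sketch below shows the generic loop that such systems build on: selection, crossover, and mutation of a population evaluated against a fixed (static) fitness function. It uses plain parameter vectors and a toy fitness, not a circuit construction language or the co-evolving fitness schedules studied in the paper.

```python
# Minimal, generic sketch of an evolutionary search loop with a static fitness
# schedule. The "designs" are real-valued vectors and the fitness is a toy
# target-matching score, both illustrative stand-ins for circuit evaluation.
import numpy as np

rng = np.random.default_rng(42)

def fitness(x):
    target = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical desired response
    return -np.sum((x - target) ** 2)

pop = rng.normal(0.0, 2.0, size=(60, 4))
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]      # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, 4)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        child += rng.normal(0.0, 0.1, size=4)          # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best individual:", np.round(best, 3), " fitness:", round(fitness(best), 4))
```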
CalFitter: a web server for analysis of protein thermal denaturation data.
Mazurenko, Stanislav; Stourac, Jan; Kunka, Antonin; Nedeljkovic, Sava; Bednar, David; Prokop, Zbynek; Damborsky, Jiri
2018-05-14
Despite significant advances in the understanding of protein structure-function relationships, revealing protein folding pathways still poses a challenge due to a limited number of relevant experimental tools. Widely-used experimental techniques, such as calorimetry or spectroscopy, critically depend on a proper data analysis. Currently, there are only separate data analysis tools available for each type of experiment with a limited model selection. To address this problem, we have developed the CalFitter web server to be a unified platform for comprehensive data fitting and analysis of protein thermal denaturation data. The server allows simultaneous global data fitting using any combination of input data types and offers 12 protein unfolding pathway models for selection, including irreversible transitions often missing from other tools. The data fitting produces optimal parameter values, their confidence intervals, and statistical information to define unfolding pathways. The server provides an interactive and easy-to-use interface that allows users to directly analyse input datasets and simulate modelled output based on the model parameters. CalFitter web server is available free at https://loschmidt.chemi.muni.cz/calfitter/.
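The simplest member of the model family handled by such tools is a reversible two-state unfolding transition. The sketch below is illustrative only and is not the CalFitter code: it fits a van 't Hoff two-state model with linear folded/unfolded baselines to a synthetic melting curve; ΔCp is neglected and all data are placeholders.

```python
# Minimal sketch: two-state reversible unfolding curve with linear baselines,
# fitted to a synthetic thermal denaturation trace. Values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314e-3  # kJ / (mol K)

def two_state(T, Tm, dHm, yN, mN, yU, mU):
    dG = dHm * (1.0 - T / Tm)                 # van 't Hoff, delta-Cp neglected
    fU = 1.0 / (1.0 + np.exp(dG / (R * T)))   # fraction unfolded
    return (yN + mN * T) * (1.0 - fU) + (yU + mU * T) * fU

T = np.linspace(293.0, 363.0, 71)
signal = two_state(T, 328.0, 350.0, 1.0, -0.001, 0.2, -0.0005) \
         + 0.005 * np.random.randn(T.size)

p0 = (330.0, 300.0, 1.0, 0.0, 0.2, 0.0)
popt, pcov = curve_fit(two_state, T, signal, p0=p0)
print("Tm = %.1f K, dHm = %.0f kJ/mol" % (popt[0], popt[1]))
print("1-sigma on (Tm, dHm):", np.sqrt(np.diag(pcov))[:2])
```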
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
NASA Astrophysics Data System (ADS)
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney--that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition, both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries
NASA Technical Reports Server (NTRS)
Cutler, Andrew D.; Magnotti, Gaetano
2010-01-01
The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with noise and without it, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least squares fitting of signal, as opposed to least squares fitting signal or square-root signal, was shown to produce the least random error and minimize bias error in the fitted parameters.
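The weighting point can be illustrated independently of the CARS library machinery: when the noise variance scales with the signal, supplying per-point uncertainties to the fit (weighted least squares) improves the fitted parameters relative to an unweighted fit. The toy Gaussian line and noise model below are assumptions, not a CARS spectrum.

```python
# Minimal sketch: weighted vs. unweighted least-squares fitting of a spectrum
# whose noise variance scales with the signal. All data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def line(x, amp, x0, width, base):
    return amp * np.exp(-0.5 * ((x - x0) / width) ** 2) + base

x = np.linspace(-5.0, 5.0, 200)
truth = (100.0, 0.3, 0.8, 5.0)
clean = line(x, *truth)
sigma = np.sqrt(clean)                        # shot-noise-like, signal-dependent
data = clean + sigma * np.random.randn(x.size)

p0 = (80.0, 0.0, 1.0, 0.0)
p_unweighted, _ = curve_fit(line, x, data, p0=p0)
p_weighted, _ = curve_fit(line, x, data, p0=p0, sigma=sigma, absolute_sigma=True)
print("truth:     ", truth)
print("unweighted:", np.round(p_unweighted, 3))
print("weighted:  ", np.round(p_weighted, 3))
```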
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
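A one-covariate sketch of the improved two-stage idea (not the authors' multivariate code): squared OLS residuals are smoothed with a local-linear kernel fit to estimate the variance function, and the coefficients are then re-estimated by weighted (generalized) least squares. The bandwidth, kernel and data below are illustrative choices.

```python
# Minimal sketch: estimate an unknown variance function by locally smoothing
# squared OLS residuals, then re-fit the regression by weighted least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.uniform(0.0, 2.0, n)
sigma = 0.2 + 0.8 * x                                 # true heteroscedastic noise
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

# Stage 1: OLS and squared residuals.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = (y - X @ beta_ols) ** 2

def local_linear(x0, xs, ys, h=0.25):
    """Local-linear estimate of E[y|x=x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)
    Xl = np.column_stack([np.ones_like(xs), xs - x0])
    A = Xl.T @ (w[:, None] * Xl)
    b = Xl.T @ (w * ys)
    return np.linalg.solve(A, b)[0]

# Stage 2: smoothed variance function and weighted (generalized) least squares.
var_hat = np.array([max(local_linear(xi, x, r2), 1e-6) for xi in x])
W = 1.0 / var_hat
XtW = X.T * W
beta_gls = np.linalg.solve(XtW @ X, XtW @ y)
print("OLS:", np.round(beta_ols, 3), " GLS:", np.round(beta_gls, 3))
```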
Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M
2018-07-01
Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).
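The kind of model being compared across platforms can be written down compactly. The sketch below simulates a discrete-time, discrete-state chain-binomial SIR model with binomial process and observation error; the MCMC fitting itself (in JAGS, NIMBLE or Stan) is beyond this snippet, and all parameter values are illustrative.

```python
# Minimal sketch: stochastic, discrete-time, discrete-state SIR simulation with
# process error (binomial transitions) and observation error (binomial
# reporting). Parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

def simulate(beta=0.5, gamma=0.25, report_prob=0.6, N=10_000, I0=10, steps=60):
    S, I = N - I0, I0
    observed = []
    for _ in range(steps):
        p_inf = 1.0 - np.exp(-beta * I / N)           # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)              # process error
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I = S - new_inf, I + new_inf - new_rec
        observed.append(rng.binomial(new_inf, report_prob))  # observation error
    return np.array(observed)

cases = simulate()
print("peak reported incidence:", cases.max(), "at step", int(cases.argmax()))
```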
The Sixth Spectrum of Iridium (Ir VI): Determination of the 5d⁴, 5d³6s and 5d³6p Configurations
NASA Astrophysics Data System (ADS)
Azarov, V. I.; Gayasov, R. R.; Joshi, Y. N.; Churilov, S. S.
The spectrum of five-times-ionized iridium, Ir VI, was investigated in the 420-1520 Å wavelength region. The analysis has led to the determination of the 5d⁴, 5d³6s and 5d³6p configurations. Thirty of the thirty-four theoretically possible 5d⁴ levels, 27 of the 38 possible 5d³6s levels and 96 of the 110 possible 5d³6p levels have been established. The levels are based on 711 classified spectral lines. The level structure of the configurations has been theoretically interpreted using the orthogonal operators technique. The energy parameters have been determined by a least squares fit to the observed levels. Calculated energy values and LS-compositions obtained from the fitted parameter values are given.
Multi-Scale Measures of Rugosity, Slope and Aspect from Benthic Stereo Image Reconstructions
Friedman, Ariell; Pizarro, Oscar; Williams, Stefan B.; Johnson-Roberson, Matthew
2012-01-01
This paper demonstrates how multi-scale measures of rugosity, slope and aspect can be derived from fine-scale bathymetric reconstructions created from geo-referenced stereo imagery. We generate three-dimensional reconstructions over large spatial scales using data collected by Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), manned submersibles and diver-held imaging systems. We propose a new method for calculating rugosity in a Delaunay triangulated surface mesh by projecting areas onto the plane of best fit using Principal Component Analysis (PCA). Slope and aspect can be calculated with very little extra effort, and fitting a plane serves to decouple rugosity from slope. We compare the results of the virtual terrain complexity calculations with experimental results using conventional in-situ measurement methods. We show that performing calculations over a digital terrain reconstruction is more flexible, robust and easily repeatable. In addition, the method is non-contact and provides much less environmental impact compared to traditional survey techniques. For diver-based surveys, the time underwater needed to collect rugosity data is significantly reduced and, being a technique based on images, it is possible to use robotic platforms that can operate beyond diver depths. Measurements can be calculated exhaustively at multiple scales for surveys with tens of thousands of images covering thousands of square metres. The technique is demonstrated on data gathered by a diver-rig and an AUV, on small single-transect surveys and on a larger, dense survey that covers over . Stereo images provide 3D structure as well as visual appearance, which could potentially feed into automated classification techniques. Our multi-scale rugosity, slope and aspect measures have already been adopted in a number of marine science studies. This paper presents a detailed description of the method and thoroughly validates it against traditional in-situ measurements. PMID:23251370
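The proposed rugosity measure reduces to a small amount of linear algebra per mesh patch: find the plane of best fit by PCA, project the triangles onto it, and take the ratio of true to projected area. The sketch below does this for a toy two-triangle patch; the mesh is a placeholder, and decimation, multi-scale windowing and georeferencing are omitted.

```python
# Minimal sketch: rugosity of a triangulated patch as (3D surface area) /
# (area projected onto the PCA plane of best fit). The tiny mesh is a stand-in.
import numpy as np

def tri_area_3d(p0, p1, p2):
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def tri_area_2d(p0, p1, p2):
    (x1, y1), (x2, y2) = p1 - p0, p2 - p0
    return 0.5 * abs(x1 * y2 - x2 * y1)

def rugosity(vertices, faces):
    # Plane of best fit: spanned by the two leading principal axes (PCA/SVD).
    centred = vertices - vertices.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt[:2].T              # vertex coordinates within the plane
    area3d = sum(tri_area_3d(*vertices[list(f)]) for f in faces)
    area2d = sum(tri_area_2d(*proj[list(f)]) for f in faces)
    return area3d / area2d                 # >= 1, and decoupled from slope

# A small bumpy patch (two triangles) as a placeholder mesh.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.3], [1.0, 1.0, 0.0], [0.0, 1.0, 0.4]])
faces = [(0, 1, 2), (0, 2, 3)]
print("rugosity = %.3f" % rugosity(verts, faces))
```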
Methodologies for Development of Patient Specific Bone Models from Human Body CT Scans
NASA Astrophysics Data System (ADS)
Chougule, Vikas Narayan; Mulay, Arati Vinayak; Ahuja, Bharatkumar Bhagatraj
2016-06-01
This work deals with the development of algorithms for physical replication of patient-specific human bones and construction of the corresponding implant/insert RP models, using a Reverse Engineering approach applied to non-invasive medical images for surgical purposes. In the medical field, volumetric data, i.e. voxel and triangular-facet based models, are primarily used for bio-modelling and visualization, which requires huge memory space. On the other side, recent advances in Computer Aided Design (CAD) technology provide additional facilities/functions for design, prototyping and manufacturing of any object having freeform surfaces, based on boundary representation techniques. This work presents a process for physical replication of 3D rapid prototyping (RP) models of human bone using various CAD modeling techniques developed from 3D point cloud data obtained from non-invasive CT/MRI scans in DICOM 3.0 format. The point cloud data are used for construction of a 3D CAD model by fitting B-spline curves through these points and then fitting surfaces between these curve networks by using swept blend techniques. The same result can also be achieved by generating a triangular mesh directly from the 3D point cloud data, without developing any surface model in commercial CAD software. The STL file generated from the 3D point cloud data is used as the basic input for the RP process. The Delaunay tetrahedralization approach is used to process the 3D point cloud data to obtain the STL file. CT scan data of a metacarpus (human bone) is used as the case study for the generation of the 3D RP model. A 3D physical model of the human bone is generated on a rapid prototyping machine and its virtual reality model is presented for visualization. The CAD models generated by the different techniques are compared for accuracy and reliability. The results of this research work are assessed for clinical reliability in replication of human bone in the medical field.
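One step in the pipeline, fitting a smooth closed B-spline curve through the boundary points of a single slice, can be sketched with SciPy as below; the noisy elliptical contour stands in for segmented bone boundary points, and the smoothing factor is an arbitrary choice.

```python
# Minimal sketch: fit a closed (periodic) cubic B-spline through noisy 2D
# contour points of one slice. The contour is a synthetic placeholder.
import numpy as np
from scipy.interpolate import splprep, splev

theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
x = 12.0 * np.cos(theta) + 0.3 * np.random.randn(theta.size)   # mm
y = 8.0 * np.sin(theta) + 0.3 * np.random.randn(theta.size)
x, y = np.append(x, x[0]), np.append(y, y[0])                  # close the contour

tck, u = splprep([x, y], s=2.0, per=True)      # periodic cubic B-spline fit
xs, ys = splev(np.linspace(0.0, 1.0, 400), tck)
print("fitted %d spline coefficients; resampled to %d boundary points"
      % (len(tck[1][0]), xs.size))
```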
SU-E-T-75: A Simple Technique for Proton Beam Range Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgdorf, B; Kassaee, A; Garver, E
2015-06-15
Purpose: To develop a measurement-based technique to verify the range of proton beams for quality assurance (QA). Methods: We developed a simple technique to verify the proton beam range with in-house fabricated devices. Two separate devices were fabricated; a clear acrylic rectangular cuboid and a solid polyvinyl chloride (PVC) step wedge. For efficiency in our clinic, we used the rectangular cuboid for double scattering (DS) beams and the step wedge for pencil beam scanning (PBS) beams. These devices were added to our QA phantom to measure dose points along the distal fall-off region (between 80% and 20%) in addition to dose at mid-SOBP (spread out Bragg peak) using a two-dimensional parallel plate chamber array (MatriXX™, IBA Dosimetry, Schwarzenbruck, Germany). This method relies on the fact that the slope of the distal fall-off is linear and does not vary with small changes in energy. Using a multi-layer ionization chamber (Zebra™, IBA Dosimetry), percent depth dose (PDD) curves were measured for our standard daily QA beams. The range (energy) for each beam was then varied (i.e., ±2 mm and ±5 mm) and additional PDD curves were measured. The distal fall-off of all PDD curves was fit to a linear equation. The distal fall-off measured dose for a particular beam was used in our linear equation to determine the beam range. Results: The linear fit of the fall-off region for the PDD curves, when varying the range by a few millimeters for a specific QA beam, yielded identical slopes. The calculated range based on measured point dose(s) in the fall-off region using the slope resulted in agreement within ±1 mm of the expected beam range. Conclusion: We developed a simple technique for accurately verifying the beam range for proton therapy QA programs.
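The range-verification arithmetic described above amounts to a straight-line fit of the distal fall-off and its inversion at a measured point dose. A hedged sketch follows; the function names are illustrative, and only the 80%-20% window is taken from the abstract.

```python
import numpy as np

def falloff_slope(depths_mm, doses_pct):
    """Fit dose vs. depth to a line within the 80%-20% distal fall-off region."""
    depths_mm = np.asarray(depths_mm, float)
    doses_pct = np.asarray(doses_pct, float)
    mask = (doses_pct <= 80.0) & (doses_pct >= 20.0)
    slope, intercept = np.polyfit(depths_mm[mask], doses_pct[mask], 1)
    return slope, intercept

def depth_from_point_dose(measured_dose_pct, slope, intercept):
    """Invert the linear fit: depth at which the fall-off reaches the measured dose."""
    return (measured_dose_pct - intercept) / slope
```

Whatever range convention is used clinically (for example the depth of a fixed distal dose level) would then be applied consistently to the inverted depth.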
FitEM2EM—Tools for Low Resolution Study of Macromolecular Assembly and Dynamics
Frankenstein, Ziv; Sperling, Joseph; Sperling, Ruth; Eisenstein, Miriam
2008-01-01
Studies of the structure and dynamics of macromolecular assemblies often involve comparison of low resolution models obtained using different techniques such as electron microscopy or atomic force microscopy. We present new computational tools for comparing (matching) and docking of low resolution structures, based on shape complementarity. The matched or docked objects are represented by three dimensional grids where the value of each grid point depends on its position with regard to the interior, surface or exterior of the object. The grids are correlated using fast Fourier transformations producing either matches of related objects or docking models depending on the details of the grid representations. The procedures incorporate thickening and smoothing of the surfaces of the objects which effectively compensates for differences in the resolution of the matched/docked objects, circumventing the need for resolution modification. The presented matching tool FitEM2EMin successfully fitted electron microscopy structures obtained at different resolutions, different conformers of the same structure and partial structures, ranking correct matches at the top in every case. The differences between the grid representations of the matched objects can be used to study conformation differences or to characterize the size and shape of substructures. The presented low-to-low docking tool FitEM2EMout ranked the expected models at the top. PMID:18974836
Runoff Potentiality of a Watershed through SCS and Functional Data Analysis Technique
Adham, M. I.; Shirazi, S. M.; Othman, F.; Rahman, S.; Yusop, Z.; Ismail, Z.
2014-01-01
Runoff potentiality of a watershed was assessed based on identifying curve number (CN), soil conservation service (SCS), and functional data analysis (FDA) techniques. Daily discrete rainfall data were collected from weather stations in the study area and analyzed with the lowess method for curve smoothing. As runoff data represent a periodic pattern in each watershed, a Fourier series was introduced to fit the smoothed curve of eight watersheds. Seven Fourier terms were used for watersheds 5 and 8, while eight terms were used for the remaining watersheds to obtain the best fit of the data. Bootstrapped smooth-curve analysis reveals that watersheds 1, 2, 3, 6, 7, and 8 have monthly mean runoffs of 29, 24, 22, 23, 26, and 27 mm, respectively, and these watersheds would likely contribute to surface runoff in the study area. The purpose of this study was to transform runoff data into a smooth curve representing the surface runoff pattern and mean runoff of each watershed through statistical methods. This study provides information on the runoff potentiality of each watershed and also provides input data for hydrological modeling. PMID:25152911
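A truncated Fourier series of the kind described can be fitted by ordinary least squares. The sketch below is illustrative only: the 12-month period and the seven- or eight-term choice follow the abstract, but everything else (names, data layout) is an assumption.

```python
import numpy as np

def fourier_design(t, n_terms, period=12.0):
    """Design matrix [1, cos(k w t), sin(k w t)] for a truncated Fourier series."""
    w = 2.0 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.column_stack(cols)

def fit_fourier(t, runoff, n_terms=8):
    """Least-squares Fourier fit; returns coefficients and the smoothed curve."""
    t = np.asarray(t, float)
    A = fourier_design(t, n_terms)
    coef, *_ = np.linalg.lstsq(A, np.asarray(runoff, float), rcond=None)
    return coef, A @ coef
```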
Histogram-based ionogram displays and their application to autoscaling
NASA Astrophysics Data System (ADS)
Lynn, Kenneth J. W.
2018-03-01
A simple method is described for displaying and auto scaling the basic ionogram parameters foF2 and h'F2 as well as some additional layer parameters from digital ionograms. The technique employed is based on forming frequency and height histograms in each ionogram. This technique has now been applied specifically to ionograms produced by the IPS5D ionosonde developed and operated by the Australian Space Weather Service (SWS). The SWS ionograms are archived in a cleaned format and readily available from the SWS internet site. However, the method is applicable to any ionosonde which produces ionograms in a digital format at a useful signal-to-noise level. The most novel feature of the technique for autoscaling is its simplicity and the avoidance of the mathematical imaging and line fitting techniques often used. The program arose from the necessity to display many days of ionogram output to allow the location of specific types of ionospheric event such as ionospheric storms, travelling ionospheric disturbances and repetitive ionospheric height changes for further investigation and measurement. Examples and applications of the method are given including the removal of sporadic E and spread F.
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data and apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional search to a one-dimensional line search to improve computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, convergence robustness, and computation time. The simulation demonstrated that VP and LM were both accurate in that the medians closely matched assumed values across typical signal to noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100% of cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being more reliably (100%) convergent and computationally about 3× (TM) and 2× (ETM) faster than the LM-based method. Copyright © 2017 Elsevier Inc. All rights reserved.
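The variable projection idea can be illustrated for the standard (two-parameter) Tofts model: for a fixed kep, the tissue curve is linear in Ktrans, so Ktrans is eliminated analytically and only kep needs to be searched. The following sketch assumes uniform time sampling and a known arterial input function cp; it is a generic illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def tofts_basis(cp, t, kep):
    """Discrete convolution of the AIF with exp(-kep*t); Ct = Ktrans * basis."""
    dt = t[1] - t[0]                      # assumes uniform sampling
    kernel = np.exp(-kep * t)
    return np.convolve(cp, kernel)[: len(t)] * dt

def fit_tofts_vp(ct, cp, t, kep_bounds=(1e-3, 5.0)):
    """Variable projection: eliminate Ktrans analytically, line-search over kep."""
    def residual(kep):
        b = tofts_basis(cp, t, kep)
        ktrans = (b @ ct) / (b @ b)       # closed-form linear least-squares solution
        return np.sum((ct - ktrans * b) ** 2)

    res = minimize_scalar(residual, bounds=kep_bounds, method="bounded")
    kep = res.x
    b = tofts_basis(cp, t, kep)
    ktrans = (b @ ct) / (b @ b)
    return ktrans, kep
```

Reducing the search to one dimension is exactly what makes this kind of solver cheaper and harder to derail than a general multi-parameter optimizer.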
Yan, Guanyong; Wang, Xiangzhao; Li, Sikun; Yang, Jishuo; Xu, Dongbo; Erdmann, Andreas
2014-03-10
We propose an in situ aberration measurement technique based on an analytical linear model of through-focus aerial images. The aberrations are retrieved from aerial images of six isolated space patterns, which have the same width but different orientations. The imaging formulas of the space patterns are investigated and simplified, and then an analytical linear relationship between the aerial image intensity distributions and the Zernike coefficients is established. The linear relationship is composed of linear fitting matrices and rotation matrices, which can be calculated numerically in advance and utilized to retrieve Zernike coefficients. Numerical simulations using the lithography simulators PROLITH and Dr.LiTHO demonstrate that the proposed method can measure wavefront aberrations up to Z(37). Experiments on a real lithography tool confirm that our method can monitor lens aberration offset with an accuracy of 0.7 nm.
Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project
NASA Technical Reports Server (NTRS)
Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)
2001-01-01
The report describes a method for performing optimization of a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to have the user control the number of times the analysis is invoked. The reported method achieves that by two techniques in the Design of Experiments category: a uniform dispersal of the trial design points over an n-dimensional hypersphere combined with response surface fitting, and the technique of kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before activation of the optimization code and the results are stored as a database. That code is then executed and refers to the above database. Two applications, one to an airborne laser system and one to aircraft optimization, illustrate the method.
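A minimal sketch of that workflow is shown below, with an RBF interpolant standing in for the response-surface/kriging step; all names, sample sizes and the choice of surrogate are illustrative assumptions, not the report's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def hypersphere_samples(n_points, n_dim, radius=1.0, seed=None):
    """Uniformly dispersed trial designs on the surface of an n-dimensional hypersphere."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_points, n_dim))
    return radius * x / np.linalg.norm(x, axis=1, keepdims=True)

def surrogate_optimize(expensive_analysis, n_dim, n_points=50, radius=1.0):
    """Run the analysis only at the trial designs, then optimize the fitted surface."""
    X = hypersphere_samples(n_points, n_dim, radius)
    y = np.array([expensive_analysis(x) for x in X])     # the only expensive calls
    surface = RBFInterpolator(X, y, kernel="thin_plate_spline")
    x0 = X[np.argmin(y)]                                 # start from the best sample
    res = minimize(lambda x: surface(x[None, :])[0], x0)
    return res.x, res.fun
```

The optimizer only ever queries the cheap fitted surface, so the user's sample budget fixes the number of expensive analyses up front, which is the point of the approach.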
Wang, Yue; Adalý, Tülay; Kung, Sun-Yuan; Szabo, Zsolt
2007-01-01
This paper presents a probabilistic neural network based technique for unsupervised quantification and segmentation of brain tissues from magnetic resonance images. It is shown that this problem can be solved by distribution learning and relaxation labeling, resulting in an efficient method that may be particularly useful in quantifying and segmenting abnormal brain tissues where the number of tissue types is unknown and the distributions of tissue types heavily overlap. The new technique uses suitable statistical models for both the pixel and context images and formulates the problem in terms of model-histogram fitting and global consistency labeling. The quantification is achieved by probabilistic self-organizing mixtures and the segmentation by a probabilistic constraint relaxation network. The experimental results show the efficient and robust performance of the new algorithm and that it outperforms the conventional classification based approaches. PMID:18172510
Robust pupil center detection using a curvature algorithm
NASA Technical Reports Server (NTRS)
Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)
1999-01-01
Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
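A compact sketch of boundary-curvature filtering followed by a least-squares conic (ellipse) fit is given below; the curvature threshold and the assumption of an ordered, closed boundary are illustrative choices, not values from the paper.

```python
import numpy as np

def discrete_curvature(x, y):
    """Curvature of an ordered, closed boundary from finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

def pupil_center(x, y, curv_threshold=0.05):
    """Drop boundary points with abnormal curvature, then fit a conic by least squares."""
    k = discrete_curvature(x, y)
    keep = np.abs(k) < curv_threshold            # heuristic threshold (illustrative)
    xk, yk = x[keep], y[keep]
    # General conic A x^2 + B xy + C y^2 + D x + E y + F = 0, via the SVD null vector
    D = np.column_stack([xk**2, xk * yk, yk**2, xk, yk, np.ones_like(xk)])
    _, _, vt = np.linalg.svd(D)
    A, B, C, Dc, E, F = vt[-1]
    # Center of the conic: gradient of the quadratic form vanishes there
    cx, cy = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-Dc, -E])
    return cx, cy
```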
Simulation of hypersonic rarefied flows with the immersed-boundary method
NASA Astrophysics Data System (ADS)
Bruno, D.; De Palma, P.; de Tullio, M. D.
2011-05-01
This paper provides a validation of an immersed boundary method for computing hypersonic rarefied gas flows. The method is based on the solution of the Navier-Stokes equation and is validated versus numerical results obtained by the DSMC approach. The Navier-Stokes solver employs a flexible local grid refinement technique and is implemented on parallel machines using a domain-decomposition approach. Thanks to the efficient grid generation process, based on the ray-tracing technique, and the use of the METIS software, it is possible to obtain the partitioned grids to be assigned to each processor with a minimal effort by the user. This allows one to by-pass the expensive (in terms of time and human resources) classical generation process of a body fitted grid. First-order slip-velocity boundary conditions are employed and tested for taking into account rarefied gas effects.
Image fusion for visualization of hepatic vasculature and tumors
NASA Astrophysics Data System (ADS)
Chou, Jin-Shin; Chen, Shiuh-Yung J.; Sudakoff, Gary S.; Hoffmann, Kenneth R.; Chen, Chin-Tu; Dachman, Abraham H.
1995-05-01
We have developed segmentation and simultaneous display techniques to facilitate the visualization of the three-dimensional spatial relationships between organ structures and organ vasculature. We concentrate on the visualization of the liver based on spiral computed tomography images. Surface-based 3-D rendering and maximal intensity projection algorithms are used for data visualization. To extract the liver in the series of images accurately and efficiently, we have developed a user-friendly interactive program with a deformable-model segmentation. Surface rendering techniques are used to visualize the extracted structures; adjacent contours are aligned and fitted with a Bezier surface to yield a smooth surface. Visualization of the vascular structures, portal and hepatic veins, is achieved by applying a MIP technique to the extracted liver volume. To integrate the extracted structures, they are surface-rendered, their MIP images are aligned, and a color table is designed for simultaneous display of the combined liver/tumor and vasculature images. By combining the 3-D surface rendering and MIP techniques, portal veins, hepatic veins, and hepatic tumor can be inspected simultaneously and their spatial relationships can be more easily perceived. The proposed technique will be useful for visualization of both hepatic neoplasm and vasculature in surgical planning for tumor resection or living-donor liver transplantation.
Sensor data fusion for spectroscopy-based detection of explosives
NASA Astrophysics Data System (ADS)
Shah, Pratik V.; Singh, Abhijeet; Agarwal, Sanjeev; Sedigh, Sahra; Ford, Alan; Waterbury, Robert
2009-05-01
In-situ trace detection of explosive compounds such as RDX, TNT, and ammonium nitrate, is an important problem for the detection of IEDs and IED precursors. Spectroscopic techniques such as LIBS and Raman have shown promise for the detection of residues of explosive compounds on surfaces from standoff distances. Individually, both LIBS and Raman techniques suffer from various limitations, e.g., their robustness and reliability suffers due to variations in peak strengths and locations. However, the orthogonal nature of the spectral and compositional information provided by these techniques makes them suitable candidates for the use of sensor fusion to improve the overall detection performance. In this paper, we utilize peak energies in a region by fitting Lorentzian or Gaussian peaks around the location of interest. The ratios of peak energies are used for discrimination, in order to normalize the effect of changes in overall signal strength. Two data fusion techniques are discussed in this paper. Multi-spot fusion is performed on a set of independent samples from the same region based on the maximum likelihood formulation. Furthermore, the results from LIBS and Raman sensors are fused using linear discriminators. Improved detection performance with significantly reduced false alarm rates is reported using fusion techniques on data collected for sponsor demonstration at Fort Leonard Wood.
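The peak-energy ratios described above can be obtained with a standard nonlinear fit. A hedged sketch follows; the line centers, window width, and the Lorentzian-plus-constant model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma, offset):
    """Lorentzian line shape on a constant background."""
    return amp * gamma**2 / ((x - x0) ** 2 + gamma**2) + offset

def peak_energy(wavelength, intensity, line_center, window=2.0):
    """Fit a Lorentzian around one emission line and return its integrated energy."""
    m = np.abs(wavelength - line_center) < window
    p0 = [intensity[m].max(), line_center, 0.2, np.median(intensity[m])]
    popt, _ = curve_fit(lorentzian, wavelength[m], intensity[m], p0=p0)
    amp, _, gamma, _ = popt
    return np.pi * amp * gamma          # area under the background-subtracted peak

def peak_ratio(wavelength, intensity, line_a, line_b):
    """Ratio of two peak energies, normalizing out overall signal strength."""
    return (peak_energy(wavelength, intensity, line_a)
            / peak_energy(wavelength, intensity, line_b))
```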
An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm
NASA Astrophysics Data System (ADS)
Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin
2018-04-01
Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods owing to its efficiency and its ability to handle the ambiguity of images. However, the success of FCM is not guaranteed because it is easily trapped in locally optimal solutions. Cuckoo search (CS) is a novel evolutionary algorithm that has been tested on several optimization problems and has proved highly efficient. Therefore, a new segmentation technique blending FCM with the CS algorithm is put forward in this paper. Further, the proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust, adaptive and exhibits better performance than the other methods considered in the paper.
A New Methodology for Vibration Error Compensation of Optical Encoders
Lopez, Jesus; Artes, Mariano
2012-01-01
Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in the position accuracy as the measurement signals depart from ideal conditions. If the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behavior can be improved with different techniques that compensate for the error by processing the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy. PMID:22666067
Double-path acquisition of pulse wave transit time and heartbeat using self-mixing interferometry
NASA Astrophysics Data System (ADS)
Wei, Yingbin; Huang, Wencai; Wei, Zheng; Zhang, Jie; An, Tong; Wang, Xiulin; Xu, Huizhen
2017-06-01
We present a technique based on self-mixing interferometry for acquiring the pulse wave transit time (PWTT) and heartbeat. A signal processing method based on Continuous Wavelet Transform and Hilbert Transform is applied to extract potentially useful information in the self-mixing interference (SMI) signal, including PWTT and heartbeat. Then, some cardiovascular characteristics of the human body are easily acquired without retrieving the SMI signal by complicated algorithms. Experimentally, the PWTT is measured on the finger and the toe of the human body using double-path self-mixing interferometry. Experimental statistical data show the relation between the PWTT and blood pressure, which can be used to estimate the systolic pressure value by fitting. Moreover, the measured heartbeat shows good agreement with that obtained by a photoplethysmography sensor. The method that we demonstrate, which is based on self-mixing interferometry with significant advantages of simplicity, compactness and non-invasion, effectively illustrates the viability of the SMI technique for measuring other cardiovascular signals.
Real-time terahertz imaging through self-mixing in a quantum-cascade laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wienold, M., E-mail: martin.wienold@dlr.de; Rothbart, N.; Hübers, H.-W.
2016-07-04
We report on a fast self-mixing approach for real-time, coherent terahertz imaging based on a quantum-cascade laser and a scanning mirror. Due to a fast deflection of the terahertz beam, images with frame rates up to several Hz are obtained, eventually limited by the mechanical inertia of the employed scanning mirror. A phase modulation technique allows for the separation of the amplitude and phase information without the necessity of parameter fitting routines. We further demonstrate the potential for transmission imaging.
Managing Problems Before Problems Manage You.
Grigsby, Jim
2015-01-01
Every day we face problems, both personal and professional, and our initial reaction determines how well we solve those problems. Whether a problem is minor or major, short-term or lingering, there are techniques we can employ to help manage the problem and the problem-solving process. This article, based on my book Don't Tick Off The Gators! Managing Problems Before Problems Manage You, presents 12 different concepts for managing problems, not "cookie cutter" solutions, but different ideas that you can apply as they fit your circumstances.
Finite Volume Method for Pricing European Call Option with Regime-switching Volatility
NASA Astrophysics Data System (ADS)
Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM
2018-03-01
In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme reverts to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.
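For orientation, the sketch below shows the backward implicit time-stepping structure for a single-regime Black-Scholes equation using a plain finite-difference discretization; it is not the fitted finite volume scheme of the paper, and extending it to regime switching would couple the systems for the different volatility regimes. All grid sizes are illustrative.

```python
import numpy as np

def european_call_implicit(K, r, sigma, T, s_max=300.0, n_s=300, n_t=300):
    """Backward implicit time stepping for the single-regime Black-Scholes PDE."""
    s = np.linspace(0.0, s_max, n_s + 1)
    dt = T / n_t
    v = np.maximum(s - K, 0.0)                        # terminal payoff

    i = np.arange(1, n_s)                             # interior node indices
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)          # multiplies V_{i-1}
    b = 1.0 + dt * (sigma**2 * i**2 + r)              # multiplies V_i
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)          # multiplies V_{i+1}
    M = np.diag(b) - np.diag(a[1:], -1) - np.diag(c[:-1], 1)

    for n in range(n_t):
        tau = (n + 1) * dt                            # time to expiry at the new level
        rhs = v[1:-1].copy()
        rhs[-1] += c[-1] * (s_max - K * np.exp(-r * tau))   # call boundary at s_max
        # boundary at S = 0 contributes nothing for a call (V = 0 there)
        v[1:-1] = np.linalg.solve(M, rhs)
        v[0], v[-1] = 0.0, s_max - K * np.exp(-r * tau)
    return s, v
```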
A New and Fast Method for Smoothing Spectral Imaging Data
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Liu, Ming; Davis, Curtiss O.
1998-01-01
The Airborne Visible Infrared Imaging Spectrometer (AVIRIS) acquires spectral imaging data covering the 0.4 - 2.5 micron wavelength range in 224 10-nm-wide channels from a NASA ER-2 aircraft at 20 km. More than half of the spectral region is affected by atmospheric gaseous absorption. Over the past decade, several techniques have been used to remove atmospheric effects from AVIRIS data for the derivation of surface reflectance spectra. An operational atmosphere removal algorithm (ATREM), which is based on theoretical modeling of atmospheric absorption and scattering effects, has been developed and updated for deriving surface reflectance spectra from AVIRIS data. Due to small errors in assumed wavelengths and errors in line parameters compiled in the HITRAN database, small spikes (particularly near the centers of the 0.94- and 1.14-micron water vapor bands) are present in the derived reflectance spectra. Similar small spikes are systematically present in entire ATREM output cubes. These spikes have distracted geologists who are interested in studying surface mineral features. A method based on the "global" fitting of spectra with low order polynomials or other functions for removing these weak spikes has recently been developed by Boardman (this volume). In this paper, we describe another technique, which fits spectra "locally" based on cubic spline smoothing, for quick post processing of ATREM apparent reflectance spectra derived from AVIRIS data. Results from our analysis of AVIRIS data acquired over Cuprite mining district in Nevada in June of 1995 are given. Comparisons between our smoothed spectra and those derived with the empirical line method are presented.
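Local cubic-spline smoothing of a single reflectance spectrum can be done directly with SciPy; the sketch below is a generic illustration (not the paper's code), and the smoothing factor would be tuned so that only the weak residual spikes are removed.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_reflectance(wavelength, reflectance, s_factor=0.5):
    """Cubic smoothing spline through one apparent-reflectance spectrum."""
    # s controls the fidelity/smoothness trade-off (illustrative value);
    # wavelengths must be strictly increasing for UnivariateSpline
    spline = UnivariateSpline(wavelength, reflectance, k=3, s=s_factor)
    return spline(wavelength)
```

Applied pixel by pixel, this post-processes an entire reflectance cube without the global polynomial fit of the alternative approach mentioned above.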
NASA Astrophysics Data System (ADS)
Ouriev, Boris; Windhab, Erich; Braun, Peter; Birkhofer, Beat
2004-10-01
In-line visualization and on-line characterization of nontransparent fluids become an important subject for process development in food and nonfood industries. In our work, a noninvasive Doppler ultrasound-based technique is introduced. Such a technique is applied for investigation of nonstationary flow in the chocolate precrystallization process. Unstable flow conditions were induced by abrupt flow interruption and were followed by strong flow pulsations in the piping system. Relying only on available process information, such as absolute pressures and temperatures, no analysis of flow conditions or characterization of suspension properties is possible. It is obvious that chocolate flow properties are sensitive to flow boundary conditions. Therefore, it becomes essential to perform reliable monitoring of the structural state, particularly for nonstationary flow processes. Such flow instabilities in chocolate processing can often lead to failed product quality with interruption of the mainstream production. As will be discussed, a combination of flow velocity profiles, on-line fits to the flow profiles, and pressure difference measurement is sufficient for reliable analysis of fluid properties and flow boundary conditions as well as monitoring of the flow state. Analyses of the flow state and flow properties of the chocolate suspension are based on on-line measurement of one-dimensional velocity profiles across the flow channel and their on-line characterization with the power-law model. Conclusions about flow boundary conditions were drawn from a calculated velocity standard mean deviation, the parameters of the power-law fit to the velocity profiles, and volumetric flow rate information.
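As an illustration of the power-law characterization step, the sketch below fits the fully developed power-law pipe-flow profile v(r) = v_max [1 - (|r|/R)^((n+1)/n)] to one measured velocity profile; the profile model, function names and starting values are assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law_profile(r, v_max, n, radius):
    """Fully developed pipe-flow profile of a power-law fluid with flow index n."""
    return v_max * (1.0 - (np.abs(r) / radius) ** ((n + 1.0) / n))

def fit_profile(r, v, radius):
    """Fit v_max and the power-law index n to one measured velocity profile."""
    f = lambda rr, v_max, n: power_law_profile(rr, v_max, n, radius)
    (v_max, n), _ = curve_fit(f, r, v, p0=[v.max(), 1.0])
    return v_max, n
```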
Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H
2009-01-01
This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses in 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of relative blood volume (RBV) change with time, as well as percentage change in HR with respect to RBV, were obtained. The ε-insensitive loss function was used for SVR modeling. Selection of the design parameters, which include the capacity (C), the insensitivity region (ε) and the RBF kernel parameter (σ), was made based on a grid search approach, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data based on a k-fold cross-validation technique. Linear regression was also applied to fit the curves and the AMSE was calculated for comparison with SVR. For the model of RBV with time, SVR gave a lower AMSE for both training (AMSE=1.5) and testing data (AMSE=1.4) compared to linear regression (AMSE=1.8 and 1.5). SVR also provided a better fit for HR with RBV for both training and testing data (AMSE=15.8 and 16.4) compared to linear regression (AMSE=25.2 and 20.1).
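The grid-search-with-cross-validation procedure maps naturally onto scikit-learn. The sketch below is a generic illustration; the parameter grids and fold count are assumptions, not the values used in the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, KFold

def fit_rbv_model(time_min, rbv_pct, k=5):
    """Grid-search an RBF-kernel SVR for relative blood volume vs. time."""
    X = np.asarray(time_min, float).reshape(-1, 1)
    y = np.asarray(rbv_pct, float)
    grid = {
        "C": [0.1, 1.0, 10.0, 100.0],
        "epsilon": [0.01, 0.1, 0.5],
        "gamma": [0.001, 0.01, 0.1, 1.0],   # plays the role of the kernel width
    }
    search = GridSearchCV(SVR(kernel="rbf"), grid,
                          scoring="neg_mean_squared_error",
                          cv=KFold(n_splits=k, shuffle=True, random_state=0))
    search.fit(X, y)
    return search.best_estimator_, -search.best_score_   # model and its CV MSE
```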
NASA Astrophysics Data System (ADS)
Molinari, Filippo; Acharya, Rajendra; Zeng, Guang; Suri, Jasjit S.
2011-03-01
The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular diseases. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterized a new and completely automated technique for carotid segmentation and IMT measurement based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting for automatically locating the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We call our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. IMT measurement bias was 0.032 +/- 0.141 mm, better than other automated techniques and comparable to that of user-driven methodologies. CARES processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensured complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing of large datasets in multicenter studies involving atherosclerosis.
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximization...
Lathe-cut hydrophilic contact lenses: report of 100 clinical cases.
Espy, J W
1978-10-01
In a review of the literature, it became apparent that there were very few articles describing the advantages, as well as the fitting techniques, of lathe-cut hydrophilic contact lenses. Few practitioners, including those who fit other types of hydrophilic lenses and hard lenses, have had any experience with this lens, and considerable interest has been generated by fragmentary reports of good results. This paper describes in detail the geometry of the first lathe-cut hydrophilic lens approved by the Food and Drug Administration, the fitting methods utilizing trial lenses, and the results of 100 patients successfully fitted.
Efe, Turgay; Füglein, Alexander; Heyse, Thomas J; Stein, Thomas; Timmesfeld, Nina; Fuchs-Winkelmann, Susanne; Schmitt, Jan; Paletta, Jürgen R J; Schofer, Markus D
2012-02-01
Adequate graft fixation over a certain time period is necessary for successful cartilage repair and permanent integration of the graft into the surrounding tissue. The aim of the present study was to test the primary stability of a new cell-free collagen gel plug (CaReS(®)-1S) with two different graft fixation techniques over a simulated early postoperative period. Isolated chondral lesions (11 mm diameter by 6 mm deep) down to the subchondral bone plate were created on the medial femoral condyle in 40 porcine knee specimens. The collagen scaffolds were fixed in 20 knees each by press-fit only or by press-fit + fibrin glue. Each knee was then put through 2,000 cycles in an ex vivo continuous passive motion model. Before and after the 2,000 motions, standardized digital pictures of the grafts were taken. The area of worn surface as a percentage of the total collagen plug surface was evaluated using image analysis software. No total delamination of the scaffolds to leave an empty defect site was recorded in any of the knees. The two fixation techniques showed no significant difference in worn surface area after 2,000 cycles (P = n.s.). This study reveals that both the press-fit only and the press-fit + fibrin glue technique provide similar, adequate, stability of a type I collagen plug in the described porcine model. In the clinical setting, this fact may be particularly important for implantation of arthroscopic grafts.
OPEN CLUSTERS AS PROBES OF THE GALACTIC MAGNETIC FIELD. I. CLUSTER PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoq, Sadia; Clemens, D. P., E-mail: shoq@bu.edu, E-mail: clemens@bu.edu
2015-10-15
Stars in open clusters are powerful probes of the intervening Galactic magnetic field via background starlight polarimetry because they provide constraints on the magnetic field distances. We use 2MASS photometric data for a sample of 31 clusters in the outer Galaxy for which near-IR polarimetric data were obtained to determine the cluster distances, ages, and reddenings via fitting theoretical isochrones to cluster color–magnitude diagrams. The fitting approach uses an objective χ² minimization technique to derive the cluster properties and their uncertainties. We found the ages, distances, and reddenings for 24 of the clusters, and the distances and reddenings for 6 additional clusters that were either sparse or faint in the near-IR. The derived ranges of log(age), distance, and E(B−V) were 7.25–9.63, ∼670–6160 pc, and 0.02–1.46 mag, respectively. The distance uncertainties ranged from ∼8% to 20%. The derived parameters were compared to previous studies, and most cluster parameters agree within our uncertainties. To test the accuracy of the fitting technique, synthetic clusters with 50, 100, or 200 cluster members and a wide range of ages were fit. These tests recovered the input parameters within their uncertainties for more than 90% of the individual synthetic cluster parameters. These results indicate that the fitting technique likely provides reliable estimates of cluster properties. The distances derived will be used in an upcoming study of the Galactic magnetic field in the outer Galaxy.
Advanced fitness landscape analysis and the performance of memetic algorithms.
Merz, Peter
2004-01-01
Memetic algorithms (MAs) have proven very effective in combinatorial optimization. This paper offers explanations as to why this is so by investigating the performance of MAs in terms of efficiency and effectiveness. A special class of MAs is used to discuss efficiency and effectiveness for local search and evolutionary meta-search. It is shown that the efficiency of MAs can be increased drastically with the use of domain knowledge. However, effectiveness highly depends on the structure of the problem. As is well-known, identifying this structure is made easier with the notion of fitness landscapes: the local properties of the fitness landscape strongly influence the effectiveness of the local search while the global properties strongly influence the effectiveness of the evolutionary meta-search. This paper also introduces new techniques for analyzing the fitness landscapes of combinatorial problems; these techniques focus on the investigation of random walks in the fitness landscape starting at locally optimal solutions as well as on the escape from the basins of attraction of current local optima. It is shown for NK-landscapes and landscapes of the unconstrained binary quadratic programming problem (BQP) that a random walk to another local optimum can be used to explain the efficiency of recombination in comparison to mutation. Moreover, the paper shows that other aspects like the size of the basins of attraction of local optima are important for the efficiency of MAs and a local search escape analysis is proposed. These simple analysis techniques have several advantages over previously proposed statistical measures and provide valuable insight into the behaviour of MAs on different kinds of landscapes.
Liu, Xingguo; Niu, Jianwei; Ran, Linghua; Liu, Taijie
2017-08-01
This study aimed to develop estimation formulae for the total human body volume (BV) of adult males using anthropometric measurements based on a three-dimensional (3D) scanning technique. Noninvasive and reliable methods to predict the total BV from anthropometric measurements based on a 3D scan technique were addressed in detail. A regression analysis of BV based on four key measurements was conducted for approximately 160 adult male subjects. Eight models of total human BV show that the predicted results fitted by the regression models were highly correlated with the actual BV (p < 0.001). Two metrics, the mean absolute difference between the actual and predicted BV (V_error) and the mean ratio between V_error and the actual BV (RV_error), were calculated. The linear model based on human weight was recommended as optimal due to its simplicity and high efficiency. The proposed estimation formulae are valuable for estimating total body volume in circumstances in which traditional underwater weighing or air displacement plethysmography is not applicable or accessible.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Two-layer wireless distributed sensor/control network based on RF
NASA Astrophysics Data System (ADS)
Feng, Li; Lin, Yuchi; Zhou, Jingjing; Dong, Guimei; Xia, Guisuo
2006-11-01
A project of an embedded Wireless Distributed Sensor/Control Network (WDSCN) based on RF is presented after analyzing the disadvantages of traditional measurement and control systems. Because of their high cost and complexity, wireless techniques such as Bluetooth and WiFi cannot meet the needs of a WDSCN. The two-layer WDSCN is designed based on an RF technique that operates in the ISM free frequency channel with low power and high transmission speed. The network is also low cost, portable and movable, integrating the technologies of computer networks, sensors, microprocessors and wireless communications. A two-layer network topology is selected in the system; a simple but efficient self-organizing network protocol is designed to fit periodic data collection, event-driven operation and store-and-forward. Furthermore, an adaptive frequency-hopping technique is adopted for anti-jamming. The problems of power reduction and data synchronization in the wireless system are solved efficiently. Based on the discussion above, a measurement and control network is set up to control such typical instruments and sensors as a temperature sensor and a signal converter, collect data, and monitor the surrounding environmental parameters. This system works well in different rooms. Experimental results show that the system provides an efficient solution for WDSCNs through wireless links, with high efficiency, low power, high stability, flexibility and a wide working range.
Golestanirad, Laleh; Keil, Boris; Angelone, Leonardo M.; Bonmassar, Giorgio; Mareyam, Azma; Wald, Lawrence L.
2016-01-01
Purpose MRI of patients with deep brain stimulation (DBS) implants is strictly limited due to safety concerns, including high levels of local specific absorption rate (SAR) of radiofrequency (RF) fields near the implant and related RF-induced heating. This study demonstrates the feasibility of using a rotating linearly polarized birdcage transmitter and a 32-channel close-fit receive array to significantly reduce local SAR in MRI of DBS patients. Methods Electromagnetic simulations and phantom experiments were performed with generic DBS lead geometries and implantation paths. The technique was based on mechanically rotating a linear birdcage transmitter to align its zero electric-field region with the implant while using a close-fit receive array to significantly increase signal to noise ratio of the images. Results It was found that the zero electric-field region of the transmitter is thick enough at 1.5 Tesla to encompass DBS lead trajectories with wire segments that were up to 30 degrees out of plane, as well as leads with looped segments. Moreover, SAR reduction was not sensitive to tissue properties, and insertion of a close-fit 32-channel receive array did not degrade the SAR reduction performance. Conclusion The ensemble of rotating linear birdcage and 32-channel close-fit receive array introduces a promising technology for future improvement of imaging in patients with DBS implants. PMID:27059266
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. In such techniques, it is highly likely to observe some artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific functional form on the TEC distribution. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from the GNSS measurements of the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
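A minimal KDE-based sketch of the described pipeline is given below, using SciPy's Gaussian KDE; the bandwidth rule and grid size are illustrative, and the function name is an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

def tec_statistics(tec_values, grid_points=512):
    """Non-parametric pdf estimate of TEC plus its basic moments."""
    tec = np.asarray(tec_values, float)
    kde = gaussian_kde(tec)                       # bandwidth chosen by Scott's rule
    grid = np.linspace(tec.min(), tec.max(), grid_points)
    pdf = kde(grid)
    stats = {"mean": tec.mean(),
             "variance": tec.var(ddof=1),
             "kurtosis": kurtosis(tec, fisher=False)}
    return grid, pdf, stats
```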
Diagnostic Techniques to Elucidate the Aerodynamic Performance of Acoustic Liners
NASA Technical Reports Server (NTRS)
June, Jason; Bertolucci, Brandon; Ukeiley, Lawrence; Cattafesta, Louis N., III; Sheplak, Mark
2017-01-01
In support of Topic A.2.8 of NASA NRA NNH10ZEA001N, the University of Florida (UF) has investigated the use of flow field optical diagnostic and micromachined sensor-based techniques for assessing the wall shear stress on an acoustic liner. Stereoscopic particle image velocimetry (sPIV) was used to study the velocity field over a liner in the Grazing Flow Impedance Duct (GFID). The results indicate that the use of a control volume based method to determine the wall shear stress is prone to significant error. The skin friction over the liner as measured using velocity curve fitting techniques was shown to be locally reduced behind an orifice, relative to the hard wall case in a streamwise plane centered on the orifice. The capacitive wall shear stress sensor exhibited a linear response for a range of shear stresses over a hard wall. PIV over the liner is consistent with lifting of the near wall turbulent structure as it passes over an orifice, followed by a region of low wall shear stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qian; University of the Chinese Academy of Sciences, Beijing 100039; Li, Bincheng, E-mail: bcli@ioe.ac.cn
2015-09-28
A spatially resolved steady-state photocarrier radiometric (PCR) imaging technique is developed to characterize the electronic transport properties of silicon wafers. Based on a nonlinear PCR theory, simulations are performed to investigate the effects of electronic transport parameters (the carrier lifetime, the carrier diffusion coefficient, and the front surface recombination velocity) on the steady-state PCR intensity profiles. The electronic transport parameters of an n-type silicon wafer are simultaneously determined by fitting the measured steady-state PCR intensity profiles to the three-dimensional nonlinear PCR model. The determined transport parameters are in good agreement with the results obtained by the conventional modulated PCR technique with multiple pump beam radii.
Monitoring of bolted joints using piezoelectric active-sensing for aerospace applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Gyuhae; Farrar, Charles R; Park, Chan - Yik
2010-01-01
This paper is a report of an initial investigation into tracking and monitoring the integrity of bolted joints using piezoelectric active-sensors. The target application of this study is a fitting lug assembly of unmanned aerial vehicles (UAVs), where a composite wing is mounted to a UAV fuselage. The SHM methods deployed in this study are impedance-based SHM techniques, time-series analysis, and high-frequency response functions measured by piezoelectric active-sensors. Different types of simulated damage are introduced into the structure, and the capability of each technique is examined and compared. Additional considerations encountered in this initial investigation are made to guide further thorough research required for the successful field deployment of this technology.
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is possible and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated based on an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
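The integral-image (summed-area table) trick that makes the local clutter statistics cheap to compute can be sketched as follows; this is an illustration only, and a full CFAR detector would add the censoring loop and the G0 parameter estimates on top of these window sums.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy windowed sums."""
    ii = np.cumsum(np.cumsum(np.asarray(img, float), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def window_mean(ii, r0, c0, r1, c1):
    """Mean of img[r0:r1, c0:c1] in O(1) using the integral image."""
    total = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return total / ((r1 - r0) * (c1 - c0))
```

Because each local mean costs four lookups regardless of window size, the background statistics can be re-estimated cheaply at every censoring iteration.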
Order reduction for a model of marine bacteriophage evolution
NASA Astrophysics Data System (ADS)
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales that can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult. Reducing the order of a model is highly desirable when handling such a model. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique. Constructing the so-called quasi-steady-state approximation is the usual first step in applying the technique. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation produces only a qualitative, not a quantitative, fit.
Efficient Power Network Analysis with Modeling of Inductive Effects
NASA Astrophysics Data System (ADS)
Zeng, Shan; Yu, Wenjian; Hong, Xianlong; Cheng, Chung-Kuan
In this paper, an efficient method is proposed to accurately analyze large-scale power/ground (P/G) networks, where inductive parasitics are modeled with partial reluctances. The method is based on frequency-domain circuit analysis and the technique of vector fitting [14], and obtains the time-domain voltage response at given P/G nodes. The frequency-domain circuit equation including partial reluctances is derived, and then solved with the GMRES algorithm with rescaling, preconditioning and recycling techniques. Owing to the sparsified reluctance matrix and the iterative solving techniques for the frequency-domain circuit equations, the proposed method is able to handle large-scale P/G networks with complete inductive modeling. Numerical results show that the proposed method is orders of magnitude faster than HSPICE, several times faster than INDUCTWISE [4], and capable of handling inductive P/G structures with more than 100,000 wire segments.
NASA Astrophysics Data System (ADS)
Kang, Sung-Ju; Kerton, C. R.
2014-01-01
KR 120 (Sh2-187) is a small Galactic HII region located at a distance of 1.4 kpc that shows evidence for triggered star formation in the surrounding molecular cloud. We present an analysis of the young stellar object (YSO) population of the molecular cloud as determined using a variety of classification techniques. YSO candidates are selected from the WISE all sky catalog and classified as Class I, Class II and Flat based on 1) spectral index, 2) color-color or color-magnitude plots, and 3) spectral energy distribution (SED) fits to radiative transfer models. We examine the discrepancies in YSO classification between the various techniques and explore how these discrepancies lead to uncertainty in such scientifically interesting quantities as the ratio of Class I/Class II sources and the surface density of YSOs at various stages of evolution.
Four-dimensional modeling of recent vertical movements in the area of the southern California uplift
Vanicek, Petr; Elliot, Michael R.; Castle, Robert O.
1979-01-01
This paper describes an analytical technique that utilizes scattered geodetic relevelings and tide-gauge records to portray Recent vertical crustal movements that may have been characterized by spasmodic changes in velocity. The technique is based on the fitting of a time-varying algebraic surface of prescribed degree to the geodetic data treated as tilt elements and to tide-gauge readings treated as point movements. Desired variations in time can be selected as any combination of powers of vertical movement velocity and episodic events. The state of the modeled vertical displacement can be shown for any number of dates for visual display. Statistical confidence limits of the modeled displacements, derived from the density of measurements in both space and time, line length, and accuracy of input data, are also provided. The capabilities of the technique are demonstrated on selected data from the region of the southern California uplift.
A New Compression Method for FITS Tables
NASA Technical Reports Server (NTRS)
Pence, William; Seaman, Rob; White, Richard L.
2010-01-01
As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
Pedata, Paola; Corvino, Anna Rita; Napolitano, Raffaele Carmine; Garzillo, Elpidio Maria; Furfaro, Ciro; Lamberti, Monica
2016-01-20
For many years now, thanks to the development of modern diving techniques, there has been a rapid spread of diving activities everywhere. In fact, divers are ever more numerous both among the Armed Forces and among civilians who dive for work, such as fishing, biological research and archeology. The aim of the study was to propose a health protocol for work fitness of professional divers, keeping in mind the peculiar nature of the work, the existing Italian legislation, which is almost out of date, and the technical and scientific evolution in this occupational field. We performed an analysis of the most frequently occurring diseases among professional divers and of the clinical investigation and imaging techniques used for work fitness assessment of professional divers. From analysis of the health protocol recommended by D.M. 13 January 1979 (Ministerial Decree), which is the one most used by occupational health physicians, several critical issues emerged. Very often the clinical investigation and imaging techniques still used are almost obsolete, ignoring simple and inexpensive investigations that are more useful for work fitness assessment. Considering the out-dated legislation concerning diving disciplines, it is necessary to draw up a common health protocol that takes into account the clinical and scientific knowledge and skills acquired in this area. This protocol aims to provide a useful tool for occupational health physicians who work in this sector.
NASA Astrophysics Data System (ADS)
Hao, Ming; Rohrdantz, Christian; Janetzko, Halldór; Keim, Daniel; Dayal, Umeshwar; Haug, Lars-Erik; Hsu, Mei-Chun
2012-01-01
Twitter currently receives over 190 million tweets (small text-based Web posts) and manufacturing companies receive over 10 thousand web product surveys a day, in which people share their thoughts regarding a wide range of products and their features. A large number of tweets and customer surveys include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for determining customer sentiments. To explore high-volume customer feedback streams, we integrate three time series-based visual analysis techniques: (1) feature-based sentiment analysis that extracts, measures, and maps customer feedback; (2) a novel idea of term associations that identify attributes, verbs, and adjectives frequently occurring together; and (3) new pixel cell-based sentiment calendars, geo-temporal map visualizations and self-organizing maps to identify co-occurring and influential opinions. We have combined these techniques into a well-fitted solution for an effective analysis of large customer feedback streams such as for movie reviews (e.g., Kung-Fu Panda) or web surveys (buyers).
Teaching professional boundaries to psychiatric residents.
Gabbard, Glen O; Crisp-Han, Holly
2010-01-01
The authors demonstrate that the teaching of professional boundaries in psychiatry is an essential component of training to prevent harm to patients and to the profession. The authors illustrate overarching principles that apply to didactic teaching in seminars and to psychotherapy supervision. The teaching of boundaries must be based in sound clinical theory and technique so that transference, countertransference, and frame theory are seen as interwoven with the concept of boundaries and must use case-based learning so that a "one-size-fits-all" approach is avoided. The emphasis in teaching should be on both the clinician's temptations and the management of the patient's wish to transgress therapeutic boundaries.
A novel load balanced energy conservation approach in WSN using biogeography based optimization
NASA Astrophysics Data System (ADS)
Kaushik, Ajay; Indu, S.; Gupta, Daya
2017-09-01
Clustering sensor nodes is an effective technique to reduce the energy consumption of the sensor nodes and maximize the lifetime of wireless sensor networks (WSNs). Balancing the load of the cluster heads is an important factor in the long-run operation of WSNs. In this paper, we propose a novel load-balancing approach using biogeography-based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods. The proposed method shows better performance than all previous works implemented for energy conservation in WSNs.
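The abstract does not give the two fitness functions, so the sketch below only illustrates the general idea under assumed definitions: one candidate objective penalizes the imbalance of cluster-head member counts (equal-load case), and the other additionally weights the assigned load by each cluster head's residual energy (unequal-load case). The full BBO loop (migration and mutation over habitats) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N_NODES, N_HEADS = 100, 5
node_load = rng.uniform(0.5, 1.5, N_NODES)    # assumed per-node traffic load
head_energy = rng.uniform(0.2, 1.0, N_HEADS)  # assumed residual energy of cluster heads

def fitness_equal_load(assignment):
    """Candidate fitness for equal loads: variance of cluster-head member counts."""
    counts = np.bincount(assignment, minlength=N_HEADS)
    return counts.var()

def fitness_unequal_load(assignment):
    """Candidate fitness for unequal loads: variance of energy-normalized head loads."""
    loads = np.bincount(assignment, weights=node_load, minlength=N_HEADS)
    return (loads / head_energy).var()

# A BBO "habitat" is one candidate assignment of nodes to cluster heads;
# lower fitness values here mean a better-balanced network.
habitat = rng.integers(0, N_HEADS, N_NODES)
print(fitness_equal_load(habitat), fitness_unequal_load(habitat))
```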
2003-09-11
KENNEDY SPACE CENTER, FLA. - Jeff Thon, an SRB mechanic with United Space Alliance, is fitted with a harness to test a vertical solid rocket booster propellant grain inspection technique. Thon will be lowered inside a mockup of two segments of the SRBs. The inspection of segments is required as part of safety analysis.
a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques suffer from low measurement accuracy, complicated circuit structure and large errors, and cannot provide high-precision time interval data. In order to obtain higher-quality remote sensing cloud images based on the time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor and simultaneously sampling the change of the capacitor voltage. First, an approximate model of the capacitor voltage curve during the pulse's time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
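As a rough illustration of the charging-and-fitting idea (the actual circuit model and fitting function are not given in the abstract), the sketch below assumes a first-order RC charging curve, fits it to noisy voltage samples taken around the pulse's time of flight, and recovers the charging duration as a fitted parameter. All component values and times are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

TAU = 50.0   # assumed RC time constant (ns)
V0 = 3.3     # assumed supply voltage (V)

def charge(t, t0, interval):
    """Capacitor voltage vs. time (ns): charging starts at t0 and stops after `interval`."""
    tc = np.clip(t - t0, 0.0, interval)
    return V0 * (1.0 - np.exp(-tc / TAU))

# Simulated high-speed ADC samples covering the charging window (times in ns).
t = np.linspace(0.0, 400.0, 200)
true_t0, true_interval = 60.0, 120.0
v = charge(t, true_t0, true_interval) + np.random.default_rng(0).normal(0, 0.005, t.size)

popt, _ = curve_fit(charge, t, v, p0=[50.0, 100.0])
print(f"estimated interval: {popt[1]:.2f} ns (true {true_interval:.0f} ns)")
```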
Wedge, David C; Rowe, William; Kell, Douglas B; Knowles, Joshua
2009-03-07
We model the process of directed evolution (DE) in silico using genetic algorithms. Making use of the NK fitness landscape model, we analyse the effects of mutation rate, crossover and selection pressure on the performance of DE. A range of values of K, the epistatic interaction of the landscape, are considered, and high- and low-throughput modes of evolution are compared. Our findings suggest that for runs of or around ten generations' duration-as is typical in DE-there is little difference between the way in which DE needs to be configured in the high- and low-throughput regimes, nor across different degrees of landscape epistasis. In all cases, a high selection pressure (but not an extreme one) combined with a moderately high mutation rate works best, while crossover provides some benefit but only on the less rugged landscapes. These genetic algorithms were also compared with a "model-based approach" from the literature, which uses sequential fixing of the problem parameters based on fitting a linear model. Overall, we find that purely evolutionary techniques fare better than do model-based approaches across all but the smoothest landscapes.
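A minimal NK-landscape and mutation-selection loop, assuming standard definitions (N loci, each locus's fitness contribution depending on itself and K randomly chosen neighbours); the population size, mutation rate and generation count below are illustrative, not the values studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 4                    # sequence length and epistasis (assumed values)
POP, GENS, MU = 50, 10, 0.05    # population size, generations, per-locus mutation rate

# Random epistatic neighbourhoods and contribution tables of the NK model.
neigh = np.array([rng.choice([j for j in range(N) if j != i], K, replace=False)
                  for i in range(N)])
tables = rng.random((N, 2 ** (K + 1)))

def fitness(genome):
    """Mean of per-locus contributions, each indexed by the locus and its K neighbours."""
    total = 0.0
    for i in range(N):
        bits = np.concatenate(([genome[i]], genome[neigh[i]]))
        total += tables[i, int("".join(map(str, bits)), 2)]
    return total / N

pop = rng.integers(0, 2, (POP, N))
for g in range(GENS):
    fits = np.array([fitness(ind) for ind in pop])
    # Truncation selection (strong but not extreme pressure) plus point mutation.
    parents = pop[np.argsort(fits)[-POP // 2:]]
    children = parents[rng.integers(0, len(parents), POP)]
    pop = np.where(rng.random((POP, N)) < MU, 1 - children, children)
print("best fitness after", GENS, "generations:", max(fitness(ind) for ind in pop))
```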
NASA Astrophysics Data System (ADS)
Belyaev, M. Yu.; Volkov, O. N.; Monakhov, M. I.; Sazonov, V. V.
2017-09-01
The paper has studied the accuracy of the technique that allows the rotational motion of artificial Earth satellites (AES) to be reconstructed based on onboard measurements of the angular velocity vector and the Earth's magnetic field (EMF) strength. The technique is based on the kinematic equations of the rotational motion of a rigid body. Both types of measurement data collected over some time interval have been processed jointly. The angular velocity measurements have been approximated using convenient formulas, which are substituted into the kinematic differential equations for the quaternion that specifies the transition from the body-fixed coordinate system of a satellite to the inertial coordinate system. The equations thus obtained represent a kinematic model of the rotational motion of a satellite. The solution of these equations, which approximates the real motion, has been found by the least-squares method from the condition of best fitting between the measured EMF strength vector and its calculated values. The accuracy of the technique has been estimated by processing data obtained on board the service module of the International Space Station (ISS). The reconstruction of the station motion using the aforementioned technique has been compared with the telemetry data on the actual motion of the station. The technique has allowed us to reconstruct the station motion in the orbital orientation mode with a maximum error of less than 0.6° and the turns with a maximum error of less than 1.2°.
Absolute irradiance of the Moon for on-orbit calibration
Stone, T.C.; Kieffer, H.H.; ,
2002-01-01
The recognized need for on-orbit calibration of remote sensing imaging instruments drives the ROLO project effort to characterize the Moon for use as an absolute radiance source. For over 5 years the ground-based ROLO telescopes have acquired spatially-resolved lunar images in 23 VNIR (Moon diameter ≈500 pixels) and 9 SWIR (≈250 pixels) passbands at phase angles within ±90 degrees. A numerical model for lunar irradiance has been developed which fits hundreds of ROLO images in each band, corrected for atmospheric extinction and calibrated to absolute radiance, then integrated to irradiance. The band-coupled extinction algorithm uses absorption spectra of several gases and aerosols derived from MODTRAN to fit time-dependent component abundances to nightly observations of standard stars. The absolute radiance scale is based upon independent telescopic measurements of the star Vega. The fitting process yields uncertainties in lunar relative irradiance over small ranges of phase angle and the full range of lunar libration well under 0.5%. A larger source of uncertainty enters in the absolute solar spectral irradiance, especially in the SWIR, where solar models disagree by up to 6%. Results of ROLO model direct comparisons to spacecraft observations demonstrate the ability of the technique to track sensor responsivity drifts to sub-percent precision. Intercomparisons among instruments provide key insights into both calibration issues and the absolute scale for lunar irradiance.
Mineral and Geochemical Classification From Spectroscopy/Diffraction Through Neural Networks
NASA Astrophysics Data System (ADS)
Ferralis, N.; Grossman, J.; Summons, R. E.
2017-12-01
Spectroscopy and diffraction techniques are essential for understanding the structural, chemical and functional properties of geological materials in the Earth and planetary sciences. Beyond data collection, quantitative insight relies on experimentally assembled or computationally derived spectra. Inference of the geochemical or geophysical properties (such as crystallographic order, chemical functionality, elemental composition, etc.) of a particular geological material (mineral, organic matter, etc.) is based on fitting unknown spectra and comparing the fit with consolidated databases. The complexity of fitting highly convoluted spectra often limits the ability to infer geochemical characteristics and limits the throughput for extensive datasets. With the emergence of heuristic approaches to pattern recognition through machine learning, in this work we investigate the possibility and potential of using supervised neural networks trained on publicly available spectroscopic databases to directly infer geochemical parameters from unknown spectra. Using Raman and infrared spectroscopy and powder X-ray diffraction from the publicly available RRUFF database, we train neural network models to classify mineral and organic compounds (pure or mixtures) based on crystallographic structure from diffraction, and on chemical functionality, elemental composition and bonding from spectroscopy. As expected, the accuracy of the inference is strongly dependent on the quality and extent of the training data. We will identify a series of requirements and guidelines for the training dataset needed to achieve consistently high-accuracy inference, along with methods to compensate for limited data.
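A toy version of the supervised-classification idea, with synthetic "spectra" standing in for the RRUFF data: each of three hypothetical classes is a Gaussian peak at a characteristic position plus noise, and a small multilayer perceptron is trained to recover the class label directly from the spectrum.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavenumber = np.linspace(0, 1, 200)
peak_positions = [0.2, 0.5, 0.8]   # assumed characteristic bands of 3 "minerals"

def synth_spectrum(cls):
    """Synthetic spectrum: a Gaussian band for the class plus measurement noise."""
    spec = np.exp(-((wavenumber - peak_positions[cls]) / 0.02) ** 2)
    return spec + rng.normal(0, 0.05, wavenumber.size)

labels = rng.integers(0, 3, 600)
X = np.array([synth_spectrum(c) for c in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```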
Local effects of redundant terrestrial and GPS-based tie vectors in ITRF-like combinations
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Sarti, Pierguido; Negusini, Monia; Vittuari, Luca
2009-11-01
Tie vectors (TVs) between co-located space geodetic instruments are essential for combining terrestrial reference frames (TRFs) realised using different techniques. They provide relative positioning between instrumental reference points (RPs) which are part of a global geodetic network such as the international terrestrial reference frame (ITRF). This paper gathers the set of very long baseline interferometry (VLBI)-global positioning system (GPS) local ties performed at the observatory of Medicina (Northern Italy) during the years 2001-2006 and discusses some important aspects related to the usage of co-location ties in the combinations of TRFs. Two measurement approaches of local survey are considered here: a GPS-based approach and a classical approach based on terrestrial observations (i.e. angles, distances and height differences). The behaviour of terrestrial local ties, which routinely join combinations of space geodetic solutions, is compared to that of GPS-based local ties. In particular, we have performed and analysed different combinations of satellite laser ranging (SLR), VLBI and GPS long term solutions in order to (i) evaluate the local effects of the insertion of the series of TVs computed at Medicina, (ii) investigate the consistency of GPS-based TVs with respect to space geodetic solutions, (iii) discuss the effects of an imprecise alignment of TVs from a local to a global reference frame. Results of ITRF-like combinations show that terrestrial TVs originate the smallest residuals in all the three components. In most cases, GPS-based TVs fit space geodetic solutions very well, especially in the horizontal components (N, E). On the contrary, the estimation of the VLBI RP Up component through GPS technique appears to be awkward, since the corresponding post fit residuals are considerably larger. Besides, combination tests including multi-temporal TVs display local effects of residual redistribution, when compared to those solutions where Medicina TVs are added one at a time. Finally, the combination of TRFs turns out to be sensitive to the orientation of the local tie into the global frame.
Activity Book. Fitting in Fitness: An Integrated Approach to Health, Nutrition, and Exercise.
ERIC Educational Resources Information Center
Fisher, Bruce; And Others
1991-01-01
This integrated unit focuses on healthy hearts and how to use exercise and nutrition to be "heart smart" for life. The unit includes activities on heart function, exercise, workouts, family activities, nutrition, cholesterol, and food labels. The activities help develop research techniques, thinking skills, and cooperative learning…
Effect of a Storyboarding Technique on Selected Measures of Fitness among University Employees
ERIC Educational Resources Information Center
Anshel, Mark H.; Sutarso, Toto
2010-01-01
The purpose of this study was to determine the effectiveness of storyboarding (i.e., participants' written narrative) on improving fitness among university employees over 10 weeks. Groups consisted of storytelling during the program orientation, storytelling plus two coaching sessions, or the normal program only (control). Using difference…
Impact of Missing Data on Person-Model Fit and Person Trait Estimation
ERIC Educational Resources Information Center
Zhang, Bo; Walker, Cindy M.
2008-01-01
The purpose of this research was to examine the effects of missing data on person-model fit and person trait estimation in tests with dichotomous items. Under the missing-completely-at-random framework, four missing data treatment techniques were investigated including pairwise deletion, coding missing responses as incorrect, hotdeck imputation,…
Zhao, D; Campos, D; Yan, Y; Kimple, R; Jacques, S; van der Kogel, A; Kissick, M
2012-06-01
To demonstrate a novel interstitial optical fiber spectroscopic system, based on diffuse optical spectroscopies with spectral fitting, for the simultaneous monitoring of tumor blood volume and oxygen tension. The technique provides real-time, minimally invasive quantification of tissue microvascular hemodynamics. An optical fiber prototype probe characterizes the optical transport in tissue between two large numerical aperture (NA) fibers of 200 μm core diameter (BFH37-200, ThorLabs) spaced 3 mm apart. Two 21-Ga medical needles are used to protect the fiber ends and to facilitate tissue penetration with minimal local blunt trauma in nude mice with xenografts. A 20 W white light source (HL-2000-HP, Ocean Optics) is coupled to one fiber with an SMA adapter. The other fiber is used to collect light, which is coupled into the spectrometer (QE65000 with SpectraSuite operating software and OmniDriver, Ocean Optics). The wavelength response of the probe depends on the wavelength dependence of the light source and of the light signal collection, which includes considerable scatter, modeled with Monte Carlo techniques (S. Jacques 2010 J. of Innov. Opt. Health Sci. 2 123-9). Measured spectra of tissue are normalized by a measured spectrum of a white standard, yielding the transmission spectrum. A head-and-neck xenograft on the flank of a live mouse is used for development. The optical fiber probe delivers and collects light at an arbitrary depth in the tumor. By spectral fitting of the measured transmission spectrum, an analysis of blood volume and oxygen tension is obtained from the fitting parameters in real time. The newly developed optical fiber spectroscopic system with an optical fiber probe takes spectroscopic techniques to a much deeper level in a tumor, with potential applications for real-time monitoring of hypoxic cell population dynamics as an eventual adaptive-therapy metric of particular use in hypofractionated radiotherapy. © 2012 American Association of Physicists in Medicine.
Arora, Aman; Yadav, Avneet; Upadhyaya, Viram; Jain, Prachi; Verma, Mrinalini
2018-01-01
The purpose of this study was to compare the marginal and internal adaptation of cobalt-chromium (Co-Cr) copings fabricated from a conventional wax pattern, a three-dimensional (3D)-printed resin pattern, and a laser sintering technique. A total of thirty copings were made: ten from a 3D-printed resin pattern (Group A), ten from an inlay wax pattern (Group B), and ten obtained by the direct metal laser sintering (DMLS) technique (Group C). All thirty samples were seated on their respective dies, sectioned carefully using a laser jet cutter, and evaluated for marginal and internal gaps at predetermined areas using a stereomicroscope. The values were then analyzed using a one-way ANOVA test and a post hoc Bonferroni test. One-way ANOVA showed the lowest mean marginal discrepancy for DMLS and the highest value for copings fabricated from inlay wax. The values for internal discrepancy were highest for DMLS (169.38) and lowest for copings fabricated from the 3D-printed resin pattern (133.87). The post hoc Bonferroni test for both marginal and internal discrepancies showed a nonsignificant difference when Group A was compared with Group B (P > 0.05) and a significant difference when Group A was compared with Group C (P < 0.05). Group B showed a significant difference (P < 0.05) when compared with Group C. Marginal and internal discrepancies of all three techniques were within clinically acceptable values. The marginal fit of DMLS was superior to that of the other two techniques, whereas the conventional technique showed the best internal fit.
Quantifying cell turnover using CFSE data.
Ganusov, Vitaly V; Pilyugin, Sergei S; de Boer, Rob J; Murali-Krishna, Kaja; Ahmed, Rafi; Antia, Rustom
2005-03-01
The CFSE dye dilution assay is widely used to determine the number of divisions a given CFSE labelled cell has undergone in vitro and in vivo. In this paper, we consider how the data obtained with the use of CFSE (CFSE data) can be used to estimate the parameters determining cell division and death. For a homogeneous cell population (i.e., a population with the parameters for cell division and death being independent of time and the number of divisions cells have undergone), we consider a specific biologically based "Smith-Martin" model of cell turnover and analyze three different techniques for estimation of its parameters: direct fitting, indirect fitting and rescaling method. We find that using only CFSE data, the duration of the division phase (i.e., approximately the S+G2+M phase of the cell cycle) can be estimated with the use of either technique. In some cases, the average division or cell cycle time can be estimated using the direct fitting of the model solution to the data or by using the Gett-Hodgkin method [Gett A. and Hodgkin, P. 2000. A cellular calculus for signal integration by T cells. Nat. Immunol. 1:239-244]. Estimation of the death rates during commitment to division (i.e., approximately the G1 phase of the cell cycle) and during the division phase may not be feasible with the use of only CFSE data. We propose that measuring an additional parameter, the fraction of cells in division, may allow estimation of all model parameters including the death rates during different stages of the cell cycle.
Unimodular sequence design under frequency hopping communication compatibility requirements
NASA Astrophysics Data System (ADS)
Ge, Peng; Cui, Guolong; Kong, Lingjiang; Yang, Jianyu
2016-12-01
The integrated design of radar and anonymous communication has drawn increased attention recently, since wireless communication systems seek to enhance security and reliability. Given a frequency hopping (FH) communication system, an effective way to realize the integrated design is to meet the spectrum compatibility requirement between these two systems. The paper deals with a unimodular sequence design technique that jointly optimizes the spectrum compatibility and the peak sidelobe levels (PSL) of the auto-correlation function (ACF). The spectrum compatibility requirement realizes anonymous communication for the FH system and provides it with a lower probability of intercept (LPI), since the spectrum of the FH system is hidden within that of the radar system. The proposed algorithm, named the generalized fitting template (GFT) technique, converts the sequence design optimization problem into an iterative fitting process. In this process, the power spectral density (PSD) and PSL behaviors of the generated sequences progressively fit both a PSD and a PSL template. The two templates are established based on the spectrum compatibility requirement and the expected PSL. To ensure communication security and reliability, the spectrum compatibility requirement is given higher priority in the GFT algorithm, which is achieved by adaptively adjusting the weight between these two terms during the iteration process. The simulation results are analyzed in terms of bit error rate (BER), PSD, PSL, and signal-to-interference ratio (SIR) for both the radar and FH systems. The performance of GFT is compared with the SCAN, CAN, FRE, CYC, and MAT algorithms in the above respects, which shows its good effectiveness.
Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique
NASA Astrophysics Data System (ADS)
Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang
2017-04-01
The purpose of this study is to assess sediment discharge for rivers in South Korea using data mining. The Model Tree was selected because, among data mining techniques, it is the most suitable for explicitly analyzing the relationship between input and output variables in large and diverse databases. To derive the sediment discharge equation with the Model Tree, the dimensionless variables used in the Engelund and Hansen, Ackers and White, Brownlie, and van Rijn equations were adopted as analytical conditions. In addition, a total of 14 analytical conditions were set up, considering dimensional variables as well as combinations of dimensionless and dimensional variables, according to the relationship between flow and sediment transport. For each case, the results were evaluated in terms of mean discrepancy ratio, root mean square error, mean absolute percent error, and correlation coefficient. The results showed that the best fit was obtained using five dimensional variables: velocity, depth, slope, width and median grain diameter. The closest approximation to this best fit was obtained from the depth, slope, width, median grain size of the bed material and the dimensionless tractive force, as well as from single variables, with the exception of the slope. In addition, the three most appropriate Model Trees were compared with the Ackers and White equation, which is the best fit among the existing equations; the mean discrepancy ratio and the correlation coefficient of the Model Trees are improved compared with the Ackers and White equation.
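scikit-learn has no M5-style model tree, so the sketch below substitutes an ordinary regression tree trained on the five dimensional variables reported above as the best-fitting input set (velocity, depth, slope, width, median grain size); the data are synthetic placeholders, not the Korean river records, and the power-law target is only a plausible stand-in.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical hydraulic variables: velocity (m/s), depth (m), slope (-), width (m), d50 (mm).
X = np.column_stack([
    rng.uniform(0.2, 3.0, n),
    rng.uniform(0.3, 5.0, n),
    rng.uniform(1e-4, 5e-3, n),
    rng.uniform(5, 200, n),
    rng.uniform(0.1, 10.0, n),
])
# Placeholder sediment discharge with a loose power-law dependence plus noise.
q_s = (X[:, 0] ** 3) * (X[:, 2] ** 0.5) * X[:, 3] * np.exp(rng.normal(0, 0.3, n))

tree = DecisionTreeRegressor(max_depth=5).fit(X, q_s)
pred = tree.predict(X)
discrepancy_ratio = pred / q_s   # one of the goodness-of-fit measures named above
print("mean discrepancy ratio:", discrepancy_ratio.mean(),
      "correlation:", np.corrcoef(pred, q_s)[0, 1])
```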
TEMPy: a Python library for assessment of three-dimensional electron microscopy density fits.
Farabella, Irene; Vasishtan, Daven; Joseph, Agnel Praveen; Pandurangan, Arun Prasad; Sahota, Harpal; Topf, Maya
2015-08-01
Three-dimensional electron microscopy is currently one of the most promising techniques used to study macromolecular assemblies. Rigid and flexible fitting of atomic models into density maps is often essential to gain further insights into the assemblies they represent. Currently, tools that facilitate the assessment of fitted atomic models and maps are needed. TEMPy (template and electron microscopy comparison using Python) is a toolkit designed for this purpose. The library includes a set of methods to assess density fits in intermediate-to-low resolution maps, both globally and locally. It also provides procedures for single-fit assessment, ensemble generation of fits, clustering, and multiple and consensus scoring, as well as plots and output files for visualization purposes to help the user in analysing rigid and flexible fits. The modular nature of TEMPy helps the integration of scoring and assessment of fits into large pipelines, making it a tool suitable for both novice and expert structural biologists.
A new contrast-assisted method in microcirculation volumetric flow assessment
NASA Astrophysics Data System (ADS)
Lu, Sheng-Yi; Chen, Yung-Sheng; Yeh, Chih-Kuang
2007-03-01
Microcirculation volumetric flow rate is a significant index in the diagnosis and treatment of diseases such as diabetes and cancer. In this study, we propose an integrated algorithm to assess microcirculation volumetric flow rate, including estimation of the blood-perfused area and the corresponding flow velocity maps, based on a high-frequency destruction/contrast replenishment imaging technique. The perfused area indicates the blood flow regions, including capillaries, arterioles and venules. Because of the echo variance between the two images acquired before and after destruction of the ultrasonic contrast agents (UCAs), the perfused area can be estimated by a correlation-based approach. The flow velocity distribution within the perfused area can be estimated from the refilling time-intensity curves (TICs) after UCA destruction. Most studies use the rising exponential model proposed by Wei (1998) to fit the TICs. Nevertheless, we found that the TIC profile closely resembles a sigmoid function in simulations and in vitro experiments. The good fitting correlation indicates that the sigmoid model more faithfully describes the destruction/contrast replenishment phenomenon. We derived that the saddle point of the sigmoid model is proportional to blood flow velocity. A strong linear relationship (R = 0.97) between the actual flow velocities (0.4-2.1 mm/s) and the estimated saddle constants was found in M-mode and B-mode flow phantom experiments. Potential applications of this technique include high-resolution volumetric flow rate assessment in small-animal tumors and the evaluation of superficial vasculature in clinical studies.
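A hedged sketch of the sigmoid refilling model: a logistic function with an assumed parameterization is fitted to a synthetic time-intensity curve, and its inflection ("saddle") time is read off from the fitted parameters, the quantity the study relates to flow velocity. The exact functional form and constants used in the study are not given in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, A, k, t0):
    """Logistic refilling model A / (1 + exp(-k (t - t0))); t0 is the inflection (saddle) time."""
    return A / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)                       # seconds after UCA destruction
tic = sigmoid(t, 1.0, 1.2, 4.0) + rng.normal(0, 0.02, t.size)  # synthetic TIC

popt, _ = curve_fit(sigmoid, t, tic, p0=[1.0, 1.0, 5.0])
A, k, t0 = popt
print(f"fitted saddle time t0 = {t0:.2f} s (earlier saddle implies faster replenishment)")
```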
A Web of applicant attraction: person-organization fit in the context of Web-based recruitment.
Dineen, Brian R; Ash, Steven R; Noe, Raymond A
2002-08-01
Applicant attraction was examined in the context of Web-based recruitment. A person-organization (P-O) fit framework was adopted to examine how the provision of feedback to individuals regarding their potential P-O fit with an organization related to attraction. Objective and subjective P-O fit, agreement with fit feedback, and self-esteem also were examined in relation to attraction. Results of an experiment that manipulated fit feedback level after a self-assessment provided by a fictitious company Web site found that both feedback level and objective P-O fit were positively related to attraction. These relationships were fully mediated by subjective P-O fit. In addition, attraction was related to the interaction of objective fit, feedback, and agreement and objective fit, feedback, and self-esteem. Implications and future Web-based recruitment research directions are discussed.
Massively parallel support for a case-based planning system
NASA Technical Reports Server (NTRS)
Kettler, Brian P.; Hendler, James A.; Anderson, William A.
1993-01-01
Case-based planning (CBP), a kind of case-based reasoning, is a technique in which previously generated plans (cases) are stored in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over generative planning, in which a new plan is produced from scratch. CBP thus offers a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory to reduce retrieval times. This approach requires significant domain engineering and complex memory indexing schemes to make these planners efficient. In contrast, our CBP system, CaPER, uses a massively parallel frame-based AI language (PARKA) and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large case bases can be used; memory can be probed in numerous alternate ways; and queries can be made at several levels, allowing more specific retrieval of stored plans that better fit the target problem with less adaptation. In this paper we describe CaPER's case retrieval techniques and some experimental results showing its good performance, even on large case bases.
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced Least-Square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that responds to the variations of the variables, the convergence of the Least-Square fitting is significantly enhanced. Thus the fitting speed can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through Least-Square fitting of the phase advances, the local Green's functions, and the coupling ellipses among BPMs. The local Green's functions are specified by 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
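The core numerical idea, solving a least-squares problem through a truncated SVD of the response (derivative) matrix and keeping only the dominant modes, can be sketched as follows; the matrix here is random, standing in for the derivative of the measured optics quantities with respect to the fitted variables, and the cutoff is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50                       # measurements x fit variables (illustrative sizes)
A = rng.normal(size=(m, n))          # derivative (response) matrix
x_true = rng.normal(size=n)
b = A @ x_true + rng.normal(0, 0.01, m)   # measured deviations with noise

def svd_least_squares(A, b, rel_cutoff=1e-3):
    """Solve min ||A x - b|| keeping only SVD modes above a relative singular-value cutoff."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_cutoff * s[0]               # dominant-mode selection
    inv_s = np.where(keep, 1.0 / s, 0.0)       # discard poorly conditioned modes
    return Vt.T @ (inv_s * (U.T @ b))

x_fit = svd_least_squares(A, b)
print("max parameter error:", np.max(np.abs(x_fit - x_true)))
```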
Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy
NASA Astrophysics Data System (ADS)
Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.
2018-03-01
By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy over the obtained undercooling range was studied. With increasing undercooling, a transition of the cooling curves was detected from one recalescence to two recalescences, and then back to one recalescence. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated by the multi-logistic growth model and the Boettinger-Coriell-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitting results were consistent with the microstructural evolution observed by TEM (SAED), SEM and XRD. Finally, the relationship between the microstructure and hardness was also investigated.
Lathdavong, Lemthong; Shao, Jie; Kluczynski, Pawel; Lundqvist, Stefan; Axner, Ove
2011-06-10
Detection of carbon monoxide (CO) in combustion gases by tunable diode laser spectrometry is often hampered by spectral interferences from H2O and CO2. A methodology for assessment of CO in hot, humid media using telecommunication distributed feedback lasers is presented. By addressing the R14 line at 6395.4 cm(-1), and by using a dual-species-fitting technique that incorporates the fitting of both a previously measured water background reference spectrum and a 2f-wavelength modulation lineshape function, percent-level concentrations of CO can be detected in media with tens of percent of water (c(H2O)≤40%) at T≤1000 °C with an accuracy of a few percent by the use of a single reference water spectrum for background correction.
The Phasor Approach to Fluorescence Lifetime Imaging Analysis
Digman, Michelle A.; Caiolfa, Valeria R.; Zamai, Moreno; Gratton, Enrico
2008-01-01
Changing the data representation from the classical time delay histogram to the phasor representation provides a global view of the fluorescence decay at each pixel of an image. In the phasor representation we can easily recognize the presence of different molecular species in a pixel or the occurrence of fluorescence resonance energy transfer. The analysis of the fluorescence lifetime imaging microscopy (FLIM) data in the phasor space is done by observing the clustering of pixel values in specific regions of the phasor plot rather than by fitting the fluorescence decay using exponentials. The analysis is instantaneous since it is not based on calculations or nonlinear fitting. The phasor approach has the potential to simplify the way data are analyzed in FLIM, paving the way for the analysis of large data sets and, in general, making the FLIM technique accessible to nonexperts in spectroscopy and data analysis. PMID:17981902
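The phasor coordinates themselves are simple to compute: for each pixel's decay I(t), one commonly takes g = Σ I cos(2π f t) / Σ I and s = Σ I sin(2π f t) / Σ I at the laser repetition (or harmonic) frequency f, and single-exponential decays then fall on the universal semicircle. The sketch below uses synthetic single-exponential decays and an assumed 80 MHz repetition frequency.

```python
import numpy as np

F = 80e6                          # assumed repetition/modulation frequency (Hz)
t = np.linspace(0, 12.5e-9, 256)  # time bins of the decay histogram (s)

def phasor(decay, t=t, f=F):
    """First-harmonic phasor coordinates (g, s) of a decay histogram."""
    w = 2 * np.pi * f
    g = np.sum(decay * np.cos(w * t)) / np.sum(decay)
    s = np.sum(decay * np.sin(w * t)) / np.sum(decay)
    return g, s

for tau in (0.5e-9, 2e-9, 4e-9):  # single-exponential lifetimes
    g, s = phasor(np.exp(-t / tau))
    # Ideally g = 1/(1 + (w*tau)^2) and s = w*tau/(1 + (w*tau)^2) for a single exponential.
    print(f"tau = {tau*1e9:.1f} ns -> (g, s) = ({g:.3f}, {s:.3f})")
```

Clustering pixels in the (g, s) plane, rather than fitting each decay, is what makes the analysis effectively instantaneous.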
Analysis of spectra using correlation functions
NASA Technical Reports Server (NTRS)
Beer, Reinhard; Norton, Robert H.
1988-01-01
A novel method is presented for the quantitative analysis of spectra based on the properties of the cross correlation between a real spectrum and either a numerical synthesis or laboratory simulation. A new goodness-of-fit criterion called the heteromorphic coefficient H is proposed that has the property of being zero when a fit is achieved and varying smoothly through zero as the iteration proceeds, providing a powerful tool for automatic or near-automatic analysis. It is also shown that H can be rendered substantially noise-immune, permitting the analysis of very weak spectra well below the apparent noise level and, as a byproduct, providing Doppler shift and radial velocity information with excellent precision. The technique is in regular use in the Atmospheric Trace Molecule Spectroscopy (ATMOS) project and operates in an interactive, realtime computing environment with turn-around times of a few seconds or less.
NASA Astrophysics Data System (ADS)
Ullah, Kaleem; Garcia-Camara, Braulio; Habib, Muhammad; Yadav, N. P.; Liu, Xuefeng
2018-07-01
In this work, we report an indirect way to image the Stokes parameters of a sample under test (SUT) with sub-diffraction scattering information. We apply our previously reported technique, called parametric indirect microscopic imaging (PIMI), based on a fitting and filtration process, to measure the Stokes parameters of a submicron particle. A comparison with a classical Stokes measurement is also shown. By modulating the incident field in a precise way, the fitting and filtration process at each pixel of the detector in PIMI enables us to resolve and sense the scattering information of the SUT and map it in terms of the Stokes parameters. We believe that our findings can be very useful in fields such as singular optics, optical nanoantennas, and biomedicine. The spatial signature of the Stokes parameters given by our method has been confirmed with the finite difference time domain (FDTD) method.
Methods of Fitting a Straight Line to Data: Examples in Water Resources
Hirsch, Robert M.; Gilroy, Edward J.
1984-01-01
Three methods of fitting straight lines to data are described and their purposes are discussed and contrasted in terms of their applicability in various water resources contexts. The three methods are ordinary least squares (OLS), least normal squares (LNS), and the line of organic correlation (OC). In all three methods the parameters are based on moment statistics of the data. When estimation of an individual value is the objective, OLS is the most appropriate. When estimation of many values is the objective and one wants the set of estimates to have the appropriate variance, then OC is most appropriate. When one wishes to describe the relationship between two variables and measurement error is unimportant, then OC is most appropriate. Where the error is important in descriptive problems or in calibration problems, then structural analysis techniques may be most appropriate. Finally, if the problem is one of describing some geographic trajectory, then LNS is most appropriate.
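A brief numerical illustration of two of the three lines, using the usual moment-based definitions: the OLS slope is r·sy/sx, while the organic-correlation slope is sign(r)·sy/sx, so OC estimates preserve the variance of the observed y values. The data here are synthetic stand-ins for a water-resources relationship.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 200)
y = 1.5 * x + rng.normal(0, 3, 200)   # synthetic paired observations

r = np.corrcoef(x, y)[0, 1]
sx, sy = x.std(ddof=1), y.std(ddof=1)

slope_ols = r * sy / sx               # ordinary least squares
slope_oc = np.sign(r) * sy / sx       # line of organic correlation
intercept_ols = y.mean() - slope_ols * x.mean()
intercept_oc = y.mean() - slope_oc * x.mean()

print(f"OLS: y = {slope_ols:.3f} x + {intercept_ols:.3f}")
print(f"OC : y = {slope_oc:.3f} x + {intercept_oc:.3f}")
# Ratio near 1 shows OC estimates reproduce the variance of y, unlike OLS estimates.
print("var(OC estimates) / var(y):",
      np.var(slope_oc * x + intercept_oc, ddof=1) / np.var(y, ddof=1))
```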
NASA Astrophysics Data System (ADS)
Wang, Xuchu; Niu, Yanmin
2011-02-01
Automatic measurement of vessels from fundus images is a crucial step for assessing vessel anomalies in the ophthalmological community, where the change in retinal vessel diameters is believed to be indicative of the risk level of diabetic retinopathy. In this paper, a new retinal vessel diameter measurement method combining vessel orientation estimation and filter response is proposed. Its interesting characteristics include: (1) unlike methods that only fit the vessel profiles, the proposed method extracts more stable and accurate vessel diameters by casting the problem as a maximal-response problem of a variation of the Gabor filter; (2) the proposed method can directly and efficiently estimate the vessel's orientation, which is usually captured by time-consuming multi-orientation fitting techniques in many existing methods. Experimental results show that the proposed method both retains computational simplicity and achieves stable and accurate estimation results.
Inversion for the driving forces of plate tectonics
NASA Technical Reports Server (NTRS)
Richardson, R. M.
1983-01-01
Inverse modeling techniques have been applied to the problem of determining the roles of various forces that may drive and resist plate tectonic motions. Separate linear inverse problems have been solved to find the best fitting pole of rotation for finite element grid point velocities and to find the best combination of force models to fit the observed relative plate velocities for the earth's twelve major plates using the generalized inverse operator. Variance-covariance data on plate motion have also been included. Results emphasize the relative importance of ridge push forces in the driving mechanism. Convergent margin forces are smaller by at least a factor of two, and perhaps by as much as a factor of twenty. Slab pull, apparently, is poorly transmitted to the surface plate as a driving force. Drag forces at the base of the plate are smaller than ridge push forces, although the sign of the force remains in question.
A Model-Based Approach for the Measurement of Eye Movements Using Image Processing
NASA Technical Reports Server (NTRS)
Sung, Kwangjae; Reschke, Millard F.
1997-01-01
This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as the droopy eyelids and light reflections while maintaining the measurement resolution available by the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search method of pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least square fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.
Statistical parameters of thermally driven turbulent anabatic flow
NASA Astrophysics Data System (ADS)
Hilel, Roni; Liberzon, Dan
2016-11-01
Field measurements of a thermally driven turbulent anabatic flow over a moderate slope are reported. A collocated hot-film/sonic anemometer (Combo) resolved the finer scales of the flow by implementing a neural-network-based in-situ calibration technique. Eight days of continuous measurements of the wind and temperature fluctuations revealed a diurnal pattern of unstable stratification that forced the development of a highly turbulent, unidirectional upslope flow. Empirical fits of important turbulence statistics were obtained from the velocity fluctuation time series, alongside fully resolved spectra of the velocity field components and characteristic length scales. TKE and TI showed a linear dependence on Re, while the velocity derivative skewness and dissipation rates indicated the anisotropic nature of the flow. Empirical fits of the normalized velocity fluctuation power density spectra were derived, as the spectral shapes exhibited a high level of similarity. A bursting phenomenon was detected during 15% of the total time. Its frequency of occurrence, spectral characteristics and possible generation mechanism are discussed. BSF Grant #2014075.
A note on anomalous band-gap variations in semiconductors with temperature
NASA Astrophysics Data System (ADS)
Chakraborty, P. K.; Mondal, B. N.
2018-03-01
An attempt is made to theoretically study the band-gap variations (ΔEg) in semiconductors with temperature, following the works of Fan and of O'Donnell et al. based on thermodynamic functions. The semiconductor band gap reflects the bonding energy. An increase in temperature changes the chemical bonding, and electrons are promoted from the valence band to the conduction band. In their analyses, they made several approximations with respect to temperature and other fitting parameters, leading to real values of the band-gap variation with a linear temperature dependence. In the present communication, we have re-analysed the work of Fan in particular, which is based on a second-order perturbation treatment of thermodynamic functions, and derived an analytical model for ΔEg(T). Our analysis is made without any approximations with respect to temperature or the other fitting parameters mentioned in the text, leading to a complex function and an oscillating variation of ΔEg. In support of the existence of oscillating band-gap variations with temperature in a semiconductor, possible physical explanations are provided to justify the experimental observations for various materials.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
Quantification of rectifications for the Northwestern University Flexible Sub-Ischial Vacuum Socket.
Fatone, Stefania; Johnson, William Brett; Tran, Lilly; Tucker, Kerice; Mowrer, Christofer; Caldwell, Ryan
2017-06-01
The fit and function of a prosthetic socket depend on the prosthetist's ability to design the socket's shape to distribute load comfortably over the residual limb. We recently developed a sub-ischial socket for persons with transfemoral amputation: the Northwestern University Flexible Sub-Ischial Vacuum Socket. This study aimed to quantify the rectifications required to fit the Northwestern University Flexible Sub-Ischial Vacuum Socket to teach the technique to prosthetists as well as provide a computer-aided design-computer-aided manufacturing option. Development project. A program was used to align scans of unrectified and rectified negative molds and calculate shape change as a result of rectification. Averaged rectifications were used to create a socket template, which was shared with a central fabrication facility engaged in provision of Northwestern University Flexible Sub-Ischial Vacuum Sockets to early clinical adopters. Feedback regarding quality of fitting was obtained. Rectification maps created from 30 cast pairs of successfully fit Northwestern University Flexible Sub-Ischial Vacuum Sockets confirmed that material was primarily removed from the positive mold in the proximal-lateral and posterior regions. The template was used to fabricate check sockets for 15 persons with transfemoral amputation. Feedback suggested that the template provided a reasonable initial fit with only minor adjustments. Rectification maps and template were used to facilitate teaching and central fabrication of the Northwestern University Flexible Sub-Ischial Vacuum Socket. Minor issues with quality of initial fit achieved with the template may be due to inability to adjust the template to patient characteristics (e.g. tissue type, limb shape) and/or the degree to which it represented a fully mature version of the technique. Clinical relevance Rectification maps help communicate an important step in the fabrication of the Northwestern University Flexible Sub-Ischial Vacuum Socket facilitating dissemination of the technique, while the average template provides an alternative fabrication option via computer-aided design-computer-aided manufacturing and central fabrication.
Point and path performance of light aircraft: A review and analysis
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Summey, D. C.; Johnson, W. D.
1973-01-01
The literature on methods for predicting the performance of light aircraft is reviewed. The methods discussed in the review extend from the classical instantaneous maximum or minimum technique to techniques for generating mathematically optimum flight paths. Classical point performance techniques are shown to be adequate in many cases but their accuracies are compromised by the need to use simple lift, drag, and thrust relations in order to get closed form solutions. Also the investigation of the effect of changes in weight, altitude, configuration, etc. involves many essentially repetitive calculations. Accordingly, computer programs are provided which can fit arbitrary drag polars and power curves with very high precision and which can then use the resulting fits to compute the performance under the assumption that the aircraft is not accelerating.
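A minimal sketch of the kind of curve fit such programs perform: a parabolic drag polar CD = CD0 + k·CL² is fitted to tabulated lift/drag data by linear least squares, after which, for example, the speed for minimum drag can be evaluated. The numbers are illustrative, not taken from any particular aircraft or from the reviewed programs.

```python
import numpy as np

# Hypothetical tabulated polar data (CL, CD) for a light aircraft.
CL = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
CD = np.array([0.027, 0.033, 0.044, 0.058, 0.077, 0.100])

# Linear least squares of CD against CL^2 gives k (slope) and CD0 (intercept).
k, CD0 = np.polyfit(CL ** 2, CD, 1)
print(f"CD0 = {CD0:.4f}, k = {k:.4f}")

# Example point-performance use: airspeed for minimum drag in level flight.
W, S, rho = 10_000.0, 16.0, 1.225   # weight (N), wing area (m^2), air density (kg/m^3)
CL_md = np.sqrt(CD0 / k)            # CL at minimum drag for a parabolic polar
V_md = np.sqrt(2 * W / (rho * S * CL_md))
print(f"minimum-drag speed: {V_md:.1f} m/s")
```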
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
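The abstract does not spell out the algorithm, so the sketch below shows one noniterative route in the same spirit, assuming evenly spaced samples: finite differences of y = A·e^(Bt) + C eliminate C, the ratio of successive differences yields B, and A and C then follow from an ordinary linear fit.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true, B_true, C_true = 2.0, -0.7, 1.5
dt = 0.1
t = np.arange(0, 5, dt)
y = A_true * np.exp(B_true * t) + C_true + rng.normal(0, 0.002, t.size)

# Differencing removes C: d_i = y_{i+1} - y_i = A e^{B t_i} (e^{B dt} - 1),
# so the ratio of successive differences equals e^{B dt}.
d = np.diff(y)
ratio = np.median(d[1:] / d[:-1])   # robust estimate of e^{B dt}
B = np.log(ratio) / dt

# With B known, y is linear in e^{Bt}: solve for A and C by linear least squares.
E = np.exp(B * t)
A, C = np.linalg.lstsq(np.column_stack([E, np.ones_like(E)]), y, rcond=None)[0]
print(f"A = {A:.3f}, B = {B:.3f}, C = {C:.3f}")
```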
2015-01-01
The purpose of this study was to: a) identify changes in jump height and perceived well-being as indirect markers of fatigue, b) determine the internal and external workloads performed by players, and c) examine the influence of Yo-Yo IR2 on changes in jump height, perceived well-being and internal and external workloads during a tag football tournament. Microtechnology devices combined with heart rate (HR) chest straps provided external and internal measures of match work-rate and workload for twelve male tag football players during the 2014 Australian National Championships. Jump height and perceived well-being were assessed prior to and during the tournament as indirect measures of fatigue. Changes in work-rate, workload and fatigue measures between high- and low-fitness groups were examined based on players’ Yo-Yo IR2 score using a median split technique. The low- and high-fitness groups reported similar mean HR, PlayerloadTM/min, and distance/min for matches, however the low-fitness group reported higher perceived match-intensities (ES = 0.90–1.35) for several matches. Further, the high-fitness group reported higher measures of tournament workload, including distance (ES = 0.71), PlayerloadTM (ES = 0.85) and Edwards’ training impulse (TRIMP) (ES = 1.23) than the low-fitness group. High- and low-fitness groups both showed large decreases (ES = 1.46–1.49) in perceived well-being during the tournament, although jump height did not decrease below pre-tournament values. Increased Yo-Yo IR2 appears to offer a protective effect against player fatigue despite increased workloads during a tag football tournament. It is vital that training programs adequately prepare tag football players for tournament competition to maximise performance and minimise player fatigue. PMID:26465599
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.
2014-01-01
The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element method models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Due to deflections in the interface fitting hardware and test rig, finite element model techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically-deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
Butts, Arielle; DeJarnette, Christian; Peters, Tracy L.; Parker, Josie E.; Kerns, Morgan E.; Eberle, Karen E.; Kelly, Steve L.
2017-01-01
Traditional approaches to drug discovery are frustratingly inefficient and have several key limitations that severely constrain our capacity to rapidly identify and develop novel experimental therapeutics. To address this, we have devised a second-generation target-based whole-cell screening assay based on the principles of competitive fitness, which can rapidly identify target-specific and physiologically active compounds. Briefly, strains expressing high, intermediate, and low levels of a preselected target protein are constructed, tagged with spectrally distinct fluorescent proteins (FPs), and pooled. The pooled strains are then grown in the presence of various small molecules, and the relative growth of each strain within the mixed culture is compared by measuring the intensity of the corresponding FP tags. Chemical-induced population shifts indicate that the bioactivity of a small molecule is dependent upon the target protein's abundance and thus establish a specific functional interaction. Here, we describe the molecular tools required to apply this technique in the prevalent human fungal pathogen Candida albicans and validate the approach using two well-characterized drug targets, lanosterol demethylase and dihydrofolate reductase. However, our approach, which we have termed target abundance-based fitness screening (TAFiS), should be applicable to a wide array of molecular targets and in essentially any genetically tractable microbe. Importance: Conventional drug screening typically employs either target-based or cell-based approaches. The first group relies on biochemical assays to detect modulators of a purified target. However, hits frequently lack drug-like characteristics such as membrane permeability and target specificity. Cell-based screens identify compounds that induce a desired phenotype, but the target is unknown, which severely restricts further development and optimization. To address these issues, we have developed a second-generation target-based whole-cell screening approach that incorporates the principles of both chemical genetics and competitive fitness, which enables the identification of target-specific and physiologically active compounds from a single screen. We have chosen to validate this approach using the important human fungal pathogen Candida albicans with the intention of pursuing novel antifungal targets. However, this approach is broadly applicable and is expected to dramatically reduce the time and resources required to progress from screening hit to lead compound. PMID:28989971
Ho, Wei-Pin; Lee, Chian-Her; Huang, Chang-Hung; Chen, Chih-Hwa; Chuang, Tai-Yuan
2014-07-01
To compare the clinical outcomes of femoral knot/press-fit anterior cruciate ligament (ACL) reconstruction with conventional techniques using femoral interference screws. Among patients who underwent arthroscopic ACL reconstruction with hamstring autografts, 73 were treated with either a femoral knot/press-fit technique (40 patients, group A) or femoral interference screw fixation (33 patients, group B). The clinical results of the 2 groups were retrospectively compared. The inclusion criteria were primary ACL reconstruction in active patients. The exclusion criteria were fractures, multiligamentous injuries, patients undergoing revision, or patients with contralateral ACL-deficient knees. In the femoral knot/press-fit technique, semitendinosus and gracilis tendons were prepared as 2 loops with knots. After passage through a bottleneck femoral tunnel, the grafts were fixed with a press-fit method (grafts' knots were stuck in the bottleneck of the femoral tunnel). A tie with Mersilene tape (Ethicon, Somerville, NJ) over a bone bridge for each tendon loop and an additional bioabsorbable interference screw were used for tibial fixation. The mean follow-up period was 38 months (range, 24 to 61 months). A significant improvement in knee function and symptoms was reported in most patients, as shown by improved Tegner scores, Lysholm knee scores, and International Knee Documentation Committee assessments (P < .01). The results of instrumented laxity testing, thigh muscle assessment, and radiologic assessment were clearly improved when compared with the preoperative status (P < .01). No statistically significant difference in outcomes could be observed between group A and group B (P = not significant). In this nonrandomized study, femoral knot/press-fit ACL reconstruction did not appear to provide increased anterior instability compared with that of conventional femoral interference screw ACL reconstruction. Favorable outcomes with regard to knee stability and patient satisfaction were achieved in most of our ACL-reconstructed patients using femoral knot/press-fit fixation with hamstring tendon autograft. Level IV, therapeutic case series. Copyright © 2014 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Plain Language to Communicate Physical Activity Information: A Website Content Analysis.
Paige, Samantha R; Black, David R; Mattson, Marifran; Coster, Daniel C; Stellefson, Michael
2018-04-01
Plain language techniques are health literacy universal precautions intended to enhance health care system navigation and health outcomes. Physical activity (PA) is a popular topic on the Internet, yet it is unknown if information is communicated in plain language. This study examined how plain language techniques are included in PA websites, and if the use of plain language techniques varies according to search procedures (keyword, search engine) and website host source (government, commercial, educational/organizational). Three keywords ("physical activity," "fitness," and "exercise") were independently entered into three search engines (Google, Bing, and Yahoo) to locate a nonprobability sample of websites ( N = 61). Fourteen plain language techniques were coded within each website to examine content formatting, clarity and conciseness, and multimedia use. Approximately half ( M = 6.59; SD = 1.68) of the plain language techniques were included in each website. Keyword physical activity resulted in websites with fewer clear and concise plain language techniques ( p < .05), whereas fitness resulted in websites with more clear and concise techniques ( p < .01). Plain language techniques did not vary by search engine or the website host source. Accessing PA information that is easy to understand and behaviorally oriented may remain a challenge for users. Transdisciplinary collaborations are needed to optimize plain language techniques while communicating online PA information.
Kaneko, Takahiro; Yamagishi, Kiyoshi; Horie, Norio; Shimoyama, Tetsuo
2013-01-01
To evaluate the clinical outcome of a novel open-tray impression technique for fabrication of a provisional prosthesis supported by immediately loaded implants in a completely edentulous arch. An open-tray impression technique was evaluated in this retrospective study, which included patients treated between March 2006 and October 2009. Preoperatively, a diagnostic prosthesis was delivered, and a novel open tray was fabricated based on this prosthesis. After implant placement, the impression and interocclusal record were taken simultaneously using the novel open tray. Laboratory-fabricated, screw-retained, all-acrylic resin provisional restorations were delivered on the same day as surgery. The prosthesis was assessed from the day of surgery until replacement with a definitive prosthesis. The study included 21 patients (mean age, 64.5 years) and a total of 125 implants. Of these, 104 implants were immediately loaded. In all patients, well-fitting provisional restorations supported by a minimum of four implants were delivered. Fracture of the first molar cusp was observed in one case after 30 days. However, no extensive framework fracture or functional disorder of the prosthesis occurred. No implant failed during follow-up after surgery. This protocol enabled fabrication of a well-fitting acrylic resin provisional prosthesis supported by immediately loaded implants, because the impression was taken with the patient in centric occlusion and an occlusion identical to that of the diagnostic prosthesis could be reconstructed.
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique is described that employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion. The free parameters of a rational function approximation of the aerodynamic forces in the Laplace domain are selected so that a best fit, in a least squares sense, is obtained to tabular data for purely oscillatory motion. The multilevel structure and the corresponding objective formulations, which separate the reduction of the fit error into linear and nonlinear problems, are presented, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified, a brief description is given of the nongradient nonlinear optimizer that is used, and results illustrating application of the method are presented.
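A minimal sketch of the multilevel idea described above, not a reproduction of the report's exact formulation: a Roger-type rational function approximation is fitted to hypothetical oscillatory data. The inner (linear) level solves for the polynomial and lag coefficients by ordinary least squares for fixed lag roots; the outer (nonlinear) level adjusts the lag roots with a nongradient optimizer (Nelder-Mead here). All data values, lag counts, and starting points are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

k = np.linspace(0.05, 1.5, 20)                              # reduced frequencies of the tabular data
Q_tab = (1.0 + 0.8j) / (1.0 + 1j * k) + 0.3 * (1j * k)       # hypothetical oscillatory force data

def basis(k, lags):
    """Columns: 1, ik, (ik)^2, and ik/(ik + b_j) for each lag root b_j."""
    ik = 1j * k
    cols = [np.ones_like(ik), ik, ik**2] + [ik / (ik + b) for b in lags]
    return np.column_stack(cols)

def linear_fit(lags):
    """Inner (linear) level: least-squares coefficients and fit error for fixed lags."""
    B = basis(k, lags)
    # Stack real and imaginary parts so a real-valued solver can be used.
    A = np.vstack([B.real, B.imag])
    y = np.concatenate([Q_tab.real, Q_tab.imag])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = np.linalg.norm(A @ coef - y)
    return coef, err

def outer_objective(log_lags):
    # Outer (nonlinear) level; lags are parameterized by their logarithm to stay positive.
    _, err = linear_fit(np.exp(log_lags))
    return err

result = minimize(outer_objective, x0=np.log([0.2, 0.8]), method="Nelder-Mead")
best_lags = np.exp(result.x)
coef, err = linear_fit(best_lags)
print("optimized lag roots:", best_lags, " fit error:", err)
```

Because the coefficients enter linearly once the lag roots are frozen, each outer iteration is cheap, which is the practical payoff of separating the linear and nonlinear parts of the problem.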
vFitness: a web-based computing tool for improving estimation of in vitro HIV-1 fitness experiments
2010-01-01
Background: The relative replication rate (or fitness) of viral variants has been investigated in vivo and in vitro for human immunodeficiency virus (HIV). HIV fitness plays an important role in the development and persistence of drug resistance. Accurate estimation of viral fitness relies on complicated computations based on statistical methods, which calls for tools that are easy to access and intuitive to use for various viral fitness experiments. Results: Based on a mathematical model and several statistical methods (a least-squares approach and measurement error models), a Web-based computing tool has been developed to improve the estimation of viral fitness in growth competition assays of human immunodeficiency virus type 1 (HIV-1). Conclusions: Unlike the two-point calculation used in previous studies, the estimation here uses linear regression methods with all observed data from the competition experiment to estimate relative viral fitness parameters more accurately. A dilution factor is introduced to make the computational tool flexible enough to accommodate various experimental conditions. This Web-based tool is implemented in C# with Microsoft ASP.NET and is publicly available at http://bis.urmc.rochester.edu/vFitness/. PMID:20482791
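A minimal sketch of the regression idea behind such a tool, using hypothetical competition-assay counts rather than vFitness's exact model or data. Unlike a two-point calculation, all time points enter an ordinary least-squares fit of the log ratio of the two variants, and a cumulative dilution factor rescales the raw counts from serial passages before the ratio is formed. The array names, the 1:50 passage scheme, and the counts are assumptions for illustration.

```python
import numpy as np

days = np.array([0, 2, 4, 6, 8], dtype=float)
dilution = np.array([1, 50, 50**2, 50**3, 50**4], dtype=float)   # cumulative 1:50 passages
mutant_raw = np.array([1.0e4, 3.2e3, 1.1e3, 4.0e2, 1.4e2])        # measured copies at each passage
wildtype_raw = np.array([1.0e4, 2.4e3, 6.5e2, 1.7e2, 4.5e1])

# Undo the dilutions so both variants are on a common scale, then regress
# log(mutant / wild-type) on time; the slope is the net growth-rate difference.
mutant = mutant_raw * dilution
wildtype = wildtype_raw * dilution
log_ratio = np.log(mutant / wildtype)
slope, intercept = np.polyfit(days, log_ratio, 1)
print(f"growth-rate difference (per day): {slope:.3f}")
print(f"relative fitness exp(slope): {np.exp(slope):.3f}")
```

Using every time point rather than just the first and last makes the slope estimate less sensitive to noise in any single measurement, which is the motivation the abstract gives for moving beyond the two-point calculation.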