Statistical methods for incomplete data: Some results on model misspecification.
McIsaac, Michael; Cook, R J
2017-02-01
Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of doubly robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that the use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve complete-case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy
1993-01-01
Climate changes traditionally have been detected from long series of observations, long after they happened. The 'inverse sequential' monitoring procedure is designed to detect changes as soon as they occur. Frequency distribution parameters are estimated both from the most recent existing set of observations and from the same set augmented by 1, 2, ..., j new observations. Individual-value probability products ('likelihoods') are then calculated, which yield probabilities for erroneously accepting the existing parameter(s) as valid for the augmented data set, and vice versa. A parameter change is signaled when these probabilities (or a more convenient and robust compound 'no change' probability) show a progressive decrease. New parameters are then estimated from the new observations alone to restart the procedure. The detailed algebra is developed and tested for Gaussian means and variances, Poisson and chi-square means, and linear or exponential trends; a comprehensive and interactive Fortran program is provided in the appendix.
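The core of the procedure, comparing the likelihood of new observations under the existing parameters against their likelihood under re-estimated parameters, can be sketched for a Gaussian mean with known variance. This is a toy illustration in Python rather than the authors' interactive Fortran program; the function name, data, and threshold logic are ours:

```python
import math

def no_change_probability(baseline, new_obs, sigma=1.0):
    """Compare the likelihood of new observations under the baseline mean
    against their likelihood under a mean re-estimated from the augmented
    set; return an approximate 'no change' probability."""
    mu0 = sum(baseline) / len(baseline)          # existing parameter estimate
    augmented = baseline + new_obs
    mu1 = sum(augmented) / len(augmented)        # re-estimated from augmented set
    # log-likelihood of the new observations under each mean estimate
    ll0 = sum(-0.5 * ((x - mu0) / sigma) ** 2 for x in new_obs)
    ll1 = sum(-0.5 * ((x - mu1) / sigma) ** 2 for x in new_obs)
    # treat 2*(ll1 - ll0) as an approximate chi-square(1) statistic and
    # convert it to a tail probability: P(chi2_1 > lr) = erfc(sqrt(lr/2))
    lr = max(0.0, 2.0 * (ll1 - ll0))
    return math.erfc(math.sqrt(lr / 2.0))

baseline = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15]   # stable regime, mean near 0
shifted  = [1.1, 0.9, 1.2]                       # new regime, mean near 1
stable   = [0.0, 0.1, -0.05]                     # no change

p_shift  = no_change_probability(baseline, shifted)
p_stable = no_change_probability(baseline, stable)
```

A monitoring loop would call this as each new observation arrives and signal a change when the 'no change' probability decreases progressively; here `p_shift` is small while `p_stable` stays near 1.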
An alternative empirical likelihood method in missing response problems and causal inference.
Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao
2016-11-30
Missing responses are common problems in medical, social, and economic studies. When responses are missing at random, a complete-case data analysis may result in biases. A popular bias-correction method is the inverse probability weighting approach proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and the propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to that of Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied to the estimation of the average treatment effect in observational causal inference. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
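The augmented IPW estimator at the heart of this literature is short enough to sketch. The following is a minimal illustration for estimating a mean with a missing-at-random binary response; the data-generating model and all names are ours, and with a binary covariate the group-frequency auxiliary models are saturated and hence correctly specified:

```python
import random

random.seed(1)

# Toy data: binary covariate X, response Y, missingness indicator R.
# Y is missing at random given X: P(R=1|X) depends on X only.
n = 20000
data = []
for _ in range(n):
    x = 1 if random.random() < 0.5 else 0
    y = 1 if random.random() < (0.7 if x else 0.3) else 0   # E[Y] = 0.5
    pr = 0.9 if x else 0.5                                   # propensity P(R=1|X=x)
    r = 1 if random.random() < pr else 0
    data.append((x, y, r))

def group_mean(vals):
    return sum(vals) / len(vals)

# Auxiliary models fit nonparametrically from the observed data:
# propensity score pi_hat(x) and outcome regression m_hat(x).
pi_hat = {x: group_mean([r for (xx, _, r) in data if xx == x]) for x in (0, 1)}
m_hat = {x: group_mean([y for (xx, y, r) in data if xx == x and r == 1])
         for x in (0, 1)}

# AIPW estimator of E[Y]: the IPW term plus the augmentation term.
aipw = group_mean([r * y / pi_hat[x] - (r - pi_hat[x]) / pi_hat[x] * m_hat[x]
                   for (x, y, r) in data])

# Naive complete-case mean, biased because missingness depends on X.
cc = group_mean([y for (_, y, r) in data if r == 1])
```

Because missingness depends on X, the complete-case mean `cc` overshoots the true mean of 0.5, while the AIPW estimate recovers it; misspecifying either auxiliary model (but not both) would still leave `aipw` consistent, which is the double-robustness property.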
The incomplete inverse and its applications to the linear least squares problem
NASA Technical Reports Server (NTRS)
Morduch, G. E.
1977-01-01
A modified matrix product is explained, and it is shown that this product defines a group whose inverse is called the incomplete inverse. It is proven that the incomplete inverse of an augmented normal matrix includes all the quantities associated with the least squares solution. An answer is provided to the problem that arises when the data residuals are too large and there are insufficient data to justify augmenting the model.
Identification of different geologic units using fuzzy constrained resistivity tomography
NASA Astrophysics Data System (ADS)
Singh, Anand; Sharma, S. P.
2018-01-01
Different geophysical inversion strategies are utilized as components of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field data set. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with those interpreted from borehole information.
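The clustering step is standard fuzzy c-means, which alternates membership and cluster-center updates until the centers stabilize. A one-dimensional sketch (our own toy Python code, not the authors' Matlab implementation; here the "resistivities" are just scalars):

```python
def fuzzy_c_means(values, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means: alternate membership and center updates."""
    centers = [min(values), max(values)]  # simple initialization for c=2
    U = []
    for _ in range(iters):
        # membership of value k in cluster i:
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for v in values:
            d = [abs(v - ctr) or 1e-12 for ctr in centers]
            row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                             for j in range(c))
                   for i in range(c)]
            U.append(row)
        # center update: mean weighted by u_ik^m
        centers = [sum(U[k][i] ** m * values[k] for k in range(len(values))) /
                   sum(U[k][i] ** m for k in range(len(values)))
                   for i in range(c)]
    return centers, U

# two well-separated "geologic units" near 1 and 10
vals = [0.9, 1.1, 1.0, 0.95, 9.8, 10.1, 10.0, 10.2]
centers, U = fuzzy_c_means(vals)
```

Each model cell would then be assigned to the cluster where its membership is highest, which is the constraint fed back into the resistivity inversion.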
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the questions of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., ones for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer leads to a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
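The Bayesian machinery involved can be illustrated in miniature. The sketch below infers a single parameter from synthetic data with a known true value using random-walk Metropolis, the kind of MCMC sampling QUESO provides at scale; the model, step size, and chain lengths are our own choices:

```python
import math
import random

random.seed(7)

# Synthetic "observations" from a model with a known true parameter,
# mirroring the paper's use of synthetic earthquakes with known histories.
true_mu, sigma = 2.0, 1.0
obs = [true_mu + sigma * random.gauss(0.0, 1.0) for _ in range(200)]

def log_posterior(mu):
    # flat prior on mu; Gaussian likelihood with known sigma
    return sum(-0.5 * ((x - mu) / sigma) ** 2 for x in obs)

# Random-walk Metropolis sampling of the posterior.
samples, mu, lp = [], 0.0, log_posterior(0.0)
for step in range(5000):
    prop = mu + 0.2 * random.gauss(0.0, 1.0)
    lp_prop = log_posterior(prop)
    accept_prob = math.exp(min(0.0, lp_prop - lp))
    if random.random() < accept_prob:
        mu, lp = prop, lp_prop
    if step >= 1000:          # discard burn-in
        samples.append(mu)

post_mean = sum(samples) / len(samples)
```

The retained samples trace out the full posterior density of the parameter, which is exactly what allows the uncertainty assessment the abstract describes, rather than a single best-fit value.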
Karadima, Maria L; Saetta, Angelica A; Chatziandreou, Ilenia; Lazaris, Andreas C; Patsouris, Efstratios; Tsavaris, Nikolaos
2016-10-01
Our aim was to evaluate the predictive and prognostic influence of BRAF mutation and other molecular, clinical, and laboratory parameters in stage IV colorectal cancer (CRC). Sixty patients were included in this retrospective analysis, and 17 variables were examined for their relation with treatment response and survival. KRAS mutation was identified in 40.3 % of cases, and BRAF and PIK3CA mutations in 8.8 % and 10.5 %, respectively. 29.8 % of patients responded to treatment. Median survival time was 14.3 months. Weight loss, fever, abdominal metastases, blood transfusion, hypoalbuminemia, BRAF and PIK3CA mutations, CRP, and DNA Index were associated with survival. In multivariate analysis, male patients had a 3.8 times higher probability of response, increased DNA Index was inversely correlated with response, and a one-unit rise in DNA Index increased the probability of death 6-fold. Our findings support the prognostic role of BRAF and PIK3CA mutations and ploidy in advanced CRC.
Secondary outcome analysis for data from an outcome-dependent sampling design.
Pan, Yinghao; Cai, Jianwen; Longnecker, Matthew P; Zhou, Haibo
2018-04-22
An outcome-dependent sampling (ODS) scheme is a cost-effective way to conduct a study. For a study with a continuous primary outcome, an ODS scheme can be implemented in which the expensive exposure is measured only on a simple random sample plus supplemental samples selected from the 2 tails of the primary outcome variable. With the tremendous cost invested in collecting the primary exposure information, investigators often would like to use the available data to study the relationship between a secondary outcome and the obtained exposure variable. This is referred to as secondary analysis. Secondary analysis in ODS designs can be tricky, as the ODS sample is not a random sample from the general population. In this article, we use inverse probability weighted and augmented inverse probability weighted estimating equations to analyze the secondary outcome for data obtained from the ODS design. We do not make any parametric assumptions on the primary and secondary outcomes and only specify the form of the regression mean models, thus allowing an arbitrary error distribution. Our approach is robust to second- and higher-order moment misspecification. It also leads to more precise estimates of the parameters by effectively using all the available participants. Through simulation studies, we show that the proposed estimator is consistent and asymptotically normal. Data from the Collaborative Perinatal Project are analyzed to illustrate our method. Copyright © 2018 John Wiley & Sons, Ltd.
Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao
2016-05-19
Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in a wireless body sensor network (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm consisting of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
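The first-phase OMP step, greedily augmenting the support set and re-fitting by least squares, can be sketched in plain Python on a small dense example (the vOMMP two-phase refinement and the MIF hardware technique are beyond this sketch; the dictionary and signal below are ours):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve(G, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(G)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n + 1):
                A[r][j] -= f * A[col][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

def omp(columns, y, k):
    """Greedy support augmentation: pick the column most correlated with
    the residual, re-fit least squares on the support, repeat k times."""
    support, residual, coef = [], y[:], []
    for _ in range(k):
        best = max((i for i in range(len(columns)) if i not in support),
                   key=lambda i: abs(dot(columns[i], residual)))
        support.append(best)
        sel = [columns[i] for i in support]
        G = [[dot(a, b) for b in sel] for a in sel]   # normal equations
        coef = solve(G, [dot(a, y) for a in sel])
        fit = [sum(coef[j] * sel[j][t] for j in range(len(sel)))
               for t in range(len(y))]
        residual = [y[t] - fit[t] for t in range(len(y))]
    return support, coef

cols = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
        [0.5, 0.5, 0.5, 0.5], [0.7071, -0.7071, 0, 0]]
y = [2.0, 0.0, 3.0, 0.0]          # = 2*cols[0] + 3*cols[2]
support, coef = omp(cols, y, k=2)
```

The repeated least-squares re-fit is the pseudo-inverse step that the paper's MIF technique replaces with QR updates in hardware.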
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied for towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
Evolution of single-particle structure and beta-decay near 78Ni
NASA Astrophysics Data System (ADS)
Borzov, I. N.
2012-12-01
The extended self-consistent beta-decay model has been applied to beta-decay rates and delayed neutron emission probabilities of spherical neutron-rich isotopes near the r-process paths. Unlike the popular global FRDM+RPA model, our fully microscopic approach treats the Gamow-Teller and first-forbidden decays on the same footing. The model has been augmented by blocking of the odd particle in order to account for the important ground-state spin-parity inversion effect, which has been shown to exist in the region of the most neutron-rich doubly magic nucleus 78Ni. Finally, a newly developed form of the density functional, DF3a, has been employed, which gives a better spin-orbit splitting due to the modified tensor components of the density functional.
Real-Time Minimization of Tracking Error for Aircraft Systems
NASA Technical Reports Server (NTRS)
Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John
2013-01-01
This technology presents a novel, stable, discrete-time adaptive law for flight control in a direct adaptive control (DAC) framework. In the absence of errors, the original control design has been tuned for optimal performance. Adaptive control works towards achieving nominal performance whenever the design has modeling uncertainties or errors, or when the vehicle suffers a substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) modeling of aircraft operation may be changed.
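Dynamic inversion with proportional-integral augmentation can be illustrated on a scalar nonlinear plant: the controller inverts an approximate model, and the PI pseudo-control absorbs the residual model error. The plant, model mismatch, and gains below are our own toy choices, not the flight control system described here:

```python
import math

# True plant: x' = f(x) + g*u, with f only approximately known.
f_true = lambda x: -x + 0.5 * math.sin(x)   # actual dynamics
f_model = lambda x: -x                      # controller's (inexact) model
g = 2.0

dt, T = 0.01, 20.0
kp, ki = 5.0, 2.0
x, e_int = 0.0, 0.0
r, r_dot = 1.0, 0.0                         # step reference

for _ in range(int(T / dt)):
    e = r - x
    e_int += e * dt
    v = r_dot + kp * e + ki * e_int         # pseudo-control from PI augmentation
    u = (v - f_model(x)) / g                # dynamic inversion of the model
    x += (f_true(x) + g * u) * dt           # plant integration (Euler)
```

At steady state the integral term settles at exactly the value needed to cancel the unmodeled `0.5*sin(x)` term, so the tracking error goes to zero even though the inversion is inexact; an adaptive augmentation signal plays an analogous error-absorbing role in the DAC framework.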
Principled Approaches to Missing Data in Epidemiologic Studies
Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Tchetgen Tchetgen, Eric J; Sun, BaoLuo; Mitchell, Emily M; Schisterman, Enrique F
2018-01-01
Principled methods with which to appropriately analyze missing data have long existed; however, broad implementation of these methods remains challenging. In this and 2 companion papers (Am J Epidemiol. 2018;187(3):576–584 and Am J Epidemiol. 2018;187(3):585–591), we discuss issues pertaining to missing data in the epidemiologic literature. We provide details regarding missing-data mechanisms and nomenclature and encourage the conduct of principled analyses through a detailed comparison of multiple imputation and inverse probability weighting. Data from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are used to create a masked data-analytical challenge with missing data induced by known mechanisms. We illustrate the deleterious effects of missing data with naive methods and show how principled methods can sometimes mitigate such effects. For example, when data were missing at random, naive methods showed a spurious protective effect of smoking on the risk of spontaneous abortion (odds ratio (OR) = 0.43, 95% confidence interval (CI): 0.19, 0.93), while implementation of principled methods multiple imputation (OR = 1.30, 95% CI: 0.95, 1.77) or augmented inverse probability weighting (OR = 1.40, 95% CI: 1.00, 1.97) provided estimates closer to the “true” full-data effect (OR = 1.31, 95% CI: 1.05, 1.64). We call for greater acknowledgement of and attention to missing data and for the broad use of principled missing-data methods in epidemiologic research. PMID:29165572
Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle
NASA Technical Reports Server (NTRS)
Shin, Yoonghyun; Calise, Anthony J.; Motter, Mark A.
2005-01-01
This paper summarizes the application of two adaptive approaches to autopilot design, and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversion based on a command augmentation system. Both are augmented with neural network based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio-controlled miniature aerial vehicle are presented to illustrate the performance of the neural network based adaptation.
Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft
NASA Technical Reports Server (NTRS)
Burken, John J.; Williams-Hayes, Peggy; Kaneshige, John T.; Stachowiak, Susan J.
2006-01-01
Description of the performance of a simplified dynamic inversion controller with neural network augmentation follows. Simulation studies focus on the results with and without neural network adaptation through the use of an F-15 aircraft simulator that has been modified to include canards. Simulated control law performance with a surface failure, in addition to an aerodynamic failure, is presented. The aircraft, with adaptation, attempts to minimize the inertial cross-coupling effect of the failure (a control derivative anomaly associated with a jammed control surface). The dynamic inversion controller calculates necessary surface commands to achieve desired rates. The dynamic inversion controller uses approximate short period and roll axis dynamics. The yaw axis controller is a sideslip rate command system. Methods are described to reduce the cross-coupling effect and maintain adequate tracking errors for control surface failures. The aerodynamic failure destabilizes the pitching moment due to angle of attack. The results show that control of the aircraft with the neural networks is easier (more damped) than without the neural networks. Simulation results show neural network augmentation of the controller improves performance with aerodynamic and control surface failures in terms of tracking error and cross-coupling reduction.
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
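The quantile-binning approach the authors recommend reduces weight construction to categorical frequencies. A sketch with a single binary confounder (the data-generating model and bin count are our own; a real analysis would model P(bin | covariates) with a regression rather than raw frequencies):

```python
import random

random.seed(3)

# Toy cohort: a binary confounder z shifts a continuous exposure a.
n = 50000
zs = [1 if random.random() < 0.4 else 0 for _ in range(n)]
exposures = [random.gauss(2.0 if z else 0.0, 1.0) for z in zs]

# Quantile binning: cut the exposure into equal-frequency categories,
# then form stabilized weights P(bin) / P(bin | z) from frequencies.
n_bins = 10
sorted_a = sorted(exposures)
cuts = [sorted_a[int(n * k / n_bins)] for k in range(1, n_bins)]

def bin_of(a):
    b = 0
    for c in cuts:
        if a >= c:
            b += 1
    return b

bins = [bin_of(a) for a in exposures]
p_marg = [bins.count(b) / n for b in range(n_bins)]
p_cond = {}
for zval in (0, 1):
    idx = [i for i in range(n) if zs[i] == zval]
    p_cond[zval] = [sum(1 for i in idx if bins[i] == b) / len(idx)
                    for b in range(n_bins)]

weights = [p_marg[bins[i]] / max(p_cond[zs[i]][bins[i]], 1e-12)
           for i in range(n)]
mean_w = sum(weights) / n
```

Because the weights are stabilized, their mean is close to 1; binning sidesteps the choice of a parametric exposure density and the heteroscedasticity issue entirely, at the cost of coarsening the exposure.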
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher-mode Rayleigh-wave group velocities.
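The decomposition idea can be seen in a scalar toy problem: minimize a sum of component objectives by solving each component separately and letting multiplier updates steer the copies toward a common solution. This is a consensus ADMM-style instance of the augmented Lagrangian technique with closed-form component solves; it is our illustration, not the authors' seismic implementation:

```python
# Decompose: minimize f1(x) + f2(x), with f1(x) = (x-1)^2 standing in for
# one data-subset misfit and f2(x) = 3(x-5)^2 for another.
# Exact minimizer of the full problem: 2(x-1) + 6(x-5) = 0  =>  x* = 4.
rho = 1.0
xs = [0.0, 0.0]   # per-component model copies
us = [0.0, 0.0]   # scaled Lagrange multipliers
z = 0.0           # consensus model
for _ in range(200):
    # separate component solves: argmin_x f_i(x) + (rho/2)(x - z + u_i)^2
    xs[0] = (2 * 1.0 + rho * (z - us[0])) / (2 + rho)    # from f1
    xs[1] = (6 * 5.0 + rho * (z - us[1])) / (6 + rho)    # from f2
    # consensus and multiplier updates steer the copies together
    z = sum(x + u for x, u in zip(xs, us)) / 2
    us = [u + x - z for x, u in zip(xs, us)]
```

Each component solve only ever sees its own objective, which is what allows differing parameterizations and forward modelers to coexist; the multiplier updates enforce the equality constraint that makes the copies converge to the full-problem minimizer.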
NASA Technical Reports Server (NTRS)
Chung, W. W.; Mcneill, W. E.; Stortz, M. W.
1993-01-01
The nonlinear inverse transformation flight control system design method is applied to the Lockheed Ft. Worth Company's E-7D short takeoff and vertical land (STOVL) supersonic fighter/attack aircraft design with a modified General Electric F110 engine which has augmented propulsive lift capability. The system is fully augmented to provide flight path control and velocity control, and rate command attitude hold for angular axes during the transition and hover operations. In cruise mode, the flight control system is configured to provide direct thrust command, rate command attitude hold for pitch and roll axes, and sideslip command with turn coordination. A control selector based on the nonlinear inverse transformation method is designed specifically to be compatible with the propulsion system's physical configuration which has a two dimensional convergent-divergent aft nozzle, a vectorable ventral nozzle, and a thrust augmented ejector. The nonlinear inverse transformation is used to determine the propulsive forces and nozzle deflections, which in combination with the aerodynamic forces and moments (including propulsive induced contributions), and gravitational force, are required to achieve the longitudinal and vertical acceleration commands. The longitudinal control axes are fully decoupled within the propulsion system's performance envelope. A piloted motion-base flight simulation was conducted on the Vertical Motion Simulator (VMS) at NASA Ames Research Center to examine the handling qualities of this design. Based on results of the simulation, refinements to the control system have been made and will also be covered in the report.
NASA Technical Reports Server (NTRS)
Bayo, Eduardo; Ledesma, Ragnar
1993-01-01
A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. In contrast to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end-effector trajectory.
Some tests on small-scale rectangular throat ejector. [thrust augmentation for V/STOL aircraft
NASA Technical Reports Server (NTRS)
Dean, W. N., Jr.; Franke, M. E.
1979-01-01
A small scale rectangular throat ejector with plane slot nozzles and a fixed throat area was tested to determine the effects of diffuser sidewall length, diffuser area ratio, and sidewall nozzle position on thrust and mass augmentation. The thrust augmentation ratio varied from approximately 0.9 to 1.1. Although the ejector did not have good thrust augmentation performance, the effects of the parameters studied are believed to indicate probable trends in thrust augmenting ejectors.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method which can draw samples from the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We used a fractal-based probability density function, which uses the mean, variance, and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
An inverse dynamics approach to trajectory optimization for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
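For orientation, the quantity being approximated has a closed form in the classical risk model when claims are exponential; the following sketch is not the paper's scaled Laplace-transform inversion but the standard exponential-claim benchmark against which such approximations are often checked:

```python
import math

def ruin_prob_exponential(u, lam, mu, c):
    """Exact ruin probability in the classical Cramer-Lundberg model with
    Poisson claim rate lam, exponential claim-size mean mu, premium rate c:
      psi(u) = rho * exp(-(1 - rho) * u / mu),  rho = lam * mu / c."""
    rho = lam * mu / c
    if rho >= 1:
        return 1.0  # net profit condition violated: ruin is certain
    return rho * math.exp(-(1.0 - rho) * u / mu)
```

For example, with unit claim rate and mean and premium rate 2, the ruin probability at zero initial surplus is exactly rho = 0.5 and decays exponentially in the initial surplus u.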
Daniel, Rhian M.; Tsiatis, Anastasios A.
2014-01-01
Two common features of clinical trials, and other longitudinal studies, are (1) a primary interest in composite endpoints, and (2) the problem of subjects withdrawing prematurely from the study. In some settings, withdrawal may only affect observation of some components of the composite endpoint, for example when another component is death, information on which may be available from a national registry. In this paper, we use the theory of augmented inverse probability weighted estimating equations to show how such partial information on the composite endpoint for subjects who withdraw from the study can be incorporated in a principled way into the estimation of the distribution of time to composite endpoint, typically leading to increased efficiency without relying on additional assumptions above those that would be made by standard approaches. We describe our proposed approach theoretically, and demonstrate its properties in a simulation study. PMID:23722304
Bayesian approach to inverse statistical mechanics.
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
Pilots Rate Augmented Generalized Predictive Control for Reconfiguration
NASA Technical Reports Server (NTRS)
Soloway, Don; Haley, Pam
2004-01-01
The objective of this paper is to report the results from the research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to reduce zero steady-state error led to the neural network predictor model becoming redundant for the task. Instead of using a neural network predictor model, a nominal single-point linear model was used and then augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but the former with lower rate requirements on the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.
Inverse modeling methods for indoor airborne pollutant tracking: literature review and fundamentals.
Liu, X; Zhai, Z
2007-12-01
Reduction in indoor environment quality calls for effective control and improvement measures. Accurate and prompt identification of contaminant sources ensures that they can be quickly removed and contaminated spaces isolated and cleaned. This paper discusses the use of inverse modeling to identify potential indoor pollutant sources with limited pollutant sensor data. The study reviews various inverse modeling methods for advection-dispersion problems and summarizes the methods into three major categories: forward, backward, and probability inverse modeling methods. The adjoint probability inverse modeling method is indicated as an appropriate model for indoor air pollutant tracking because it can quickly find source location, strength and release time without prior information. The paper introduces the principles of the adjoint probability method and establishes the corresponding adjoint equations for both multi-zone airflow models and computational fluid dynamics (CFD) models. The study proposes a two-stage inverse modeling approach integrating both multi-zone and CFD models, which can provide a rapid estimate of indoor pollution status and history for a whole building. Preliminary case study results indicate that the adjoint probability method is feasible for indoor pollutant inverse modeling. The proposed method can help identify contaminant source characteristics (location and release time) with limited sensor outputs. This will ensure an effective and prompt execution of building management strategies and thus achieve a healthy and safe indoor environment. The method can also help design optimal sensor networks.
Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M
2018-02-01
Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, such as the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time-consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
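The core of the correction is simple to state: each uncensored subject is re-weighted by the inverse of its estimated probability of remaining uncensored. A minimal Python sketch for a weighted mean (illustrative only; the article's algorithm is in R and targets full survival curves, not a single mean):

```python
def ipcw_mean(outcomes, observed, censor_prob):
    """Inverse-probability-of-censoring-weighted mean.
    outcomes[i]:    outcome value (meaningful only when observed[i] is True)
    observed[i]:    True if subject i was not censored
    censor_prob[i]: estimated P(censored | covariates of subject i)
    Each uncensored subject is up-weighted by 1 / P(not censored)."""
    num = den = 0.0
    for y, d, pc in zip(outcomes, observed, censor_prob):
        if d:
            w = 1.0 / (1.0 - pc)
            num += w * y
            den += w
    return num / den
```

With no censoring (all probabilities zero) this reduces to the ordinary sample mean, which is the sanity check usually run first.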
An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-01-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360
Viscosities of implantable biomaterials in vocal fold augmentation surgery.
Chan, R W; Titze, I R
1998-05-01
Vocal fold vibration depends critically on the viscoelasticity of vocal fold tissues. For instance, phonation threshold pressure, a measure of the "ease" of phonation, has been shown to be directly related to the viscosity of the vibrating mucosa. Various implantable biomaterials have been used in vocal fold augmentation surgery, with implantation sites sometimes close to or inside the mucosa. Yet their viscosities or other mechanical properties are seldom known. This study attempts to provide data on viscosities of commonly used phonosurgical biomaterials. Using a parallel-plate rotational rheometer, oscillatory shear experiments were performed on implantable polytetrafluoroethylene (Teflon or Polytef; Mentor Inc., Hingham, MA), collagen (Zyderm; Collagen Corp., Palo Alto, CA), glutaraldehyde crosslinked (GAX) collagen (Phonagel or Zyplast; Collagen Corp.), absorbable gelatin (Gelfoam; Upjohn Co., Kalamazoo, MI), and human abdominal subcutaneous fat. Samples of human vocal fold mucosal tissues were also tested. Under sinusoidal oscillatory shear at 10 Hz and at 37 degrees C, the dynamic viscosity was 116 Pascal-seconds (Pa-s) for polytetrafluoroethylene, 21 Pa-s for gelatin, 8-13 Pa-s for the two types of collagen, 3 Pa-s for fat, and 1 to 3 Pa-s for vocal fold mucosa. Results extrapolated to 100 Hz also show similar differences among the biomaterials, but all values are an order of magnitude lower because of the typical inverse frequency relation (shear thinning effect) for polymeric and biologic materials. The data suggest that the use of fat for vocal fold augmentation may be more conducive to the "ease" of phonation because of its relatively low viscosity, which is closest to physiologic levels. This implication is probably the most relevant in predicting initial outcome of the postoperative voice before there is any significant assimilation (e.g., resorption and fibrosis) of the implanted biomaterial.
Ramirez, Abelardo; Foxall, William
2014-05-28
Stochastic inversions of InSAR data were carried out to assess the probability that pressure perturbations resulting from CO2 injection into well KB-502 at In Salah penetrated into the lower caprock seal above the reservoir. Inversions of synthetic data were employed to evaluate the factors that affect the vertical resolution of overpressure distributions, and to assess the impact of various sources of uncertainty in prior constraints on inverse solutions. These include alternative pressure-driven deformation modes within reservoir and caprock, the geometry of a sub-vertical fracture zone in the caprock identified in previous studies, and imperfect estimates of the rock mechanical properties. Inversions of field data indicate that there is a high probability that a pressure perturbation during the first phase of injection extended upwards along the fracture zone ~ 150 m above the reservoir, and less than 50% probability that it reached the Hot Shale unit at 1500 m depth. Within the uncertainty bounds considered, it was concluded that it is very unlikely that the pressure perturbation approached within 150 m of the top of the lower caprock at the Hercynian Unconformity. The results are consistent with previous deterministic inversion and forward modeling studies.
Estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers.
Li, Shanshan; Ning, Yang
2015-09-01
Covariate-specific time-dependent ROC curves are often used to evaluate the diagnostic accuracy of a biomarker with time-to-event outcomes, when certain covariates have an impact on the test accuracy. In many medical studies, measurements of biomarkers are subject to missingness due to high cost or limitation of technology. This article considers estimation of covariate-specific time-dependent ROC curves in the presence of missing biomarkers. To incorporate the covariate effect, we assume a proportional hazards model for the failure time given the biomarker and the covariates, and a semiparametric location model for the biomarker given the covariates. In the presence of missing biomarkers, we propose a simple weighted estimator for the ROC curves where the weights are inversely proportional to the selection probability. We also propose an augmented weighted estimator which utilizes information from the subjects with missing biomarkers. The augmented weighted estimator enjoys the double-robustness property in the sense that the estimator remains consistent if either the missing data process or the conditional distribution of the missing data given the observed data is correctly specified. We derive the large sample properties of the proposed estimators and evaluate their finite sample performance using numerical studies. The proposed approaches are illustrated using the US Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. © 2015, The International Biometric Society.
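The double-robustness property described above is easiest to see in the estimator's generic form. Below is a hedged sketch for the simpler problem of estimating a mean with missing outcomes (the ROC-curve version replaces the outcome with indicator processes, so this is orientation only, not the authors' estimator):

```python
def aipw_mean(y, observed, p_obs, y_pred):
    """Augmented IPW (doubly robust) estimator of E[Y] with missing outcomes:
      mu_hat = n^-1 * sum_i [ R_i*Y_i/p_i - (R_i - p_i)/p_i * m(X_i) ]
    where R_i indicates Y_i is observed, p_i estimates P(R_i = 1 | X_i),
    and m(X_i) is a fitted outcome model. The estimate is consistent if
    EITHER the missingness model OR the outcome model is correct."""
    total = 0.0
    for yi, r, p, yh in zip(y, observed, p_obs, y_pred):
        r = 1.0 if r else 0.0
        total += r * yi / p - (r - p) / p * yh
    return total / len(y)
```

When nothing is missing, the augmentation term vanishes and the estimator collapses to the sample mean; when the outcome model is exact, it recovers the full-data mean even with missing values.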
Augmenting Security on Department of Defense Installations to Defeat the Active Shooter Threat
2016-06-10
strategies to determine if the military could benefit from increased numbers of armed personnel to augment military and civilian law enforcement...personnel. The benefit to the DoD includes increased probability of prevention and deterrence of active shooter events, and a more efficient mitigation...
Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...
2015-06-12
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
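The statistical relationships the framework builds on begin with an empirical vertical transition probability matrix estimated from borehole facies logs. A minimal sketch (hypothetical facies labels, not the paper's Chaobai River parameterization):

```python
def transition_matrix(sequence, states):
    """Empirical vertical transition probability matrix for a facies log:
    entry [i][j] estimates P(next facies = states[j] | current = states[i]),
    from counts of adjacent pairs along the log."""
    idx = {s: k for k, s in enumerate(states)}
    n = len(states)
    counts = [[0] * n for _ in range(n)]
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a]][idx[b]] += 1
    probs = []
    for row in counts:
        total = sum(row)
        # rows with no observed transitions are left as zeros
        probs.append([c / total if total else 0.0 for c in row])
    return probs
```

Mean lengths and volumetric proportions then follow from the diagonal entries and the stationary distribution of such a matrix, which is the link the transition probability approach exploits.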
Greenwood, J. Arthur; Landwehr, J. Maciunas; Matalas, N.C.; Wallis, J.R.
1979-01-01
Distributions whose inverse forms are explicitly defined, such as Tukey's lambda, may present problems in deriving their parameters by more conventional means. Probability weighted moments are introduced and shown to be potentially useful in expressing the parameters of these distributions.
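Sample probability weighted moments have a simple unbiased form; the sketch below implements the standard b_r estimator (with the usual identities lambda_1 = b_0 and lambda_2 = 2*b_1 - b_0 linking them to the first L-moments), stated here for orientation rather than as the article's notation:

```python
from math import comb

def sample_pwm(data, r):
    """Unbiased sample probability weighted moment
      b_r = n^-1 * sum_i [C(i-1, r) / C(n-1, r)] * x_(i),
    where x_(1) <= ... <= x_(n) are the ascending order statistics."""
    x = sorted(data)
    n = len(x)
    # enumerate's 0-based index supplies i-1 directly
    return sum(comb(i, r) * xi for i, xi in enumerate(x)) / (comb(n - 1, r) * n)
```

b_0 is just the sample mean, which gives a quick correctness check before using higher-order moments to match distribution parameters.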
Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin
2014-03-01
Objective: to evaluate and adjust for verification bias in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of diagnostic tests, with an example from cervical cancer screening used to introduce the Compare Tests package in R software, in which the method is implemented. Sensitivity and specificity calculated by the traditional method and by maximum likelihood estimation were compared to the results from the inverse-probability weighting method in the random-sampling example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95% CI: 74.23-89.93) and 85.86% (95% CI: 84.23-87.36). In the analysis of data with verification by the gold standard missing at random, the sensitivity and specificity calculated by the traditional method were 90.48% (95% CI: 80.74-95.56) and 71.96% (95% CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95% CI: 63.11-92.62) and 85.80% (95% CI: 85.09-86.47), respectively, whereas they were 80.13% (95% CI: 66.81-93.46) and 85.80% (95% CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially under complex sampling.
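The adjustment re-weights each verified subject by the inverse of its verification probability. A minimal sketch of the sensitivity correction (illustrative only; the abstract's analysis used an R package, and the verification probabilities would come from a fitted verification model):

```python
def ipw_sensitivity(test_pos, disease, verified, verify_prob):
    """Verification-bias-adjusted sensitivity via inverse-probability
    weighting. Disease status is known only for verified subjects;
    each verified diseased subject counts 1 / P(verified)."""
    tp = pos = 0.0
    for t, d, v, p in zip(test_pos, disease, verified, verify_prob):
        if v and d:
            w = 1.0 / p
            pos += w
            if t:
                tp += w
    return tp / pos
```

With complete verification (all probabilities 1) this reduces to the naive sensitivity; when test-negative subjects are verified less often, the up-weighting undoes the resulting overestimate.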
Cox regression analysis with missing covariates via nonparametric multiple imputation.
Hsu, Chiu-Hsieh; Yu, Mandi
2018-01-01
We consider estimation of Cox regression when some covariates are subject to missingness and there exists additional information (including observed event time, censoring indicator and fully observed covariates) that may be predictive of the missing covariates. We propose to use two working regression models: one for predicting the missing covariates and the other for predicting the missingness probabilities. For each missing covariate observation, these two working models are used to define a nearest-neighbor imputing set. This set is then used to non-parametrically impute covariate values for the missing observation. Upon completion of imputation, Cox regression is performed on the multiply imputed datasets to estimate the regression coefficients. In a simulation study, we compare the nonparametric multiple imputation approach with the augmented inverse probability weighted (AIPW) method, which directly incorporates the two working models into the estimation of Cox regression, and with the predictive mean matching (PMM) imputation method. We show that all approaches can reduce bias due to a non-ignorable missingness mechanism. The proposed nonparametric imputation method is robust to misspecification of either of the two working models and to misspecification of their link functions. In contrast, the PMM method is sensitive to misspecification of the covariates included in imputation, and the AIPW method is sensitive to the selection probability. We apply the approaches to a breast cancer dataset from the Surveillance, Epidemiology and End Results (SEER) Program.
A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography
NASA Astrophysics Data System (ADS)
Sun, S.; Chen, C.; WANG, H.; Wang, Q.
2014-12-01
The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Unlike external a priori information, the self-extracted information consists of parameters derived exclusively from analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography needs neither a priori information nor large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. We therefore attempt to use the a priori information extracted from probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, a characteristic that is also present in their probability tomography results. We therefore combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y and ∂ΔΤ/∂z into a new result used for extracting a priori information, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples computed with and without a priori information extracted from the probability tomography results show that the former are more concentrated, with higher resolution of the source body edges. The method is finally applied to field-measured ΔΤ data from an iron mine in China and performs well.
References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for the Institute of Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).
[Determinants of pride and shame: outcome, expected success and attribution].
Schützwohl, A
1991-01-01
In two experiments we investigated the relationship between subjective probability of success and pride and shame. According to Atkinson (1957), pride (the incentive of success) is an inverse linear function of the probability of success, shame (the incentive of failure) being a negative linear function. Attribution theory predicts an inverse U-shaped relationship between subjective probability of success and pride and shame. The results presented here are at variance with both theories: Pride and shame do not vary with subjective probability of success. However, pride and shame are systematically correlated with internal attributions of action outcome.
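Stated compactly, the two competing predictions read as follows (a standard rendering of Atkinson's linear model; the attributional curve is given only schematically, not as a formula from the attribution literature):

```latex
% Atkinson (1957): incentives linear in the subjective probability of success P_s
I_s = 1 - P_s \quad \text{(pride: inverse linear)}, \qquad
I_f = -P_s \quad \text{(shame: negative linear)}.
% Attribution theory instead predicts an inverse U peaking at
% intermediate P_s, schematically I \propto P_s (1 - P_s).
```

The reported finding, pride and shame invariant in P_s but tied to internal attributions, fits neither curve.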
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.
2012-05-01
In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.
Taylor, Jeremy M G; Cheng, Wenting; Foster, Jared C
2015-03-01
A recent article (Zhang et al., 2012, Biometrics 68, 1010-1018) compares regression-based and inverse probability based methods of estimating an optimal treatment regime and shows, for a small number of covariates, that inverse probability weighted methods are more robust to model misspecification than regression methods. We demonstrate that using models that fit the data better reduces the concern about non-robustness for the regression methods. We extend the simulation study of Zhang et al. (2012), also considering the situation of a larger number of covariates, and show that incorporating random forests into both regression-based and inverse probability weighted methods improves their properties. © 2014, The International Biometric Society.
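The inverse probability weighted side of that comparison rests on a simple value estimator for a candidate treatment rule. A sketch of the generic form (not Zhang et al.'s exact estimator or the random-forest augmentation; symbols and the binary-treatment setup are illustrative):

```python
def ipw_value(rule, X, A, Y, propensity):
    """Inverse-probability-weighted estimate of the value E[Y(d)] of a
    treatment rule d: keep subjects whose observed treatment A agrees
    with rule(X), weighting each by 1 / P(A observed | X).
    propensity[i] = estimated P(A_i = 1 | X_i) for binary treatment."""
    num = den = 0.0
    for x, a, y, p in zip(X, A, Y, propensity):
        if rule(x) == a:
            w = 1.0 / p if a == 1 else 1.0 / (1.0 - p)
            num += w * y
            den += w
    return num / den
```

An optimal regime is then sought by maximizing this value over a class of rules; the robustness comparison turns on how the propensity and regression pieces are modeled.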
NASA Astrophysics Data System (ADS)
Das, Debottam; Ghosh, Kirtiman; Mitra, Manimala; Mondal, Subhadeep
2018-01-01
We consider an extension of the standard model (SM) augmented by two neutral singlet fermions per generation and a leptoquark. In order to generate the light neutrino masses and mixing, we incorporate the inverse seesaw mechanism. The right-handed neutrino production in this model is significantly larger than in the conventional inverse seesaw scenario. We analyze the different collider signatures of this model and find that the final states associated with three or more leptons, multijet and at least one b -tagged and (or) τ -tagged jet can probe a larger RH neutrino mass scale. We have also proposed a same-sign dilepton signal region associated with multiple jets and missing energy that can be used to distinguish the present scenario from the usual inverse seesaw extended SM.
NASA Astrophysics Data System (ADS)
West, Michael; Gao, Wei; Grand, Stephen
2004-08-01
Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.
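In the least-squares setting, "integrating surface wave measurements directly" amounts to stacking the two sets of sensitivity kernels into a single linear system for one model vector. A schematic numpy sketch (the matrices, data vectors, and relative weight are illustrative placeholders, not the RISTRA parameterization):

```python
import numpy as np

def joint_inversion(G_body, d_body, G_surf, d_surf, w_surf=1.0):
    """Joint least-squares inversion: stack body-wave and surface-wave
    sensitivity kernels into one system G m = d and solve for a single
    model m, with w_surf controlling the relative weight of the
    surface-wave constraints."""
    G = np.vstack([G_body, w_surf * G_surf])
    d = np.concatenate([d_body, w_surf * np.asarray(d_surf)])
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m
```

Because both data sets constrain the same model, the solution fits the combined data rather than compromising between two separate inversions, which is the behavior the variance reductions in the abstract quantify.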
Analysis of capture-recapture models with individual covariates using data augmentation
Royle, J. Andrew
2009-01-01
I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.
Dushaw, Brian D; Sagen, Hanne
2017-12-01
Ocean acoustic tomography depends on a suitable reference ocean environment with which to set the basic parameters of the inverse problem. Some inverse problems may require a reference ocean that includes the small-scale variations from internal waves, small mesoscale, or spice. Tomographic inversions that employ data of stable shadow zone arrivals, such as those that have been observed in the North Pacific and Canary Basin, are an example. Estimating temperature from the unique acoustic data that have been obtained in Fram Strait is another example. The addition of small-scale variability to augment a smooth reference ocean is essential to understanding the acoustic forward problem in these cases. Rather than a hindrance, the stochastic influences of the small scale can be exploited to obtain accurate inverse estimates. Inverse solutions are readily obtained, and they give computed arrival patterns that matched the observations. The approach is not ad hoc, but universal, and it has allowed inverse estimates for ocean temperature variations in Fram Strait to be readily computed on several acoustic paths for which tomographic data were obtained.
ERIC Educational Resources Information Center
Conley, Quincy
2013-01-01
Statistics is taught at every level of education, yet teachers often have to assume their students have no knowledge of statistics and start from scratch each time they set out to teach statistics. The motivation for this experimental study comes from interest in exploring educational applications of augmented reality (AR) delivered via mobile…
From Inverse Problems in Mathematical Physiology to Quantitative Differential Diagnoses
Zenker, Sven; Rubin, Jonathan; Clermont, Gilles
2007-01-01
The improved capacity to acquire quantitative data in a clinical setting has generally failed to improve outcomes in acutely ill patients, suggesting a need for advances in computer-supported data interpretation and decision making. In particular, the application of mathematical models of experimentally elucidated physiological mechanisms could augment the interpretation of quantitative, patient-specific information and help to better target therapy. Yet, such models are typically complex and nonlinear, a reality that often precludes the identification of unique parameters and states of the model that best represent available data. Hypothesizing that this non-uniqueness can convey useful information, we implemented a simplified simulation of a common differential diagnostic process (hypotension in an acute care setting), using a combination of a mathematical model of the cardiovascular system, a stochastic measurement model, and Bayesian inference techniques to quantify parameter and state uncertainty. The output of this procedure is a probability density function on the space of model parameters and initial conditions for a particular patient, based on prior population information together with patient-specific clinical observations. We show that multimodal posterior probability density functions arise naturally, even when unimodal and uninformative priors are used. The peaks of these densities correspond to clinically relevant differential diagnoses and can, in the simplified simulation setting, be constrained to a single diagnosis by assimilating additional observations from dynamical interventions (e.g., fluid challenge). We conclude that the ill-posedness of the inverse problem in quantitative physiology is not merely a technical obstacle, but rather reflects clinical reality and, when addressed adequately in the solution process, provides a novel link between mathematically described physiological knowledge and the clinical concept of differential diagnoses. 
We outline possible steps toward translating this computational approach to the bedside, to supplement today's evidence-based medicine with a quantitatively founded model-based medicine that integrates mechanistic knowledge with patient-specific information. PMID:17997590
Time-domain wavefield reconstruction inversion
NASA Astrophysics Data System (ADS)
Li, Zhen-Chun; Lin, Yu-Zhao; Zhang, Kai; Li, Yuan-Yuan; Yu, Zhen-Nan
2017-12-01
Wavefield reconstruction inversion (WRI) is an improved full waveform inversion method proposed in recent years. WRI expands the search space by introducing the wave equation into the objective function and reconstructing the wavefield to update model parameters, thereby improving computational efficiency and mitigating the influence of local minima. However, frequency-domain WRI is difficult to apply to real seismic data because of its high memory demand and its requirement for a time-frequency transformation, which incurs additional computational cost. In this paper, wavefield reconstruction inversion theory is extended into the time domain, the augmented wave equation of WRI is derived in the time domain, and the model gradient is modified according to numerical tests with anomalies. Synthetic-data examples illustrate the accuracy of time-domain WRI and its low dependency on low-frequency information.
NASA Astrophysics Data System (ADS)
Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.
2012-12-01
We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the fully Bayesian inversion (FBI) and mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo inversion (MCI) technique and with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45-degree dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups.
Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.
NASA Technical Reports Server (NTRS)
Campbell, Stefan F.; Kaneshige, John T.
2010-01-01
Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper first thoroughly presents the baseline controller, followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).
Group-theoretic models of the inversion process in bacterial genomes.
Egri-Nagy, Attila; Gebhardt, Volker; Tanaka, Mark M; Francis, Andrew R
2014-07-01
The variation in genome arrangements among bacterial taxa is largely due to the process of inversion. Recent studies indicate that not all inversions are equally probable, suggesting, for instance, that shorter inversions are more frequent than longer ones, and that those that move the terminus of replication are less probable than those that do not. Current methods for establishing the inversion distance between two bacterial genomes are unable to incorporate such information. In this paper we suggest a group-theoretic framework that in principle can take these constraints into account. In particular, we show that by lifting the problem from circular permutations to the affine symmetric group, the inversion distance can be found in polynomial time for a model in which inversions are restricted to acting on two regions. This requires the proof of new results in group theory, and suggests a vein of new combinatorial problems concerning permutation groups on which group theorists will be needed to collaborate with biologists. We apply the new method to inferring distances and phylogenies for published Yersinia pestis data.
Doidge, James C
2018-02-01
Population-based cohort studies are invaluable to health research because of the breadth of data collection over time, and the representativeness of their samples. However, they are especially prone to missing data, which can compromise the validity of analyses when data are not missing at random. Having many waves of data collection presents opportunity for participants' responsiveness to be observed over time, which may be informative about missing data mechanisms and thus useful as an auxiliary variable. Modern approaches to handling missing data such as multiple imputation and maximum likelihood can be difficult to implement with the large numbers of auxiliary variables and large amounts of non-monotone missing data that occur in cohort studies. Inverse probability-weighting can be easier to implement but conventional wisdom has stated that it cannot be applied to non-monotone missing data. This paper describes two methods of applying inverse probability-weighting to non-monotone missing data, and explores the potential value of including measures of responsiveness in either inverse probability-weighting or multiple imputation. Simulation studies are used to compare methods and demonstrate that responsiveness in longitudinal studies can be used to mitigate bias induced by missing data, even when data are not missing at random.
Comparison of dynamic treatment regimes via inverse probability weighting.
Hernán, Miguel A; Lanoy, Emilie; Costagliola, Dominique; Robins, James M
2006-03-01
Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using inverse probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate inverse probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an inverse probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
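The four-step strategy above can be sketched in a toy simulation. The binary covariate, the censoring probabilities, and the stratified weight estimate below are invented for illustration; the paper itself fits weight models with time-varying predictors and then a weighted Cox model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Binary baseline covariate (e.g., a joint predictor of survival and
# regime discontinuation).
x = rng.integers(0, 2, size=n)

# Step 2: artificial censoring -- subjects are censored when they stop
# following the regime of interest; here censoring depends on x.
p_cens = np.where(x == 1, 0.5, 0.2)
censored = rng.random(n) < p_cens

# Step 3: estimate P(uncensored | x) within strata of x and form
# inverse probability-of-censoring weights for the uncensored subjects.
w = np.zeros(n)
for level in (0, 1):
    mask = x == level
    p_uncens = 1.0 - censored[mask].mean()
    w[mask & ~censored] = 1.0 / p_uncens

# The weighted uncensored sample recovers the covariate distribution of
# the full sample, removing the selection bias induced by censoring.
weighted_mean_x = np.average(x[~censored], weights=w[~censored])
print(weighted_mean_x)
```

Step 4 would then fit a weighted Cox model to the uncensored subjects; the key point shown here is that the weights re-balance the artificially censored sample.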
Assessing the causal effect of policies: an example using stochastic interventions.
Díaz, Iván; van der Laan, Mark J
2013-11-19
Assessing the causal effect of an exposure often involves the definition of counterfactual outcomes in a hypothetical world in which the stochastic nature of the exposure is modified. Although stochastic interventions are a powerful tool to measure the causal effect of a realistic intervention that intends to alter the population distribution of an exposure, their relevance for answering questions about plausible policy interventions has been obscured by the generalized use of deterministic interventions. In this article, we follow the approach described in Díaz and van der Laan (2012) to define and estimate the effect of an intervention that is expected to cause a truncation in the population distribution of the exposure. The observed data parameter that identifies the causal parameter of interest is established, as well as its efficient influence function under the non-parametric model. Inverse probability of treatment weighted (IPTW), augmented IPTW and targeted minimum loss-based estimators (TMLE) are proposed, and their consistency and efficiency properties are determined. An extension to longitudinal data structures is presented and its use is demonstrated with a real data example.
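A minimal numerical sketch of the IPTW and augmented-IPTW (AIPW) estimators named above, applied to the simpler problem of estimating a mean from outcomes missing at random. The data-generating model and the use of the true auxiliary models are assumptions for illustration, not the TMLE machinery of the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Toy data: covariate X, outcome Y observed only when R == 1.
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)   # true mean of Y is 2.0

# Missingness depends on X only (missing at random given X).
pi = 1.0 / (1.0 + np.exp(-(0.5 + x)))    # P(R = 1 | X)
r = rng.random(n) < pi

# Outcome regression m(X) = E[Y | X]; here the true form is used.
m = 2.0 + 1.5 * x

# IPW estimator: weight observed outcomes by 1/pi.
ipw = np.mean(r * y / pi)

# AIPW estimator: augment IPW with the outcome regression. It is
# consistent if either pi or m is correctly specified (double robustness).
aipw = np.mean(r * y / pi - (r - pi) / pi * m)

print(ipw, aipw)   # both close to the true mean, 2.0
```

Swapping in a misspecified pi (or a misspecified m) leaves the AIPW estimate consistent, which is the double-robustness property discussed throughout this collection.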
Hager, Rebecca; Tsiatis, Anastasios A; Davidian, Marie
2018-05-18
Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented. © 2018, The International Biometric Society.
[Relationship between physical activity and hemodynamic parameters in adults].
Gómez-Sánchez, L; García-Ortiz, L; Recio-Rodríguez, J I; Patino-Alonso, M C; Agudo-Conde, C; Gómez-Marcos, M A
2015-01-01
To analyze the relationship between physical activity, as assessed by accelerometer, and central and peripheral augmentation index and carotid intima-media thickness (IMT) in adults. This study analyzed 263 subjects who were included in the EVIDENT study. Physical activity was assessed over 7 days using the ActiGraph GT3X accelerometer (counts/min). Carotid ultrasound was used to measure carotid IMT. The SphygmoCor System was used to measure central and peripheral augmentation index (CAIx and PAIx). Mean age was 55.85±12 years; 59.30% were female; mean body mass index was 26.7 and blood pressure 120/77 mmHg. Mean physical activity was 244.37 counts/min, with 2.63±10.26 min/day of vigorous or very vigorous activity. Physical activity showed an inverse correlation with PAIx (r=-0.179; P<.01), and daily vigorous activity time with IMT (r=-0.174; P<.01), CAIx (r=-0.217; P<.01) and PAIx (r=-0.324; P<.01). After adjusting for confounding factors in the multiple regression analysis, the inverse association of CAIx with counts/min and with the time spent in vigorous/very vigorous activity was maintained. The results suggest that both overall physical activity and time spent in vigorous or very vigorous activity are associated with the central augmentation index in adults. Copyright © 2015 SEH-LELHA. Published by Elsevier España. All rights reserved.
An Interactive Graphical Modeling Game for Teaching Musical Concepts.
ERIC Educational Resources Information Center
Lamb, Martin
1982-01-01
Describes an interactive computer game in which players compose music at a computer screen. They experiment with pitch and melodic shape and the effects of transposition, augmentation, diminution, retrograde, and inversion. The user interface is simple enough for children to use and powerful enough for composers to work with. (EAO)
Dynamic Inversion based Control of a Docking Mechanism
NASA Technical Reports Server (NTRS)
Kulkarni, Nilesh V.; Ippolito, Corey; Krishnakumar, Kalmanje
2006-01-01
The problem of position and attitude control of the Stewart platform based docking mechanism is considered, motivated by its future application in space missions requiring autonomous docking capability. The control design is initiated based on the framework of the intelligent flight control architecture being developed at NASA Ames Research Center. In this paper, the baseline position and attitude control system is designed using dynamic inversion with proportional-integral augmentation. The inverse dynamics uses a Newton-Euler formulation that includes the platform dynamics and the dynamics of the individual legs, along with viscous friction in the joints. Simulation results are presented using forward dynamics simulated by a commercial physics engine that builds the system as individual elements with appropriate joints and uses constrained numerical integration.
Utility of inverse probability weighting in molecular pathological epidemiology.
Liu, Li; Nevo, Daniel; Nishihara, Reiko; Cao, Yin; Song, Mingyang; Twombly, Tyler S; Chan, Andrew T; Giovannucci, Edward L; VanderWeele, Tyler J; Wang, Molin; Ogino, Shuji
2018-04-01
As a causal inference methodology, the inverse probability weighting (IPW) method has been utilized to address confounding and to account for missing data when subjects with missing data cannot be included in a primary analysis. The transdisciplinary field of molecular pathological epidemiology (MPE) integrates molecular pathological and epidemiological methods, and takes advantage of improved understanding of pathogenesis to generate stronger biological evidence of causality and to optimize strategies for precision medicine and prevention. Disease subtyping based on biomarker analysis of biospecimens is essential in MPE research. However, there are nearly always cases that lack subtype information owing to the unavailability or insufficiency of biospecimens. To address this missing subtype data issue, we incorporated inverse probability weights into Cox proportional cause-specific hazards regression. The weight was the inverse of the probability of biomarker data availability, estimated from a model for biomarker data availability status. The strategy is illustrated in two example studies; each assessed alcohol intake or family history of colorectal cancer in relation to the risk of developing colorectal carcinoma subtypes classified by tumor microsatellite instability (MSI) status, using a prospective cohort study, the Nurses' Health Study. Logistic regression was used to estimate the probability of MSI data availability for each cancer case, with covariates of clinical features and family history of colorectal cancer. This application of IPW can reduce selection bias caused by nonrandom variation in biospecimen data availability. The integration of causal inference methods into the MPE approach will likely have substantial potential to advance the field of epidemiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lin; Dai, Zhenxue; Gong, Huili
Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
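A drastically simplified, one-dimensional analogue of transition probability-based indicator simulation; the three-facies matrix below is invented. It illustrates two of the statistical relationships the framework exploits: mean facies lengths relate to the diagonal transition probabilities, and volumetric proportions to the stationary distribution of the chain:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D vertical transition probability matrix for three hydrofacies
# (rows: current facies, columns: next facies at one lag upward).
T = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.25, 0.70]])

# Mean run length of facies k in this discrete chain is 1/(1 - T[k, k]).
mean_lengths = 1.0 / (1.0 - np.diag(T))

# Simulate a facies column by sequential draws from the chain.
n = 100_000
facies = np.empty(n, dtype=int)
facies[0] = 0
for i in range(1, n):
    facies[i] = rng.choice(3, p=T[facies[i - 1]])

# Empirical volumetric proportions approach the stationary distribution
# of T (the left eigenvector with eigenvalue 1).
props = np.bincount(facies, minlength=3) / n
evals, evecs = np.linalg.eig(T.T)
stat = np.real(evecs[:, np.argmax(np.real(evals))])
stat = stat / stat.sum()
print(mean_lengths, props, stat)
```

The actual study works in three dimensions with zone-specific matrices derived from an analytical solution, so this sketch only conveys the role the transition probabilities play.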
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
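The heart of the MRE solution is an expected value under a bound-constrained, exponentially tilted density. The sketch below (in Python rather than MATLAB, with toy bounds and a rate parameter beta standing in for the data-determined multipliers) checks the closed-form expectation of such a truncated exponential against quadrature; the full MRE method, of course, derives the multipliers from the measured data:

```python
import numpy as np

# Each parameter m_i ends up with a truncated-exponential density on its
# prior bounds [l, u]:  f(m) ∝ exp(-beta * m),  l <= m <= u,
# and the reported solution is the expected value of m.
beta, l, u = 1.5, 0.0, 4.0   # toy values, not from any data set

# Closed-form expectation of the truncated exponential.
el, eu = np.exp(-beta * l), np.exp(-beta * u)
m_hat = (l * el - u * eu) / (el - eu) + 1.0 / beta

# Cross-check by quadrature on a fine uniform grid.
m = np.linspace(l, u, 200_001)
f = np.exp(-beta * m)
m_num = (m * f).sum() / f.sum()

print(m_hat, m_num)
```

The agreement of the two values confirms the closed form; in the full method, the same expectation is evaluated for every component of m.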
NASA Astrophysics Data System (ADS)
Kreinovich, Vladik; Longpre, Luc; Koshelev, Misha
1998-09-01
Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting technique turns out to be interestingly related to the qualitative esthetic measure introduced by G. Birkhoff as order/complexity.
Fuller, Robert William; Wong, Tony E; Keller, Klaus
2017-01-01
The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections.
Glossary of Foot and Ankle Terms
... or she will probably outgrow the condition naturally. Inversion - Twisting in toward the midline of the body. ... with the leg; the subtalar joint, which allows inversion and eversion of the foot with the leg; ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuenca, Jacques, E-mail: jcuenca@kth.se; Van der Kelen, Christophe; Göransson, Peter
2014-02-28
This paper proposes an inverse estimation method for the characterisation of the elastic and anelastic properties of the frame of anisotropic open-cell foams used for sound absorption. A model of viscoelasticity based on a fractional differential constitutive equation is used, leading to an augmented Hooke's law in the frequency domain, where the elastic and anelastic phenomena appear as distinctive terms in the stiffness matrix. The parameters of the model are nine orthotropic elastic moduli, three angles of orientation of the material principal directions and three parameters governing the anelastic frequency dependence. The inverse estimation consists in numerically fitting the model on a set of transfer functions extracted from a sample of material. The setup uses a seismic-mass measurement repeated in the three directions of space and is placed in a vacuum chamber in order to remove the air from the pores of the sample. The method allows to reconstruct the full frequency-dependent complex stiffness matrix of the frame of an anisotropic open-cell foam and in particular it provides the frequency of maximum energy dissipation by viscoelastic effects. The characterisation of a melamine foam sample is performed and the relation between the fractional-derivative model and other types of parameterisations of the augmented Hooke's law is discussed.
NASA Astrophysics Data System (ADS)
Cuenca, Jacques; Van der Kelen, Christophe; Göransson, Peter
2014-02-01
This paper proposes an inverse estimation method for the characterisation of the elastic and anelastic properties of the frame of anisotropic open-cell foams used for sound absorption. A model of viscoelasticity based on a fractional differential constitutive equation is used, leading to an augmented Hooke's law in the frequency domain, where the elastic and anelastic phenomena appear as distinctive terms in the stiffness matrix. The parameters of the model are nine orthotropic elastic moduli, three angles of orientation of the material principal directions and three parameters governing the anelastic frequency dependence. The inverse estimation consists in numerically fitting the model on a set of transfer functions extracted from a sample of material. The setup uses a seismic-mass measurement repeated in the three directions of space and is placed in a vacuum chamber in order to remove the air from the pores of the sample. The method allows to reconstruct the full frequency-dependent complex stiffness matrix of the frame of an anisotropic open-cell foam and in particular it provides the frequency of maximum energy dissipation by viscoelastic effects. The characterisation of a melamine foam sample is performed and the relation between the fractional-derivative model and other types of parameterisations of the augmented Hooke's law is discussed.
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Goebel, Kai
2013-01-01
This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty, (2) process noise, and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction, and (2) calculate the entire probability distribution of the remaining useful life prediction; the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.
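The "inverse" step can be illustrated with a deliberately simple degradation model: rather than evaluating the failure probability at a fixed time, one fixes a target probability and solves for the time, yielding a probability bound on the RUL. The linear model and all parameter values below are illustrative assumptions, and the closed form ignores the process noise and state-estimate uncertainty treated in the paper:

```python
from statistics import NormalDist

# Toy degradation model: damage grows linearly, D(t) = a * t, with the
# rate a ~ Normal(mu, sigma) capturing parameter uncertainty; failure
# occurs when D(t) reaches the threshold c. (mu, sigma, c are invented.)
mu, sigma, c = 2.0, 0.3, 100.0

def prob_fail_by(t):
    """P(a * t >= c) = P(a >= c / t)."""
    return 1.0 - NormalDist(mu, sigma).cdf(c / t)

# Inverse step: fix a target failure probability alpha and solve for the
# time t_alpha with P(failure by t_alpha) = alpha. Because the model is
# monotone in a, this reduces to one standard-normal quantile.
alpha = 0.05
z = NormalDist().inv_cdf(1.0 - alpha)
t_alpha = c / (mu + sigma * z)

print(t_alpha, prob_fail_by(t_alpha))
```

In the general inverse-FORM setting the same idea applies, but the quantile search runs over a transformed limit-state surface rather than a closed-form expression.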
Acute changes in arterial stiffness following exercise in people with metabolic syndrome.
Radhakrishnan, Jeyasundar; Swaminathan, Narasimman; Pereira, Natasha M; Henderson, Keiran; Brodie, David A
This study aims to examine the changes in arterial stiffness immediately following sub-maximal exercise in people with metabolic syndrome. Ninety-four adult participants (19-80 years) with metabolic syndrome gave written consent and were measured for arterial stiffness using a SphygmoCor (SCOR-PVx, Version 8.0, Atcor Medical Private Ltd, USA) immediately before and within 5-10 min after an incremental shuttle walk test. The arterial stiffness measures used were pulse wave velocity (PWV), aortic pulse pressure (PP), augmentation pressure, augmentation index (AI), subendocardial viability ratio (SEVR) and ejection duration (ED). There was a significant increase (p<0.05) in most of the arterial stiffness variables following exercise. Exercise capacity had a strong inverse correlation with arterial stiffness and age (p<0.01). Age influences arterial stiffness. Exercise capacity is inversely related to arterial stiffness and age in people with metabolic syndrome. Exercise-induced changes in arterial stiffness, measured using pulse wave analysis, provide an important tool and further evidence for studying cardiovascular risk in metabolic syndrome. Copyright © 2016 Diabetes India. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Varney, R. H.; Vlasov, M. N.; Nossa, E.; Watkins, B.; Pedersen, T.; Huba, J. D.
2012-02-01
The electron energy distribution during an F region ionospheric modification experiment at the HAARP facility near Gakona, Alaska, is inferred from spectrographic airglow emission data. Emission lines at 630.0, 557.7, and 844.6 nm are considered along with the absence of detectable emissions at 427.8 nm. Estimating the electron energy distribution function from the airglow data is a problem in classical linear inverse theory. We describe an augmented version of the method of Backus and Gilbert which we use to invert the data. The method optimizes the model resolution, the precision of the mapping between the actual electron energy distribution and its estimate. Here, the method has also been augmented so as to limit the model prediction error. Model estimates of the suprathermal electron energy distribution versus energy and altitude are incorporated in the inverse problem formulation as representer functions. Our methodology indicates a heater-induced electron energy distribution with a broad peak near 5 eV that decreases approximately exponentially by 30 dB between 5 and 50 eV.
Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro
2011-01-01
In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029
Inverse statistics and information content
NASA Astrophysics Data System (ADS)
Ebadi, H.; Bolgorian, Meysam; Jafari, G. R.
2010-12-01
Inverse statistics analysis studies the distribution of investment horizons required to achieve a predefined level of return. This distribution provides an optimal investment horizon, the most likely horizon for gaining a specific return. There exists a significant difference between the inverse statistics of financial market data and those of a fractional Brownian motion (fBm) as an uncorrelated time series, which makes this a suitable criterion for measuring information content in financial data. In this paper we perform this analysis for the DJIA and S&P500 as two developed markets and the Tehran price index (TEPIX) as an emerging market. We also compare these probability distributions with the fBm probability, to detect when the behavior of the stocks is the same as that of an fBm.
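A sketch of the inverse-statistics computation for the uncorrelated benchmark case; the path count, step size, and return level rho are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate log-returns of an uncorrelated random walk (the fBm-like
# benchmark with H = 0.5).
n_paths, n_steps, rho = 5000, 2000, 0.02
returns = rng.normal(0.0, 0.005, size=(n_paths, n_steps))
log_price = np.cumsum(returns, axis=1)

# Inverse statistic: for each path, the first time the cumulative
# log-return gains at least rho -- the investment horizon.
horizons = []
for path in log_price:
    hit = np.nonzero(path >= rho)[0]
    if hit.size:
        horizons.append(hit[0] + 1)
horizons = np.asarray(horizons)

# The mode of the horizon distribution is the "optimal" (most likely)
# investment horizon discussed in the text.
counts = np.bincount(horizons)
optimal_horizon = counts.argmax()
print(optimal_horizon, horizons.mean())
```

Applying the same first-passage computation to market data instead of simulated returns, and comparing the two horizon distributions, is the information-content test described in the abstract.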
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the a posteriori PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth and the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
NASA Technical Reports Server (NTRS)
Backus, George
1987-01-01
Let R be the real numbers, R^n the linear space of all real n-tuples, and R^∞ the linear space of all infinite real sequences x = (x_1, x_2, ...). Let P_n : R^∞ → R^n be the projection operator with P_n(x) = (x_1, ..., x_n). Let p_∞ be a probability measure on the smallest sigma-ring of subsets of R^∞ which includes all of the cylinder sets P_n^(-1)(B_n), where B_n is an arbitrary Borel subset of R^n. Let p_n be the marginal distribution of p_∞ on R^n, so p_n(B_n) = p_∞(P_n^(-1)(B_n)) for each B_n. A measure on R^n is isotropic if it is invariant under all orthogonal transformations of R^n. All members of the set of all isotropic probability distributions on R^n are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.
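An isotropic distribution on R^n can always be realized as a radial law combined with a direction drawn uniformly from the unit sphere, which is easy to check numerically; the exponential radial law below is an arbitrary illustrative choice:

```python
import numpy as np

def sample_isotropic(n_dim, n_samples, radial_sampler, rng):
    """Draw from an isotropic distribution on R^n: a radius r from an
    arbitrary radial law, times a direction uniform on the unit sphere
    (obtained by normalizing a standard Gaussian vector)."""
    g = rng.normal(size=(n_samples, n_dim))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    r = radial_sampler(n_samples, rng)
    return directions * r[:, None]

rng = np.random.default_rng(1)
x = sample_isotropic(3, 100000, lambda m, rng: rng.exponential(1.0, m), rng)
# Rotation invariance implies zero mean and a covariance proportional to
# the identity: E[x x^T] = (E[r^2] / n) I, here 2/3 on the diagonal.
```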
Improving Conceptual Models Using AEM Data and Probability Distributions
NASA Astrophysics Data System (ADS)
Davis, A. C.; Munday, T. J.; Christensen, N. B.
2012-12-01
With emphasis being placed on uncertainty in groundwater modelling and prediction, coupled with questions concerning the value of geophysical methods in hydrogeology, it is important to ask meaningful questions of hydrogeophysical data and inversion results. For example, to characterise aquifers using electromagnetic (EM) data, we ask questions such as "Given that the electrical conductivity of aquifer 'A' is less than x, where is that aquifer elsewhere in the survey area?" The answer may be given by examining inversion models, selecting locations and layers that satisfy the condition 'conductivity <= x', and labelling them as aquifer 'A'. One difficulty with this approach is that the inversion result is often considered to be the only model for the data. In reality it is just one image of the subsurface that, given the method and the regularisation imposed in the inversion, agrees with the measured data within a given error bound. We have no idea whether the final model realised by the inversion attains the global minimum error, or whether it is simply in a local minimum. There is a distribution of inversion models that satisfy the error tolerance condition: the final model is not the only one, nor is it necessarily the correct one. AEM inversions are often linearised in the calculation of the parameter sensitivity: we rely on the second derivatives in the Taylor expansion, so the minimum model has all layer parameters distributed about their mean parameter values with well-defined variances. We investigate the validity of the minimum model, and its uncertainty, by examining the full posterior covariance matrix. We ask questions of the minimum model and answer them probabilistically. The simplest question we can pose is "What is the probability that all layer resistivity values are <= a cut-off value?" We can calculate this through use of the erf or erfc functions.
The covariance values of the inversion become marginalised in the integration: only the main diagonal is used. Complications arise when we ask more specific questions, such as "What is the probability that the resistivity of layer 2 is <= x, given that layer 1 is <= y?" The probability then becomes conditional, the calculation includes covariance terms, the integration is taken over many dimensions, and the cross-correlation of parameters becomes important. To illustrate, we examine the inversion results of a Tempest AEM survey over the Uley Basin aquifers on the Eyre Peninsula, South Australia. Key aquifers include the unconfined Bridgewater Formation, which overlies the Uley and Wanilla Formations containing Tertiary clays and Tertiary sandstone. These Formations overlie weathered basement, which defines the lower bound of the Uley Basin aquifer systems. By correlating the conductivity of the sub-surface Formation types, we pose questions such as: "What is the probable depth of the Bridgewater Formation in the Uley South Basin?", "What is the thickness of the Uley Formation?" and "What is the most probable depth to basement?" We use these questions to generate improved conceptual hydrogeological models of the Uley Basin in order to develop better estimates of aquifer extent and the available groundwater resource.
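These marginal, joint and conditional queries can be sketched under a Gaussian posterior assumption; the two-layer means, covariance and cut-off values below are invented purely for illustration:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Hypothetical posterior means (log-resistivity) and covariance for two layers.
mu = np.array([1.2, 2.0])
C = np.array([[0.09, 0.04],
              [0.04, 0.25]])

# Marginal question: P(layer-1 value <= x) uses only the diagonal entry
# C[0, 0]; this is the erf-based calculation described above.
p1 = norm.cdf(1.5, loc=mu[0], scale=np.sqrt(C[0, 0]))

# Joint question: P(layer 1 <= x and layer 2 <= y) requires the
# off-diagonal covariance term and a 2-D integration.
p_joint = multivariate_normal(mu, C).cdf(np.array([1.5, 2.2]))

# Conditional question: P(layer 2 <= y | layer 1 <= x) = joint / marginal.
p_cond = p_joint / p1
```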
NASA Astrophysics Data System (ADS)
Shankar, Praveen
The performance of nonlinear control algorithms such as feedback linearization and dynamic inversion is heavily dependent on the fidelity of the dynamic model being inverted. Incomplete or incorrect knowledge of the dynamics results in reduced performance and may lead to instability. Augmenting the baseline controller with approximators which utilize a parametrization structure that is adapted online reduces the effect of this error between the design model and the actual dynamics. However, currently existing parametrizations employ a fixed set of basis functions that do not guarantee arbitrary tracking error performance. To address this problem, we develop a self-organizing parametrization structure that is proven to be stable and can guarantee arbitrary tracking error performance. The training algorithm to grow the network and adapt the parameters is derived from Lyapunov theory. In addition to growing the network of basis functions, a pruning strategy is incorporated to keep the size of the network as small as possible. This algorithm is implemented on a high-performance flight vehicle, the F-15 military aircraft. The baseline dynamic inversion controller is augmented with a Self-Organizing Radial Basis Function Network (SORBFN) to minimize the effect of the inversion error, which may occur due to imperfect modeling, approximate inversion or sudden changes in aircraft dynamics. The dynamic inversion controller is simulated for different situations, including control surface failures, modeling errors and external disturbances, with and without the adaptive network. A performance measure of maximum tracking error is specified for both controllers a priori. Excellent tracking error minimization to a pre-specified level was achieved using the adaptive approximation based controller, while the baseline dynamic inversion controller failed to meet this performance specification.
The performance of the SORBFN based controller is also compared to a fixed RBF network based adaptive controller. While the fixed RBF network based controller which is tuned to compensate for control surface failures fails to achieve the same performance under modeling uncertainty and disturbances, the SORBFN is able to achieve good tracking convergence under all error conditions.
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
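The inference-over-inversion point can be illustrated with the simplest conjugate case: a linear source-receptor model with Gaussian noise and a Gaussian prior, for which the posterior over emission rates is available in closed form. The source-receptor matrix and all numbers below (four sources, eight sensors, as in the field geometry described above) are hypothetical:

```python
import numpy as np

# Hypothetical linear dispersion model: concentrations c = G q + noise,
# where G[i, j] is the contribution at sensor i of source j per unit emission.
rng = np.random.default_rng(2)
n_sensors, n_sources = 8, 4
G = rng.uniform(0.1, 1.0, (n_sensors, n_sources))
q_true = np.array([2.0, 0.5, 1.0, 3.0])     # true emission rates
sigma = 0.05                                 # measurement noise std
c = G @ q_true + rng.normal(0.0, sigma, n_sensors)

# With a Gaussian prior q ~ N(0, tau^2 I) and Gaussian noise, the
# posterior over emission rates is Gaussian:
tau = 10.0                                   # weak prior
A = G.T @ G / sigma**2 + np.eye(n_sources) / tau**2
post_cov = np.linalg.inv(A)                  # posterior covariance
post_mean = post_cov @ (G.T @ c / sigma**2)  # posterior mean
# post_mean estimates the rates; post_cov quantifies their uncertainty,
# which a regularized least-squares point inversion does not provide.
```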
Wong, Tony E.; Keller, Klaus
2017-01-01
The response of the Antarctic ice sheet (AIS) to changing global temperatures is a key component of sea-level projections. Current projections of the AIS contribution to sea-level changes are deeply uncertain. This deep uncertainty stems, in part, from (i) the inability of current models to fully resolve key processes and scales, (ii) the relatively sparse available data, and (iii) divergent expert assessments. One promising approach to characterizing the deep uncertainty stemming from divergent expert assessments is to combine expert assessments, observations, and simple models by coupling probabilistic inversion and Bayesian inversion. Here, we present a proof-of-concept study that uses probabilistic inversion to fuse a simple AIS model and diverse expert assessments. We demonstrate the ability of probabilistic inversion to infer joint prior probability distributions of model parameters that are consistent with expert assessments. We then confront these inferred expert priors with instrumental and paleoclimatic observational data in a Bayesian inversion. These additional constraints yield tighter hindcasts and projections. We use this approach to quantify how the deep uncertainty surrounding expert assessments affects the joint probability distributions of model parameters and future projections. PMID:29287095
Research In Nonlinear Flight Control for Tiltrotor Aircraft Operating in the Terminal Area
NASA Technical Reports Server (NTRS)
Calise, A. J.; Rysdyk, R.
1996-01-01
The research during the first year of the effort focused on the implementation of the recently developed combination of neural network adaptive control and feedback linearization. At the core of this research is the comprehensive simulation code Generic Tiltrotor Simulator (GTRS) of the XV-15 tilt rotor aircraft. For this research the GTRS code has been ported to a Fortran environment for use on a PC. The emphasis of the research is on terminal area approach procedures, including conversion from aircraft to helicopter configuration. This report focuses on longitudinal control, which is the more challenging case for augmentation. Therefore, an attitude command attitude hold (ACAH) control augmentation is considered, which is typically used for the pitch channel during approach procedures. To evaluate the performance of the neural network adaptive control architecture it was necessary to develop a set of low order pilot models capable of performing such tasks as following desired altitude profiles, following desired speed profiles, operating on both sides of the power curve, and converting, including flap as well as mast angle changes, under different stability and control augmentation system (SCAS) modes. The pilot models are divided into two sets, one for the backside of the power curve and one for the frontside. These two sets are linearly blended with speed. The mast angle is also scheduled with speed. Different aspects of the proposed architecture for the neural network (NNW) augmented model inversion were also demonstrated. The demonstration involved implementation of a NNW architecture using linearized models from GTRS, including rotor states, to represent the XV-15 at various operating points. The dynamics used for the model inversion were based on the XV-15 operating at 30 kts, with residualized rotor dynamics, and did not include cross coupling between translational and rotational states. The neural network demonstrated ACAH control under various circumstances.
Future efforts will include the implementation into the Fortran environment of GTRS, including pilot modeling and NNW augmentation for the lateral channels. These efforts should lead to the development of architectures that will provide for fully automated approach, using similar strategies.
NASA Astrophysics Data System (ADS)
Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji
2015-12-01
Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified to several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. Following Pfeifer and Carraway (2000), the final formula for CLV obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the metaheuristic optimization algorithm developed by Yang (2013), the Flower Pollination Algorithm. The main use of the obtained transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
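In the Pfeifer and Carraway (2000) formulation, the infinite-horizon CLV for transition matrix P, per-period reward vector R and discount rate d is V = (I - P/(1+d))^(-1) R, which is nonlinear in the transition probabilities. A minimal sketch with a hypothetical two-state retention model:

```python
import numpy as np

def clv(P, R, d):
    """Infinite-horizon CLV in the Pfeifer & Carraway (2000) MCM form:
    V = (I - P/(1+d))^{-1} R, solved as a linear system."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - P / (1.0 + d), R)

# Hypothetical 2-state model: state 0 = active customer,
# state 1 = former customer (absorbing, no reward).
p_retain = 0.8
P = np.array([[p_retain, 1.0 - p_retain],
              [0.0,      1.0]])
R = np.array([100.0, 0.0])   # net contribution per period, by state
V = clv(P, R, d=0.1)
# V[0] is the expected discounted lifetime value of an active customer.
```

The inverse problem the paper addresses amounts to searching over the entries of P (here, p_retain) until clv(P, R, d) matches a target value; a metaheuristic such as the Flower Pollination Algorithm performs that search when P is high-dimensional.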
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and it can thus be regarded as an accurate engineering approximation for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but the spectrum does not distort in shape. Although we demonstrate the method here focusing on stationary random processes, we see no reason why the method could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
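The two-stage construction (spectral shaping, then a single inverse-transform step) can be sketched as follows; the Lorentzian target spectrum and exponential target marginal are hypothetical choices for illustration, not the authors' exact examples:

```python
import numpy as np
from scipy import stats

def colored_samples(n, target_psd, target_dist, rng):
    """Colored Gaussian noise by FFT spectral shaping, followed by a
    single inverse-transform step to the target marginal distribution."""
    white = rng.normal(size=n)
    freqs = np.fft.rfftfreq(n)
    spectrum = np.fft.rfft(white) * np.sqrt(target_psd(freqs))
    colored = np.fft.irfft(spectrum, n)
    colored /= colored.std()            # unit-variance colored Gaussian
    u = stats.norm.cdf(colored)         # Gaussian -> uniform marginal
    return target_dist.ppf(u)           # uniform -> target marginal

rng = np.random.default_rng(3)
psd = lambda f: 1.0 / (1.0 + (f / 0.05) ** 2)   # hypothetical Lorentzian PSD
x = colored_samples(1 << 14, psd, stats.expon(), rng)
# x has (approximately) exponential marginal statistics and a power
# spectrum close to, though not exactly equal to, the target spectrum.
```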
Analysis of multinomial models with unknown index using data augmentation
Royle, J. Andrew; Dorazio, R.M.; Link, W.A.
2007-01-01
Multinomial models with unknown index ('sample size') arise in many practical settings. In practice, Bayesian analysis of such models has proved difficult because the dimension of the parameter space is not fixed, being in some cases a function of the unknown index. We describe a data augmentation approach to the analysis of this class of models that provides for a generic and efficient Bayesian implementation. Under this approach, the data are augmented with all-zero detection histories. The resulting augmented dataset is modeled as a zero-inflated version of the complete-data model where an estimable zero-inflation parameter takes the place of the unknown multinomial index. Interestingly, data augmentation can be justified as being equivalent to imposing a discrete uniform prior on the multinomial index. We provide three examples involving estimating the size of an animal population, estimating the number of diabetes cases in a population using the Rasch model, and the motivating example of estimating the number of species in an animal community with latent probabilities of species occurrence and detection.
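The augmentation idea can be sketched in a likelihood-based (rather than fully Bayesian) form for a simple capture-recapture model with unknown population size; all data below are simulated and the model choices are illustrative:

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize

# Hypothetical model-M0 data: N animals, T occasions, detection prob p;
# only animals detected at least once are recorded.
rng = np.random.default_rng(4)
N_true, T, p_true = 500, 5, 0.3
hist = rng.binomial(1, p_true, (N_true, T))
obs_counts = hist.sum(axis=1)
obs_counts = obs_counts[obs_counts > 0]

# Augment with all-zero detection histories up to size M >> N; each row
# of the augmented data is "real" with probability psi (zero inflation).
M = 2000
counts = np.concatenate([obs_counts, np.zeros(M - obs_counts.size)])

def neg_log_lik(theta):
    psi, p = 1.0 / (1.0 + np.exp(-np.asarray(theta)))   # logit scale
    lik = psi * binom.pmf(counts, T, p)
    lik[counts == 0] += 1.0 - psi                       # zero-inflated part
    return -np.sum(np.log(lik))

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-res.x))
N_hat = M * psi_hat   # estimate of the unknown multinomial index
```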
Elastic robot control - Nonlinear inversion and linear stabilization
NASA Technical Reports Server (NTRS)
Singh, S. N.; Schy, A. A.
1986-01-01
An approach to the control of elastic robot systems for space applications using inversion, servocompensation, and feedback stabilization is presented. For simplicity, a robot arm (PUMA type) with three rotational joints is considered. The third link is assumed to be elastic. Using an inversion algorithm, a nonlinear decoupling control law u(d) is derived such that in the closed-loop system independent control of joint angles by the three joint torquers is accomplished. For the stabilization of elastic oscillations, a linear feedback torquer control law u(s) is obtained applying linear quadratic optimization to the linearized arm model augmented with a servocompensator about the terminal state. Simulation results show that in spite of uncertainties in the payload and vehicle angular velocity, good joint angle control and damping of elastic oscillations are obtained with the torquer control law u = u(d) + u(s).
Inference of relativistic electron spectra from measurements of inverse Compton radiation
NASA Astrophysics Data System (ADS)
Craig, I. J. D.; Brown, J. C.
1980-07-01
The inference of relativistic electron spectra from spectral measurement of inverse Compton radiation is discussed for the case where the background photon spectrum is a Planck function. The problem is formulated in terms of an integral transform that relates the measured spectrum to the unknown electron distribution. A general inversion formula is used to provide a quantitative assessment of the information content of the spectral data. It is shown that the observations must generally be augmented by additional information if anything other than a rudimentary two or three parameter model of the source function is to be derived. It is also pointed out that since a similar equation governs the continuum spectra emitted by a distribution of black-body radiators, the analysis is relevant to the problem of stellar population synthesis from galactic spectra.
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of the traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model is employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotical waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special case includes the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic function model. The occurrence of very long waiting time in the case of the inverse Mittag-Leffler function has the largest probability compared with the power law model and the logarithmic function model. The Monte Carlo simulations of one dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting the general ultraslow random motion.
Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J
2015-04-01
The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter and the effect on model predictions is analyzed.
Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization
NASA Astrophysics Data System (ADS)
Yamagishi, Masao; Yamada, Isao
2017-04-01
Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.
Duncan, L. W.; Graham, J. H.; Zellers, J.; Bright, D.; Dunn, D. C.; El-Borai, F. E.; Porazinska, D. L.
2007-01-01
Factorial treatments of entomopathogenic nematodes (EPN) and composted manure mulches were evaluated for two years in a central Florida citrus orchard to study the post-application biology of EPN used to manage the root weevil, Diaprepes abbreviatus. Mulch treatments were applied once each year to study the effects of altering the community of EPN competitors (free-living bactivorous nematodes) and antagonists (nematophagous fungi (NF), predaceous nematodes and some microarthropods). EPN were augmented once with Steinernema riobrave in 2004 and twice in 2005. Adding EPN to soil affected the prevalence of organisms at several trophic levels, but the effects were often ephemeral and sometimes inconsistent. EPN augmentation always increased the mortality of sentinel weevil larvae, the prevalence of free-living nematodes in sentinel cadavers and the prevalence of trapping NF. Subsequent to the insecticidal effects of EPN augmentation in 2004, but not 2005, EPN became temporarily less prevalent, and fewer sentinel weevil larvae died in EPN-augmented than in non-augmented plots. Manure mulch had variable effects on endoparasitic NF, but consistently decreased the prevalence of trapping NF and increased the prevalence of EPN and sentinel mortality. Both the temporal and spatial abundance of NF were inversely related to the prevalence of Steinernema diaprepesi, whereas Heterorhabditis zealandica prevalence was positively correlated with NF over time. The number of weevil larvae killed by EPN was likely greatest in 2005, due in part to non-target effects of augmentation on the endemic EPN community in 2004 that occurred during a period of peak weevil recruitment into the soil. PMID:19259487
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Z; Terry, N; Hubbard, S S
2013-02-12
In this study, we evaluate the possibility of monitoring soil moisture variation using tomographic ground penetrating radar travel time data through Bayesian inversion, which is integrated with entropy memory function and pilot point concepts, as well as efficient sampling approaches. It is critical to accurately estimate soil moisture content and variations in vadose zone studies. Many studies have illustrated the promise and value of GPR tomographic data for estimating soil moisture and associated changes; however, challenges still exist in the inversion of GPR tomographic data in a manner that quantifies input and predictive uncertainty, incorporates multiple data types, handles non-uniqueness and nonlinearity, and honors time-lapse tomograms collected in a series. To address these challenges, we develop a minimum relative entropy (MRE)-Bayesian based inverse modeling framework that non-subjectively defines prior probabilities, incorporates information from multiple sources, and quantifies uncertainty. The framework enables us to estimate dielectric permittivity at pilot point locations distributed within the tomogram, as well as the spatial correlation range. In the inversion framework, MRE is first used to derive prior probability density functions (pdfs) of dielectric permittivity based on prior information obtained from a straight-ray GPR inversion. The probability distributions are then sampled using a Quasi-Monte Carlo (QMC) approach, and the sample sets provide inputs to a sequential Gaussian simulation (SGSim) algorithm that constructs a highly resolved permittivity/velocity field for evaluation with a curved-ray GPR forward model. The likelihood functions are computed as a function of misfits, and posterior pdfs are constructed using a Gaussian kernel. Inversion of subsequent time-lapse datasets combines the Bayesian estimates from the previous inversion (as a memory function) with new data. 
The memory function and pilot point design takes advantage of the spatial-temporal correlation of the state variables. We first apply the inversion framework to a static synthetic example and then to a time-lapse GPR tomographic dataset collected during a dynamic experiment conducted at the Hanford Site in Richland, WA. We demonstrate that the MRE-Bayesian inversion enables us to merge various data types, quantify uncertainty, evaluate nonlinear models, and produce more detailed and better resolved estimates than straight-ray based inversion; therefore, it has the potential to improve estimates of inter-wellbore dielectric permittivity and soil moisture content and to monitor their temporal dynamics more accurately.
Compensation of significant parametric uncertainties using sliding mode online learning
NASA Astrophysics Data System (ADS)
Schnetter, Philipp; Kruger, Thomas
An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameter identification for this class of aircraft often is not valid throughout the complete flight envelope, aerodynamic parameters used for model-based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors that, in combination with the distinctive susceptibility of small UAS to atmospheric turbulence, pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks for counteracting such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC learning is derived from variable structure theory. It considers a neural network and its training as a control problem. It is shown that by dynamically calculating the learning rates, stability can be guaranteed, thus increasing robustness against external disturbances and system failures. With the resulting higher speed of convergence a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.
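The inversion-error mechanism that motivates adaptive augmentation can be sketched with a scalar example: dynamic inversion of a pendulum using a deliberately misspecified model parameter. All gains and parameter values below are hypothetical:

```python
import numpy as np

# Plant: theta'' = -a*sin(theta) + b*u. Dynamic inversion computes
# u = (v + a_model*sin(theta)) / b_model; if a_model != a_true, a
# residual inversion error remains, which an adaptive network (e.g. an
# RBF augmentation) would be tasked with cancelling.
a_true, b_true = 9.81, 1.0
a_model, b_model = 8.5, 1.0       # deliberately misspecified model

def closed_loop_step(theta, omega, theta_ref, dt=0.001, kp=25.0, kd=10.0):
    v = -kp * (theta - theta_ref) - kd * omega     # desired linear dynamics
    u = (v + a_model * np.sin(theta)) / b_model    # dynamic inversion
    domega = -a_true * np.sin(theta) + b_true * u  # true plant response
    return theta + dt * omega, omega + dt * domega

theta, omega = 0.5, 0.0
for _ in range(10000):                             # 10 s of simulation
    theta, omega = closed_loop_step(theta, omega, theta_ref=0.0)
# Here the residual error only perturbs the effective stiffness, so the
# loop still converges; larger mismatches degrade tracking, which is
# what the adaptive augmentation is designed to compensate.
```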
Method and Apparatus for Performance Optimization Through Physical Perturbation of Task Elements
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III (Inventor); Pope, Alan T. (Inventor); Palsson, Olafur S. (Inventor); Turner, Marsha J. (Inventor)
2016-01-01
The invention is an apparatus and method of biofeedback training for attaining a physiological state optimally consistent with the successful performance of a task, wherein the probability of successfully completing the task is made inversely proportional to a physiological difference value, computed as the absolute value of the difference between at least one physiological signal optimally consistent with the successful performance of the task and at least one corresponding measured physiological signal of a trainee performing the task. The probability of successfully completing the task is made inversely proportional to the physiological difference value by making one or more measurable physical attributes of the environment in which the task is performed, and upon which completion of the task depends, vary in inverse proportion to the physiological difference value.
[Inverse probability weighting (IPW) for evaluating and "correcting" selection bias].
Narduzzi, Silvia; Golini, Martina Nicole; Porta, Daniela; Stafoggia, Massimo; Forastiere, Francesco
2014-01-01
Inverse probability weighting (IPW) is a methodology developed to account for missingness and selection bias caused by non-random selection of observations, or by non-random lack of some information in a subgroup of the population. We provide an overview of the IPW methodology and an application in a cohort study of the association between exposure to traffic air pollution (nitrogen dioxide, NO₂) and 7-year children IQ. The methodology corrects the analysis by weighting each observation by the inverse of its probability of being selected. IPW is based on the assumption that individual information predicting the probability of inclusion (non-missingness) is available for the entire study population, so that, after accounting for it, inferences about the entire target population can be made from the non-missing observations alone. The procedure for the calculation is the following: first, considering the entire study population, the probability of non-missing information is estimated with a logistic regression model, where the response is non-missingness and the covariates are its possible predictors. The weight of each subject is given by the inverse of the predicted probability. The analysis is then performed only on the non-missing observations, using a weighted model. IPW is a technique that embeds the selection process in the analysis, but its effectiveness in "correcting" selection bias depends on the availability of enough information, for the entire population, to predict the non-missingness probability. In the example proposed, the IPW application showed that the effect of exposure to NO₂ on the verbal intelligence quotient of children is stronger than the effect estimated without regard to the selection processes.
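The three-step procedure described above can be sketched in a few lines. The following is a minimal, self-contained illustration with simulated data and a single binary covariate; all variable names and numeric values are hypothetical, and in this special case the logistic fit for the non-missingness probability reduces to the observed selection fraction within each level of the covariate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Simulated cohort: binary covariate X predicts both the outcome Y
# and the probability of being observed (non-missingness R).
x = rng.binomial(1, 0.5, n)
y = rng.binomial(1, 0.2 + 0.6 * x)      # true population mean E[Y] = 0.5
p_obs = np.where(x == 1, 0.9, 0.3)      # selection depends strongly on X
r = rng.binomial(1, p_obs)              # R = 1 if the subject is observed

# Step 1: model P(R = 1 | X) on the *entire* population.
# With one binary covariate, the logistic regression fit equals the
# empirical selection fraction within each level of X.
p_hat = np.where(x == 1, r[x == 1].mean(), r[x == 0].mean())

# Step 2: weight each subject by the inverse of the predicted probability.
w = 1.0 / p_hat

# Step 3: analyse the observed subjects only, using a weighted model.
naive = y[r == 1].mean()                        # complete-case estimate
ipw = np.average(y[r == 1], weights=w[r == 1])  # IPW estimate

print(f"naive complete-case mean: {naive:.3f}")  # biased toward X = 1 subjects
print(f"IPW-weighted mean:        {ipw:.3f}")    # close to the true 0.5
```

Here the complete-case mean is pulled toward the over-sampled X = 1 group (analytically about 0.65), while the IPW estimate recovers the population mean of 0.5, illustrating how the weights "undo" the selection process.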
NASA Astrophysics Data System (ADS)
Lin, Hsien-I.; Nguyen, Xuan-Anh
2017-05-01
To operate a redundant manipulator to accomplish end-effector trajectory planning while simultaneously controlling its gesture in online programming, incorporating human motion is a useful and flexible option. This paper focuses on a manipulative instrument that can simultaneously control its arm gesture and end-effector trajectory via human teleoperation. The instrument comprises two parts: first, for human motion capture and data processing, marker systems are proposed to capture human gesture; second, manipulator kinematics control is implemented by an augmented multi-tasking method and forward and backward reaching inverse kinematics, respectively. In particular, the local-solution and divergence problems of a multi-tasking method are resolved by the proposed augmented multi-tasking method. Computer simulations and experiments with a 7-DOF (degree of freedom) redundant manipulator were used to validate the proposed method. Comparisons among the single-tasking, original multi-tasking, and augmented multi-tasking algorithms were performed, and the results showed that the proposed augmented method had good end-effector position accuracy and the gesture most similar to the human gesture. Additionally, the experimental results showed that the proposed instrument operated online.
NASA Astrophysics Data System (ADS)
Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.
2014-03-01
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
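The MCMC machinery referred to above can be illustrated with a generic random-walk Metropolis sampler on a hypothetical one-parameter inverse problem. The forward model, noise level, and tuning constants below are illustrative stand-ins, not the 2-D pixel-based plane-wave EM setup of the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: recover a scalar parameter m from noisy data
# d = g(m) + noise, with a monotone nonlinear forward model g.
g = lambda m: m**3 + m        # hypothetical forward model
m_true = 1.5
sigma = 0.1
d_obs = g(m_true) + rng.normal(0.0, sigma, size=20)

def log_posterior(m):
    # Flat prior on (-3, 3); Gaussian likelihood.
    if not (-3.0 < m < 3.0):
        return -np.inf
    return -0.5 * np.sum((d_obs - g(m)) ** 2) / sigma**2

# Random-walk Metropolis sampler.
m = 0.0
lp = log_posterior(m)
samples = []
for _ in range(20_000):
    m_prop = m + rng.normal(0.0, 0.05)
    lp_prop = log_posterior(m_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        m, lp = m_prop, lp_prop
    samples.append(m)

posterior = np.array(samples[5_000:])  # discard burn-in
print(posterior.mean())                # near m_true = 1.5
```

The posterior samples quantify parameter uncertainty directly: their spread reflects the data noise propagated through the nonlinear forward model, which is the quantity the pixel-based MCMC inversion above estimates at much larger scale.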
Recursive recovery of Markov transition probabilities from boundary value data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patch, Sarah Kathyrn
1994-04-01
In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.
Hurdles and sorting by inversions: combinatorial, statistical, and experimental results.
Swenson, Krister M; Lin, Yu; Rajan, Vaibhav; Moret, Bernard M E
2009-10-01
As data about genomic architecture accumulates, genomic rearrangements have attracted increasing attention. One of the main rearrangement mechanisms, inversions (also called reversals), was characterized by Hannenhalli and Pevzner, and this characterization was in turn extended by various authors. The characterization relies on the concepts of breakpoints, cycles, and obstructions colorfully named hurdles and fortresses. In this paper, we study the probability of generating a hurdle in the process of sorting a permutation if one does not take special precautions to avoid them (as in a randomized algorithm, for instance). To do this we revisit and extend the work of Caprara and of Bergeron by providing simple and exact characterizations of the probability of encountering a hurdle in a random permutation. Using similar methods we provide the first asymptotically tight analysis of the probability that a fortress exists in a random permutation. Finally, we study other aspects of hurdles, both analytically and through experiments: when are they created in a sequence of sorting inversions, how much later are they detected, and how much work may need to be undone to return to a sorting sequence.
A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves
NASA Astrophysics Data System (ADS)
Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.
2017-12-01
This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework that is implemented using Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples show that the developed Gauss-Newton inversion method is accurate enough and more efficient than another inversion method. The algorithm's performance is demonstrated by the numerical examples including the case of noisy measurement responses and the case of reduced number of sources and receivers.
Butler, Troy; Graham, L.; Estep, D.; ...
2015-02-03
The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed.
Accommodating Chromosome Inversions in Linkage Analysis
Chen, Gary K.; Slaten, Erin; Ophoff, Roel A.; Lange, Kenneth
2006-01-01
This work develops a population-genetics model for polymorphic chromosome inversions. The model precisely describes how an inversion changes the nature of and approach to linkage equilibrium. The work also describes algorithms and software for allele-frequency estimation and linkage analysis in the presence of an inversion. The linkage algorithms implemented in the software package Mendel estimate recombination parameters and calculate the posterior probability that each pedigree member carries the inversion. Application of Mendel to eight Centre d'Étude du Polymorphisme Humain pedigrees in a region containing a common inversion on 8p23 illustrates its potential for providing more-precise estimates of the location of an unmapped marker or trait gene. Our expanded cytogenetic analysis of these families further identifies inversion carriers and increases the evidence of linkage. PMID:16826515
NASA Astrophysics Data System (ADS)
Saunders, J. K.; Haase, J. S.
2017-12-01
The rupture location of a Mw 8 megathrust earthquake can dramatically change the near-source tsunami impact, where a shallow earthquake can produce a disproportionally large tsunami for its magnitude. Because the locking pattern of the shallow Cascadia megathrust is unconstrained due to the lack of widespread seafloor geodetic observations, near-source tsunami early warning systems need to be able to identify shallow, near-trench earthquakes. Onshore GPS displacements provide low frequency ground motions and coseismic offsets for characterizing tsunamigenic earthquakes; however, the one-sided distribution of data may not uniquely determine the rupture region. We examine how augmenting the current real-time GPS network in Cascadia with different offshore station configurations improves static slip inversion solutions for Mw 8 earthquakes at different rupture depths. Two offshore coseismic data types are tested in this study: vertical-only, which would be available using existing technology for bottom pressure sensors, and all-component, which could be achieved by combining pressure sensors with real-time GPS-Acoustic observations. We find that both types of offshore data better constrain the rupture region for a shallow earthquake compared to onshore data alone when offshore stations are located above the rupture. However, inversions using vertical-only offshore data tend to underestimate the amount of slip for a shallow rupture, which we show underestimates the tsunami impact. Including offshore horizontal coseismic data into the inversions improves the slip solutions for a given offshore station configuration, especially in terms of maximum slip. This suggests that while real-time GPS-Acoustic sensors may have a long development timeline, they will have more impact for inversion-based tsunami early warning systems than bottom pressure sensors.
We also conduct sensitivity studies using kinematic models with varying rupture speeds and rise times as a proxy for expected rigidity changes with depth along the megathrust. We find distinguishing features in displacement waveforms that can be used to infer primary rupture region. We discuss how kinematic inversion methods that use these characteristics in high-rate GPS data could be applied to the Cascadia subduction zone.
Umeda, Masataka; Corbin, Lisa W; Maluf, Katrina S
2015-01-01
This study aimed to compare muscle pain intensity during a sustained isometric contraction in women with and without fibromyalgia (FM), and examine the association between muscle pain and self-reported levels of physical activity. Fourteen women with FM and 14 healthy women completed the study, where muscle pain ratings (MPRs) were obtained every 30 s during a 3 min isometric handgrip task at 25% maximal strength, and self-reported physical activity was quantified using the Baecke Physical Activity Questionnaire. Women with FM were less physically active than healthy controls. During the isometric contraction, MPR progressively increased in both groups at a comparable rate, but women with FM generally reported a greater intensity of muscle pain than healthy controls. Among all women, average MPR scores were inversely associated with self-reported physical activity levels. Women with FM exhibited augmented muscle pain during isometric contractions and reduced physical activity compared with healthy controls. Furthermore, contraction-induced muscle pain is inversely associated with physical activity levels. These observations suggest that augmented muscle pain may serve as a behavioral correlate of reduced physical activity in women with FM. Implications for Rehabilitation Women with fibromyalgia experience a greater intensity of localized muscle pain in a contracting muscle compared to healthy women. The intensity of pain during muscle contraction is inversely associated with the amount of physical activity in women with and without fibromyalgia. Future studies should determine whether exercise adherence can be improved by considering the relationship between contraction-induced muscle pain and participation in routine physical activity.
Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft
NASA Technical Reports Server (NTRS)
Burken, John J.
2007-01-01
This paper describes the performance of a simplified dynamic inversion controller with neural network supplementation. This 6 DOF (Degree-of-Freedom) simulation study focuses on the results with and without adaptation of neural networks using a simulation of the NASA modified F-15 which has canards. One area of interest is the performance of a simulated surface failure while attempting to minimize the inertial cross coupling effect of a [B] matrix failure (a control derivative anomaly associated with a jammed or missing control surface). Another area of interest presented is simulated aerodynamic failures ([A] matrix) such as a canard failure. The controller uses explicit models to produce desired angular rate commands. The dynamic inversion calculates the necessary surface commands to achieve the desired rates. The simplified dynamic inversion uses approximate short period and roll axis dynamics. Initial results indicated that the transient response for a [B] matrix failure using a Neural Network (NN) improved the control behavior when compared to not using a neural network for a given failure. However, after changes were made to the controller, further evaluation showed comparable performance, with objections to the cross coupling effects. This paper describes the methods employed to reduce the cross coupling effect and maintain adequate tracking errors. The [A] matrix failure results show that control of the aircraft without adaptation is more difficult (less damped) than with active neural networks. Simulation results show neural network augmentation of the controller improves performance in terms of tracking error and cross coupling reduction, and improved performance with aerodynamic-type failures.
NASA Astrophysics Data System (ADS)
Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao
2018-05-01
Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and applying it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in case of inconsistent discontinuities between these models, joint inversion can retain the advantages of individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low velocity layers and a high conductivity layer are observed in the middle lower crust, probably indicating the mid-crustal channel flow. Along the Longmenshan fault zone, a high conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.
Accounting for Selection Bias in Studies of Acute Cardiac Events.
Banack, Hailey R; Harper, Sam; Kaufman, Jay S
2018-06-01
In cardiovascular research, pre-hospital mortality represents an important potential source of selection bias. Inverse probability of censoring weights are a method to account for this source of bias. The objective of this article is to examine and correct for the influence of selection bias due to pre-hospital mortality on the relationship between cardiovascular risk factors and all-cause mortality after an acute cardiac event. The relationship between the number of cardiovascular disease (CVD) risk factors (0-5; smoking status, diabetes, hypertension, dyslipidemia, and obesity) and all-cause mortality was examined using data from the Atherosclerosis Risk in Communities (ARIC) study. To illustrate the magnitude of selection bias, estimates from an unweighted generalized linear model with a log link and binomial distribution were compared with estimates from an inverse probability of censoring weighted model. In unweighted multivariable analyses the estimated risk ratio for mortality ranged from 1.09 (95% confidence interval [CI], 0.98-1.21) for 1 CVD risk factor to 1.95 (95% CI, 1.41-2.68) for 5 CVD risk factors. In the inverse probability of censoring weighted analyses, the risk ratios ranged from 1.14 (95% CI, 0.94-1.39) to 4.23 (95% CI, 2.69-6.66). Estimates from the inverse probability of censoring weighted model were substantially greater than unweighted, adjusted estimates across all risk factor categories. This demonstrates the magnitude of selection bias due to pre-hospital mortality and its effect on estimates of the effect of CVD risk factors on mortality. Moreover, the results highlight the utility of using this method to address a common form of bias in cardiovascular research. Copyright © 2018 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε_k-global minimization of a bound constrained optimization subproblem, where ε_k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
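The augmented Lagrangian framework underlying such algorithms can be sketched on a toy equality-constrained problem. The sketch below uses a generic quadratic penalty with a first-order multiplier update, not the authors' shifted hyperbolic penalty, and all constants are illustrative:

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
# Analytic solution: x* = (0.5, 0.5), multiplier lambda* = -1.
f_grad = lambda x: 2 * x
h = lambda x: x[0] + x[1] - 1.0
h_grad = np.array([1.0, 1.0])

x = np.zeros(2)   # primal iterate
lam = 0.0         # Lagrange multiplier estimate
mu = 10.0         # penalty parameter

for _ in range(50):
    # Inner loop: approximately minimize the augmented Lagrangian
    #   L(x) = f(x) + lam * h(x) + (mu / 2) * h(x)^2
    # by plain gradient descent (the paper uses a metaheuristic here).
    for _ in range(200):
        grad = f_grad(x) + (lam + mu * h(x)) * h_grad
        x -= 0.01 * grad
    # Outer loop: first-order multiplier update.
    lam += mu * h(x)

print(x, lam)  # x approaches [0.5, 0.5], lam approaches -1
```

The structure mirrors the abstract: an outer multiplier/penalty update wrapped around inner subproblem minimizations, with the metaheuristic of the paper playing the role of the inner gradient descent used here.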
Over-Water Aspects of Ground-Effect Vehicles
NASA Technical Reports Server (NTRS)
Kuhn, Richard E.; Carter, Arthur W.; Schade, Robert O.
1960-01-01
The large thrust augmentation obtainable with annular-jet configurations in ground proximity has led to the serious investigation of ground-effect machines. The basic theoretical work on these phenomena has been done by Chaplin and Boehler. Large thrust-augmentation factors, however, can be obtained only at very low heights, that is, of the order of a few percent of the diameter of the vehicle. To take advantage of this thrust augmentation, therefore, the vehicle must either be very large or must operate over very smooth terrain. Over-land uses of these vehicles will then probably be rather limited. The water, however, is inherently smooth, and those irregularities that do exist, that is, waves, are statistically known. It appears therefore that some practical application of ground-effect machines may be made in over-water application.
NASA Astrophysics Data System (ADS)
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth weighting matrix, and other methods. To address the problems posed by large data volumes in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. Tests on a synthetic model and real data from Vinton Dome yield improved results, and the improved inversion algorithm is shown to be effective and feasible. The parallel algorithm we designed performs better than other CUDA-based implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger data volumes, and the new analysis method is practical.
Gunji, Yukio-Pegio; Shinohara, Shuji; Haruna, Taichi; Basios, Vasileios
2017-02-01
To overcome the dualism between mind and matter and to implement consciousness in science, a physical entity has to be embedded with a measurement process. Although quantum mechanics has been regarded as a candidate for implementing consciousness, nature at its macroscopic level is inconsistent with quantum mechanics. We propose a measurement-oriented inference system comprising Bayesian and inverse Bayesian inferences. While Bayesian inference contracts probability space, the newly defined inverse one relaxes the space. These two inferences allow an agent to make a decision corresponding to an immediate change in their environment. They generate a particular pattern of joint probability for data and hypotheses, comprising multiple diagonal and noisy matrices. This is expressed as a nondistributive orthomodular lattice equivalent to quantum logic. We also show that an orthomodular lattice can reveal information generated by inverse syllogism as well as the solutions to the frame and symbol-grounding problems. Our model is the first to connect macroscopic cognitive processes with the mathematical structure of quantum mechanics with no additional assumptions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew
2016-07-01
Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By evoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
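The rate-tuning idea can be illustrated in its simplest possible form: a two-state channel whose stationary open probability is matched to an observed statistic by minimizing a cost function over candidate rates. This toy sketch is illustrative only, and far simpler than the PDE-based probability density approach of the abstract:

```python
import numpy as np

# Two-state channel: closed <-> open with opening rate k_open and
# closing rate k_close. The stationary open probability is
#   p_open = k_open / (k_open + k_close).
def stationary_open_prob(k_open, k_close):
    return k_open / (k_open + k_close)

# "Experimental" summary statistic, generated from hidden true rates.
k_open_true, k_close_true = 2.0, 6.0
p_observed = stationary_open_prob(k_open_true, k_close_true)  # 0.25

# Inverse problem: fix k_close and recover k_open by minimizing a
# cost function measuring the mismatch with the observed statistic.
k_close = 6.0
candidates = np.linspace(0.1, 10.0, 1000)
cost = (stationary_open_prob(candidates, k_close) - p_observed) ** 2
k_open_est = candidates[np.argmin(cost)]

print(k_open_est)  # close to the true value 2.0
```

Note the identifiability caveat built into even this toy: from a stationary statistic alone only the ratio of the rates is determined (k_close must be fixed here), which mirrors the abstract's point that the cost function's properties reveal whether the Markov rates are identifiable.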
Amemiya, Kei; Meyers, Jennifer L; Rogers, Taralyn E; Fast, Randy L; Bassett, Anthony D; Worsham, Patricia L; Powell, Bradford S; Norris, Sarah L; Krieg, Arthur M; Adamovicz, Jeffrey J
2009-04-06
The current U.S. Department of Defense candidate plague vaccine is a fusion between two Yersinia pestis proteins: the F1 capsular protein, and the low calcium response (Lcr) V-protein. We hypothesized that an immunomodulator, such as CpG oligodeoxynucleotide (ODN)s, could augment the immune response to the plague F1-V vaccine in a mouse model for plague. CpG ODNs significantly augmented the antibody response and efficacy of a single dose of the plague vaccine in murine bubonic and pneumonic models of plague. In the latter study, we also found a significant overall augmentation of the immune response to the individual subunits of the plague vaccine by CpG ODN 2006. In a long-term, prime-boost study, CpG ODN induced a significant early augmentation of the IgG response to the vaccine. The presence of CpG ODN induced a significant increase in the IgG2a subclass response to the vaccine up to 5 months after the boost. Our studies showed that CpG ODNs significantly augmented the IgG antibody response to the plague vaccine, which increased the probability of survival in murine models of plague (P<0.0001).
Analysis of space telescope data collection system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Schoggen, W. O.
1982-01-01
An analysis of the expected performance for the Multiple Access (MA) system is provided. The analysis covers the expected bit error rate performance, the effects of synchronization loss, the problem of self-interference, and the problem of phase ambiguity. The problem of false acceptance of a command word due to data inversion is discussed. A mathematical determination of the probability of accepting an erroneous command word due to a data inversion is presented. The problem is examined for three cases: (1) a data inversion only, (2) a data inversion and a random error within the same command word, and (3) a block (up to 256 48-bit words) containing both a data inversion and a random error.
Transitionless driving on adiabatic search algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907
We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, could provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to make the method suitable for seismic inversion. The first strategy is to select and update local areas of poor fit between synthetic and real seismic data. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.
Hypertension: variations in prevalence in the black population.
Kelly, E; Oni, A
1989-03-01
Prevalence trends in hypertension in black men and women show an inversion at about ages 45 to 54 years. Incidence, mortality, and treatment of hypertension after age 35 can probably be related to this inversion. Incidence data are inconsistent and scanty. Morbidity data are incomplete and mostly unreliable. Mortality data partially explain the inversion. Long-term epidemiologic studies of hypertension in black elderly persons are needed to explain these variations in prevalence, which may have a beneficial impact on treatment and prognosis.
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated as Bayesian inference, and Gaussian probability density functions are applied in Bayes' equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian basin. The model of interpretation is a right vertical cylinder. The parameters of the model are obtained from the minimization problem solved by the Simplex method.
Using Transformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions
NASA Astrophysics Data System (ADS)
Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.
2014-12-01
One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.
Hybrid Adaptive Flight Control with Model Inversion Adaptation
NASA Technical Reports Server (NTRS)
Nguyen, Nhan
2011-01-01
This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on Lyapunov theory, and the second is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
Uruc, Vedat; Ozden, Raif; Dogramacı, Yunus; Kalacı, Aydıner; Hallaceli, Hasan; Küçükdurmaz, Fatih
2014-01-01
The aim of this study was to test a simple technique to augment the pullout resistance of an anchor in an over-drilled sheep humerus model. Sixty-four paired sheep humeri were harvested from 32 male sheep aged 18 months. Specimens were divided into an augmented group and non-augmented group. FASTIN RC 5-mm titanium screw anchors (DePuy Mitek, Raynham, MA) double loaded with suture material (braided polyester, nonabsorbable USP No. 2) were used in both groups. Osteoporosis was simulated by over-drilling with a 4.5-mm drill. Augmentation was performed by fixing 1 of the sutures 1.5 cm inferior to the anchor insertion site with a washer screw. This was followed by a pull-to-failure test at 50 mm/min. The ultimate load (the highest value of strength before anchor pullout) was recorded. A paired t test was used to compare the biomechanical properties of the augmented and non-augmented groups. In all specimens the failure mode was pullout of the anchor. The ultimate failure loads were statistically significantly higher in the augmented group (P < .0001). The mean pullout strength was 121.1 ± 10.17 N in the non-augmented group and 176.1 ± 10.34 N in the augmented group. The described augmentation technique, which is achieved by inferior-lateral fixation of 1 of the sutures of the double-loaded anchor to a fully threaded 6.5-mm cancellous screw with a washer, significantly increases the ultimate failure loads in the over-drilled sheep humerus model. Our technique is simple, safe, and inexpensive. It can be easily used in all osteoporotic patients and will contribute to the reduction of anchor failure. This technique might be difficult to apply arthroscopically. Cannulated smaller screws would probably be more practical for arthroscopic use. Further clinical studies are needed. Copyright © 2014 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Pàez, Olga; Alfie, José; Gorosito, Marta; Puleio, Pablo; de Maria, Marcelo; Prieto, Noemì; Majul, Claudio
2009-10-01
Pre-eclampsia not only complicates 5 to 8% of pregnancies but also increases the risk of maternal cardiovascular disease and mortality later in life. We analyzed three different aspects of arterial function (pulse wave velocity, augmentation index, and flow-mediated dilatation) in 55 nonpregnant, normotensive women (18-33 years old) according to their gestational history: 15 nulliparous, 20 with a previous normotensive pregnancy, and 20 with a formerly pre-eclamptic pregnancy. Former pre-eclamptic women showed a significantly higher augmentation index and pulse wave velocity (P < 0.001 and P < 0.05, respectively) and lower flow-mediated dilatation (P = 0.01) compared to the control groups. In contrast, sublingual nitroglycerine elicited a comparable vasodilatory response in the three groups. The augmentation index correlated significantly with pulse wave velocity and flow-mediated dilatation (R = 0.28 and R = -0.32, respectively; P < 0.05 for both). No significant correlations were observed between augmentation index or flow-mediated dilatation and age, body mass index (BMI), brachial blood pressure, heart rate, or metabolic parameters (plasma cholesterol, glucose, insulin, or insulin resistance). Birth weight maintained a significant inverse correlation with the augmentation index (R = -0.51, P < 0.002) but not with flow-mediated dilatation. Our findings revealed a parallel decrease in arterial distensibility and endothelium-dependent dilatation in women with a history of pre-eclampsia compared to nulliparous women and women with a previous normal pregnancy. A high augmentation index was the most consistent alteration associated with a history of pre-eclampsia. The study supports the current view that the generalized arterial dysfunction associated with pre-eclampsia persists subclinically after delivery.
ERIC Educational Resources Information Center
Pence, Harry E.
2012-01-01
The media environment is currently being dramatically changed by social networking, mobile computing, augmented reality, and transmedia. Of these four, transmedia is probably the least familiar to most educators. Transmedia enhances a central story idea with a variety of media components that provide additional information, give increased…
NASA Astrophysics Data System (ADS)
Lewkowicz, A. G.; Smith, K. M.
2004-12-01
The BTS (Basal Temperature of Snow) method to predict permafrost probability in mountain basins uses elevation as an easily available and spatially distributed independent variable. The elevation coefficient in the BTS regression model is, in effect, a substitute for ground temperature lapse rates. Previous work in Wolf Creek (60° 8'N 135° W), a mountain basin near Whitehorse, has shown that the model breaks down in a mid-elevation valley (1250 m asl) where actual permafrost probability is roughly twice that predicted by the model (60% vs. 20-30%). The existence of a double tree-line at the site suggested that air temperature inversions might be the cause of this inaccuracy (Lewkowicz and Ednie, 2004). This paper reports on a first year (08/2003-08/2004) of hourly air and ground temperature data collected along an altitudinal transect within the valley in upper Wolf Creek. Measurements were made at sites located 4, 8, 22, 82 and 162 m above the valley floor. Air temperature inversions between the lowest and highest measurement points occurred 42% of the time and in all months, but were most frequent and intense in winter (>60% of December and January) and least frequent in September (<25% of time). They generally developed after sunset and reached a maximum amplitude before sunrise. Only 11 inversions that lasted through more than one day occurred during the year, and only from October to February. The longest continuous duration was 145 h while the greatest inversion magnitude measured over the 160 m transect was 19° C. Ground surface temperatures are more difficult to interpret because of differences in soils and vegetation cover along the transect and the effects of seasonal snow cover. In many cases, however, air temperature inversions are not duplicated in the ground temperature record. 
Nevertheless, the annual altitudinal ground temperature gradient is much lower than would be expected from a standard atmospheric lapse rate, suggesting that the inversions do have an important impact on permafrost distribution at this site. More generally, therefore, it appears probable that any reduction in inversion frequency resulting from a more vigorous atmospheric circulation in the context of future climate change, would have a significant effect on permafrost distribution in mountain basins.
Estimating Effects with Rare Outcomes and High Dimensional Covariates: Knowledge is Power
Ahern, Jennifer; Galea, Sandro; van der Laan, Mark
2016-01-01
Many of the secondary outcomes in observational studies and randomized trials are rare. Methods for estimating causal effects and associations with rare outcomes, however, are limited, and this represents a missed opportunity for investigation. In this article, we construct a new targeted minimum loss-based estimator (TMLE) for the effect or association of an exposure on a rare outcome. We focus on the causal risk difference and statistical models incorporating bounds on the conditional mean of the outcome, given the exposure and measured confounders. By construction, the proposed estimator constrains the predicted outcomes to respect this model knowledge. Theoretically, this bounding provides stability and power to estimate the exposure effect. In finite sample simulations, the proposed estimator performed as well, if not better, than alternative estimators, including a propensity score matching estimator, inverse probability of treatment weighted (IPTW) estimator, augmented-IPTW and the standard TMLE algorithm. The new estimator yielded consistent estimates if either the conditional mean outcome or the propensity score was consistently estimated. As a substitution estimator, TMLE guaranteed the point estimates were within the parameter range. We applied the estimator to investigate the association between permissive neighborhood drunkenness norms and alcohol use disorder. Our results highlight the potential for double robust, semiparametric efficient estimation with rare events and high dimensional covariates. PMID:28529839
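The IPTW comparator mentioned above can be sketched in a few lines. Everything below is illustrative (a simulated confounder, a known propensity score, a made-up risk model), not the study's data or software: each unit's outcome is reweighted by the inverse probability of the exposure it actually received, then the weighted risks are differenced.

```python
import random

random.seed(0)

# Simulated population: confounder W, binary exposure A, rare binary outcome Y.
# True causal risk difference of A on Y is 0.03 by construction.
n = 20000
data = []
for _ in range(n):
    w = random.random()
    p_a = 0.2 + 0.6 * w                  # true propensity score P(A=1 | W)
    a = 1 if random.random() < p_a else 0
    p_y = 0.02 + 0.03 * a + 0.02 * w     # rare outcome, confounded by W
    y = 1 if random.random() < p_y else 0
    data.append((w, a, y))

def iptw_risk_difference(rows):
    """Hajek-normalized IPTW estimate of the risk difference."""
    s1 = s0 = n1 = n0 = 0.0
    for w, a, y in rows:
        ps = 0.2 + 0.6 * w               # known here; estimated in practice
        if a == 1:
            s1 += y / ps
            n1 += 1.0 / ps
        else:
            s0 += y / (1.0 - ps)
            n0 += 1.0 / (1.0 - ps)
    return s1 / n1 - s0 / n0

print(round(iptw_risk_difference(data), 3))  # should be near 0.03
```

With rare outcomes the raw IPTW estimate is noisy, which is exactly the instability the bounded TMLE above is designed to tame.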
Both sustained orthostasis and inverse-orthostasis may elicit hypertension in conscious rat
NASA Astrophysics Data System (ADS)
Raffai, Gábor; Dézsi, László; Mészáros, Márta; Kollai, Márk; Monos, Emil
2007-02-01
The organism is exposed to diverse orthostatic stimuli, which can induce several acute and chronic adaptive responses. In this study, we investigated hemodynamic responses elicited by short-term and intermediate-term orthostatic stimuli, using normotensive and hypertensive rat models. Arterial blood pressure and heart rate were measured by telemetry. Hypertension was induced by NO-synthase blockade. Effects of orthostatic and inverse-orthostatic body positions were examined in 45° head-up tilt (HUT) or head-down tilt (HDT), either for 5 min repeated 3 times with a 5-min pause ("R") or as sustained tilting for 120 min ("S"). Data are given as mean±SEM. In normotensives, horizontal control blood pressure was R115.4±1.4/S113.7±1.6 mmHg and heart rate was R386.4±7.0/S377.9±8.8 BPM. HUT changed blood pressure by R<±1 (ns)/S4.6 mmHg (p<0.05). HDT resulted in an augmented blood pressure increase of R6.2 (p<0.05)/S14.4 mmHg (p<0.05). In NO-deprived hypertension, horizontal control hemodynamic parameters were R138.4±2.6/S140.3±2.7 mmHg and R342.1±12.0/S346.0±8.3 BPM, respectively. HUT and HDT changed blood pressure further by R<±1 (ns)/S5.6 mmHg (p<0.05) and by R8.9 (p<0.05)/S14.4 mmHg (p<0.05), respectively. Heart rate changed only slightly or non-specifically. These data demonstrate that both normotensive and hypertensive conscious rats restricted from longitudinal locomotion respond to sustained orthostasis or inverse-orthostasis related gravitational stimuli with moderate or augmented hypertension, respectively.
Cuenca, Jacques; Göransson, Peter
2012-08-01
This paper presents a method for simultaneously identifying both the elastic and anelastic properties of the porous frame of anisotropic open-cell foams. The approach is based on an inverse estimation procedure of the complex stiffness matrix of the frame by performing a model fit of a set of transfer functions of a sample of material subjected to compression excitation in vacuo. The material elastic properties are assumed to have orthotropic symmetry and the anelastic properties are described using a fractional-derivative model within the framework of an augmented Hooke's law. The inverse estimation problem is formulated as a numerical optimization procedure and solved using the globally convergent method of moving asymptotes. To show the feasibility of the approach a numerically generated target material is used here as a benchmark. It is shown that the method provides the full frequency-dependent orthotropic complex stiffness matrix within a reasonable degree of accuracy.
Research on fission fragment excitation of gases and nuclear pumping of lasers
NASA Technical Reports Server (NTRS)
Schneider, R. T.; Davie, R. N.; Davis, J. F.; Fuller, J. L.; Paternoster, R. R.; Shipman, G. R.; Sterritt, D. E.; Helmick, H. H.
1974-01-01
Experimental investigations of fission fragment excited gases are reported along with a theoretical analysis of population inversions in fission fragment excited helium. Other studies reported include: nuclear augmentation of gas lasers, direct nuclear pumping of a helium-xenon laser, measurements of a repetitively pulsed high-power CO2 laser, thermodynamic properties of UF6 and UF6/He mixtures, and nuclear waste disposal utilizing a gaseous core reactor.
NASA Technical Reports Server (NTRS)
Gross, S. H.; Pirraglia, J. A.
1972-01-01
A method for augmenting the occultation experiment is described for slightly refractive media. This method which permits separation of the components of the gradient of refractivity, appears applicable to most of the planets for a major portion of their atmospheres and ionospheres. The analytic theory is given, and the results of numerical tests with a radially and angularly varying model of an ionosphere are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mallick, S.
1999-03-01
In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte-Carlo type inversion, using a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information on the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models which best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of prestack inversion with poststack inversion demonstrates that prestack inversion shows detailed stratigraphic features of the subsurface which are not visible on the poststack inversion.
14 CFR 25.941 - Inlet, engine, and exhaust compatibility.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., engine, and exhaust compatibility. For airplanes using variable inlet or exhaust system geometry, or both— (a) The system comprised of the inlet, engine (including thrust augmentation systems, if incorporated... configurations; (b) The dynamic effects of the operation of these (including consideration of probable...
Giza, Eric; Whitlow, Scott R; Williams, Brady T; Acevedo, Jorge I; Mangone, Peter G; Haytmanek, C Thomas; Curry, Eugene E; Turnbull, Travis Lee; LaPrade, Robert F; Wijdicks, Coen A; Clanton, Thomas O
2015-07-01
Secondary surgical repair of ankle ligaments is often indicated in cases of chronic lateral ankle instability. Recently, arthroscopic Broström techniques have been described, but biomechanical information is limited. The purpose of the present study was to analyze the biomechanical properties of an arthroscopic Broström repair and augmented repair with a proximally placed suture anchor. It was hypothesized that the arthroscopic Broström repairs would compare favorably to open techniques and that augmentation would increase the mean repair strength at time zero. Twenty (10 matched pairs) fresh-frozen foot and ankle cadaveric specimens were obtained. After sectioning of the lateral ankle ligaments, an arthroscopic Broström procedure was performed on each ankle using two 3.0-mm suture anchors with #0 braided polyethylene/polyester multifilament sutures. One specimen from each pair was augmented with a 2.9-mm suture anchor placed 3 cm proximal to the inferior tip of the lateral malleolus. Repairs were isolated and positioned in 20 degrees of inversion and 10 degrees of plantarflexion and loaded to failure using a dynamic tensile testing machine. Maximum load (N), stiffness (N/mm), and displacement at maximum load (mm) were recorded. There were no significant differences between standard arthroscopic repairs and the augmented repairs for mean maximum load and stiffness (154.4 ± 60.3 N, 9.8 ± 2.6 N/mm vs 194.2 ± 157.7 N, 10.5 ± 4.7 N/mm, P = .222, P = .685). Repair augmentation did not confer a significantly higher mean strength or stiffness at time zero. Mean strength and stiffness for the arthroscopic Broström repair compared favorably with previous similarly tested open repair and reconstruction methods, validating the clinical feasibility of an arthroscopic repair. However, augmentation with an additional proximal suture anchor did not significantly strengthen the repair. © The Author(s) 2015.
Controlling dynamical entanglement in a Josephson tunneling junction
NASA Astrophysics Data System (ADS)
Ziegler, K.
2017-12-01
We analyze the evolution of an entangled many-body state in a Josephson tunneling junction and its dependence on the number of bosons and interaction strength. A N00N state, which is a superposition of two complementary Fock states, appears in the evolution with sufficient probability only for a moderate many-body interaction on an intermediate time scale. This time scale is inversely proportional to the tunneling rate. Many-body interaction strongly supports entanglement: The probability for creating an entangled state decays exponentially with the number of particles without many-body interaction, whereas it decays only like the inverse square root of the number of particles in the presence of many-body interaction.
NASA Technical Reports Server (NTRS)
VanZwieten, Tannen S.; Gilligan, Eric T.; Wall, John H.; Miller, Christopher J.; Hanson, Curtis E.; Orr, Jeb S.
2015-01-01
NASA's Space Launch System (SLS) Flight Control System (FCS) includes an Adaptive Augmenting Control (AAC) component which employs a multiplicative gain update law to enhance the performance and robustness of the baseline control system for extreme off-nominal scenarios. The SLS FCS algorithm including AAC has been flight tested utilizing a specially outfitted F/A-18 fighter jet in which the pitch axis control of the aircraft was performed by a Non-linear Dynamic Inversion (NDI) controller, SLS reference models, and the SLS flight software prototype. This paper describes test cases from the research flight campaign in which the fundamental F/A-18 airframe structural mode was identified using post-flight frequency-domain reconstruction, amplified to result in closed loop instability, and suppressed in-flight by the SLS adaptive control system.
A Further Note on Generalized Hyperexponential Distributions
1989-11-15
functions. The inverse transform of each of m factors is of the form ... The requirement that θ_i < η_i thus yields a mixture of an atom at the origin and a... real and (θ_i + θ_{i+1})/2 < Re(η_i) when (η_i, η̄_i) are a complex conjugate pair. Then the inverse transform of f*(s) is a probability distribution. To
Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali
2013-04-01
The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually known to be non-linear and high-dimensional, with a complex search space which may be riddled with many local minima, and results in irregular objective functions. We investigate here the performance and the application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform cross-over and low mutation rate are examined. The optimum solution parameters and performance were decided as a function of the testing error convergence with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The cross-over probability is 0.9-0.95 and mutation has been tested at 0.01 probability. The application of such a genetic algorithm to synthetic data shows that the inversion of the acoustic impedance section was effective. Keywords: seismic, inversion, acoustic impedance, genetic algorithm, fitness functions, cross-over, mutation.
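A toy version of such a GA inversion, using the squared L2 misfit, a 0.9 crossover probability, and a 0.01 mutation probability as in the abstract, might look as follows. The 5-layer model, the reflectivity forward operator, and the GA details (elitism of two, truncation selection, Gaussian mutation) are illustrative assumptions, not the authors' implementation:

```python
import random

random.seed(1)

# Toy forward model: reflection coefficients of a layered impedance profile.
def reflectivity(imp):
    return [(imp[i + 1] - imp[i]) / (imp[i + 1] + imp[i])
            for i in range(len(imp) - 1)]

TRUE_IMP = [2.0, 2.5, 3.2, 2.8, 3.6]      # hypothetical 5-layer target
OBSERVED = reflectivity(TRUE_IMP)         # "reference trace"

def fitness(imp):
    # Negative squared L2 norm of the sample-to-sample difference.
    return -sum((s - o) ** 2 for s, o in zip(reflectivity(imp), OBSERVED))

def evolve(pop_size=100, generations=300, pc=0.9, pm=0.01):
    pop = [[random.uniform(1.5, 4.0) for _ in TRUE_IMP] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                          # elitism: keep the 2 best
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            if random.random() < pc:                # uniform crossover
                child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = list(p1)
            child = [g + random.gauss(0.0, 0.1) if random.random() < pm else g
                     for g in child]                # low-rate Gaussian mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve()
print(round(-fitness(best), 6))  # final L2^2 misfit of the best model
```

Note the classic non-uniqueness: reflectivity constrains the impedance profile only up to an overall scale, so the misfit can be driven small by many scaled versions of the true model.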
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha
The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows one to derive the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.
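The abstract does not reproduce the equation itself; in a commonly used one-dimensional form (the paper's exact conventions may differ), a Smoluchowski equation with a time-dependent sink of strength k(t) localized at the origin reads

```latex
\frac{\partial P(x,t)}{\partial t}
  = D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}}
  + \frac{1}{m\gamma}\,\frac{\partial}{\partial x}\!\left[U'(x)\,P(x,t)\right]
  - k(t)\,\delta(x)\,P(x,t).
```

Because the sink acts only at x = 0, the transformed equation couples P(x, s) to the behaviour at the origin, which is why knowledge of P(0, s) suffices to determine P(x, s) everywhere.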
2012-08-01
An implication of the compactness of the Hessian is that for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately... probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this
Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies
Theis, Fabian J.
2017-01-01
Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods designed to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
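As a generic illustration of the inverse-probability resampling idea (a sketch, not the exact algorithms implemented in sambia), one can bootstrap a stratified case-control sample with weights proportional to 1 / P(selection | stratum), so that the resample approaches the population's outcome distribution. The selection probabilities below are assumed known from the study design:

```python
import random

random.seed(2)

# Sampling design (assumed): cases (y=1) selected with prob 0.9, controls 0.1,
# so a population with ~10% cases yields a sample with ~50% cases.
p_select = {1: 0.9, 0: 0.1}

sample = [(random.random(), 1) for _ in range(500)] + \
         [(random.random(), 0) for _ in range(500)]

def ip_bootstrap(rows, n_draws):
    """Resample with weights 1/p_select to undo the selection bias."""
    weights = [1.0 / p_select[y] for _, y in rows]
    return random.choices(rows, weights=weights, k=n_draws)

resample = ip_bootstrap(sample, 20000)
case_share = sum(y for _, y in resample) / len(resample)
print(round(case_share, 2))  # close to the implied population share, 0.1
```

A classifier trained on such resamples sees (approximately) population-distributed data, which is the intuition behind inverse-probability bagging.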
Trimming and procrastination as inversion techniques
NASA Astrophysics Data System (ADS)
Backus, George E.
1996-12-01
By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.
Self-Handicapping Behavior: A Critical Review of Empirical Research.
ERIC Educational Resources Information Center
Carsrud, Robert Steven
Since the identification of self-handicapping strategies in 1978, considerable attention has been paid to this phenomenon. Self-handicapping is a strategy for discounting ability attributions for probable failure while augmenting ability attributions for possible success. Behavioral self-handicaps are conceptually distinct from self-reported…
Present and Probable CATV/Broadband-Communication Technology.
ERIC Educational Resources Information Center
Ward, John E.
The study reports on technical and cost factors affecting future growth of Cable TV (CATV) systems and the development of the "wired nation." Comparisons are made between alternatives for distributing CATV signals and alternative prototypes for subscriber home terminals. Multi-cable, augmented-channel (with converter), and switched CATV…
Kaur, Ramanpreet; Vikas
2015-02-21
2-Aminopropionitrile (2-APN), a probable candidate as a chiral astrophysical molecule, is a precursor to the amino acid alanine. Stereochemical pathways in 2-APN are explored using the Global Reaction Route Mapping (GRRM) method employing high-level quantum-mechanical computations. Besides predicting the conventional mechanism for chiral inversion, which proceeds through an achiral intermediate, a counterintuitive flipping mechanism is revealed for 2-APN through chiral intermediates explored using the GRRM. The feasibility of the proposed stereochemical pathways, in terms of the Gibbs free-energy change, is analyzed at temperature conditions akin to the interstellar medium. Notably, the stereoinversion in 2-APN is observed to be more feasible than the dissociation of 2-APN and of the intermediates involved along the stereochemical pathways, and the flipping barrier is observed to be as low as 3.68 kJ/mol along one of the pathways. The pathways proposed for the inversion of chirality in 2-APN may provide significant insight into the extraterrestrial origin of life.
Volonté, Francesco; Pugin, François; Bucher, Pascal; Sugimoto, Maki; Ratib, Osman; Morel, Philippe
2011-07-01
New technologies can considerably improve preoperative planning, enhance the surgeon's skill and simplify the approach to complex procedures. Augmented reality techniques, robot-assisted operations and computer-assisted navigation tools will become increasingly important in surgery and in residents' education. We obtained 3D reconstructions from standard spiral computed tomography (CT) slices using OsiriX, an open-source processing software package dedicated to DICOM images. These images were then projected onto the patient's body with a projector fixed to the operating table to enhance spatial perception during surgical intervention (augmented reality). Changing the window depth level allowed the surgeon to navigate through the patient's anatomy, highlighting regions of interest and marked pathologies. We used image-overlay navigation for laparoscopic operations such as cholecystectomy, abdominal exploration and distal pancreas resection, and for robotic liver resection. Augmented reality techniques will transform the behaviour of surgeons, making surgical interventions easier, faster and probably safer. These new techniques will also renew methods of surgical teaching, facilitating the transmission of knowledge and skill to young surgeons.
NASA Astrophysics Data System (ADS)
Haris, A.; Novriyani, M.; Suparno, S.; Hidayat, R.; Riyanto, A.
2017-07-01
This study presents the integration of seismic stochastic inversion and multi-attribute analysis for delineating the reservoir distribution, in terms of lithology and porosity, in the formation within the depth interval between the Top Sihapas and Top Pematang. The method used is a stochastic inversion integrated with seismic multi-attributes through a Probabilistic Neural Network (PNN). Stochastic methods are used to map the probability of sandstone from 50 realizations of the inverted impedance, which together yield a robust probability estimate. Stochastic seismic inversion is more interpretive because it directly gives the value of the property. Our experiment shows that the acoustic impedance (AI) from stochastic inversion captures more diverse uncertainty, so that the probability values are close to the actual values. The resulting AI is then used as input to a multi-attribute analysis that predicts the gamma-ray, density and porosity logs. A stepwise regression algorithm is applied to select the attributes used in the PNN; the PNN method is chosen because it has the best correlation among the neural network methods considered. Finally, we interpret the products of the multi-attribute analysis, in the form of pseudo-gamma-ray, density and pseudo-porosity volumes, to delineate the reservoir distribution. Our interpretation shows that a structural trap is identified in the southeastern part of the study area, along the anticline.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2008-03-01
This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.
Is electroconvulsive therapy effective as augmentation in clozapine-resistant schizophrenia?
Kittsteiner Manubens, Lucas; Lobos Urbina, Diego; Aceituno, David
2016-10-14
Clozapine is considered to be the most effective antipsychotic drug for patients with treatment-resistant schizophrenia, but up to a third of patients do not respond to this treatment. Various strategies have been tried to augment the effect of clozapine in non-responders, one of these strategies being electroconvulsive therapy. However, its efficacy and safety are not yet clear. Searching the Epistemonikos database, which is maintained by screening 30 databases, we identified six systematic reviews including 55 studies, among them six randomized controlled trials addressing clozapine-resistant schizophrenia. We combined the evidence using meta-analysis and generated a summary of findings following the GRADE approach. We concluded that electroconvulsive therapy probably augments response to clozapine in patients with treatment-resistant schizophrenia, but it is not possible to determine whether it leads to cognitive adverse effects because the certainty of the evidence is very low.
Brilhault, Jean; Noël, Vincent
2012-10-01
The decision to offer surgery for Stage II posterior tibial tendon deficiency (PTTD) is a difficult one, since orthotic treatment has been documented to be a viable alternative to surgery at this stage. Taking this into consideration, we limited our treatment to bony realignment by a lengthening calcaneus Evans osteotomy and tendon balancing. The goal of the study was to clinically evaluate PTT functional recovery with this procedure. The patient population included 17 feet in 13 patients. Inclusion was limited to early Stage II PTTD flatfeet with grossly intact but deficient PTT. Deficiency was assessed by the lack of hindfoot inversion during the single heel rise test. The surgical procedure included an Evans calcaneal opening wedge osteotomy with triceps surae and peroneus brevis tendon lengthening. PTT function at follow-up was evaluated by an independent examiner. Evaluation was performed at an average of 4 (range, 2 to 6.3) years. One case presented postoperative subtalar pain that required subtalar fusion. Every foot could perform a single heel rise, with 13 feet having active inversion of the hindfoot during elevation. The results of this study provide evidence of PTT functional recovery without augmentation in early Stage II. It challenges our understanding of early Stage II PTTD as well as the surgical guidelines recommending PTT augmentation at this specific stage.
Continuous Strategy Development for Effects-Based Operations
2006-02-01
the probability of COA success. The time slider from the “Time Selector” choice in the View menu may also be used to animate the probability coloring...will Deploy WMD, since this can be assumed to have the inverse probability (1-P) of our objective. Clausewitz theory teaches us that an enemy must be... using XSLT, a concise language for transforming XML documents, for forward and reverse conversion between the SDT and SMS plan formats. 2. Develop a
A Model-Based Architecture Approach to Ship Design Linking Capability Needs to System Solutions
2012-06-01
NSSM NATO Sea Sparrow Missile RAM Rolling Airframe Missile CIWS Close-In Weapon System 3D Three Dimensional Ps Probability of Survival PHit ...example effectiveness model. The primary MOP is the inverse of the probability of taking a hit (1 - PHit), which, in this study, will be referred to as
Family History as an Indicator of Risk for Reading Disability.
ERIC Educational Resources Information Center
Volger, George P.; And Others
1984-01-01
Self-reported reading ability of parents of 174 reading-disabled children and of 182 controls was used to estimate the probability that a child will become reading disabled. Using Bayesian inverse probability analysis, it was found that the risk for reading disability is increased substantially if either parent has had difficulty in learning to…
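The Bayesian inverse-probability calculation behind such risk estimates can be sketched with Bayes' rule; the base rate and conditional probabilities below are illustrative assumptions, not the study's figures.

```python
# Bayes' rule: P(child reading-disabled | parent reports difficulty).
# All numbers are hypothetical, for illustration only.
base_rate = 0.05            # P(child reading-disabled) in the population
p_parent_given_d = 0.40     # P(parent reports difficulty | child disabled)
p_parent_given_not_d = 0.10 # P(parent reports difficulty | child not disabled)

evidence = (p_parent_given_d * base_rate
            + p_parent_given_not_d * (1 - base_rate))
posterior = p_parent_given_d * base_rate / evidence
print(f"posterior risk given an affected parent: {posterior:.3f}")  # → 0.174
```

Under these assumed rates the risk given an affected parent is roughly 3.5 times the base rate, the kind of substantial increase the abstract describes.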
Size distribution of submarine landslides along the U.S. Atlantic margin
Chaytor, J.D.; ten Brink, Uri S.; Solow, A.R.; Andrews, B.D.
2009-01-01
Assessment of the probability for destructive landslide-generated tsunamis depends on the knowledge of the number, size, and frequency of large submarine landslides. This paper investigates the size distribution of submarine landslides along the U.S. Atlantic continental slope and rise using the size of the landslide source regions (landslide failure scars). Landslide scars along the margin identified in a detailed bathymetric Digital Elevation Model (DEM) have areas that range between 0.89 km² and 2410 km² and volumes between 0.002 km³ and 179 km³. The area to volume relationship of these failure scars is almost linear (inverse power-law exponent close to 1), suggesting a fairly uniform failure thickness of a few tens of meters in each event, with only rare, deeply excavating landslides. The cumulative volume distribution of the failure scars is very well described by a log-normal distribution rather than by an inverse power-law, the most commonly used distribution for both subaerial and submarine landslides. A log-normal distribution centered on a volume of 0.86 km³ may indicate that landslides preferentially mobilize a moderate amount of material (on the order of 1 km³), rather than very large or very small volumes. Alternatively, the log-normal distribution may reflect an inverse power-law distribution modified by a size-dependent probability of observing landslide scars in the bathymetry data. If the latter is the case, an inverse power-law distribution with an exponent of 1.3 ± 0.3, modified by a size-dependent conditional probability of identifying more failure scars with increasing landslide size, fits the observed size distribution. This exponent value is similar to the predicted exponent of 1.2 ± 0.3 for subaerial landslides in unconsolidated material.
Both the log-normal and modified inverse power-law distributions of the observed failure scar volumes suggest that large landslides, which have the greatest potential to generate damaging tsunamis, occur infrequently along the margin. © 2008 Elsevier B.V.
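A log-normal fit of the kind described can be sketched by maximum likelihood on the log-volumes; the "landslide volumes" below are simulated from assumed parameters, not the surveyed data.

```python
import math
import random

random.seed(42)
# Simulate landslide volumes (km^3) from an assumed log-normal.
mu_true, sigma_true = 0.0, 1.2   # parameters of log(volume); illustrative
volumes = [random.lognormvariate(mu_true, sigma_true) for _ in range(5000)]

# MLE for a log-normal: sample mean and std of the log-volumes.
logs = [math.log(v) for v in volumes]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((x - mu_hat) ** 2 for x in logs) / len(logs))
median_volume = math.exp(mu_hat)  # the log-normal median is exp(mu)
print(mu_hat, sigma_hat, median_volume)
```

A fit like this recovers the central volume (here exp(mu) near 1 km³), mirroring the paper's finding of a distribution centered on a moderate landslide size.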
Appraisal of geodynamic inversion results: a data mining approach
NASA Astrophysics Data System (ADS)
Baumann, T. S.
2016-11-01
Bayesian sampling-based inversions require many thousands or even millions of forward models, depending on how nonlinear or non-unique the inverse problem is, and how many unknowns are involved. The result of such a probabilistic inversion is not a single `best-fit' model, but rather a probability distribution that is represented by the entire model ensemble. Often, a geophysical inverse problem is non-unique, and the corresponding posterior distribution is multimodal, meaning that the distribution consists of clusters of similar models that represent the observations equally well. In these cases, we would like to visualize the characteristic model properties within each of these clusters of models. However, even for a moderate number of inversion parameters, a manual appraisal of a large number of models is not feasible. This poses the question of whether it is possible to extract end-member models that represent each of the best-fit regions, including their uncertainties. Here, I show how a machine learning tool can be used to characterize end-member models, including their uncertainties, from a complete model ensemble that represents a posterior probability distribution. The model ensemble used here results from a nonlinear geodynamic inverse problem in which rheological properties of the lithosphere are constrained from multiple geophysical observations. It is demonstrated that, by taking vertical cross-sections through the effective viscosity structure of each of the models, the entire model ensemble can be classified into four end-member model categories that have a similar effective viscosity structure. These classification results are helpful to explore the non-uniqueness of the inverse problem and can be used to compute representative data fits for each of the end-member models. Conversely, these insights also reveal how new observational constraints could reduce the non-uniqueness. 
The method is not limited to geodynamic applications and a generalized MATLAB code is provided to perform the appraisal analysis.
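The appraisal idea, classifying ensemble members by their depth profiles, can be illustrated with a minimal k-means clustering; the two-cluster synthetic "viscosity profiles" and their values are assumptions for demonstration, not the geodynamic ensemble.

```python
import random

random.seed(1)

def kmeans(profiles, k, iters=20):
    """Minimal k-means on equal-length profiles (lists of floats).
    Deterministic init with the first and last members (assumes k = 2)."""
    centers = [list(profiles[0]), list(profiles[-1])]
    labels = [0] * len(profiles)
    for _ in range(iters):
        # Assign each profile to the nearest center (squared distance).
        labels = [min(range(k), key=lambda j: sum(
            (p - c) ** 2 for p, c in zip(prof, centers[j])))
            for prof in profiles]
        # Recompute each center as the coordinate-wise mean of its members.
        for j in range(k):
            members = [prof for prof, l in zip(profiles, labels) if l == j]
            if members:
                centers[j] = [sum(col) / len(col) for col in zip(*members)]
    return labels, centers

# Two synthetic end-member profiles (log10 viscosity vs depth) plus noise.
strong = [22.0, 21.0, 20.0, 19.5]
weak = [20.0, 19.0, 18.5, 18.0]
ensemble = ([[v + random.gauss(0, 0.1) for v in strong] for _ in range(30)]
            + [[v + random.gauss(0, 0.1) for v in weak] for _ in range(30)])
labels, centers = kmeans(ensemble, k=2)
```

With well-separated clusters the labels recover the two end-member families, and each center is a representative profile of its cluster.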
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. 
This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
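The role of a low-order autoregressive error model can be sketched with the AR(1) case: estimate the serial-correlation coefficient from the residuals and whiten them with it. The synthetic residuals and the AR(1) restriction are assumptions for illustration; the paper's hierarchical formulation is more general.

```python
import random

random.seed(7)
# Simulate serially correlated data-error residuals: AR(1) with phi = 0.6.
phi_true, n = 0.6, 20000
r = [random.gauss(0, 1)]
for _ in range(n - 1):
    r.append(phi_true * r[-1] + random.gauss(0, 1))

# Estimate phi as the lag-1 autocorrelation of the residuals.
mean = sum(r) / n
c0 = sum((x - mean) ** 2 for x in r) / n
c1 = sum((r[i] - mean) * (r[i + 1] - mean) for i in range(n - 1)) / n
phi_hat = c1 / c0

# Whiten: e_t = r_t - phi_hat * r_{t-1} should be nearly uncorrelated.
e = [r[i] - phi_hat * r[i - 1] for i in range(1, n)]
m_e = sum(e) / len(e)
c0e = sum((x - m_e) ** 2 for x in e) / len(e)
c1e = sum((e[i] - m_e) * (e[i + 1] - m_e) for i in range(len(e) - 1)) / len(e)
lag1_whitened = c1e / c0e
```

One fitted parameter (phi) absorbs the serial correlation, which is why such error models avoid forming or inverting a full data-error covariance matrix.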
NASA Technical Reports Server (NTRS)
Wall, John H.; VanZwieten, Tannen S.; Gilligan, Eric T.; Miller, Christopher J.; Hanson, Curtis E.; Orr, Jeb S.
2015-01-01
NASA's Space Launch System (SLS) Flight Control System (FCS) includes an Adaptive Augmenting Control (AAC) component which employs a multiplicative gain update law to enhance the performance and robustness of the baseline control system for extreme off nominal scenarios. The SLS FCS algorithm including AAC has been flight tested utilizing a specially outfitted F/A-18 fighter jet in which the pitch axis control of the aircraft was performed by a Non-linear Dynamic Inversion (NDI) controller, SLS reference models, and the SLS flight software prototype. This paper describes test cases from the research flight campaign in which the fundamental F/A-18 airframe structural mode was identified using frequency-domain reconstruction of flight data, amplified to result in closed loop instability, and suppressed in-flight by the SLS adaptive control system.
Coley's Lessons Remembered: Augmenting Mistletoe Therapy.
Orange, Maurice; Reuter, Uwe; Hobohm, Uwe
2016-12-01
The following four observations point in the same direction, namely that there is an unleveraged potential for stimulating the innate immune system against cancer: (1) experimental treatments with bacterial extracts more than 100 years ago by Coley and contemporaries, (2) a positive correlation between spontaneous regressions and febrile infection, (3) epidemiological data suggesting an inverse correlation between a history of infection and the likelihood of developing cancer, and (4) our recent finding that a cocktail of pattern recognition receptor ligands (PRRLs) can eradicate solid tumors in tumor-bearing mice if applied metronomically. Because the main immunostimulating component of mistletoe extract (ME), mistletoe lectin, has been shown to be a PRRL as well, we suggest applying ME in combination with additional PRRLs. Additional PRRLs can be found in approved drugs already on the market. Therefore, augmentation of ME might be feasible, with the aim of reattaining the old successes using approved drugs rather than bacterial extracts. © The Author(s) 2016.
More than Just Convenient: The Scientific Merits of Homogeneous Convenience Samples
Jager, Justin; Putnick, Diane L.; Bornstein, Marc H.
2017-01-01
Despite their disadvantaged generalizability relative to probability samples, non-probability convenience samples are the standard within developmental science, and likely will remain so because probability samples are cost-prohibitive and most available probability samples are ill-suited to examine developmental questions. In lieu of focusing on how to eliminate or sharply reduce reliance on convenience samples within developmental science, here we propose how to augment their advantages when it comes to understanding population effects as well as subpopulation differences. Although all convenience samples have less clear generalizability than probability samples, we argue that homogeneous convenience samples have clearer generalizability relative to conventional convenience samples. Therefore, when researchers are limited to convenience samples, they should consider homogeneous convenience samples as a positive alternative to conventional (or heterogeneous) convenience samples. We discuss future directions as well as potential obstacles to expanding the use of homogeneous convenience samples in developmental science. PMID:28475254
On the inverse Magnus effect in free molecular flow
NASA Astrophysics Data System (ADS)
Weidman, Patrick D.; Herczynski, Andrzej
2004-02-01
A Newton-inspired particle interaction model is introduced to compute the sideways force on spinning projectiles translating through a rarefied gas. The simple model reproduces the inverse Magnus force on a sphere reported by Borg, Söderholm and Essén [Phys. Fluids 15, 736 (2003)] using probability theory. Further analyses given for cylinders and parallelepipeds of rectangular and regular polygon section point to a universal law for this class of geometric shapes: when the inverse Magnus force is steady, it is proportional to one-half the mass M of gas displaced by the body.
Pólya number and first return of bursty random walk: Rigorous solutions
NASA Astrophysics Data System (ADS)
Wan, J.; Xu, X. P.
2012-03-01
The recurrence properties of random walks can be characterized by the Pólya number, i.e., the probability that the walker returns to the origin at least once. In this paper, we investigate the Pólya number and first return for a bursty random walk on a line, in which the walk has different step sizes and moving probabilities. Using the concept of the Catalan number, we obtain, for the first time, exact results for the first return probability, the average first return time and the Pólya number. We show that the Pólya number displays two different functional behaviors when the walk deviates from the recurrent point. By utilizing the Lagrange inversion formula, we interpret our findings by transferring the Pólya number to the closed-form solutions of an inverse function. We also calculate the Pólya number using another approach, which corroborates our results and conclusions. Finally, we consider the recurrence properties and Pólya number of two variations of the bursty random walk model.
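For the simplest special case, a nearest-neighbour walk with right-step probability p, the Pólya number has the closed form 1 - |2p - 1|, which a short Monte Carlo check can illustrate. The truncation at a finite number of steps is an approximation (walks that drift away are counted as non-returning), and this sketch does not reproduce the bursty-walk model itself.

```python
import random

random.seed(3)

def returned(p, max_steps=500):
    """Does a biased nearest-neighbour walk on a line return to the origin?"""
    pos = 0
    for _ in range(max_steps):
        pos += 1 if random.random() < p else -1
        if pos == 0:
            return True
    return False  # drifting walks almost surely never return

p = 0.7
trials = 5000
estimate = sum(returned(p) for _ in range(trials)) / trials
exact = 1 - abs(2 * p - 1)   # Pólya number of the biased walk: 0.6 here
```

At p = 0.5 the walk is recurrent (Pólya number 1); any bias makes the return probability drop below 1, the kind of transition the paper studies for bursty steps.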
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior means and covariances can also be efficiently derived. I show that the maximum a posteriori (MAP) solution can be obtained using a non-negative least-squares algorithm for the single truncated case, or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. 
The numerical equivalence to Bayesian inversions using Monte Carlo Markov chain (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need of computer power is largely reduced. Second, unlike Bayesian MCMC-based approach, marginal pdf, mean, variance or covariance are obtained independently one from each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
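The MAP computation under a positivity constraint can be sketched with a projected-gradient non-negative least-squares solver; the tiny two-parameter "slip" system below is a made-up illustration, not a fault model, and production codes would use a dedicated NNLS routine.

```python
def nnls_pg(G, d, steps=2000, lr=0.05):
    """Projected gradient descent for min ||G m - d||^2 subject to m >= 0."""
    n = len(G[0])
    m = [0.0] * n
    for _ in range(steps):
        # Residual r = G m - d and gradient 2 G^T r.
        r = [sum(G[i][j] * m[j] for j in range(n)) - d[i]
             for i in range(len(d))]
        grad = [2 * sum(G[i][j] * r[i] for i in range(len(d)))
                for j in range(n)]
        # Gradient step followed by projection onto m >= 0.
        m = [max(0.0, m[j] - lr * grad[j]) for j in range(n)]
    return m

# Toy system: the unconstrained least-squares solution is m = (1, 3),
# already non-negative, so the constrained MAP coincides with it here.
G = [[2.0, 0.0], [0.0, 1.0]]
d = [2.0, 3.0]
m_map = nnls_pg(G, d)
```

When the unconstrained solution has negative components, the projection activates and the constrained MAP lands on the boundary, which is exactly where the truncated-Gaussian posterior concentrates.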
Complex Study of the Physical Properties of Reticulated Vitreous Carbon
NASA Astrophysics Data System (ADS)
Alifanov, O. M.; Cherepanov, V. V.; Morzhukhina, A. V.
2015-01-01
We give an example of using a two-level identification system incorporating an augmented mathematical model covering the structure and the thermal, electrophysical, and optical properties of nonmetallic ultraporous reticulated materials. The model, when combined with a nonstationary thermal experiment and methods of the theory of inverse heat transfer problems, permits determining the little-studied characteristics of the above materials. We present some of the results of investigations of reticulated vitreous carbon confirming the possibility of using it in a number of engineering applications.
Force and Moment Approach for Achievable Dynamics Using Nonlinear Dynamic Inversion
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.; Bacon, Barton J.
1999-01-01
This paper describes a general form of nonlinear dynamic inversion control for use in a generic nonlinear simulation to evaluate candidate augmented aircraft dynamics. The implementation is specifically tailored to the task of quickly assessing an aircraft's control power requirements and defining the achievable dynamic set. The achievable set is evaluated while undergoing complex mission maneuvers, and perfect tracking will be accomplished when the desired dynamics are achievable. Variables are extracted directly from the simulation model each iteration, so robustness is not an issue. Included in this paper is a description of the implementation of the forces and moments from simulation variables, the calculation of control effectiveness coefficients, methods for implementing different types of aerodynamic and thrust vectoring controls, adjustments for control effector failures, and the allocation approach used. A few examples illustrate the perfect tracking results obtained.
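A minimal scalar instance of dynamic inversion (a hypothetical plant, not the paper's simulation) makes the idea concrete: for xdot = f(x) + g(x)u, choosing u = (v - f(x)) / g(x) forces the closed loop to follow the desired dynamics v exactly, which is the "perfect tracking" property described above.

```python
import math

def f(x):  # assumed nonlinear plant drift
    return -math.sin(x)

def g(x):  # assumed control effectiveness (kept nonzero)
    return 2.0 + 0.5 * math.cos(x)

# Track x_ref = 1.0 with desired first-order dynamics v = -k (x - x_ref).
x, x_ref, k, dt = 0.0, 1.0, 4.0, 0.001
for _ in range(5000):  # 5 s of Euler integration
    v = -k * (x - x_ref)          # desired xdot
    u = (v - f(x)) / g(x)         # dynamic-inversion control law
    x += (f(x) + g(x) * u) * dt   # plant update: xdot equals v exactly
print(x)
```

Control power limits enter when the commanded u exceeds what the effectors can deliver; checking that boundary over a maneuver set is the "achievable dynamics" assessment the paper automates.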
Acoustic measurement of bubble size in an inkjet printhead.
Jeurissen, Roger; van der Bos, Arjan; Reinten, Hans; van den Berg, Marc; Wijshoff, Herman; de Jong, Jos; Versluis, Michel; Lohse, Detlef
2009-11-01
The volume of a bubble in a piezoinkjet printhead is measured acoustically. The method is based on a numerical model of the investigated system. The piezo not only drives the system but it is also used as a sensor by measuring the current it generates. The numerical model is used to predict this current for a given bubble volume. The inverse problem is to infer the bubble volume from an experimentally obtained piezocurrent. By solving this inverse problem, the size and position of the bubble can thus be measured acoustically. The method is experimentally validated with an inkjet printhead that is augmented with a glass connection channel, through which the bubble was observed optically, while at the same time the piezocurrent was measured. The results from the acoustical measurement method correspond closely to the results from the optical measurement.
Schaubel, Douglas E; Wei, Guanghui
2011-03-01
In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty of verifying the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative, as opposed to instantaneous, treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
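The first weighting stage (IPTW) can be sketched nonparametrically with one binary covariate: weight each subject by the inverse of the empirical probability of the treatment actually received given the covariate, which balances the covariate across arms. The data are simulated assumptions; an IPCW stage would multiply a second, censoring weight onto each subject.

```python
import random

random.seed(11)
# Simulate confounded treatment: X = 1 makes treatment more likely.
data = []
for _ in range(2000):
    x = int(random.random() < 0.4)
    t = int(random.random() < (0.7 if x else 0.3))
    data.append((x, t))

# Empirical propensity P(T = 1 | X) and inverse-probability weights.
p = {x: sum(t for xi, t in data if xi == x) /
        sum(1 for xi, _ in data if xi == x) for x in (0, 1)}
w = [1 / p[x] if t else 1 / (1 - p[x]) for x, t in data]

# After weighting, the covariate distribution is balanced across arms.
def weighted_mean_x(arm):
    num = sum(wi * x for (x, t), wi in zip(data, w) if t == arm)
    den = sum(wi for (x, t), wi in zip(data, w) if t == arm)
    return num / den

balance_gap = abs(weighted_mean_x(1) - weighted_mean_x(0))
```

With a discrete covariate and in-sample propensities the weighted covariate means match exactly in both arms, whereas the unweighted means differ substantially under this confounded design.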
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
Developing the (d,p γ) reaction as a surrogate for (n, γ) in inverse kinematics
NASA Astrophysics Data System (ADS)
Lepailleur, Alexandre; Baugher, Travis; Cizewski, Jolie; Ratkiewicz, Andrew; Walter, David; Pain, Steven; Smith, Karl; Garland, Heather; Goddess Collaboration
2016-09-01
The r-process that proceeds via (n, γ) reactions on neutron-rich nuclei is responsible for the synthesis of about half of the elements heavier than iron. Because (n, γ) measurements on short-lived isotopes are not possible, the (d,p γ) reaction is being investigated as a surrogate for (n, γ). Of particular importance is validating a surrogate in inverse kinematics. Therefore, the 95Mo(d,p γ) reaction was measured in inverse kinematics with stable beams from ATLAS and CD2 targets. Reaction protons were measured in coincidence with gamma rays with GODDESS - Gammasphere ORRUBA: Dual Detectors for Experimental Structure Studies. The Oak Ridge Rutgers University Barrel Array (ORRUBA) of position-sensitive silicon strip detectors was augmented with annular arrays of segmented strip detectors at backward and forward angles, resulting in a high-angular coverage for light ejectiles. Preliminary results from the 95Mo(d,p γ) study will be presented. This work was supported in part by the U.S. Department of Energy and National Science Foundation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
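The single-Gaussian analytical posterior can be illustrated in the scalar case: a Gaussian prior on log-impedance combined with a linearized forward model yields a Gaussian posterior in closed form. All numbers below are illustrative assumptions.

```python
# Scalar linear-Gaussian update: prior m ~ N(mu0, s0^2), datum d = g*m + e,
# noise e ~ N(0, se^2). The posterior is N(mu_post, s_post^2):
mu0, s0 = 9.0, 0.5       # prior mean/std of log-impedance (assumed)
g, se = 1.0, 0.2         # linearized forward operator and noise std (assumed)
d = 9.6                  # observed datum

prec_post = 1 / s0**2 + g**2 / se**2                # posterior precision
s_post = prec_post ** -0.5
mu_post = (mu0 / s0**2 + g * d / se**2) / prec_post
print(mu_post, s_post)
```

The posterior mean is a precision-weighted average of prior and data, and the posterior std is always smaller than both, which is why the single-Gaussian case admits a fast, sampling-free algorithm.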
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
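The two-step recipe in the abstract (colour Gaussian white noise to the desired spectral shape, then apply a memoryless inverse-CDF transform to reach the target amplitude distribution) can be sketched in one dimension. The moving-average filter and the exponential target below are illustrative choices for this sketch, not the choices made in the paper:

```python
import math
import random

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_colored_nongaussian(n, span=8, rate=1.0, seed=0):
    """1-D sketch of the two-step method: colour Gaussian white noise,
    then transform point-wise to a target (here exponential) PDF.
    As the abstract notes, the result is approximate ('an engineering
    approach'): the point-wise transform perturbs the imposed spectrum."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n + span)]
    # Step 1: impose a spectral shape (moving average = simple low-pass);
    # dividing by sqrt(span) keeps the coloured series unit-variance.
    colored = [sum(white[i:i + span]) / math.sqrt(span) for i in range(n)]
    # Step 2: memoryless transform Gaussian -> uniform -> exponential.
    out = []
    for z in colored:
        u = min(max(normal_cdf(z), 1e-12), 1.0 - 1e-12)
        out.append(-math.log(1.0 - u) / rate)  # inverse exponential CDF
    return out

x = simulate_colored_nongaussian(5000)
```

The marginal distribution of the output is exactly the target (here exponential with mean 1/rate), while the correlation structure inherited from step 1 is only approximately preserved through the nonlinearity.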
Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A
2015-01-15
Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
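Whichever algorithm produces the fitted treatment probabilities (logistic regression, SL, or EL), the stabilized weight itself is the same ratio of a marginal numerator to a conditional denominator. A minimal sketch with made-up propensity values:

```python
def stabilized_weight(treated, p_hat, p_marginal):
    """Stabilized inverse probability of treatment weight:
    numerator = marginal treatment probability, denominator = the
    (possibly ensemble-based) estimate of P(A = a | covariates)."""
    if treated:
        return p_marginal / p_hat
    return (1.0 - p_marginal) / (1.0 - p_hat)

# Toy cohort: (treatment indicator, fitted propensity score) -- made up.
cohort = [(1, 0.8), (1, 0.5), (0, 0.3), (0, 0.6)]
p_marg = sum(a for a, _ in cohort) / len(cohort)  # marginal P(A=1) = 0.5
weights = [stabilized_weight(a, ps, p_marg) for a, ps in cohort]
```

Stabilization keeps the weights near 1 on average, which is what yields the tighter confidence intervals the abstract reports relative to unstabilized weighting.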
a Novel Discrete Optimal Transport Method for Bayesian Inverse Problems
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Myers, A.; Wang, K.; Thiery, A.
2017-12-01
We present the Augmented Ensemble Transform (AET) method for generating approximate samples from a high-dimensional posterior distribution as a solution to Bayesian inverse problems. Solving large-scale inverse problems is critical for some of the most relevant and impactful scientific endeavors of our time. Therefore, constructing novel methods for solving the Bayesian inverse problem in more computationally efficient ways can have a profound impact on the science community. This research derives the novel AET method for exploring a posterior by solving a sequence of linear programming problems, resulting in a series of transport maps which map prior samples to posterior samples, allowing for the computation of moments of the posterior. We show both theoretical and numerical results, indicating this method can offer superior computational efficiency when compared to other SMC methods. Most of this efficiency is derived from matrix scaling methods to solve the linear programming problem and derivative-free optimization for particle movement. We use this method to determine inter-well connectivity in a reservoir and the associated uncertainty related to certain parameters. The attached file shows the difference between the true parameter and the AET parameter in an example 3D reservoir problem. The error is within the Morozov discrepancy allowance with lower computational cost than other particle methods.
Exploring the Subtleties of Inverse Probability Weighting and Marginal Structural Models.
Breskin, Alexander; Cole, Stephen R; Westreich, Daniel
2018-05-01
Since being introduced to epidemiology in 2000, marginal structural models have become a commonly used method for causal inference in a wide range of epidemiologic settings. In this brief report, we aim to explore three subtleties of marginal structural models. First, we distinguish marginal structural models from the inverse probability weighting estimator, and we emphasize that marginal structural models are not only for longitudinal exposures. Second, we explore the meaning of the word "marginal" in "marginal structural model." Finally, we show that the specification of a marginal structural model can have important implications for the interpretation of its parameters. Each of these concepts has important implications for the use and understanding of marginal structural models, and thus providing detailed explanations of them may lead to better practices for the field of epidemiology.
Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin
2018-04-20
We introduce an alternative distribution to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of fit of the IGG, log-normal (LN), and GG distributions to experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable to the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match the experimental data. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
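The modulation construction described above is straightforward to simulate: draw the large-scale factor from an inverse Gaussian (here via the standard Michael-Schucany-Haas transform) and the small-scale factor from a unit-mean gamma, then multiply. Parameter values below are illustrative, not fitted atmospheric ones:

```python
import math
import random

def sample_inverse_gaussian(mu, lam, rng):
    """Michael-Schucany-Haas transform for an inverse Gaussian variate
    with mean mu and shape lam."""
    v = rng.gauss(0.0, 1.0) ** 2
    x = mu + (mu * mu * v) / (2 * lam) \
        - (mu / (2 * lam)) * math.sqrt(4 * mu * lam * v + (mu * v) ** 2)
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x

def sample_igg(alpha, mu, lam, n, seed=0):
    """Irradiance I = (large-scale inverse Gaussian) x (small-scale gamma),
    the modulation process described for the IGG model; parameter names
    here are illustrative, not the paper's notation."""
    rng = random.Random(seed)
    return [sample_inverse_gaussian(mu, lam, rng) *
            rng.gammavariate(alpha, 1.0 / alpha)  # unit-mean gamma factor
            for _ in range(n)]

samples = sample_igg(alpha=4.0, mu=1.0, lam=2.0, n=20000)
```

With a unit-mean gamma factor the mean irradiance equals mu, so scintillation strength is controlled by alpha and lam alone, mirroring how the model ties its parameters to atmospheric conditions.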
NASA Astrophysics Data System (ADS)
Afanas'ev, V. P.; Gryazev, A. S.; Efremenko, D. S.; Kaplya, P. S.; Kuznetcova, A. V.
2017-12-01
Precise knowledge of the differential inverse inelastic mean free path (DIIMFP) and differential surface excitation probability (DSEP) of tungsten is essential for many fields of material science. In this paper, a fitting algorithm is applied for extracting the DIIMFP and DSEP from X-ray photoelectron spectra and electron energy loss spectra. The algorithm uses the partial intensity approach as a forward model, in which a spectrum is given as a weighted sum of cross-convolved DIIMFPs and DSEPs. The weights are obtained as solutions of the Riccati and Lyapunov equations derived from the invariant imbedding principle. The inversion algorithm utilizes a parametrization of the DIIMFPs and DSEPs on the basis of a classical Lorentz oscillator. Unknown parameters of the model are found by using the fitting procedure, which minimizes the residual between measured spectra and forward simulations. It is found that the surface layer of tungsten contains several sublayers with corresponding Langmuir resonances. The thicknesses of these sublayers are proportional to the periods of the corresponding Langmuir oscillations, as predicted by the theory of R.H. Ritchie.
Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia
NASA Astrophysics Data System (ADS)
Mather, B.; Moresi, L. N.; Rayner, P. J.
2017-12-01
The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters depend on the geophysical observations and a priori information available to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that have assumed homogeneity. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high dimensional probability spaces, compared to probabilistic inversion techniques that operate in the `forward' mode, but at the expense of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis, which perturbs our observations and a priori information within their probability distributions, to estimate the posterior uncertainty of thermo-chemical parameters in the crust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaur, Ramanpreet; Vikas, E-mail: qlabspu@pu.ac.in, E-mail: qlabspu@yahoo.com
2015-02-21
2-Aminopropionitrile (2-APN), a probable candidate as a chiral astrophysical molecule, is a precursor to the amino acid alanine. Stereochemical pathways in 2-APN are explored using the Global Reaction Route Mapping (GRRM) method employing high-level quantum-mechanical computations. Besides predicting the conventional mechanism for chiral inversion that proceeds through an achiral intermediate, a counterintuitive flipping mechanism is revealed for 2-APN through chiral intermediates explored using the GRRM. The feasibility of the proposed stereochemical pathways, in terms of the Gibbs free-energy change, is analyzed at temperature conditions akin to the interstellar medium. Notably, the stereoinversion in 2-APN is observed to be more feasible than the dissociation of 2-APN and of the intermediates involved along the stereochemical pathways, and the flipping barrier is observed to be as low as 3.68 kJ/mol along one of the pathways. The pathways proposed for the inversion of chirality in 2-APN may provide significant insight into the extraterrestrial origin of life.
Modulating light propagation in ZnO-Cu₂O-inverse opal solar cells for enhanced photocurrents.
Yantara, Natalia; Pham, Thi Thu Trang; Boix, Pablo P; Mathews, Nripan
2015-09-07
The advantages of employing an interconnected periodic ZnO morphology, i.e. an inverse opal structure, in electrodeposited ZnO/Cu2O devices are presented. The solar cells are fabricated using low cost solution based methods such as spin coating and electrodeposition. The impact of inverse opal geometry, mainly the diameter and thickness, is scrutinized. By employing 3 layers of an inverse opal structure with a 300 nm pore diameter, higher short circuit photocurrents (∼84% improvement) are observed; however the open circuit voltages decrease with increasing interfacial area. Optical simulation using a finite difference time domain method shows that the inverse opal structure modulates light propagation within the devices such that more photons are absorbed close to the ZnO/Cu2O junction. This increases the collection probability resulting in improved short circuit currents.
Wang, Guoliang; Liu, Shenghua; Wang, Li; Meng, Liukun; Cui, Chuanjue; Zhang, Hao; Hu, Shengshou; Ma, Ning; Wei, Yingjie
2017-01-01
Endoplasmic reticulum (ER) stress, a feature of many conditions associated with pulmonary hypertension (PH), is increasingly recognized as a common response promoting proliferation in the walls of pulmonary arteries. Increased expression of Lipocalin-2 (Lcn2) in PH led us to test the hypothesis that Lcn2, a protein known to sequester iron and regulate it intracellularly, might facilitate ER stress and proliferation in pulmonary arterial smooth muscle cells (PASMCs). In this study, we observed greatly increased Lcn2 expression accompanied by increased ATF6 cleavage in a standard rat model of pulmonary hypertension induced by monocrotaline. In cultured human PASMCs, Lcn2 significantly promoted ER stress (determined by augmented cleavage and nuclear localization of ATF6, up-regulated transcription of GRP78 and NOGO, increased expression of SOD2, and mildly augmented mitochondrial membrane potential) and proliferation (assessed by Ki67 staining and BrdU incorporation). Lcn2 promoted ER stress accompanied by augmented intracellular iron levels in human PASMCs. Treatment of human PASMCs with FeSO4 induced a similar ER stress and proliferation response, and an iron chelator (deferoxamine) abrogated the ER stress and proliferation induced by Lcn2 in cultured human PASMCs. In conclusion, Lcn2 significantly promoted human PASMC ER stress and proliferation by augmenting intracellular iron, and its up-regulation is probably involved in the pathogenesis and progression of PH. PMID:28255266
Reticular influences on primary and augmenting responses in the somatosensory cortex.
Steriade, M; Morin, D
1981-01-26
The effects of brief, conditioning trains of high-frequency pulses to the midbrain reticular formation (RF) on primary and augmenting responses of somatosensory (SI) cortex were investigated. Testing stimulation was applied to the ventrobasal (VB) thalamus or to the white matter (WM) beneath SI in VB-lesioned animals. The RF-elicited EEG activation was associated with increased firing rates of SI neurons, enhanced probability of early synaptic discharges to VB or WM stimuli, and significantly reduced duration of the suppressed firing period following an afferent VB or WM volley. The diminished latency of the postinhibitory rebound under RF stimulation had the consequence that, within 10/sec shock-train, the second stimulus was delivered following completion of the rebound component and, instead of an augmented potential, generated a field response of primary-type. The dependence of the RF-induced change in augmenting potentials upon the sharpening effect exerted on the preceding inhibitory-rebound sequence was corroborated by analyzing the RF influence on neurons with different time-course of recovery from inhibition. The replacement of augmenting potentials by primary responses under RF stimulation is advanced as the mechanism behind the obliteration of spontaneously developing 'type I' spindle-waves during EEG arousal. The demonstration of RF influences on SI responses to WM stimulation in VB-lesioned animals points out the cortical level of the effects. The reticulo-thalamo-cortical pathways underlying these influences are discussed.
Augmenting superpopulation capture-recapture models with population assignment data
Wen, Zhi; Pollock, Kenneth; Nichols, James; Waser, Peter
2011-01-01
Ecologists applying capture-recapture models to animal populations sometimes have access to additional information about individuals' populations of origin (e.g., information about genetics, stable isotopes, etc.). Tests that assign an individual's genotype to its most likely source population are increasingly used. Here we show how to augment a superpopulation capture-recapture model with such information. We consider a single superpopulation model without age structure, and split each entry probability into separate components due to births in situ and immigration. We show that it is possible to estimate these two probabilities separately. We first consider the case of perfect information about population of origin, where we can distinguish individuals born in situ from immigrants with certainty. Then we consider the more realistic case of imperfect information, where we use genetic or other information to assign probabilities to each individual's origin as in situ or outside the population. We use a resampling approach to impute the true population of origin from imperfect assignment information. The integration of data on population of origin with capture-recapture data allows us to determine the contributions of immigration and in situ reproduction to the growth of the population, an issue of importance to ecologists. We illustrate our new models with capture-recapture and genetic assignment data from a population of banner-tailed kangaroo rats Dipodomys spectabilis in Arizona.
Estimating the concordance probability in a survival analysis with a discrete number of risk groups.
Heller, Glenn; Mo, Qianxing
2016-04-01
A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
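For intuition, the uncorrected pairwise version of the concordance estimate is easy to write down. This sketch deliberately omits the inverse probability of censoring weights that the paper uses to correct for censoring, and follows the usual c-index convention that tied discrete risk scores contribute 1/2:

```python
def concordance_discrete(times, events, risk_group):
    """Naive concordance for discrete risk scores: among usable pairs
    (subject i observed to fail strictly before subject j's time), count
    the fraction where the earlier failure had the higher risk group.
    Censoring weights (the paper's IPCW correction) are omitted here."""
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                den += 1
                if risk_group[i] > risk_group[j]:
                    num += 1
                elif risk_group[i] == risk_group[j]:
                    num += 0.5  # tied discrete scores count one half
    return num / den if den else float('nan')

# Toy data (made up): times, event indicators, discrete risk groups.
times  = [2, 5, 5, 8, 11]
events = [1, 1, 0, 1, 0]
risk   = [3, 2, 2, 1, 1]
c = concordance_discrete(times, events, risk)
```

With continuous risk scores ties almost never occur, which is why the discrete-score setting of the paper requires modified estimators.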
Gomez-Marcos, Manuel A; Recio-Rodríguez, José I; Patino-Alonso, Maria C; Agudo-Conde, Cristina; Lasaosa-Medina, Lourdes; Rodriguez-Sanchez, Emiliano; Maderuelo-Fernandez, José A; García-Ortiz, Luis
2014-06-01
To analyze the relationship between regular physical activity, as assessed by accelerometer and 7-day physical activity recall (PAR) with vascular structure and function based on carotid intima-media thickness, pulse wave velocity, central and peripheral augmentation index and the ambulatory arterial stiffness index in adults. This study analyzed 263 subjects who were included in the EVIDENT study (mean age 55.85 ± 12.21 years; 59.30% female). Physical activity was assessed during 7 days using the Actigraph GT3X accelerometer (counts/minute) and 7-day PAR (metabolic equivalents (METs)/hour/week). Carotid ultrasound was used to measure carotid intima media thickness (IMT). The SphygmoCor System was used to measure pulse wave velocity (PWV), and central and peripheral augmentation index (CAIx and PAIx). The B-pro device was used to measure ambulatory arterial stiffness index (AASI). Median counts/minute was 244.37 and mean METs/hour/week was 11.49. Physical activity showed an inverse correlation with PAIx (r = -0.179; p < 0.01) and vigorous activity day time with IMT (r = -0.174), CAIx (r = -0.217) and PAIx (r = -0.324) (p < 0.01, all). Sedentary activity day time was correlated positively with CAIx (r = 0.103; p < 0.05). In multiple regression analysis, after adjusting for confounding factors, the inverse association of CAIx with counts/minute and the time spent in moderate and vigorous activity were maintained as well as the positive association with sedentary activity day time (p < 0.05). Physical activity, assessed by counts/minute, and the amount of time spent in moderate, vigorous/very vigorous physical activity, showed an inverse association with CAIx. Likewise, the time spent in sedentary activity was positively associated with the CAIx. Clinical Trials.gov Identifier: NCT01083082. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J
2017-01-01
Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). 
Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
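The pooling step of MIte is simply Rubin's rules applied to the M imputation-specific inverse probability of treatment weighting estimates. A minimal combiner, with toy numbers rather than values from the study:

```python
def rubin_combine(estimates, variances):
    """Rubin's rules: pool M imputation-specific treatment-effect
    estimates (the MIte approach) into one estimate and total variance."""
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m                                  # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)   # between-imputation
    total_var = w + (1.0 + 1.0 / m) * b
    return qbar, total_var

# Three imputed datasets' IPTW estimates and variances (made up).
est, var = rubin_combine([0.12, 0.10, 0.15], [0.04, 0.05, 0.045])
```

The between-imputation term is what distinguishes MIte's variance from a single-imputation analysis, and it is this term that the abstract reports Rubin's rules estimating well.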
Reconstruction of stochastic temporal networks through diffusive arrival times
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xiang
2017-06-01
Temporal networks have opened a new dimension in defining and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied on both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.
Generating probabilistic Boolean networks from a prescribed transition probability matrix.
Ching, W-K; Chen, X; Tsing, N-K
2009-11-01
Probabilistic Boolean networks (PBNs) have received much attention in modeling genetic regulatory networks. A PBN can be regarded as a Markov chain process and is characterised by a transition probability matrix. In this study, the authors propose efficient algorithms for constructing a PBN when its transition probability matrix is given. The complexities of the algorithms are also analysed. This is an interesting inverse problem in network inference using steady-state data. The problem is important as most microarray data sets are assumed to be obtained from sampling the steady-state.
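The forward direction of this inverse problem is simple to state: a PBN that switches among deterministic constituent Boolean networks with fixed selection probabilities induces a transition matrix that is the probability-weighted mixture of the constituents' matrices. A toy two-state sketch (the constituent maps are made up):

```python
def pbn_transition_matrix(networks, probs):
    """Forward map whose inversion the paper studies: each constituent
    network is a deterministic state-to-state map (list: state -> next
    state); selecting network k with probability probs[k] yields this
    Markov transition matrix."""
    n_states = len(networks[0])
    a = [[0.0] * n_states for _ in range(n_states)]
    for net, p in zip(networks, probs):
        for s, t in enumerate(net):
            a[s][t] += p
    return a

# Two 2-state constituent networks: net1 maps 0->1, 1->1; net2 maps 0->0, 1->0.
A = pbn_transition_matrix([[1, 1], [0, 0]], [0.7, 0.3])
```

The paper's algorithms run this map in reverse: given a transition probability matrix (e.g., inferred from steady-state microarray data), they construct constituent networks and selection probabilities consistent with it.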
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
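For reference, the classical (non-inverted) Wald SPRT that the procedure builds on prescribes the two error probabilities and thresholds the cumulative log-likelihood ratio; the inverse procedure instead computes the error probabilities from the data. A sketch of the classical test for a Gaussian mean with known variance, on toy data:

```python
import math

def sprt_gaussian_mean(xs, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Classical Wald SPRT for H0: mean = mu0 vs H1: mean = mu1, known
    sigma. Accept H0 if the cumulative log-likelihood ratio falls below
    log(beta/(1-alpha)), accept H1 above log((1-beta)/alpha). The paper's
    'inverted' variant computes the error probabilities instead of
    prescribing them; that variant is not reproduced here."""
    lo = math.log(beta / (1.0 - alpha))
    hi = math.log((1.0 - beta) / alpha)
    llr = 0.0
    for i, x in enumerate(xs, start=1):
        # Gaussian log-likelihood-ratio increment for one observation.
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr <= lo:
            return 'accept H0', i
        if llr >= hi:
            return 'accept H1', i
    return 'continue', len(xs)

decision, n_used = sprt_gaussian_mean([2.1, 1.8, 2.4, 2.2, 2.6],
                                      mu0=0.0, mu1=2.0, sigma=1.0)
```

Because the data sit near mu1, the cumulative ratio crosses the upper threshold after only two observations, illustrating the early-decision property that motivates sequential monitoring.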
Radiation from violently accelerated bodies
NASA Astrophysics Data System (ADS)
Gerlach, Ulrich H.
2001-11-01
A determination is made of the radiation emitted by a linearly uniformly accelerated uncharged dipole transmitter. It is found that, first of all, the radiation rate is given by the familiar Larmor formula, but it is augmented by an amount which becomes dominant for sufficiently high acceleration. For an accelerated dipole oscillator, the criterion is that the center of mass motion become relativistic within one oscillation period. The augmented formula and the measurements which it summarizes presuppose an expanding inertial observation frame. A static inertial reference frame will not do. Secondly, it is found that the radiation measured in the expanding inertial frame is received with 100% fidelity. There is no blueshift or redshift due to the accelerative motion of the transmitter. Finally, it is found that a pair of coherently radiating oscillators accelerating (into opposite directions) in their respective causally disjoint Rindler-coordinatized sectors produces an interference pattern in the expanding inertial frame. Like the pattern of a Young double slit interferometer, this Rindler interferometer pattern has a fringe spacing which is inversely proportional to the proper separation and the proper frequency of the accelerated sources. The interferometer, as well as the augmented Larmor formula, provide a unifying perspective. It joins adjacent Rindler-coordinatized neighborhoods into a single spacetime arena for scattering and radiation from accelerated bodies.
NASA Astrophysics Data System (ADS)
Sheng, Zheng
2013-02-01
The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov Chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian-MCMC approach can obtain not only approximate solutions, but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is implemented on both simulated and real radar sea-clutter data; reference data are taken from the simulations and from refractivity profiles obtained by helicopter soundings. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation and helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
Long-term Changes in Extreme Air Pollution Meteorology and the Implications for Air Quality.
Hou, Pei; Wu, Shiliang
2016-03-31
Extreme air pollution meteorological events, such as heat waves, temperature inversions and atmospheric stagnation episodes, can significantly affect air quality. Based on observational data, we have analyzed the long-term evolution of extreme air pollution meteorology on the global scale and its potential impacts on air quality, especially the high pollution episodes. We have identified significant increasing trends in the occurrences of extreme air pollution meteorological events over the past six decades, especially over the continental regions. Statistical analysis combining air quality data and meteorological data further indicates strong sensitivities of air quality (including both average air pollutant concentrations and high pollution episodes) to extreme meteorological events. For example, we find that in the United States the probability of severe ozone pollution during heat waves can be up to seven times the average summertime probability, while temperature inversions in wintertime can enhance the probability of severe particulate matter pollution by more than a factor of two. We have also identified significant seasonal and spatial variations in the sensitivity of air quality to extreme air pollution meteorology.
Daza, Eric J; Hudgens, Michael G; Herring, Amy H
Individuals may drop out of a longitudinal study, rendering their outcomes unobserved but still well defined. However, they may also undergo truncation (for example, death), beyond which their outcomes are no longer meaningful. Kurland and Heagerty (2005, Biostatistics 6: 241-258) developed a method to conduct regression conditioning on nontruncation, that is, regression conditioning on continuation (RCC), for longitudinal outcomes that are monotonically missing at random (for example, because of dropout). This method first estimates the probability of dropout among continuing individuals to construct inverse-probability weights (IPWs), then fits generalized estimating equations (GEE) with these IPWs. In this article, we present the xtrccipw command, which can both estimate the IPWs required by RCC and then use these IPWs in a GEE estimator by calling the glm command from within xtrccipw. In the absence of truncation, the xtrccipw command can also be used to run a weighted GEE analysis. We demonstrate the xtrccipw command by analyzing an example dataset and the original Kurland and Heagerty (2005) data. We also use xtrccipw to illustrate some empirical properties of RCC through a simulation study.
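The weighting scheme described above, estimate the probability of remaining under observation and then weight each complete case by its inverse, can be sketched in a few lines. This is a hypothetical Python toy (not the Stata xtrccipw command itself), with invented data in which dropout depends on a baseline covariate:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical snapshot of a longitudinal study: x is a baseline
# covariate, y is the outcome, observed marks who has not dropped out.
n = 5000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))   # dropout depends on x (MAR)
observed = rng.random(n) < p_obs

# Step 1: model the probability of remaining observed.
drop_model = LogisticRegression().fit(x.reshape(-1, 1), observed)
p_hat = drop_model.predict_proba(x.reshape(-1, 1))[:, 1]

# Step 2: weight each complete case by 1 / P(observed).
w = 1.0 / p_hat[observed]

# The weighted mean corrects the selection induced by dropout.
naive = y[observed].mean()
ipw = np.average(y[observed], weights=w)
print(naive, ipw)
```

Because individuals with large x are over-represented among completers, the naive complete-case mean is biased; reweighting by the inverse observation probability restores a consistent estimate of the population mean (2.0 here).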
Isotropic probability measures in infinite-dimensional spaces
NASA Technical Reports Server (NTRS)
Backus, George
1987-01-01
Let R be the real numbers, R(n) the linear space of all real n-tuples, and R(∞) the linear space of all infinite real sequences x = (x_1, x_2, ...). Let P_n : R(∞) → R(n) be the projection operator with P_n(x) = (x_1, ..., x_n). Let p(∞) be a probability measure on the smallest sigma-ring of subsets of R(∞) which includes all of the cylinder sets P_n^(-1)(B_n), where B_n is an arbitrary Borel subset of R(n). Let p_n be the marginal distribution of p(∞) on R(n), so p_n(B_n) = p(∞)(P_n^(-1)(B_n)) for each B_n. A measure on R(n) is isotropic if it is invariant under all orthogonal transformations of R(n). All members of the set of all isotropic probability distributions on R(n) are described. The result calls into question both stochastic inversion and Bayesian inference, as currently used in many geophysical inverse problems.
A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola
2018-04-01
This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution, the so-called posterior probability distribution (PPD), for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets, the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as its main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters.
The algorithm is implemented using a simple MPI approach, in which independent chains run on isolated CPUs to take full advantage of parallel computer architectures. For large datasets, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute the forward solutions.
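The accept/discard step at the heart of such samplers can be illustrated with a fixed-dimension Metropolis-Hastings toy (the reversible-jump machinery for varying the number of layers is omitted). The forward model, noise level and prior bounds below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: two predicted data as smooth functions of a single
# log-conductivity parameter m (a hypothetical stand-in for the MT response).
def forward(m):
    return np.array([np.tanh(m), 0.5 * m])

m_true = 0.8
sigma = 0.05                                  # assumed data noise std
data = forward(m_true) + rng.normal(0.0, sigma, size=2)

def log_post(m):
    # Flat prior on [-5, 5], Gaussian likelihood.
    if abs(m) > 5.0:
        return -np.inf
    r = data - forward(m)
    return -0.5 * np.sum(r**2) / sigma**2

# Metropolis-Hastings with a symmetric Gaussian proposal.
m, lp = 0.0, log_post(0.0)
samples = []
for _ in range(20000):
    cand = m + 0.2 * rng.normal()
    lp_cand = log_post(cand)
    if np.log(rng.random()) < lp_cand - lp:   # MH acceptance rule
        m, lp = cand, lp_cand
    samples.append(m)

post = np.array(samples[5000:])               # discard burn-in
print(post.mean(), post.std())
```

Each candidate is accepted with probability min(1, exp(Δ log posterior)); the retained samples approximate the posterior probability distribution of the parameter.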
Computer simulation radiation damages in condensed matters
NASA Astrophysics Data System (ADS)
Kupchishin, A. I.; Kupchishin, A. A.; Voronova, N. A.; Kirdyashkin, V. I.; Gyngazov, V. A.
2016-02-01
As part of the cascade-probability method, the energy spectra of primary knocked-out atoms (PKA) and the concentrations of radiation-induced defects were calculated for a number of metals irradiated by electrons. As follows from the formulas, the number of Frenkel pairs at a given depth depends on three quantities with clear physical meaning: first, Cd(Ea, h) is proportional to the average energy of the PKA at the considered depth (the higher it is, the greater the number of atoms displaced); second, it is inversely proportional to the path length λ1 for the formation of the PKA (the larger λ1, the smaller the probability of interaction); and third, it is inversely proportional to Ed. The calculations are in satisfactory agreement with experimental data (for example, for copper and aluminum).
Universal characteristics of fractal fluctuations in prime number distribution
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2014-11-01
The frequency of occurrence of prime numbers at unit number spacing intervals exhibits self-similar fractal fluctuations, with an inverse power-law form for the power spectrum generic to dynamical systems in nature such as fluid flows, stock market fluctuations and population dynamics. The physics of the long-range correlations exhibited by fractals is not yet identified. A recently developed general systems theory visualizes the eddy continuum underlying fractals as resulting from the growth of large eddies as the integrated mean of enclosed small-scale eddies, thereby generating a hierarchy of eddy circulations, an interconnected network with associated long-range correlations. The model predictions are as follows: (1) The probability distribution and power spectrum of fractals follow the same inverse power law, which is a function of the golden mean. The predicted inverse power-law distribution is very close to the statistical normal distribution for fluctuations within two standard deviations from the mean of the distribution. (2) Fractals signify quantum-like chaos, since the variance spectrum represents a probability density distribution, a characteristic of quantum systems such as electrons or photons. (3) Fractal fluctuations of the frequency distribution of prime numbers signify spontaneous organization of the underlying continuum number field into the ordered pattern of the quasiperiodic Penrose tiling. The model predictions are in agreement with the probability distributions and power spectra for different sets of frequencies of occurrence of prime numbers at unit number intervals over successive blocks of 1000 numbers. Prime numbers in the first 10 million numbers were used for the study.
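The frequency series the study analyzes, counts of primes in successive blocks of 1000 integers over the first 10 million numbers, is easy to reproduce; a minimal sketch (the spectrum-estimation step is an assumed plain-FFT choice):

```python
import numpy as np

# Sieve of Eratosthenes up to N, then count primes in successive
# blocks of 1000 numbers, matching the abstract's frequency series.
N = 10_000_000
is_prime = np.ones(N + 1, dtype=bool)
is_prime[:2] = False
for p in range(2, int(N**0.5) + 1):
    if is_prime[p]:
        is_prime[p * p :: p] = False

# counts[k] = number of primes in (1000*k, 1000*(k+1)].
counts = is_prime[1:].reshape(-1, 1000).sum(axis=1)

# Power spectrum of the fluctuations about the mean frequency.
fluct = counts - counts.mean()
power = np.abs(np.fft.rfft(fluct)) ** 2
print(counts[:5])
print(power[1:4])
```

The first block recovers the classical count of 168 primes below 1000, and the spectrum of the fluctuation series is the object whose inverse power-law shape the model predicts.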
Descalzo, Miguel Á; Garcia, Virginia Villaverde; González-Alvaro, Isidoro; Carbonell, Jordi; Balsa, Alejandro; Sanmartí, Raimon; Lisbona, Pilar; Hernandez-Barrera, Valentín; Jiménez-Garcia, Rodrigo; Carmona, Loreto
2013-02-01
To describe the results of different statistical approaches for addressing radiographic outcomes affected by missing data, the multiple imputation (MI) technique, inverse probability weights and complete-case (CC) analysis, using data from an observational study. A random sample of 96 RA patients was selected for a follow-up study in which radiographs of hands and feet were scored. Radiographic progression was tested by comparing the change in the total Sharp-van der Heijde radiographic score (TSS) and the joint erosion score (JES) from baseline to the end of the second year of follow-up. The MI technique, inverse probability weights in a weighted estimating equation (WEE) and CC analysis were used to fit a negative binomial regression. Major predictors of radiographic progression were JES and joint space narrowing (JSN) at baseline, together with baseline disease activity measured by the DAS28 for TSS, and MTX use for JES. Results from the CC analysis show larger coefficients and standard errors compared with the MI and weighted techniques. The results from the WEE model were largely in line with those of MI. If it seems plausible that CC or MI analysis may be valid, then MI should be preferred because of its greater efficiency. CC analysis resulted in inefficient estimates or, translated into non-statistical terminology, could lead to inaccurate results and unwise conclusions. The methods discussed here will contribute to the use of alternative approaches for tackling missing data in observational studies.
Comparing hard and soft prior bounds in geophysical inverse problems
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.
Automated Pole Placement Algorithm for Multivariable Optimal Control Synthesis.
1985-09-01
The effective Q_e and F_e after n reassignments are given by Q_e = Q_1 + Q_2 + ... + Q_n (eqn 4.11) and F_e = F_1 + F_2 + ... + F_n (eqn 4.12). Inverse transformation and determination of Q_n and F_e are identical to the distinct-eigenvalue case, with M in equation 4.9 replaced by T.
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian, with a single truncation to impose positivity of slip, or a double truncation to impose both positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the one-, two- or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian and an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be derived efficiently. I show that the maximum a posteriori (MAP) estimate can be obtained using a non-negative least-squares algorithm (Lawson & Hanson 1974) for the single truncated case, or the bounded-variable least-squares algorithm (Stark & Parker 1995) for the double truncated case.
I show that the case of independent uniform priors can be approximated using the TMVN. Numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and for a real case of interseismic modeling in central Peru. The TMVN method overcomes several limitations of the MCMC-based Bayesian approach. First, the need for computing power is largely reduced. Second, unlike MCMC-based approaches, the marginal pdfs, means, variances and covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP estimate is extremely fast.
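The single-truncation MAP computation described above can be sketched with a standard non-negative least-squares solver: a Gaussian data misfit plus a zero-mean Gaussian prior with a positivity constraint reduces to one stacked NNLS problem. The operator, noise level and prior width below are hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Hypothetical linear fault-slip problem: d = G m + noise, with m >= 0.
n_obs, n_patch = 30, 8
G = rng.normal(size=(n_obs, n_patch))
m_true = np.array([0.0, 0.5, 1.2, 2.0, 1.5, 0.7, 0.1, 0.0])
sigma, tau = 0.05, 2.0            # data noise std, Gaussian prior std
d = G @ m_true + rng.normal(0.0, sigma, size=n_obs)

# MAP under a zero-mean Gaussian prior truncated at zero:
# minimize ||(G m - d)/sigma||^2 + ||m/tau||^2 subject to m >= 0,
# solved as a single non-negative least-squares problem (Lawson & Hanson).
A = np.vstack([G / sigma, np.eye(n_patch) / tau])
b = np.concatenate([d / sigma, np.zeros(n_patch)])
m_map, _ = nnls(A, b)
print(np.round(m_map, 2))
```

Stacking the whitened data equations with the whitened prior equations turns the penalized objective into an ordinary least-squares form, so the positivity constraint is the only extra ingredient NNLS has to handle.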
Spinal Meninges and Their Role in Spinal Cord Injury: A Neuroanatomical Review.
Grassner, Lukas; Grillhösl, Andreas; Griessenauer, Christoph J; Thomé, Claudius; Bühren, Volker; Strowitzki, Martin; Winkler, Peter A
2018-02-01
Current recommendations support early surgical decompression and blood pressure augmentation after traumatic spinal cord injury (SCI). Elevated intraspinal pressure (ISP), however, has probably been underestimated in the pathophysiology of SCI. Recent studies provide some evidence that ISP measurements and durotomy may be beneficial for individuals suffering from SCI. Compression of the spinal cord against the meninges in SCI patients causes a "compartment-like" syndrome. In such cases, intentional durotomy with augmentative duroplasty to reduce ISP and improve spinal cord perfusion pressure (SCPP) may be indicated. Prior to performing these procedures routinely, profound knowledge of the spinal meninges is essential. Here, we provide an in-depth review of relevant literature along with neuroanatomical illustrations and imaging correlates.
Korsgaard, Inge Riis; Lund, Mogens Sandø; Sorensen, Daniel; Gianola, Daniel; Madsen, Per; Jensen, Just
2003-01-01
A fully Bayesian analysis using Gibbs sampling and data augmentation in a multivariate model of Gaussian, right censored, and grouped Gaussian traits is described. The grouped Gaussian traits are either ordered categorical traits (with more than two categories) or binary traits, where the grouping is determined via thresholds on the underlying Gaussian scale, the liability scale. Allowances are made for unequal models, unknown covariance matrices and missing data. Having outlined the theory, strategies for implementation are reviewed. These include joint sampling of location parameters; efficient sampling from the fully conditional posterior distribution of augmented data, a multivariate truncated normal distribution; and sampling from the conditional inverse Wishart distribution, the fully conditional posterior distribution of the residual covariance matrix. Finally, a simulated dataset was analysed to illustrate the methodology. This paper concentrates on a model where residuals associated with liabilities of the binary traits are assumed to be independent. A Bayesian analysis using Gibbs sampling is outlined for the model where this assumption is relaxed. PMID:12633531
Inverse reasoning processes in obsessive-compulsive disorder.
Wong, Shiu F; Grisham, Jessica R
2017-04-01
The inference-based approach (IBA) is one cognitive model that aims to explain the aetiology and maintenance of obsessive-compulsive disorder (OCD). The model proposes that certain reasoning processes lead an individual with OCD to confuse an imagined possibility with an actual probability, a state termed inferential confusion. One such reasoning process is inverse reasoning, in which hypothetical causes form the basis of conclusions about reality. Although previous research has found associations between a self-report measure of inferential confusion and OCD symptoms, evidence of a specific association between inverse reasoning and OCD symptoms is lacking. In the present study, we developed a task-based measure of inverse reasoning in order to investigate whether performance on this task is associated with OCD symptoms in an online sample. The results provide some evidence for the IBA assertion: greater endorsement of inverse reasoning was significantly associated with OCD symptoms, even when controlling for general distress and OCD-related beliefs. Future research is needed to replicate this result in a clinical sample and to investigate a potential causal role for inverse reasoning in OCD.
Eigenvectors phase correction in inverse modal problem
NASA Astrophysics Data System (ADS)
Qiao, Guandong; Rahmatalla, Salam
2017-12-01
The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems is heavily dependent on the quality of the modal parameters obtained from experiments. Because experimental and environmental noise will always exist during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive definiteness or semi-definiteness and the interconnectivity of the spatial matrices were implemented using semidefinite programming. Numerical examples utilizing noisy eigenvectors with added Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior when compared with a known method in the literature.
PVC biodeterioration and DEHP leaching by DEHP-degrading bacteria
Latorre, Isomar; Hwang, Sangchul; Sevillano, Maria; Montalvo-Rodriguez, Rafael
2012-01-01
Newly isolated, not previously reported, di-(2-ethylhexyl) phthalate (DEHP) degraders were augmented to assess their role in polyvinyl chloride (PVC) shower curtain deterioration and DEHP leaching. The biofilms that developed on the surfaces of the shower curtains bioaugmented with Gram-positive strains LHM1 and LHM2 were thicker than those of the biostimulated and Gram-negative strain LHM3-augmented shower curtains. The first-derivative thermogravimetric (DTG) peaks of the shower curtains bioaugmented with the Gram-positive bacteria were observed at ~287°C, whereas those of the control and Gram-negative strain LHM3-augmented shower curtains were detected at ~283°C. This slight delay in the first DTG peak temperature is indicative of lower plasticizer concentrations in the shower curtains that were bioaugmented with Gram-positive bacteria. Despite bioaugmentation with DEHP degraders, aqueous solutions of the bioaugmentation reactors were not DEHP-free, probably due to the presence of co-solutes that must have supported microbial growth. Generally, the bioaugmented reactors with the Gram-positive strains LHM1 and LHM2 had greater aqueous DEHP concentrations in the first half (<3 wk) of the biodeterioration experiment than the biostimulated and strain LHM3-augmented reactors. Therefore, strains LHM1 and LHM2 may play an important role in DEHP leaching to the environment and PVC biodeterioration. PMID:22736894
Stable Lévy motion with inverse Gaussian subordinator
NASA Astrophysics Data System (ADS)
Kumar, A.; Wyłomańska, A.; Gajda, J.
2017-09-01
In this paper we study the stable Lévy motion subordinated by the so-called inverse Gaussian process. This process extends the well-known normal inverse Gaussian (NIG) process introduced by Barndorff-Nielsen, which arises by subordinating ordinary Brownian motion (with drift) with an inverse Gaussian process. The NIG process has found many interesting applications, especially in the description of financial data. We discuss here the main features of the introduced subordinated process, such as distributional properties, existence of fractional-order moments and asymptotic tail behavior. We show the connection of the process with continuous-time random walks. Further, the governing fractional partial differential equation for the probability density function is also obtained. Moreover, we discuss the asymptotic distribution of the sample mean square displacement, the main tool in the detection of anomalous diffusion phenomena (Metzler et al., 2014). In order to apply the stable Lévy motion time-changed by the inverse Gaussian subordinator, we propose a step-by-step procedure for parameter estimation. At the end, we show how the examined process can be useful in modeling financial time series.
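A minimal simulation sketch of the Brownian (NIG) special case, assuming hypothetical parameter values; the paper's general process would replace the conditionally Gaussian steps below with alpha-stable ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# NIG special case: Brownian motion with drift, time-changed by an
# inverse Gaussian (IG) subordinator.  All parameter values are invented.
n_steps = 10_000
dt = 1e-3
mu_ig, lam_ig = dt, 1.0          # IG(mean, shape) increment parameters

# Subordinator: nondecreasing cumulative sum of positive IG increments
# (numpy's 'wald' distribution is the inverse Gaussian).
dT = rng.wald(mu_ig, lam_ig, size=n_steps)
T = np.cumsum(dT)

# Parent process evaluated at the random clock T:
# X(T) = theta*T + sigma*B(T), built from conditionally Gaussian steps.
theta, sigma = 0.5, 1.0
dX = theta * dT + sigma * np.sqrt(dT) * rng.normal(size=n_steps)
X = np.cumsum(dX)
print(T[-1], X[-1])
```

The operational time T is strictly increasing, so X read against T is a legitimate time change; increments of X are NIG-distributed because each Gaussian step is mixed over a random IG variance.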
Model selection and Bayesian inference for high-resolution seabed reflection inversion.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2009-02-01
This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
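The BIC trade-off used for choosing the number of sediment layers can be illustrated on a generic nested-model family; here polynomial order stands in for layer count (a hypothetical analogy, not the paper's reflection-coefficient model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data from a quadratic truth plus noise; BIC should stop
# adding parameters once the penalized fit no longer improves.
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.1, size=n)

def bic(order):
    coeffs = np.polyfit(x, y, order)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = order + 1
    # Gaussian BIC up to an additive constant: n*log(RSS/n) + k*log(n)
    return n * np.log(rss / n) + k * np.log(n)

scores = {order: bic(order) for order in range(6)}
best = min(scores, key=scores.get)
print(scores)
print(best)
```

Underparameterized models pay through the misfit term, overparameterized ones through the k·log(n) penalty, so the minimum typically lands at the true order; in the layer-counting setting the same criterion decides how many layers the data actually support.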
Symbolic inversion of control relationships in model-based expert systems
NASA Technical Reports Server (NTRS)
Thomas, Stan
1988-01-01
Symbolic inversion is examined from several perspectives. First, a number of symbolic algebra and mathematical tool packages were studied in order to evaluate their capabilities and methods, specifically with respect to symbolic inversion. Second, the KATE system (without hardware interface) was ported to a Zenith Z-248 microcomputer running Golden Common Lisp. The interesting thing about the port is that it allows the user to have measurements vary and components fail in a non-deterministic manner, based upon random values drawn from probability distributions. Third, INVERT was studied as currently implemented in KATE, its operation documented, some of its weaknesses identified, and corrections made to it. The corrections and enhancements are primarily in the way that logical conditions involving AND's, OR's and inequalities are processed. In addition, the capability to handle equalities was added. Suggestions were also made regarding the handling of ranges in INVERT. Finally, other approaches to the inversion process were studied and recommendations were made as to how future versions of KATE should perform symbolic inversion.
Control of a high beta maneuvering reentry vehicle using dynamic inversion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watts, Alfred Chapman
2005-05-01
The design of flight control systems for high-performance maneuvering reentry vehicles presents a significant challenge to the control systems designer. These vehicles typically have a much higher ballistic coefficient than crewed vehicles such as the Space Shuttle or proposed crew return vehicles such as the X-38. Moreover, the missions of high-performance vehicles usually require a steeper reentry flight path angle, followed by a pull-out into level flight. These vehicles then must transit the entire atmosphere and robustly perform the maneuvers required for the mission. The vehicles must also be flown with small static margins in order to perform the required maneuvers, which can result in highly nonlinear aerodynamic characteristics that frequently transition from aerodynamically stable to unstable as angle of attack increases. The control system design technique of dynamic inversion has been applied successfully to both high-performance aircraft and low-beta reentry vehicles. The objective of this study was to explore the application of this technique to high-performance maneuvering reentry vehicles, including the basic derivation of the dynamic inversion technique, followed by the extension of that technique to the use of tabular trim aerodynamic models in the controller. The dynamic inversion equations are developed for high-performance vehicles and augmented to allow the selection of a desired response for the control system. A six-degree-of-freedom simulation is used to evaluate the performance of the dynamic inversion approach, and results for both nominal and off-nominal aerodynamic characteristics are presented.
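The core of dynamic inversion, cancel the modeled nonlinear dynamics and impose desired linear error dynamics, can be sketched on a toy pendulum with invented constants:

```python
import numpy as np

# Dynamic inversion (feedback linearization) on a toy pendulum:
# theta_ddot = -a*sin(theta) - b*theta_dot + c*u.  Constants are hypothetical.
a, b, c = 9.81, 0.5, 2.0
k1, k2 = 4.0, 4.0              # gains giving critically damped error dynamics

def control(theta, omega):
    v = -k1 * theta - k2 * omega                      # desired linear response
    return (v + a * np.sin(theta) + b * omega) / c    # invert the dynamics

# Euler simulation from a large initial angle.
dt, theta, omega = 1e-3, 1.0, 0.0
for _ in range(10_000):
    u = control(theta, omega)
    alpha = -a * np.sin(theta) - b * omega + c * u
    theta += dt * omega
    omega += dt * alpha
print(theta, omega)
```

With a perfect model the closed loop collapses to theta_ddot = -k1*theta - k2*theta_dot regardless of the nonlinearity; the "desired response" augmentation mentioned in the abstract corresponds to the choice of v.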
Empirical data on 220 families with de novo or inherited paracentric inversions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eyre, J.; McConkie-Rosell, A.; Tripp, T.
Six new cases of paracentric inversions (3 detected prenatally) are presented and added to an expanding database of paracentric inversions. Three inversions were associated with an abnormal phenotype and detected postnatally: inv(2)(p21p23), inv(13)(q14q34), and inv(18)(q12.3q23). The present database includes 220 reported families. All chromosomes were involved except chromosome 20. The most frequent inversions were found on chromosomes 1, 3, 7, 11, and 14. Forty-eight index cases had an abnormal phenotype not explainable by other causes such as additional chromosome abnormalities; of these, 12 were de novo and 36 familial. By contrast, of the 122 index cases with normal phenotype, there were 8 de novo and 87 familial cases (the rest unknown). Ascertainment bias probably accounts for some of the abnormal inherited inversion cases. Maternally inherited inversions were more frequent than paternally inherited ones (72 versus 55). Inversions were found in males more often than in females (ratio of 4 to 3). Some paracentric inversions appear to be less often involved with abnormal phenotypes (e.g., 11q21q23) than others (e.g., inv X and Turner syndrome). An interesting observation which warrants further investigation is the excess number of fetal losses and karyotypically abnormal progeny in paracentric inversion carriers. The presence of additional karyotypic abnormalities in the children might be explainable by interchromosomal effects and chromosome position changes in the nucleus. Genetic counseling for paracentric inversions should take into consideration the mode of ascertainment, inheritance, and the chromosome involved. We solicit other cases of paracentric inversions to make this database more useful in counseling patients and families.
Afsar, Baris; Elsurer, Rengin; Soypacaci, Zeki; Kanbay, Mehmet
2016-02-01
Although anthropometric measurements are related to clinical outcomes, these relationships are not universal and differ in some disease states, such as chronic kidney disease (CKD). The current study aimed to analyze the relationship of height, weight and BMI with hemodynamic and arterial stiffness parameters in patients with and without CKD separately. This cross-sectional study included 381 hypertensive patients, with (n = 226) and without (n = 155) CKD. Routine laboratory tests and 24-h urine collection were performed. The augmentation index (Aix), the ratio of augmentation pressure to pulse pressure, was calculated from the blood pressure waveform after adjusting the heart rate to 75 bpm [Aix@75 (%)]. Pulse wave velocity (PWV) is a simple measure of the time taken by the pressure wave to travel over a specific distance. Both Aix@75 (%) and PWV, which are measures of arterial stiffness, were measured by validated oscillometric methods using a Mobil-O-Graph device. In patients without CKD, height was inversely correlated with Aix@75 (%); additionally, weight and BMI were positively associated with PWV in multivariate analysis. In patients with CKD, however, weight and BMI were inversely and independently related to PWV: as weight and BMI increased, stiffness parameters such as Aix@75 (%) and PWV decreased. In conclusion, while BMI and weight are positively associated with arterial stiffness in patients without CKD, this association is negative in patients with CKD.
Lee, Inn-Chi; Chen, Yung-Jung; Lee, Hong-Shen; Li, Shuan-Yow
2014-12-01
The outcomes of children with cryptogenic seizures most probably arising from the frontal lobe are difficult to predict. We retrospectively collected data on 865 pediatric patients with epilepsy. In 78 patients with cryptogenic frontal lobe epilepsy, the age at first seizure was inversely correlated with the outcome, including the degree of intellectual disability/developmental delay (P = .002) and seizure frequency (P = .02) after adequate treatment. Intellectual disability was more prevalent in children with a first seizure at 0 to 3 years old (P = .002), and seizures were more frequent in those with a first seizure at 0 to 6 years old than at 7 to 16 years old (P = .026). For pediatric cryptogenic frontal lobe epilepsy, the age at first seizure is important and inversely correlated with outcome, including seizure frequency and intellectual disability. © The Author(s) 2013.
Objectified quantification of uncertainties in Bayesian atmospheric inversions
NASA Astrophysics Data System (ADS)
Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.
2015-05-01
Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization on a large set of plausible errors that can be prescribed in the system. The marginalization consists of computing inversions for all possible error distributions weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of the maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia.
Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
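The marginalization described above can be illustrated in a few lines on a toy linear-Gaussian system (all matrices, variances and the error grid below are invented for the sketch, not taken from the paper): each candidate observation-error variance yields its own inversion, and the resulting posterior means are averaged with weights given by an approximate probability of occurrence, here a maximum-likelihood-style marginal likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian inversion: y = H x + noise, prior x ~ N(xb, B).
H = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
x_true = np.array([2.0, -1.0])
y = H @ x_true + rng.normal(scale=0.3, size=3)
xb = np.zeros(2)
B = np.eye(2)

def posterior_mean(r_var):
    """Usual Bayesian update for a given observation-error variance r_var."""
    R = r_var * np.eye(len(y))
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

def log_marginal_likelihood(r_var):
    """log p(y | r_var): y ~ N(H xb, H B H^T + R), used to weight each error setting."""
    S = H @ B @ H.T + r_var * np.eye(len(y))
    d = y - H @ xb
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d @ np.linalg.solve(S, d) + logdet + len(y) * np.log(2 * np.pi))

# Marginalize: average the inversions over plausible error variances,
# weighted by their (approximate) probability of occurrence.
r_grid = np.array([0.05, 0.1, 0.3, 1.0, 3.0])
logw = np.array([log_marginal_likelihood(r) for r in r_grid])
w = np.exp(logw - logw.max())
w /= w.sum()
x_marg = sum(wi * posterior_mean(ri) for wi, ri in zip(w, r_grid))
print(x_marg)
```

In the paper the set of plausible error statistics is sampled by Monte Carlo rather than enumerated on a grid, but the weighting principle is the same.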
Setó-Salvia, Núria; Sánchez-Quinto, Federico; Carbonell, Eudald; Lorenzo, Carlos; Comas, David; Clarimón, Jordi
2012-12-01
The polymorphic inversion on 17q21, that includes the MAPT gene, represents a unique locus in the human genome characterized by a large region with strong linkage disequilibrium. Two distinct haplotypes, H1 and H2, exist in modern humans, and H1 has been unequivocally related to several neurodegenerative disorders. Recent data indicate that recurrent inversions of this genomic region have occurred through primate evolution, with the H2 haplotype being the ancestral state. Neandertals harbored the H1 haplotype; however, until now, no data were available for the Denisova hominin. Neandertals and Denisovans are sister groups that share a common ancestor with modern humans. We analyzed the MAPT sequence and assessed the differences between modern humans, Neandertals, Denisovans, and great apes. Our analysis indicated that the Denisova hominin carried the H1 haplotype, and the Neandertal and Denisova common ancestor probably shared the same subhaplotype (H1j). We also found 68 intronic variants within the MAPT gene, 23 exclusive to Denisova hominin, 6 limited to Neandertals, and 24 exclusive to present-day humans. Our results reinforce previous data; this suggests that the 17q21 inversion arose within the modern human lineage. The data also indicate that archaic hominins that coexisted in Eurasia probably shared the same MAPT subhaplotype, and this can be found in almost 2% of chromosomes from European ancestry. Copyright © 2013 Wayne State University Press, Detroit, Michigan 48201-1309.
Inverse Theory for Petroleum Reservoir Characterization and History Matching
NASA Astrophysics Data System (ADS)
Oliver, Dean S.; Reynolds, Albert C.; Liu, Ning
This book is a guide to the use of inverse theory for estimation and conditional simulation of flow and transport parameters in porous media. It describes the theory and practice of estimating properties of underground petroleum reservoirs from measurements of flow in wells, and it explains how to characterize the uncertainty in such estimates. Early chapters present the reader with the necessary background in inverse theory, probability and spatial statistics. The book demonstrates how to calculate sensitivity coefficients and the linearized relationship between models and production data. It also shows how to develop iterative methods for generating estimates and conditional realizations. The text is written for researchers and graduates in petroleum engineering and groundwater hydrology and can be used as a textbook for advanced courses on inverse theory in petroleum engineering. It includes many worked examples to demonstrate the methodologies and a selection of exercises.
NASA Astrophysics Data System (ADS)
Pankratov, Oleg; Kuvshinov, Alexey
2016-01-01
Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, there is one question which remains poorly addressed—uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous amount of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, if a 3-D setup is invoked. This urges the development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest for fractions of a second on high-performance multi-core platforms. But, even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to sample. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach.
In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
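The basic idea of a gradient-informed M-H proposal can be sketched with the Langevin (MALA) form on a toy quadratic penalty; this is a generic illustration of the mechanism, not the authors' adjoint- and Hessian-based implementation, and the penalty matrix below is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy penalty phi(m) with fixed Hessian A; in the EM setting, phi would be the
# misfit functional and grad_phi would come from adjoint computations.
A = np.array([[2.0, 0.3], [0.3, 1.0]])

def phi(m):
    return 0.5 * m @ A @ m

def grad_phi(m):
    return A @ m

eps = 0.4  # proposal step size

def log_q(a, b):
    """Log density (up to a constant) of the Langevin proposal a | b."""
    d = a - (b - 0.5 * eps**2 * grad_phi(b))
    return -d @ d / (2 * eps**2)

m = np.zeros(2)
samples = []
for _ in range(5000):
    # Gradient-informed proposal: deterministic drift downhill plus noise.
    prop = m - 0.5 * eps**2 * grad_phi(m) + eps * rng.normal(size=2)
    # Metropolis-Hastings correction keeps the chain exactly on exp(-phi).
    log_alpha = phi(m) - phi(prop) + log_q(m, prop) - log_q(prop, m)
    if np.log(rng.random()) < log_alpha:
        m = prop
    samples.append(m)

samples = np.array(samples[1000:])  # discard burn-in
print(samples.mean(axis=0))
```

Because the drift pushes proposals toward high-posterior regions, acceptance stays high for larger steps than a random-walk proposal would allow, which is the acceleration the abstract refers to.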
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared with non-participants following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
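The step-by-step IPW workflow described above—model the probability of exposure, form stabilized weights, then contrast weighted outcomes—can be sketched on simulated data (all variable names and numbers below are invented for illustration, not from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Simulated cohort: binary confounder Z affects both intervention uptake A
# and infection outcome Y; the true causal risk difference is -0.1.
Z = rng.binomial(1, 0.4, n)
A = rng.binomial(1, np.where(Z == 1, 0.7, 0.3))   # confounded participation
Y = rng.binomial(1, 0.3 - 0.1 * A + 0.2 * Z)

# Step 1: model P(A = 1 | Z) -- here a saturated model (empirical proportions).
phat = np.array([A[Z == z].mean() for z in (0, 1)])[Z]

# Step 2: stabilized inverse probability weights.
w = np.where(A == 1, A.mean() / phat, (1 - A.mean()) / (1 - phat))

# Step 3: weighted outcome contrast (risk difference in the pseudo-population).
rd_ipw = np.average(Y[A == 1], weights=w[A == 1]) - \
         np.average(Y[A == 0], weights=w[A == 0])
rd_naive = Y[A == 1].mean() - Y[A == 0].mean()
print(rd_naive, rd_ipw)
```

The naive contrast is biased toward zero because the confounder raises both participation and infection risk; the weighted contrast recovers the causal effect under the usual IPW assumptions (no unmeasured confounding, positivity, correct weight model).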
Thomas, Duncan C
2017-07-01
Screening behavior depends on previous screening history and family members' behaviors, which can act as both confounders and intermediate variables on a causal pathway from screening to disease risk. Conventional analyses that adjust for these variables can lead to incorrect inferences about the causal effect of screening if high-risk individuals are more likely to be screened. Analyzing the data in a manner that treats screening as randomized conditional on covariates allows causal parameters to be estimated; inverse probability weighting based on propensity of exposure scores is one such method considered here. I simulated family data under plausible models for the underlying disease process and for screening behavior to assess the performance of alternative methods of analysis and whether a targeted screening approach based on individuals' risk factors would lead to a greater reduction in cancer incidence in the population than a uniform screening policy. Simulation results indicate that there can be a substantial underestimation of the effect of screening on subsequent cancer risk when using conventional analysis approaches, which is avoided by using inverse probability weighting. A large case-control study of colonoscopy and colorectal cancer from Germany shows a strong protective effect of screening, but inverse probability weighting makes this effect even stronger. Targeted screening approaches based on either fixed risk factors or family history yield somewhat greater reductions in cancer incidence with fewer screens needed to prevent one cancer than population-wide approaches, but the differences may not be large enough to justify the additional effort required. See video abstract at http://links.lww.com/EDE/B207.
Sun, Yanqing; Qi, Li; Yang, Guangren; Gilbert, Peter B
2018-05-01
This article develops hypothesis testing procedures for the stratified mark-specific proportional hazards model with missing covariates where the baseline functions may vary with strata. The mark-specific proportional hazards model has been studied to evaluate mark-specific relative risks where the mark is the genetic distance of an infecting HIV sequence to an HIV sequence represented inside the vaccine. This research is motivated by analyzing the RV144 phase 3 HIV vaccine efficacy trial, to understand associations of immune response biomarkers on the mark-specific hazard of HIV infection, where the biomarkers are sampled via a two-phase sampling nested case-control design. We test whether the mark-specific relative risks are unity and how they change with the mark. The developed procedures enable assessment of whether risk of HIV infection with HIV variants close or far from the vaccine sequence is modified by immune responses induced by the HIV vaccine; this question is interesting because vaccine protection occurs through immune responses directed at specific HIV sequences. The test statistics are constructed based on augmented inverse probability weighted complete-case estimators. The asymptotic properties and finite-sample performances of the testing procedures are investigated, demonstrating double-robustness and effectiveness of the predictive auxiliaries to recover efficiency. The finite-sample performance of the proposed tests is examined through a comprehensive simulation study. The methods are applied to the RV144 trial. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal
2009-01-01
The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455
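The weight-adjustment idea in the abstract above—calibrating design weights so that weighted totals of an auxiliary variable match known cohort totals, thereby reducing the design-based variance component—can be sketched with a single auxiliary variable (a ratio/post-stratification calibration; the cohort, sampling fraction and variables below are invented, and the R survey package the authors cite handles the general case):

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite "cohort" with an auxiliary variable x known for everyone, and a
# quantity u (standing in for influence-function contributions) observed
# only in the phase-two sample.
N = 10000
x = rng.normal(10, 2, N)
u = 3 * x + rng.normal(0, 1, N)       # u is strongly correlated with x

# Phase two: simple random sample with known sampling fraction.
f = 0.1
in_sample = rng.random(N) < f
w = np.full(in_sample.sum(), 1 / f)   # base design (IPW) weights

# Horvitz-Thompson estimate of the cohort total of u.
tot_ht = np.sum(w * u[in_sample])

# Calibrated weights: force the weighted total of x to match its known
# cohort total, which transfers x's information to the estimate for u.
w_cal = w * x.sum() / np.sum(w * x[in_sample])
tot_cal = np.sum(w_cal * u[in_sample])

print(tot_ht, tot_cal, u.sum())
```

Because u is nearly a multiple of x, the calibrated estimator's error is driven only by the residual noise in u, which is the variance-reduction mechanism the abstract describes.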
Assessing Hospital Performance After Percutaneous Coronary Intervention Using Big Data.
Spertus, Jacob V; T Normand, Sharon-Lise; Wolf, Robert; Cioffi, Matt; Lovett, Ann; Rose, Sherri
2016-11-01
Although risk adjustment remains a cornerstone for comparing outcomes across hospitals, optimal strategies continue to evolve in the presence of many confounders. We compared a conventional regression-based model with approaches particularly suited to leveraging big data. We assessed hospital all-cause 30-day excess mortality risk among 8952 adults undergoing percutaneous coronary intervention between October 1, 2011, and September 30, 2012, in 24 Massachusetts hospitals using clinical registry data linked with billing data. We compared conventional logistic regression models with augmented inverse probability weighted estimators and targeted maximum likelihood estimators to generate more efficient and unbiased estimates of hospital effects. We also compared a clinically informed and a machine-learning approach to confounder selection, using elastic net penalized regression in the latter case. Hospital excess risk estimates range from -1.4% to 2.0% across methods and confounder sets. Some hospitals were consistently classified as low or as high excess mortality outliers; others changed classification depending on the method and confounder set used. Switching from the clinically selected list of 11 confounders to a full set of 225 confounders increased the estimation uncertainty by an average of 62% across methods as measured by confidence interval length. Agreement among methods ranged from fair, with a κ statistic of 0.39 (SE: 0.16), to perfect, with a κ of 1 (SE: 0.0). Modern causal inference techniques should be more frequently adopted to leverage big data while minimizing bias in hospital performance assessments. © 2016 American Heart Association, Inc.
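The augmented IPW (AIPW) estimator mentioned in this and several of the abstracts above combines an outcome-regression prediction with an inverse-probability-weighted residual correction, and is consistent if either working model is correct. A minimal sketch for estimating a single counterfactual mean E[Y(1)] on simulated data (the data-generating process and both working models below are invented; here both happen to be correctly specified):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50000

# Simulated data: covariate Z confounds treatment A and outcome Y.
Z = rng.normal(size=n)
p = 1 / (1 + np.exp(-Z))           # true propensity P(A = 1 | Z)
A = rng.binomial(1, p)
Y = 2 + Z + rng.normal(size=n)     # true E[Y(1)] = 2 (A has no effect here)

# Working models. AIPW remains consistent if either one -- the propensity
# model or the outcome regression -- is misspecified (double robustness).
p_hat = 1 / (1 + np.exp(-Z))       # propensity model
m_hat = 2 + Z                      # outcome model for E[Y | A = 1, Z]

# Augmented IPW estimator of E[Y(1)]: outcome-model prediction plus an
# inverse-probability-weighted correction using the observed residuals.
aipw = np.mean(m_hat + A * (Y - m_hat) / p_hat)
ipw_only = np.mean(A * Y / p_hat)
print(ipw_only, aipw)
```

The augmentation term also shrinks the variance relative to plain IPW when the outcome model explains much of Y, which is why AIPW and TMLE are described as more efficient.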
Interpreting the handling qualities of aircraft with stability and control augmentation
NASA Technical Reports Server (NTRS)
Hodgkinson, J.; Potsdam, E. H.; Smith, R. E.
1990-01-01
The general process of designing an aircraft for good flying qualities is first discussed. Lessons learned are pointed out, with piloted evaluation emerging as a crucial element. Two sources of rating variability in performing these evaluations are then discussed. First, the finite endpoints of the Cooper-Harper scale do not bias parametric statistical analyses unduly. Second, the wording of the scale does introduce some scatter. Phase lags generated by augmentation systems, as represented by equivalent time delays, often cause poor flying qualities. An analysis is introduced which allows a designer to relate any level of time delay to a probability of loss of aircraft control. This view of time delays should, it is hoped, allow better visibility of the time delays in the design process.
Nonlinear Spatial Inversion Without Monte Carlo Sampling
NASA Astrophysics Data System (ADS)
Curtis, A.; Nawaz, A.
2017-12-01
High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by a probability distribution, and these distributions are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods would converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as `localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired.
The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable, so these do not need to be estimated from samples as is required in MC methods. On a 2-D test example the method is shown to outperform previous methods significantly, and at a fraction of the computational cost. In many foreseeable applications there are therefore no serious impediments to extending the method to 3-D spatial models.
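The core machinery—exact posterior marginals computed from localized likelihoods and a Markov prior, with no sampling—can be illustrated with the classical forward-backward recursion on a 1-D column of cells (a deliberately simplified analogue: the paper's model is 2-D and its prior comes from a training image, while the facies names, rates and noise level below are invented):

```python
import numpy as np

# Two hidden facies (0 = "shale", 1 = "sand"; illustrative labels) with a
# Markov-chain prior expressing spatial continuity along a 1-D column.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # transition probabilities
pi0 = np.array([0.5, 0.5])
mu = np.array([6.0, 4.5])         # facies-wise mean impedance
sigma = 0.8                       # observation noise

rng = np.random.default_rng(5)
n = 200
z = [0]
for _ in range(n - 1):            # simulate a true facies column
    z.append(rng.choice(2, p=T[z[-1]]))
z = np.array(z)
obs = rng.normal(mu[z], sigma)    # noisy localized observations

# Localized likelihoods: each observation depends only on its own cell.
like = np.exp(-0.5 * ((obs[:, None] - mu[None, :]) / sigma) ** 2)

# Forward-backward recursion: exact posterior marginals, no sampling.
fwd = np.zeros((n, 2)); bwd = np.ones((n, 2))
fwd[0] = pi0 * like[0]; fwd[0] /= fwd[0].sum()
for t in range(1, n):
    fwd[t] = (fwd[t - 1] @ T) * like[t]
    fwd[t] /= fwd[t].sum()
for t in range(n - 2, -1, -1):
    bwd[t] = T @ (like[t + 1] * bwd[t + 1])
    bwd[t] /= bwd[t].sum()
post = fwd * bwd
post /= post.sum(axis=1, keepdims=True)

accuracy = np.mean(post.argmax(axis=1) == z)
print(accuracy)
```

The recursion cost is linear in the number of cells, which is the source of the "fraction of the computational cost" claim relative to MC sampling.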
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Toronyi, B.; Puszta, S.
2012-01-01
In this study we interpret the magnetic anomalies at satellite altitude over a part of Europe and the Pannonian Basin. These anomalies are derived from the total magnetic field measurements of the CHAMP satellite and were reduced to an elevation of 324 km. An inversion method is used to interpret the total magnetic anomalies over the Pannonian Basin. A three-dimensional triangular model is used in the inversion. Two parameter distributions, Laplacian and Gaussian, are investigated. The regularized inversion is numerically calculated with the Simplex and Simulated Annealing methods, and the anomalous source is located in the upper crust. A probable source of the magnetization is the exsolution of the hematite-ilmenite minerals.
Simulation of inverse Compton scattering and its implications on the scattered linewidth
NASA Astrophysics Data System (ADS)
Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.
2018-03-01
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
Sattler, Sebastian; Mehlkop, Guido; Graeff, Peter; Sauer, Carsten
2014-02-01
The use of cognitive enhancement (CE) by means of pharmaceutical agents has been the subject of intense debate both among scientists and in the media. This study investigates several drivers of and obstacles to the willingness to use prescription drugs non-medically for augmenting brain capacity. We conducted a web-based study among 2,877 students from randomly selected disciplines at German universities. Using a factorial survey, respondents expressed their willingness to take various hypothetical CE-drugs; the drugs were described by five experimentally varied characteristics and the social environment by three varied characteristics. Personal characteristics and demographic controls were also measured. We found that 65.3% of the respondents staunchly refused to use CE-drugs. The results of a multivariate negative binomial regression indicated that respondents' willingness to use CE-drugs increased if the potential drugs promised a significant augmentation of mental capacity and a high probability of achieving this augmentation. Willingness decreased when there was a high probability of side effects and a high price. Prevalent CE-drug use among peers increased willingness, whereas a social environment that strongly disapproved of these drugs decreased it. Regarding the respondents' characteristics, pronounced academic procrastination, high cognitive test anxiety, low intrinsic motivation, low internalization of social norms against CE-drug use, and past experiences with CE-drugs increased willingness. The potential severity of side effects, social recommendations about using CE-drugs, risk preferences, and competencies had no measured effects upon willingness. These findings contribute to understanding factors that influence the willingness to use CE-drugs. They support the assumption of instrumental drug use and may contribute to the development of prevention, policy, and educational strategies.
Genotyping the factor VIII intron 22 inversion locus using fluorescent in situ hybridization.
Sheen, Campbell R; McDonald, Margaret A; George, Peter M; Smith, Mark P; Morris, Christine M
2011-02-15
The factor VIII intron 22 inversion is the most common cause of hemophilia A, accounting for approximately 40% of all severe cases of the disease. Southern hybridization and multiplex long distance PCR are the most commonly used techniques to detect the inversion in a diagnostic setting, although both have significant limitations. Here we describe our experience establishing a multicolor fluorescent in situ hybridization (FISH) based assay as an alternative to existing methods for genetic diagnosis of the inversion. Our assay was designed to apply three differentially labelled BAC DNA probes that when hybridized to interphase nuclei would exhibit signal patterns that are consistent with the normal or the inversion locus. When the FISH assay was applied to five normal and five inversion male samples, the correct genotype was assignable with p<0.001 for all samples. When applied to carrier female samples the assay could not assign a genotype to all female samples, probably due to a lower proportion of informative nuclei in female samples caused by the added complexity of a second X chromosome. Despite this complication, these pilot findings show that the assay performs favourably compared to the commonly used methods. Copyright © 2010 Elsevier Inc. All rights reserved.
Lidar measurements of mesospheric temperature inversion at a low latitude
NASA Astrophysics Data System (ADS)
Siva Kumar, V.; Bhavani Kumar, Y.; Raghunath, K.; Rao, P. B.; Krishnaiah, M.; Mizutani, K.; Aoki, T.; Yasui, M.; Itabe, T.
2001-08-01
The Rayleigh lidar data collected on 119 nights from March 1998 to February 2000 were used to study the statistical characteristics of the low latitude mesospheric temperature inversion observed over Gadanki (13.5° N, 79.2° E), India. The occurrence frequency of the inversion showed semiannual variation with maxima in the equinoxes and minima in the summer and winter, which was quite different from that reported for the mid-latitudes. The peak of the inversion layer was found to be confined to the height range of 73 to 79 km with the maximum occurrence centered around 76 km, with a weak seasonal dependence that fits well to an annual cycle with a maximum in June and a minimum in December. The magnitude of the temperature deviation associated with the inversion was found to be as high as 32 K, with the most probable value occurring at about 20 K. Its seasonal dependence seems to follow an annual cycle with a maximum in April and a minimum in October. The observed characteristics of the inversion layer are compared with that of the mid-latitudes and discussed in light of the current understanding of the source mechanisms.
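The annual-cycle fits used to summarize the seasonal dependence above amount to least-squares regression of a monthly series on a mean plus a 12-month harmonic. A minimal sketch (the monthly values below are synthetic, constructed only to show the fit recovering a prescribed April maximum, and are not the Gadanki measurements):

```python
import numpy as np

months = np.arange(12)
# Synthetic monthly temperature deviations (K) peaking in April (month 3).
amp = 20 + 6 * np.cos(2 * np.pi * (months - 3) / 12)

# Design matrix: mean term plus one annual harmonic.
X = np.column_stack([np.ones(12),
                     np.cos(2 * np.pi * months / 12),
                     np.sin(2 * np.pi * months / 12)])
coef, *_ = np.linalg.lstsq(X, amp, rcond=None)
mean, a, b = coef

amplitude = np.hypot(a, b)                              # harmonic amplitude
peak_month = (np.arctan2(b, a) * 12 / (2 * np.pi)) % 12  # phase -> month
print(mean, amplitude, peak_month)
```

The same three-parameter fit applied to occurrence frequency instead of amplitude would expose the semiannual component reported for the occurrence statistics (by adding a 6-month harmonic pair to the design matrix).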
II. MORE THAN JUST CONVENIENT: THE SCIENTIFIC MERITS OF HOMOGENEOUS CONVENIENCE SAMPLES.
Jager, Justin; Putnick, Diane L; Bornstein, Marc H
2017-06-01
Despite their disadvantaged generalizability relative to probability samples, nonprobability convenience samples are the standard within developmental science, and likely will remain so because probability samples are cost-prohibitive and most available probability samples are ill-suited to examine developmental questions. In lieu of focusing on how to eliminate or sharply reduce reliance on convenience samples within developmental science, here we propose how to augment their advantages when it comes to understanding population effects as well as subpopulation differences. Although all convenience samples have less clear generalizability than probability samples, we argue that homogeneous convenience samples have clearer generalizability relative to conventional convenience samples. Therefore, when researchers are limited to convenience samples, they should consider homogeneous convenience samples as a positive alternative to conventional (or heterogeneous) convenience samples. We discuss future directions as well as potential obstacles to expanding the use of homogeneous convenience samples in developmental science. © 2017 The Society for Research in Child Development, Inc.
A new exact method for line radiative transfer
NASA Astrophysics Data System (ADS)
Elitzur, Moshe; Asensio Ramos, Andrés
2006-01-01
We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the 'effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the 3P lines of OI.
Bayesian seismic tomography by parallel interacting Markov chains
NASA Astrophysics Data System (ADS)
Gesret, Alexandrine; Bottero, Alexis; Romary, Thomas; Noble, Mark; Desassis, Nicolas
2014-05-01
The velocity field estimated by first-arrival traveltime tomography is commonly used as a starting point for further seismological, mineralogical, tectonic or similar analysis. To interpret the results quantitatively, the tomography uncertainty values as well as their spatial distribution are required. The estimated velocity model is obtained through inverse modeling by minimizing an objective function that compares observed and computed traveltimes. This step is often performed by gradient-based optimization algorithms. The major drawback of such local optimization schemes, beyond the possibility of being trapped in a local minimum, is that they do not account for the multiple possible solutions of the inverse problem. They are therefore unable to assess the uncertainties linked to the solution. Within a Bayesian (probabilistic) framework, solving the tomography inverse problem amounts to estimating the posterior probability density function of the velocity model using a global sampling algorithm. Markov chain Monte Carlo (MCMC) methods are known to produce samples of virtually any distribution. In such a Bayesian inversion, the total number of simulations we can afford is strongly constrained by the computational cost of the forward model. Although fast algorithms have recently been developed for computing first-arrival traveltimes of seismic waves, fully exploring the posterior distribution of the velocity model is rarely feasible, especially when it is high dimensional and/or multimodal. In the latter case, the chain may even stay stuck in one of the modes. In order to improve the mixing properties of a classical single MCMC chain, we propose to make several Markov chains at different temperatures interact. This method can make efficient use of large CPU clusters without increasing the global computational cost with respect to classical MCMC and is therefore particularly suited for Bayesian inversion.
The exchanges between the chains allow precise sampling of the high-probability zones of the model space while preventing the chains from becoming stuck in a single probability maximum. This approach thus supplies a robust way to analyze tomography imaging uncertainties. The interacting MCMC approach is illustrated on two synthetic examples of tomography of calibration shots such as those encountered in induced microseismic studies. In the second application, a wavelet-based model parameterization is presented that significantly reduces the dimension of the problem, making the algorithm efficient even for a complex velocity model.
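The interacting-chains idea can be illustrated with a toy sketch (our own illustration, not the authors' code): several Metropolis chains sample tempered versions of a deliberately bimodal one-dimensional "posterior", and occasional state swaps between adjacent temperatures let the cold chain hop between modes that would trap a single chain.

```python
import math
import random

def log_post(x):
    # Toy bimodal log-density standing in for a multimodal tomography posterior.
    return math.log(math.exp(-0.5 * (x + 3.0) ** 2) +
                    math.exp(-0.5 * (x - 3.0) ** 2))

def parallel_tempering(n_steps=20000, temps=(1.0, 4.0, 16.0), seed=1):
    rng = random.Random(seed)
    x = [0.0] * len(temps)                 # one chain per temperature
    cold = []
    for _ in range(n_steps):
        # Metropolis update within each tempered chain (target: post**(1/T)).
        for i, t in enumerate(temps):
            prop = x[i] + rng.gauss(0.0, 1.0)
            if math.log(rng.random()) < (log_post(prop) - log_post(x[i])) / t:
                x[i] = prop
        # Propose a state swap between a random adjacent pair of temperatures.
        i = rng.randrange(len(temps) - 1)
        a = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (log_post(x[i + 1]) - log_post(x[i]))
        if math.log(rng.random()) < a:
            x[i], x[i + 1] = x[i + 1], x[i]
        cold.append(x[0])                  # keep only the T = 1 chain
    return cold

samples = parallel_tempering()
frac_right = sum(s > 0 for s in samples) / len(samples)
```

With the swaps enabled, the cold chain visits both modes near -3 and +3; a single Metropolis chain with the same step size would typically stay in one of them.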
Associations of blood pressure, sunlight, and vitamin D in community-dwelling adults.
Rostand, Stephen G; McClure, Leslie A; Kent, Shia T; Judd, Suzanne E; Gutiérrez, Orlando M
2016-09-01
Vitamin D deficiency/insufficiency is associated with hypertension. Blood pressure (BP) and circulating vitamin D concentrations vary with the seasons and with distance from the equator, suggesting that BP varies inversely with the sunshine available (insolation) for cutaneous vitamin D photosynthesis. To determine whether the association between insolation and BP is partly explained by vitamin D, we evaluated 1104 participants in the Reasons for Racial and Geographic Differences in Stroke study whose BP and plasma 25-hydroxyvitamin D [25(OH)D] concentrations were measured. We found a significant inverse association between SBP and 25(OH)D concentration and an inverse association between insolation and BP in unadjusted analyses. After adjusting for other confounding variables, the association of solar insolation and BP was augmented, -0.3.5 ± SEM 0.01 mmHg/1 SD higher solar insolation, P = 0.01. The greatest effects of insolation on SBP were observed in whites (-5.2 ± SEM 0.92 mmHg/1 SD higher solar insolation, P = 0.005) and in women (-3.8 ± SEM 1.7 mmHg, P = 0.024). We found that adjusting for 25(OH)D had no effect on the association of solar insolation with SBP. We conclude that although 25(OH)D concentration is inversely associated with SBP, it did not explain the association of greater sunlight exposure with lower BP.
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: they do if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and of the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in which circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
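A crude Monte Carlo version of such simultaneous intervals can be sketched as follows (an illustration of the idea only, not the paper's exact construction): per-point intervals for each order statistic are widened, by shrinking a per-point tail level gamma, until all n plotted points fall inside their intervals simultaneously in at least 1-α of simulated normal samples.

```python
import random

def simultaneous_envelope(n, alpha=0.05, n_sim=2000, seed=0):
    """Monte Carlo envelope for the n order statistics of a standard normal
    sample, calibrated so the JOINT coverage is about 1 - alpha (calibration
    reuses the same simulations, which is adequate for a sketch)."""
    rng = random.Random(seed)
    sims = [sorted(rng.gauss(0.0, 1.0) for _ in range(n)) for _ in range(n_sim)]
    cols = [sorted(c) for c in zip(*sims)]   # cols[i]: simulated i-th order stats
    # Shrink the per-point tail level until joint coverage reaches 1 - alpha.
    for gamma in (alpha * f for f in (1.0, 0.5, 0.25, 0.1, 0.05, 0.02, 0.01)):
        lo = [c[int(gamma / 2.0 * n_sim)] for c in cols]
        hi = [c[int((1.0 - gamma / 2.0) * n_sim) - 1] for c in cols]
        joint = sum(all(l <= x <= h for l, x, h in zip(lo, s, hi)) for s in sims)
        if joint >= (1.0 - alpha) * n_sim:
            break
    return lo, hi

lo, hi = simultaneous_envelope(n=20)
```

A sample whose sorted values all lie between `lo` and `hi` passes the corresponding graphical test; any point outside signals non-normality at level α.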
Blocking for Sequential Political Experiments
Moore, Sally A.
2013-01-01
In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
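Sequential covariate-adaptive assignment with continuous covariates can be sketched as below. This is a generic Pocock-Simon-style minimization using running covariate means, our own hedged illustration rather than the authors' exact procedure: each arriving unit is assigned, with high probability, to whichever arm most reduces the imbalance in covariate means.

```python
import random

def minimization_assign(new_unit, assigned, p_best=0.8, rng=random):
    """Assign one arriving unit to arm 0 or 1, favoring (with probability
    p_best, the biased coin) the arm that minimizes the post-assignment
    imbalance in covariate means. `assigned` holds (arm, covariates) so far."""
    def imbalance(arm):
        totals = {0: [0.0] * len(new_unit), 1: [0.0] * len(new_unit)}
        counts = {0: 0, 1: 0}
        for a, cov in assigned + [(arm, new_unit)]:
            counts[a] += 1
            totals[a] = [t + c for t, c in zip(totals[a], cov)]
        if counts[0] == 0 or counts[1] == 0:
            return float("inf")
        return sum(abs(totals[0][j] / counts[0] - totals[1][j] / counts[1])
                   for j in range(len(new_unit)))
    better = min((0, 1), key=imbalance)
    return better if rng.random() < p_best else 1 - better

# Units "trickle in" one at a time with two continuous covariates
# (hypothetical: a standardized score and an age-like variable).
rng = random.Random(2)
assigned = []
for _ in range(200):
    unit = (rng.gauss(0.0, 1.0), rng.gauss(50.0, 10.0))
    assigned.append((minimization_assign(unit, assigned, rng=rng), unit))
```

Compared with complete randomization, the resulting arms have markedly closer covariate means, which is the balance advantage the abstract describes.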
Wave energy focusing to subsurface poroelastic formations to promote oil mobilization
NASA Astrophysics Data System (ADS)
Karve, Pranav M.; Kallivokas, Loukas F.
2015-07-01
We discuss an inverse source formulation aimed at focusing wave energy produced by ground surface sources to target subsurface poroelastic formations. The intent of the focusing is to facilitate or enhance the mobility of oil entrapped within the target formation. The underlying forward wave propagation problem is cast in two spatial dimensions for a heterogeneous poroelastic target embedded within a heterogeneous elastic semi-infinite host. The semi-infiniteness of the elastic host is simulated by augmenting the (finite) computational domain with a buffer of perfectly matched layers. The inverse source algorithm is based on a systematic framework of partial-differential-equation-constrained optimization. It is demonstrated, via numerical experiments, that the algorithm is capable of converging to the spatial and temporal characteristics of surface loads that maximize energy delivery to the target formation. Consequently, the methodology is well-suited for designing field implementations that could meet a desired oil mobility threshold. Even though the methodology, and the results presented herein are in two dimensions, extensions to three dimensions are straightforward.
Bayesian performance metrics of binary sensors in homeland security applications
NASA Astrophysics Data System (ADS)
Jannson, Tomasz P.; Forrester, Thomas C.
2008-04-01
Bayesian performance metrics based on such parameters as prior probability, probability of detection (or accuracy), false alarm rate, and positive predictive value characterize the performance of binary sensors, i.e., sensors that have only a binary response: true target/false target. Such binary sensors, very common in Homeland Security, produce an alarm that can be true or false. They include X-ray airport inspection, IED inspections, product quality control, cancer medical diagnosis, part of ATR, and many others. In this paper, we analyze direct and inverse conditional probabilities in the context of Bayesian inference and binary sensors, using X-ray luggage inspection statistical results as a guideline.
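The inverse conditional probability at issue, the positive predictive value P(target | alarm), follows from Bayes' rule applied to the direct probabilities. The numbers below are hypothetical, chosen only to show the effect of a low prior:

```python
def positive_predictive_value(prior, p_detect, p_false_alarm):
    """Bayes' rule: probability that an alarm corresponds to a true target,
    computed from prior probability, detection probability, and false alarm rate."""
    p_alarm = p_detect * prior + p_false_alarm * (1.0 - prior)
    return p_detect * prior / p_alarm

# A rare threat and a seemingly good sensor can still give a low PPV:
ppv = positive_predictive_value(prior=0.001, p_detect=0.95, p_false_alarm=0.05)
```

Here `ppv` is about 0.019: fewer than 2% of alarms correspond to true targets, which is why the prior must enter any honest performance metric for a binary sensor.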
Model Following and High Order Augmentation for Rotorcraft Control, Applied via Partial Authority
NASA Astrophysics Data System (ADS)
Spires, James Michael
This dissertation consists of two main studies, a few small studies, and design documentation, all aimed at improving rotorcraft control by employing multi-input multi-output (MIMO) command-model-following control as a baseline, together with a selectable (and de-selectable) MIMO high order compensator that augments the baseline. Two methods of MIMO command-model-following control design are compared for rotorcraft flight control. The first, Explicit Model Following (EMF), employs SISO inverse plants with a dynamic decoupling matrix, which is a purely feed-forward approach to inverting the plant. The second is Dynamic Inversion (DI), which involves both feed-forward and feedback path elements to invert the plant. The EMF design is purely linear, while the DI design has some nonlinear elements in vertical rate control. For each of these methods, an architecture is presented that provides angular rate model-following with selectable vertical rate model-following. Implementation challenges of both EMF and DI are covered, and methods of dealing with them are presented. These two MIMO model-following approaches are evaluated regarding (1) fidelity to the command model, and (2) turbulence rejection. Both are found to provide good tracking of commands and reduction of cross coupling. Next, an architecture and design methodology for high order compensator (HOC) augmentation of a baseline controller for rotorcraft is presented. With this architecture, the HOC compensator is selectable and can easily be authority-limited, which might ease certification. Also, the plant for this augmentative MIMO compensator design is a stabilized helicopter system, so good flight test data could be safely gathered for more accurate plant identification. The design methodology is carried out twice on an example helicopter model, once with turbulence rejection as the objective, and once with the additional objective of closely following pilot commands. 
The turbulence rejection HOC is feedback only (HOC_FB), while the combined objective HOC has both feedback and feedforward elements (HOC_FBFF). The HOC_FB was found to be better at improving turbulence rejection but generally degrades the following of pilot commands. The HOC_FBFF improves turbulence rejection relative to the baseline controller, but not by as much as HOC_FB. However, HOC_FBFF also generally improves the following of pilot commands. Future work is suggested and facilitated in the areas of DI, MIMO EMF, and HOC augmentation. High frequency dynamics, neglected in the DI design, unexpectedly change the low frequency behavior of the DI-plant system, in addition to the expected change in high frequency dynamics. This dissertation shows why, and suggests a technique for designing a pseudo-command pre-filter that at least partially restores the intended DI-plant dynamics. For EMF, a procedure is presented that avoids use of a reduced-order model, and instead uses a full-order model or even frequency-domain flight test data. With HOC augmentation, future research might investigate the utility of adding an H-infinity constraint to the design objective, which is known as an equal-weighting mixed-norm H2/H-infinity design. Because all the formulas in the published literature either require solution of three coupled Riccati equations (for which there is no readily available tool) or make assumptions that do not fit the present problem, appropriate equal-weighting H2/H-infinity design formulas are derived which involve two decoupled Riccati equations.
Wu, V W C; Sham, J S T; Kwong, D L W
2004-07-01
The aim of this study is to demonstrate the use of inverse planning in three-dimensional conformal radiation therapy (3DCRT) of oesophageal cancer patients and to evaluate its dosimetric results by comparing them with forward planning of 3DCRT and inverse planning of intensity-modulated radiotherapy (IMRT). For each of the 15 oesophageal cancer patients in this study, forward 3DCRT, inverse 3DCRT and inverse IMRT plans were produced using the FOCUS treatment planning system. The dosimetric results and the planning time associated with each of the treatment plans were recorded for comparison. The inverse 3DCRT plans showed dosimetric results similar to the forward plans in the planning target volume (PTV) and organs at risk (OARs). However, they were inferior to the IMRT plans in terms of tumour control probability and target dose conformity. Furthermore, the inverse 3DCRT plans were less effective than the IMRT plans in reducing the percentage lung volume receiving a dose below 25 Gy. The inverse 3DCRT plans delivered a similar heart dose to the forward plans, but a higher dose than the IMRT plans. The inverse 3DCRT plans reduced the planning time 2.5-fold relative to the forward plans. In conclusion, inverse planning for 3DCRT is a reasonable alternative to forward planning for oesophageal cancer patients, with a reduction of planning time. However, IMRT has the better potential to allow further dose escalation and improvement of tumour control.
Statistical computation of tolerance limits
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1993-01-01
Based on a new theory, two computer codes were developed to calculate exact statistical tolerance limits for normal distributions with unknown means and variances, for the one-sided and two-sided cases of the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The codes generate tables of k values for varying values of the proportion and sample size at each given probability, to show the accuracy obtained for small sample sizes.
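The probability equation defining k can be checked numerically without special functions. The sketch below is our illustration, not the report's codes: it evaluates the one-sided factor by direct simulation, taking k as the conf-quantile of T = (z_p - x̄)/s over repeated standard normal samples, which matches the noncentral t definition because √n·T follows a noncentral t-distribution with noncentrality z_p·√n.

```python
import random
from statistics import NormalDist, fmean, stdev

def one_sided_tolerance_factor(n, p, conf, n_sim=100000, seed=7):
    """Monte Carlo one-sided tolerance factor k: with confidence `conf`,
    xbar + k*s exceeds the p-quantile of a normal population, for samples
    of size n. k is the conf-quantile of T = (z_p - xbar)/s."""
    rng = random.Random(seed)
    zp = NormalDist().inv_cdf(p)
    ts = []
    for _ in range(n_sim):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ts.append((zp - fmean(xs)) / stdev(xs))
    ts.sort()
    return ts[int(conf * n_sim)]

k = one_sided_tolerance_factor(n=10, p=0.95, conf=0.95)
```

For n = 10, p = 0.95 and 95% confidence, published tables give k of about 2.91, which this simulation reproduces to within Monte Carlo error.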
The capacity credit of grid-connected photovoltaic systems
NASA Astrophysics Data System (ADS)
Alsema, E. A.; van Wijk, A. J. M.; Turkenburg, W. C.
The capacity credit of photovoltaic (PV) power plants integrated into the Netherlands grid was investigated, together with an estimate of the total allowable penetration. An hourly simulation was performed based on meteorological data from five stations, considering tilted surfaces, the current grid load pattern, and the load pattern after PV-power augmentation. The reliability of the grid was assessed in terms of a loss-of-load probability analysis, assuming power drops were limited to 1 GW. A tolerance for 2.5 GW of PV power was projected. Peak demand was found to be highest in winter, when insolation is lowest; daily insolation maxima, however, did coincide with daily peak demands. Combining the PV input with an equal amount of wind turbine power production was found to augment the capacity credit of both at aggregate outputs of 2-4 GW.
Analysis of the Westland Data Set
NASA Technical Reports Server (NTRS)
Wen, Fang; Willett, Peter; Deb, Somnath
2001-01-01
The "Westland" set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements, and it has also been found that augmenting these with harmonic and other parameters can improve classification significantly. Several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.
Adjusting for Confounding in Early Postlaunch Settings: Going Beyond Logistic Regression Models.
Schmidt, Amand F; Klungel, Olaf H; Groenwold, Rolf H H
2016-01-01
Postlaunch data on medical treatments can be analyzed to explore adverse events or relative effectiveness in real-life settings. These analyses are often complicated by the number of potential confounders and the possibility of model misspecification. We conducted a simulation study to compare the performance of logistic regression, propensity score, disease risk score, and stabilized inverse probability weighting methods to adjust for confounding. Model misspecification was induced in the independent derivation dataset. We evaluated performance using relative bias and confidence interval coverage of the true effect, among other metrics. At low events per coefficient (1.0 and 0.5), the logistic regression estimates had a large relative bias (greater than -100%). Bias of the disease risk score estimates was at most 13.48% and 18.83%, respectively. For the propensity score model, it was 8.74% and >100%, respectively. At events per coefficient of 1.0 and 0.5, inverse probability weighting frequently failed or reduced to a crude regression, resulting in biases of -8.49% and 24.55%. Coverage of logistic regression estimates fell below the nominal level at events per coefficient ≤5. For the disease risk score, inverse probability weighting, and propensity score, coverage fell below nominal at events per coefficient ≤2.5, ≤1.0, and ≤1.0, respectively. Bias of misspecified disease risk score models was 16.55%. In settings with low events/exposed subjects per coefficient, disease risk score methods can be useful alternatives to logistic regression models, especially when propensity score models cannot be used. Despite the better performance of disease risk score methods relative to logistic regression and propensity score models in small events per coefficient settings, bias and coverage still deviated from nominal.
Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha
2017-04-01
Randomized controlled trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized controlled trials may not always be generalizable. The aim was to determine whether statins, and statin type (lipophilic or hydrophilic), improve long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from first admission for HF (index admission) and followed up to the time of all-cause, cardiovascular, or HF mortality or the end of the study. A multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of a marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin use compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with a significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
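A minimal sketch of IPTW with a bootstrap standard error is shown below. Everything here is a hedged illustration on simulated data: a single binary confounder stands in for a fitted propensity model, and a weighted mean difference stands in for the paper's weighted Cox model.

```python
import random
from statistics import mean, stdev

def iptw_ate(data):
    """Average treatment effect via IPTW; rows are (x, z, y): binary
    confounder, binary treatment, continuous outcome. The propensity
    e(x) = P(z = 1 | x) is estimated within the two strata of x."""
    e = {}
    for x in (0, 1):
        stratum = [row for row in data if row[0] == x]
        e[x] = sum(row[1] for row in stratum) / len(stratum)
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}      # arm -> [weighted y, total weight]
    for x, z, y in data:
        w = 1.0 / e[x] if z else 1.0 / (1.0 - e[x])   # ATE weights
        sums[z][0] += w * y
        sums[z][1] += w
    return sums[1][0] / sums[1][1] - sums[0][0] / sums[0][1]

def bootstrap_se(data, n_boot=200, seed=3):
    """Nonparametric bootstrap of the whole weighting pipeline."""
    rng = random.Random(seed)
    reps = [iptw_ate([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    return stdev(reps)

# Simulated cohort with confounding: x raises both treatment probability and y.
rng = random.Random(0)
data = []
for _ in range(2000):
    x = int(rng.random() < 0.5)
    z = int(rng.random() < (0.7 if x else 0.3))
    y = 1.0 * z + 2.0 * x + rng.gauss(0.0, 1.0)    # true treatment effect = 1.0
    data.append((x, z, y))

naive = mean(y for _, z, y in data if z) - mean(y for _, z, y in data if not z)
ate = iptw_ate(data)
se = bootstrap_se(data)
```

The unweighted difference `naive` is inflated by confounding, while the weighted estimate `ate` recovers roughly the true effect of 1.0; `se` illustrates the bootstrap variance estimator the paper recommends.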
Diamond, J M; Serveiss, V B
2001-12-15
The free-flowing Clinch and Powell River Basin, located in southwestern Virginia, United States, historically had one of the richest assemblages of native fish and freshwater mussels in the world. Nearly half of the species once residing here are now extinct, threatened, or endangered. The United States Environmental Protection Agency's framework for conducting an ecological risk assessment was used to structure a watershed-scale analysis of human land use, in-stream habitat quality, and their relationship to native fish and mussel populations in order to develop future management strategies and prioritize areas in need of enhanced protection. Our analyses indicate that agricultural and urban land uses as well as proximity to mining activities and transportation corridors are inversely related to fish index of biotic integrity (IBI) and mussel species diversity. Forward stepwise multiple regression analyses indicated that coal mining had the most impact on fish IBI, followed by percent cropland and urban area in the riparian corridor (R2 = 0.55, p = 0.02); however, these analyses suggest that other site-specific factors are important. Habitat quality measures accounted for as much as approximately half of the variability in fish IBI values if the analysis was limited to sites within a relatively narrow elevation range. These results, in addition to other data collected in this watershed, suggest that nonhabitat-related stressors (e.g., accidental chemical spills) also have significant effects on biota in this basin. The number of co-occurring human land uses was inversely related to fish IBI (r = -0.49, p < 0.01). Sites with ≥2 co-occurring land uses had a >90% probability of having <2 mussel species present. Our findings predict that many mussel concentration sites are vulnerable to future extirpation.
In addition, our results suggest that protection and enhancement of naturally vegetated riparian corridors, better controls of mine effluents and urban runoff, and increased safeguards against accidental chemical spills, as well as reintroduction or augmentation of threatened and endangered species, may help sustain native fish and mussel populations in this watershed.
Hand surgery volume and the US economy: is there a statistical correlation?
Gordon, Chad R; Pryor, Landon; Afifi, Ahmed M; Gatherwright, James R; Evans, Peter J; Hendrickson, Mark; Bernard, Steven; Zins, James E
2010-11-01
To the best of our knowledge, there have been no previous studies evaluating the correlation of the US economy and hand surgery volume. Therefore, in light of the current recession, our objective was to study our institution's hand surgery volume over the last 17 years in relation to the nation's economy. A retrospective analysis of our institution's hand surgery volume, as represented by our most common procedure (ie, carpal tunnel release), was performed between January 1992 and October 2008. Liposuction and breast augmentation volumes were chosen to serve as cosmetic plastic surgery comparison groups. Pearson correlation statistics were used to estimate the relationship between surgical volume and the US economy, as represented by the 3 market indices (Dow Jones, NASDAQ, and S&P500). A combined total of 7884 hand surgery carpal tunnel release (open or endoscopic) patients were identified. There were 1927 (24%) and 5957 (76%) patients within the departments of plastic and orthopedic surgery, respectively. In the plastic surgery department, there was a strong negative (ie, inverse) correlation between hand surgery volume and the economy (P < 0.001). Conversely, the orthopedic department's hand surgery volume demonstrated a positive (ie, parallel) correlation (P < 0.001). The volumes of liposuction and breast augmentation also showed a positive correlation (P < 0.001). To our knowledge, we have demonstrated for the first time an inverse (ie, negative) correlation between hand surgery volume performed by plastic surgeons and the US economy, as represented by the 3 major market indices. In contrast, orthopedic hand surgery volume and cosmetic surgery volume show a parallel (ie, positive) correlation. These data suggest that plastic surgeons increase their cosmetic-to-reconstructive/hand surgery ratio during strong economic times and vice versa during times of economic slowdown.
Hypothermia augments non-cholinergic neuronal bronchoconstriction in pithed guinea-pigs.
Rechtman, M P; King, R G; Boura, A L
1991-08-16
Electrical stimulation at C4-C7 in the spinal canal of pithed guinea-pigs injected with atropine, d-tubocurarine and pentolinium caused frequency-dependent bronchoconstriction. Such non-cholinergic responses to electrical stimulation, unlike responses to substance P, were abolished by pretreatment with capsaicin but not by mepyramine or propranolol. Bronchoconstrictor responses to electrical stimulation were inversely related to rectal temperature (between 30-40 degrees C) whereas responses to substance P increased with increasing temperature over the same range. Ouabain (i.v.) augmented responses to electrical stimulation at 35-37 degrees C but depressed those at 30-32 degrees C. Both morphine and the alpha 2-adrenoceptor agonist B-HT920 (i.v.) inhibited non-cholinergic-mediated bronchoconstrictor responses at 30-32 degrees C. These results stress the importance of adequate control of body temperature in this preparation. Lowered body temperature may increase neuronal output of neuropeptides whilst depressing bronchial smooth muscle sensitivity. The data support previous conclusions regarding the role of Na+/K+ activated ATPase in temperature-induced changes in sensitivity to bronchoconstrictor stimuli.
Kinetics of diffusion-controlled annihilation with sparse initial conditions
Ben-Naim, Eli; Krapivsky, Paul
2016-12-16
We study diffusion-controlled single-species annihilation with sparse initial conditions. In this random process, particles undergo Brownian motion, and when two particles meet, both disappear. We focus on sparse initial conditions where particles occupy a subspace of dimension δ that is embedded in a larger space of dimension d. We find that the co-dimension Δ = d - δ governs the behavior. All particles disappear when the co-dimension is sufficiently small, Δ ≤ 2; otherwise, a finite fraction of particles survive indefinitely. We establish the asymptotic behavior of the probability S(t) that a test particle survives until time t. When the subspace is a line, δ = 1, we find inverse logarithmic decay, S ~ (ln t)^(-1), in three dimensions, and a modified power-law decay, S ~ (ln t) t^(-1/2), in two dimensions. In general, the survival probability decays algebraically when Δ < 2, and there is an inverse logarithmic decay at the critical co-dimension Δ = 2.
Gillaizeau, Florence; Sénage, Thomas; Le Borgne, Florent; Le Tourneau, Thierry; Roussel, Jean-Christian; Leffondrè, Karen; Porcher, Raphaël; Giraudeau, Bruno; Dantan, Etienne; Foucher, Yohann
2018-04-15
Multistate models with interval-censored data, such as the illness-death model, are still used only to a limited extent in medical research, despite the significant literature demonstrating their advantages over usual survival models. Possible explanations are their limited availability in classical statistical software or, when they are available, limitations related to multivariable modelling for taking confounding into consideration. In this paper, we propose a strategy based on propensity scores that allows population causal effects to be estimated: inverse probability weighting in the illness-death semi-Markov model with interval-censored data. Using simulated data, we validated the performance of the proposed approach. We also illustrated the usefulness of the method with an application aiming to evaluate the relationship between the inadequate size of an aortic bioprosthesis and its degeneration and/or patient death. We have updated the R package multistate to facilitate future use of this method. Copyright © 2017 John Wiley & Sons, Ltd.
Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach
NASA Astrophysics Data System (ADS)
Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.
2016-12-01
Effective modeling of complex physical systems arising in the geosciences is dependent on knowing parameters which are often difficult or impossible to measure in situ. In this talk we focus on two such problems: estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we will describe, proposed by collaborators D. Estep, T. Butler and others, is a novel stochastic inversion technique grounded in measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space which give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We will focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and on estimating bottom friction in a complicated near-shore coastal application.
Gender recognition from vocal source
NASA Astrophysics Data System (ADS)
Sorokin, V. N.; Makarov, I. S.
2008-07-01
Efficiency of automatic recognition of male and female voices based on solving the inverse problem for glottis area dynamics and for waveform of the glottal airflow volume velocity pulse is studied. The inverse problem is regularized through the use of analytical models of the voice excitation pulse and of the dynamics of the glottis area, as well as the model of one-dimensional glottal airflow. Parameters of these models and spectral parameters of the volume velocity pulse are considered. The following parameters are found to be most promising: the instant of maximum glottis area, the maximum derivative of the area, the slope of the spectrum of the glottal airflow volume velocity pulse, the amplitude ratios of harmonics of this spectrum, and the pitch. On the plane of the first two main components in the space of these parameters, an almost twofold decrease in the classification error relative to that for the pitch alone is attained. The male voice recognition probability is found to be 94.7%, and the female voice recognition probability is 95.9%.
The emergence of different tail exponents in the distributions of firm size variables
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Watanabe, Tsutomu; Mizuno, Takayuki
2013-05-01
We discuss a mechanism through which inversion symmetry (i.e., invariance of a joint probability density function under the exchange of variables) and Gibrat’s law generate power-law distributions with different tail exponents. Using a dataset of firm size variables, that is, tangible fixed assets K, the number of workers L, and sales Y, we confirm that these variables have power-law tails with different exponents, and that inversion symmetry and Gibrat’s law hold. Based on these findings, we argue that there exists a plane in the three dimensional space (logK,logL,logY), with respect to which the joint probability density function for the three variables is invariant under the exchange of variables. We provide empirical evidence suggesting that this plane fits the data well, and argue that the plane can be interpreted as the Cobb-Douglas production function, which has been extensively used in various areas of economics since it was first introduced almost a century ago.
Armstrong, Simon; Ried, Karin; Sali, Avni; McLaughlin, Patrick
2013-07-01
Breast augmentation and post-mastectomy patients, as well as some women with natural breast tissue and lactating women, often experience discomfort in prone activities. Our study, for the first time, examines pain levels, mechanical force and peak pressure in natural, reconstructed and augmented breast tissues with and without a new orthosis designed to reduce displacement, compression and loading forces through the breast tissue during prone activities. Twelve females with natural, lactating or augmented breast tissue and cup sizes C-F volunteered for the study. Pain perception was measured using an 11-point visual analogue scale without and with different sizes/textures of the orthosis. Magnetic resonance imaging captured segmental transverse and para-sagittal mid-breast views, and provided linear measurements of breast tissue displacement and deformation. Capacitance pliance® sensor strips were used to measure force and pressure between the breast tissue and the surface of a standard treatment table. Measurements were taken whilst the participants were load bearing in prone positions with and without the orthosis. The new orthosis significantly reduced pain and mechanical forces in participants with natural or augmented breast tissue with cup sizes C-F. Larger orthotic sizes were correlated with greater reduction in pain and mechanical forces, with all participants reporting no pain with the largest orthotic size. A size-3 orthotic decreased load on the breast tissue by 82% and reduced peak pressure by 42%. The same orthotic decreased medio-lateral spread of breast tissue and implant whilst increasing height. The new orthosis significantly reduced pain and mechanical forces in all women with natural or augmented tissues. The results are of clinical significance, as reduced mechanical forces are associated with greater comfort, and reduced pressure and displacement may lower the probability of breast implant complications.
In clinical settings the orthosis is recommended for all augmentation patients when undergoing prone treatment by therapists and clinicians for improved comfort and safety. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Neural computations underlying inverse reinforcement learning in the human brain
Pauli, Wolfgang M; Bossaerts, Peter; O'Doherty, John
2017-01-01
In inverse reinforcement learning an observer infers the reward distribution available for actions in the environment solely through observing the actions implemented by another agent. To address whether this computational process is implemented in the human brain, participants underwent fMRI while learning about slot machines yielding hidden preferred and non-preferred food outcomes with varying probabilities, through observing the repeated slot choices of agents with similar and dissimilar food preferences. Using formal model comparison, we found that participants implemented inverse RL as opposed to a simple imitation strategy, in which the actions of the other agent are copied instead of inferring the underlying reward structure of the decision problem. Our computational fMRI analysis revealed that anterior dorsomedial prefrontal cortex encoded inferences about action-values within the value space of the agent as opposed to that of the observer, demonstrating that inverse RL is an abstract cognitive process divorceable from the values and concerns of the observer him/herself. PMID:29083301
Inversion Monophyly in African Anopheline Malaria Vectors
Garcia, B. A.; Caccone, A.; Mathiopoulos, K. D.; Powell, J. R.
1996-01-01
The African Anopheles gambiae complex of six sibling species has many polymorphic and fixed paracentric inversions detectable in polytene chromosomes. These have been used to infer phylogenetic relationships as classically done with Drosophila. Two species, A. gambiae and A. merus, were thought to be sister taxa based on a shared X inversion designated X(ag). Recent DNA data have conflicted with this phylogenetic inference as they have supported a sister taxa relationship of A. gambiae and A. arabiensis. A possible explanation is that the X(ag) is not monophyletic. Here we present data from a gene (soluble guanylate cyclase) within the X(ag) that strongly supports the monophyly of the X(ag). We conjecture that introgression may be occurring between the widely sympatric species A. gambiae and A. arabiensis and that the previous DNA phylogenies have been detecting the introgression. Evidently, introgression is not uniform across the genome, and species-specific regions, like the X-chromosome inversions, do not introgress probably due to selective elimination in hybrids and backcrosses. PMID:8807303
Acoustic sounding in the planetary boundary layer
NASA Technical Reports Server (NTRS)
Kelly, E. H.
1974-01-01
Three case studies are presented involving data from an acoustic radar. The first two cases examine data collected during the passage of a mesoscale cold-air intrusion, probably thunderstorm outflow, and a synoptic-scale cold front. In these studies the radar data are compared to conventional meteorological data obtained from the WKY tower facility for the purpose of radar data interpretation. It is shown that the acoustic radar echoes reveal the boundary between warm and cold air and other areas of turbulent mixing, regions of strong vertical temperature gradients, and areas of weak or no wind shear. The third case study examines the relationship between the nocturnal radiation inversion and the low-level wind maximum or jet in the light of conclusions presented by Blackadar (1957). The low-level jet is seen forming well above the top of the inversion. Sudden rapid growth of the inversion occurs which brings the top of the inversion to a height equal to that of the jet. Coincident with the rapid growth of the inversion is a sudden decrease in the intensity of the acoustic radar echoes in the inversion layer. It is suggested that the decrease in echo intensity reveals a decrease in turbulent mixing in the inversion layer as predicted by Blackadar. It is concluded that the acoustic radar can be a valuable tool for study in the lower atmosphere.
Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.
NASA Astrophysics Data System (ADS)
Zhou, S.; Huang, Q.
2017-12-01
Conventional magnetotelluric (MT) inversion methods cannot show the distribution of underground resistivity with clear boundaries, even when the subsurface contains obviously distinct blocks. To address this problem, we develop a Bayesian framework to invert 2D MT data for sharp boundaries, using the boundary locations and interior resistivities as the random variables. First, we use other MT inversion results, such as those from ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which are used in the finite-difference forward modelling. Finally, we obtain the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov chain Monte Carlo (MCMC) sampling from the prior distribution. The depths, resistivities and their uncertainties can then be evaluated, and the approach also supports sensitivity estimation. We applied the method to a synthetic case composed of two large anomalous blocks in a uniform background. When we impose boundary-smoothness and near-true-model weighting constraints that mimic joint or constrained inversion, the inversion yields a more precise and focused depth distribution. We also tested the inversion without constraints and found that the boundaries can still be recovered, though not as well. Both inversions recover the resistivities well, and the constrained result has a lower root-mean-square misfit than the ModEM inversion result. The parameter sensitivities obtained via the PPD show that resistivity is the most sensitive parameter, the centre depth comes second, and the boundary sides are the least sensitive.
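As an illustration of the MCMC sampling step, the following sketch runs a random-walk Metropolis sampler over the posterior of a single sharp-boundary depth. The forward model is a deliberately trivial stand-in (a two-layer log-resistivity profile), not the 2D MT finite-difference solver used in the paper, and all numerical values are invented.

```python
# Random-walk Metropolis sampling of a posterior probability density (PPD)
# for one sharp-boundary parameter, with a toy two-layer forward model.
import math, random

def forward(boundary, depths, rho_top=10.0, rho_bottom=100.0):
    # log10-resistivity profile with a single sharp boundary
    return [math.log10(rho_top if z < boundary else rho_bottom) for z in depths]

def log_likelihood(boundary, depths, data, sigma=0.1):
    pred = forward(boundary, depths)
    return -0.5 * sum(((d - p) / sigma) ** 2 for d, p in zip(data, pred))

rng = random.Random(0)
depths = [z * 0.1 for z in range(100)]            # depth grid, arbitrary units
true_boundary = 4.0
data = [v + rng.gauss(0, 0.1) for v in forward(true_boundary, depths)]

b, ll = 5.0, log_likelihood(5.0, depths, data)    # uniform prior on (0, 10)
samples = []
for _ in range(5000):
    prop = b + rng.gauss(0, 0.2)                  # random-walk proposal
    if 0.0 < prop < 10.0:
        ll_prop = log_likelihood(prop, depths, data)
        if math.log(rng.random()) < ll_prop - ll: # Metropolis accept/reject
            b, ll = prop, ll_prop
    samples.append(b)

posterior_mean = sum(samples[1000:]) / len(samples[1000:])
print(round(posterior_mean, 2))  # near the true boundary depth of 4.0
```

The post-burn-in samples approximate the PPD, so the same chain also yields uncertainty estimates (e.g. the sample standard deviation of the boundary depth), which is the appeal of the Bayesian formulation over a single best-fit model.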
NASA Astrophysics Data System (ADS)
Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.
2011-10-01
In the context of augmented-integrity inertial navigation systems (INS), recent technological developments have focused on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. The article puts forward a processing chain that can automatically detect linear landmarks in high-resolution SAR images and can also be successfully exploited in the context of augmented-integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio-of-averages (RoA) edge detector detects object boundaries more effectively than the Student t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violations of the assumptions underlying their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm useful for removing the main false alarms, selecting the most probable edge position, reconstructing broken edges and, finally, vectorizing them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
Cotten, M; Wagner, E; Zatloukal, K; Birnstiel, M L
1993-01-01
Delivery of genes via receptor-mediated endocytosis is severely limited by the poor exit of endocytosed DNA from the endosome. A large enhancement in delivery efficiency has been obtained by including human adenovirus particles in the delivery system. This enhancement is probably a function of the natural adenovirus entry mechanism, which must include passage through or disruption of the endosomal membrane. In an effort to identify safer virus particles useful in this application, we have tested the chicken adenovirus CELO virus for its ability to augment receptor-mediated gene delivery. We report here that CELO virus possesses pH-dependent, liposome disruption activity similar to that of human adenovirus type 5. Furthermore, the chicken adenovirus can be used to augment receptor-mediated gene delivery to levels comparable to those found for the human adenovirus when it is physically linked to polylysine ligand-condensed DNA particles. The chicken adenovirus has the advantage of being produced inexpensively in embryonated eggs, and the virus is naturally replication defective in mammalian cells, even in the presence of wild-type human adenovirus. PMID:8099627
Abdomen and spinal cord segmentation with augmented active shape models.
Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A
2016-07-01
Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially around highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating the multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied on the probability map generated from MALF. This augmentation effectively extends the search range of correspondent landmarks while reducing sensitivity to the image contexts, improving segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity, and apply it to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to the measurement derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in the SC.
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
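The stated scalings can be checked numerically. The sketch below models cumulative damage as a sum of i.i.d. positive random increments, a generic stand-in for damage accumulation under stationary random loading rather than the paper's actual model, and compares the standard deviation and coefficient of variation at two "times".

```python
# Numerical check: for cumulative damage D(n) = sum of n i.i.d. increments,
# std(D) grows like sqrt(n) while the coefficient of variation std/mean
# decays like 1/sqrt(n).
import math, random

def damage_stats(n_steps, n_paths=2000, seed=0):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_paths):
        # exponential increments: an arbitrary positive-damage choice
        totals.append(sum(rng.expovariate(1.0) for _ in range(n_steps)))
    mean = sum(totals) / n_paths
    var = sum((x - mean) ** 2 for x in totals) / n_paths
    return mean, math.sqrt(var)

m1, s1 = damage_stats(100)
m2, s2 = damage_stats(400)              # four times the "time"
print(s2 / s1)                          # ~2: std grows as sqrt(t)
print((s2 / m2) / (s1 / m1))            # ~1/2: CV decays as 1/sqrt(t)
```

Quadrupling the number of load cycles roughly doubles the standard deviation and halves the coefficient of variation, matching the square-root laws quoted in the abstract.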
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model.
At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition is inferred from seismic travel-time and geodetic data.
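The hierarchical factorization described above can be made concrete with a tiny discrete example. All numbers below are invented: a primary layer (lithology) conditions a secondary layer (a physical property), which conditions the likelihood of one observed datum, and the posterior is obtained by enumeration.

```python
# Hierarchical posterior by enumeration:
# P(lith, prop | data) ∝ P(data | prop) * P(prop | lith) * P(lith)

p_lith = {"sand": 0.6, "shale": 0.4}                  # prior, primary layer
p_prop_given_lith = {                                  # secondary layer
    "sand":  {"low_density": 0.7, "high_density": 0.3},
    "shale": {"low_density": 0.2, "high_density": 0.8},
}
# likelihood of the single observed datum given each property value
p_data_given_prop = {"low_density": 0.1, "high_density": 0.9}

joint = {}
for lith, pl in p_lith.items():
    for prop, pp in p_prop_given_lith[lith].items():
        joint[(lith, prop)] = p_data_given_prop[prop] * pp * pl

z = sum(joint.values())                                # evidence
posterior = {k: v / z for k, v in joint.items()}
post_shale = sum(v for (lith, _), v in posterior.items() if lith == "shale")
print(round(post_shale, 3))  # prints 0.592: the datum raises P(shale)
```

The same factorized structure, with continuous fields and survey likelihoods in place of these toy tables, is what the directed-acyclic-graph formulation generalizes.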
Bayesian multiple-source localization in an uncertain ocean environment.
Dosso, Stan E; Wilmut, Michael J
2011-06-01
This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
NASA Astrophysics Data System (ADS)
Torres-Verdin, C.
2007-05-01
This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density), and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of inverted products is higher than that of deterministic inversion methods.
A 1.5 million–base pair inversion polymorphism in families with Williams-Beuren syndrome
Osborne, Lucy R.; Li, Martin; Pober, Barbara; Chitayat, David; Bodurtha, Joann; Mandel, Ariane; Costa, Teresa; Grebe, Theresa; Cox, Sarah; Tsui, Lap-Chee; Scherer, Stephen W.
2010-01-01
Williams-Beuren syndrome (WBS) is most often caused by hemizygous deletion of a 1.5-Mb interval encompassing at least 17 genes at 7q11.23 (refs. 1, 2). As with many other haploinsufficiency diseases, the mechanism underlying the WBS deletion is thought to be unequal meiotic recombination, probably mediated by the highly homologous DNA that flanks the commonly deleted region3. Here, we report the use of interphase fluorescence in situ hybridization (FISH) and pulsed-field gel electrophoresis (PFGE) to identify a genomic polymorphism in families with WBS, consisting of an inversion of the WBS region. We have observed that the inversion is hemizygous in 3 of 11 (27%) atypical affected individuals who show a subset of the WBS phenotypic spectrum but do not carry the typical WBS microdeletion. Two of these individuals also have a parent who carries the inversion. In addition, in 4 of 12 (33%) families with a proband carrying the WBS deletion, we observed the inversion exclusively in the parent transmitting the disease-related chromosome. These results suggest the presence of a newly identified genomic variant within the population that may be associated with the disease. It may result in predisposition to primarily WBS-causing microdeletions, but may also cause translocations and inversions. PMID:11685205
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdfs) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth’s crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
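The output side of such a mixture density network is simply a parameterized Gaussian mixture over the model parameter. In the sketch below the mixture parameters are hard-coded stand-ins for what a trained network would emit given the data; it shows how a multimodal posterior pdf is represented and checked for normalization.

```python
# Evaluating a Gaussian-mixture posterior pdf of the kind a mixture
# density network outputs. The weights/means/widths are invented.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, mus, sigmas):
    return sum(w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas))

# hypothetical network outputs describing a bimodal posterior
weights, mus, sigmas = [0.3, 0.7], [2.0, 5.0], [0.4, 0.6]

# crude grid check that the mixture integrates to one
dx = 0.01
total = sum(mixture_pdf(x * dx, weights, mus, sigmas) * dx
            for x in range(-1000, 2000))
print(round(total, 2))  # ≈ 1.0
```

Because the weights sum to one and each component is a normalized Gaussian, the mixture is a proper pdf; multimodality (here two modes) is exactly the feature a linearized inversion cannot represent.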
Methods for Handling Missing Secondary Respondent Data
ERIC Educational Resources Information Center
Young, Rebekah; Johnson, David
2013-01-01
Secondary respondent data are underutilized because researchers avoid using these data in the presence of substantial missing data. The authors reviewed, evaluated, and tested solutions to this problem. Five strategies of dealing with missing partner data were reviewed: (a) complete case analysis, (b) inverse probability weighting, (c) correction…
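Strategy (b), inverse probability weighting, is easy to demonstrate on synthetic data. In the sketch below all numbers are invented and the response probabilities are assumed known (in practice they would be estimated, e.g. by logistic regression): respondents are reweighted by the inverse of their response probability, which corrects the bias of a complete-case mean when response depends on a covariate related to the outcome.

```python
# Inverse probability weighting vs complete-case analysis on synthetic data.
import random

rng = random.Random(42)
records = []
for _ in range(20000):
    x = rng.random() < 0.5                        # binary covariate
    y = (2.0 if x else 1.0) + rng.gauss(0, 0.5)   # outcome depends on x
    p_respond = 0.9 if x else 0.3                 # response also depends on x
    observed = rng.random() < p_respond
    records.append((x, y, p_respond, observed))

resp = [(y, p) for x, y, p, obs in records if obs]
complete_case = sum(y for y, _ in resp) / len(resp)
# Hajek-style IPW estimator: weight each respondent by 1/p
ipw = sum(y / p for y, p in resp) / sum(1 / p for _, p in resp)
true_mean = sum(y for _, y, _, _ in records) / len(records)
print(round(complete_case, 2), round(ipw, 2), round(true_mean, 2))
```

The complete-case mean is pulled toward the well-responding group, while the weighted mean recovers the full-sample target, which is the point the abstract's comparison of strategies turns on.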
New Finite Difference Methods Based on IIM for Inextensible Interfaces in Incompressible Flows
Li, Zhilin; Lai, Ming-Chih
2012-01-01
In this paper, new finite difference methods based on the augmented immersed interface method (IIM) are proposed for simulating an inextensible moving interface in an incompressible two-dimensional flow. The mathematical models arise from studying the deformation of red blood cells in mathematical biology. The governing equations are incompressible Stokes or Navier-Stokes equations with an unknown surface tension, which should be determined in such a way that the surface divergence of the velocity is zero along the interface. Thus, the area enclosed by the interface and the total length of the interface should be conserved during the evolution process. Because of the nonlinear and coupling nature of the problem, direct discretization by applying the immersed boundary or immersed interface method yields complex nonlinear systems to be solved. In our new methods, we treat the unknown surface tension as an augmented variable so that the augmented IIM can be applied. Since finding the unknown surface tension is essentially an inverse problem that is sensitive to perturbations, our regularization strategy is to introduce a controlled tangential force along the interface, which leads to a least squares problem. For Stokes equations, the forward solver at one time level involves solving three Poisson equations with an interface. For Navier-Stokes equations, we propose a modified projection method that can enforce the pressure jump condition corresponding directly to the unknown surface tension. Several numerical experiments show good agreement with other results in the literature and reveal some interesting phenomena. PMID:23795308
Iterative methods for mixed finite element equations
NASA Technical Reports Server (NTRS)
Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.
1985-01-01
Iterative strategies for the solution of indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived and then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant-metric iterations, which do not involve updating the preconditioner, and variable-metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
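A constant-metric iteration of the kind described can be sketched generically as a fixed-preconditioner Richardson iteration, x_{k+1} = x_k + M^{-1}(b - A x_k). The diagonal (Jacobi) preconditioner below is a stand-in for the paper's displacement-method preconditioner, and the small test system is invented.

```python
# Constant-metric (fixed-preconditioner) Richardson iteration with a
# Jacobi preconditioner M = diag(A), on a small diagonally dominant system.
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def preconditioned_richardson(A, b, iters=100):
    minv = [1.0 / A[i][i] for i in range(len(A))]   # fixed metric: diag(A)^-1
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]  # residual
        x = [xi + mi * ri for xi, mi, ri in zip(x, minv, r)]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
x = preconditioned_richardson(A, b)
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x)))
print(residual < 1e-8)  # converges for this diagonally dominant system
```

A variable-metric variant would instead update the approximate inverse of M from iteration to iteration, trading extra per-step work for faster convergence on harder (e.g. nonlinear) problems.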
Probabilistic Reasoning for Robustness in Automated Planning
NASA Technical Reports Server (NTRS)
Schaffer, Steven; Clement, Bradley; Chien, Steve
2007-01-01
A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.
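The combine-then-integrate step has a simple closed form when the per-action resource distributions are independent Gaussians: means and variances add, and the conflict probability is a normal tail integral. The sketch below uses invented numbers and plain (unbounded) Gaussians rather than the bounded Gaussians mentioned above.

```python
# Combining independent Gaussian resource-use distributions and
# integrating the tail to get a plan's constraint-violation probability.
import math

def normal_tail(limit, mean, std):
    """P(X > limit) for X ~ N(mean, std^2)."""
    return 0.5 * math.erfc((limit - mean) / (std * math.sqrt(2)))

# per-action energy use: (mean, std) pairs, assumed independent Gaussians
actions = [(10.0, 1.0), (25.0, 2.0), (7.0, 0.5)]
total_mean = sum(m for m, _ in actions)                 # means add
total_std = math.sqrt(sum(s * s for _, s in actions))   # variances add

p_conflict = normal_tail(50.0, total_mean, total_std)   # resource limit of 50
print(p_conflict)  # probability the plan violates the constraint
```

A planner can then score candidate plans by such conflict probabilities and prefer the low-risk ones, which is the search-guidance idea the abstract describes.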
Hadar, R; Vengeliene, V; Barroeta Hlusicke, E; Canals, S; Noori, H R; Wieske, F; Rummel, J; Harnack, D; Heinz, A; Spanagel, R; Winter, C
2016-01-01
Case reports indicate that deep-brain stimulation in the nucleus accumbens may be beneficial to alcohol-dependent patients. The lack of clinical trials and our limited knowledge of deep-brain stimulation call for translational experiments to validate these reports. To mimic the human situation, we used a chronic-continuous brain-stimulation paradigm targeting the nucleus accumbens and other brain sites in alcohol-dependent rats. To determine the network effects of deep-brain stimulation in alcohol-dependent rats, we combined electrical stimulation of the nucleus accumbens with functional magnetic resonance imaging (fMRI), and studied neurotransmitter levels in nucleus accumbens-stimulated versus sham-stimulated rats. Surprisingly, we report here that electrical stimulation of the nucleus accumbens led to augmented relapse behavior in alcohol-dependent rats. Our associated fMRI data revealed some activated areas, including the medial prefrontal cortex and caudate putamen. However, when we applied stimulation to these areas, relapse behavior was not affected, confirming that the nucleus accumbens is critical for generating this paradoxical effect. Neurochemical analysis of the major activated brain sites of the network revealed that the effect of stimulation may depend on accumbal dopamine levels. This was supported by the finding that brain-stimulation-treated rats exhibited augmented alcohol-induced dopamine release compared with sham-stimulated animals. Our data suggest that deep-brain stimulation in the nucleus accumbens enhances alcohol-liking probably via augmented dopamine release and can thereby promote relapse. PMID:27327255
Quantum Jeffreys prior for displaced squeezed thermal states
NASA Astrophysics Data System (ADS)
Kwek, L. C.; Oh, C. H.; Wang, Xiang-Bin
1999-09-01
It is known that, by extending the equivalence of the Fisher information matrix to its quantum version, the Bures metric, the quantum Jeffreys prior can be determined from the volume element of the Bures metric. We compute the Bures metric for the displaced squeezed thermal state and analyse the quantum Jeffreys prior and its marginal probability distributions. To normalize the marginal probability density function, it is necessary to provide a range of values of the squeezing parameter or the inverse temperature. We find that if the range of the squeezing parameter is kept narrow, there are significant differences in the marginal probability density functions in terms of the squeezing parameters for the displaced and undisplaced situations. However, these differences disappear as the range increases. Furthermore, marginal probability density functions against temperature are very different in the two cases.
Wen, Jiayi; Zhou, Shenggao; Xu, Zhenli; Li, Bo
2013-01-01
Competitive adsorption of counterions of multiple species to charged surfaces is studied by a size-effect-included mean-field theory and Monte Carlo (MC) simulations. The mean-field electrostatic free-energy functional of ionic concentrations, constrained by Poisson’s equation, is numerically minimized by an augmented Lagrangian multiplier method. Unrestricted primitive models and canonical ensemble MC simulations with the Metropolis criterion are used to predict the ionic distributions around a charged surface. It is found that, for a low surface charge density, the adsorption of ions with a higher valence is preferable, agreeing with existing studies. For a highly charged surface, both the mean-field theory and the MC simulations demonstrate that the counterions bind tightly around the charged surface, resulting in a stratification of counterions of different species. The competition between mixed entropy and electrostatic energetics leads to a compromise in which the ionic species with a higher valence-to-volume ratio has a larger probability of forming the first layer of stratification. In particular, the MC simulations confirm the crucial role of ionic valence-to-volume ratios in the competitive adsorption to charged surfaces that had been previously predicted by the mean-field theory. The charge inversion for ionic systems with salt is predicted by the MC simulations but not by the mean-field theory. This work provides a better understanding of competitive adsorption of counterions to charged surfaces and calls for further studies on the ionic size effect with application to large-scale biomolecular modeling. PMID:22680474
Using global unique identifiers to link autism collections.
Johnson, Stephen B; Whitney, Glen; McAuliffe, Matthew; Wang, Hailong; McCreedy, Evan; Rozenblit, Leon; Evans, Clark C
2010-01-01
To propose a centralized method for generating global unique identifiers to link collections of research data and specimens. The work is a collaboration between the Simons Foundation Autism Research Initiative and the National Database for Autism Research. The system is implemented as a web service: an investigator inputs identifying information about a participant into a client application and sends encrypted information to a server application, which returns a generated global unique identifier. The authors evaluated the system using a volume test of one million simulated individuals and a field test on 2000 families (over 8000 individual participants) in an autism study. Evaluation measures included the inverse probability of hash codes; the rate of false identity of two individuals; the rate of false splits of a single individual; the percentage of subjects for whom identifying information could be collected; and the percentage of hash codes generated successfully. Large-volume simulation generated no false splits or false identities. Field testing in the Simons Foundation Autism Research Initiative Simplex Collection produced identifiers for 96% of children in the study and 77% of parents. On average, four out of five hash codes per subject were generated perfectly (only one perfect hash is required for subsequent matching). The system must achieve a balance among the competing goals of distinguishing individuals, collecting accurate information for matching, and protecting confidentiality. Considerable effort is required to obtain approval from institutional review boards, obtain consent from participants, and achieve compliance from sites during a multicenter study. Global unique identifiers have the potential to link collections of research data, augment the amount and types of data available for individuals, support detection of overlap between collections, and facilitate replication of research findings.
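The hash-code idea behind such identifiers can be sketched minimally: normalize identifying fields and apply a one-way hash, so the same person always yields the same code without the raw identifiers being stored. The field names, normalization rules, and salt below are illustrative assumptions, not the actual NDAR/SFARI protocol:

```python
import hashlib

def hash_code(first, last, dob, salt="study-wide-secret"):
    """One-way hash of normalized identifying fields (illustrative only;
    the salt and field set are hypothetical, not the real protocol)."""
    normalized = "|".join(s.strip().lower() for s in (first, last, dob))
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

# The same person yields the same code (enabling linkage across collections),
# while the server never needs to retain the raw identifiers.
a = hash_code("Ada", "Lovelace", "1815-12-10")
b = hash_code(" ada ", "LOVELACE", "1815-12-10")
print(a == b)  # True: normalization makes matching robust to formatting
```

Generating several such codes from different field combinations, as the abstract describes, is what allows matching to succeed even when one field is recorded inconsistently.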
A Bayesian approach to modeling 2D gravity data using polygon states
NASA Astrophysics Data System (ADS)
Titus, W. J.; Titus, S.; Davis, J. R.
2015-12-01
We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match the properties of the object.
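The accept/reject core of such an MCMC inversion can be illustrated with a generic Metropolis sampler. The toy one-dimensional target below is a stand-in for the far richer polygon posterior (the log-target would really be a negative data misfit from the gravity forward model):

```python
import math, random

random.seed(1)

def log_target(x):
    """Unnormalized log-posterior; stand-in for -misfit(model)/(2*sigma^2)."""
    return -0.5 * (x - 2.0) ** 2  # toy: posterior is N(2, 1)

def metropolis(n, step=1.0, x=0.0):
    samples = []
    for _ in range(n):
        prop = x + random.uniform(-step, step)   # perturb the current model
        # accept with probability min(1, target(prop)/target(x))
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

s = metropolis(20000)
post_mean = sum(s[5000:]) / len(s[5000:])  # discard burn-in
print(post_mean)  # close to 2.0
```

Quantities such as the occupancy probability are then simple averages over the retained samples, e.g. the fraction of sampled polygons containing a given point.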
Iterative updating of model error for Bayesian inversion
NASA Astrophysics Data System (ADS)
Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew
2018-02-01
In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
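In a scalar linear Gaussian setting the iterated model-error update can be written out in a few lines: the modeling error is re-estimated from the current posterior and folded back into the noise model. All coefficients, the prior, and the datum below are invented for illustration:

```python
# Toy 1D linear-Gaussian sketch of the iterative model-error update
# (all numbers hypothetical): accurate forward y = a*x, cheap surrogate
# y = a0*x, prior x ~ N(0, 1), observation noise ~ N(0, sig2).
a, a0, sig2, y = 2.0, 1.5, 0.25, 2.0

mu_m, var_m = 0.0, 0.0       # current Gaussian model for the modeling error
for k in range(30):
    v = sig2 + var_m                          # inflate noise by model error
    prec = 1.0 + a0 * a0 / v                  # posterior precision (prior var 1)
    mu_x = (a0 * (y - mu_m) / v) / prec       # posterior mean under surrogate
    var_x = 1.0 / prec
    # re-estimate the modeling error m(x) = (a - a0) x from this posterior
    mu_m = (a - a0) * mu_x
    var_m = (a - a0) ** 2 * var_x

exact = (a * y / sig2) / (1.0 + a * a / sig2)  # posterior mean, accurate model
print(mu_x, exact)  # iterated estimate moves toward the accurate-model answer
```

Ignoring the model error entirely (the first pass of the loop) gives a noticeably worse estimate than the converged iteration, which is the behavior the abstract describes.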
Health effects of protein intake in healthy adults: a systematic literature review
Pedersen, Agnes N.; Kondrup, Jens; Børsheim, Elisabet
2013-01-01
The purpose of this systematic review is to assess the evidence behind the dietary requirement of protein and to assess the health effects of varying protein intake in healthy adults. The literature search covered the years 2000–2011. Prospective cohort, case-control, and intervention studies were included. Out of a total of 5,718 abstracts, 412 full papers were identified as potentially relevant, and after careful scrutiny, 64 papers were quality graded as A (highest), B, or C. The grade of evidence was classified as convincing, probable, suggestive or inconclusive. The evidence is assessed as: probable for an estimated average requirement of 0.66 g good-quality protein/kg body weight (BW)/day based on nitrogen balance studies, suggestive for a relationship between increased all-cause mortality risk and long-term low-carbohydrate–high-protein (LCHP) diets; but inconclusive for a relationship between all-cause mortality risk and protein intake per se; suggestive for an inverse relationship between cardiovascular mortality and vegetable protein intake; inconclusive for relationships between cancer mortality and cancer diseases, respectively, and protein intake; inconclusive for a relationship between cardiovascular diseases and total protein intake; suggestive for an inverse relationship between blood pressure (BP) and vegetable protein; probable to convincing for an inverse relationship between soya protein intake and LDL cholesterol; inconclusive for a relationship between protein intake and bone health, energy intake, BW control, body composition, renal function, and risk of kidney stones, respectively; suggestive for a relationship between increased risk of type 2 diabetes (T2D) and long-term LCHP-high-fat diets; inconclusive for impact of physical training on protein requirement; and suggestive for effect of physical training on whole-body protein retention. 
In conclusion, the evidence is assessed as probable regarding the estimated requirement based on nitrogen balance studies, and suggestive to inconclusive for protein intake and mortality and morbidity. Vegetable protein intake was associated with decreased risk in many studies. Potentially adverse effects of a protein intake exceeding 20–23 E% remain to be investigated. PMID:23908602
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Steven H., E-mail: SHLin@mdanderson.org; Wang Lu; Myles, Bevan
2012-12-01
Purpose: Although 3-dimensional conformal radiotherapy (3D-CRT) is the worldwide standard for the treatment of esophageal cancer, intensity modulated radiotherapy (IMRT) improves dose conformality and reduces the radiation exposure to normal tissues. We hypothesized that the dosimetric advantages of IMRT should translate to substantive benefits in clinical outcomes compared with 3D-CRT. Methods and Materials: An analysis was performed of 676 nonrandomized patients (3D-CRT, n=413; IMRT, n=263) with stage Ib-IVa (American Joint Committee on Cancer 2002) esophageal cancers treated with chemoradiotherapy at a single institution from 1998-2008. An inverse probability of treatment weighting and inclusion of propensity score (treatment probability) as a covariate were used to compare overall survival time, interval to local failure, and interval to distant metastasis, while accounting for the effects of other clinically relevant covariates. The propensity scores were estimated using logistic regression analysis. Results: A fitted multivariate inverse probability weighted-adjusted Cox model showed that the overall survival time was significantly associated with several well-known prognostic factors, along with the treatment modality (IMRT vs 3D-CRT, hazard ratio 0.72, P<.001). Compared with IMRT, 3D-CRT patients had a significantly greater risk of dying (72.6% vs 52.9%, inverse probability of treatment weighting, log-rank test, P<.0001) and of locoregional recurrence (P=.0038). No difference was seen in cancer-specific mortality (Gray's test, P=.86) or distant metastasis (P=.99) between the 2 groups. An increased cumulative incidence of cardiac death was seen in the 3D-CRT group (P=.049), but most deaths were undocumented (5-year estimate, 11.7% in 3D-CRT vs 5.4% in IMRT group, Gray's test, P=.0029). Conclusions: Overall survival, locoregional control, and noncancer-related death were significantly better after IMRT than after 3D-CRT. Although these results need confirmation, IMRT should be considered for the treatment of esophageal cancer.
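Inverse probability weighting, used in this study and in several of the abstracts above, reduces in the simplest missing-outcome setting to reweighting observed cases by the inverse of their probability of being observed. The records and observation probabilities below are hypothetical:

```python
# Minimal IPW sketch (hypothetical data): estimate a population mean when
# some outcomes are missing, weighting each observed case by the inverse of
# its probability of being observed.
records = [
    # (outcome or None, probability that this case is observed)
    (4.0, 0.8), (5.0, 0.8), (None, 0.8),
    (1.0, 0.4), (None, 0.4), (2.0, 0.4), (None, 0.4), (None, 0.4),
]

num = sum(y / p for y, p in records if y is not None)
den = sum(1.0 / p for y, p in records if y is not None)
ipw_mean = num / den                      # Hajek-style normalized IPW estimate

cc_mean = (sum(y for y, _ in records if y is not None) /
           sum(1 for y, _ in records if y is not None))
print(ipw_mean, cc_mean)  # IPW up-weights the under-observed low-outcome group
```

Here the complete-case mean (3.0) overstates the population mean because the low-outcome group is observed less often; the IPW estimate (2.5) corrects for this, provided the observation probabilities are modeled correctly, which is exactly the misspecification issue examined in the first abstract above.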
Advanced Issues in Propensity Scores: Longitudinal and Missing Data
ERIC Educational Resources Information Center
Kupzyk, Kevin A.; Beal, Sarah J.
2017-01-01
In order to investigate causality in situations where random assignment is not possible, propensity scores can be used in regression adjustment, stratification, inverse-probability treatment weighting, or matching. The basic concepts behind propensity scores have been extensively described. When data are longitudinal or missing, the estimation and…
Three Essays on Estimating Causal Treatment Effects
ERIC Educational Resources Information Center
Deutsch, Jonah
2013-01-01
This dissertation is composed of three distinct chapters, each of which addresses issues of estimating treatment effects. The first chapter empirically tests the Value-Added (VA) model using school lotteries. The second chapter, co-authored with Michael Wood, considers properties of inverse probability weighting (IPW) in simple treatment effect…
Propensity Score Weighting with Error-Prone Covariates
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Lockwood, J. R.; Setodji, Claude M.
2011-01-01
Inverse probability weighting (IPW) estimates are widely used in applications where data are missing due to nonresponse or censoring or in observational studies of causal effects where the counterfactuals cannot be observed. This extensive literature has shown the estimators to be consistent and asymptotically normal under very general conditions,…
NASA Astrophysics Data System (ADS)
Nie, Xiaokai; Coca, Daniel
2018-01-01
The paper introduces a matrix-based approach to estimate the unique one-dimensional discrete-time dynamical system that generated a given sequence of probability density functions whilst subjected to an additive stochastic perturbation with known density.
Aerosol Robotic Network (AERONET) Version 3 Aerosol Optical Depth and Inversion Products
NASA Astrophysics Data System (ADS)
Giles, D. M.; Holben, B. N.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Sorokin, M. G.; Slutsker, I.
2017-12-01
The Aerosol Robotic Network (AERONET) surface-based aerosol optical depth (AOD) database has been a principal component of many Earth science remote sensing applications and modelling for more than two decades. During this time, the AERONET AOD database had utilized a semiautomatic quality assurance approach (Smirnov et al., 2000). Data quality automation developed for AERONET Version 3 (V3) was achieved by augmenting and improving upon the combination of Version 2 (V2) automatic and manual procedures to provide a more refined near real time (NRT) and historical worldwide database of AOD. The combined effect of these new changes provides a historical V3 AOD Level 2.0 data set comparable to V2 Level 2.0 AOD. The recently released V3 Level 2.0 AOD product uses Level 1.5 data with automated cloud screening and quality controls and applies pre-field and post-field calibrations and wavelength-dependent temperature characterizations. For V3, the AERONET aerosol retrieval code inverts AOD and almucantar sky radiances using a full vector radiative transfer code called Successive ORDers of scattering (SORD; Korkin et al., 2017). The full vector code allows for potentially improving the real part of the complex index of refraction and the sphericity parameter and computing the radiation field in the UV (e.g., 380 nm) and the degree of linear depolarization. Effective lidar ratio and depolarization ratio products are also available with the V3 inversion release. Inputs to the inversion code were updated to accommodate H2O, O3, and NO2 absorption, consistent with the computation of V3 AOD. All of the inversion products are associated with estimated uncertainties that include the random error plus biases due to the uncertainty in measured AOD, absolute sky radiance calibration, and retrieved MODIS BRDF for snow-free and snow-covered surfaces. The V3 inversion products use the same data quality assurance criteria as V2 inversions (Holben et al., 2006). 
The entire AERONET V3 almucantar inversion database was computed using the NASA High End Computing resources at NASA Ames Research Center and NASA Goddard Space Flight Center. In addition to a description of data products, this presentation will provide a climatology comparison of the V3 Level 2.0 and V2 Level 2.0 AOD and inversion products for sites with varying aerosol types.
Wetlands Evaluation Technique (WET). Volume 1: Literature Review and Evaluation Rationale.
1991-10-01
low potential evapotranspiration, and having basin morphologies conducive to storing large amounts of water, probably have some capacity for augmenting low flows. For example, in a study of 38 Minnesota drainage basins, Ackroyd et al. (1967/MN:R) concluded that lakes and wetlands, in general... layer that is less permeable to ground water exchange. This may even isolate or seal a basin from the ground water. However, Born et al. (1979/ WI:L
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
NASA Astrophysics Data System (ADS)
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing a "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is that it is computationally intensive. 
In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of theoretical formulation of the scattering process for better computation efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.).
Kehrer-Sawatzki, H; Sandig, C A; Goidts, V; Hameister, H
2005-01-01
During this study, we analysed the pericentric inversion that distinguishes human chromosome 12 (HSA12) from the homologous chimpanzee chromosome (PTR10). Two large chimpanzee-specific duplications of 86 and 23 kb were observed in the breakpoint regions, which most probably arose in association with the inversion. The inversion break in PTR10p caused the disruption of the SLCO1B3 gene in exon 11. However, the 86-kb duplication includes the functional SLCO1B3 locus, which is thus retained in the chimpanzee, although inverted to PTR10q. The second duplication spans 23 kb and does not contain expressed sequences. Eleven genes map to a region of about 1 Mb around the breakpoints. Six of these eleven genes are not among the differentially expressed genes as determined previously by comparing the human and chimpanzee transcriptome of fibroblast cell lines, blood leukocytes, liver and brain samples. These findings imply that the inversion did not cause major expression differences of these genes. Comparative FISH analysis with BACs spanning the inversion breakpoints in PTR on metaphase chromosomes of gorilla (GGO) confirmed that the pericentric inversions of the chromosome 12 homologs in GGO and PTR have distinct breakpoints and that humans retain the ancestral arrangement. These findings coincide with the trend observed in hominoid karyotype evolution that humans have a karyotype close to an ancestral one, while African great apes present with more derived chromosome arrangements. Copyright (c) 2005 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Aguilera, Irene; Friedrich, Christoph; Bihlmayer, Gustav; Blügel, Stefan
2013-07-01
We present GW calculations of the topological insulators Bi2Se3, Bi2Te3, and Sb2Te3 within the all-electron full-potential linearized augmented-plane-wave formalism. Quasiparticle effects produce significant qualitative changes in the band structures of these materials when compared to density functional theory (DFT), especially at the Γ point, where band inversion takes place. There, the widely used perturbative one-shot GW approach can produce unphysical band dispersions, as the quasiparticle wave functions are forced to be identical to the noninteracting single-particle states. We show that a treatment beyond the perturbative approach, which incorporates the off-diagonal GW matrix elements and thus enables many-body hybridization to be effective in the quasiparticle wave functions, is crucial in these cases to describe the characteristics of the band inversion around the Γ point in an appropriate way. In addition, this beyond one-shot GW approach allows us to calculate the values of the Z2 topological invariants and compare them with those previously obtained within DFT.
Daffner, Kirk R.; Alperin, Brittany R.; Mott, Katherine K.; Tusch, Erich; Holcomb, Phillip J.
2015-01-01
Previous work demonstrated age-associated increases in the anterior P2 and age-related decreases in the anterior N2 in response to novel stimuli. Principal component analysis (PCA) was used to determine if the inverse relationship between these components was due to their temporal and spatial overlap. PCA revealed an early anterior P2, sensitive to task relevance, and a late anterior P2, responsive to novelty, both exhibiting age-related amplitude increases. A PCA factor representing the anterior N2, sensitive to novelty, exhibited age-related amplitude decreases. The late P2 and N2 to novels inversely correlated. Larger late P2 amplitude to novels was associated with better behavioral performance. Age-related differences in the anterior P2 and N2 to novel stimuli likely represent age-associated changes in independent cognitive operations. Enhanced anterior P2 activity (indexing augmentation in motivational salience) may be a compensatory mechanism for diminished anterior N2 activity (indexing reduced ability of older adults to process ambiguous representations). PMID:25596483
Non-Gaussianity in a quasiclassical electronic circuit
NASA Astrophysics Data System (ADS)
Suzuki, Takafumi J.; Hayakawa, Hisao
2017-05-01
We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuation of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable to correctly estimate the microscopic transfer events in the quantum point contact (QPC) with the quasiclassical inverse formula.
The Self-Organization of a Spoken Word
Holden, John G.; Rajaraman, Srinivasan
2012-01-01
Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213
ERBE Geographic Scene and Monthly Snow Data
NASA Technical Reports Server (NTRS)
Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.
1997-01-01
The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
Stochastic sediment property inversion in Shallow Water 06.
Michalopoulou, Zoi-Heleni
2017-11-01
Time-series received at a short distance from the source allow the identification of distinct paths; four of these are the direct arrival, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used to estimate the arrival times of these paths and the corresponding probability density functions. The arrival times of the first three paths are then employed, along with linearization, to estimate source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.
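The linearized step described in this abstract can be sketched numerically: a Gaussian density over arrival times, pushed through a linear(ized) forward model d = G m, yields a Gaussian density over the model parameters. The matrix G, the data covariance, and the observed times below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Propagate Gaussian arrival-time densities through a linearized forward
# model d = G m (weighted least squares). All numbers are hypothetical.
G = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 1.0]])                 # 3 arrival times, 2 model parameters
C_d = np.diag([1e-4, 1e-4, 2e-4])          # arrival-time variances (s^2)
d = np.array([0.51, 0.62, 0.45])           # estimated arrival times (s)

Ci = np.linalg.inv(C_d)
C_m = np.linalg.inv(G.T @ Ci @ G)          # covariance of the parameter density
m_map = C_m @ (G.T @ Ci @ d)               # maximum a posteriori estimate
```

The posterior covariance C_m expresses how the arrival-time uncertainty maps into uncertainty on the inverted parameters.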
Probability interpretations of intraclass reliabilities.
Ellis, Jules L
2013-11-20
Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classed significantly above (or below) the mean and the probability that an organization is classed correctly given that it is classed significantly above (or below) the mean. One can view these probabilities as the amount of information of the classification and the correctness of the classification. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. Copyright © 2013 John Wiley & Sons, Ltd.
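The trade-off described above, where a given reliability lets one "buy" correctness at the cost of informativeness, can be illustrated with a small Monte Carlo sketch (illustrative, not the paper's closed-form results): true organization scores carry a fraction `reliability` of the observed variance, and the rest is rating error.

```python
import numpy as np

def classification_probabilities(reliability, n=1_000_000, z_crit=1.645, seed=0):
    """Monte Carlo sketch: true scores have variance `reliability`, errors have
    variance `1 - reliability`, so observed scores are standard normal with the
    stated intraclass reliability. Returns the proportion of organizations
    classed significantly above the mean and the probability such a
    classification is correct."""
    rng = np.random.default_rng(seed)
    true = rng.normal(0.0, np.sqrt(reliability), n)
    observed = true + rng.normal(0.0, np.sqrt(1.0 - reliability), n)
    flagged = observed > z_crit                # classed significantly above the mean
    p_flagged = flagged.mean()                 # informativeness of the classification
    p_correct = (true[flagged] > 0.0).mean()   # correctness, given flagged
    return p_flagged, p_correct
```

For example, at reliability 0.8 about 5% of organizations are flagged and nearly all flagged ones truly lie above the mean; lowering `z_crit` flags more organizations at the cost of correctness.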
A posteriori error estimates in voice source recovery
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2017-12-01
The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. The a posteriori error estimates can also be used as a quality criterion for the obtained voice source pulses in speaker-recognition applications.
NASA Astrophysics Data System (ADS)
Nawaz, Muhammad Atif; Curtis, Andrew
2018-04-01
We introduce a new Bayesian inversion method that estimates the spatial distribution of geological facies from attributes of seismic data, showing how the usual probabilistic inverse problem can be solved in an optimization framework while still providing full probabilistic results. Our mathematical model treats the seismic attributes as observed data, which are assumed to have been generated by the geological facies. The method infers the post-inversion (posterior) probability density of the facies, plus some other unknown model parameters, from the seismic attributes and geological prior information. Most previous research in this domain is based on the localized-likelihoods assumption, whereby the seismic attributes at a location are assumed to depend only on the facies at that location. Such an assumption is unrealistic because of imperfect seismic data acquisition and processing, and fundamental limitations of seismic imaging methods. In this paper, we relax this assumption: we allow probabilistic dependence between the seismic attributes at a location and the facies in any neighbourhood of that location through a spatial filter. We term such likelihoods quasi-localized.
Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian
NASA Astrophysics Data System (ADS)
Teneng, Dean
2013-09-01
We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open-source software package R and select the best models using the strategy proposed by Käärik and Umbleja (2011). We observe that the daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. It was impossible to estimate the normal inverse Gaussian parameters (by maximum likelihood; a computational problem) for JPY/CHF, although CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, this may be impossible in the other. We also demonstrate that foreign exchange closing prices can be forecast with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
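The fit-then-test workflow in this abstract can be sketched in Python rather than R (the paper's tool): fit a NIG distribution by maximum likelihood, then check the fit with a Kolmogorov-Smirnov test. The simulated "prices" and parameter values below are made up for illustration; the paper fits actual FX closing prices.

```python
import numpy as np
from scipy import stats

# Simulate stand-in "closing prices" from a NIG distribution (hypothetical
# parameters), then fit by maximum likelihood and test goodness of fit.
rng = np.random.default_rng(1)
prices = stats.norminvgauss.rvs(a=2.0, b=0.5, loc=100.0, scale=5.0,
                                size=2000, random_state=rng)

params = stats.norminvgauss.fit(prices)                     # MLE: (a, b, loc, scale)
ks_stat, p_value = stats.kstest(prices, 'norminvgauss', args=params)
# A p-value above 0.05 means the NIG fit cannot be rejected.
```

Note that a KS test against parameters estimated from the same data is biased toward acceptance; the paper's model-selection strategy addresses this more carefully.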
A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake
NASA Astrophysics Data System (ADS)
Chen, T.; Luo, H.
2013-12-01
On 1 Apr 2007 the megathrust Mw8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone from two field surveys after this event provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded-variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often selected subjectively at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along-strike and down-dip.
Inverse Symmetry in Complete Genomes and Whole-Genome Inverse Duplication
Kong, Sing-Guan; Fan, Wen-Lang; Chen, Hong-Da; Hsu, Zi-Ting; Zhou, Nengji; Zheng, Bo; Lee, Hoong-Chien
2009-01-01
The cause of symmetry is usually subtle, and its study often leads to a deeper understanding of the bearer of the symmetry. To gain insight into the dynamics driving the growth and evolution of genomes, we conducted a comprehensive study of textual symmetries in 786 complete chromosomes. We focused on symmetry based on our belief that, in spite of their extreme diversity, genomes must share common dynamical principles and mechanisms that drive their growth and evolution, and that the most robust footprints of such dynamics are symmetry related. We found that while complement and reverse symmetries are essentially absent in genomic sequences, inverse symmetry (complement plus reverse) is prevalent in complex patterns in most chromosomes, a vast majority of which have near-maximum global inverse symmetry. We also discovered relations that can quantitatively account for the long-observed but unexplained phenomenon of k-mer skews in genomes. Our results suggest that segmental and whole-genome inverse duplications are important mechanisms in genome growth and evolution, probably because they are efficient means by which the genome can exploit its double-stranded structure to enrich its code inventory. PMID:19898631
Inverse and forward modeling under uncertainty using MRE-based Bayesian approach
NASA Astrophysics Data System (ADS)
Hou, Z.; Rubin, Y.
2004-12-01
A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on, or statistical moments of, the unknown parameters, which may represent prior information. To obtain the minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow parameter identification and soil moisture estimation in the vadose zone and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.
ERIC Educational Resources Information Center
Buzawa, Eve; And Others
1995-01-01
Reports results of a study testing the hypothesis that an inverse relationship exists between level of intimacy between perpetrator and victim in incidents of violence and likelihood of arrest. Notwithstanding relevant elements of probable cause, such as the presence of weapons, witnesses, injury, and the offender, results supported the…
Comparing Performance of Methods to Deal with Differential Attrition in Lottery Based Evaluations
ERIC Educational Resources Information Center
Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey
2016-01-01
The purpose of this study is to study the performance of different methods (inverse probability weighting and estimation of informative bounds) to control for differential attrition by comparing the results of different methods using two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…
1993-02-01
…amplification induced by the inverse filter. The problem of noise amplification that arises in conventional image deblurring problems has often been… noise sensitivity, and strategies for selecting a regularization parameter have been developed. The probability of convergence to within a prescribed… [table-of-contents fragments: Strategies in Image Deblurring; CLS Parameter Selection; Wiener Parameter Selection]
Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai
2013-01-01
This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have conventionally been used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and are sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
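Of the three approximations named above, FOSM is the simplest: propagate input means and standard deviations through a linearization of the model. A minimal sketch follows, using a made-up degradation model (not the paper's lithium-ion battery state-space model).

```python
import numpy as np

def fosm(g, mu, sigma, h=1e-6):
    """First-order second-moment (FOSM) approximation: propagate the means and
    standard deviations of independent inputs through g via central-difference
    gradients, returning the approximate mean and std of g(X)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    grad = np.array([(g(mu + h * e) - g(mu - h * e)) / (2.0 * h)
                     for e in np.eye(mu.size)])
    return g(mu), float(np.sqrt(np.sum((grad * sigma) ** 2)))

# Toy model (illustrative only): remaining useful life is the margin to a
# failure threshold of 1.0 divided by the damage accumulation rate.
rul = lambda x: (1.0 - x[0]) / x[1]        # x = [current damage, damage rate]
mean_rul, std_rul = fosm(rul, mu=[0.4, 0.01], sigma=[0.05, 0.002])
```

FOSM yields only the first two moments; FORM and inverse FORM refine this into probability bounds, as the abstract notes.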
Inverse probability weighting for covariate adjustment in randomized studies
Li, Xiaochun; Li, Lingling
2013-01-01
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, and the proposed method is applied to a real data example. PMID:24038458
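The basic inverse-probability-weighted estimator underlying this work can be sketched as follows. This is a plain Horvitz-Thompson style estimate with the propensity score known by design (as in a randomized trial), not the paper's two-stage procedure; all simulated values are made up.

```python
import numpy as np

# Simulated randomized trial: treatment assigned with probability 0.5,
# outcome depends on treatment (true effect = 1.0) and a baseline covariate.
rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(size=n)                          # baseline covariate
t = rng.integers(0, 2, size=n)                  # randomized treatment indicator
y = 1.0 * t + 0.5 * x + rng.normal(size=n)      # outcome

e = np.full(n, 0.5)                             # known propensity scores
# IPW estimate of the average treatment effect: weight each arm by the
# inverse probability of receiving the treatment actually received.
effect_ipw = np.mean(t * y / e - (1 - t) * y / (1 - e))
```

With the propensity known, the weighting is purely a design quantity, which is why adjustment can be fixed before the outcome is seen.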
Strategy evolution driven by switching probabilities in structured multi-agent systems
NASA Astrophysics Data System (ADS)
Zhang, Jianlei; Chen, Zengqiang; Li, Zhiqi
2017-10-01
The evolutionary mechanism driving the commonly seen cooperation among unrelated individuals is puzzling. Related models for evolutionary games on graphs traditionally assume that players imitate their successful neighbours with higher benefits. Notably, an implicit assumption here is that players are always able to acquire the required pay-off information. To relax this restrictive assumption, a contact-based model has been proposed, in which switching probabilities between strategies drive the strategy evolution. However, the explicit, quantified relation between a player's switching probability between strategies and the number of her neighbours remains unknown. This is especially a key point in heterogeneously structured systems, where players may differ in the number of their neighbours. Focusing on this, we present an augmented model that introduces an attenuation coefficient and evaluate its influence on the evolution dynamics. Results show that a player's influence on others is negatively correlated with the contact numbers specified by the network topology. Results further provide the conditions under which the coexisting strategies can be calculated analytically.
Crime and punishment: Does it pay to punish?
NASA Astrophysics Data System (ADS)
Iglesias, J. R.; Semeshenko, V.; Schneider, E. M.; Gordon, M. B.
2012-08-01
Crime is the result of a rational balance between the benefits and costs of an illegal act. This idea was proposed by Becker more than forty years ago (Becker (1968) [1]). In this paper, we simulate a simple artificial society in which agents earn fixed wages and can augment (or lose) wealth as a result of a successful (or unsuccessful) act of crime. The probability of apprehension depends on the gravity of the crime, and the punishment takes the form of imprisonment and fines. We study the costs of the law enforcement system required for keeping crime within acceptable limits and compare them with the harm produced by crime. A sharp phase transition is observed as a function of the probability of punishment, and this transition exhibits a clear hysteresis effect, suggesting that the cost of reversing a deteriorated situation might be much higher than that of maintaining a relatively low level of delinquency. We also analyze the economic consequences that arise from crime under different scenarios of criminal activity and probabilities of apprehension.
Janowicz, Diane M; Tenner-Racz, Klara; Racz, Paul; Humphreys, Tricia L; Schnizlein-Bick, Carol; Fortney, Kate R; Zwickl, Beth; Katz, Barry P; Campbell, James J; Ho, David D; Spinola, Stanley M
2007-05-15
We infected 11 HIV-seropositive volunteers whose CD4(+) cell counts were >350 cells/microL (7 of whom were receiving antiretrovirals) with Haemophilus ducreyi. The papule and pustule formation rates were similar to those observed in HIV-seronegative historical control subjects. No subject experienced a sustained change in CD4(+) cell count or HIV RNA level. The cellular infiltrate in biopsy samples obtained from the HIV-seropositive and HIV-seronegative subjects did not differ with respect to the percentage of leukocytes, neutrophils, macrophages, or T cells. The CD4(+):CD8(+) cell ratio in biopsy samples from the HIV-seropositive subjects was 1:3, the inverse of the ratio seen in the HIV-seronegative subjects (P<.0001). Although CD4(+) cells proliferated in lesions, in situ hybridization and reverse-transcription polymerase chain reaction for HIV RNA were negative. We conclude that experimental infection in HIV-seropositive persons is clinically similar to infection in HIV-seronegative persons and does not cause local viral replication or augment systemic viral replication. Thus, prompt treatment of chancroid may abrogate increases in viral replication associated with natural disease.
Abdomen and spinal cord segmentation with augmented active shape models
Xu, Zhoubing; Conrad, Benjamin N.; Baucom, Rebeccah B.; Smith, Seth A.; Poulose, Benjamin K.; Landman, Bennett A.
2016-01-01
Abstract. Active shape models (ASMs) have been widely used for extracting human anatomies in medical images given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) by integrating multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. Using AASM, landmark updates are optimized globally via a region-based LS evolution applied to the probability map generated by MALF. This augmentation effectively extends the search range for correspondent landmarks while reducing sensitivity to the image context, improving segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to measurements derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting white/gray matter in the SC. PMID:27610400
Modelling the Probability of Landslides Impacting Road Networks
NASA Astrophysics Data System (ADS)
Taylor, F. E.; Malamud, B. D.
2012-04-01
During a landslide triggering event, the threat of landslides blocking roads poses a risk to logistics, rescue efforts and communities dependent on those road networks. Here we present preliminary results of a stochastic model we have developed to evaluate the probability of landslides intersecting a simple road network during a landslide triggering event, and apply simple network indices to measure the state of the road network in the affected region. A 4000 x 4000 cell array with a 5 m x 5 m resolution was used, with a pre-defined simple road network laid onto it, and landslides 'randomly' dropped onto it. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. This statistical distribution was chosen based on three substantially complete triggered landslide inventories recorded in existing literature. The number of landslide areas (NL) selected for each triggered event iteration was chosen to give an average density of 1 landslide km-2, i.e. NL = 400 landslide areas chosen randomly for each iteration, and was based on several existing triggered landslide event inventories. A simple road network was chosen in a 'T' shape configuration, with one road of 1 x 4000 cells (5 m x 20 km) and another of 1 x 2000 cells (5 m x 10 km). The landslide areas were then randomly 'dropped' over the road array and indices such as the location, size (ABL) and number of road blockages (NBL) recorded. This process was performed 500 times (iterations) in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event with 400 landslides over a 400 km2 region, the number of road blocks per iteration, NBL, ranges from 0 to 7.
The average blockage area over the 500 iterations (mean ABL) is about 3000 m2, which closely matches the mean landslide area AL for the triggered landslide inventories. We further find that, over the 500 iterations, the probability of a given number of road blocks occurring on any given iteration, p(NBL), as a function of NBL, follows reasonably well a three-parameter inverse gamma probability density distribution with an exponential rollover (i.e., the most frequent value) at NBL = 1.3. In this paper we have begun to calculate the probability of a given number of landslides blocking roads during a triggering event, and have found that this follows an inverse-gamma distribution, similar to that found for the statistics of landslide areas resulting from triggers. As we progress to model more realistic road networks, this work will aid both long-term and disaster management of road networks by allowing probabilistic assessment of potential road network damage during landslide triggering event scenarios of different magnitudes.
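A stripped-down version of this Monte Carlo can be sketched as follows. The inverse-gamma shape and scale below are illustrative guesses (chosen so the mean area is a few thousand m2), not the paper's fitted three-parameter values, and landslides are idealized as discs dropped beside a single straight road.

```python
import numpy as np
from scipy import stats

def simulate_blockages(n_landslides=400, road_km=20.0, iterations=500, seed=3):
    """Drop landslides with inverse-gamma distributed areas onto a region
    crossed by one straight road (the line y = 0) and count, per iteration,
    how many landslide discs intersect the road."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(iterations):
        areas = stats.invgamma.rvs(a=1.4, scale=1e-3,
                                   size=n_landslides, random_state=rng)  # km^2
        radii = np.sqrt(areas / np.pi)                   # disc radius, km
        y = rng.uniform(-road_km / 2.0, road_km / 2.0, n_landslides)
        counts.append(int(np.sum(np.abs(y) < radii)))    # disc crosses y = 0
    return np.array(counts)

blocks = simulate_blockages()
```

With these assumed parameters the per-iteration blockage counts come out in the low single digits, in line with the 0 to 7 range reported above.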
Ecology and shell chemistry of Loxoconcha matagordensis
Cronin, T. M.; Kamiya, T.; Dwyer, G.S.; Belkin, H.; Vann, C.D.; Schwede, S.; Wagner, R.
2005-01-01
Studies of the seasonal ecology and shell chemistry of the ostracode Loxoconcha matagordensis and related species of Loxoconcha from regions off eastern North America reveal that shell size and trace elemental (Mg/Ca ratio) composition are useful in paleothermometry using fossil populations. Seasonal sampling of populations from Chesapeake Bay, augmented by samples from Florida Bay, indicate that shell size is inversely proportional to water temperature and that Mg/Ca ratios are positively correlated with the water temperature in which the adult carapace was secreted. Microprobe analyses of sectioned valves reveal intra-shell variability in Mg/Ca ratios but this does not strongly influence the utility of whole shell Mg/Ca analyses for paleoclimate application.
Prissel, Mark A; Roukis, Thomas S
2014-12-01
Lateral ankle instability is a common mechanical problem that often requires surgical management when conservative efforts fail. Historically, myriad open surgical approaches have been proposed. Recently, consideration for arthroscopic management of lateral ankle instability has become popular, with promising results. Unfortunately, recurrent inversion ankle injury following lateral ankle stabilization can occur and require revision surgery. To date, arthroscopic management for revision lateral ankle stabilization has not been described. We present a novel arthroscopic technique combining an arthroscopic lateral ankle stabilization kit with a suture anchor ligament augmentation system for revision as well as complex primary lateral ankle stabilization. © 2014 The Author(s).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Incorporation of real-time component information using equipment condition assessment (ECA) through the development of enhanced risk monitors (ERM) for active components in advanced reactor (AR) and advanced small modular reactor (SMR) designs. We incorporate time-dependent failure probabilities from prognostic health management (PHM) systems to dynamically update the risk metric of interest. This information is used to augment data used for supervisory control and plant-wide coordination of multiple modules by providing the incremental risk incurred due to aging and demands placed on components that support mission requirements.
Studies of the Ignition and Combustion of Boron Particles for Air - Augmented Rocket Applications
1974-10-01
…is probably not sufficient to be clearly demonstrated by our experiments. …cleaned, the particle will not ignite… the combustion of boron occurs in two successive stages; after heat-up to about 1800 K…
1991-10-01
…low potential evapotranspiration, and having basin morphologies conducive to storing large amounts of water, probably have some capacity for augmenting low flows. For example, in a study of 38 Minnesota drainage basins, Ackroyd et al. (1967/MN:R) concluded that lakes and wetlands, in general… …organic layer that is less permeable to ground water exchange. This may even isolate or seal a basin from the ground water. However, Born et al. (1979…
Green's function multiple-scattering theory with a truncated basis set: An augmented-KKR formalism
NASA Astrophysics Data System (ADS)
Alam, Aftab; Khan, Suffian N.; Smirnov, A. V.; Nicholson, D. M.; Johnson, Duane D.
2014-11-01
The Korringa-Kohn-Rostoker (KKR) Green's function, multiple-scattering theory is an efficient site-centered, electronic-structure technique for addressing an assembly of N scatterers. Wave functions are expanded in a spherical-wave basis on each scattering center and indexed up to a maximum orbital and azimuthal number Lmax = (l, m)max, while scattering matrices, which determine spectral properties, are truncated at Ltr = (l, m)tr, where the phase shifts δl for l > ltr are negligible. Historically, Lmax is set equal to Ltr, which is correct for large enough Lmax but not computationally expedient; a better procedure retains higher-order (free-electron and single-site) contributions for Lmax > Ltr with δl (l > ltr) set to zero [X.-G. Zhang and W. H. Butler, Phys. Rev. B 46, 7433 (1992), 10.1103/PhysRevB.46.7433]. We present a numerically efficient and accurate augmented-KKR Green's function formalism that solves the KKR equations by exact matrix inversion [an R^3 process with rank N(ltr+1)^2] and includes higher-L contributions via linear algebra [an R^2 process with rank N(lmax+1)^2]. The augmented-KKR approach yields properly normalized wave functions, numerically cheaper basis-set convergence, and a total charge density and electron count that agree with Lloyd's formula. We apply our formalism to fcc Cu, bcc Fe, and L10 CoPt and present the numerical results for accuracy and for the convergence of the total energies, Fermi energies, and magnetic moments versus Lmax for a given Ltr.
Dealing with non-unique and non-monotonic response in particle sizing instruments
NASA Astrophysics Data System (ADS)
Rosenberg, Phil
2017-04-01
A number of instruments used as de-facto standards for measuring particle size distributions are actually incapable of uniquely determining the size of an individual particle. This is due to non-unique or non-monotonic response functions. Optical particle counters have a non-monotonic response due to oscillations in the Mie response curves, especially for large aerosol and small cloud droplets. Scanning mobility particle sizers respond identically to two particles when the ratio of particle size to particle charge is approximately the same. Images of two differently sized cloud or precipitation particles taken by an optical array probe can have similar dimensions or shadowed areas depending upon where they are in the imaging plane. A number of methods exist to deal with these issues, including assuming that positive and negative errors cancel, smoothing response curves, integrating regions in measurement space before conversion to size space, and matrix inversion. Matrix inversion (also called kernel inversion) has the advantage that it determines the size distribution that best matches the observations, given specific information about the instrument (a matrix specifying the probability that a particle of a given size will be measured in a given instrument size bin). In this way it maximises use of the information in the measurements. However, this technique can be confused by poor counting statistics, which can cause erroneous results and negative concentrations. Also, an effective method for propagating uncertainties is yet to be published or routinely implemented. Here we present a new alternative that overcomes these issues.
We use Bayesian methods to determine the probability that a given size distribution is correct given a set of instrument data, and then we use Markov chain Monte Carlo methods to sample this many-dimensional probability distribution function to determine the expectation and (co)variances, hence providing a best guess and an uncertainty for the size distribution that includes contributions from the non-unique response curve and counting statistics, and can propagate calibration uncertainties.
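The approach described above can be illustrated with a minimal sketch: a hypothetical 3-bin instrument response matrix (the kernel), Poisson counting statistics for the observed bin counts, and a random-walk Metropolis sampler over log-concentrations whose posterior mean and standard deviation supply the best guess and uncertainty. All numbers here are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instrument kernel: K[i, j] = probability that a particle
# of size class j is counted in instrument bin i (rows overlap, which
# encodes the non-unique response).
K = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
n_true = np.array([200.0, 100.0, 50.0])   # true concentrations per size class
counts = rng.poisson(K @ n_true)          # observed bin counts

def log_post(log_n):
    """Poisson log-likelihood of the counts given size distribution exp(log_n)."""
    lam = K @ np.exp(log_n)
    return np.sum(counts * np.log(lam) - lam)

# Random-walk Metropolis on log-concentrations (keeps concentrations positive,
# so no negative concentrations can arise, unlike direct matrix inversion).
x = np.log(np.full(3, counts.sum() / 3.0))
lp = log_post(x)
samples = []
for it in range(20000):
    prop = x + rng.normal(scale=0.05, size=3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    if it >= 5000:                        # discard burn-in
        samples.append(np.exp(x))

samples = np.array(samples)
mean, sd = samples.mean(axis=0), samples.std(axis=0)  # best guess + uncertainty
```

The posterior standard deviations automatically reflect the counting statistics: size classes with fewer counts, or classes that the kernel mixes together, come out with wider uncertainties.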
Pogoda, Janice M.; Gross, Noah B.; Arakaki, Xianghong; Fonteh, Alfred N.; Cowan, Robert P.
2016-01-01
Objective: We investigated whether dietary sodium intake from respondents of a national cross‐sectional nutritional study differed by history of migraine or severe headaches. Background: Several lines of evidence support a disruption of sodium homeostasis in migraine. Design: Our analysis population was 8819 adults in the 1999–2004 National Health and Nutrition Examination Survey (NHANES) with reliable data on diet and headache history. We classified respondents who reported a history of migraine or severe headaches as having probable history of migraine. To reduce the diagnostic conflict from medication overuse headache, we excluded respondents who reported taking analgesic medications. Dietary sodium intake was measured using validated estimates of self‐reported total grams of daily sodium consumption and was analyzed as the residual value from the linear regression of total grams of sodium on total calories. Multivariable logistic regression that accounted for the stratified, multistage probability cluster sampling design of NHANES was used to analyze the relationship between migraine and dietary sodium. Results: Odds of probable migraine history decreased with increasing dietary sodium intake (odds ratio = 0.93, 95% confidence interval = 0.87, 1.00, P = .0455). This relationship was maintained after adjusting for age, sex, and body mass index (BMI) with slightly reduced significance (P = .0505). In women, this inverse relationship was limited to those with lower BMI (P = .007), while in men the relationship did not differ by BMI. We likely excluded some migraineurs by omitting frequent analgesic users; however, a sensitivity analysis suggested little effect from this exclusion. Conclusions: This study is the first evidence of an inverse relationship between migraine and dietary sodium intake.
These results are consistent with altered sodium homeostasis in migraine and our hypothesis that dietary sodium may affect brain extracellular fluid sodium concentrations and neuronal excitability. PMID:27016121
Pogoda, Janice M; Gross, Noah B; Arakaki, Xianghong; Fonteh, Alfred N; Cowan, Robert P; Harrington, Michael G
2016-04-01
We investigated whether dietary sodium intake from respondents of a national cross-sectional nutritional study differed by history of migraine or severe headaches. Several lines of evidence support a disruption of sodium homeostasis in migraine. Our analysis population was 8819 adults in the 1999-2004 National Health and Nutrition Examination Survey (NHANES) with reliable data on diet and headache history. We classified respondents who reported a history of migraine or severe headaches as having probable history of migraine. To reduce the diagnostic conflict from medication overuse headache, we excluded respondents who reported taking analgesic medications. Dietary sodium intake was measured using validated estimates of self-reported total grams of daily sodium consumption and was analyzed as the residual value from the linear regression of total grams of sodium on total calories. Multivariable logistic regression that accounted for the stratified, multistage probability cluster sampling design of NHANES was used to analyze the relationship between migraine and dietary sodium. Odds of probable migraine history decreased with increasing dietary sodium intake (odds ratio = 0.93, 95% confidence interval = 0.87, 1.00, P = .0455). This relationship was maintained after adjusting for age, sex, and body mass index (BMI) with slightly reduced significance (P = .0505). In women, this inverse relationship was limited to those with lower BMI (P = .007), while in men the relationship did not differ by BMI. We likely excluded some migraineurs by omitting frequent analgesic users; however, a sensitivity analysis suggested little effect from this exclusion. This study is the first evidence of an inverse relationship between migraine and dietary sodium intake. These results are consistent with altered sodium homeostasis in migraine and our hypothesis that dietary sodium may affect brain extracellular fluid sodium concentrations and neuronal excitability. 
© 2016 The Authors Headache published by Wiley Periodicals, Inc. on behalf of American Headache Society.
A strategy for improved computational efficiency of the method of anchored distributions
NASA Astrophysics Data System (ADS)
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a "bundle" of similar model parametrizations replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
Liu, X; Zhai, Z
2008-02-01
Indoor pollutions jeopardize human health and welfare and may even cause serious morbidity and mortality under extreme conditions. To effectively control and improve indoor environment quality requires immediate interpretation of pollutant sensor readings and accurate identification of indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic contaminant indoor dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability concept based inverse modeling method that is able to identify the source location for an instantaneous point source placed in an enclosed environment with known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating good capability of the method in finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source location with limited sensor outputs. This will ensure an effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
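The core of such a probability-based source identification can be sketched as a Bayesian comparison of candidate source locations: forward simulations predict what each candidate source would produce at the sensors, and the posterior weights the candidates by how well those predictions match the readings. The forward-model outputs, noise level, and candidate set below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical forward-model output: predicted readings at 3 sensors for
# each of 4 candidate source locations (e.g. from prior CFD simulations).
predicted = np.array([[1.0, 0.2, 0.1],
                      [0.3, 0.9, 0.2],
                      [0.1, 0.3, 1.1],
                      [0.4, 0.4, 0.4]])
observed = np.array([0.35, 0.85, 0.25])   # actual sensor readings
sigma = 0.1                               # assumed sensor noise (std dev)

# Posterior over candidate locations: uniform prior x Gaussian likelihood.
log_like = -0.5 * np.sum((predicted - observed) ** 2, axis=1) / sigma**2
post = np.exp(log_like - log_like.max())  # subtract max for stability
post /= post.sum()

best = int(np.argmax(post))               # most probable source location
```

The same normalization trick (subtracting the maximum log-likelihood before exponentiating) keeps the computation stable even when candidate misfits differ by many orders of magnitude.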
NASA Astrophysics Data System (ADS)
Rosas-Carbajal, M.; Linde, N.; Peacock, J.; Zyserman, F. I.; Kalscheuer, T.; Thiel, S.
2015-12-01
Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are more shallow than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted subsurface heterogeneities in the base model from which time-lapse changes are inferred.
Rosas-Carbajal, Marina; Linde, Nicolas; Peacock, Jared R.; Zyserman, F. I.; Kalscheuer, Thomas; Thiel, Stephan
2015-01-01
Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are more shallow than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted subsurface heterogeneities in the base model from which time-lapse changes are inferred.
Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas
2016-06-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods.
The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols.
Dinov, Ivo D.; Siegrist, Kyle; Pearl, Dennis K.; Kalinin, Alexandr; Christou, Nicolas
2015-01-01
Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome, which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. 
The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the learning assessment protocols. PMID:27158191
Computer-aided diagnosis with potential application to rapid detection of disease outbreaks.
Burr, Tom; Koster, Frederick; Picard, Rick; Forslund, Dave; Wokoun, Doug; Joyce, Ed; Brillman, Judith; Froman, Phil; Lee, Jack
2007-04-15
Our objectives are to quickly interpret symptoms of emergency patients to identify likely syndromes and to improve population-wide disease outbreak detection. We constructed a database of 248 syndromes, each syndrome having an estimated probability of producing any of 85 symptoms, with some two-way, three-way, and five-way probabilities reflecting correlations among symptoms. Using these multi-way probabilities in conjunction with an iterative proportional fitting algorithm allows estimation of full conditional probabilities. Combining these conditional probabilities with misdiagnosis error rates and incidence rates via Bayes' theorem, the probability of each syndrome is estimated. We tested a prototype of computer-aided differential diagnosis (CADDY) on simulated data and on more than 100 real cases, including West Nile Virus, Q fever, SARS, anthrax, plague, tularaemia and toxic shock cases. We conclude that: (1) it is important to determine whether the unrecorded positive status of a symptom means that the status is negative or that the status is unknown; (2) inclusion of misdiagnosis error rates produces more realistic results; (3) the naive Bayes classifier, which assumes all symptoms behave independently, is slightly outperformed by CADDY, which includes available multi-symptom information on correlations; as more information regarding symptom correlations becomes available, the advantage of CADDY over the naive Bayes classifier should increase; (4) overlooking low-probability, high-consequence events is less likely if the standard output summary is augmented with a list of rare syndromes that are consistent with observed symptoms, and (5) accumulating patient-level probabilities across a larger population can aid in biosurveillance for disease outbreaks. © 2007 John Wiley & Sons, Ltd.
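The naive Bayes baseline that CADDY is compared against can be sketched in a few lines: symptom probabilities per syndrome are combined with incidence rates (priors) via Bayes' theorem, assuming symptoms are independent given the syndrome. The three toy syndromes, symptom probabilities, and priors below are entirely hypothetical, chosen only to show how a rare, high-consequence syndrome can still dominate the posterior when its symptom pattern fits.

```python
import numpy as np

# Toy database: P(symptom | syndrome) for [fever, cough, rash],
# plus incidence rates used as priors (all values hypothetical).
p_symptom = {
    "flu":     np.array([0.9, 0.8, 0.05]),
    "measles": np.array([0.8, 0.3, 0.90]),
    "cold":    np.array([0.2, 0.7, 0.01]),
}
prior = {"flu": 0.05, "measles": 0.001, "cold": 0.10}

def naive_bayes(observed):
    """P(syndrome | symptoms), assuming symptoms independent given syndrome.

    observed[i] = 1 if symptom i is present, 0 if explicitly absent
    (the present/absent vs unknown distinction matters, per point (1)).
    """
    scores = {}
    for s, p in p_symptom.items():
        likelihood = np.prod(np.where(observed == 1, p, 1.0 - p))
        scores[s] = prior[s] * likelihood
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

# Patient with fever and rash but no cough: the rare syndrome wins
# despite a 100x smaller prior, because the symptom pattern fits.
post = naive_bayes(np.array([1, 0, 1]))
```

CADDY improves on this baseline by replacing the independence assumption with multi-way symptom probabilities fitted by iterative proportional fitting, but the Bayes-theorem combination with priors and error rates is structurally the same.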
NASA Technical Reports Server (NTRS)
Backus, George E.
1999-01-01
The purpose of the grant was to study how prior information about the geomagnetic field can be used to interpret surface and satellite magnetic measurements, to generate quantitative descriptions of prior information that might be so used, and to use this prior information to obtain from satellite data a model of the core field with statistically justifiable error estimates. The need for prior information in geophysical inversion has long been recognized. Data sets are finite, and faithful descriptions of aspects of the earth almost always require infinite-dimensional model spaces. By themselves, the data can confine the correct earth model only to an infinite-dimensional subset of the model space. Earth properties other than direct functions of the observed data cannot be estimated from those data without prior information about the earth. Prior information is based on what the observer already knows before the data become available. Such information can be "hard" or "soft". Hard information is a belief that the real earth must lie in some known region of model space. For example, the total ohmic dissipation in the core is probably less than the total observed geothermal heat flow out of the earth's surface. (In principle, ohmic heat in the core can be recaptured to help drive the dynamo, but this effect is probably small.) "Soft" information is a probability distribution on the model space, a distribution that the observer accepts as a quantitative description of her/his beliefs about the earth. The probability distribution can be a subjective prior in the sense of Bayes or the objective result of a statistical study of previous data or relevant theories.
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki; Watanabe, Tsutomu
2014-03-01
We start from Gibrat's law and quasi-inversion symmetry for three firm size variables (i.e., tangible fixed assets K, number of employees L, and sales Y) and derive a partial differential equation to be satisfied by the joint probability density function of K and L. We then transform K and L, which are correlated, into two independent variables by applying surface openness used in geomorphology and provide an analytical solution to the partial differential equation. Using worldwide data on the firm size variables for companies, we confirm that the estimates on the power-law exponents of K, L, and Y satisfy a relationship implied by the theory.
ERIC Educational Resources Information Center
Collier, Daniel A.; Rosch, David M.; Houston, Derek A.
2017-01-01
International student enrollment has experienced dramatic increases on U.S. campuses. Using a national dataset, the study explores and compares international and domestic students' incoming and post-training levels of motivation to lead, leadership self-efficacy, and leadership skill using inverse-probability weighting of propensity scores to…
A superstatistical model of metastasis and cancer survival
NASA Astrophysics Data System (ADS)
Leon Chen, L.; Beck, Christian
2008-05-01
We introduce a superstatistical model for the progression statistics of malignant cancer cells. The metastatic cascade is modeled as a complex nonequilibrium system with several macroscopic pathways and inverse-chi-square distributed parameters of the underlying Poisson processes. The predictions of the model are in excellent agreement with observed survival-time probability distributions of breast cancer patients.
Dietary predictors of arterial stiffness in a cohort with type 1 and type 2 diabetes.
Petersen, K S; Keogh, J B; Meikle, P J; Garg, M L; Clifton, P M
2015-02-01
To determine the dietary predictors of central blood pressure, augmentation index and pulse wave velocity (PWV) in subjects with type 1 and type 2 diabetes. Participants were diagnosed with type 1 or type 2 diabetes and had PWV and/or pulse wave analysis performed. Dietary intake was measured using the Dietary Questionnaire for Epidemiological Studies Version 2 Food Frequency Questionnaire. Serum lipid species and carotenoids were measured, using liquid chromatography electrospray ionization-tandem mass spectrometry and high performance liquid chromatography, as biomarkers of dairy and vegetable intake, respectively. Associations were determined using linear regression adjusted for potential confounders. PWV (n = 95) was inversely associated with reduced-fat dairy intake (β = -0.01; 95% CI -0.02, -0.01; p < 0.05), in particular yoghurt consumption (β = -0.04; 95% CI -0.09, -0.01; p < 0.05), after multivariate adjustment. Total vegetable consumption was negatively associated with PWV in the whole cohort after full adjustment (β = -0.04; 95% CI -0.07, -0.01; p < 0.05). Individual lipid species, particularly those containing 14:0, 15:0, 16:0, 17:0 and 17:1 fatty acids, known to be of ruminant origin, in lysophosphatidylcholine, cholesterol ester, diacylglycerol, phosphatidylcholine, sphingomyelin and triacylglycerol classes were positively associated with intake of full fat dairy, after adjustment for multiple comparisons. However, there was no association between serum lipid species and PWV. There were no dietary predictors of central blood pressure or augmentation index after multivariate adjustment. In this cohort of subjects with diabetes, reduced-fat dairy intake and vegetable consumption were inversely associated with PWV. The lack of a relationship between serum lipid species and PWV suggests that the fatty acid composition of dairy may not explain the beneficial effect. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shin, S.H.; Meroney, R.N.; Neff, D.E.
1991-03-01
Measurements of the behavior of simulated liquefied natural gas clouds dispersing over a small-scale model placed in environmental wind tunnels permit evaluation of the fluid physics of dense cloud movement and dispersion in a controlled environment. A large data base on the interaction of simulated LNG plumes with the Falcon test configuration of vapor barrier fences and vortex generators was obtained. The purpose of the reported test program is to provide post-field-spill wind tunnel experiments to augment the LNG Vapor Fence Field Program data obtained during the Falcon Test Series in 1987. The goal of the program is to determine the probable response of a dense LNG vapor cloud to vortex inducer obstacles and fences, examine the sensitivity of results to various scaling arguments which might augment, limit, or extend the value of the field and wind-tunnel tests, and identify important details of the spill behavior which were not predicted during the pretest planning phase.
A Real-Time Augmented Reality System to See-Through Cars.
Rameau, Francois; Ha, Hyowon; Joo, Kyungdon; Choi, Jinsoo; Park, Kibaek; Kweon, In So
2016-11-01
One of the most hazardous driving scenarios is the overtaking of a slower vehicle: the front vehicle (being overtaken) can occlude an important part of the field of view of the rear vehicle's driver. This lack of visibility is the most probable cause of accidents in this context. Recent research works tend to prove that augmented reality applied to assisted driving can significantly reduce the risk of accidents. In this paper, we present a real-time marker-less system to see through cars. For this purpose, two cars are equipped with cameras and an appropriate wireless communication system. The stereo vision system mounted on the front car allows the creation of a sparse 3D map of the environment in which the rear car can be localized. Using this inter-car pose estimation, a synthetic image is generated to overcome the occlusion and to create a seamless see-through effect which preserves the structure of the scene.
Implications of caesarean section for children's school achievement: A population-based study.
Smithers, Lisa G; Mol, Ben W; Wilkinson, Chris; Lynch, John W
2016-08-01
Caesarean birth is one of the most frequently performed major obstetrical interventions. Although there is speculation that caesarean at term may have consequences for children's later health and development, longer-term studies are needed. We aimed to evaluate risks of poor school achievement among children born by caesarean section compared with spontaneous vaginal birth. This population-based observational study involved linkage of routinely collected perinatal data with children's school assessments. Perinatal data included all children born in South Australia from 1999 to 2005. Participants were children born by elective caesarean (exposed, n = 650) or vaginal birth (unexposed, n = 2959), to women who previously had a caesarean delivery. School assessments were reported via a standardised national assessment program for children attending grade three (at ~eight years of age). Assessments included reading, writing, spelling, grammar and numeracy and were categorised according to performing above or at or below (≤) National Minimum Standards (NMS). Statistical analyses involved augmented inverse probability weighting (aipw) and accounted for a range of maternal, perinatal and sociodemographic characteristics. Children performing ≤NMS for vaginal birth versus caesarean section were as follows: reading 144/640 (23%) and 688/2921 (24%), writing 69/636 (11%) and 351/2917 (12%), spelling 128/646 (20%) and 684/2937 (23%), grammar 132/646 (20%) and 655/2937 (22%), and numeracy 151/634 (24%) and 729/2922 (25%). Both the raw data and the aipw analyses suggested little difference in school achievement between children born by caesarean versus vaginal birth. Analyses that carefully controlled for a wide range of confounders suggest that caesarean section does not increase the risk of poor school outcomes at age eight. © 2016 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.
Orellana, Liliana; Rotnitzky, Andrea; Robins, James M
2010-01-01
Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited for the investigation of the effects of dynamic treatment regimes because of the variability in treatment decisions found in them. This variability exists because different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimate the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information. The regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) which incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.
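The augmented inverse probability weighted (AIPW) construction that recurs in these abstracts can be shown in its simplest form: estimating a population mean when the outcome is missing at random. The IPW term reweights the observed outcomes by the inverse observation probability, and the augmentation term uses an outcome regression to recover efficiency (and double robustness: the estimator is consistent if either working model is correct). The simulated data and working models below are a sketch for illustration, with both models correctly specified.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Simulated data: outcome y depends on covariate x; y is observed with a
# probability that also depends on x (missing at random given x).
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)               # true E[y] = 2
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x)))       # true observation probability
r = rng.uniform(size=n) < p_obs                # r = True: y observed

# Working models (here set to the truth for illustration; in practice
# these would be fitted, e.g. by logistic and linear regression):
pi_hat = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # propensity model
m_hat = 2.0 + x                                # outcome regression E[y | x]

# AIPW estimator of E[y]: IPW term plus augmentation term.
aipw = np.mean(np.where(r, y / pi_hat, 0.0) - (r - pi_hat) / pi_hat * m_hat)

# Naive complete-case mean: biased upward, because units with large x
# are both more likely to be observed and have larger y.
cc = y[r].mean()
```

Running this, `aipw` lands close to the true mean of 2 while the complete-case mean `cc` is visibly biased, which is exactly the contrast drawn in the complete-case versus weighted analyses above.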
Methodology for building confidence measures
NASA Astrophysics Data System (ADS)
Bramson, Aaron L.
2004-04-01
This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on the establishment of links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Li, Dewei; Xi, Yugeng
2013-07-01
This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of the performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
Koelsch, Stefan; Busch, Tobias; Jentschke, Sebastian; Rohrmeier, Martin
2016-02-02
Within the framework of statistical learning, many behavioural studies investigated the processing of unpredicted events. However, surprisingly few neurophysiological studies are available on this topic, and no statistical learning experiment has investigated electroencephalographic (EEG) correlates of processing events with different transition probabilities. We carried out an EEG study with a novel variant of the established statistical learning paradigm. Timbres were presented in isochronous sequences of triplets. The first two sounds of all triplets were equiprobable, while the third sound occurred with either low (10%), intermediate (30%), or high (60%) probability. Thus, the occurrence probability of the third item of each triplet (given the first two items) was varied. Compared to high-probability triplet endings, endings with low and intermediate probability elicited an early anterior negativity that had an onset around 100 ms and was maximal at around 180 ms. This effect was larger for events with low than for events with intermediate probability. Our results reveal that, when predictions are based on statistical learning, events that do not match a prediction evoke an early anterior negativity, with the amplitude of this mismatch response being inversely related to the probability of such events. Thus, we report a statistical mismatch negativity (sMMN) that reflects statistical learning of transitional probability distributions that go beyond auditory sensory memory capabilities.
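The paradigm's transition structure is easy to mimic in simulation. The sketch below (the labels 'A', 'B', 'X', 'Y', 'Z' are invented placeholders, not the study's timbres) generates triplet streams whose third item occurs with 60/30/10% conditional probability and recovers those probabilities empirically:

```python
import random

def make_triplet_stream(n_triplets, seed=0):
    """Build a stream of triplets: the first two items are fixed, the
    third is drawn with high (60%), intermediate (30%) or low (10%)
    conditional probability."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_triplets):
        third = rng.choices(['X', 'Y', 'Z'], weights=[0.6, 0.3, 0.1])[0]
        stream.extend(['A', 'B', third])
    return stream

def empirical_third_probs(stream):
    """Estimate the conditional probability of each triplet ending."""
    thirds = stream[2::3]
    return {s: thirds.count(s) / len(thirds) for s in set(thirds)}
```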
NASA Astrophysics Data System (ADS)
Yusoh, R.; Saad, R.; Saidin, M.; Muhammad, S. B.; Anda, S. T.
2018-04-01
Electrical resistivity and seismic refraction profiling have both become common methods in pre-investigations for visualizing subsurface structure. The motivation for combining them is that joint use can reduce the ambiguity inherent in applying either method alone. Each method has its own software packages for data inversion, but the ability to combine different geophysical methods within them is restricted; research algorithms with this functionality do exist, however, and must be evaluated individually. The interpretation of the subsurface was improved by jointly inverting data from both methods, letting each influence the other's model through closure coupling, so that the two methods support one another and improve the subsurface interpretation. These methods were applied to a field dataset from a pre-investigation for archaeology aimed at finding the material deposits of an impact crater. Combining the data inversions produced no major changes in the inverted model for this archetype, probably owing to the complex geology. The combined data analysis shows that the deposit material extends from the ground surface to 20 m depth, with the class separation clearly delineating the deposit material.
INFO-RNA--a fast approach to inverse RNA folding.
Busch, Anke; Backofen, Rolf
2006-08-01
The structure of RNA molecules is often crucial for their function. Therefore, secondary structure prediction has gained much interest. Here, we consider the inverse RNA folding problem, which means designing RNA sequences that fold into a given structure. We introduce a new algorithm for the inverse folding problem (INFO-RNA) that consists of two parts: a dynamic programming method that generates good initial sequences, followed by an improved stochastic local search that uses an effective neighbor-selection method. During the initialization, we design a sequence that, among all sequences, adopts the given structure with the lowest possible energy. For the selection of neighbors during the search, we use a kind of one-step look-ahead, applying an additional energy-based criterion. Afterwards, the pre-ordered neighbors are tested using the actual optimization criterion of minimizing the structure distance between the target structure and the mfe structure of the considered neighbor. We compared our algorithm to RNAinverse and RNA-SSD on artificial and biological test sets. Using INFO-RNA, we performed better than RNAinverse and, in most cases, obtained better results than RNA-SSD, probably the best inverse RNA folding tool on the market. www.bioinf.uni-freiburg.de?Subpages/software.html.
Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.
2013-01-01
The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of the estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used: a geostatistical autocorrelation function enforces structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed-parameter application.
Inverse structure functions in the canonical wind turbine array boundary layer
NASA Astrophysics Data System (ADS)
Viggiano, Bianca; Gion, Moira; Ali, Naseem; Tutkun, Murat; Cal, Raúl Bayoán
2015-11-01
Insight into the statistical behavior of the flow past an array of wind turbines is useful in determining how to improve power extraction from the overall available energy. Considering a wind tunnel experiment, hot-wire anemometer velocity signals are obtained at the centerline of a 3 x 3 canonical wind turbine array boundary layer. Two downstream locations are considered, referring to the near- and far-wake, and 21 vertical points were acquired per profile. Velocity increments are used to quantify the ordinary and inverse structure functions at both locations, and the relationship between their scaling exponents is noted. It is of interest to discern whether there is evidence of an inverted scaling. The inverse structure functions will also be discussed from the standpoint of proximity to the array. Observations will also address whether inverted scaling exponents follow power-law behavior; furthermore, extended self-similarity of the second moment is used to obtain the scaling exponents of other moments. Inverse structure functions of moments one through eight are tested via probability density functions, and the behavior of the negative moment is investigated as well. National Science Foundation-CBET-1034581.
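The two statistics can be sketched directly from their definitions (a simplified O(n²) illustration on a 1-D signal, not the study's analysis code): the ordinary structure function averages moments of velocity increments at a fixed separation, while the inverse structure function averages moments of the first separation — the exit distance — at which the increment exceeds a fixed threshold.

```python
import numpy as np

def structure_function(u, r, q):
    """Ordinary structure function: <|u(x+r) - u(x)|^q> at separation r."""
    du = np.abs(u[r:] - u[:-r])
    return np.mean(du ** q)

def inverse_structure_function(u, delta_u, q):
    """Inverse structure function: <r^q>, where r is the smallest
    separation with |u(x+r) - u(x)| >= delta_u (the exit distance)."""
    exits = []
    for i in range(len(u) - 1):
        diff = np.abs(u[i + 1:] - u[i])
        hit = int(np.argmax(diff >= delta_u))
        if diff[hit] >= delta_u:       # threshold actually reached
            exits.append(hit + 1)
    return np.mean(np.array(exits, dtype=float) ** q)
```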
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
NASA Astrophysics Data System (ADS)
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distributions, along with their associated uncertainties, has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer spatially resolved soft X-ray profiles, especially in the plasma center, from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to other conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability for uncertainty analysis; in consequence, scientists concerned with the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and uncertainty calculations fast. Additionally, the hyper-parameters embedded in the model assumptions can be optimized through a Bayesian Occam's razor formalism, thereby automatically adjusting the model complexity. The method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
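Because the forward model (line integration) is linear and the noise Gaussian, the posterior is an analytically available multivariate normal, as the abstract notes. A generic sketch follows, with a stationary squared-exponential kernel for brevity — the paper's point is precisely that a non-stationary length scale adapts better; the operator L, grid x and hyperparameters below are illustrative assumptions:

```python
import numpy as np

def gp_tomography_posterior(L, y, x, noise_var, length_scale=0.2, prior_var=1.0):
    """Posterior mean and covariance of an emissivity profile f on grid x,
    given linear line-integral data y = L f + Gaussian noise, under a
    squared-exponential GP prior on f."""
    d = x[:, None] - x[None, :]
    K = prior_var * np.exp(-0.5 * (d / length_scale) ** 2)  # prior covariance
    S = L @ K @ L.T + noise_var * np.eye(len(y))            # data covariance
    G = K @ L.T @ np.linalg.inv(S)                          # "gain" matrix
    return G @ y, K - G @ L @ K                             # mean, covariance
```

With a trivial (identity) observation operator and tiny noise, the posterior mean reproduces the data, which serves as a quick sanity check of the algebra.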
NASA Astrophysics Data System (ADS)
Realpe Campaña, Julian David; Porsani, Jorge Luís; Bortolozo, Cassiano Antonio; Serejo de Oliveira, Gabriela; Monteiro dos Santos, Fernando Acácio
2017-03-01
This work presents the results of a TEM profile acquired with the fixed-loop array, together with an analysis of the induced magnetic field, in the northwest region of São Paulo State, Brazil (Paraná Basin). The objectives of this research were to map the sedimentary and crystalline aquifers in the area and to analyze the behavior of the magnetic field through magnetic profiles. TEM measurements in the three spatial components were taken to create magnetic profiles of the induced (secondary) magnetic field. The TEM data were acquired using a fixed transmitter loop of 200 m × 200 m and a 3D coil receiver moving along a profile line of 1000 m. Magnetic profiles of the dBz, dBx and dBy components showed symmetrical spatial behavior related to the loop geometry. The z-component showed behavior probably related to the superparamagnetic effect (SPM). The dBz data were used to perform individual 1D inversions for each position and to generate an interpolated pseudo-2D geoelectric profile. The results showed two low-resistivity zones: the first shallow, between 10 m and 70 m depth, probably related to the Adamantina Formation (sedimentary aquifer); the second between 200 m and 300 m depth, probably related to a fractured zone filled with water or clay inside the basalt layer of the Serra Geral Formation (crystalline aquifer). These results agree with the well-log information available in the studied region.
Analytical tools and isolation of TOF events
NASA Technical Reports Server (NTRS)
Wolf, H.
1974-01-01
Analytical tools are presented in two reports. The first is a probability analysis of the orbital distribution of events in relation to the dust flux density distributions observed by Pioneer 8 and 9. A distinction is drawn between asymmetries caused by random fluctuations and systematic variations by calculating the probability of any particular asymmetry. The second article discusses particle trajectories in a repulsive force field. The force on a particle due to solar radiation pressure is directed along the particle's radius vector from the sun and is inversely proportional to its distance from the sun. Equations of motion that describe both solar radiation pressure and gravitational attraction are presented.
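The second analysis — motion under gravity plus a repulsive radial radiation-pressure force — can be sketched by scaling the solar gravitational parameter. In the common simplification where radiation pressure is taken to fall off as the inverse square of solar distance (unlike the report's inverse-distance law), its effect is a constant force ratio β, so the net acceleration is gravity reduced by (1 − β). This illustrative leapfrog integrator is not the report's method; β and the constants are assumptions:

```python
import numpy as np

MU_SUN = 1.32712440018e20  # solar gravitational parameter, m^3/s^2

def accel(r_vec, beta):
    """Net acceleration: gravity reduced by the radiation-pressure
    ratio beta (beta = 1 means the two forces cancel exactly)."""
    r = np.linalg.norm(r_vec)
    return -(1.0 - beta) * MU_SUN * r_vec / r**3

def leapfrog(r0, v0, beta, dt, n_steps):
    """Integrate the trajectory with a simple leapfrog scheme."""
    r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
    v = v + 0.5 * dt * accel(r, beta)      # half kick
    for _ in range(n_steps):
        r = r + dt * v                     # drift
        v = v + dt * accel(r, beta)        # full kick
    return r, v
```

For β = 0 a near-circular orbit at 1 au stays at nearly constant radius; for β = 1 the particle coasts in a straight line.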
NASA Astrophysics Data System (ADS)
Rebolledo, M. A.; Martinez-Betorz, J. A.
1989-04-01
In this paper the accuracy in the determination of the period of an oscillating signal, when obtained from the photon statistics time-interval probability, is studied as a function of the precision (the inverse of the cutoff frequency of the photon counting system) with which time intervals are measured. The results are obtained by means of an experiment with a square-wave signal, where the Fourier or square-wave transforms of the time-interval probability are measured. It is found that for values of the frequency of the signal near the cutoff frequency the errors in the period are small.
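A much simpler frequency-domain analogue — estimating the period from the dominant discrete-Fourier peak of the sampled signal itself, rather than from the photon time-interval probability used in the experiment — can be sketched as:

```python
import numpy as np

def period_from_fft(signal, dt):
    """Estimate the period of a periodic signal as the inverse of the
    frequency of the largest (non-DC) spectral peak."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))  # DC bin ~ 0
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return 1.0 / freqs[np.argmax(spec)]
```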
Refractory pulse counting processes in stochastic neural computers.
McNeill, Dean K; Card, Howard C
2005-03-01
This letter quantitatively investigates the effect of a temporary refractory period, or dead time, on the ability of a stochastic Bernoulli processor to record subsequent pulse events following the arrival of a pulse. These effects can arise either in the input detectors of a stochastic neural network or in subsequent processing. A transient period is observed, which increases with both the dead time and the Bernoulli probability of the dead-time-free system, during which the system reaches equilibrium. Unless the Bernoulli probability is small compared to the inverse of the dead time, the mean and variance of the pulse count distributions are both appreciably reduced.
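The depletion effect can be illustrated with a toy non-paralyzable discrete-time model (an assumption made here for illustration; the letter's hardware model may differ): after each recorded pulse the counter ignores the next `dead` slots, so on average one event is recorded per (1/p + dead) slots.

```python
import random

def simulate_dead_time(p, dead, n_slots, seed=1):
    """Count recorded pulses from a Bernoulli(p) train when each recorded
    pulse blinds the counter for `dead` subsequent slots."""
    rng = random.Random(seed)
    count, blind = 0, 0
    for _ in range(n_slots):
        if blind > 0:
            blind -= 1
        elif rng.random() < p:
            count += 1
            blind = dead
    return count

def predicted_rate(p, dead):
    """Non-paralyzable dead-time correction: p / (1 + p * dead)."""
    return p / (1.0 + p * dead)
```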
First Detected Arrival of a Quantum Walker on an Infinite Line
NASA Astrophysics Data System (ADS)
Thiel, Felix; Barkai, Eli; Kessler, David A.
2018-01-01
The first detection of a quantum particle on a graph is shown to depend sensitively on the distance ξ between the detector and initial location of the particle, and on the sampling time τ . Here, we use the recently introduced quantum renewal equation to investigate the statistics of first detection on an infinite line, using a tight-binding lattice Hamiltonian with nearest-neighbor hops. Universal features of the first detection probability are uncovered and simple limiting cases are analyzed. These include the large ξ limit, the small τ limit, and the power law decay with the attempt number of the detection probability over which quantum oscillations are superimposed. For large ξ the first detection probability assumes a scaling form and when the sampling time is equal to the inverse of the energy band width nonanalytical behaviors arise, accompanied by a transition in the statistics. The maximum total detection probability is found to occur for τ close to this transition point. When the initial location of the particle is far from the detection node we find that the total detection probability attains a finite value that is distance independent.
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super prompt critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte-Carlo calculations based on the algorithm presented in [7]. (authors)
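The last ingredient — recovering count-number probabilities from a generating function via inverse discrete Fourier transform — can be sketched generically. Here a Poisson generating function stands in for the solution of the adjoint Kolmogorov equations (an illustrative substitute, not the reactor model):

```python
import numpy as np

def pgf_to_probs(G, n_max):
    """Recover P(N = n), n = 0..n_max-1, from a probability generating
    function G by sampling it on the unit circle and applying an inverse
    DFT: P(n) = (1/n_max) * sum_k G(z_k) * z_k**(-n).  Accurate when the
    distribution has negligible mass beyond n_max (aliasing otherwise)."""
    z = np.exp(2j * np.pi * np.arange(n_max) / n_max)
    return np.real(np.fft.fft(G(z))) / n_max
```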
NASA Astrophysics Data System (ADS)
Chaleil, A.; Le Flanchec, V.; Binet, A.; Nègre, J. P.; Devaux, J. F.; Jacob, V.; Millerioux, M.; Bayle, A.; Balleyguier, P.; Prazeres, R.
2016-12-01
An inverse Compton scattering source is under development at the ELSA linac of CEA, Bruyères-le-Châtel. Ultra-short X-ray pulses are produced by inverse Compton scattering of 30 ps-laser pulses by relativistic electron bunches. The source will be able to operate in single shot mode as well as in recurrent mode with 72.2 MHz pulse trains. Within this framework, an optical multipass system that multiplies the number of emitted X-ray photons in both regimes has been designed in 2014, then implemented and tested on ELSA facility in the course of 2015. The device is described from both geometrical and timing viewpoints. It is based on the idea of folding the laser optical path to pile-up laser pulses at the interaction point, thus increasing the interaction probability. The X-ray output gain measurements obtained using this system are presented and compared with calculated expectations.
Inverse Statistics and Asset Allocation Efficiency
NASA Astrophysics Data System (ADS)
Bolgorian, Meysam
In this paper, using inverse statistics analysis, the effect of investment horizon on the efficiency of portfolio selection is examined. Inverse statistics analysis, also known as the probability distribution of exit times, is a general tool for finding the distribution of the time at which a stochastic process first exits a zone; it was used in Refs. 1 and 2 to study financial return time series. This distribution provides an optimal investment horizon, the most likely horizon for gaining a specified return. Using samples of stocks from the Tehran Stock Exchange (TSE), an emerging market, and the S&P 500, a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account in the TSE leads to greater efficiency for large portfolios, whereas for stocks selected from the S&P 500 the strategy fails to produce more efficient portfolios regardless of portfolio size; instead, longer investment horizons provide more efficiency.
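The exit-time statistic can be computed directly from its definition. A simplified O(n²) sketch on synthetic log-prices (not the paper's TSE/S&P data):

```python
import numpy as np

def exit_times(log_prices, rho):
    """For each starting day t, the first horizon tau at which the
    log-return log_prices[t+tau] - log_prices[t] reaches rho."""
    taus = []
    for t in range(len(log_prices) - 1):
        gains = log_prices[t + 1:] - log_prices[t]
        hit = int(np.argmax(gains >= rho))
        if gains[hit] >= rho:          # level actually reached
            taus.append(hit + 1)
    return np.array(taus)

def optimal_horizon(taus):
    """Most probable exit time: the mode of the exit-time distribution."""
    vals, counts = np.unique(taus, return_counts=True)
    return int(vals[np.argmax(counts)])
```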
Duality between Time Series and Networks
Campanharo, Andriana S. L. O.; Sirer, M. Irmak; Malmgren, R. Dean; Ramos, Fernando M.; Amaral, Luís A. Nunes.
2011-01-01
Studying the interaction between a system's components and the temporal evolution of the system are two common ways to uncover and characterize its internal workings. Recently, several maps from a time series to a network have been proposed with the intent of using network metrics to characterize time series. Although these maps demonstrate that different time series result in networks with distinct topological properties, it remains unclear how these topological properties relate to the original time series. Here, we propose a map from a time series to a network with an approximate inverse operation, making it possible to use network statistics to characterize time series and time series statistics to characterize networks. As a proof of concept, we generate an ensemble of time series ranging from periodic to random and confirm that application of the proposed map retains much of the information encoded in the original time series (or networks) after application of the map (or its inverse). Our results suggest that network analysis can be used to distinguish different dynamic regimes in time series and, perhaps more importantly, time series analysis can provide a powerful set of tools that augment the traditional network analysis toolkit to quantify networks in new and useful ways. PMID:21858093
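One simple member of this family of maps — a quantile-based transition network, shown here as an illustration and not necessarily the specific map the paper proposes — assigns each sample to a quantile bin (node) and weights directed edges by empirical one-step transition probabilities; the resulting row-stochastic matrix supports an approximate inverse via a random walk on the network:

```python
import numpy as np

def series_to_network(x, n_bins=4):
    """Map a time series to a weighted directed network: nodes are
    quantile bins, edge (a, b) carries the empirical probability of
    moving from bin a to bin b in one time step.  Assumes every bin
    is visited at least once with an outgoing transition."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    sym = np.clip(np.searchsorted(edges, x, side='right') - 1, 0, n_bins - 1)
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(sym[:-1], sym[1:]):
        W[a, b] += 1.0
    return W / W.sum(axis=1, keepdims=True), sym
```

A strictly periodic series then yields a deterministic cycle in the network, while a random series yields nearly uniform edge weights, which is the kind of topological distinction the abstract describes.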
Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.
Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter
2012-08-01
An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to nonlinearities such as atmospheric turbulence, model uncertainties and, of course, system failures. These systems therefore make a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. In this work the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws for the network weights are used for online training. Within these adaptation laws the standard gradient-descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. By accounting for the system's stability, this robust online learning method offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient-descent backpropagation algorithm in the presence of system failures.
Developing the (d,p γ) reaction as a surrogate for (n, γ) in inverse kinematics
NASA Astrophysics Data System (ADS)
Lepailleur, Alexandr; Sims, Harry; Garland, Heather; Baugher, Travis; Cizewski, Jolie A.; Ratkiewicz, Andrew; Walter, Daivid; Pain, Steven D.; Smith, Karl; Goddess Collaboration Collaboration
2017-09-01
The r-process that proceeds via (n, γ) reactions on neutron-rich nuclei is responsible for the synthesis of about half of the elements heavier than iron. Because (n, γ) measurements on short-lived isotopes are not possible, the (d,p γ) reaction is being investigated as a surrogate for (n, γ) . The experimental setup GODDESS (Gammasphere ORRUBA: Dual Detectors for Experimental Structure Studies) has been developed especially for this purpose. The Oak Ridge Rutgers University Barrel Array (ORRUBA) of position-sensitive silicon strip detectors was augmented with annular arrays of segmented strip detectors at backward and forward angles, resulting in a high-angular coverage for light ejectiles (20 to 160 degrees in the laboratory frame). The 134Xe(d,p γ) reaction, used to commission the setup, was measured in inverse kinematics with stable beams from ATLAS impinged on C2D4 targets. Reaction protons were measured (ORRUBA) in coincidence with gamma rays (Gammasphere). An overview of GODDESS and preliminary results from the 134Xe(d,p γ) study will be presented. Work supported in part by U.S. D.O.E. and National Science Foundation.
Constraining LLSVP Buoyancy With Tidal Tomography
NASA Astrophysics Data System (ADS)
Lau, H. C. P.; Mitrovica, J. X.; Davis, J. L.; Tromp, J.; Yang, H. Y.; Al-Attar, D.
2017-12-01
Using a global GPS data set of high-precision measurements of the Earth's body tide, we perform a tomographic inversion to constrain the integrated buoyancy of the Large Low Shear Velocity Provinces (LLSVPs) at the base of the mantle. As a consequence of the long-wavelength and low-frequency nature of the Earth's body tide, these observations are particularly sensitive to LLSVP buoyancy, a property of Earth's mantle that remains a source of ongoing debate. Using a probabilistic approach, we find that the data are best fit when the bottom two thirds (~700 km) of the LLSVPs have an integrated excess density of 0.60%. The detailed distribution of this buoyancy, for example whether it primarily resides in a thin layer at the base of the mantle, will require further testing and the augmentation of the inversions to include independent data sets (e.g., seismic observations). In any case, our inference of excess density requires the preservation of chemical heterogeneity associated with the enrichment of high-density chemical components, possibly linked to subducted oceanic plates and/or primordial material, in the deep mantle. This conclusion has important implications for the stability of these structures and, in turn, the history and ongoing evolution of the Earth system.
Inverse relationship between physical activity and arterial stiffness in adults with hypertension.
O'Donovan, Cuisle; Lithander, Fiona E; Raftery, Tara; Gormley, John; Mahmud, Azra; Hussey, Juliette
2014-02-01
Physical activity has beneficial effects on arterial stiffness among healthy adults. There is a lack of data on this relationship in adults with hypertension, and the majority of studies that have examined physical activity and arterial stiffness have used subjective measures of activity. The aim of this study was to investigate the relationship between objectively measured habitual physical activity and arterial stiffness in individuals with newly diagnosed essential hypertension. Adults attending an outpatient hypertension clinic were recruited into this cross-sectional study. Physical activity was measured using a triaxial accelerometer. Pulse wave velocity (PWV) and augmentation index (AIx) were measured using applanation tonometry. Participants' full lipid profiles and glucose were determined through the collection of a fasting blood sample. Fifty-three adults [51(14) years, 26 male] participated, 16 of whom had the metabolic syndrome. Inactivity was positively correlated with PWV (r = .53, P < .001) and AIx (r = .48, P < .001). There were significant inverse associations between habitual physical activity of all intensities and both AIx and PWV. In stepwise regression, after adjusting for potential confounders, physical activity was a significant predictor of AIx and PWV. Habitual physical activity of all intensities is associated with reduced arterial stiffness among adults with hypertension.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
A common biological mechanism in cancer and Alzheimer’s disease?
Behrens, Maria I; Lendon, Corinne; Roe, Catherine M.
2009-01-01
Cancer and Alzheimer’s disease (AD) are two common disorders for which the final pathophysiological mechanism is not yet clearly defined. In a prospective longitudinal study we have previously shown an inverse association between AD and cancer, such that the rate of developing cancer in general with time was significantly slower in participants with AD, while participants with a history of cancer had a slower rate of developing AD. In cancer, cell regulation mechanisms are disrupted with augmentation of cell survival and/or proliferation, whereas conversely, AD is associated with increased neuronal death, either caused by, or concomitant with, beta amyloid (Aβ) and tau deposition. The possibility that perturbations of mechanisms involved in cell survival/death regulation could be involved in both disorders is discussed. Genetic polymorphisms, DNA methylation or other mechanisms that induce changes in activity of molecules with key roles in determining the decision to “repair and live”- or “die” could be involved in the pathogenesis of the two disorders. As examples, the role of p53, Pin1 and the Wnt signaling pathway are discussed as potential candidates that, speculatively, may explain inverse associations between AD and cancer. PMID:19519301
Morel, F; Laudier, B; Guérif, F; Couet, M L; Royère, D; Roux, C; Bresson, J L; Amice, V; De Braekeleer, M; Douet-Guilbert, N
2007-01-01
Pericentric inversions are structural chromosomal abnormalities resulting from two breaks, one on either side of the centromere, within the same chromosome, followed by 180 degrees rotation and reunion of the inverted segment. They can perturb spermatogenesis and lead to the production of unbalanced gametes through the formation of an inversion loop. We report here the analysis of the meiotic segregation in spermatozoa from six pericentric inversion carriers by multicolour fluorescence in-situ hybridization (FISH) and review the literature. The frequencies of the non-recombinant products (inversion or normal chromosomes) were 80% for the inv(20), 91.41% for the inv(12), 99.43% for the inv(2), 68.12% for the inv(1), 97% for the inv(8)(p12q21) and 60.94% for the inv(8)(p12q24.1). The meiotic segregation of 20 pericentric inversions (including ours) is now available. The frequency of unbalanced spermatozoa varies from 0 to 37.85%. The probability of a crossover within the inverted segment is affected by the chromosome and region involved, the length of the inverted segment and the location of the breakpoints. No recombinant chromosomes were produced when the inverted segment involved <30% of the chromosome length (independent of the size of the inverted segment). Between 30 and 50%, few recombinant chromosomes were produced, inducing a slightly increased risk of aneusomy of recombination in the offspring. The risk of aneusomy became very important when the inverted segment was >50% of the chromosome length. Studies on spermatozoa from inversion carriers help in the comprehension of the mechanisms of meiotic segregation. They should be integrated in the genetic exploration of the infertile men to give them a personalized risk assessment of unbalanced spermatozoa.
NASA Astrophysics Data System (ADS)
Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao
2018-01-01
Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool utilized to estimate fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction embedded in a horizontally layered medium can be considered as an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach to describe the orthorhombic anisotropy utilizing the observable wide-azimuth seismic reflection data in a fractured reservoir with the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and the linear-slip model, we first derive a perturbation in the stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and scattering function, we then derive an expression for the linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions for Thomsen's WA parameters and fracture weaknesses in a weakly anisotropic medium with orthorhombic symmetry are estimated using a nonlinear iteratively reweighted least squares strategy, with a Cauchy a priori probability distribution and smooth initial models of the model parameters imposed as constraints to enhance inversion resolution.
Synthetic examples containing moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and real data illustrate the stability of the inversion for orthorhombic anisotropy in a fractured reservoir.
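The iteratively reweighted least squares idea behind such an inversion can be sketched on a toy 1-D line fit with Cauchy-type weights (an illustrative sketch only, not the authors' AVOaz implementation; the data and scale parameter below are invented):

```python
def irls_cauchy(xs, ys, scale=1.0, iters=25):
    """Fit y = a + b*x by iteratively reweighted least squares with
    Cauchy-type weights w = 1/(1 + (r/scale)^2), which damp outliers.
    The first pass (all weights 1) is ordinary least squares.
    Illustrative sketch only."""
    a = b = 0.0
    w = [1.0] * len(xs)
    for _ in range(iters):
        # weighted normal equations for the 2-parameter line
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        a = (sxx * sy - sx * sxy) / det
        b = (sw * sxy - sx * sy) / det
        # reweight by current residuals
        w = [1.0 / (1.0 + ((y - a - b * x) / scale) ** 2)
             for x, y in zip(xs, ys)]
    return a, b
```

The weight 1/(1 + (r/scale)^2) is what a long-tailed Cauchy distribution contributes in an IRLS scheme: points with large residuals are progressively down-weighted instead of dominating the fit.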
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
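The distinction between block error probability P_s and information-bit error probability P_b under ML decoding with systematic encoding can be measured in a small Monte Carlo experiment; the sketch below uses a systematic (7,4) Hamming code over BPSK/AWGN (my illustrative choice, not the paper's random-code setting):

```python
import itertools
import math
import random

# Systematic (7,4) Hamming code: G = [I | P], parity part P below
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

def encode(info):
    parity = [sum(info[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return list(info) + parity

CODEBOOK = [encode(m) for m in itertools.product((0, 1), repeat=4)]

def simulate(snr_db, trials=5000, rng=random.Random(1)):
    """Monte Carlo estimate of block (Ps) and information-bit (Pb) error
    probabilities for ML (max-correlation) decoding over BPSK/AWGN.
    With systematic encoding the info bits are the first 4 codeword bits,
    so every block error flips at least one info bit and Pb <= Ps."""
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    block_err = bit_err = 0
    for _ in range(trials):
        msg = [rng.randint(0, 1) for _ in range(4)]
        tx = [1.0 - 2 * b for b in encode(msg)]           # BPSK: 0->+1, 1->-1
        rx = [s + rng.gauss(0.0, sigma) for s in tx]
        # ML decoding for equal-energy signals = maximize correlation
        best = max(CODEBOOK,
                   key=lambda c: sum(r * (1 - 2 * b) for r, b in zip(rx, c)))
        if best != encode(msg):
            block_err += 1
        bit_err += sum(a != b for a, b in zip(best[:4], msg))
    return block_err / trials, bit_err / (4 * trials)
```

Comparing the measured Pb against (d_H/N)·Ps = (3/7)·Ps then illustrates the approximation the paper analyzes.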
Wysocki, Andrea; Kane, Robert L; Golberstein, Ezra; Dowd, Bryan; Lum, Terry; Shippee, Tetyana
2014-06-01
To compare the probability of experiencing a potentially preventable hospitalization (PPH) between older dual eligible Medicaid home and community-based service (HCBS) users and nursing home residents. Three years of Medicaid and Medicare claims data (2003-2005) from seven states, linked to area characteristics from the Area Resource File. A primary diagnosis of an ambulatory care sensitive condition on the inpatient hospital claim was used to identify PPHs. We used inverse probability of treatment weighting to mitigate the potential selection of HCBS versus nursing home use. The most frequent conditions accounting for PPHs were the same among the HCBS users and nursing home residents and included congestive heart failure, pneumonia, chronic obstructive pulmonary disease, urinary tract infection, and dehydration. Compared to nursing home residents, elderly HCBS users had an increased probability of experiencing both a PPH and a non-PPH. HCBS users' increased probability of both potentially preventable and non-preventable hospitalizations suggests a need for more proactive integration of medical and long-term care. © Health Research and Educational Trust.
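Inverse probability of treatment weighting as used here can be sketched on simulated data (all numbers are invented; the binary confounder, treatment, and outcome merely stand in for frailty, HCBS vs. nursing home use, and hospitalization):

```python
import random

def iptw_ate(records):
    """Inverse-probability-of-treatment-weighted estimate of the average
    treatment effect E[Y(1)] - E[Y(0)]; records are (x, t, y) tuples with
    a binary confounder x. Illustrative sketch only."""
    # propensity P(T=1 | X=x) estimated by stratum proportions
    prop = {}
    for x in (0, 1):
        grp = [t for (xi, t, _) in records if xi == x]
        prop[x] = sum(grp) / len(grp)
    num1 = den1 = num0 = den0 = 0.0
    for x, t, y in records:
        if t:
            w = 1.0 / prop[x]                # weight treated by 1/p
            num1 += w * y; den1 += w
        else:
            w = 1.0 / (1.0 - prop[x])        # weight untreated by 1/(1-p)
            num0 += w * y; den0 += w
    return num1 / den1 - num0 / den0

# hypothetical data-generating process; true treatment effect is -0.10
rng = random.Random(0)
records = []
for _ in range(100000):
    x = int(rng.random() < 0.5)                          # confounder
    t = int(rng.random() < (0.8 if x else 0.3))          # confounded treatment
    y = int(rng.random() < 0.10 + 0.30 * x - 0.10 * t)   # outcome
    records.append((x, t, y))
```

On these data a naive difference in outcome rates between treated and untreated is biased upward (the high-risk stratum is treated more often), while the weighted contrast recovers the true -0.10.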
ERIC Educational Resources Information Center
Lee, Jaekyung; Reeves, Todd
2012-01-01
This study examines the impact of high-stakes school accountability, capacity, and resources under NCLB on reading and math achievement outcomes through comparative interrupted time-series analyses of 1990-2009 NAEP state assessment data. Through hierarchical linear modeling latent variable regression with inverse probability of treatment…
ERIC Educational Resources Information Center
Trafimow, David
2017-01-01
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
ERIC Educational Resources Information Center
Kovalchik, Stephanie A.; Martino, Steven C.; Collins, Rebecca L.; Shadel, William G.; D'Amico, Elizabeth J.; Becker, Kirsten
2018-01-01
Ecological momentary assessment (EMA) is a popular assessment method in psychology that aims to capture events, emotions, and cognitions in real time, usually repeatedly throughout the day. Because EMA typically involves more intensive monitoring than traditional assessment methods, missing data are commonly an issue and this missingness may bias…
Sprouting of old-growth redwood stumps...first year after logging
Robert L. Neal
1967-01-01
A survey of 104 old-growth stumps on the Redwood Experimental Forest in northern California showed that (a) the probability of a stump sprouting varied inversely with its diameter; (b) the number of sprouts per sprouting stump and the height of the tallest sprout were not related to stump diameter; (c) lower portions of stumps sprouted more often and produced more sprouts than did...
Causal inference with measurement error in outcomes: Bias analysis and estimation methods.
Shu, Di; Yi, Grace Y
2017-01-01
Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
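The doubly robust construction mentioned at the end of the abstract can be sketched as an augmented IPW estimator on simulated data (all numbers are invented; the outcome models are deliberately misspecified to show that a correct treatment model alone still yields consistency):

```python
import random

def aipw_ate(records, prop, m1, m0):
    """Augmented IPW (doubly robust) estimate of E[Y(1)] - E[Y(0)]:
    consistent if either the propensity model `prop` or the outcome
    models `m1`/`m0` are correctly specified. Records are (x, t, y).
    Illustrative sketch only."""
    s = 0.0
    for x, t, y in records:
        p = prop(x)
        mu1 = t * y / p - (t - p) / p * m1(x)                      # E[Y(1)] term
        mu0 = (1 - t) * y / (1 - p) + (t - p) / (1 - p) * m0(x)    # E[Y(0)] term
        s += mu1 - mu0
    return s / len(records)

# hypothetical data-generating process; true treatment effect is -0.10
rng = random.Random(3)
records = []
for _ in range(100000):
    x = int(rng.random() < 0.5)
    t = int(rng.random() < (0.8 if x else 0.3))
    y = int(rng.random() < 0.10 + 0.30 * x - 0.10 * t)
    records.append((x, t, y))

# correct propensity model, deliberately misspecified (constant) outcome models
ate = aipw_ate(records,
               prop=lambda x: 0.8 if x else 0.3,
               m1=lambda x: 0.2, m0=lambda x: 0.2)
```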
Inverse-Probability-Weighted Estimation for Monotone and Nonmonotone Missing Data.
Sun, BaoLuo; Perkins, Neil J; Cole, Stephen R; Harel, Ofer; Mitchell, Emily M; Schisterman, Enrique F; Tchetgen Tchetgen, Eric J
2018-03-01
Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The inverse probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568-575).
NASA Astrophysics Data System (ADS)
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we built a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns on the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as a posterior analysis of the positive sign of the tested relationship.
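The inverse probability integral transformation used to skew a distribution can be illustrated with a minimal sketch: draw U from a Beta distribution and map it through the inverse standard normal CDF, so that Beta(1,1) recovers the normal itself (the Beta parameters below are arbitrary, not the paper's fitted values):

```python
import random
from statistics import NormalDist

def skewed_normal_sample(n, a, b, rng=random.Random(2)):
    """Draw n values from a skewed unimodal distribution via the inverse
    probability integral transformation: U ~ Beta(a, b), X = F^{-1}(U)
    with F the standard normal CDF. Beta(1, 1) is uniform, so it
    reproduces the symmetric normal; a > b pushes mass to the right.
    Illustrative sketch only."""
    nd = NormalDist()  # standard normal, mean 0 and sigma 1
    return [nd.inv_cdf(rng.betavariate(a, b)) for _ in range(n)]
```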
Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco
2014-01-01
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, with a view to lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance between two points of a given environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large ratios of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
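A minimal numerical sketch of the LMMSE estimator with an exponential (Gauss-Markov) spatial correlation model (the station positions, correlation distance, and noise variance below are invented for illustration, not the paper's configuration):

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lmmse_matrix(dists, d_corr, noise_var):
    """LMMSE weight matrix W = C (C + sigma^2 I)^{-1} for zero-mean
    corrections with exponential spatial correlation C_ij = exp(-d_ij/d_corr).
    The estimate of the true DCs from noisy measurements y is W y."""
    n = len(dists)
    C = [[math.exp(-dists[i][j] / d_corr) for j in range(n)] for i in range(n)]
    A = [[C[i][j] + (noise_var if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    # row k of W solves (C + sigma^2 I) w_k = c_k (A is symmetric)
    return [solve(A, C[k]) for k in range(n)]

pos = [0.0, 10.0, 20.0]   # hypothetical RS positions (km)
dists = [[abs(p - q) for q in pos] for p in pos]
W = lmmse_matrix(dists, d_corr=50.0, noise_var=0.25)
```

As the noise variance shrinks to zero, W approaches the identity (the measurements are trusted outright); with noise, each station's estimate shrinks toward information borrowed from correlated neighbours.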
Fernberg, Ulrika; Fernström, Maria; Hurtig-Wennlöf, Anita
2017-11-01
Background Early changes in the large muscular arteries are already associated with risk factors such as hypertension and obesity in adolescence and young adulthood. The present study examines the association between arterial stiffness measurements (pulse wave velocity and augmentation index) and lifestyle-related factors, body composition and cardiorespiratory fitness in young, healthy, Swedish adults. Design This study used a population-based cross-sectional sample. Methods The 834 participants in the study were self-reported healthy, non-smoking, aged 18-25 years. Augmentation index and pulse wave velocity were measured with applanation tonometry. Cardiorespiratory fitness was measured by an ergometer bike test to estimate maximal oxygen uptake. Body mass index (kg/m2) was calculated and categorised according to the World Health Organisation classification. Results Young Swedish adults with obesity and low cardiorespiratory fitness have significantly higher pulse wave velocity and augmentation index than non-obese young adults with medium or high cardiorespiratory fitness. The observed U-shaped association between pulse wave velocity and body mass index categories in women indicates that it might be more beneficial to be normal weight than underweight when assessing arterial stiffness with pulse wave velocity. The highest mean pulse wave velocity was found in overweight/obese individuals with low cardiorespiratory fitness, and the lowest in normal-weight individuals with high cardiorespiratory fitness. Cardiorespiratory fitness had a stronger effect than body mass index on arterial stiffness in multiple regression analyses. Conclusions The inverse association between cardiorespiratory fitness and arterial stiffness is observed already in young adults. The study result highlights the importance of high cardiorespiratory fitness, and also identifies underweight individuals as a possible risk group that needs to be studied further.
Cao, Xia; Li, Xin-Min; Mousseau, Darrell D
2009-07-31
Calcium (Ca(2+)) is known to augment monoamine oxidase-A (MAO-A) activity in cell cultures as well as in brain extracts from several species. This association between Ca(2+) and MAO-A could contribute to their respective roles in cytotoxicity. However, the effect of Ca(2+) on MAO-A function in human brain has yet to be examined, as does the contribution of specific signalling cascades. We examined the effects of Ca(2+) on MAO-A activity and on [(3)H]Ro 41-1049 binding to MAO-A in human cerebellar extracts, and compared this to its effects on MAO-A activity in glial C6 cells following the targeting of signalling pathways using specific chemical inhibitors. Ca(2+) enhances MAO-A activity as well as the association of [(3)H]Ro 41-1049 with MAO-A in human cerebellar extracts. The screening of neuronal and glial cell cultures reveals that MAO-A activity does not always correlate with the expression of either mao-A mRNA or MAO-A protein. Inhibition of each of the PI3K/Akt, ERK and p38(MAPK) signalling pathways in glial C6 cells augments basal MAO-A activity. Inhibition of the p38(MAPK) pathway also augments Ca(2+)-sensitive MAO-A activity. We also observe an inverse relation between p38(MAPK) activation and MAO-A function in C6 cultures grown to full confluence. The Ca(2+)-sensitive component of MAO-A activity is present in human brain, and in vitro studies link it to the p38(MAPK) pathway. This means of influencing MAO-A function could explain its role in pathologies as diverse as neurodegeneration and cancer.
Redundant actuator development study. [flight control systems for supersonic transport aircraft
NASA Technical Reports Server (NTRS)
Ryder, D. R.
1973-01-01
Current and past supersonic transport configurations are reviewed to assess redundancy requirements for future airplane control systems. Secondary actuators used in stability augmentation systems will probably be the most critical actuator application and require the highest level of redundancy. Two methods of actuator redundancy mechanization have been recommended for further study. Math models of the recommended systems have been developed for use in future computer simulations. A long range plan has been formulated for actuator hardware development and testing in conjunction with the NASA Flight Simulator for Advanced Aircraft.
Pierce, Jordan E; McDowell, Jennifer E
2016-02-01
Cognitive control supports flexible behavior adapted to meet current goals and can be modeled through investigation of saccade tasks with varying cognitive demands. Basic prosaccades (rapid glances toward a newly appearing stimulus) are supported by neural circuitry, including occipital and posterior parietal cortex, frontal and supplementary eye fields, and basal ganglia. These trials can be contrasted with complex antisaccades (glances toward the mirror image location of a stimulus), which are characterized by greater functional magnetic resonance imaging (MRI) blood oxygenation level-dependent (BOLD) signal in the aforementioned regions and recruitment of additional regions such as dorsolateral prefrontal cortex. The current study manipulated the cognitive demands of these saccade tasks by presenting three rapid event-related runs of mixed saccades with a varying probability of antisaccade vs. prosaccade trials (25, 50, or 75%). Behavioral results showed an effect of trial-type probability on reaction time, with slower responses in runs with a high antisaccade probability. Imaging results exhibited an effect of probability in bilateral pre- and postcentral gyrus, bilateral superior temporal gyrus, and medial frontal gyrus. Additionally, the interaction between saccade trial type and probability revealed a strong probability effect for prosaccade trials, showing a linear increase in activation parallel to antisaccade probability in bilateral temporal/occipital, posterior parietal, medial frontal, and lateral prefrontal cortex. In contrast, antisaccade trials showed elevated activation across all runs. Overall, this study demonstrated that improbable performance of a typically simple prosaccade task led to augmented BOLD signal to support changing cognitive control demands, resulting in activation levels similar to the more complex antisaccade task. Copyright © 2016 the American Physiological Society.
M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey
Murru, Maura; Akinci, Aybige; Falcone, Giuseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.
2016-01-01
We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw = 7.0 to Mw = 8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-year probability, with a Poisson value of 29% and a time-dependent interaction probability of 48%. We find an aggregated 30-year Poisson probability of M > 7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio of time-dependent to time-independent probability) on the southern strands of the North Anatolian Fault Zone.
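The time-dependent conditional probability from a Brownian Passage Time (inverse Gaussian) renewal model can be sketched as follows (the mean recurrence, aperiodicity, and elapsed time below are invented illustration values, not the paper's fault parameters):

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density with mean
    recurrence interval mu and aperiodicity alpha."""
    return math.sqrt(mu / (2 * math.pi * alpha ** 2 * t ** 3)) * \
        math.exp(-(t - mu) ** 2 / (2 * mu * alpha ** 2 * t))

def bpt_cdf(t, mu, alpha, steps=4000):
    """CDF by trapezoidal integration of the density (sketch-grade)."""
    lo = 1e-6 * mu
    if t <= lo:
        return 0.0
    h = (t - lo) / steps
    total = 0.5 * (bpt_pdf(lo, mu, alpha) + bpt_pdf(t, mu, alpha))
    for i in range(1, steps):
        total += bpt_pdf(lo + i * h, mu, alpha)
    return total * h

def bpt_conditional(mu, alpha, elapsed, window):
    """Time-dependent probability of an event within `window` years,
    given `elapsed` years have already passed without one:
    P(T <= elapsed + window | T > elapsed)."""
    f_lo = bpt_cdf(elapsed, mu, alpha)
    f_hi = bpt_cdf(elapsed + window, mu, alpha)
    return (f_hi - f_lo) / (1.0 - f_lo)
```

For comparison, the time-independent Poisson probability for the same window is 1 - exp(-window/mu), which ignores the elapsed time entirely.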
Diagnostics for the optimization of an 11 keV inverse Compton scattering x-ray source
NASA Astrophysics Data System (ADS)
Chauchat, A.-S.; Brasile, J.-P.; Le Flanchec, V.; Nègre, J.-P.; Binet, A.; Ortega, J.-M.
2013-04-01
Within the scope of a collaboration between Thales Communications & Security and CEA DAM DIF, 11 keV X-rays were produced by inverse Compton scattering at the ELSA facility. In this type of experiment, X-ray observation relies on the use of accurate electron and laser beam interaction diagnostics and on suitable X-ray detectors. The low interaction probability between the <100 μm wide, 12 ps (rms) long electron and photon pulses requires careful optimization of their spatial and temporal overlap. Another issue was to observe 11 keV X-rays against the ambient radioactive noise of the linear accelerator. To this end, we used a very sensitive detection scheme based on radioluminescent screens.
Silicon-carbon bond inversions driven by 60-keV electrons in graphene.
Susi, Toma; Kotakoski, Jani; Kepaptsoglou, Demie; Mangler, Clemens; Lovejoy, Tracy C; Krivanek, Ondrej L; Zan, Recep; Bangert, Ursel; Ayala, Paola; Meyer, Jannik C; Ramasse, Quentin
2014-09-12
We demonstrate that 60-keV electron irradiation drives the diffusion of threefold-coordinated Si dopants in graphene by one lattice site at a time. First principles simulations reveal that each step is caused by an electron impact on a C atom next to the dopant. Although the atomic motion happens below our experimental time resolution, stochastic analysis of 38 such lattice jumps reveals a probability for their occurrence in good agreement with the simulations. Conversions from three- to fourfold coordinated dopant structures and the subsequent reverse process are significantly less likely than the direct bond inversion. Our results thus provide a model of nondestructive and atomically precise structural modification and detection for two-dimensional materials.
Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye
2016-01-01
This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840
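The axis-aligned bounding box (AABB) collision test used by the CNC module reduces to per-axis interval overlap; a minimal sketch (my own illustration, not the authors' implementation):

```python
def aabb_overlap(box_a, box_b):
    """Axis-aligned bounding box collision test. Each box is given as
    ((min_x, min_y, min_z), (max_x, max_y, max_z)). Two boxes overlap
    if and only if their projections overlap on every axis."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))
```

Because a single separating axis proves the boxes disjoint, the test short-circuits cheaply, which is why AABBs are a common broad-phase check before more expensive geometry tests.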
Laucho-Contreras, Maria E; Polverino, Francesca; Tesfaigzi, Yohannes; Pilon, Aprile; Celli, Bartolome R; Owen, Caroline A
2016-07-01
Club cell protein 16 (CC16) is the most abundant protein in bronchoalveolar lavage fluid. CC16 has anti-inflammatory properties in smoke-exposed lungs, and chronic obstructive pulmonary disease (COPD) is associated with CC16 deficiency. Herein, we explored whether CC16 is a therapeutic target for COPD. We reviewed the literature on the factors that regulate airway CC16 expression, its biologic functions and its protective activities in smoke-exposed lungs using PubMed searches. We generated hypotheses on the mechanisms by which CC16 limits COPD development, and discuss its potential as a new therapeutic approach for COPD. CC16 plasma and lung levels are reduced in smokers without airflow obstruction and in COPD patients. In COPD patients, airway CC16 expression is inversely correlated with the severity of airflow obstruction. CC16 deficiency increases smoke-induced lung pathologies in mice through its effects on epithelial cells, leukocytes, and fibroblasts. Experimental augmentation of CC16 levels using recombinant CC16 in cell culture systems, or plasmid- and adenoviral-mediated over-expression of CC16 in epithelial cells or smoke-exposed murine airways, reduces inflammation and cellular injury. Additional studies are necessary to assess the efficacy of therapies aimed at restoring airway CC16 levels as a new disease-modifying therapy for COPD patients.
NASA Astrophysics Data System (ADS)
Contreras, Arturo Javier
This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. 
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.
Ma, Jihua; Luo, Antao; Wu, Lin; Wan, Wei; Zhang, Peihua; Ren, Zhiqiang; Zhang, Shuo; Qian, Chunping; Shryock, John C; Belardinelli, Luiz
2012-04-15
An increase in intracellular Ca(2+) concentration ([Ca(2+)](i)) augments late sodium current (I(Na.L)) in cardiomyocytes. This study tests the hypothesis that both Ca(2+)-calmodulin-dependent protein kinase II (CaMKII) and protein kinase C (PKC) mediate the effect of increased [Ca(2+)](i) to increase I(Na.L). Whole cell and open cell-attached patch clamp techniques were used to record I(Na.L) in rabbit ventricular myocytes dialyzed with solutions containing various concentrations of [Ca(2+)](i). Dialysis of cells with [Ca(2+)](i) from 0.1 to 0.3, 0.6, and 1.0 μM increased I(Na.L) in a concentration-dependent manner from 0.221 ± 0.038 to 0.554 ± 0.045 pA/pF (n = 10, P < 0.01) and was associated with an increase in mean Na(+) channel open probability and prolongation of channel mean open-time (n = 7, P < 0.01). In the presence of 0.6 μM [Ca(2+)](i), KN-93 (10 μM) and bisindolylmaleimide (BIM, 2 μM) decreased I(Na.L) by 45.2 and 54.8%, respectively. The effects of KN-93 and autocamtide-2-related inhibitory peptide II (2 μM) were not different. A combination of KN-93 and BIM completely reversed the increase in I(Na.L) as well as the Ca(2+)-induced changes in Na(+) channel mean open probability and mean open-time induced by 0.6 μM [Ca(2+)](i). Phorbol myristoyl acetate increased I(Na.L) in myocytes dialyzed with 0.1 μM [Ca(2+)](i); the effect was abolished by Gö-6976. In summary, both CaMKII and PKC are involved in [Ca(2+)](i)-mediated augmentation of I(Na.L) in ventricular myocytes. Inhibition of CaMKII and/or PKC pathways may be a therapeutic target to reduce myocardial dysfunction and cardiac arrhythmias caused by calcium overload.
NASA Astrophysics Data System (ADS)
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly, the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g., InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane.
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
The effect of parental involvement laws on teen birth control use.
Sabia, Joseph J; Anderson, D Mark
2016-01-01
In Volume 32, Issue 5 of this journal, Colman, Dee, and Joyce (CDJ) used data from the National Youth Risk Behavior Surveys (NYRBS) and found that parental involvement (PI) laws had no effect on the probability that minors abstain from sex or use contraception. We re-examine this question, augmenting the NYRBS with data from the State Youth Risk Behavior Surveys (SYRBS), and use a variety of identification strategies to control for state-level time-varying unmeasured heterogeneity. Consistent with CDJ, we find that PI laws have no effect on minor teen females' abstinence decisions. However, when we exploit additional state policy variation unavailable to CDJ and use non-minor teens as a within-state control group, we find evidence to suggest that PI laws are associated with an increase in the probability that sexually active minor teen females use birth control. Copyright © 2015 Elsevier B.V. All rights reserved.
Augmenting Phase Space Quantization to Introduce Additional Physical Effects
NASA Astrophysics Data System (ADS)
Robbins, Matthew P. G.
Quantum mechanics can be done using classical phase space functions and a star product. The state of the system is described by a quasi-probability distribution. A classical system can be quantized in phase space in different ways with different quasi-probability distributions and star products. A transition differential operator relates different phase space quantizations. The objective of this thesis is to introduce additional physical effects into the process of quantization by using the transition operator. As prototypical examples, we first look at the coarse-graining of the Wigner function and the damped simple harmonic oscillator. By generalizing the transition operator and star product to also be functions of the position and momentum, we show that additional physical features beyond damping and coarse-graining can be introduced into a quantum system, including the generalized uncertainty principle of quantum gravity phenomenology, driving forces, and decoherence.
A human reliability based usability evaluation method for safety-critical software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, R. L.; Tran, T. Q.; Gertman, D. I.
2006-07-01
Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability.
Sarker, A K; Seth, T N
1975-01-01
Intramuscular injections of testosterone propionate (Perandren, CIBA) at a dose level of 2.5 mg per day for 10 days into adult female parakeets caused an increment of differentiated follicles in the ovary. The histological study of the testosterone-treated oviduct of the bird showed well-developed villi with a significant number of tubular glands, particularly in the middle and distal parts of the oviduct. The high level of alkaline phosphatase activity and ascorbic acid concentration in the distal part of the oviduct in treated birds probably increase the capacity to produce hatchable eggs, which is closely related to the enzyme and vitamin C concentrations in the uterus. The testosterone treatment causes a marked depletion of granulosal vitamins from the ovary but augments the ascorbate mobilization in the thecal region to a very great extent, probably due to increased LH secretion from the pituitary.
Estimating trace-suspect match probabilities for singleton Y-STR haplotypes using coalescent theory.
Andersen, Mikkel Meyer; Caliebe, Amke; Jochens, Arne; Willuweit, Sascha; Krawczak, Michael
2013-02-01
Estimation of match probabilities for singleton haplotypes of lineage markers, i.e. for haplotypes observed only once in a reference database augmented by a suspect profile, is an important problem in forensic genetics. We compared the performance of four estimators of singleton match probabilities for Y-STRs, namely the count estimate, both with and without Brenner's so-called 'kappa correction', the surveying estimate, and a previously proposed, but rarely used, coalescent-based approach implemented in the BATWING software. Extensive simulation with BATWING of the underlying population history, haplotype evolution and subsequent database sampling revealed that the coalescent-based approach is characterized by lower bias and lower mean squared error than the uncorrected count estimator and the surveying estimator. Moreover, in contrast to the two count estimators, both the surveying and the coalescent-based approach exhibited a good correlation between the estimated and true match probabilities. However, although its overall performance is thus better than that of any other recognized method, the coalescent-based estimator is still so computationally intensive as to verge on general impracticability. Its application in forensic practice will therefore have to be limited to small reference databases, or to isolated cases of particular interest, until more powerful algorithms for coalescent simulation become available. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
PHOTOTROPISM OF GERMINATING MYCELIA OF SOME PARASITIC FUNGI
uredinales on young wheat plants; Distribution and significance of the phototropism of germinating mycelia -- confirmation of older data, examination of ... eight additional uredinales, probable meaning of negative phototropism for the occurrence of infection; Analysis of the stimulus physiology of the ... reaction -- the minimum effective illumination intensity, the effective spectral region, inversion of the phototropic reaction in liquid paraffin, the negative light-growth reaction, the light-sensitive zone.
NASA Astrophysics Data System (ADS)
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace the hydrological model with a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate.
We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231.
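Implicit sampling with a linear map, as used above, centers a Gaussian proposal on the MAP point with covariance from the local curvature, then reweights by the true (non-Gaussian) posterior. A minimal one-parameter sketch follows; the quartic misfit `F` is a hypothetical stand-in for the TOUGH2-based cost function, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical non-Gaussian negative log posterior for one parameter.
F   = lambda t: 0.5 * (t - 1.0) ** 2 + 0.1 * t ** 4
dF  = lambda t: (t - 1.0) + 0.4 * t ** 3
d2F = lambda t: 1.0 + 1.2 * t ** 2

# 1) Locate the MAP point by Newton iteration.
t_map = 0.0
for _ in range(50):
    t_map -= dF(t_map) / d2F(t_map)

# 2) Linear map: propose from a Gaussian matched to the MAP curvature.
h = d2F(t_map)                        # Hessian at the MAP
xi = rng.normal(0.0, 1.0, 20000)      # reference Gaussian variables
theta = t_map + xi / np.sqrt(h)

# 3) Importance weights: target exp(-F) over the Gaussian proposal,
#    whose negative log density is 0.5*xi^2 (up to a constant).
logw = -F(theta) + 0.5 * xi ** 2
w = np.exp(logw - logw.max())
w /= w.sum()
mean_is = np.sum(w * theta)           # weighted posterior mean

# Reference answer by brute-force quadrature on a fine grid.
grid = np.linspace(-5.0, 5.0, 20001)
p = np.exp(-F(grid)); p /= p.sum()
mean_q = np.sum(p * grid)
print(round(mean_is, 3), round(mean_q, 3))
```

Because the proposal already sits in the high-probability region, the weights stay well behaved and far fewer "forward simulations" (evaluations of `F`) are wasted than with a proposal built from the prior.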
Toward a comprehensive theory for the sweeping of trapped radiation by inert orbiting matter
NASA Technical Reports Server (NTRS)
Fillius, Walker
1988-01-01
There is a need to calculate loss rates when trapped Van Allen radiation encounters inert orbiting material such as planetary rings and satellites. An analytic expression for the probability of a hit in a bounce encounter is available for all cases where the absorber is spherical and the particles are gyrotropically distributed on a cylindrical flux tube. The hit probability is a function of the particle's pitch angle, the size of the absorber, and the distance between flux tube and absorber, when distances are scaled to the gyroradius of a particle moving perpendicular to the magnetic field. Using this expression, hit probabilities have been computed in drift encounters for all regimes of particle energies and absorber sizes. This technique generalizes the approach to sweeping lifetimes, and is particularly suitable for attacking the inverse problem, where one is given a sweeping signature and wants to deduce the properties of the absorber(s).
Optimal random search for a single hidden target.
Snider, Joseph
2011-01-01
A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
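The square-root rule above has a short discrete justification: if each trial samples cell i with probability q[i] and succeeds only on an exact hit, the expected number of trials is the sum of p[i]/q[i], which Cauchy-Schwarz shows is minimized by q proportional to sqrt(p). A small numerical check (the target distribution here is arbitrary, not from the paper):

```python
import numpy as np

# Target hidden at cell i with probability p[i]; each trial samples a cell
# from q and succeeds only on an exact hit, so expected trials = sum p/q.
p = np.array([0.5, 0.25, 0.15, 0.07, 0.03])

def expected_trials(q):
    return np.sum(p / q)

q_uniform = np.full_like(p, 1.0 / p.size)
q_match   = p.copy()                        # naively mimic the target
q_sqrt    = np.sqrt(p) / np.sqrt(p).sum()   # Cauchy-Schwarz optimum

costs = {name: expected_trials(q) for name, q in
         [("uniform", q_uniform), ("match", q_match), ("sqrt", q_sqrt)]}
print({k: round(v, 2) for k, v in costs.items()})
```

Note the counterintuitive side result: searching with q = p gives expected cost sum(p/p) = n, exactly the same as uniform search, while the square-root distribution achieves (sum sqrt(p))^2, which is strictly smaller whenever p is non-uniform.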
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(alpha) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices, which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints.
We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
A test of geographic assignment using isotope tracers in feathers of known origin
Wunder, Michael B.; Kester, C.L.; Knopf, F.L.; Rye, R.O.
2005-01-01
We used feathers of known origin collected from across the breeding range of a migratory shorebird to test the use of isotope tracers for assigning breeding origins. We analyzed δD, δ13C, and δ15N in feathers from 75 mountain plover (Charadrius montanus) chicks sampled in 2001 and from 119 chicks sampled in 2002. We estimated parameters for continuous-response inverse regression models and for discrete-response Bayesian probability models from data for each year independently. We evaluated model predictions with both the training data and by using the alternate year as an independent test dataset. Our results provide weak support for modeling latitude and isotope values as monotonic functions of one another, especially when data are pooled over known sources of variation such as sample year or location. We were unable to make even qualitative statements, such as north versus south, about the likely origin of birds using both δD and δ13C in inverse regression models; results were no better than random assignment. Probability models provided better results and a more natural framework for the problem. Correct assignment rates were highest when considering all three isotopes in the probability framework, but the use of even a single isotope was better than random assignment. The method appears relatively robust to temporal effects and is most sensitive to the isotope discrimination gradients over which samples are taken. We offer that the problem of using isotope tracers to infer geographic origin is best framed as one of assignment, rather than prediction.
Karim, Mohammad Ehsanul; Platt, Robert W
2017-06-15
Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, assumptions underlying such models may not hold and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of an SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
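The weighting machinery at the core of the abstract can be illustrated with a deliberately simplified point-treatment toy (not a full MSCM, and with no super learner): fit a logistic weight model, form stabilized inverse probability weights, and compare the weighted contrast to the naive one. The data-generating constants below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
expit = lambda z: 1 / (1 + np.exp(-z))

# Simulated confounding: L drives both treatment A and outcome Y;
# the true marginal treatment effect is 1.0.
L = rng.normal(size=n)
A = rng.uniform(size=n) < expit(0.8 * L)
Y = 1.0 * A + L + rng.normal(0, 1, n)

# Naive (confounded) contrast of group means.
naive = Y[A].mean() - Y[~A].mean()

# Fit the weight model P(A=1|L) by Newton-Raphson logistic regression.
X = np.column_stack([np.ones(n), L])
beta = np.zeros(2)
for _ in range(25):
    p = expit(X @ beta)
    grad = X.T @ (A - p)
    H = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(H, grad)
ps = expit(X @ beta)                  # estimated propensity scores

# Stabilized inverse probability weights, then a weighted contrast.
w = np.where(A, A.mean() / ps, (1 - A.mean()) / (1 - ps))
ipw = (np.sum(w * Y * A) / np.sum(w * A)
       - np.sum(w * Y * ~A) / np.sum(w * ~A))
print(round(naive, 2), round(ipw, 2))
```

With a correctly specified weight model the IPW contrast recovers the marginal effect; the abstract's point is precisely that when this weight model is misspecified, cross-validated selection among candidate learners (super learner) can limit the resulting damage.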
Farmer, William H.; Over, Thomas M.; Vogel, Richard M.
2015-01-01
Understanding the spatial structure of daily streamflow is essential for managing freshwater resources, especially in poorly-gaged regions. Spatial scaling assumptions are common in flood frequency prediction (e.g., index-flood method) and the prediction of continuous streamflow at ungaged sites (e.g. drainage-area ratio), with simple scaling by drainage area being the most common assumption. In this study, scaling analyses of daily streamflow from 173 streamgages in the southeastern US resulted in three important findings. First, the use of only positive integer moment orders, as has been done in most previous studies, captures only the probabilistic and spatial scaling behavior of flows above an exceedance probability near the median; negative moment orders (inverse moments) are needed for lower streamflows. Second, assessing scaling by using drainage area alone is shown to result in a high degree of omitted-variable bias, masking the true spatial scaling behavior. Multiple regression is shown to mitigate this bias, controlling for regional heterogeneity of basin attributes, especially those correlated with drainage area. Previous univariate scaling analyses have neglected the scaling of low-flow events and may have produced biased estimates of the spatial scaling exponent. Third, the multiple regression results show that mean flows scale with an exponent of one, low flows scale with spatial scaling exponents greater than one, and high flows scale with exponents less than one. The relationship between scaling exponents and exceedance probabilities may be a fundamental signature of regional streamflow. This signature may improve our understanding of the physical processes generating streamflow at different exceedance probabilities.
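The abstract's use of negative (inverse) moment orders is easy to demonstrate: under simple scaling Q = A^b * W with iid multiplicative variability W, log E[Q^k] is linear in log A with slope b*k for every order k, positive or negative, so the normalized exponent is constant. The sketch below simulates that ideal case (b = 1, lognormal W; all numbers illustrative, not the study's data) and recovers the exponent at both positive and inverse orders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical region: daily flow Q = A^1 * W at each site, with
# lognormal variability W. The study reports different exponents for
# high vs low flows; here we simulate simple scaling to validate the
# moment-order estimator, including negative (inverse) orders.
areas = np.exp(rng.uniform(np.log(10), np.log(10000), 150))   # km^2
flows = [a * rng.lognormal(0.0, 0.8, 3650) for a in areas]    # ~10 yr daily

def scaling_exponent(k):
    # slope of log E[Q^k] against log A across sites, normalized by k,
    # so the result equals 1 under simple scaling for every order k
    logm = np.array([np.log(np.mean(q ** k)) for q in flows])
    slope = np.polyfit(np.log(areas), logm, 1)[0]
    return slope / k

orders = [-2, -1, 1, 2]
est = {k: round(scaling_exponent(k), 2) for k in orders}
print(est)
```

In real data the study finds the normalized exponent varies with exceedance probability (above one for low flows, below one for high flows), which is exactly the departure from this constant-exponent baseline that positive-only moment analyses cannot see.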
Austin, Peter C; Schuster, Tibor
2016-10-01
Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. © The Author(s) 2014.
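The absolute-effect estimand discussed above (difference in marginal survival probabilities at a fixed time) can be sketched with inverse probability of treatment weighting in a toy simulation. To stay short, this sketch assumes the true propensity score is known and there is no censoring, so the weighted Kaplan-Meier estimator reduces to a weighted proportion; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40000
expit = lambda z: 1 / (1 + np.exp(-z))

Lc = rng.normal(size=n)                      # confounder
A = rng.uniform(size=n) < expit(Lc)          # confounded treatment
rate = np.exp(0.5 * Lc - 0.7 * A)            # treatment lowers the hazard
T = rng.exponential(1.0 / rate)              # exponential survival times

ps = expit(Lc)                               # true propensity (assumed known)
w = np.where(A, 1 / ps, 1 / (1 - ps))        # IPT weights

def surv(mask, t):
    # weighted empirical survival; with no censoring this equals the
    # IPT-weighted Kaplan-Meier estimate at time t
    return np.sum(w[mask] * (T[mask] > t)) / np.sum(w[mask])

t0 = 1.0
rd_ipw = surv(A, t0) - surv(~A, t0)          # absolute survival difference

# Ground truth: average both potential survival curves over everyone.
S1 = np.mean(np.exp(-t0 * np.exp(0.5 * Lc - 0.7)))
S0 = np.mean(np.exp(-t0 * np.exp(0.5 * Lc)))
print(round(rd_ipw, 3), round(S1 - S0, 3))
```

This is the marginal-survival construction the abstract builds on; the paper's comparisons concern how matching, stratification and weighting on an *estimated* propensity score trade off bias and variance for this same estimand.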
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Cellular Automata Generalized To An Inferential System
NASA Astrophysics Data System (ADS)
Blower, David J.
2007-11-01
Stephen Wolfram popularized elementary one-dimensional cellular automata in his book, A New Kind of Science. Among many remarkable things, he proved that one of these cellular automata was a Universal Turing Machine. Such cellular automata can be interpreted in a different way by viewing them within the context of the formal manipulation rules from probability theory. Bayes's Theorem is the most famous of such formal rules. As a prelude, we recapitulate Jaynes's presentation of how probability theory generalizes classical logic using modus ponens as the canonical example. We emphasize the important conceptual standing of Boolean Algebra for the formal rules of probability manipulation and give an alternative demonstration augmenting and complementing Jaynes's derivation. We show the complementary roles played in arguments of this kind by Bayes's Theorem and joint probability tables. A good explanation for all of this is afforded by the expansion of any particular logic function via the disjunctive normal form (DNF). The DNF expansion is a useful heuristic emphasized in this exposition because such expansions point out where relevant 0s should be placed in the joint probability tables for logic functions involving any number of variables. It then becomes a straightforward exercise to rely on Boolean Algebra, Bayes's Theorem, and joint probability tables in extrapolating to Wolfram's cellular automata. Cellular automata are seen as purely deductive systems, just like classical logic, which probability theory is then able to generalize. Thus, any uncertainties which we might like to introduce into the discussion about cellular automata are handled with ease via the familiar inferential path. Most importantly, the difficult problem of predicting what cellular automata will do in the far future is treated like any inferential prediction problem.
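The abstract's central move, treating an elementary cellular automaton as a deterministic conditional probability table whose entries are all 0 or 1, can be shown in a few lines. The sketch below encodes rule 110 as such a table and propagates marginal probabilities one generation; with certain (0/1) inputs it reproduces the ordinary automaton, while uncertain inputs are handled by the same sum rule (under an independence assumption that is ours, not the paper's).

```python
import numpy as np

# Rule 110 as a deterministic conditional probability table:
# P(next = 1 | left, center, right) is either 0 or 1.
RULE = 110
cpt = {(l, c, r): (RULE >> (l * 4 + c * 2 + r)) & 1
       for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def step_probs(p):
    """Propagate marginal cell probabilities one generation, treating
    cells as independent (exact for 0/1 states; an approximation once
    correlations build up)."""
    q = np.empty_like(p)
    n = len(p)
    for i in range(n):
        pl, pc, pr = p[(i - 1) % n], p[i], p[(i + 1) % n]
        q[i] = sum(cpt[(l, c, r)]
                   * (pl if l else 1 - pl)
                   * (pc if c else 1 - pc)
                   * (pr if r else 1 - pr)
                   for l in (0, 1) for c in (0, 1) for r in (0, 1))
    return q

# With certain inputs this is just the deterministic automaton.
state = np.zeros(11)
state[5] = 1.0
nxt = step_probs(state)
print(nxt.astype(int))
```

Feeding `step_probs` a state with entries strictly between 0 and 1 is exactly the "familiar inferential path" the abstract describes: uncertainty about cells is carried forward by the sum and product rules rather than by any change to the automaton itself.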
Linkage Disequilibrium and Inversion-Typing of the Drosophila melanogaster Genome Reference Panel
Houle, David; Márquez, Eladio J.
2015-01-01
We calculated the linkage disequilibrium between all pairs of variants in the Drosophila Genome Reference Panel with minor allele count ≥5. We used r2 ≥ 0.5 as the cutoff for a highly correlated SNP. We make available the list of all highly correlated SNPs for use in association studies. Seventy-six percent of variant SNPs are highly correlated with at least one other SNP, and the mean number of highly correlated SNPs per variant over the whole genome is 83.9. Disequilibrium between distant SNPs is also common when minor allele frequency (MAF) is low: 37% of SNPs with MAF < 0.1 are highly correlated with SNPs more than 100 kb distant. Although SNPs within regions with polymorphic inversions are highly correlated with somewhat larger numbers of SNPs, and these correlated SNPs are on average farther away, the probability that a SNP in such regions is highly correlated with at least one other SNP is very similar to SNPs outside inversions. Previous karyotyping of the DGRP lines has been inconsistent, and we used LD and genotype to investigate these discrepancies. When previous studies agreed on inversion karyotype, our analysis was almost perfectly concordant with those assignments. In discordant cases, and for inversion heterozygotes, our results suggest errors in two previous analyses or discordance between genotype and karyotype. Heterozygosities of chromosome arms are, in many cases, surprisingly highly correlated, suggesting strong epistatic selection during the inbreeding and maintenance of the DGRP lines. PMID:26068573
Hu, Kai; Liu, Dan; Niemann, Markus; Hatle, Liv; Herrmann, Sebastian; Voelker, Wolfram; Ertl, Georg; Bijnens, Bart; Weidemann, Frank
2011-11-01
For the clinical assessment of patients with dyspnea, the inversion of the early (E) and late (A) transmitral flow during Valsalva maneuver (VM) frequently helps to distinguish pseudonormal from normal filling pattern. However, in a substantial number of patients, VM fails to reveal the change from dominant early mitral flow velocity toward larger late velocity. From December 2009 to October 2010, we selected consecutive patients with abnormal filling with (n=25) and without E/A inversion (n=25) during VM. Transmitral, tricuspid, and pulmonary Doppler traces were recorded and the degree of insufficiency was estimated. After evaluating all standard echocardiographic morphological, functional, and flow-related parameters, it became evident that the failure to unmask the pseudonormal filling pattern by VM was related to the degree of the tricuspid insufficiency (TI). TI was graded as mild in 24 of 25 patients in the group with E/A inversion during VM, whereas TI was graded as moderate to severe in 24 of the 25 patients with pseudonormal diastolic function without E/A inversion during VM. Our data suggest that TI is a major factor preventing E/A inversion during a VM in patients with pseudonormal diastolic function. This is probably due to a decrease in TI resulting in an increase in forward flow rather than the expected decrease during the VM. Thus, whenever a pseudonormal diastolic filling pattern is suspected, the use of a VM is not an informative discriminator in the presence of moderate or severe TI.
Haase, B; Jude, R; Brooks, S A; Leeb, T
2008-06-01
The tobiano white-spotting pattern is one of several known depigmentation phenotypes in horses and is desired by many horse breeders and owners. The tobiano spotting phenotype is inherited as an autosomal dominant trait. Horses that are heterozygous or homozygous for the tobiano allele (To) are phenotypically indistinguishable. A SNP associated with To had previously been identified in intron 13 of the equine KIT gene and was used for an indirect gene test. The test was useful in several horse breeds. However, genotyping this sequence variant in the Lewitzer horse breed revealed that 14% of horses with the tobiano pattern did not show the polymorphism in intron 13 and consequently the test was not useful to identify putative homozygotes for To within this breed. Speculations were raised that an independent mutation might cause the tobiano spotting pattern in this breed. Recently, the putative causative mutation for To was described as a large chromosomal inversion on equine chromosome 3. One of the inversion breakpoints is approximately 70 kb downstream of the KIT gene and probably disrupts a regulatory element of the KIT gene. We obtained genotypes for the intron 13 SNP and the chromosomal inversion for 204 tobiano spotted horses and 24 control animals of several breeds. The genotyping data confirmed that the chromosomal inversion was perfectly associated with the To allele in all investigated horses. Therefore, the new test is suitable to discriminate heterozygous To/+ and homozygous To/To horses in the investigated breeds.
Anderson, Kyle; Segall, Paul
2013-01-01
Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov-Chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km3 with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
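The Markov-Chain Monte Carlo sampling described above can be illustrated with a toy Metropolis sampler. The linear forward model, noise level, and prior bounds below are invented stand-ins for the paper's physics-based eruption model:

```python
import math, random

random.seed(0)

# Invented linear forward model: surface displacement proportional to
# chamber pressure (a stand-in for the physics-based eruption model).
def forward(p):
    return 0.5 * p

# Synthetic "geodetic" observations generated with true pressure = 10.
data = [forward(10.0) + random.gauss(0, 0.2) for _ in range(50)]

def log_post(p):
    if not 0.0 < p < 100.0:            # flat prior on (0, 100)
        return -math.inf
    return -sum((d - forward(p)) ** 2 for d in data) / (2 * 0.2 ** 2)

# Random-walk Metropolis sampling of the posterior.
p, samples = 5.0, []
for _ in range(5000):
    q = p + random.gauss(0, 0.5)       # proposal
    if math.log(random.random()) < log_post(q) - log_post(p):
        p = q                          # accept
    samples.append(p)

post = samples[1000:]                  # discard burn-in
mean_p = sum(post) / len(post)
print(round(mean_p, 1))                # posterior mean near the true 10.0
```

The real inversion differs mainly in the cost of the forward model and the dimension of the parameter space; the accept/reject logic is the same.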
A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization
NASA Astrophysics Data System (ADS)
Liu, Shuang; Hu, Xiangyun; Liu, Tianyou
2014-07-01
Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently on combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO has seldom been used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for potential-field inversion, we present the node-partition ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes according to transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. This allows the search results to be analyzed in real time and improves the rate of convergence and the precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method by use of synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and some excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.
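A minimal sketch of the node-partition idea, with an invented one-dimensional misfit in place of a real potential-field objective (the node spacing, evaporation rate, and exponential pheromone deposit are illustrative choices, not the paper's):

```python
import math, random

random.seed(1)

# Invented 1-D misfit standing in for a potential-field objective;
# its minimum is at x = 3.
def misfit(x):
    return (x - 3.0) ** 2

# Node partition: discretize the continuous variable into nodes.
nodes = [i * 0.1 for i in range(101)]          # 0.0 .. 10.0
tau = [1.0] * len(nodes)                       # pheromone per node

for _it in range(200):
    for _ant in range(20):
        # Roulette-wheel transition: pick a node with probability
        # proportional to its pheromone.
        r, acc, k = random.random() * sum(tau), 0.0, 0
        for k, t in enumerate(tau):
            acc += t
            if acc >= r:
                break
        # Gaussian-style deposit: smaller misfit, more pheromone.
        tau[k] += math.exp(-misfit(nodes[k]))
    tau = [0.95 * t for t in tau]              # evaporation

best = nodes[tau.index(max(tau))]
print(round(best, 1))                          # near the minimum at x = 3
```

Positive feedback concentrates pheromone on nodes with low misfit, which is the mechanism the abstract credits for the improved convergence rate.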
Linkage Disequilibrium and Inversion-Typing of the Drosophila melanogaster Genome Reference Panel.
Houle, David; Márquez, Eladio J
2015-06-10
We calculated the linkage disequilibrium between all pairs of variants in the Drosophila Genome Reference Panel with minor allele count ≥5. We used r2 ≥ 0.5 as the cutoff for a highly correlated SNP. We make available the list of all highly correlated SNPs for use in association studies. Seventy-six percent of variant SNPs are highly correlated with at least one other SNP, and the mean number of highly correlated SNPs per variant over the whole genome is 83.9. Disequilibrium between distant SNPs is also common when minor allele frequency (MAF) is low: 37% of SNPs with MAF < 0.1 are highly correlated with SNPs more than 100 kb distant. Although SNPs within regions with polymorphic inversions are highly correlated with somewhat larger numbers of SNPs, and these correlated SNPs are on average farther away, the probability that a SNP in such regions is highly correlated with at least one other SNP is very similar to SNPs outside inversions. Previous karyotyping of the DGRP lines has been inconsistent, and we used LD and genotype to investigate these discrepancies. When previous studies agreed on inversion karyotype, our analysis was almost perfectly concordant with those assignments. In discordant cases, and for inversion heterozygotes, our results suggest errors in two previous analyses or discordance between genotype and karyotype. Heterozygosities of chromosome arms are, in many cases, surprisingly highly correlated, suggesting strong epistatic selection during the inbreeding and maintenance of the DGRP lines. Copyright © 2015 Houle and Márquez.
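For inbred lines such as the DGRP, genotypes are effectively haploid, so the r2 cutoff reduces to the squared Pearson correlation of 0/1 genotype vectors. A minimal sketch (the genotype vectors are invented for illustration):

```python
# Squared correlation (r2) between two biallelic SNPs in inbred lines,
# with genotypes coded 0/1.  The genotype vectors below are invented.
def snp_r2(g1, g2):
    n = len(g1)
    m1, m2 = sum(g1) / n, sum(g2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(g1, g2)) / n
    v1 = sum((a - m1) ** 2 for a in g1) / n
    v2 = sum((b - m2) ** 2 for b in g2) / n
    return cov * cov / (v1 * v2)

print(snp_r2([0, 0, 1, 1, 0, 1], [0, 0, 1, 1, 0, 1]))  # 1.0: "highly correlated"
print(snp_r2([0, 1, 0, 1], [0, 0, 1, 1]))              # 0.0: independent
```

The study's minor-allele-count filter (≥5) guards against the zero-variance case that would make this ratio undefined.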
Moment tensor inversions using strong motion waveforms of Taiwan TSMIP data, 1993–2009
Chang, Kaiwen; Chi, Wu-Cheng; Gung, Yuancheng; Dreger, Douglas; Lee, William H K.; Chiu, Hung-Chie
2011-01-01
Earthquake source parameters are important for earthquake studies and seismic hazard assessment. Moment tensors are among the most important earthquake source parameters, and are now routinely derived using modern broadband seismic networks around the world. Similar waveform inversion techniques can also be applied to other available data, including strong-motion seismograms. Strong-motion waveforms are also broadband, and have been recorded in many regions since the 1980s. Thus, strong-motion data can be used to augment moment tensor catalogs with a much larger dataset than that available from the high-gain, broadband seismic networks. However, a systematic comparison between the moment tensors derived from strong motion waveforms and high-gain broadband waveforms has not been available. In this study, we inverted the source mechanisms of Taiwan earthquakes between 1993 and 2009 using the regional moment tensor inversion method with digital data from several hundred stations in the Taiwan Strong Motion Instrumentation Program (TSMIP). By testing different velocity models and filter passbands, we were able to successfully derive moment tensor solutions for 107 earthquakes of Mw >= 4.8. The solutions for large events agree well with other available moment tensor catalogs derived from local and global broadband networks. However, for events of Mw 5.0 or smaller, we consistently overestimated the moment magnitudes by 0.5 to 1.0. We tested accelerograms, and velocity waveforms integrated from accelerograms, for the inversions, and found the results to be similar. In addition, we used part of the catalogs to study important seismogenic structures in the area near Meishan, Taiwan, which was the site of a very damaging earthquake a century ago, and found that the structures were dominated by events with complex right-lateral strike-slip faulting during the recent decade.
The procedures developed from this study may be applied to other strong-motion datasets to complement or fill gaps in catalogs from regional broadband networks and teleseismic networks.
Inverse Relationship between Progesterone Receptor and Myc in Endometrial Cancer
Dai, Donghai; Meng, Xiangbing; Thiel, Kristina W.; Leslie, Kimberly K.; Yang, Shujie
2016-01-01
Endometrial cancer, the most common gynecologic malignancy, is a hormonally-regulated disease. Response to progestin therapy positively correlates with hormone receptor expression, in particular progesterone receptor (PR). However, many advanced tumors lose PR expression. We recently reported that the efficacy of progestin therapy can be significantly enhanced by combining progestin with epigenetic modulators, which we term “molecularly enhanced progestin therapy.” What remained unclear was the mechanism of action and whether estrogen receptor α (ERα), the principal inducer of PR, is necessary to restore functional expression of PR via molecularly enhanced progestin therapy. Therefore, we modeled advanced endometrial tumors that have lost both ERα and PR expression by generating ERα-null endometrial cancer cell lines. CRISPR-Cas9 technology was used to delete ERα at the genomic level. Our data demonstrate that treatment with a histone deacetylase inhibitor (HDACi) was sufficient to restore functional PR expression, even in cells devoid of ERα. Our studies also revealed that HDACi treatment results in marked downregulation of the oncogene Myc. We established that PR is a negative transcriptional regulator of Myc in endometrial cancer in the presence or absence of ERα, which is in contrast to studies in breast cancer cells. First, estrogen stimulation augmented PR expression and decreased Myc in endometrial cancer cell lines. Second, progesterone increased PR activity yet blunted Myc mRNA and protein expression. Finally, overexpression of PR by adenoviral transduction in ERα-null endometrial cancer cells significantly decreased expression of Myc and Myc-regulated genes. Analysis of the Cancer Genome Atlas (TCGA) database of endometrial tumors identified an inverse correlation between PR and Myc mRNA levels, with a corresponding inverse correlation between PR and Myc downstream transcriptional targets SRD5A1, CDK2 and CCNB1.
Together, these data reveal a previously unanticipated inverse relationship between the tumor suppressor PR and the oncogene Myc in endometrial cancer. PMID:26859414
Hu, Xingdi; Chen, Xinguang; Cook, Robert L.; Chen, Ding-Geng; Okafor, Chukwuemeka
2016-01-01
Background The probabilistic discrete event systems (PDES) method provides a promising approach to study dynamics of underage drinking using cross-sectional data. However, the utility of this approach is often limited because the constructed PDES model is often non-identifiable. The purpose of the current study is to attempt a new method to solve the model. Methods A PDES-based model of alcohol use behavior was developed with four progression stages (never-drinkers [ND], light/moderate-drinker [LMD], heavy-drinker [HD], and ex-drinker [XD]) linked with 13 possible transition paths. We tested the proposed model with data for participants aged 12–21 from the 2012 National Survey on Drug Use and Health (NSDUH). The Moore-Penrose (M-P) generalized inverse matrix method was applied to solve the proposed model. Results Annual transitional probabilities by age groups for the 13 drinking progression pathways were successfully estimated with the M-P generalized inverse matrix approach. Results from our analysis indicate an inverse “J”-shaped curve characterizing the pattern of experimental use of alcohol from adolescence to young adulthood. We also observed a dramatic increase for the initiation of LMD and HD after age 18 and a sharp decline in quitting light and heavy drinking. Conclusion Our findings are consistent with the developmental perspective regarding the dynamics of underage drinking, demonstrating the utility of the M-P method in obtaining a unique solution for the partially-observed PDES drinking behavior model. The M-P approach we tested in this study will facilitate the use of the PDES approach to examine many health behaviors with the widely available cross-sectional data. PMID:26511344
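The role of the Moore-Penrose generalized inverse here is to pick the unique minimum-norm solution of an underdetermined linear system. A pure-Python sketch on an invented 2-equation, 3-unknown system (not the actual 13-path PDES model):

```python
# Minimum-norm solution of an underdetermined system via the
# Moore-Penrose generalized inverse: for a full-row-rank A,
# A+ = A^T (A A^T)^-1.  The 2x3 system below is invented.
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
b = [0.6, 0.9]

G = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)]
     for i in range(2)]                        # G = A A^T  (2x2)
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[ G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det,  G[0][0] / det]]

y = [sum(Ginv[i][j] * b[j] for j in range(2)) for i in range(2)]
x = [sum(A[i][k] * y[i] for i in range(2)) for k in range(3)]  # x = A^T Ginv b

residual = max(abs(sum(A[i][k] * x[k] for k in range(3)) - b[i])
               for i in range(2))
print(residual < 1e-9)   # True: x reproduces the observations exactly
```

Among the infinitely many solutions consistent with the observations, this is the one with the smallest Euclidean norm, which is what makes the otherwise non-identifiable model solvable.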
The Inverse Bagging Algorithm: Anomaly Detection by Inverse Bootstrap Aggregating
NASA Astrophysics Data System (ADS)
Vischia, Pietro; Dorigo, Tommaso
2017-03-01
For data sets populated by a well-modeled process and by another process of unknown probability density function (PDF), it is desirable, when manipulating the fraction of the unknown process (either enhancing or suppressing it), to avoid modifying the kinematic distributions of the well-modeled one. A bootstrap technique is used to identify sub-samples rich in the well-modeled process, and to classify each event according to the frequency with which it is part of such sub-samples. Comparisons with general MVA algorithms are shown, as well as a study of the asymptotic properties of the method, making use of a public-domain data set that models a typical search for new physics as performed at hadronic colliders such as the Large Hadron Collider (LHC).
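The core loop — bootstrap subsamples scored for purity, events scored by how often they land in background-rich subsamples — can be sketched on synthetic one-dimensional data. The purity proxy (a sample-mean threshold) and all distributions below are invented, not the paper's:

```python
import random

random.seed(2)

# Invented 1-D events: 30 from the well-modeled "background" (near 0)
# and 10 from an unknown "signal" (near 5).
events = [random.gauss(0, 1) for _ in range(30)] + \
         [random.gauss(5, 1) for _ in range(10)]
n = len(events)

in_rich = [0] * n        # times an event fell in a background-rich sample
member = [0] * n         # times an event was drawn at all

for _ in range(1500):    # bootstrap resamples
    idx = [random.randrange(n) for _ in range(n)]
    sample_mean = sum(events[i] for i in idx) / n
    bkg_rich = sample_mean < 1.25        # crude purity proxy (invented)
    for i in set(idx):
        member[i] += 1
        if bkg_rich:
            in_rich[i] += 1

# Score each event by how often it sits in background-rich subsamples.
score = [c / m if m else 0.5 for c, m in zip(in_rich, member)]
bkg_score = sum(score[:30]) / 30
sig_score = sum(score[30:]) / 10
print(round(bkg_score, 2), round(sig_score, 2))  # background should score higher
```

Thresholding this per-event score then enhances or suppresses the unknown component without touching the background's kinematics directly.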
Biomass estimation for Virginia pine trees and stands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Madgwick, H.A.I.
1980-03-01
Stands of Virginia pine (Pinus virginiana Mill.) occur on much abandoned farm land in the Appalachian Mountains and Piedmont of Virginia. Natural stands are an important source of pulpwood, and these are being augmented by plantations. Increased intensity of utilization necessitates the estimation of component weights of the trees. Data from 501 trees from 10 stands were used to develop equations for estimating dry weight of stem wood, stem bark, total stem, 1-year-old needles, total needles, live branches, and total branches of individual trees. Stand weight of stems was closely related to stand basal area and mean height. Stand live-branch weight varies inversely with stocking. Weight of 1-year-old foliage on the stands increased with stocking and site index. 13 references.
Optimal mistuning for enhanced aeroelastic stability of transonic fans
NASA Technical Reports Server (NTRS)
Hall, K. C.; Crawley, E. F.
1983-01-01
An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each blade was modelled with a single torsional degree of freedom.
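The method of multipliers (augmented Lagrangians) can be sketched on a toy inequality-constrained problem. The quadratic "mass" objective and linear stand-in for the stability-margin constraint are invented, and plain gradient descent replaces the variable metric method for the inner minimization:

```python
# Augmented-Lagrangian (method of multipliers) sketch: minimize the toy
# mistuning "mass" x^2 + y^2 subject to a stand-in stability-margin
# constraint x + y >= 1 (written as c <= 0).  All numbers are invented.
def c(x, y):
    return 1.0 - x - y                   # c <= 0  <=>  x + y >= 1

lam, rho = 0.0, 10.0                     # multiplier and penalty weight
x = y = 0.0
for _outer in range(30):
    for _inner in range(200):            # gradient descent on L(x, y; lam)
        s = max(0.0, lam + rho * c(x, y))
        x -= 0.01 * (2 * x - s)          # dL/dx  (dc/dx = -1)
        y -= 0.01 * (2 * y - s)
    lam = max(0.0, lam + rho * c(x, y))  # multiplier update

print(round(x, 3), round(y, 3))          # approaches the optimum (0.5, 0.5)
```

Each outer pass solves an unconstrained subproblem (the paper's variable-metric phase) and then tightens the multiplier, driving the constraint violation to zero.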
NASA Astrophysics Data System (ADS)
Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang
2014-04-01
This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also on all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller achieves global asymptotic tracking of a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. Notably, the developed control approach can be applied to the speed tracking control of a fan speed control system. The simulation results demonstrate its effectiveness.
Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin
2016-04-01
Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
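The Pareto-optimal filtering at the heart of a PMOGO method's output can be sketched as follows; random points stand in for sampled inversion models reduced to their two objective values:

```python
import random

random.seed(5)

# Each candidate model is reduced to its two objective values
# (e.g. two data misfits); random points stand in for real models.
cands = [(random.random(), random.random()) for _ in range(200)]

def dominates(a, b):
    """a dominates b: no worse in both objectives, and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

# The Pareto front: candidates not dominated by any other candidate.
front = [p for p in cands if not any(dominates(q, p) for q in cands)]
print(0 < len(front) < len(cands))   # True: a non-trivial trade-off suite
```

Returning the whole front, rather than the single minimizer of a weighted sum, is exactly what lets the interpreter defer the choice of objective weights.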
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Steinmetz, G. G.
1983-01-01
Vertical-motion cues supplied by a g-seat to augment platform motion cues in the other five degrees of freedom were evaluated in terms of their effect on objective performance measures obtained during simulated transport landings under visual conditions. In addition to the effects of vertical cueing, runway-width and magnification effects were investigated. The g-seat was evaluated during fixed-base and moving-base operations. Although performance with the g-seat alone improved slightly over fixed-base operation, combined g-seat/platform operation showed no improvement over platform-only operation. When one runway width at one magnification factor was compared with another width at a different factor, the visual results indicated that runway width probably had no effect on pilot-vehicle performance. The performance differences that were detected may be more readily attributed to the extant (existing throughout) increase in vertical velocity induced by the magnification factor used to change the runway width, rather than to the width itself.
Reichle, Joe; Drager, Kathryn; Caron, Jessica; Parker-McGowan, Quannah
2016-11-01
This article examines the growth of aided augmentative and alternative communication (AAC) in providing support to children and youth with significant communication needs. Addressing current trends and offering a discussion of needs and probable future advances is framed around five guiding principles initially introduced by Williams, Krezman, and McNaughton. These include: (1) communication is a basic right and the use of AAC, especially at a young age, can help individuals realize their communicative potential; (2) AAC, like traditional communication, must be fluid, with the ability to adapt to different environments and needs; (3) AAC must be individualized and appropriate for each user; (4) AAC must support full participation in society across all ages and interests; and (5) individuals who use AAC have the right to be involved in all aspects of research, development, and intervention. In each of these areas current advances, needs, and future predictions are offered and discussed in terms of researchers' and practitioners' efforts to a continued upward trajectory of research and translational service delivery.
Condition Monitoring for Helicopter Data. Appendix A
NASA Technical Reports Server (NTRS)
Wen, Fang; Willett, Peter; Deb, Somnath
2000-01-01
In this paper the classical "Westland" set of empirical accelerometer helicopter data is analyzed with the aim of condition monitoring for diagnostic purposes. The goal is to determine features for failure events from these data, via a proprietary signal processing toolbox, and to weigh these according to a variety of classification algorithms. As regards signal processing, it appears that the autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in a relatively few measurements; it has also been found that augmentation of these by harmonic and other parameters can improve classification significantly. As regards classification, several techniques have been explored, among these restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, and in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.
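The feature-extraction step described above — fitting a simple linear autoregressive model and using its coefficients as features — can be sketched on a synthetic AR(1) series (a stand-in for an accelerometer channel; the true coefficient 0.8 is invented):

```python
import random

random.seed(3)

# Synthetic AR(1) series standing in for an accelerometer channel;
# the true coefficient 0.8 is invented.
x = [0.0]
for _ in range(5000):
    x.append(0.8 * x[-1] + random.gauss(0, 1))

# Least-squares AR(1) fit: regress x[t] on x[t-1]; the fitted
# coefficient is the kind of feature fed to the classifiers.
num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
a1 = num / den
print(round(a1, 1))   # recovers a value near 0.8
```

Higher-order fits work the same way, regressing each sample on several lags; the handful of fitted coefficients compresses a long vibration record into a low-dimensional feature vector.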
A tool for the estimation of the distribution of landslide area in R
NASA Astrophysics Data System (ADS)
Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.
2012-04-01
We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) yielded very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases.
A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
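Histogram density estimation (HDE), the simplest of the three estimators mentioned, can be sketched on a synthetic lognormal "landslide area" sample (all parameters below are invented):

```python
import math, random

random.seed(4)

# Synthetic lognormal "landslide areas" (m^2); parameters invented.
areas = [math.exp(random.gauss(8.0, 1.0)) for _ in range(2000)]
log_a = [math.log(a) for a in areas]

# Histogram density estimation in log-area.
lo, hi, nbins = min(log_a), max(log_a), 30
width = (hi - lo) / nbins
counts = [0] * nbins
for v in log_a:
    counts[min(int((v - lo) / width), nbins - 1)] += 1

# Density per bin = count / (n * bin width), so it integrates to 1.
dens = [cnt / (len(log_a) * width) for cnt in counts]
total = sum(d * width for d in dens)
print(round(total, 6))   # 1.0
```

A parametric fit (e.g. Inverse Gamma via maximum likelihood) would replace the binning step with an optimizer, which is where the convergence failures reported for small datasets arise.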
2014-01-01
Background Characterizing intra-urban variation in air quality is important for epidemiological investigation of health outcomes and disparities. To date, however, few studies have been designed to capture spatial variation during select hours of the day, or to examine the roles of meteorology and complex terrain in shaping intra-urban exposure gradients. Methods We designed a spatial saturation monitoring study to target local air pollution sources, and to understand the role of topography and temperature inversions on fine-scale pollution variation by systematically allocating sampling locations across gradients in key local emissions sources (vehicle traffic, industrial facilities) and topography (elevation) in the Pittsburgh area. Street-level integrated samples of fine particulate matter (PM2.5), black carbon (BC), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ozone (O3) were collected during morning rush and probable inversion hours (6-11 AM), during summer and winter. We hypothesized that pollution concentrations would be: 1) higher under inversion conditions, 2) exacerbated in lower-elevation areas, and 3) vary by season. Results During July - August 2011 and January - March 2012, we observed wide spatial and seasonal variability in pollution concentrations, exceeding the range measured at regulatory monitors. We identified elevated concentrations of multiple pollutants at lower-elevation sites, and a positive association between inversion frequency and NO2 concentration. We examined temporal adjustment methods for deriving seasonal concentration estimates, and found that the appropriate reference temporal trend differs between pollutants. Conclusions Our time-stratified spatial saturation approach found some evidence for modification of inversion-concentration relationships by topography, and provided useful insights for refining and interpreting GIS-based pollution source indicators for Land Use Regression modeling. PMID:24735818
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi
2013-12-10
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, the deterministic least-square fitting and stochastic Markov-Chain Monte-Carlo (MCMC) - Bayesian inversion approaches, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-square fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to the different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions.
Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
NASA Astrophysics Data System (ADS)
Sebastian, Nita; Kim, Seongryong; Tkalčić, Hrvoje; Sippl, Christian
2017-04-01
The purpose of this study is to develop an integrated inference on the lithospheric structure of NE China using three passive seismic networks comprising 92 stations. The NE China plain consists of complex lithospheric domains characterised by the co-existence of complex geodynamic processes such as crustal thinning, active intraplate Cenozoic volcanism and low velocity anomalies. To estimate lithospheric structures in greater detail, we chose to perform the joint inversion of independent data sets such as receiver functions and surface wave dispersion curves (group and phase velocity). We perform a joint inversion based on principles of Bayesian transdimensional optimisation techniques (Kim et al., 2016). Unlike in previous studies of NE China, the complexity of the model is determined from the data in the first stage of the inversion, and the data uncertainty is computed based on Bayesian statistics in the second stage of the inversion. The computed crustal properties are retrieved from an ensemble of probable models. We obtain major structural inferences with well constrained absolute velocity estimates, which are vital for inferring properties of the lithosphere and the bulk crustal Vp/Vs ratio. The Vp/Vs estimate obtained from joint inversions confirms the high Vp/Vs ratio (~1.98) obtained using the H-Kappa method beneath some stations. Moreover, we could confirm the existence of a lower crustal velocity beneath several stations (e.g., station SHS) within the NE China plain. Based on these findings we attempt to identify a plausible origin for the structural complexity. Finally, we compile a high-resolution 3D image of the lithospheric architecture of the NE China plain.
Shmool, Jessie Lc; Michanowicz, Drew R; Cambal, Leah; Tunno, Brett; Howell, Jeffery; Gillooly, Sara; Roper, Courtney; Tripathy, Sheila; Chubb, Lauren G; Eisl, Holger M; Gorczynski, John E; Holguin, Fernando E; Shields, Kyra Naumoff; Clougherty, Jane E
2014-04-16
Characterizing intra-urban variation in air quality is important for epidemiological investigation of health outcomes and disparities. To date, however, few studies have been designed to capture spatial variation during select hours of the day, or to examine the roles of meteorology and complex terrain in shaping intra-urban exposure gradients. We designed a spatial saturation monitoring study to target local air pollution sources, and to understand the role of topography and temperature inversions on fine-scale pollution variation by systematically allocating sampling locations across gradients in key local emissions sources (vehicle traffic, industrial facilities) and topography (elevation) in the Pittsburgh area. Street-level integrated samples of fine particulate matter (PM2.5), black carbon (BC), nitrogen dioxide (NO2), sulfur dioxide (SO2), and ozone (O3) were collected during morning rush and probable inversion hours (6-11 AM), during summer and winter. We hypothesized that pollution concentrations would be: 1) higher under inversion conditions, 2) exacerbated in lower-elevation areas, and 3) vary by season. During July - August 2011 and January - March 2012, we observed wide spatial and seasonal variability in pollution concentrations, exceeding the range measured at regulatory monitors. We identified elevated concentrations of multiple pollutants at lower-elevation sites, and a positive association between inversion frequency and NO2 concentration. We examined temporal adjustment methods for deriving seasonal concentration estimates, and found that the appropriate reference temporal trend differs between pollutants. Our time-stratified spatial saturation approach found some evidence for modification of inversion-concentration relationships by topography, and provided useful insights for refining and interpreting GIS-based pollution source indicators for Land Use Regression modeling.
Mirus, B.B.; Perkins, K.S.; Nimmo, J.R.; Singha, K.
2009-01-01
To understand their relation to pedogenic development, soil hydraulic properties in the Mojave Desert were investigated for three deposit types: (i) recently deposited sediments in an active wash, (ii) a soil of early Holocene age, and (iii) a highly developed soil of late Pleistocene age. Effective parameter values were estimated for a simplified model based on Richards' equation using a flow simulator (VS2D), an inverse algorithm (UCODE-2005), and matric pressure and water content data from three ponded infiltration experiments. The inverse problem framework was designed to account for the effects of subsurface lateral spreading of infiltrated water. Although none of the inverse problems converged on a unique, best-fit parameter set, a minimum standard error of regression was reached for each deposit type. Parameter sets from the numerous inversions that reached the minimum error were used to develop probability distributions for each parameter and deposit type. Electrical resistance imaging obtained for two of the three infiltration experiments was used to independently test flow model performance. Simulations for the active wash and Holocene soil successfully depicted the lateral and vertical fluxes. Simulations of the more pedogenically developed Pleistocene soil did not adequately replicate the observed flow processes, which would require a more complex conceptual model to include smaller scale heterogeneities. The inverse-modeling results, however, indicate that with increasing age, the steep slope of the soil water retention curve shifts toward more negative matric pressures. Assigning effective soil hydraulic properties based on soil age provides a promising framework for future development of regional-scale models of soil moisture dynamics in arid environments for land-management applications. © Soil Science Society of America.
NASA Astrophysics Data System (ADS)
Wéber, Zoltán
2018-06-01
Estimating the mechanisms of small (M < 4) earthquakes is quite challenging. A common scenario is that neither the available polarity data alone nor the well predictable near-station seismograms alone are sufficient to obtain reliable focal mechanism solutions for weak events. To handle this situation we introduce here a new method that jointly inverts waveforms and polarity data following a probabilistic approach. The procedure called joint waveform and polarity (JOWAPO) inversion maps the posterior probability density of the model parameters and estimates the maximum likelihood double-couple mechanism, the optimal source depth and the scalar seismic moment of the investigated event. The uncertainties of the solution are described by confidence regions. We have validated the method on two earthquakes for which well-determined focal mechanisms are available. The validation tests show that including waveforms in the inversion considerably reduces the uncertainties of the usually poorly constrained polarity solutions. The JOWAPO method performs best when it applies waveforms from at least two seismic stations. If the number of the polarity data is large enough, even single-station JOWAPO inversion can produce usable solutions. When only a few polarities are available, however, single-station inversion may result in biased mechanisms. In this case some caution must be taken when interpreting the results. We have successfully applied the JOWAPO method to an earthquake in North Hungary, whose mechanism could not be estimated by long-period waveform inversion. Using 17 P-wave polarities and waveforms at two nearby stations, the JOWAPO method produced a well-constrained focal mechanism. The solution is very similar to those obtained previously for four other events that occurred in the same earthquake sequence. The analysed event has a strike-slip mechanism with a P axis oriented approximately along an NE-SW direction.
[Influence of Restricting the Ankle Joint Complex Motions on Gait Stability of Human Body].
Li, Yang; Zhang, Junxia; Su, Hailong; Wang, Xinting; Zhang, Yan
2016-10-01
The purpose of this study is to determine how restricting inversion-eversion and pronation-supination motions of the ankle joint complex influences the stability of human gait. The experiment was carried out on a slippery level-ground walkway. Spatiotemporal gait parameters, kinematics and kinetics data, as well as the utilized coefficient of friction (UCOF), were compared between two conditions, i.e., with restriction of the ankle joint complex inversion-eversion and pronation-supination motions (FIXED) and without restriction (FREE). The results showed that FIXED could lead to a significant increase in velocity and stride length and an obvious decrease in double support time. Furthermore, FIXED might affect the range of motion of the knee and ankle joints in the sagittal plane. In the FIXED condition, UCOF was significantly increased, which could lead to an increase in slip probability and a decrease in gait stability. Hence, in the design of a walker, bipedal robot or prosthesis, structures that allow the ankle joint complex inversion-eversion and pronation-supination motions should be implemented.
The upper atmosphere of Uranus - Mean temperature and temperature variations
NASA Technical Reports Server (NTRS)
Dunham, E.; Elliot, J. L.; Gierasch, P. J.
1980-01-01
The number-density, pressure, and temperature profiles of the Uranian atmosphere in the pressure interval from 0.3 to 30 dynes/sq cm are derived from observations of the occultation of SAO 158687 by Uranus on 1977 March 10, made from the Kuiper Airborne Observatory and the Cape Town station of the South African Astronomical Observatory. The mean temperature is found to be about 95 K, but peak-to-peak variations of 10 K to 20 K or more exist on a scale of 150 km, or 3 scale heights. The existence of a thermal inversion is established, but the inversion is much weaker than the analogous inversion on Neptune. The mean temperature can be explained by solar heating in the 3.3-micron methane band with a methane mixing ratio of 4 × 10^-6, combined with the cooling effect of ethane with a mixing ratio of not greater than 4 × 10^-6. The temperature variations are probably due to a photochemical process that has formed a Chapman layer.
Optimal aperture synthesis radar imaging
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Chau, J. L.
2006-03-01
Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.
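The MaxEnt strategy sketched above (positive images, consistency with the data, closeness to a prior image) can be illustrated with a drastically simplified entropy-regularized inversion. This is not the Jicamarca algorithm: the function name, the flat prior image `m`, and all parameter defaults below are illustrative assumptions, and the smoothness constraint and error-covariance machinery are omitted.

```python
import numpy as np

def maxent_image(A, b, m=1.0, lam=0.1, lr=0.05, n_iter=2000):
    """Toy entropy-regularized inversion for A x ≈ b.

    Minimizes ||A x - b||^2 / 2 + lam * sum(x * log(x / m)): a data-misfit
    term plus a (negative-entropy) term that pulls the image toward the
    flat prior value m.  The multiplicative update keeps x strictly
    positive, one of the key MaxEnt constraints."""
    x = np.full(A.shape[1], float(m))
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam * (np.log(x / m) + 1.0)
        x *= np.exp(-lr * grad)  # multiplicative step preserves positivity
    return x
```

A real implementation would instead fit the data only to within the experimental confidence limits (a chi-squared constraint) rather than penalizing misfit with a fixed trade-off `lam`.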
Evidence for a Dayside Thermal Inversion and High Metallicity for the Hot Jupiter WASP-18b
NASA Astrophysics Data System (ADS)
Sheppard, Kyle B.; Mandell, Avi M.; Tamburo, Patrick; Gandhi, Siddharth; Pinhas, Arazi; Madhusudhan, Nikku; Deming, Drake
2017-12-01
We find evidence for a strong thermal inversion in the dayside atmosphere of the highly irradiated hot Jupiter WASP-18b (T_eq = 2411 K, M = 10.3 M_J) based on emission spectroscopy from Hubble Space Telescope secondary-eclipse observations and Spitzer eclipse photometry. We demonstrate a lack of water vapor in either absorption or emission at 1.4 μm. However, we infer emission at 4.5 μm and absorption at 1.6 μm that we attribute to CO, as well as a non-detection of all other relevant species (e.g., TiO, VO). The most probable atmospheric retrieval solution indicates a C/O ratio of 1 and a high metallicity (C/H = 283 (+395/-138) × solar). The derived composition and T/P profile suggest that WASP-18b is the first example of both a planet with a non-oxide-driven thermal inversion and a planet with an atmospheric metallicity inconsistent with that predicted for Jupiter-mass planets at >2σ. Future observations are necessary to confirm the unusual planetary properties implied by these results.
1990-07-01
permeation chromatography (GPC) have been applied to lubricant type samples. 8 Most recently the newly introduced supercritical fluid chromatography (SFC... fluids, such as lubricants and hydraulic fluids can also be examined using various inverse chromatography procedures. Another mode, known as reaction... introduction of new gaseous extraction techniques, e.g., supercritical fluid extraction, procedures such as IGC will probably be developed for vastly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hilton, Harry H.
Protocols are developed for formulating optimal viscoelastic designer functionally graded materials tailored to best respond to prescribed loading and boundary conditions. In essence, an inverse approach is adopted where material properties instead of structures per se are designed and then distributed throughout structural elements. The final measure of viscoelastic material efficacy is expressed in terms of failure probabilities vs. survival time.
How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach
2009-12-01
Inverse Problems, Design and Optimization Symposium 2004. Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability... sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi-... coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of
Zhang, Ying; Alonzo, Todd A
2016-11-01
In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can cause bias in estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three-way ROC analysis focuses on ordinal tests. We propose verification bias-correction methods to construct ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U-statistics theory, we develop asymptotic properties for the estimator. A Jackknife estimator of variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
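The core idea, weighting each verified subject by the inverse of its estimated verification probability inside the VUS U-statistic, can be sketched as follows. This is a minimal illustration, not the authors' estimator: the function name and the brute-force triple loop are ours, and the verification probabilities are assumed to come from a separate, correctly specified verification model.

```python
import numpy as np

def ipw_vus(test, disease, verified, verif_prob):
    """Inverse-probability-weighted VUS for a continuous test and three
    ordered disease states (0 < 1 < 2).  Only verified subjects enter the
    U-statistic; each is weighted by 1 / P(verification)."""
    test = np.asarray(test, float)
    disease = np.asarray(disease)
    keep = np.asarray(verified, bool)
    w = 1.0 / np.asarray(verif_prob, float)
    # index lists for verified subjects in each disease state
    idx = [np.where(keep & (disease == d))[0] for d in (0, 1, 2)]
    num = den = 0.0
    for i in idx[0]:
        for j in idx[1]:
            for k in idx[2]:
                wijk = w[i] * w[j] * w[k]
                den += wijk
                num += wijk * (test[i] < test[j] < test[k])
    return num / den
```

With complete verification and unit probabilities this reduces to the usual empirical VUS (the fraction of correctly ordered triples).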
Singh, Ankur; Arora, Monika; English, Dallas R; Mathur, Manu R
2015-01-01
Socioeconomic differences in tobacco use have been reported, but there is a lack of evidence on how they vary according to types of tobacco use. This study explored socioeconomic differences associated with cigarette, bidi, smokeless tobacco (SLT), and dual use (smoking and smokeless tobacco use) in India and tested whether these differences vary by gender and residential area. Secondary analysis of Global Adult Tobacco Survey (GATS) 2009-10 (n = 69,296) was conducted. The primary outcomes were self-reported cigarette, bidi smoking, SLT, and dual use. The main explanatory variables were wealth, education, and occupation. Associations were assessed using multinomial logistic regressions. 69,030 adults participated in the study. Positive association was observed between wealth and prevalence of cigarette smoking while inverse associations were observed for bidi smoking, SLT, and dual use after adjustment for potential confounders. Inverse associations with education were observed for all four types after adjusting for confounders. Significant interactions were observed for gender and area in the association between cigarette, bidi, and smokeless tobacco use with wealth and education. The probability of cigarette smoking was higher for wealthier individuals while the probability of bidi smoking, smokeless tobacco use, and dual use was higher for those with lesser wealth and education.
Bayesian Orbit Computation Tools for Objects on Geocentric Orbits
NASA Astrophysics Data System (ADS)
Virtanen, J.; Granvik, M.; Muinonen, K.; Oszkiewicz, D.
2013-08-01
We consider the space-debris orbital inversion problem via the concept of Bayesian inference. The methodology was put forward for the orbital analysis of solar system small bodies in the early 1990s [7] and results in a full solution of the statistical inverse problem given in terms of an a posteriori probability density function (PDF) for the orbital parameters. We demonstrate the applicability of our statistical orbital analysis software to Earth-orbiting objects, using both well-established Monte Carlo (MC) techniques (for a review, see e.g. [13]) and recently developed Markov-chain MC (MCMC) techniques (e.g., [9]). In particular, we exploit the novel virtual-observation MCMC method [8], which is based on characterizing the phase-space volume of orbital solutions before the actual MCMC sampling. Our statistical methods and the resulting PDFs immediately enable probabilistic impact predictions to be carried out. Furthermore, this can readily be done even for very sparse data sets and data sets of poor quality, provided that some a priori information on the observational uncertainty is available. For asteroids, impact probabilities with the Earth from the discovery night onwards have been provided, e.g., by [11] and [10]; the latter study includes the sampling of the observational-error standard deviation as a random variable.
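Mapping a posterior PDF by MCMC, as in the methods cited above, reduces in its simplest form to a random-walk Metropolis chain. The sketch below is generic, not the authors' software: the target is an arbitrary log-posterior supplied by the caller, and the step size and seed are illustrative choices.

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=0.1, rng=None):
    """Minimal random-walk Metropolis sampler.

    Returns an array of samples whose empirical density approximates the
    posterior proportional to exp(log_post)."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)  # symmetric proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)
```

In an orbital-inversion setting `log_post` would score an orbit against the astrometric observations and any prior on the observational uncertainty.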
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Zhao, Yinshan; Shirani, Afsaneh; Kingwell, Elaine; Evans, Charity; van der Kop, Mia; Oger, Joel; Tremlett, Helen
2014-01-01
Longitudinal observational data are required to assess the association between exposure to β-interferon medications and disease progression among relapsing-remitting multiple sclerosis (MS) patients in the “real-world” clinical practice setting. Marginal structural Cox models (MSCMs) can provide distinct advantages over traditional approaches by allowing adjustment for time-varying confounders such as MS relapses, as well as baseline characteristics, through the use of inverse probability weighting. We assessed the suitability of MSCMs to analyze data from a large cohort of 1,697 relapsing-remitting MS patients in British Columbia, Canada (1995–2008). In the context of this observational study, which spanned more than a decade and involved patients with a chronic yet fluctuating disease, the recently proposed “normalized stabilized” weights were found to be the most appropriate choice of weights. Using this model, no association between β-interferon exposure and the hazard of disability progression was found (hazard ratio = 1.36, 95% confidence interval: 0.95, 1.94). For sensitivity analyses, truncated normalized unstabilized weights were used in additional MSCMs and to construct inverse probability weight-adjusted survival curves; the findings did not change. Additionally, qualitatively similar conclusions from approximation approaches to the weighted Cox model (i.e., MSCM) extend confidence in the findings. PMID:24939980
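The stabilized weights at the heart of an MSCM are cumulative products of treatment-model likelihood ratios; the "normalized" variant rescales them so the mean weight at each visit is 1. The sketch below assumes the numerator and denominator treatment probabilities have already been fitted (e.g., by pooled logistic regressions); the function name and array layout are ours, not the authors'.

```python
import numpy as np

def stabilized_weights(treat, p_num, p_denom):
    """Normalized stabilized inverse-probability-of-treatment weights.

    treat   : (n_subjects, n_times) binary treatment indicators
    p_num   : P(A_t = 1 | baseline covariates)        -- numerator model
    p_denom : P(A_t = 1 | time-varying history)       -- denominator model
    Returns cumulative-product stabilized weights, rescaled so the mean
    weight at each time point equals 1 (the "normalized" variant)."""
    treat = np.asarray(treat)
    lik_num = np.where(treat == 1, p_num, 1.0 - np.asarray(p_num))
    lik_den = np.where(treat == 1, p_denom, 1.0 - np.asarray(p_denom))
    sw = np.cumprod(lik_num / lik_den, axis=1)
    return sw / sw.mean(axis=0, keepdims=True)
```

When the time-varying history carries no information beyond baseline (numerator and denominator models agree), every weight collapses to 1 and the weighted Cox fit reduces to an unweighted one.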
Analysis of the variability in ground-motion synthesis and inversion
Spudich, Paul A.; Cirella, Antonella; Scognamiglio, Laura; Tinti, Elisa
2017-12-07
In almost all past inversions of large-earthquake ground motions for rupture behavior, the goal of the inversion is to find the “best fitting” rupture model that predicts ground motions which optimize some function of the difference between predicted and observed ground motions. This type of inversion was pioneered in the linear-inverse sense by Olson and Apsel (1982), who minimized the square of the difference between observed and simulated motions (“least squares”) while simultaneously minimizing the rupture-model norm (by setting the null-space component of the rupture model to zero), and has been extended in many ways, one of which is the use of nonlinear inversion schemes such as simulated annealing algorithms that optimize some other misfit function. For example, the simulated annealing algorithm of Piatanesi and others (2007) finds the rupture model that minimizes a “cost” function which combines a least-squares and a waveform-correlation measure of misfit. All such inversions that look for a unique “best” model have at least three problems. (1) They have removed the null-space component of the rupture model—that is, an infinite family of rupture models that all fit the data equally well have been narrowed down to a single model. Some property of interest in the rupture model might have been discarded in this winnowing process. (2) Smoothing constraints are commonly used to yield a unique “best” model, in which case spatially rough rupture models will have been discarded, even if they provide a good fit to the data. (3) No estimate of confidence in the resulting rupture models can be given because the effects of unknown errors in the Green’s functions (“theory errors”) have not been assessed.
In inversion for rupture behavior, these theory errors are generally larger than the data errors caused by ground noise and instrumental limitations, and so overfitting of the data is probably ubiquitous for such inversions. Recently, attention has turned to the inclusion of theory errors in the inversion process. Yagi and Fukahata (2011) made an important contribution by presenting a method to estimate the uncertainties in predicted large-earthquake ground motions due to uncertainties in the Green’s functions. Here we derive their result and compare it with the results of other recent studies that look at theory errors in a Bayesian inversion context, particularly those by Bodin and others (2012), Duputel and others (2012), Dettmer and others (2014), and Minson and others (2014). Notably, in all these studies, the estimates of theory error were obtained from theoretical considerations alone; none of the investigators actually measured Green’s function errors. Large earthquakes typically have aftershocks, which, if their rupture surfaces are physically small enough, can be considered point evaluations of the real Green’s functions of the Earth. Here we simulate small-aftershock ground motions with (erroneous) theoretical Green’s functions. Taking differences between aftershock ground motions and simulated motions to be the “theory error,” we derive a statistical model of the sources of discrepancies between the theoretical and real Green’s functions. We use this model with an extended frequency-domain version of the time-domain theory of Yagi and Fukahata (2011) to determine the expected variance τ² caused by Green’s function error in ground motions from a larger (nonpoint) earthquake that we seek to model. We also differ from the above-mentioned Bayesian inversions in our handling of the nonuniqueness problem of seismic inversion.
We follow the philosophy of Segall and Du (1993), who, instead of looking for a best-fitting model, looked for slip models that answered specific questions about the earthquakes they studied. In their Bayesian inversions, they inductively derived a posterior probability-density function (PDF) for every model parameter. We instead seek to find two extremal rupture models whose ground motions fit the data within the error bounds given by τ², as quantified by using a chi-squared test described below. So, we can ask questions such as, “What are the rupture models with the highest and lowest average rupture speed consistent with the theory errors?” Having found those models, we can then say with confidence that the true rupture speed is somewhere between those values. Although the Bayesian approach gives a complete solution to the inverse problem, it is computationally demanding: Minson and others (2014) needed 10^10 forward kinematic simulations to derive their posterior probability distribution. In our approach, only about 10^7 simulations are needed. Moreover, in practical application, only a small set of rupture models may be needed to answer the relevant questions—for example, determining the maximum likelihood solution (achievable through standard inversion techniques) and the two rupture models bounding some property of interest. The specific property that we wish to investigate is the correlation between various rupture-model parameters, such as peak slip velocity and rupture velocity, in models of real earthquakes. In some simulations of ground motions for hypothetical large earthquakes, such as those by Aagaard and others (2010) and the Southern California Earthquake Center Broadband Simulation Platform (Graves and Pitarka, 2015), rupture speed is assumed to correlate locally with peak slip, although there is evidence that rupture speed should correlate better with peak slip speed, owing to its dependence on local stress drop.
We may be able to determine ways to modify the “cost” function of the Piatanesi and others (2007) inversion to find rupture models with either high or low degrees of correlation between pairs of rupture parameters. We propose a cost function designed to find these two extremal models.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
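A parametric likelihood approximation of the kind described above can be built by running the stochastic simulator repeatedly at a candidate parameter, fitting a Gaussian to the resulting summary statistics, and scoring the observed summaries under that Gaussian. The sketch below is a generic synthetic-likelihood evaluation, not the FORMIND implementation; the simulator interface and all names are assumptions.

```python
import numpy as np

def synthetic_log_likelihood(simulate, theta, obs_summaries, n_reps=100, rng=None):
    """Gaussian (synthetic) likelihood approximation from stochastic
    simulation, suitable for use inside a conventional MCMC.

    simulate(theta, rng) must return a 1-D array of summary statistics."""
    rng = rng or np.random.default_rng(0)
    sims = np.array([simulate(theta, rng) for _ in range(n_reps)])
    mu = sims.mean(axis=0)
    # small ridge keeps the estimated covariance invertible
    cov = np.cov(sims, rowvar=False) + 1e-9 * np.eye(sims.shape[1])
    diff = np.asarray(obs_summaries) - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))
```

Inside an MCMC, this value simply replaces the analytic log-likelihood when accepting or rejecting a proposed parameter vector.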
incaRNAfbinv: a web server for the fragment-based design of RNA sequences
Drory Retwitzer, Matan; Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme; Barash, Danny
2016-01-01
In recent years, new methods for computational RNA design have been developed and applied to various problems in synthetic biology and nanotechnology. Lately, there is considerable interest in incorporating essential biological information when solving the inverse RNA folding problem. Correspondingly, RNAfbinv aims at including biologically meaningful constraints and is the only program to date that performs a fragment-based design of RNA sequences. In doing so it allows the design of sequences that do not necessarily fold exactly into the target, as long as the overall coarse-grained tree graph shape is preserved. Augmented by the weighted sampling algorithm of incaRNAtion, our web server incaRNAfbinv implements the method devised in RNAfbinv and offers an interactive environment for the inverse folding of RNA using a fragment-based design approach. It takes as input: a target RNA secondary structure; optional sequence and motif constraints; and optional target minimum free energy, neutrality and GC content. In addition to the design of synthetic regulatory sequences, it can be used as a pre-processing step for the detection of novel naturally occurring RNAs. The two complementary methodologies RNAfbinv and incaRNAtion are merged together and fully implemented in our web server incaRNAfbinv, available at http://www.cs.bgu.ac.il/incaRNAfbinv. PMID:27185893
Rotational accelerations stabilize leading edge vortices on revolving fly wings.
Lentink, David; Dickinson, Michael H
2009-08-01
The aerodynamic performance of hovering insects is largely explained by the presence of a stably attached leading edge vortex (LEV) on top of their wings. Although LEVs have been visualized on real, physically modeled, and simulated insects, the physical mechanisms responsible for their stability are poorly understood. To gain fundamental insight into LEV stability on flapping fly wings, we expressed the Navier-Stokes equations in a rotating frame of reference attached to the wing's surface. Using these equations we show that LEV dynamics on flapping wings are governed by three terms: angular, centripetal and Coriolis acceleration. Our analysis for hovering conditions shows that angular acceleration is proportional to the inverse of dimensionless stroke amplitude, whereas Coriolis and centripetal acceleration are proportional to the inverse of the Rossby number. Using a dynamically scaled robot model of a flapping fruit fly wing to systematically vary these dimensionless numbers, we determined which of the three accelerations mediate LEV stability. Our force measurements and flow visualizations indicate that the LEV is stabilized by the 'quasi-steady' centripetal and Coriolis accelerations that are present at low Rossby number and result from the propeller-like sweep of the wing. In contrast, the unsteady angular acceleration that results from the back and forth motion of a flapping wing does not appear to play a role in the stable attachment of the LEV. Angular acceleration is, however, critical for LEV integrity as we found it can mediate LEV spiral bursting, a high Reynolds number effect. Our analysis and experiments further suggest that the mechanism responsible for LEV stability is not dependent on Reynolds number, at least over the range most relevant for insect flight (100
NASA Technical Reports Server (NTRS)
Mcdade, Ian C.
1991-01-01
Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914 Å rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914 Å volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three pre-prints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
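The algebraic reconstruction technique underlying these inversions can be illustrated with a minimal Kaczmarz-style sketch. This is not the paper's code; the function name, the relaxation factor, and the non-negativity clipping are illustrative assumptions:

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=0.5):
    """Kaczmarz-style ART: repeatedly project the current estimate onto
    the hyperplane of each ray-sum equation A[i] @ x = b[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            denom = ai @ ai
            if denom > 0.0:
                x = x + relax * (b[i] - ai @ x) / denom * ai
    # volume emission rates cannot be negative
    return np.clip(x, 0.0, None)
```

A relaxation factor below 1 damps the influence of any single noisy ray sum, which is the role the iterative relaxation schemes play in the inversions described above.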
NASA Astrophysics Data System (ADS)
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2017-12-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be obtained. This is essential to understand to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed and by decreasing the acquisition ranges, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
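The forward model behind such an inversion is compact enough to write down. The sketch below assumes the Pelton complex-resistivity form of the Cole-Cole model; the function name and parameter values are illustrative:

```python
def cole_cole(omega, rho0, m, tau, c):
    """Complex resistivity of the Cole-Cole model (Pelton form):
    rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))."""
    iwt = (1j * omega * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))
```

At zero frequency this returns the DC resistivity rho0; at very high frequency it approaches rho0 * (1 - m), so m plays the role of the chargeability and c is the frequency exponent discussed above.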
NASA Astrophysics Data System (ADS)
Linde, N.; Vrugt, J. A.
2009-04-01
Geophysical models are increasingly used in hydrological simulations and inversions, where they are typically treated as an artificial data source with known uncorrelated "data errors". The model appraisal problem in classical deterministic linear and non-linear inversion approaches based on linearization is often addressed by calculating model resolution and model covariance matrices. These measures offer only a limited potential to assign a more appropriate "data covariance matrix" for future hydrological applications, simply because the regularization operators used to construct a stable inverse solution bear a strong imprint on such estimates and because the non-linearity of the geophysical inverse problem is not explored. We present a parallelized Markov Chain Monte Carlo (MCMC) scheme to efficiently derive the posterior spatially distributed radar slowness and water content between boreholes given first-arrival traveltimes. This method is called DiffeRential Evolution Adaptive Metropolis (DREAM_ZS) with snooker updater and sampling from past states. Our inverse scheme does not impose any smoothness on the final solution, and uses uniform prior ranges of the parameters. The posterior distribution of radar slowness is converted into spatially distributed soil moisture values using a petrophysical relationship. To benchmark the performance of DREAM_ZS, we first apply our inverse method to a synthetic two-dimensional infiltration experiment using 9421 traveltimes contaminated with Gaussian errors and 80 different model parameters, corresponding to a model discretization of 0.3 m × 0.3 m. After this, the method is applied to field data acquired in the vadose zone during snowmelt. This work demonstrates that fully non-linear stochastic inversion can be applied with few limiting assumptions to a range of common two-dimensional tomographic geophysical problems. 
The main advantage of DREAM_ZS is that it provides a full view of the posterior distribution of spatially distributed soil moisture, which is key to treating geophysical parameter uncertainty appropriately and to inferring hydrologic models.
Predictors of Acute Bacterial Meningitis in Children from a Malaria-Endemic Area of Papua New Guinea
Laman, Moses; Manning, Laurens; Greenhill, Andrew R.; Mare, Trevor; Michael, Audrey; Shem, Silas; Vince, John; Lagani, William; Hwaiwhanje, Ilomo; Siba, Peter M.; Mueller, Ivo; Davis, Timothy M. E.
2012-01-01
Predictors of acute bacterial meningitis (ABM) were assessed in 554 children in Papua New Guinea 0.2–10 years of age who were hospitalized with culture-proven meningitis, probable meningitis, or non-meningitic illness investigated by lumbar puncture. Forty-seven (8.5%) had proven meningitis and 36 (6.5%) had probable meningitis. Neck stiffness, Kernig’s and Brudzinski’s signs and, in children < 18 months of age, a bulging fontanel had positive likelihood ratios (LRs) ≥ 4.3 for proven/probable ABM. Multiple seizures and deep coma were less predictive (LR = 1.5–2.1). Single seizures and malaria parasitemia had low LRs (≤ 0.5). In logistic regression including clinical variables, Kernig’s sign and deep coma were positively associated with ABM, and a single seizure was negatively associated (P ≤ 0.01). In models including microscopy, neck stiffness and deep coma were positively associated with ABM and parasitemia was negatively associated with ABM (P ≤ 0.04). In young children, a bulging fontanel added to the model (P < 0.001). Simple clinical features predict ABM in children in Papua New Guinea but malaria microscopy augments diagnostic precision. PMID:22302856
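The likelihood-ratio figures above follow standard diagnostic-test algebra. A minimal sketch (function names are illustrative; the ~15% pre-test probability is an assumption based on the proven/probable counts reported above):

```python
def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio of a clinical sign."""
    return sensitivity / (1.0 - specificity)

def post_test_probability(pretest_prob, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    odds = pretest_prob / (1.0 - pretest_prob) * lr
    return odds / (1.0 + odds)
```

With a pre-test prevalence of about 15%, a sign with LR = 4.3 raises the probability of proven/probable ABM to roughly 43%, while a feature with LR ≤ 0.5 (such as a single seizure) pushes it down.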
Granular Segregation Driven by Particle Interactions
NASA Astrophysics Data System (ADS)
Lozano, C.; Zuriguel, I.; Garcimartín, A.; Mullin, T.
2015-05-01
We report the results of an experimental study of particle-particle interactions in a horizontally shaken granular layer that undergoes a second order phase transition from a binary gas to a segregation liquid as the packing fraction C is increased. By focusing on the behavior of individual particles, the effect of C is studied on (1) the process of cluster formation, (2) cluster dynamics, and (3) cluster destruction. The outcomes indicate that the segregation is driven by two mechanisms: attraction between particles with the same properties and random motion with a characteristic length that is inversely proportional to C. All clusters investigated are found to be transient and the probability distribution functions of the separation times display a power law tail, indicating that the splitting probability decreases with time.
Occupation probabilities and fluctuations in the asymmetric simple inclusion process
NASA Astrophysics Data System (ADS)
Reuveni, Shlomi; Hirschberg, Ori; Eliazar, Iddo; Yechiali, Uri
2014-04-01
The asymmetric simple inclusion process (ASIP), a lattice-gas model of unidirectional transport and aggregation, was recently proposed as an "inclusion" counterpart of the asymmetric simple exclusion process. In this paper we present an exact closed-form expression for the probability that a given number of particles occupies a given set of consecutive lattice sites. Our results are expressed in terms of the entries of Catalan's trapezoids—number arrays which generalize Catalan's numbers and Catalan's triangle. We further prove that the ASIP is asymptotically governed by the following: (i) an inverse square-root law of occupation, (ii) a square-root law of fluctuation, and (iii) a Rayleigh law for the distribution of interexit times. The universality of these results is discussed.
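Catalan's trapezoids admit a closed form as a ballot-style difference of binomial coefficients; the sketch below assumes that convention (an illustrative helper, not the paper's notation), and reduces to Catalan's triangle for order m = 1:

```python
from math import comb

def catalan_trapezoid(m, n, k):
    """Entry (n, k) of Catalan's trapezoid of order m: the number of
    strings of n X's and k Y's in which every prefix satisfies
    #Y < #X + m (ballot-style convention assumed)."""
    if k < 0 or k >= n + m:
        return 0
    if k < m:
        return comb(n + k, k)
    return comb(n + k, k) - comb(n + k, k - m)
```

For m = 1 the diagonal entries recover the ordinary Catalan numbers (1, 1, 2, 5, 14, ...).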
Brahmajothi, Mulugu V; Mason, S Nicholas; Whorton, A Richard; McMahon, Timothy J; Auten, Richard L
2010-07-15
The pathway by which inhaled NO gas enters pulmonary alveolar epithelial cells has not been directly tested. Although the expected mechanism is diffusion, another route is the formation of S-nitroso-L-cysteine, which then enters the cell through the L-type amino acid transporter (LAT). To determine if NO gas also enters alveolar epithelium this way, we exposed alveolar epithelial-rat type I, type II, L2, R3/1, and human A549-cells to NO gas at the air-liquid interface in the presence of L- and D-cysteine ± LAT competitors. NO gas exposure concentration-dependently increased intracellular NO and S-nitrosothiol levels in the presence of L- but not D-cysteine, which was inhibited by LAT competitors, and was inversely proportional to diffusion distance. The effect of L-cysteine on NO uptake was also concentration dependent. Without preincubation with L-cysteine, NO uptake was significantly reduced. We found similar effects using ethyl nitrite gas in place of NO. Exposure to either gas induced activation of soluble guanylyl cyclase in a parallel manner, consistent with LAT dependence. We conclude that NO gas uptake by alveolar epithelium achieves NO-based signaling predominantly by forming extracellular S-nitroso-L-cysteine that is taken up through LAT, rather than by diffusion. Augmenting extracellular S-nitroso-L-cysteine formation may augment pharmacological actions of inhaled NO gas. Copyright 2010 Elsevier Inc. All rights reserved.
Chen, Li; Lodge, Daniel J
2015-01-01
Background: Schizophrenia is a debilitating disorder that affects 1% of the US population. While the exogenous administration of cannabinoids such as tetrahydrocannabinol is reported to exacerbate psychosis in schizophrenia patients, augmenting the levels of endogenous cannabinoids has gained attention as a possible alternative therapy to schizophrenia due to clinical and preclinical observations. Thus, patients with schizophrenia demonstrate an inverse relationship between psychotic symptoms and levels of the endocannabinoid anandamide. In addition, increasing endocannabinoid levels (by blockade of enzymatic degradation) has been reported to attenuate social withdrawal in a preclinical model of schizophrenia. Here we examine the effects of increasing endogenous cannabinoids on dopamine neuron activity in the sub-chronic phencyclidine (PCP) model. Aberrant dopamine system function is thought to underlie the positive symptoms of schizophrenia. Methods: Using in vivo extracellular recordings in chloral hydrate–anesthetized rats, we now demonstrate an increase in dopamine neuron population activity in PCP-treated rats. Results: Interestingly, endocannabinoid upregulation, induced by URB-597, was able to normalize this aberrant dopamine neuron activity. Furthermore, we provide evidence that the ventral pallidum is the site where URB-597 acts to restore ventral tegmental area activity. Conclusions: Taken together, we provide preclinical evidence that augmenting endogenous cannabinoids may be an effective therapy for schizophrenia, acting in part to restore ventral pallidal activity. PMID:25539511
Laucho-Contreras, Maria E.; Polverino, Francesca; Tesfaigzi, Yohannes; Pilon, Aprile; Celli, Bartolome R.; Owen, Caroline A.
2016-01-01
Introduction Club cell protein 16 (CC16) is the most abundant protein in bronchoalveolar lavage fluid. CC16 has anti-inflammatory properties in smoke-exposed lungs, and chronic obstructive pulmonary disease (COPD) is associated with CC16 deficiency. Herein, we explored whether CC16 is a therapeutic target for COPD. Areas Covered We reviewed the literature on the factors that regulate airway CC16 expression, its biologic functions and its protective activities in smoke-exposed lungs using PUBMED searches. We generated hypotheses on the mechanisms by which CC16 limits COPD development, and discuss its potential as a new therapeutic approach for COPD. Expert Opinion CC16 plasma and lung levels are reduced in smokers without airflow obstruction and COPD patients. In COPD patients, airway CC16 expression is inversely correlated with severity of airflow obstruction. CC16 deficiency increases smoke-induced lung pathologies in mice by its effects on epithelial cells, leukocytes, and fibroblasts. Experimental augmentation of CC16 levels using recombinant CC16 in cell culture systems, plasmid and adenoviral-mediated over-expression of CC16 in epithelial cells or smoke-exposed murine airways reduces inflammation and cellular injury. Additional studies are necessary to assess the efficacy of therapies aimed at restoring airway CC16 levels as a new disease-modifying therapy for COPD patients. PMID:26781659
Ideal Cardiovascular Health and Arterial Stiffness in Spanish Adults-The EVIDENT Study.
García-Hermoso, Antonio; Martínez-Vizcaíno, Vicente; Gomez-Marcos, Manuel Ángel; Cavero-Redondo, Iván; Recio-Rodriguez, José Ignacio; García-Ortiz, Luis
2018-05-01
Studies concerning ideal cardiovascular (CV) health and its relationship with arterial stiffness are lacking. This study examined the association between arterial stiffness with ideal CV health as defined by the American Heart Association, across age groups and gender. The cross-sectional study included 1365 adults. Ideal CV health was defined as meeting ideal levels of the following components: 4 behaviors (smoking, body mass index, physical activity, and Mediterranean diet adherence) and 3 factors (total cholesterol, blood pressure, and glycated hemoglobin). Patients were grouped into 3 categories according to their number of ideal CV health metrics: ideal (5-7 metrics), intermediate (3-4 metrics), and poor (0-2 metrics). We analyzed the pulse wave velocity (PWV), the central and radial augmentation indexes, and the ambulatory arterial stiffness index (AASI). The ideal CV health profile was associated with lower radial augmentation index and AASI in both genders, particularly in middle-aged (45-65 years) and in elderly subjects (>65 years). Also in elderly subjects, adjusted models showed that adults with at least 3 health metrics at ideal levels had significantly lower PWV than those with 2 or fewer ideal health metrics. An association was found between a favorable level of ideal CV health metrics and lower arterial stiffness across age groups. Copyright © 2018 National Stroke Association. Published by Elsevier Inc. All rights reserved.
SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Ruan, D
2015-06-15
Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy.
The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
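The segmentation accuracy above is reported as DSC (Dice similarity coefficient), which is straightforward to compute from binary masks; a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

DSC ranges from 0 (no overlap) to 1 (identical masks), so the reported (0.83, 0.85) versus (0.74, 0.78) is a substantial overlap improvement.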
NASA Technical Reports Server (NTRS)
Kleinhenz, Julie; Sarmiento, Charles; Marshall, William
2012-01-01
The use of nontoxic propellants in future exploration vehicles would enable safer, more cost-effective mission scenarios. One promising green alternative to existing hypergols is liquid methane (LCH4) with liquid oxygen (LO2). A 100 lbf LO2/LCH4 engine was developed under the NASA Propulsion and Cryogenic Advanced Development project and tested at the NASA Glenn Research Center Altitude Combustion Stand in a low pressure environment. High ignition energy is a perceived drawback of this propellant combination, so this ignition margin test program examined ignition performance versus delivered spark energy. Sensitivity of ignition to spark timing and repetition rate was also explored. Three different exciter units were used with the engine's augmented (torch) igniter. Captured waveforms indicated spark behavior in hot fire conditions was inconsistent compared to the well-behaved dry sparks. This suggests that rising pressure and flow rate increase spark impedance and may at some point compromise an exciter's ability to complete each spark. The reduced spark energies of such quenched deliveries resulted in more erratic ignitions, decreasing ignition probability. The timing of the sparks relative to the pressure/flow conditions also impacted the probability of ignition. Sparks occurring early in the flow could trigger ignition with energies as low as 1 to 6 mJ, though multiple, similarly timed sparks of 55 to 75 mJ were required for reliable ignition. Delayed spark application and reduced spark repetition rate both correlated with late and occasional failed ignitions. An optimum time interval for spark application and ignition therefore coincides with propellant introduction to the igniter.
Koštál, Vladimír; Korbelová, Jaroslava; Poupardin, Rodolphe; Moos, Martin; Šimek, Petr
2016-08-01
The fruit fly Drosophila melanogaster is an insect of tropical origin. Its larval stage is evolutionarily adapted for rapid growth and development under warm conditions and shows high sensitivity to cold. In this study, we further developed an optimal acclimation and freezing protocol that significantly improves larval freeze tolerance (an ability to survive at -5°C when most of the freezable fraction of water is converted to ice). Using the optimal protocol, freeze survival to adult stage increased from 0.7% to 12.6% in the larvae fed standard diet (agar, sugar, yeast, cornmeal). Next, we fed the larvae diets augmented with 31 different amino compounds, administered in different concentrations, and observed their effects on larval metabolomic composition, viability, rate of development and freeze tolerance. While some diet additives were toxic, others showed positive effects on freeze tolerance. Statistical correlation revealed tight association between high freeze tolerance and high levels of amino compounds involved in arginine and proline metabolism. Proline- and arginine-augmented diets showed the highest potential, improving freeze survival to 42.1% and 50.6%, respectively. Two plausible mechanisms by which high concentrations of proline and arginine might stimulate high freeze tolerance are discussed: (i) proline, probably in combination with trehalose, could reduce partial unfolding of proteins and prevent membrane fusions in the larvae exposed to thermal stress (prior to freezing) or during freeze dehydration; (ii) both arginine and proline are exceptional among amino compounds in their ability to form supramolecular aggregates which probably bind partially unfolded proteins and inhibit their aggregation under increasing freeze dehydration. © 2016. Published by The Company of Biologists Ltd.
De Boni, Raquel; do Nascimento Silva, Pedro Luis; Bastos, Francisco Inácio; Pechansky, Flavio; de Vasconcellos, Mauricio Teixeira Leite
2012-01-01
Drinking alcoholic beverages in places such as bars and clubs may be associated with harmful consequences such as violence and impaired driving. However, methods for obtaining probabilistic samples of drivers who drink at these places remain a challenge – since there is no a priori information on this mobile population – and must be continually improved. This paper describes the procedures adopted in the selection of a population-based sample of drivers who drank at alcohol selling outlets in Porto Alegre, Brazil, which we used to estimate the prevalence of intention to drive under the influence of alcohol. The sampling strategy comprises a stratified three-stage cluster sampling: 1) census enumeration areas (CEA) were stratified by alcohol outlets (AO) density and sampled with probability proportional to the number of AOs in each CEA; 2) combinations of outlets and shifts (COS) were stratified by prevalence of alcohol-related traffic crashes and sampled with probability proportional to their squared duration in hours; and, 3) drivers who drank at the selected COS were stratified by their intention to drive and sampled using inverse sampling. Sample weights were calibrated using a post-stratification estimator. 3,118 individuals were approached and 683 drivers interviewed, leading to an estimate that 56.3% (SE = 3.5%) of the drivers intended to drive after drinking in less than one hour after the interview. Prevalence was also estimated by sex and broad age groups. The combined use of stratification and inverse sampling enabled a good trade-off between resource and time allocation, while preserving the ability to generalize the findings. The current strategy can be viewed as a step forward in the efforts to improve surveys and estimation for hard-to-reach, mobile populations. PMID:22514620
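Sampling with probability proportional to size, used in the first two stages above, can be sketched with a systematic cumulative-sum scheme. This is an illustrative textbook variant, not the authors' exact design:

```python
import random

def pps_sample(sizes, n, seed=0):
    """Systematic PPS sampling: lay n equally spaced points over the
    cumulative size line and select the unit each point lands in, so
    larger units are selected with proportionally higher probability."""
    rng = random.Random(seed)
    total = float(sum(sizes))
    step = total / n
    start = rng.uniform(0.0, step)   # single random start
    points = [start + i * step for i in range(n)]
    chosen, cum, j = [], 0.0, 0
    for i, s in enumerate(sizes):
        cum += s
        while j < n and points[j] <= cum:
            chosen.append(i)
            j += 1
    return chosen
```

A unit whose size exceeds the step is selected with certainty, which is the standard behavior of systematic PPS designs.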
Scale invariance and universality in economic phenomena
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Amaral, L. A. N.; Gopikrishnan, P.; Plerou, V.; Salinger, M. A.
2002-03-01
This paper discusses some of the similarities between work being done by economists and by computational physicists seeking to contribute to economics. We also mention some of the differences in the approaches taken and seek to justify these different approaches by developing the argument that by approaching the same problem from different points of view, new results might emerge. In particular, we review two such new results. Specifically, we discuss the two newly discovered scaling results that appear to be 'universal', in the sense that they hold for widely different economies as well as for different time periods: (i) the fluctuation of price changes of any stock market is characterized by a probability density function, which is a simple power law with exponent -4 extending over 10² standard deviations (a factor of 10⁸ on the y-axis); this result is analogous to the Gutenberg-Richter power law describing the histogram of earthquakes of a given strength; (ii) for a wide range of economic organizations, the histogram that shows how the size of an organization is inversely correlated with fluctuations in size has an exponent ≈0.2. Neither of these two new empirical laws has a firm theoretical foundation. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behaviour of the response function at the critical point (zero magnetic field) leads to large fluctuations. We discuss a curious 'symmetry breaking' for values of Σ above a certain threshold value Σc, where Σ is defined to be the local first moment of the probability distribution of demand Ω - the difference between the number of shares traded in buyer-initiated and seller-initiated trades. This feature is qualitatively identical to the behaviour of the probability density of the magnetization for fixed values of the inverse temperature.
Penile Dislocation with Inversion: A Rare Complication of Blunt Pelvic Injury
Sahadev, Ravindra; Jadhav, Vinay; Munianjanappa, Narendra Babu; Shankar, Gowri
2018-01-01
Penile injuries in children are usually uncommon and are predominantly associated with pelvic trauma or as postcircumcision injuries. The authors present a rare case of penile dislocation with penile inversion in a 5-year-old child occurring due to blunt pelvic injury. The child presented 3 months after pelvic injury with a suprapubic catheter for urinary diversion and absent penis with only penile skin visible. The presence of dislocated penile body was detected on magnetic resonance imaging, which was subsequently confirmed intraoperatively. During the surgery, the dislocated penis was identified and mobilized into its normal anatomical position within the remnant penile skin. Very few cases of penile dislocation have been reported in the literature. Pubic fracture with pulling of suspensory ligament resulting in dislocation of the penis would have been the probable mechanism of injury. PMID:29681700
Theoretical comparison of maser materials for a 32-GHz maser amplifier
NASA Technical Reports Server (NTRS)
Lyons, James R.
1988-01-01
The computational results of a comparison of maser materials for a 32 GHz maser amplifier are presented. The search for a better maser material is prompted by the relatively large amount of pump power required to sustain a population inversion in ruby at frequencies on the order of 30 GHz and above. The general requirements of a maser material and the specific problems with ruby are outlined. The spin Hamiltonian is used to calculate energy levels and transition probabilities for ruby and twelve other materials. A table is compiled of several attractive operating points for each of the materials analyzed. All the materials analyzed possess operating points that could be superior to ruby. To complete the evaluation of the materials, measurements of inversion ratio and pump power requirements must be made in the future.
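Computing energy levels from a spin Hamiltonian, as done for ruby and the candidate materials, amounts to diagonalizing a small matrix. The sketch below uses an illustrative zero-field-splitting-plus-Zeeman Hamiltonian in arbitrary units, not ruby's actual parameters; transition probabilities would follow from matrix elements of Sx between the resulting eigenvectors:

```python
import numpy as np

def spin_matrices(S):
    """Spin operators Sx, Sy, Sz for total spin S (dimension 2S + 1)."""
    m = np.arange(S, -S - 1.0, -1.0)
    Sz = np.diag(m)
    sp = np.zeros((len(m), len(m)))  # ladder operator S+
    for i in range(1, len(m)):
        mm = m[i]
        sp[i - 1, i] = np.sqrt(S * (S + 1) - mm * (mm + 1))
    Sx = (sp + sp.T) / 2.0
    Sy = (sp - sp.T) / 2.0j
    return Sx, Sy, Sz

def zeeman_levels(S, D, B):
    """Sorted eigen-energies of H = B*Sz + D*(Sz^2 - S(S+1)/3)
    (field along z; units absorbed into B and D)."""
    _, _, Sz = spin_matrices(S)
    dim = int(round(2 * S + 1))
    H = B * Sz + D * (Sz @ Sz - S * (S + 1) / 3.0 * np.eye(dim))
    return np.sort(np.linalg.eigvalsh(H))
```

For S = 3/2 (the Cr³⁺ ground manifold in ruby) and zero field, the zero-field-splitting term alone produces the expected two Kramers doublets.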
Modeling the expected lifetime and evolution of a deme's principal genetic sequence.
NASA Astrophysics Data System (ADS)
Clark, Brian
2014-03-01
The principal genetic sequence (PGS) is the most common genetic sequence in a deme. The PGS changes over time because new genetic sequences are created by inversions, compete with the current PGS, and a small fraction become PGSs. A set of coupled difference equations provides a description of the evolution of the PGS distribution function in an ensemble of demes. Solving the set of equations produces the survival probability of a new genetic sequence and the expected lifetime of an existing PGS as a function of inversion size and rate, recombination rate, and deme size. Additionally, the PGS distribution function is used to explain the transition pathway from old to new PGSs. We compare these results to a cellular automaton based representation of a deme and the drosophila species, D. melanogaster and D. yakuba.
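The survival probability of a new neutral sequence in a finite deme can be checked against the classic 1/N result with a simple Moran-style simulation. This is an illustrative stand-in for the paper's coupled difference equations, not the authors' model:

```python
import random

def fixation_probability(N, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a single neutral new
    sequence eventually replaces the PGS in a deme of N individuals
    (Moran birth-death model; the known answer is 1/N)."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        k = 1  # current copies of the new sequence
        while 0 < k < N:
            # one birth-death event: both drawn uniformly at random
            k += (rng.random() < k / N) - (rng.random() < k / N)
        fixed += (k == N)
    return fixed / trials
```

Selection, inversion rate, and recombination would modify both the per-event transition probabilities and the resulting survival probability, which is what the paper's difference-equation framework captures.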
NASA Astrophysics Data System (ADS)
Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan
2017-10-01
This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than 'layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
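The parametrization itself is easy to sketch: a profile is a weighted sum of Bernstein basis functions over a normalized depth coordinate, and small coefficient perturbations yield small profile perturbations. The function name and normalization are illustrative assumptions:

```python
import numpy as np
from math import comb

def bernstein_profile(coeffs, z):
    """Evaluate a depth profile parametrized by Bernstein basis
    coefficients at normalized depths z in [0, 1]:
    f(z) = sum_k coeffs[k] * C(n,k) * z^k * (1-z)^(n-k)."""
    coeffs = np.asarray(coeffs, dtype=float)
    z = np.asarray(z, dtype=float)
    n = len(coeffs) - 1
    basis = np.array([comb(n, k) * z**k * (1.0 - z)**(n - k)
                      for k in range(n + 1)])
    return coeffs @ basis
```

Because the basis functions form a partition of unity, the profile always lies inside the convex hull of the coefficients, which is one source of the stability noted above.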
NASA Astrophysics Data System (ADS)
Liu, Boda; Liang, Yan
2017-04-01
Markov chain Monte Carlo (MCMC) simulation is a powerful statistical method in solving inverse problems that arise from a wide range of applications. In Earth sciences applications of MCMC simulations are primarily in the field of geophysics. The purpose of this study is to introduce MCMC methods to geochemical inverse problems related to trace element fractionation during mantle melting. MCMC methods have several advantages over least squares methods in deciphering melting processes from trace element abundances in basalts and mantle rocks. Here we use an MCMC method to invert for extent of melting, fraction of melt present during melting, and extent of chemical disequilibrium between the melt and residual solid from REE abundances in clinopyroxene in abyssal peridotites from Mid-Atlantic Ridge, Central Indian Ridge, Southwest Indian Ridge, Lena Trough, and American-Antarctic Ridge. We consider two melting models: one with exact analytical solution and the other without. We solve the latter numerically in a chain of melting models according to the Metropolis-Hastings algorithm. The probability distribution of inverted melting parameters depends on assumptions of the physical model, knowledge of mantle source composition, and constraints from the REE data. Results from MCMC inversion are consistent with and provide more reliable uncertainty estimates than results based on nonlinear least squares inversion. We show that chemical disequilibrium is likely to play an important role in fractionating LREE in residual peridotites during partial melting beneath mid-ocean ridge spreading centers. MCMC simulation is well suited for more complicated but physically more realistic melting problems that do not have analytical solutions.
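A Metropolis-Hastings inversion of melt fraction from a batch-melting relation can be sketched in a few lines. The melting model, flat prior range, proposal width, and noise level here are illustrative assumptions, not the study's actual setup:

```python
import math
import random

def batch_melt(C0, D, F):
    """Residual-solid concentration after batch melting:
    Cs = C0 * D / (D + F * (1 - D))."""
    return C0 * D / (D + F * (1.0 - D))

def mh_invert_F(obs, C0, D, sigma, n=20000, seed=2):
    """Metropolis-Hastings sampling of melt fraction F on a flat (0, 0.3)
    prior, given one observed concentration with Gaussian noise."""
    rng = random.Random(seed)
    def loglike(F):
        return -0.5 * ((batch_melt(C0, D, F) - obs) / sigma) ** 2
    F, samples = 0.15, []
    for _ in range(n):
        Fp = F + rng.gauss(0.0, 0.02)  # random-walk proposal
        if 0.0 < Fp < 0.3 and \
           rng.random() < math.exp(min(0.0, loglike(Fp) - loglike(F))):
            F = Fp
        samples.append(F)
    return samples
```

The retained chain approximates the posterior of F, so its spread gives the uncertainty estimate directly, which is the advantage over least squares emphasized above.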
Jain, Sonia P; Gulhane, Sachin; Pandey, Neha; Bisne, Esha
2015-01-01
Psoriasis is an autoimmune chronic inflammatory skin disease known to be triggered by streptococcal and HIV infections. However, human papilloma virus (HPV) infection as a triggering factor for the development of psoriasis has not been reported yet. We hereby report a case of plaque-type psoriasis coexisting with inverse psoriasis, which was probably triggered by genital warts (HPV infection), and discuss the possible pathomechanisms of their coexistence and its management.
InAs/GaSb Broken-Gap Heterostructure Laser for Terahertz Spectroscopic Sensing Application
2010-09-01
from interband tunneling from the emitter is insignificant when forward biasing is applied. This means that HHs will accumulate in the right VB well... dependent on in-plane momentum. An important observation from Figs. 3 and 4 is that the interband tunneling probability is significantly less than the CB... leverages resonant electron injection and interband tunneling electron depletion to realize electron population inversion, while at the same time mitigating
Rendering potential wearable robot designs with the LOPES gait trainer.
Koopman, B; van Asseldonk, E H F; van der Kooij, H; van Dijk, W; Ronsse, R
2011-01-01
In recent years, wearable robots (WRs) for rehabilitation, personal assistance, or human augmentation have been gaining increasing interest. To make these devices more energy efficient, radical changes to the mechanical structure of the device are being considered. However, it remains very difficult to predict how people will respond to, and interact with, WRs that differ in terms of mechanical design. Users may adjust their gait pattern in response to the mechanical restrictions or properties of the device. The goal of this pilot study is to show the feasibility of rendering the mechanical properties of different potential WR designs using the robotic gait training device LOPES. This paper describes a new method that selectively cancels the dynamics of LOPES itself and adds the dynamics of the rendered WR using two parallel inverse models. Adaptive frequency oscillators were used to obtain estimates of the joint position, velocity, and acceleration. Using the inverse models, different WR designs can be evaluated, eliminating the need to build several prototypes. As a proof of principle, we simulated the effect of a very simple WR that consisted of a mass attached to the ankles. Preliminary results show that we are partially able to cancel the dynamics of LOPES. Additionally, the simulation of the mass showed an increase in muscle activity, but not at the same level as during the control condition, in which subjects actually carried the mass. In conclusion, the results in this paper suggest that LOPES can be used to render different WRs. In addition, it is very likely that the results can be further improved when more effort is put into obtaining proper estimates of the velocity and acceleration, which are required for the inverse models. © 2011 IEEE
Jacquin, Hugo; Gilson, Amy; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi
2016-05-01
Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use a lattice protein (LP) model to benchmark these inverse statistical approaches. We build MSAs of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSAs. We find that inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models, used as protein Hamiltonians for the design of new sequences, are able to generate with high probability completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSAs are not simple pairwise models, due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.
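The pairwise Potts Hamiltonian referred to above assigns each sequence an energy built from single-site fields and pairwise couplings. A minimal sketch under assumed toy parameters (a 3-site, 2-letter alphabet, not an inferred model):

```python
def potts_energy(seq, fields, couplings):
    """Energy of a sequence under a pairwise Potts model:
    E(s) = -sum_i h_i(s_i) - sum_{i<j} J_ij(s_i, s_j).
    fields[i][a] is the field for letter a at site i; couplings maps a site
    pair (i, j) to a dict of coupling values keyed by letter pairs."""
    e = -sum(fields[i][a] for i, a in enumerate(seq))
    for (i, j), J in couplings.items():
        e -= J.get((seq[i], seq[j]), 0.0)
    return e

# Hypothetical parameters: a favorable contact between sites 0 and 2
fields = [{'A': 0.0, 'B': 0.1}, {'A': 0.0, 'B': 0.0}, {'A': 0.2, 'B': 0.0}]
couplings = {(0, 2): {('A', 'A'): 1.0}}

e_native = potts_energy('AAA', fields, couplings)    # contact satisfied
e_other = potts_energy('BAA', fields, couplings)     # contact broken
```

With couplings present, the energy of a sequence is not a sum of independent site contributions, which is why such a model can capture structural contacts that independent-site models miss.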
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
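HMC's use of derivatives, mentioned above, can be illustrated on a one-dimensional standard-normal target: momenta are resampled, the trajectory is integrated with the leapfrog scheme using the gradient of the negative log-density, and a Metropolis test corrects the integration error. This is a generic sketch of the algorithm, not the authors' variant:

```python
import math
import random

random.seed(1)

def grad_neg_logp(q):
    """Gradient of -log p(q) for a standard-normal target (-log p = q^2/2)."""
    return q

def hmc_step(q, eps=0.2, n_leap=10):
    p = random.gauss(0.0, 1.0)               # resample momentum
    q_new, p_new = q, p
    # Leapfrog integration: half-step momentum, alternating full steps
    p_new -= 0.5 * eps * grad_neg_logp(q_new)
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_neg_logp(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_neg_logp(q_new)
    # Metropolis correction using the Hamiltonian (potential + kinetic)
    h_old = 0.5 * (q * q + p * p)
    h_new = 0.5 * (q_new * q_new + p_new * p_new)
    if math.log(random.random()) < h_old - h_new:
        return q_new
    return q

q, draws = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    draws.append(q)
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, long trajectories are accepted with high probability, which is the source of the efficiency gain over derivative-free random-walk samplers.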
The gravitational law of social interaction
NASA Astrophysics Data System (ADS)
Levy, Moshe; Goldenberg, Jacob
2014-01-01
While a great deal is known about the topology of social networks, there is much less agreement about the geographical structure of these networks. The fundamental question in this context is: how does the probability of a social link between two individuals depend on the physical distance between them? While it is clear that the probability decreases with the distance, various studies have found different functional forms for this dependence. The exact form of the distance dependence has crucial implications for network searchability and dynamics: Kleinberg (2000) [15] shows that the small-world property holds if the probability of a social link is a power-law function of the distance with power -2, but not with any other power. We investigate the distance dependence of link probability empirically by analyzing four very different sets of data: Facebook links, data from the electronic version of the Small-World experiment, email messages, and data from detailed personal interviews. All four datasets reveal the same empirical regularity: the probability of a social link is proportional to the inverse of the square of the distance between the two individuals, analogously to the distance dependence of the gravitational force. Thus, it seems that social networks spontaneously converge to the exact unique distance dependence that ensures the Small-World property.
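The empirical regularity described above (link probability proportional to the inverse square of distance) is typically diagnosed by the slope of log-probability against log-distance. A toy check with illustrative numbers, not the study's data:

```python
import math

# Hypothetical link probabilities exactly following p(d) proportional to d^-2
distances = [1.0, 2.0, 5.0, 10.0, 50.0, 100.0]   # km, illustrative
probs = [0.04 / d ** 2 for d in distances]

# Least-squares slope of log(p) versus log(d) recovers the power
xs = [math.log(d) for d in distances]
ys = [math.log(p) for p in probs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
```

On real data the fitted slope would carry noise; the paper's claim is that across all four datasets it clusters at -2, the unique power for which Kleinberg's small-world searchability result holds.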
The true quantum face of the "exponential" decay: Unstable systems in rest and in motion
NASA Astrophysics Data System (ADS)
Urbanowski, K.
2017-12-01
Results of theoretical studies and numerical calculations presented in the literature suggest that the survival probability P0(t) has the exponential form starting from times much smaller than the lifetime τ up to times t ⪢ τ, and that P0(t) exhibits inverse power-law behavior in the late-time region, for times longer than the so-called crossover time T ⪢ τ (the crossover time T is the time when the late-time deviations of P0(t) from the exponential form begin to dominate). More detailed analysis of the problem shows that, in fact, the survival probability P0(t) cannot take the pure exponential form in any time interval, including times smaller than the lifetime τ or of the order of τ, and that it has an oscillating form. We also study the survival probability of moving relativistic unstable particles with definite momentum. These studies show that late-time deviations of the survival probability of these particles from the exponential-like form of the decay law, that is, the transition-time region between the exponential-like and non-exponential forms of the survival probability, should occur much earlier than follows from classical standard considerations.
Chakraborty, Monojit; Chowdhury, Anamika; Bhusan, Richa; DasGupta, Sunando
2015-10-20
Droplet motion on a surface with a chemical-energy-induced wettability gradient has been simulated using molecular dynamics (MD) simulation to highlight the underlying physics of molecular movement near the solid-liquid interface, including the contact line friction. The simulations mimic experiments in a comprehensive manner wherein microsized droplets are propelled by the surface wettability gradient against forces opposing motion. The liquid-wall Lennard-Jones interaction parameter and the substrate temperature are varied to explore their effects on the three-phase contact line friction coefficient. The contact line friction is observed to be a strong function of temperature at atomistic scales, confirming the experimentally observed inverse dependence of friction on temperature. Additionally, the MD simulation results are successfully compared with those from an analytical model for self-propelled droplet motion on gradient surfaces.
FDTD modelling of induced polarization phenomena in transient electromagnetics
NASA Astrophysics Data System (ADS)
Commer, Michael; Petrov, Peter V.; Newman, Gregory A.
2017-04-01
The finite-difference time-domain scheme is augmented in order to treat the modelling of transient electromagnetic signals containing induced polarization effects from 3-D distributions of polarizable media. Compared to the non-dispersive problem, the discrete dispersive Maxwell system contains costly convolution operators. Key components of our solution for highly digitized model meshes are Debye decomposition and composite memory variables. We revert to the popular Cole-Cole model of dispersion to describe the frequency-dependent behaviour of electrical conductivity. Its inversely Laplace-transformed Debye decomposition results in a series of time convolutions between the electric field and exponential decay functions, with the latter reflecting each Debye constituent's individual relaxation time. These function types in the discrete-time convolution allow for their substitution by memory variables, eliminating the otherwise prohibitive computing demands. Numerical examples demonstrate the efficiency and practicality of our algorithm.
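The memory-variable trick works because a convolution with an exponential kernel can be updated recursively: each time step reuses the previous value scaled by a constant decay factor, turning an O(n^2) convolution into O(n) work. A minimal numerical sketch with a synthetic field history and illustrative time constants (a rectangle-rule discretization, not the paper's exact scheme):

```python
import math
import random

random.seed(2)

dt, tau, nsteps = 1e-3, 5e-3, 200
E = [random.uniform(-1.0, 1.0) for _ in range(nsteps)]   # synthetic E-field samples

# Direct discrete convolution with the Debye kernel exp(-t/tau): O(n^2) total
direct = [dt * sum(E[m] * math.exp(-(n - m) * dt / tau) for m in range(n + 1))
          for n in range(nsteps)]

# Memory variable: the same sum maintained recursively, O(1) per step
decay = math.exp(-dt / tau)
psi, recursive = 0.0, []
for e in E:
    psi = decay * psi + dt * e     # scale previous history, add new sample
    recursive.append(psi)

max_err = max(abs(a - b) for a, b in zip(direct, recursive))
```

The two computations agree to floating-point precision, which is what makes the substitution of convolutions by memory variables exact rather than approximate at the discrete level.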
Road marking features extraction using the VIAPIX® system
NASA Astrophysics Data System (ADS)
Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.
2016-07-01
Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and analysis of their features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects existing on the road, the present algorithm enables us to examine these images automatically and rapidly and to obtain information on road markings, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate this algorithm and its robustness by applying it to a variety of relevant scenarios.
NASA Technical Reports Server (NTRS)
Henderson, M. L.
1979-01-01
The benefits to high lift system maximum lift and, alternatively, to high lift system complexity, of applying analytic design and analysis techniques to the design of high lift sections for flight conditions were determined, and two high lift sections were designed for flight conditions. The influence of the high lift section on the sizing and economics of a specific energy efficient transport (EET) was clarified using a computerized sizing technique and an existing advanced airplane design data base. The impact of the best design resulting from the design application studies on EET sizing and economics was evaluated. Flap technology trade studies, climb and descent studies, and augmented stability studies are included, along with a description of the baseline high lift system geometry, a calculation of lift and pitching moment when separation is present, and an inverse boundary layer technique for pressure distribution synthesis and optimization.
Berlow, Noah; Pal, Ranadip
2011-01-01
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov Chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov Chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov Chains. The purpose of intervention is to alter the steady state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank one perturbation.
The probability of misassociation between neighboring targets
NASA Astrophysics Data System (ADS)
Areta, Javier A.; Bar-Shalom, Yaakov; Rothrock, Ronald
2008-04-01
This paper presents procedures to calculate the probability that the measurement originating from an extraneous target will be (mis)associated with a target of interest for the cases of Nearest Neighbor and Global association. It is shown that these misassociation probabilities depend, under certain assumptions, on a particular (covariance-weighted) norm of the difference between the targets' predicted measurements. For the Nearest Neighbor association, the exact solution, obtained for the case of equal innovation covariances, is based on a noncentral chi-square distribution. An approximate solution is also presented for the case of unequal innovation covariances. For the Global case, an approximation is presented for the case of "similar" innovation covariances. In the general case of unequal innovation covariances where this approximation fails, an exact method based on the inversion of the characteristic function is presented. The theoretical results, confirmed by Monte Carlo simulations, quantify the benefit of Global vs. Nearest Neighbor association. These results are applied to problems of single sensor as well as centralized fusion architecture multiple sensor tracking.
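The dependence of the nearest-neighbor misassociation probability on the covariance-weighted separation can be reproduced with a small Monte Carlo experiment. This is a sketch under simplifying assumptions (2-D measurements, equal unit innovation covariances), not the paper's exact derivation:

```python
import random

random.seed(3)

def misassociation_prob(separation, trials=20000):
    """Monte Carlo estimate of the probability that the extraneous target's
    measurement falls closer to the target of interest's predicted measurement
    (taken as the origin) than the target's own measurement does. Both
    measurements have unit innovation covariance; 'separation' is the
    covariance-weighted distance between the two predicted measurements."""
    count = 0
    for _ in range(trials):
        own = (random.gauss(0.0, 1.0), random.gauss(0.0, 1.0))
        ext = (random.gauss(separation, 1.0), random.gauss(0.0, 1.0))
        d_own = own[0] ** 2 + own[1] ** 2          # central chi-square
        d_ext = ext[0] ** 2 + ext[1] ** 2          # noncentral chi-square
        if d_ext < d_own:
            count += 1
    return count / trials

probs = [misassociation_prob(s) for s in (0.5, 1.5, 3.0)]
```

The normalized squared distance of the extraneous measurement follows a noncentral chi-square distribution with noncentrality equal to the squared separation, consistent with the exact solution mentioned in the abstract; the estimate drops quickly as the separation grows.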
On the performance of energy detection-based CR with SC diversity over IG channel
NASA Astrophysics Data System (ADS)
Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka
2017-12-01
Cognitive radio (CR) is a viable 5G technology to address the scarcity of the spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the spectrum sensing-based energy detection technique in CR networks over an inverse Gaussian channel with the selection combining diversity technique is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, such as a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
Serine protease activity in m-1 cortical collecting duct cells.
Liu, Lian; Hering-Smith, Kathleen S; Schiro, Faith R; Hamm, L Lee
2002-04-01
An apical serine protease, channel-activating protease 1 (CAP1), augments sodium transport in A6 cells. Prostasin, a novel serine protease originally purified from seminal fluid, has been proposed to be the mammalian ortholog of CAP1. We have recently found functional evidence for a similar protease activity in the M-1 cortical collecting duct cell line. The purposes of the present studies were to determine whether prostasin (or CAP1) is present in collecting duct cells by use of mouse M-1 cells, to sequence mouse prostasin, and to further characterize the identity of the serine protease activity and additional functional features in M-1 cells. Using mouse expressed sequence tag sequences that are highly homologous to the published human prostasin sequence as templates, reverse transcription-polymerase chain reaction and RACE (rapid amplification of cDNA ends) were used to sequence mouse prostasin mRNA, which is 99% identical to the published mouse CAP1 sequence. A single 1800-bp transcript was found by Northern analysis, and this was not altered by aldosterone. Equivalent short-circuit current (I(eq)), which represents sodium transport in these cells, dropped to 59+/-3% of the control value within 1 hour of incubation with aprotinin, a serine protease inhibitor. Trypsin increased the I(eq) in aprotinin-treated cells to the value of the control group within 5 minutes. Application of aprotinin not only inhibited amiloride-sensitive I(eq) but also reduced transepithelial resistance (R(te)) to 43+/-2%, an effect not expected with simple inhibition of sodium channels. Trypsin partially reversed the effect of aprotinin on R(te). Another serine protease inhibitor, soybean trypsin inhibitor (STI), decreased I(eq) in M-1 cells. STI inhibited I(eq) gradually over 6 hours, and the inhibition of I(eq) by the two inhibitors was additive. STI decreased transepithelial resistance much less than did aprotinin.
Neither aldosterone nor dexamethasone significantly augmented protease activity or prostasin mRNA levels, and in fact, dexamethasone decreased prostasin mRNA expression. In conclusion, although prostasin is present in M-1 cells and probably augments sodium transport in these cells, serine proteases probably have other effects (eg, resistance) in the collecting duct in addition to effects on sodium channels. Steroids do not alter these effects in M-1 cells. Additional proteases are likely also present in mouse collecting duct cells.
Probable Nootropic-Induced Psychiatric Adverse Effects: A Series of Four Cases
Ajaltouni, Jean
2015-01-01
The misuse of nootropics—any substance that may alter, improve, or augment cognitive performance, mainly through the stimulation or inhibition of certain neurotransmitters—may potentially be dangerous and deleterious to the human brain, and certain individuals with a history of mental or substance use disorders might be particularly vulnerable to their adverse effects. We describe four cases of probable nootropic-induced psychiatric adverse effects to illustrate this theory. To the best of our knowledge this has not been previously reported in the formal medical literature. We briefly describe the most common classes of nootropics, including their postulated or proven methods of actions, their desired effects, and their adverse side effects, and provide a brief discussion of the cases. Our objective is to raise awareness among physicians in general and psychiatrists and addiction specialists in particular of the potentially dangerous phenomenon of unsupervised nootropic use among young adults who may be especially vulnerable to nootropics’ negative effects. PMID:27222762
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted on the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, because no special assumption is made about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter, and detect and track size-varying targets in infrared images.
Factors Influencing the Incidence of Obesity in Australia: A Generalized Ordered Probit Model.
Avsar, Gulay; Ham, Roger; Tannous, W Kathy
2017-02-10
The increasing health costs of, and the risk factors associated with, obesity are well documented. From this perspective, it is important that the propensity of individuals towards obesity is analyzed. This paper uses longitudinal data from the Household Income and Labour Dynamics in Australia (HILDA) Survey for 2005 to 2010 to model those variables which condition the probability of being obese. The model estimated is a random effects generalized ordered probit, which exploits two sources of heterogeneity: the individual heterogeneity of panel data models and heterogeneity across body mass index (BMI) categories. The latter is associated with non-parallel thresholds in the generalized ordered model, where the thresholds are functions of the conditioning variables, which comprise economic, social, demographic, and lifestyle variables. To control for potential predisposition to obesity, personality traits augment the empirical model. The results support the view that the probability of obesity is significantly determined by the conditioning variables. In particular, personality is found to be important, and these outcomes reinforce other work examining personality and obesity.
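In a generalized ordered probit, each cut point is its own linear function of the covariates, so the "parallel thresholds" restriction of the standard ordered probit is relaxed. A minimal sketch of how the category probabilities are formed (the covariates and coefficients below are hypothetical, not HILDA estimates):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def category_probs(x, beta, thresholds):
    """Generalized ordered probit category probabilities.

    x: covariate vector; beta: index coefficients; thresholds: one coefficient
    vector per internal cut point. The cut points must come out increasing at
    the given x for the probabilities to be non-negative."""
    xb = sum(b * v for b, v in zip(beta, x))
    kappas = [sum(g * v for g, v in zip(gam, x)) for gam in thresholds]
    cdf = [norm_cdf(k - xb) for k in kappas]
    probs = [cdf[0]]
    probs += [cdf[j] - cdf[j - 1] for j in range(1, len(cdf))]
    probs.append(1.0 - cdf[-1])
    return probs

# Hypothetical: three BMI categories need two covariate-dependent cut points
x = (1.0, 0.4, 0.1)                          # intercept plus two covariates
beta = (0.2, 0.5, -0.8)
thresholds = [(0.0, 0.3, 0.2), (1.0, 0.4, 0.1)]
p = category_probs(x, beta, thresholds)
```

Setting all threshold coefficient vectors equal except for their intercepts recovers the ordinary (parallel-thresholds) ordered probit as a special case.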
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems that give aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program, written in a modular fashion, that duplicates the structure of these equations.
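A basic building block of the combinatorial reliability equations mentioned above is the probability that at least k of n redundant channels fail. A generic sketch with illustrative numbers (not the F-8 system's actual architecture or failure rates):

```python
from math import comb

def k_of_n_failure(p_fail, k, n):
    """Probability that at least k of n identical, independent channels fail,
    from the binomial distribution over the number of failed channels."""
    return sum(comb(n, m) * p_fail ** m * (1 - p_fail) ** (n - m)
               for m in range(k, n + 1))

p_channel = 1e-4                               # hypothetical per-flight channel
p_revert = k_of_n_failure(p_channel, 2, 3)     # 2-of-3 majority voting lost
```

For small per-channel probabilities the result is dominated by the leading term C(n,k) p^k, so a 2-of-3 arrangement turns a 1e-4 channel risk into roughly a 3e-8 system risk; full system models combine many such terms across equipment, design, and induced-failure contributions.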
Gottlieb, Daniel A
2006-03-01
Partial reinforcement often leads to asymptotically higher rates of responding and number of trials with a response than does continuous reinforcement in pigeon autoshaping. However, comparisons typically involve a partial reinforcement schedule that differs from the continuous reinforcement schedule in both time between reinforced trials and probability of reinforcement. Two experiments examined the relative contributions of these two manipulations to asymptotic response rate. Results suggest that the greater responding previously seen with partial reinforcement is primarily due to differential probability of reinforcement and not differential time between reinforced trials. Further, once established, differences in responding are resistant to a change in stimulus and contingency. Secondary response theories of autoshaped responding (theories that posit additional response-augmenting or response-attenuating mechanisms specific to partial or continuous reinforcement) cannot fully accommodate the current body of data. It is suggested that researchers who study pigeon autoshaping train animals on a common task prior to training them under different conditions.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
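The sequential probability ratio test (SPRT) used for validation accumulates a log-likelihood ratio sample by sample and stops as soon as it crosses an acceptance or rejection boundary. A toy sketch for detecting a Gaussian mean shift in monitoring residuals (the parameters and data are illustrative, not from the patent):

```python
import math
import random

random.seed(4)

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.001, beta=0.001):
    """Wald's SPRT for H0: mean = mu0 versus H1: mean = mu1.

    alpha/beta are the target false-alarm and missed-detection rates; they set
    the decision boundaries a and b."""
    a = math.log(beta / (1.0 - alpha))        # lower (accept H0) boundary
    b = math.log((1.0 - beta) / alpha)        # upper (reject H0) boundary
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Gaussian log-likelihood ratio increment for one observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= b:
            return 'reject H0', n             # process change signaled
        if llr <= a:
            return 'accept H0', n             # no change detected
    return 'undecided', len(samples)

in_control = [random.gauss(0.0, 1.0) for _ in range(200)]   # healthy residuals
shifted = [random.gauss(1.0, 1.0) for _ in range(200)]      # shifted residuals
d0 = sprt(in_control)
d1 = sprt(shifted)
```

The appeal for on-line monitoring is that the test typically decides after far fewer samples than a fixed-size test with the same error rates would require.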
NASA Astrophysics Data System (ADS)
Yin, X.; Xia, J.; Xu, H.
2016-12-01
Rayleigh and Love waves are two types of surface waves that travel along a free surface. Based on the assumption of horizontally layered homogeneous media, Rayleigh-wave phase velocity can be defined as a function of frequency and four groups of earth parameters: P-wave velocity, SV-wave velocity, density, and thickness of each layer. Unlike Rayleigh waves, Love-wave phase velocities of a layered homogeneous earth model can be calculated from frequency and three groups of earth properties: SH-wave velocity, density, and thickness of each layer. Because the dispersion of Love waves is independent of P-wave velocities, Love-wave dispersion curves are much simpler than Rayleigh-wave curves. Research into joint inversion methods for Rayleigh and Love dispersion curves is therefore necessary. This dissertation combines theoretical analysis with practical applications. In both laterally homogeneous media and radially anisotropic media, joint inversion approaches for Rayleigh and Love waves are proposed to improve the accuracy of S-wave velocities. A 10% random white noise and a 20% random white noise are added to the synthetic dispersion curves to assess the noise robustness of the proposed joint inversion method. Considering the influence of an anomalous layer, Rayleigh and Love waves are insensitive to layers beneath a high-velocity or low-velocity layer and to the high-velocity layer itself. These low sensitivities give rise to a high degree of uncertainty in the inverted S-wave velocities of those layers.
Considering that the sensitivity peaks of Rayleigh and Love waves lie in different frequency ranges, theoretical analyses have demonstrated that joint inversion of these two types of waves would probably ameliorate the inverted model. The lack of surface-wave (Rayleigh or Love wave) dispersion data may lead to inaccurate S-wave velocities from the single inversion of Rayleigh or Love waves, so this dissertation presents a joint inversion method for Rayleigh and Love waves that improves the accuracy of S-wave velocities. Finally, a real-world example is used to verify the accuracy and stability of the proposed joint inversion method. Keywords: Rayleigh wave; Love wave; Sensitivity analysis; Joint inversion method.
NASA Astrophysics Data System (ADS)
Liang, Guanghui; Ren, Shangjie; Dong, Feng
2018-07-01
The ultrasound/electrical dual-modality tomography utilizes the complementarity of ultrasound reflection tomography (URT) and electrical impedance tomography (EIT) to improve the speed and accuracy of image reconstruction. Due to its advantages of being non-invasive, radiation-free, and low-cost, ultrasound/electrical dual-modality tomography has attracted much attention in the field of dual-modality imaging and has many potential applications in industrial and biomedical imaging. However, the data fusion of URT and EIT is difficult due to their different theoretical foundations and measurement principles. The most commonly used data fusion strategy in ultrasound/electrical dual-modality tomography is incorporating the structural information extracted from the URT into the EIT image reconstruction process through a pixel-based constraint. Due to the inherent non-linearity and ill-posedness of EIT, the reconstructed images from this strategy suffer from low resolution, especially at the boundaries of the observed inclusions. To improve this condition, an augmented Lagrangian trust region method is proposed to directly reconstruct the shapes of the inclusions from the ultrasound/electrical dual-modality measurements. In the proposed method, the shape of the target inclusion is parameterized by a radial shape model whose coefficients are used as the shape parameters. Then, the dual-modality shape inversion problem is formulated as an energy minimization problem in which the energy function derived from EIT is constrained by an ultrasound measurement model through an equality constraint equation. Finally, the optimal shape parameters associated with the optimal inclusion shape guesses are determined by minimizing the constrained cost function using the augmented Lagrangian trust region method. To evaluate the proposed method, numerical tests are carried out.
Compared with single-modality EIT, the proposed dual-modality inclusion boundary reconstruction method has higher accuracy and is more robust to measurement noise.
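A common form of the radial shape model mentioned above expresses the inclusion boundary radius as a truncated Fourier series in the polar angle, with the series coefficients serving as the shape parameters to be optimized. A minimal sketch with hypothetical coefficients (the paper's exact parameterization may differ):

```python
import math

def radial_shape(theta, coeffs):
    """Boundary radius as a truncated Fourier series:
    r(theta) = a0 + sum_k (a_k cos(k theta) + b_k sin(k theta)).
    coeffs = (a0, [(a1, b1), (a2, b2), ...])."""
    a0, harmonics = coeffs
    return a0 + sum(a * math.cos(k * theta) + b * math.sin(k * theta)
                    for k, (a, b) in enumerate(harmonics, 1))

# Hypothetical inclusion: mean radius 1.0 with two low-order harmonics
coeffs = (1.0, [(0.2, 0.0), (0.0, 0.1)])
boundary = [(radial_shape(t, coeffs) * math.cos(t),
             radial_shape(t, coeffs) * math.sin(t))
            for t in (2.0 * math.pi * i / 64 for i in range(64))]
```

Representing the unknown by a handful of such coefficients, rather than pixel values, is what turns the ill-posed image reconstruction into a low-dimensional shape inversion suitable for a trust-region optimizer.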
How a surgeon becomes superman by visualization of intelligently fused multi-modalities
NASA Astrophysics Data System (ADS)
Erat, Okan; Pauly, Olivier; Weidert, Simon; Thaller, Peter; Euler, Ekkehard; Mutschler, Wolf; Navab, Nassir; Fallavollita, Pascal
2013-03-01
Motivation: The existing visualization of the Camera Augmented Mobile C-arm (CamC) system lacks depth cues and presents anatomical information to surgeons in a confusing way. Methods: We propose a method that segments anatomical information from the X-ray and then augments it onto the video images. To provide depth cues, pixels in the video images are classified into skin and object classes; the anatomical information from the X-ray is overlaid only where pixels have a high probability of belonging to the skin class. Results: We tested our algorithm by displaying the new visualization to 2 expert surgeons and 1 medical student during three surgical workflow sequences of the interlocking step of the intramedullary nailing procedure, namely skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization against the current alpha-blending overlay image displayed by CamC. All participants agreed (100%) that occlusion handling and instrument-tip position detection were immediately improved with our technique. When asked whether our visualization has the potential to replace the existing alpha-blending overlay during interlocking procedures, all participants did not hesitate to suggest immediate integration of the visualization for correct navigation and guidance of the procedure. Conclusion: Current alpha-blending visualizations lack proper depth cues and can confuse surgeons during surgery. Our visualization concept shows great potential in alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.
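The gating rule in the Methods section, overlaying the X-ray only where the skin probability is high, can be sketched as follows. The arrays and the 0.5 threshold are illustrative stand-ins; in the actual system the probability map would come from a trained pixel classifier:

```python
import numpy as np

# Hypothetical inputs: a video frame, a segmented X-ray overlay, and a
# per-pixel skin-probability map (in practice from a trained classifier).
rng = np.random.default_rng(0)
video = rng.uniform(0, 1, (64, 64))
xray = rng.uniform(0, 1, (64, 64))
p_skin = rng.uniform(0, 1, (64, 64))

alpha = 0.6                        # blending weight for the X-ray layer
mask = p_skin > 0.5                # augment only where skin is likely

fused = video.copy()
fused[mask] = (1 - alpha) * video[mask] + alpha * xray[mask]
# Pixels classified as "object" (instruments, hands) keep the raw video,
# so tools occlude the anatomy and depth ordering is preserved.
```

This is exactly what distinguishes the proposed visualization from plain alpha blending: the overlay is suppressed wherever an object occludes the skin, restoring natural occlusion cues.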
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2017-01-01
Dynamical systems in nature exhibit self-similar fractal space-time fluctuations on all scales, indicating long-range correlations; therefore the statistical normal distribution, with its implicit assumptions of independence and fixed mean and standard deviation, cannot be used to describe and quantify fractal data sets. The author has developed a general systems theory, based on classical statistical physics, for fractal fluctuations which predicts the following. (1) Fractal fluctuations signify an underlying eddy continuum, the larger eddies being the integrated mean of the enclosed smaller-scale fluctuations. (2) The probability distribution of eddy amplitudes and the variance (square of eddy amplitude) spectrum of fractal fluctuations follow a universal Boltzmann inverse power law expressed as a function of the golden mean. (3) Fractal fluctuations are signatures of quantum-like chaos, since the additive amplitudes of eddies, when squared, represent probability densities analogous to the sub-atomic dynamics of quantum systems such as the photon or electron. (4) The model-predicted distribution is very close to the statistical normal distribution for moderate events within two standard deviations of the mean but exhibits a fat tail associated with hazardous extreme events. Continuous periodogram power spectral analyses of available GHCN annual total rainfall time series for the period 1900-2008 for Indian and USA stations show that the power spectra and the corresponding probability distributions follow the model-predicted universal inverse power-law form, signifying an eddy-continuum structure underlying the observed inter-annual variability of rainfall. On a global scale, man-made greenhouse-gas-related atmospheric warming would result in intensification of natural climate variability, seen first in high-frequency fluctuations such as the QBO and ENSO and at even shorter timescales.
Model concepts and results of the analyses are discussed with reference to possible prediction of climate change. The model concepts, if correct, unambiguously rule out linear trends in climate; climate change would be manifested only as an increase or decrease in natural variability. However, more stringent tests of the model concepts and predictions are required before application to an issue as important as climate change. Observations and simulations with climate models show that precipitation extremes intensify in response to a warming climate (O'Gorman in Curr Clim Change Rep 1:49-59, 2015).
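Prediction (4), near-Gaussian behavior within two standard deviations but a fat tail beyond, can be illustrated by comparing a golden-mean inverse power law with the Gaussian upper tail. The specific form P = τ⁻⁴ᵗ used below is taken from the author's related publications and should be treated as illustrative rather than as the definitive model equation:

```python
import math

tau = (1 + math.sqrt(5)) / 2           # golden mean, ~1.618

def p_model(t):
    # Illustrative golden-mean inverse power law for the cumulative
    # probability of an event t normalized standard deviations out.
    return tau ** (-4 * t)

def p_gauss(t):
    # Standard-normal upper-tail probability P(X > t)
    return 0.5 * math.erfc(t / math.sqrt(2))

for t in (1.0, 2.0, 3.0, 4.0):
    print(t, p_model(t), p_gauss(t))
# Within ~2 standard deviations the two nearly agree (0.146 vs 0.159 at
# t = 1); beyond that the power law dominates (0.0031 vs 0.0013 at t = 3),
# i.e. the model assigns a fatter tail to hazardous extreme events.
```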
Croy, Theodore; Saliba, Susan; Saliba, Ethan; Anderson, Mark W; Hertel, Jay
2013-11-01
Quantifying talocrural joint laxity after ankle sprain is problematic. Stress ultrasonography (US) can image the lateral talocrural joint and allow measurement of the talofibular interval, which may suggest injury to the anterior talofibular ligament (ATFL). The acute changes in the talofibular interval after lateral ankle sprain are unknown. Twenty-five participants (9 male, 16 female; age 21.8 ± 3.2 y, height 167.8 ± 34.1 cm, mass 72.7 ± 13.8 kg) with 27 acute, lateral ankle injuries underwent bilateral stress US imaging at baseline (<7 d) and on the affected ankle at 3 wk and 6 wk after injury in 3 ankle conditions: neutral, anterior drawer, and inversion. The talofibular interval (mm) was measured using imaging software, and self-reported function (activities of daily living [ADL] and sports) was assessed with the Foot and Ankle Ability Measure (FAAM). The talofibular interval increased with anterior-drawer stress in the involved ankle (22.65 ± 3.75 mm; P = .017) over the uninvolved ankle (19.45 ± 2.35 mm; limb × position F1,26 = 4.9, P = .035) at baseline. Inversion stress also produced greater interval changes (23.41 ± 2.81 mm) than in the uninvolved ankles (21.13 ± 2.08 mm). A main effect for time was observed for inversion (F2,52 = 4.3, P = .019, 21.93 ± 2.24 mm) but not for anterior drawer (F2,52 = 3.1, P = .055, 21.18 ± 2.34 mm). A significant reduction in the talofibular interval occurred between baseline and week-3 inversion measurements only (F1,26 = 5.6, P = .026). FAAM-ADL and sports scores increased significantly from baseline to wk 3 (21.9 ± 16.2, P < .0001 and 23.8 ± 16.9, P < .0001) and from wk 3 to wk 6 (2.5 ± 4.4, P = .009 and 10.5 ± 13.2, P = .001). Using anterior-drawer and inversion stress, the stress US methods identified increased talofibular intervals suggestive of talocrural laxity and ATFL injury; despite significant improvements in self-reported function, these intervals improved only marginally during the 6 wk after ankle sprain.
Stress US provides a safe, repeatable, and quantifiable method of measuring the talofibular interval and may augment manual stress examinations in acute ankle injuries.
Dynamical analysis of Grover's search algorithm in arbitrarily high-dimensional search spaces
NASA Astrophysics Data System (ADS)
Jin, Wenliang
2016-01-01
We discuss at length the dynamical behavior of Grover's search algorithm when all the Walsh-Hadamard transformations contained in the algorithm are exposed to random perturbations, inducing an augmentation of the dimension of the search space. We give concise and general mathematical formulations that approximately characterize the maximum success probability of finding a unique desired state in a large unsorted database and the corresponding number of Grover iterations; these are applicable to search spaces of arbitrary dimension and are used to answer a salient open problem posed by Grover (Phys Rev Lett 80:4329-4332, 1998).
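For reference, in the unperturbed case the success probability after k Grover iterations on an unsorted database of N items with one marked state is sin²((2k+1)θ) with θ = arcsin(1/√N), maximized at roughly π√N/4 iterations; the paper's perturbed setting modifies this baseline. A minimal sketch of the ideal-case formulas:

```python
import math

def grover_success(N, k):
    """Success probability after k Grover iterations on an unsorted
    database of N items with a single marked state (no perturbations)."""
    theta = math.asin(1 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N):
    # Integer k maximizing sin^2((2k+1)theta): round(pi/(4 theta) - 1/2)
    theta = math.asin(1 / math.sqrt(N))
    return round(math.pi / (4 * theta) - 0.5)

N = 1_000_000
k = optimal_iterations(N)
print(k, grover_success(N, k))     # 785 iterations, probability ~ 1
```

The random perturbations of the Walsh-Hadamard transforms studied in the paper reduce this maximum success probability and shift the optimal iteration count, which is what the paper's general formulations quantify.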
Granular segregation driven by particle interactions.
Lozano, C; Zuriguel, I; Garcimartín, A; Mullin, T
2015-05-01
We report the results of an experimental study of particle-particle interactions in a horizontally shaken granular layer that undergoes a second-order phase transition from a binary gas to a segregation liquid as the packing fraction C is increased. By focusing on the behavior of individual particles, the effect of C is studied on (1) the process of cluster formation, (2) cluster dynamics, and (3) cluster destruction. The outcomes indicate that segregation is driven by two mechanisms: attraction between particles with the same properties, and random motion with a characteristic length that is inversely proportional to C. All clusters investigated are found to be transient, and the probability distribution functions of the separation times display a power-law tail, indicating that the splitting probability decreases with time.
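A power-law tail in the separation-time distribution, as reported above, is commonly quantified with a tail-index estimator. The following sketch applies the Hill estimator to synthetic Pareto-distributed "separation times"; the data, the true exponent 1.5, and the cutoff k are hypothetical stand-ins for the experimental measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for measured cluster separation times with a
# power-law tail, P(T > t) ~ t**(-a) for t >= 1 (here a = 1.5).
a = 1.5
times = rng.pareto(a, 20_000) + 1.0

# Hill estimator of the tail exponent from the k largest observations:
# 1 / mean of log-excesses over the k-th largest value.
k = 2_000
tail = np.sort(times)[-k:]
a_hat = 1.0 / np.mean(np.log(tail / tail[0]))
print(round(a_hat, 2))            # close to the true exponent 1.5
```

In practice the estimate is plotted against k (a Hill plot) to check that the fitted exponent is stable over the tail region before concluding that a power law is present.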
Robust, Adaptive Radar Detection and Estimation
2015-07-21
…the cost function is not a convex function in R, so we apply a change of variables: let X = σ²R⁻¹ and S′ = (1/σ²)S. Then the revised cost function in … vᵢvᵢᴴ. We apply this inverse covariance matrix in computing the SINR as well as the estimator variance. • Rank-Constrained Maximum Likelihood: Our … even as almost all available training samples are corrupted. Probability of Detection vs. SNR: We apply three test statistics, the normalized matched …